| Column | Feature type | Length (min / max) |
| --- | --- | --- |
| sha | string | 40 / 40 |
| text | string | 1 / 13.4M |
| id | string | 2 / 117 |
| tags | list | 1 / 7.91k |
| created_at | string | 25 / 25 |
| metadata | string | 2 / 875k |
| last_modified | string | 25 / 25 |
| arxiv | list | 0 / 25 |
| languages | list | 0 / 7.91k |
| tags_str | string | 17 / 159k |
| text_str | string | 1 / 447k |
| text_lists | list | 0 / 352 |
| processed_texts | list | 1 / 353 |
b13373686d536c6a45b0069e96cf83bab72f33db
# NFCorpus: 20 generated queries (BEIR Benchmark)

This HF dataset contains the top-20 synthetic queries generated for each passage in the NFCorpus dataset of the BEIR benchmark.

- DocT5query model used: [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1)
- id (str): unique document id in NFCorpus in the BEIR benchmark (`corpus.jsonl`).
- Questions generated: 20
- Code used for generation: [evaluate_anserini_docT5query_parallel.py](https://github.com/beir-cellar/beir/blob/main/examples/retrieval/evaluation/sparse/evaluate_anserini_docT5query_parallel.py) (a rough generation sketch is also included after the Dataset Summary below)

Below is the old dataset card for the BEIR benchmark.

# Dataset Card for BEIR Benchmark

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** [email protected]

### Dataset Summary

BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:

- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)

All these datasets have been preprocessed and can be used for your experiments.
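As a minimal sketch of how one of the preprocessed datasets can be loaded with the `beir` Python package (NFCorpus is used here as an example; the output folder and split name are illustrative):

```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader

# Download and unzip one of the preprocessed BEIR datasets (NFCorpus as an example).
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip"
data_path = util.download_and_unzip(url, "datasets")  # "datasets" is an illustrative output folder

# corpus:  {doc_id: {"title": ..., "text": ...}}
# queries: {query_id: query_text}
# qrels:   {query_id: {doc_id: relevance_score}}
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
```

The top-20 queries shipped in this dataset were generated with the docT5query model named at the top of this card; the exact procedure is in the linked `evaluate_anserini_docT5query_parallel.py` script. A rough, hedged sketch of that kind of generation with `transformers` follows; the sampling settings here are illustrative, not necessarily those used for this dataset:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "BeIR/query-gen-msmarco-t5-base-v1"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

passage = "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat."
inputs = tokenizer(passage, truncation=True, max_length=512, return_tensors="pt")

# Sample 20 candidate queries for the passage, mirroring the 20 queries per passage in this dataset.
outputs = model.generate(
    **inputs,
    max_length=64,
    do_sample=True,        # illustrative sampling settings
    top_k=25,
    num_return_sequences=20,
)
synthetic_queries = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
```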
### Supported Tasks and Leaderboards

The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.

The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).

### Languages

All tasks are in English (`en`).

## Dataset Structure

All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:

- `corpus` file: a `.jsonl` (JSON Lines) file that contains a list of dictionaries, each with three fields: `_id` with a unique document identifier, `title` with the document title (optional) and `text` with a document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` (JSON Lines) file that contains a list of dictionaries, each with two fields: `_id` with a unique query identifier and `text` with the query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` (tab-separated) file that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`

A minimal sketch for reading these files is shown after the Data Fields section below.

### Data Instances

A high-level example of any BEIR dataset:

```python
corpus = {
    "doc1": {
        "title": "Albert Einstein",
        "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
    },
    "doc2": {
        "title": "",  # Keep title an empty string if not present
        "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made \
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
    },
}

queries = {
    "q1": "Who developed the mass-energy equivalence formula?",
    "q2": "Which beer is brewed with a large proportion of wheat?",
}

qrels = {
    "q1": {"doc1": 1},
    "q2": {"doc2": 1},
}
```

### Data Fields

Examples from all configurations have the following features:

### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
  - `_id`: a `string` feature representing the unique document id
  - `title`: a `string` feature, denoting the title of the document.
  - `text`: a `string` feature, denoting the text of the document.

### Queries
- `queries`: a `dict` feature representing the query, made up of:
  - `_id`: a `string` feature representing the unique query id
  - `text`: a `string` feature, denoting the text of the query.

### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
  - `_id`: a `string` feature representing the query id
  - `_id`: a `string` feature, denoting the document id.
  - `score`: an `int32` feature, denoting the relevance judgement between the query and the document.
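For working with the raw files directly, here is a minimal, standard-library sketch that reads the three files described under Dataset Structure into the dictionaries shown in the Data Instances example (the file names are illustrative):

```python
import csv
import json

def load_beir_files(corpus_path, queries_path, qrels_path):
    """Read corpus.jsonl, queries.jsonl and qrels.tsv into the nested dicts shown above."""
    corpus, queries, qrels = {}, {}, {}

    with open(corpus_path, encoding="utf-8") as f:
        for line in f:
            doc = json.loads(line)
            corpus[doc["_id"]] = {"title": doc.get("title", ""), "text": doc["text"]}

    with open(queries_path, encoding="utf-8") as f:
        for line in f:
            query = json.loads(line)
            queries[query["_id"]] = query["text"]

    with open(qrels_path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # skip the header row (query-id, corpus-id, score)
        for query_id, corpus_id, score in reader:
            qrels.setdefault(query_id, {})[corpus_id] = int(score)

    return corpus, queries, qrels
```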
### Data Splits

| Dataset | Website | BEIR-Name | Type | Queries | Corpus | Rel D/Q | Download | md5 |
| -------- | ----- | --------- | --------- | ----------- | --------- | --------- | :----------: | :------: |
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/) | ``msmarco`` | ``train``<br>``dev``<br>``test`` | 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html) | ``trec-covid`` | ``test`` | 50 | 171K | 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test`` | 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq`` | ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq`` | ``train``<br>``test`` | 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa`` | ``train``<br>``dev``<br>``test`` | 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test`` | 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html) | ``signal1m`` | ``test`` | 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test`` | 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana`` | ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020 | [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020`` | ``test`` | 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack | [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack`` | ``test`` | 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora | [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora`` | ``dev``<br>``test`` | 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity`` | ``dev``<br>``test`` | 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS | [Homepage](https://allenai.org/data/scidocs) | ``scidocs`` | ``test`` | 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever`` | ``train``<br>``dev``<br>``test`` | 6,666 | 5.42M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER | [Homepage](http://climatefever.ai) | ``climate-fever`` | ``test`` | 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact | [Homepage](https://github.com/allenai/scifact) | ``scifact`` | ``train``<br>``test`` | 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04`` | ``test`` | 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

[Needs More Information]

### Citation Information

Cite as:

```
@inproceedings{
    thakur2021beir,
    title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
    author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
    booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
    year={2021},
    url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```

### Contributions

Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
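The md5 column in the Data Splits table above can be used to check a downloaded archive. A small sketch (the archive path is illustrative and assumes the zip was saved under the download folder used earlier):

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Compute the md5 hex digest of a downloaded BEIR archive."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the md5 listed for NFCorpus in the Data Splits table above.
print(md5_of("datasets/nfcorpus.zip") == "a89dba18a62ef92f7d323ec890a0d38d")
```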
income/cqadupstack-android-top-20-gen-queries
[ "task_categories:text-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
2023-01-24T19:50:29+00:00
{"annotations_creators": [], "language_creators": [], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": {"msmarco": ["1M<n<10M"], "trec-covid": ["100k<n<1M"], "nfcorpus": ["1K<n<10K"], "nq": ["1M<n<10M"], "hotpotqa": ["1M<n<10M"], "fiqa": ["10K<n<100K"], "arguana": ["1K<n<10K"], "touche-2020": ["100K<n<1M"], "cqadupstack": ["100K<n<1M"], "quora": ["100K<n<1M"], "dbpedia": ["1M<n<10M"], "scidocs": ["10K<n<100K"], "fever": ["1M<n<10M"], "climate-fever": ["1M<n<10M"], "scifact": ["1K<n<10K"]}, "source_datasets": [], "task_categories": ["text-retrieval"], "paperswithcode_id": "beir", "pretty_name": "BEIR Benchmark"}
2023-01-24T19:50:32+00:00
[]
[ "en" ]
TAGS #task_categories-text-retrieval #multilinguality-monolingual #language-English #license-cc-by-sa-4.0 #region-us
cdf2b8c35902f0878ed23f8b77c9cdbd15f3d0b8
# NFCorpus: 20 generated queries (BEIR Benchmark) This HF dataset contains the top-20 synthetic queries generated for each passage in the above BEIR benchmark dataset. - DocT5query model used: [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1) - id (str): unique document id in NFCorpus in the BEIR benchmark (`corpus.jsonl`). - Questions generated: 20 - Code used for generation: [evaluate_anserini_docT5query_parallel.py](https://github.com/beir-cellar/beir/blob/main/examples/retrieval/evaluation/sparse/evaluate_anserini_docT5query_parallel.py) Below contains the old dataset card for the BEIR benchmark. # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** [email protected] ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. 
```python ``` ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-seperated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1` ### Data Instances A high level example of any beir dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." }, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. 
### Data Splits

| Dataset | Website | BEIR-Name | Type | Queries | Corpus | Rel D/Q | Download | md5 |
| -------- | ----- | --------- | --------- | ----------- | --------- | --------- | :----------: | :------: |
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/) | ``msmarco`` | ``train``<br>``dev``<br>``test`` | 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html) | ``trec-covid`` | ``test`` | 50 | 171K | 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test`` | 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq`` | ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq`` | ``train``<br>``test`` | 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa`` | ``train``<br>``dev``<br>``test`` | 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test`` | 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html) | ``signal1m`` | ``test`` | 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test`` | 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana`` | ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020 | [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020`` | ``test`` | 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack | [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack`` | ``test`` | 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora | [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora`` | ``dev``<br>``test`` | 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity`` | ``dev``<br>``test`` | 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS | [Homepage](https://allenai.org/data/scidocs) | ``scidocs`` | ``test`` | 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever`` | ``train``<br>``dev``<br>``test`` | 6,666 | 5.42M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER | [Homepage](http://climatefever.ai) | ``climate-fever`` | ``test`` | 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact | [Homepage](https://github.com/allenai/scifact) | ``scifact`` | ``train``<br>``test`` | 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04`` | ``test`` | 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

[Needs More Information]

### Citation Information

Cite as:

```
@inproceedings{thakur2021beir,
    title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
    author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
    booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
    year={2021},
    url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```

### Contributions

Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
### Hub record metadata

The Hugging Face Hub record for this dataset carries the following metadata:

- **Dataset ID:** `income/cqadupstack-english-top-20-gen-queries`
- **Tags:** `task_categories:text-retrieval`, `multilinguality:monolingual`, `language:en`, `license:cc-by-sa-4.0`, `region:us`
- **Created:** 2023-01-24T19:52:03+00:00
- **Card metadata (raw JSON):** `{"annotations_creators": [], "language_creators": [], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": {"msmarco": ["1M<n<10M"], "trec-covid": ["100k<n<1M"], "nfcorpus": ["1K<n<10K"], "nq": ["1M<n<10M"], "hotpotqa": ["1M<n<10M"], "fiqa": ["10K<n<100K"], "arguana": ["1K<n<10K"], "touche-2020": ["100K<n<1M"], "cqadupstack": ["100K<n<1M"], "quora": ["100K<n<1M"], "dbpedia": ["1M<n<10M"], "scidocs": ["10K<n<100K"], "fever": ["1M<n<10M"], "climate-fever": ["1M<n<10M"], "scifact": ["1K<n<10K"]}, "source_datasets": [], "task_categories": ["text-retrieval"], "paperswithcode_id": "beir", "pretty_name": "BEIR Benchmark"}`
- **Last modified:** 2023-01-24T19:52:09+00:00
- **arXiv papers:** none
- **Languages:** `en`
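For reference, synthetic queries of this kind can be generated for a passage with the docT5query model named at the top of this card using the `transformers` library. The sketch below is illustrative only: the example passage and the sampling parameters (`do_sample`, `top_p`, `max_length`) are assumptions, not the exact settings used to build this dataset.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "BeIR/query-gen-msmarco-t5-base-v1"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Illustrative passage; in practice this would be a passage from the corpus.
passage = "Vitamin D deficiency has been linked to a range of chronic conditions."

input_ids = tokenizer.encode(passage, truncation=True, max_length=512, return_tensors="pt")

# Sample 20 candidate queries per passage; sampling settings here are assumptions.
outputs = model.generate(
    input_ids,
    max_length=64,
    do_sample=True,
    top_p=0.95,
    num_return_sequences=20,
)

for output in outputs:
    print(tokenizer.decode(output, skip_special_tokens=True))
```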
7bc9f799538604098291b78dc27f8de287295697
# NFCorpus: 20 generated queries (BEIR Benchmark) This HF dataset contains the top-20 synthetic queries generated for each passage in the above BEIR benchmark dataset. - DocT5query model used: [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1) - id (str): unique document id in NFCorpus in the BEIR benchmark (`corpus.jsonl`). - Questions generated: 20 - Code used for generation: [evaluate_anserini_docT5query_parallel.py](https://github.com/beir-cellar/beir/blob/main/examples/retrieval/evaluation/sparse/evaluate_anserini_docT5query_parallel.py) Below contains the old dataset card for the BEIR benchmark. # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** [email protected] ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. 
```python ``` ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-seperated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1` ### Data Instances A high level example of any beir dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." }, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. 
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| 
``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.Top-20 generated queries for every passage in NFCorpus # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional 
## Dataset Metadata

The following metadata is attached to this dataset repository on the Hugging Face Hub:

- Repository ID: `income/cqadupstack-gaming-top-20-gen-queries`
- Pretty name: BEIR Benchmark (Papers with Code ID: `beir`)
- Hub tags: `task_categories:text-retrieval`, `multilinguality:monolingual`, `language:en`, `license:cc-by-sa-4.0`, `region:us`
- Language: English (`en`)
- License: CC BY-SA 4.0
- Created: 2023-01-24T19:52:15+00:00
- Last modified: 2023-01-24T19:52:18+00:00
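The repository listed above can typically be loaded directly with the `datasets` library. The following is a minimal sketch under that assumption; the split and column names are not documented here, so the example inspects whatever is returned rather than hard-coding field names.

```python
# Minimal sketch: load the generated-queries dataset from the Hugging Face Hub.
# Assumes the `datasets` library is installed; the repository id is taken from
# the metadata above. Split and column names are assumptions to be inspected.
from datasets import load_dataset

gen_queries = load_dataset("income/cqadupstack-gaming-top-20-gen-queries")
print(gen_queries)  # DatasetDict: shows the available splits and column names

first_split = next(iter(gen_queries.values()))
print(first_split[0])  # first record, e.g. a document id and its generated queries
```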
NFCorpus: 20 generated queries (BEIR Benchmark) =============================================== This HF dataset contains the top-20 synthetic queries generated for each passage in the above BEIR benchmark dataset. * DocT5query model used: BeIR/query-gen-msmarco-t5-base-v1 * id (str): unique document id in NFCorpus in the BEIR benchmark ('URL'). * Questions generated: 20 * Code used for generation: evaluate\_anserini\_docT5query\_parallel.py Below contains the old dataset card for the BEIR benchmark. Dataset Card for BEIR Benchmark =============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: URL * Point of Contact: URL@URL ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: * Fact-checking: FEVER, Climate-FEVER, SciFact * Question-Answering: NQ, HotpotQA, FiQA-2018 * Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus * News Retrieval: TREC-NEWS, Robust04 * Argument Retrieval: Touche-2020, ArguAna * Duplicate Question Retrieval: Quora, CqaDupstack * Citation-Prediction: SCIDOCS * Tweet Retrieval: Signal-1M * Entity Retrieval: DBPedia All these datasets have been preprocessed and can be used for your experiments. ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found here. ### Languages All tasks are in English ('en'). Dataset Structure ----------------- All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: * 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{"\_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}' * 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\_id' with unique query identifier and 'text' with query text. For example: '{"\_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}' * 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1' ### Data Instances A high level example of any beir dataset: ### Data Fields Examples from all configurations have the following features: ### Corpus * 'corpus': a 'dict' feature representing the document title and passage text, made up of: + '\_id': a 'string' feature representing the unique document id - 'title': a 'string' feature, denoting the title of the document. - 'text': a 'string' feature, denoting the text of the document. 
### Queries * 'queries': a 'dict' feature representing the query, made up of: + '\_id': a 'string' feature representing the unique query id + 'text': a 'string' feature, denoting the text of the query. ### Qrels * 'qrels': a 'dict' feature representing the query document relevance judgements, made up of: + '\_id': a 'string' feature representing the query id - '\_id': a 'string' feature, denoting the document id. - 'score': a 'int32' feature, denoting the relevance judgement between query and document. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Cite as: ### Contributions Thanks to @Nthakur20 for adding this dataset.Top-20 generated queries for every passage in NFCorpus Dataset Card for BEIR Benchmark =============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: URL * Point of Contact: URL@URL ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: * Fact-checking: FEVER, Climate-FEVER, SciFact * Question-Answering: NQ, HotpotQA, FiQA-2018 * Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus * News Retrieval: TREC-NEWS, Robust04 * Argument Retrieval: Touche-2020, ArguAna * Duplicate Question Retrieval: Quora, CqaDupstack * Citation-Prediction: SCIDOCS * Tweet Retrieval: Signal-1M * Entity Retrieval: DBPedia All these datasets have been preprocessed and can be used for your experiments. ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found here. ### Languages All tasks are in English ('en'). Dataset Structure ----------------- All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: * 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{"\_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}' * 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\_id' with unique query identifier and 'text' with query text. 
For example: '{"\_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}' * 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1' ### Data Instances A high level example of any beir dataset: ### Data Fields Examples from all configurations have the following features: ### Corpus * 'corpus': a 'dict' feature representing the document title and passage text, made up of: + '\_id': a 'string' feature representing the unique document id - 'title': a 'string' feature, denoting the title of the document. - 'text': a 'string' feature, denoting the text of the document. ### Queries * 'queries': a 'dict' feature representing the query, made up of: + '\_id': a 'string' feature representing the unique query id + 'text': a 'string' feature, denoting the text of the query. ### Qrels * 'qrels': a 'dict' feature representing the query document relevance judgements, made up of: + '\_id': a 'string' feature representing the query id - '\_id': a 'string' feature, denoting the document id. - 'score': a 'int32' feature, denoting the relevance judgement between query and document. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Cite as: ### Contributions Thanks to @Nthakur20 for adding this dataset.
[ "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. 
For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset.Top-20 generated queries for every passage in NFCorpus\n\n\nDataset Card for BEIR Benchmark\n===============================\n\n\nTable of Contents\n-----------------\n\n\n* Dataset Description\n\t+ Dataset Summary\n\t+ Supported Tasks and Leaderboards\n\t+ Languages\n* Dataset Structure\n\t+ Data Instances\n\t+ Data Fields\n\t+ Data Splits\n* Dataset Creation\n\t+ Curation Rationale\n\t+ Source Data\n\t+ Annotations\n\t+ Personal and Sensitive Information\n* Considerations for Using the Data\n\t+ Social Impact of Dataset\n\t+ Discussion of Biases\n\t+ Other Known Limitations\n* Additional Information\n\t+ Dataset Curators\n\t+ Licensing Information\n\t+ Citation Information\n\t+ Contributions\n\n\nDataset Description\n-------------------\n\n\n* Homepage: URL\n* Repository: URL\n* Paper: URL\n* Leaderboard: URL\n* Point of Contact: URL@URL", "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset 
Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset." ]
[ "TAGS\n#task_categories-text-retrieval #multilinguality-monolingual #language-English #license-cc-by-sa-4.0 #region-us \n", "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. 
For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset.Top-20 generated queries for every passage in NFCorpus\n\n\nDataset Card for BEIR Benchmark\n===============================\n\n\nTable of Contents\n-----------------\n\n\n* Dataset Description\n\t+ Dataset Summary\n\t+ Supported Tasks and Leaderboards\n\t+ Languages\n* Dataset Structure\n\t+ Data Instances\n\t+ Data Fields\n\t+ Data Splits\n* Dataset Creation\n\t+ Curation Rationale\n\t+ Source Data\n\t+ Annotations\n\t+ Personal and Sensitive Information\n* Considerations for Using the Data\n\t+ Social Impact of Dataset\n\t+ Discussion of Biases\n\t+ Other Known Limitations\n* Additional Information\n\t+ Dataset Curators\n\t+ Licensing Information\n\t+ Citation Information\n\t+ Contributions\n\n\nDataset Description\n-------------------\n\n\n* Homepage: URL\n* Repository: URL\n* Paper: URL\n* Leaderboard: URL\n* Point of Contact: URL@URL", "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset 
Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset." ]
6cd1d4c86302bccf6a1a884e362dad6a49a4af96
# NFCorpus: 20 generated queries (BEIR Benchmark) This HF dataset contains the top-20 synthetic queries generated for each passage in the above BEIR benchmark dataset. - DocT5query model used: [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1) - id (str): unique document id in NFCorpus in the BEIR benchmark (`corpus.jsonl`). - Questions generated: 20 - Code used for generation: [evaluate_anserini_docT5query_parallel.py](https://github.com/beir-cellar/beir/blob/main/examples/retrieval/evaluation/sparse/evaluate_anserini_docT5query_parallel.py) Below contains the old dataset card for the BEIR benchmark. # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** [email protected] ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. 
```python ``` ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-seperated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1` ### Data Instances A high level example of any beir dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." }, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. 
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| 
``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.Top-20 generated queries for every passage in NFCorpus # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional 
Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** [email protected] ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. ```python ``` ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-seperated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1` ### Data Instances A high level example of any beir dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist. 
who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." }, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. 
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| 
``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
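The Data Splits table above pairs each downloadable zip with an md5 checksum. The following is a minimal sketch of downloading one archive and verifying it with the Python standard library; the URL and checksum are copied from the NFCorpus row of the table, and the output path is a placeholder.

```python
import hashlib
import urllib.request

url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip"
expected_md5 = "a89dba18a62ef92f7d323ec890a0d38d"  # from the Data Splits table above
out_path = "nfcorpus.zip"  # placeholder output location

urllib.request.urlretrieve(url, out_path)

# Hash the archive in chunks to avoid loading it fully into memory.
md5 = hashlib.md5()
with open(out_path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        md5.update(chunk)

if md5.hexdigest() != expected_md5:
    raise ValueError(f"Checksum mismatch: got {md5.hexdigest()}, expected {expected_md5}")
print("Download verified.")
```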
income/cqadupstack-gis-top-20-gen-queries
[ "task_categories:text-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
2023-01-24T19:52:23+00:00
{"annotations_creators": [], "language_creators": [], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": {"msmarco": ["1M<n<10M"], "trec-covid": ["100k<n<1M"], "nfcorpus": ["1K<n<10K"], "nq": ["1M<n<10M"], "hotpotqa": ["1M<n<10M"], "fiqa": ["10K<n<100K"], "arguana": ["1K<n<10K"], "touche-2020": ["100K<n<1M"], "cqadupstack": ["100K<n<1M"], "quora": ["100K<n<1M"], "dbpedia": ["1M<n<10M"], "scidocs": ["10K<n<100K"], "fever": ["1M<n<10M"], "climate-fever": ["1M<n<10M"], "scifact": ["1K<n<10K"]}, "source_datasets": [], "task_categories": ["text-retrieval"], "paperswithcode_id": "beir", "pretty_name": "BEIR Benchmark"}
2023-01-24T19:52:26+00:00
[]
[ "en" ]
TAGS #task_categories-text-retrieval #multilinguality-monolingual #language-English #license-cc-by-sa-4.0 #region-us
NFCorpus: 20 generated queries (BEIR Benchmark) =============================================== This HF dataset contains the top-20 synthetic queries generated for each passage in the above BEIR benchmark dataset. * DocT5query model used: BeIR/query-gen-msmarco-t5-base-v1 * id (str): unique document id in NFCorpus in the BEIR benchmark ('URL'). * Questions generated: 20 * Code used for generation: evaluate\_anserini\_docT5query\_parallel.py Below contains the old dataset card for the BEIR benchmark. Dataset Card for BEIR Benchmark =============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: URL * Point of Contact: URL@URL ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: * Fact-checking: FEVER, Climate-FEVER, SciFact * Question-Answering: NQ, HotpotQA, FiQA-2018 * Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus * News Retrieval: TREC-NEWS, Robust04 * Argument Retrieval: Touche-2020, ArguAna * Duplicate Question Retrieval: Quora, CqaDupstack * Citation-Prediction: SCIDOCS * Tweet Retrieval: Signal-1M * Entity Retrieval: DBPedia All these datasets have been preprocessed and can be used for your experiments. ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found here. ### Languages All tasks are in English ('en'). Dataset Structure ----------------- All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: * 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{"\_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}' * 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\_id' with unique query identifier and 'text' with query text. For example: '{"\_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}' * 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1' ### Data Instances A high level example of any beir dataset: ### Data Fields Examples from all configurations have the following features: ### Corpus * 'corpus': a 'dict' feature representing the document title and passage text, made up of: + '\_id': a 'string' feature representing the unique document id - 'title': a 'string' feature, denoting the title of the document. - 'text': a 'string' feature, denoting the text of the document. 
### Queries * 'queries': a 'dict' feature representing the query, made up of: + '\_id': a 'string' feature representing the unique query id + 'text': a 'string' feature, denoting the text of the query. ### Qrels * 'qrels': a 'dict' feature representing the query document relevance judgements, made up of: + '\_id': a 'string' feature representing the query id - '\_id': a 'string' feature, denoting the document id. - 'score': a 'int32' feature, denoting the relevance judgement between query and document. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Cite as: ### Contributions Thanks to @Nthakur20 for adding this dataset.Top-20 generated queries for every passage in NFCorpus Dataset Card for BEIR Benchmark =============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: URL * Point of Contact: URL@URL ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: * Fact-checking: FEVER, Climate-FEVER, SciFact * Question-Answering: NQ, HotpotQA, FiQA-2018 * Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus * News Retrieval: TREC-NEWS, Robust04 * Argument Retrieval: Touche-2020, ArguAna * Duplicate Question Retrieval: Quora, CqaDupstack * Citation-Prediction: SCIDOCS * Tweet Retrieval: Signal-1M * Entity Retrieval: DBPedia All these datasets have been preprocessed and can be used for your experiments. ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found here. ### Languages All tasks are in English ('en'). Dataset Structure ----------------- All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: * 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{"\_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}' * 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\_id' with unique query identifier and 'text' with query text. 
For example: '{"\_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}' * 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1' ### Data Instances A high level example of any beir dataset: ### Data Fields Examples from all configurations have the following features: ### Corpus * 'corpus': a 'dict' feature representing the document title and passage text, made up of: + '\_id': a 'string' feature representing the unique document id - 'title': a 'string' feature, denoting the title of the document. - 'text': a 'string' feature, denoting the text of the document. ### Queries * 'queries': a 'dict' feature representing the query, made up of: + '\_id': a 'string' feature representing the unique query id + 'text': a 'string' feature, denoting the text of the query. ### Qrels * 'qrels': a 'dict' feature representing the query document relevance judgements, made up of: + '\_id': a 'string' feature representing the query id - '\_id': a 'string' feature, denoting the document id. - 'score': a 'int32' feature, denoting the relevance judgement between query and document. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Cite as: ### Contributions Thanks to @Nthakur20 for adding this dataset.
[ "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. 
For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset.Top-20 generated queries for every passage in NFCorpus\n\n\nDataset Card for BEIR Benchmark\n===============================\n\n\nTable of Contents\n-----------------\n\n\n* Dataset Description\n\t+ Dataset Summary\n\t+ Supported Tasks and Leaderboards\n\t+ Languages\n* Dataset Structure\n\t+ Data Instances\n\t+ Data Fields\n\t+ Data Splits\n* Dataset Creation\n\t+ Curation Rationale\n\t+ Source Data\n\t+ Annotations\n\t+ Personal and Sensitive Information\n* Considerations for Using the Data\n\t+ Social Impact of Dataset\n\t+ Discussion of Biases\n\t+ Other Known Limitations\n* Additional Information\n\t+ Dataset Curators\n\t+ Licensing Information\n\t+ Citation Information\n\t+ Contributions\n\n\nDataset Description\n-------------------\n\n\n* Homepage: URL\n* Repository: URL\n* Paper: URL\n* Leaderboard: URL\n* Point of Contact: URL@URL", "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset 
Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset." ]
[ "TAGS\n#task_categories-text-retrieval #multilinguality-monolingual #language-English #license-cc-by-sa-4.0 #region-us \n", "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. 
For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset.Top-20 generated queries for every passage in NFCorpus\n\n\nDataset Card for BEIR Benchmark\n===============================\n\n\nTable of Contents\n-----------------\n\n\n* Dataset Description\n\t+ Dataset Summary\n\t+ Supported Tasks and Leaderboards\n\t+ Languages\n* Dataset Structure\n\t+ Data Instances\n\t+ Data Fields\n\t+ Data Splits\n* Dataset Creation\n\t+ Curation Rationale\n\t+ Source Data\n\t+ Annotations\n\t+ Personal and Sensitive Information\n* Considerations for Using the Data\n\t+ Social Impact of Dataset\n\t+ Discussion of Biases\n\t+ Other Known Limitations\n* Additional Information\n\t+ Dataset Curators\n\t+ Licensing Information\n\t+ Citation Information\n\t+ Contributions\n\n\nDataset Description\n-------------------\n\n\n* Homepage: URL\n* Repository: URL\n* Paper: URL\n* Leaderboard: URL\n* Point of Contact: URL@URL", "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset 
Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset." ]
86c17b0e3b216cf3b658610fd07ba2f59624a73b
# NFCorpus: 20 generated queries (BEIR Benchmark) This HF dataset contains the top-20 synthetic queries generated for each passage in the above BEIR benchmark dataset. - DocT5query model used: [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1) - id (str): unique document id in NFCorpus in the BEIR benchmark (`corpus.jsonl`). - Questions generated: 20 - Code used for generation: [evaluate_anserini_docT5query_parallel.py](https://github.com/beir-cellar/beir/blob/main/examples/retrieval/evaluation/sparse/evaluate_anserini_docT5query_parallel.py) Below contains the old dataset card for the BEIR benchmark. # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** [email protected] ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. 
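This card links the docT5query checkpoint and the parallel generation script used to produce the queries. As a rough illustration of how top-k synthetic queries can be sampled from that checkpoint with the `transformers` library, here is a sketch; it is not the exact generation script, and the decoding parameters (`max_length`, `top_k`, `top_p`) are illustrative assumptions rather than the settings used for this dataset.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "BeIR/query-gen-msmarco-t5-base-v1"  # checkpoint named in this card
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

passage = (
    "Albert Einstein was a German-born theoretical physicist who developed "
    "the theory of relativity, one of the two pillars of modern physics."
)

inputs = tokenizer(passage, return_tensors="pt", truncation=True, max_length=512)

# Sample 20 synthetic queries for the passage, mirroring the "top-20" setup of this dataset.
# The sampling parameters below are assumptions for illustration only.
outputs = model.generate(
    **inputs,
    max_length=64,
    do_sample=True,
    top_k=25,
    top_p=0.95,
    num_return_sequences=20,
)

for i, output in enumerate(outputs, start=1):
    print(f"{i:2d}. {tokenizer.decode(output, skip_special_tokens=True)}")
```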
```python ``` ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-seperated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1` ### Data Instances A high level example of any beir dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." }, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. 
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| 
``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.Top-20 generated queries for every passage in NFCorpus # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional 
Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** [email protected] ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. ```python ``` ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-seperated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1` ### Data Instances A high level example of any beir dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist. 
who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." }, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. 
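Instead of parsing the three files by hand, the `beir` package ships a loader that returns the same corpus/queries/qrels dictionaries described above. A minimal sketch, assuming the dataset has already been downloaded and unzipped into `datasets/nfcorpus` (a placeholder path):

```python
from beir.datasets.data_loader import GenericDataLoader

# The folder must contain corpus.jsonl, queries.jsonl and a qrels/ directory,
# exactly as described in the "Dataset Structure" section above.
data_folder = "datasets/nfcorpus"  # placeholder path
corpus, queries, qrels = GenericDataLoader(data_folder=data_folder).load(split="test")

print(len(corpus), "documents,", len(queries), "queries,", len(qrels), "queries with judgments")
```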
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| 
``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
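For the generated-queries data itself (rather than the underlying BEIR corpus), one option is to pull the repository with the `datasets` library. The snippet below is a sketch only: the split name and the layout of the query columns are assumptions, since this card only documents the `id` field and the query count.

```python
from datasets import load_dataset

# Repository name taken from this card; the split and column layout are assumptions.
dataset = load_dataset("income/cqadupstack-mathematica-top-20-gen-queries", split="train")

example = dataset[0]
print(example["id"])  # unique document id in the BEIR corpus, per this card
# Print whatever other fields hold the 20 generated queries (names are not documented here).
print({k: v for k, v in example.items() if k != "id"})
```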
income/cqadupstack-mathematica-top-20-gen-queries
[ "task_categories:text-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
2023-01-24T19:52:31+00:00
{"annotations_creators": [], "language_creators": [], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": {"msmarco": ["1M<n<10M"], "trec-covid": ["100k<n<1M"], "nfcorpus": ["1K<n<10K"], "nq": ["1M<n<10M"], "hotpotqa": ["1M<n<10M"], "fiqa": ["10K<n<100K"], "arguana": ["1K<n<10K"], "touche-2020": ["100K<n<1M"], "cqadupstack": ["100K<n<1M"], "quora": ["100K<n<1M"], "dbpedia": ["1M<n<10M"], "scidocs": ["10K<n<100K"], "fever": ["1M<n<10M"], "climate-fever": ["1M<n<10M"], "scifact": ["1K<n<10K"]}, "source_datasets": [], "task_categories": ["text-retrieval"], "paperswithcode_id": "beir", "pretty_name": "BEIR Benchmark"}
2023-01-24T19:52:33+00:00
[]
[ "en" ]
TAGS #task_categories-text-retrieval #multilinguality-monolingual #language-English #license-cc-by-sa-4.0 #region-us
NFCorpus: 20 generated queries (BEIR Benchmark) =============================================== This HF dataset contains the top-20 synthetic queries generated for each passage in the above BEIR benchmark dataset. * DocT5query model used: BeIR/query-gen-msmarco-t5-base-v1 * id (str): unique document id in NFCorpus in the BEIR benchmark ('URL'). * Questions generated: 20 * Code used for generation: evaluate\_anserini\_docT5query\_parallel.py Below contains the old dataset card for the BEIR benchmark. Dataset Card for BEIR Benchmark =============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: URL * Point of Contact: URL@URL ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: * Fact-checking: FEVER, Climate-FEVER, SciFact * Question-Answering: NQ, HotpotQA, FiQA-2018 * Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus * News Retrieval: TREC-NEWS, Robust04 * Argument Retrieval: Touche-2020, ArguAna * Duplicate Question Retrieval: Quora, CqaDupstack * Citation-Prediction: SCIDOCS * Tweet Retrieval: Signal-1M * Entity Retrieval: DBPedia All these datasets have been preprocessed and can be used for your experiments. ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found here. ### Languages All tasks are in English ('en'). Dataset Structure ----------------- All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: * 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{"\_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}' * 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\_id' with unique query identifier and 'text' with query text. For example: '{"\_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}' * 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1' ### Data Instances A high level example of any beir dataset: ### Data Fields Examples from all configurations have the following features: ### Corpus * 'corpus': a 'dict' feature representing the document title and passage text, made up of: + '\_id': a 'string' feature representing the unique document id - 'title': a 'string' feature, denoting the title of the document. - 'text': a 'string' feature, denoting the text of the document. 
### Queries * 'queries': a 'dict' feature representing the query, made up of: + '\_id': a 'string' feature representing the unique query id + 'text': a 'string' feature, denoting the text of the query. ### Qrels * 'qrels': a 'dict' feature representing the query document relevance judgements, made up of: + '\_id': a 'string' feature representing the query id - '\_id': a 'string' feature, denoting the document id. - 'score': a 'int32' feature, denoting the relevance judgement between query and document. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Cite as: ### Contributions Thanks to @Nthakur20 for adding this dataset.Top-20 generated queries for every passage in NFCorpus Dataset Card for BEIR Benchmark =============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: URL * Point of Contact: URL@URL ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: * Fact-checking: FEVER, Climate-FEVER, SciFact * Question-Answering: NQ, HotpotQA, FiQA-2018 * Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus * News Retrieval: TREC-NEWS, Robust04 * Argument Retrieval: Touche-2020, ArguAna * Duplicate Question Retrieval: Quora, CqaDupstack * Citation-Prediction: SCIDOCS * Tweet Retrieval: Signal-1M * Entity Retrieval: DBPedia All these datasets have been preprocessed and can be used for your experiments. ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found here. ### Languages All tasks are in English ('en'). Dataset Structure ----------------- All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: * 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{"\_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}' * 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\_id' with unique query identifier and 'text' with query text. 
For example: '{"\_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}' * 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1' ### Data Instances A high level example of any beir dataset: ### Data Fields Examples from all configurations have the following features: ### Corpus * 'corpus': a 'dict' feature representing the document title and passage text, made up of: + '\_id': a 'string' feature representing the unique document id - 'title': a 'string' feature, denoting the title of the document. - 'text': a 'string' feature, denoting the text of the document. ### Queries * 'queries': a 'dict' feature representing the query, made up of: + '\_id': a 'string' feature representing the unique query id + 'text': a 'string' feature, denoting the text of the query. ### Qrels * 'qrels': a 'dict' feature representing the query document relevance judgements, made up of: + '\_id': a 'string' feature representing the query id - '\_id': a 'string' feature, denoting the document id. - 'score': a 'int32' feature, denoting the relevance judgement between query and document. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Cite as: ### Contributions Thanks to @Nthakur20 for adding this dataset.
[ "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. 
For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset.Top-20 generated queries for every passage in NFCorpus\n\n\nDataset Card for BEIR Benchmark\n===============================\n\n\nTable of Contents\n-----------------\n\n\n* Dataset Description\n\t+ Dataset Summary\n\t+ Supported Tasks and Leaderboards\n\t+ Languages\n* Dataset Structure\n\t+ Data Instances\n\t+ Data Fields\n\t+ Data Splits\n* Dataset Creation\n\t+ Curation Rationale\n\t+ Source Data\n\t+ Annotations\n\t+ Personal and Sensitive Information\n* Considerations for Using the Data\n\t+ Social Impact of Dataset\n\t+ Discussion of Biases\n\t+ Other Known Limitations\n* Additional Information\n\t+ Dataset Curators\n\t+ Licensing Information\n\t+ Citation Information\n\t+ Contributions\n\n\nDataset Description\n-------------------\n\n\n* Homepage: URL\n* Repository: URL\n* Paper: URL\n* Leaderboard: URL\n* Point of Contact: URL@URL", "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset 
Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset." ]
[ "TAGS\n#task_categories-text-retrieval #multilinguality-monolingual #language-English #license-cc-by-sa-4.0 #region-us \n", "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. 
For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset.Top-20 generated queries for every passage in NFCorpus\n\n\nDataset Card for BEIR Benchmark\n===============================\n\n\nTable of Contents\n-----------------\n\n\n* Dataset Description\n\t+ Dataset Summary\n\t+ Supported Tasks and Leaderboards\n\t+ Languages\n* Dataset Structure\n\t+ Data Instances\n\t+ Data Fields\n\t+ Data Splits\n* Dataset Creation\n\t+ Curation Rationale\n\t+ Source Data\n\t+ Annotations\n\t+ Personal and Sensitive Information\n* Considerations for Using the Data\n\t+ Social Impact of Dataset\n\t+ Discussion of Biases\n\t+ Other Known Limitations\n* Additional Information\n\t+ Dataset Curators\n\t+ Licensing Information\n\t+ Citation Information\n\t+ Contributions\n\n\nDataset Description\n-------------------\n\n\n* Homepage: URL\n* Repository: URL\n* Paper: URL\n* Leaderboard: URL\n* Point of Contact: URL@URL", "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset 
Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset." ]
d00ec7b4129c7106a28a6b311ae5fdba2e45891a
# NFCorpus: 20 generated queries (BEIR Benchmark) This HF dataset contains the top-20 synthetic queries generated for each passage in the above BEIR benchmark dataset. - DocT5query model used: [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1) - id (str): unique document id in NFCorpus in the BEIR benchmark (`corpus.jsonl`). - Questions generated: 20 - Code used for generation: [evaluate_anserini_docT5query_parallel.py](https://github.com/beir-cellar/beir/blob/main/examples/retrieval/evaluation/sparse/evaluate_anserini_docT5query_parallel.py) Below is the old dataset card for the BEIR benchmark. # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** [email protected] ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. 
### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1` ### Data Instances A high-level example of any BEIR dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." }, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: an `int32` feature, denoting the relevance judgement between query and document. 
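Putting the three file formats above together, a minimal sketch of reading such a folder into the `corpus`, `queries` and `qrels` dictionaries shown in the Data Instances example, assuming the usual unpacked layout (`corpus.jsonl`, `queries.jsonl`, `qrels/test.tsv`) and a placeholder folder path:

```python
import csv
import json

def load_beir_folder(data_folder, split="test"):
    """Read corpus.jsonl, queries.jsonl and qrels/<split>.tsv into plain dictionaries."""
    corpus, queries, qrels = {}, {}, {}

    # corpus.jsonl: one JSON object per line with _id, title (optional) and text
    with open(f"{data_folder}/corpus.jsonl", encoding="utf-8") as f:
        for line in f:
            doc = json.loads(line)
            corpus[doc["_id"]] = {"title": doc.get("title", ""), "text": doc["text"]}

    # queries.jsonl: one JSON object per line with _id and text
    with open(f"{data_folder}/queries.jsonl", encoding="utf-8") as f:
        for line in f:
            query = json.loads(line)
            queries[query["_id"]] = query["text"]

    # qrels/<split>.tsv: tab-separated query-id, corpus-id, score with a header row
    with open(f"{data_folder}/qrels/{split}.tsv", encoding="utf-8", newline="") as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # skip the header row
        for row in reader:
            if not row:
                continue
            query_id, corpus_id, score = row
            qrels.setdefault(query_id, {})[corpus_id] = int(score)

    return corpus, queries, qrels

# corpus, queries, qrels = load_beir_folder("nfcorpus")  # path is a placeholder
```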
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| 
``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
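To make the generation step concrete: the card above names [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1) as the docT5query model and 20 queries per passage. The snippet below is a hedged sketch of that step with the `transformers` library; the sampling hyperparameters and the example passage are illustrative assumptions, not necessarily the settings used by the referenced `evaluate_anserini_docT5query_parallel.py` script.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "BeIR/query-gen-msmarco-t5-base-v1"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Hypothetical passage; in practice this would be the "text" field of a corpus.jsonl entry.
passage = (
    "Albert Einstein was a German-born theoretical physicist who developed "
    "the theory of relativity, one of the two pillars of modern physics."
)

inputs = tokenizer(passage, truncation=True, max_length=512, return_tensors="pt")
# Sampling with num_return_sequences=20 mirrors the "top-20 generated queries" setup.
outputs = model.generate(
    **inputs,
    max_length=64,
    do_sample=True,
    top_k=25,
    top_p=0.95,
    num_return_sequences=20,
)
for i, output in enumerate(outputs, 1):
    print(f"{i:2d}. {tokenizer.decode(output, skip_special_tokens=True)}")
```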
income/cqadupstack-physics-top-20-gen-queries
[ "task_categories:text-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
2023-01-24T19:52:38+00:00
{"annotations_creators": [], "language_creators": [], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": {"msmarco": ["1M<n<10M"], "trec-covid": ["100k<n<1M"], "nfcorpus": ["1K<n<10K"], "nq": ["1M<n<10M"], "hotpotqa": ["1M<n<10M"], "fiqa": ["10K<n<100K"], "arguana": ["1K<n<10K"], "touche-2020": ["100K<n<1M"], "cqadupstack": ["100K<n<1M"], "quora": ["100K<n<1M"], "dbpedia": ["1M<n<10M"], "scidocs": ["10K<n<100K"], "fever": ["1M<n<10M"], "climate-fever": ["1M<n<10M"], "scifact": ["1K<n<10K"]}, "source_datasets": [], "task_categories": ["text-retrieval"], "paperswithcode_id": "beir", "pretty_name": "BEIR Benchmark"}
2023-01-24T19:52:41+00:00
[]
[ "en" ]
TAGS #task_categories-text-retrieval #multilinguality-monolingual #language-English #license-cc-by-sa-4.0 #region-us
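All of the preprocessed datasets follow the same layout, so they can be fetched and loaded in a few lines. The snippet below is a minimal sketch using the `beir` package (an assumption: it requires `pip install beir`, and it reuses the public download URL pattern listed in the Data Splits table further down; the exact API may vary between `beir` versions):

```python
import os

from beir import util
from beir.datasets.data_loader import GenericDataLoader

# Hypothetical example: download the preprocessed NFCorpus dataset and load its test split.
dataset = "nfcorpus"
url = f"https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/{dataset}.zip"
out_dir = os.path.join(os.getcwd(), "datasets")

data_path = util.download_and_unzip(url, out_dir)  # downloads and extracts the zip
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")

print(f"{len(corpus)} documents, {len(queries)} queries, {len(qrels)} queries with relevance judgements")
```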
### Supported Tasks and Leaderboards

The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.

The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).

### Languages

All tasks are in English (`en`).

## Dataset Structure

All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:

- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields: `_id` with a unique document identifier, `title` with the document title (optional) and `text` with the document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields: `_id` with a unique query identifier and `text` with the query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the 1st row as a header. For example: `q1 doc1 1`

### Data Instances

A high-level example of any BEIR dataset:

```python
corpus = {
    "doc1": {
        "title": "Albert Einstein",
        "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \
                 one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
                 its influence on the philosophy of science. He is best known to the general public for his mass-energy \
                 equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
                 Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
                 of the photoelectric effect', a pivotal step in the development of quantum theory."
    },
    "doc2": {
        "title": "",  # Keep title as an empty string if not present
        "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
                 malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made \
                 with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
    },
}

queries = {
    "q1": "Who developed the mass-energy equivalence formula?",
    "q2": "Which beer is brewed with a large proportion of wheat?",
}

qrels = {
    "q1": {"doc1": 1},
    "q2": {"doc2": 1},
}
```

### Data Fields

Examples from all configurations have the following features:

### Corpus

- `corpus`: a `dict` feature representing the document title and passage text, made up of:
  - `_id`: a `string` feature representing the unique document id
  - `title`: a `string` feature, denoting the title of the document.
  - `text`: a `string` feature, denoting the text of the document.

### Queries

- `queries`: a `dict` feature representing the query, made up of:
  - `_id`: a `string` feature representing the unique query id
  - `text`: a `string` feature, denoting the text of the query.

### Qrels

- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
  - `_id`: a `string` feature representing the query id
  - `_id`: a `string` feature, denoting the document id.
  - `score`: an `int32` feature, denoting the relevance judgement between query and document.
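To make these fields concrete, the following sketch reads the raw files into the `corpus`, `queries` and `qrels` dictionaries shown under Data Instances. The file names (`corpus.jsonl`, `queries.jsonl`, `qrels/test.tsv`) are assumptions based on the standard BEIR folder layout; adjust the paths to your local copy:

```python
import csv
import json

corpus, queries, qrels = {}, {}, {}

# corpus.jsonl: one JSON object per line with "_id", an optional "title", and "text"
with open("corpus.jsonl", encoding="utf-8") as f:
    for line in f:
        doc = json.loads(line)
        corpus[doc["_id"]] = {"title": doc.get("title", ""), "text": doc["text"]}

# queries.jsonl: one JSON object per line with "_id" and "text"
with open("queries.jsonl", encoding="utf-8") as f:
    for line in f:
        query = json.loads(line)
        queries[query["_id"]] = query["text"]

# qrels/test.tsv: tab-separated query-id, corpus-id, score; the first row is a header
with open("qrels/test.tsv", encoding="utf-8") as f:
    reader = csv.reader(f, delimiter="\t")
    next(reader)  # skip the header row
    for query_id, corpus_id, score in reader:
        qrels.setdefault(query_id, {})[corpus_id] = int(score)
```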
### Data Splits

| Dataset | Website | BEIR-Name | Type | Queries | Corpus | Rel D/Q | Download | md5 |
| -------- | ----- | --------- | --------- | ----------- | --------- | --------- | :----------: | :------: |
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/) | ``msmarco`` | ``train``<br>``dev``<br>``test`` | 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html) | ``trec-covid`` | ``test`` | 50 | 171K | 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test`` | 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq`` | ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq`` | ``train``<br>``test`` | 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa`` | ``train``<br>``dev``<br>``test`` | 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test`` | 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html) | ``signal1m`` | ``test`` | 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test`` | 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana`` | ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020 | [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020`` | ``test`` | 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack | [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack`` | ``test`` | 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora | [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora`` | ``dev``<br>``test`` | 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity`` | ``dev``<br>``test`` | 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS | [Homepage](https://allenai.org/data/scidocs) | ``scidocs`` | ``test`` | 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever`` | ``train``<br>``dev``<br>``test`` | 6,666 | 5.42M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER | [Homepage](http://climatefever.ai) | ``climate-fever`` | ``test`` | 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact | [Homepage](https://github.com/allenai/scifact) | ``scifact`` | ``train``<br>``test`` | 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04`` | ``test`` | 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

[Needs More Information]

### Citation Information

Cite as:

```
@inproceedings{thakur2021beir,
    title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
    author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
    booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
    year={2021},
    url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```

### Contributions

Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset.Top-20 generated queries for every passage in NFCorpus\n\n\nDataset Card for BEIR Benchmark\n===============================\n\n\nTable of Contents\n-----------------\n\n\n* Dataset Description\n\t+ Dataset Summary\n\t+ Supported Tasks and Leaderboards\n\t+ Languages\n* Dataset Structure\n\t+ Data Instances\n\t+ Data Fields\n\t+ Data Splits\n* Dataset Creation\n\t+ Curation Rationale\n\t+ Source Data\n\t+ Annotations\n\t+ Personal and Sensitive Information\n* Considerations for Using the Data\n\t+ Social Impact of Dataset\n\t+ Discussion of Biases\n\t+ Other Known Limitations\n* Additional Information\n\t+ Dataset Curators\n\t+ Licensing Information\n\t+ Citation Information\n\t+ Contributions\n\n\nDataset Description\n-------------------\n\n\n* Homepage: URL\n* Repository: URL\n* Paper: URL\n* Leaderboard: URL\n* Point of Contact: URL@URL", "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset 
Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset." ]
ba226477db4a8d347624d59274d64d89500ef99e
# NFCorpus: 20 generated queries (BEIR Benchmark) This HF dataset contains the top-20 synthetic queries generated for each passage in the above BEIR benchmark dataset. - DocT5query model used: [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1) - id (str): unique document id in NFCorpus in the BEIR benchmark (`corpus.jsonl`). - Questions generated: 20 - Code used for generation: [evaluate_anserini_docT5query_parallel.py](https://github.com/beir-cellar/beir/blob/main/examples/retrieval/evaluation/sparse/evaluate_anserini_docT5query_parallel.py) Below contains the old dataset card for the BEIR benchmark. # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** [email protected] ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. 
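As a quick start, the preprocessed datasets can be downloaded and loaded with the `beir` Python package. The sketch below follows the package's usual quickstart (assuming `pip install beir`); the dataset name `"nfcorpus"` and the output directory `"datasets"` are placeholders, and the exact API should be checked against the repository README.

```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader

# Download and unzip one of the preprocessed BEIR datasets (placeholder name).
dataset = "nfcorpus"
url = f"https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/{dataset}.zip"
data_path = util.download_and_unzip(url, "datasets")

# Load the corpus, queries and relevance judgments for the test split.
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
print(len(corpus), "documents,", len(queries), "queries")
```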
### Supported Tasks and Leaderboards

The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.

The current best-performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).

### Languages

All tasks are in English (`en`).

## Dataset Structure

All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:

- `corpus` file: a `.jsonl` (JSON Lines) file containing a list of dictionaries, each with three fields: `_id` (a unique document identifier), `title` (the document title, optional) and `text` (the document paragraph or passage). For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` (JSON Lines) file containing a list of dictionaries, each with two fields: `_id` (a unique query identifier) and `text` (the query text). For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` (tab-separated) file containing three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`

### Data Instances

A high-level example of any BEIR dataset:

```python
corpus = {
    "doc1": {
        "title": "Albert Einstein",
        "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, "
                "one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for "
                "its influence on the philosophy of science. He is best known to the general public for his mass–energy "
                "equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 "
                "Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law "
                "of the photoelectric effect', a pivotal step in the development of quantum theory.",
    },
    "doc2": {
        "title": "",  # keep the title as an empty string if not present
        "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of "
                "malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made "
                "with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer).",
    },
}

queries = {
    "q1": "Who developed the mass-energy equivalence formula?",
    "q2": "Which beer is brewed with a large proportion of wheat?",
}

qrels = {
    "q1": {"doc1": 1},
    "q2": {"doc2": 1},
}
```

### Data Fields

Examples from all configurations have the following features:

### Corpus

- `corpus`: a `dict` feature representing the document title and passage text, made up of:
  - `_id`: a `string` feature representing the unique document id
  - `title`: a `string` feature, denoting the title of the document.
  - `text`: a `string` feature, denoting the text of the document.

### Queries

- `queries`: a `dict` feature representing the query, made up of:
  - `_id`: a `string` feature representing the unique query id
  - `text`: a `string` feature, denoting the text of the query.

### Qrels

- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
  - `_id`: a `string` feature representing the query id
    - `_id`: a `string` feature, denoting the document id.
    - `score`: an `int32` feature, denoting the relevance judgement between query and document.
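The `corpus`, `queries` and `qrels` objects above map directly onto the three files described under Dataset Structure. As a minimal sketch, they can be parsed with the Python standard library alone; the paths below assume the conventional layout of the preprocessed downloads (`corpus.jsonl`, `queries.jsonl` and a `qrels/` folder with one `.tsv` file per split) and should be adjusted to your local copy.

```python
import csv
import json
from collections import defaultdict


def load_corpus(path):
    """Read corpus.jsonl into {doc_id: {"title": ..., "text": ...}}."""
    corpus = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            doc = json.loads(line)
            corpus[doc["_id"]] = {"title": doc.get("title", ""), "text": doc["text"]}
    return corpus


def load_queries(path):
    """Read queries.jsonl into {query_id: query_text}."""
    queries = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            query = json.loads(line)
            queries[query["_id"]] = query["text"]
    return queries


def load_qrels(path):
    """Read a qrels .tsv (with a header row) into {query_id: {doc_id: score}}."""
    qrels = defaultdict(dict)
    with open(path, encoding="utf-8", newline="") as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # skip the query-id / corpus-id / score header
        for query_id, doc_id, score in reader:
            qrels[query_id][doc_id] = int(score)
    return dict(qrels)


corpus = load_corpus("nfcorpus/corpus.jsonl")
queries = load_queries("nfcorpus/queries.jsonl")
qrels = load_qrels("nfcorpus/qrels/test.tsv")
print(len(corpus), "documents,", len(queries), "queries,", len(qrels), "judged queries")
```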
### Data Splits

| Dataset | Website | BEIR-Name | Type | Queries | Corpus | Rel D/Q | Download | md5 |
| -------- | ----- | --------- | --------- | ----------- | --------- | --------- | :----------: | :------: |
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/) | ``msmarco`` | ``train``<br>``dev``<br>``test`` | 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html) | ``trec-covid`` | ``test`` | 50 | 171K | 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test`` | 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq`` | ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq`` | ``train``<br>``test`` | 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa`` | ``train``<br>``dev``<br>``test`` | 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test`` | 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html) | ``signal1m`` | ``test`` | 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test`` | 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana`` | ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020 | [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020`` | ``test`` | 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack | [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack`` | ``test`` | 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora | [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora`` | ``dev``<br>``test`` | 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity`` | ``dev``<br>``test`` | 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS | [Homepage](https://allenai.org/data/scidocs) | ``scidocs`` | ``test`` | 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever`` | ``train``<br>``dev``<br>``test`` | 6,666 | 5.42M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER | [Homepage](http://climatefever.ai) | ``climate-fever`` | ``test`` | 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact | [Homepage](https://github.com/allenai/scifact) | ``scifact`` | ``train``<br>``test`` | 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04`` | ``test`` | 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

[Needs More Information]

### Citation Information

Cite as:

```
@inproceedings{
    thakur2021beir,
    title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
    author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
    booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
    year={2021},
    url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```

### Contributions

Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
income/cqadupstack-stats-top-20-gen-queries
[ "task_categories:text-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
2023-01-24T19:52:53+00:00
{"annotations_creators": [], "language_creators": [], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": {"msmarco": ["1M<n<10M"], "trec-covid": ["100k<n<1M"], "nfcorpus": ["1K<n<10K"], "nq": ["1M<n<10M"], "hotpotqa": ["1M<n<10M"], "fiqa": ["10K<n<100K"], "arguana": ["1K<n<10K"], "touche-2020": ["100K<n<1M"], "cqadupstack": ["100K<n<1M"], "quora": ["100K<n<1M"], "dbpedia": ["1M<n<10M"], "scidocs": ["10K<n<100K"], "fever": ["1M<n<10M"], "climate-fever": ["1M<n<10M"], "scifact": ["1K<n<10K"]}, "source_datasets": [], "task_categories": ["text-retrieval"], "paperswithcode_id": "beir", "pretty_name": "BEIR Benchmark"}
2023-01-24T19:52:56+00:00
[]
[ "en" ]
TAGS #task_categories-text-retrieval #multilinguality-monolingual #language-English #license-cc-by-sa-4.0 #region-us
NFCorpus: 20 generated queries (BEIR Benchmark) =============================================== This HF dataset contains the top-20 synthetic queries generated for each passage in the above BEIR benchmark dataset. * DocT5query model used: BeIR/query-gen-msmarco-t5-base-v1 * id (str): unique document id in NFCorpus in the BEIR benchmark ('URL'). * Questions generated: 20 * Code used for generation: evaluate\_anserini\_docT5query\_parallel.py Below contains the old dataset card for the BEIR benchmark. Dataset Card for BEIR Benchmark =============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: URL * Point of Contact: URL@URL ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: * Fact-checking: FEVER, Climate-FEVER, SciFact * Question-Answering: NQ, HotpotQA, FiQA-2018 * Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus * News Retrieval: TREC-NEWS, Robust04 * Argument Retrieval: Touche-2020, ArguAna * Duplicate Question Retrieval: Quora, CqaDupstack * Citation-Prediction: SCIDOCS * Tweet Retrieval: Signal-1M * Entity Retrieval: DBPedia All these datasets have been preprocessed and can be used for your experiments. ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found here. ### Languages All tasks are in English ('en'). Dataset Structure ----------------- All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: * 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{"\_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}' * 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\_id' with unique query identifier and 'text' with query text. For example: '{"\_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}' * 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1' ### Data Instances A high level example of any beir dataset: ### Data Fields Examples from all configurations have the following features: ### Corpus * 'corpus': a 'dict' feature representing the document title and passage text, made up of: + '\_id': a 'string' feature representing the unique document id - 'title': a 'string' feature, denoting the title of the document. - 'text': a 'string' feature, denoting the text of the document. 
### Queries * 'queries': a 'dict' feature representing the query, made up of: + '\_id': a 'string' feature representing the unique query id + 'text': a 'string' feature, denoting the text of the query. ### Qrels * 'qrels': a 'dict' feature representing the query document relevance judgements, made up of: + '\_id': a 'string' feature representing the query id - '\_id': a 'string' feature, denoting the document id. - 'score': a 'int32' feature, denoting the relevance judgement between query and document. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Cite as: ### Contributions Thanks to @Nthakur20 for adding this dataset.Top-20 generated queries for every passage in NFCorpus Dataset Card for BEIR Benchmark =============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: URL * Point of Contact: URL@URL ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: * Fact-checking: FEVER, Climate-FEVER, SciFact * Question-Answering: NQ, HotpotQA, FiQA-2018 * Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus * News Retrieval: TREC-NEWS, Robust04 * Argument Retrieval: Touche-2020, ArguAna * Duplicate Question Retrieval: Quora, CqaDupstack * Citation-Prediction: SCIDOCS * Tweet Retrieval: Signal-1M * Entity Retrieval: DBPedia All these datasets have been preprocessed and can be used for your experiments. ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found here. ### Languages All tasks are in English ('en'). Dataset Structure ----------------- All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: * 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{"\_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}' * 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\_id' with unique query identifier and 'text' with query text. 
For example: '{"\_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}' * 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1' ### Data Instances A high level example of any beir dataset: ### Data Fields Examples from all configurations have the following features: ### Corpus * 'corpus': a 'dict' feature representing the document title and passage text, made up of: + '\_id': a 'string' feature representing the unique document id - 'title': a 'string' feature, denoting the title of the document. - 'text': a 'string' feature, denoting the text of the document. ### Queries * 'queries': a 'dict' feature representing the query, made up of: + '\_id': a 'string' feature representing the unique query id + 'text': a 'string' feature, denoting the text of the query. ### Qrels * 'qrels': a 'dict' feature representing the query document relevance judgements, made up of: + '\_id': a 'string' feature representing the query id - '\_id': a 'string' feature, denoting the document id. - 'score': a 'int32' feature, denoting the relevance judgement between query and document. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Cite as: ### Contributions Thanks to @Nthakur20 for adding this dataset.
[ "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. 
For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset.Top-20 generated queries for every passage in NFCorpus\n\n\nDataset Card for BEIR Benchmark\n===============================\n\n\nTable of Contents\n-----------------\n\n\n* Dataset Description\n\t+ Dataset Summary\n\t+ Supported Tasks and Leaderboards\n\t+ Languages\n* Dataset Structure\n\t+ Data Instances\n\t+ Data Fields\n\t+ Data Splits\n* Dataset Creation\n\t+ Curation Rationale\n\t+ Source Data\n\t+ Annotations\n\t+ Personal and Sensitive Information\n* Considerations for Using the Data\n\t+ Social Impact of Dataset\n\t+ Discussion of Biases\n\t+ Other Known Limitations\n* Additional Information\n\t+ Dataset Curators\n\t+ Licensing Information\n\t+ Citation Information\n\t+ Contributions\n\n\nDataset Description\n-------------------\n\n\n* Homepage: URL\n* Repository: URL\n* Paper: URL\n* Leaderboard: URL\n* Point of Contact: URL@URL", "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset 
Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset." ]
[ "TAGS\n#task_categories-text-retrieval #multilinguality-monolingual #language-English #license-cc-by-sa-4.0 #region-us \n", "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. 
For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset.Top-20 generated queries for every passage in NFCorpus\n\n\nDataset Card for BEIR Benchmark\n===============================\n\n\nTable of Contents\n-----------------\n\n\n* Dataset Description\n\t+ Dataset Summary\n\t+ Supported Tasks and Leaderboards\n\t+ Languages\n* Dataset Structure\n\t+ Data Instances\n\t+ Data Fields\n\t+ Data Splits\n* Dataset Creation\n\t+ Curation Rationale\n\t+ Source Data\n\t+ Annotations\n\t+ Personal and Sensitive Information\n* Considerations for Using the Data\n\t+ Social Impact of Dataset\n\t+ Discussion of Biases\n\t+ Other Known Limitations\n* Additional Information\n\t+ Dataset Curators\n\t+ Licensing Information\n\t+ Citation Information\n\t+ Contributions\n\n\nDataset Description\n-------------------\n\n\n* Homepage: URL\n* Repository: URL\n* Paper: URL\n* Leaderboard: URL\n* Point of Contact: URL@URL", "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset 
Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset." ]
5f1c8bbc6dbb56a8c7a8813332dd7a38300fdebe
# NFCorpus: 20 generated queries (BEIR Benchmark) This HF dataset contains the top-20 synthetic queries generated for each passage in the above BEIR benchmark dataset. - DocT5query model used: [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1) - id (str): unique document id in NFCorpus in the BEIR benchmark (`corpus.jsonl`). - Questions generated: 20 - Code used for generation: [evaluate_anserini_docT5query_parallel.py](https://github.com/beir-cellar/beir/blob/main/examples/retrieval/evaluation/sparse/evaluate_anserini_docT5query_parallel.py) Below contains the old dataset card for the BEIR benchmark. # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** [email protected] ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. 
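As a quick start, the snippet below is a minimal sketch of how any of the preprocessed datasets could be downloaded and loaded, assuming the `beir` Python package (`pip install beir`) and its `GenericDataLoader`; the zip URL follows the Link column of the Data Splits table further down, with `nfcorpus` used here only as an example.

```python
# Sketch (not the card's own snippet): download one preprocessed BEIR dataset
# and load it with the beir package. "nfcorpus" can be swapped for any BEIR-Name
# from the Data Splits table below; datasets marked "No" are not hosted as zips.
from beir import util
from beir.datasets.data_loader import GenericDataLoader

dataset = "nfcorpus"
url = f"https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/{dataset}.zip"
data_path = util.download_and_unzip(url, "datasets")

# corpus: {doc_id: {"title": ..., "text": ...}}, queries: {query_id: query_text},
# qrels: {query_id: {doc_id: relevance_score}}
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
```

The same pattern applies to any BEIR-Name in the table; datasets whose Download column says "No" have to be rebuilt with the linked "How to Reproduce?" instructions instead.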
### Supported Tasks and Leaderboards

The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.

The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).

### Languages

All tasks are in English (`en`).

## Dataset Structure

All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:

- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields: `_id` with a unique document identifier, `title` with the document title (optional) and `text` with the document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields: `_id` with a unique query identifier and `text` with the query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the 1st row as a header. For example: `q1 doc1 1`

### Data Instances

A high-level example of any BEIR dataset:

```python
corpus = {
    "doc1": {
        "title": "Albert Einstein",
        "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \
                 one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
                 its influence on the philosophy of science. He is best known to the general public for his mass–energy \
                 equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
                 Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
                 of the photoelectric effect', a pivotal step in the development of quantum theory."
    },
    "doc2": {
        "title": "",  # Keep title an empty string if not present
        "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
                 malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made \
                 with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
    },
}

queries = {
    "q1": "Who developed the mass-energy equivalence formula?",
    "q2": "Which beer is brewed with a large proportion of wheat?"
}

qrels = {
    "q1": {"doc1": 1},
    "q2": {"doc2": 1},
}
```

### Data Fields

Examples from all configurations have the following features:

### Corpus

- `corpus`: a `dict` feature representing the document title and passage text, made up of:
  - `_id`: a `string` feature representing the unique document id
    - `title`: a `string` feature, denoting the title of the document.
    - `text`: a `string` feature, denoting the text of the document.

### Queries

- `queries`: a `dict` feature representing the query, made up of:
  - `_id`: a `string` feature representing the unique query id
  - `text`: a `string` feature, denoting the text of the query.

### Qrels

- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
  - `_id`: a `string` feature representing the query id
    - `_id`: a `string` feature, denoting the document id.
    - `score`: an `int32` feature, denoting the relevance judgement between query and document.
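To tie the file format and the feature descriptions together, here is a minimal, standard-library-only sketch of parsing the three files into the dictionaries shown in the Data Instances example above; the file names `corpus.jsonl`, `queries.jsonl` and `qrels/test.tsv` are assumptions that simply follow the format description.

```python
import csv
import json
from collections import defaultdict

def load_jsonl(path):
    """Read a .jsonl file into a dict keyed by the `_id` field."""
    out = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            out[row.pop("_id")] = row
    return out

corpus = load_jsonl("corpus.jsonl")    # {"doc1": {"title": ..., "text": ...}, ...}
queries = {qid: d["text"] for qid, d in load_jsonl("queries.jsonl").items()}

qrels = defaultdict(dict)              # {"q1": {"doc1": 1}, ...}
with open("qrels/test.tsv", encoding="utf-8") as f:
    reader = csv.reader(f, delimiter="\t")
    next(reader)                       # skip the header row
    for query_id, corpus_id, score in reader:
        qrels[query_id][corpus_id] = int(score)
```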
### Data Splits

| Dataset | Website | BEIR-Name | Type | Queries | Corpus | Rel D/Q | Download | md5 |
| -------- | ----- | --------- | --------- | ----------- | --------- | --------- | :----------: | :------: |
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/) | ``msmarco`` | ``train``<br>``dev``<br>``test`` | 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html) | ``trec-covid`` | ``test`` | 50 | 171K | 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test`` | 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq`` | ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq`` | ``train``<br>``test`` | 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa`` | ``train``<br>``dev``<br>``test`` | 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test`` | 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html) | ``signal1m`` | ``test`` | 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test`` | 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana`` | ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020 | [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020`` | ``test`` | 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack | [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack`` | ``test`` | 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora | [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora`` | ``dev``<br>``test`` | 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity`` | ``dev``<br>``test`` | 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS | [Homepage](https://allenai.org/data/scidocs) | ``scidocs`` | ``test`` | 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever`` | ``train``<br>``dev``<br>``test`` | 6,666 | 5.42M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER | [Homepage](http://climatefever.ai) | ``climate-fever`` | ``test`` | 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact | [Homepage](https://github.com/allenai/scifact) | ``scifact`` | ``train``<br>``test`` | 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04`` | ``test`` | 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

[Needs More Information]

### Citation Information

Cite as:

```
@inproceedings{
    thakur2021beir,
    title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
    author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
    booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
    year={2021},
    url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```

### Contributions

Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
income/cqadupstack-tex-top-20-gen-queries
[ "task_categories:text-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
2023-01-24T19:53:01+00:00
{"annotations_creators": [], "language_creators": [], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": {"msmarco": ["1M<n<10M"], "trec-covid": ["100k<n<1M"], "nfcorpus": ["1K<n<10K"], "nq": ["1M<n<10M"], "hotpotqa": ["1M<n<10M"], "fiqa": ["10K<n<100K"], "arguana": ["1K<n<10K"], "touche-2020": ["100K<n<1M"], "cqadupstack": ["100K<n<1M"], "quora": ["100K<n<1M"], "dbpedia": ["1M<n<10M"], "scidocs": ["10K<n<100K"], "fever": ["1M<n<10M"], "climate-fever": ["1M<n<10M"], "scifact": ["1K<n<10K"]}, "source_datasets": [], "task_categories": ["text-retrieval"], "paperswithcode_id": "beir", "pretty_name": "BEIR Benchmark"}
2023-01-24T19:53:05+00:00
[]
[ "en" ]
790844a3d3bc8342b0ad0e35d2f8b397b77b2784
# NFCorpus: 20 generated queries (BEIR Benchmark) This HF dataset contains the top-20 synthetic queries generated for each passage in the above BEIR benchmark dataset. - DocT5query model used: [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1) - id (str): unique document id in NFCorpus in the BEIR benchmark (`corpus.jsonl`). - Questions generated: 20 - Code used for generation: [evaluate_anserini_docT5query_parallel.py](https://github.com/beir-cellar/beir/blob/main/examples/retrieval/evaluation/sparse/evaluate_anserini_docT5query_parallel.py) Below contains the old dataset card for the BEIR benchmark. # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** [email protected] ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. 
### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1` ### Data Instances A high-level example of any BEIR dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." }, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: an `int32` feature, denoting the relevance judgement between query and document. 
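The top-20 queries in this dataset were generated from such a `corpus.jsonl` with the docT5query model named at the top of this card. Below is a hedged, illustrative sketch of that kind of generation; the released queries come from the linked `evaluate_anserini_docT5query_parallel.py` script, whose exact settings may differ: ```python # Illustrative sketch only; generation parameters are assumptions, not the official script's. import json from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("BeIR/query-gen-msmarco-t5-base-v1") model = T5ForConditionalGeneration.from_pretrained("BeIR/query-gen-msmarco-t5-base-v1") with open("corpus.jsonl") as f: # corpus file in the format described above for line in f: doc = json.loads(line) inputs = tokenizer(doc["text"], truncation=True, max_length=512, return_tensors="pt") outputs = model.generate(**inputs, max_length=64, do_sample=True, top_p=0.95, num_return_sequences=20) # 20 queries per passage queries = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs] print(doc["_id"], queries[:3]) break # demo: only the first passage ```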
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| 
``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.Top-20 generated queries for every passage in NFCorpus # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional 
Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** [email protected] ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. ```python ``` ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-seperated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1` ### Data Instances A high level example of any beir dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist. 
who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." }, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. 
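In practice these qrels are what retrieval runs are scored against. A small sketch using `pytrec_eval` (an assumption here; BEIR itself wraps the same library in its `EvaluateRetrieval` class): ```python import pytrec_eval # qrels and a toy retrieval run, both keyed by query id and document id qrels = {"q1": {"doc1": 1}, "q2": {"doc2": 1}} run = {"q1": {"doc1": 1.2, "doc2": 0.3}, "q2": {"doc2": 0.9, "doc1": 0.1}} evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"map", "ndcg_cut.10"}) scores = evaluator.evaluate(run) # per-query metric dictionaries print(scores["q1"]["ndcg_cut_10"], scores["q1"]["map"]) ```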
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| 
``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
income/cqadupstack-unix-top-20-gen-queries
[ "task_categories:text-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
2023-01-24T19:53:13+00:00
{"annotations_creators": [], "language_creators": [], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": {"msmarco": ["1M<n<10M"], "trec-covid": ["100k<n<1M"], "nfcorpus": ["1K<n<10K"], "nq": ["1M<n<10M"], "hotpotqa": ["1M<n<10M"], "fiqa": ["10K<n<100K"], "arguana": ["1K<n<10K"], "touche-2020": ["100K<n<1M"], "cqadupstack": ["100K<n<1M"], "quora": ["100K<n<1M"], "dbpedia": ["1M<n<10M"], "scidocs": ["10K<n<100K"], "fever": ["1M<n<10M"], "climate-fever": ["1M<n<10M"], "scifact": ["1K<n<10K"]}, "source_datasets": [], "task_categories": ["text-retrieval"], "paperswithcode_id": "beir", "pretty_name": "BEIR Benchmark"}
2023-01-24T19:53:16+00:00
[]
[ "en" ]
TAGS #task_categories-text-retrieval #multilinguality-monolingual #language-English #license-cc-by-sa-4.0 #region-us
NFCorpus: 20 generated queries (BEIR Benchmark) =============================================== This HF dataset contains the top-20 synthetic queries generated for each passage in the above BEIR benchmark dataset. * DocT5query model used: BeIR/query-gen-msmarco-t5-base-v1 * id (str): unique document id in NFCorpus in the BEIR benchmark ('URL'). * Questions generated: 20 * Code used for generation: evaluate\_anserini\_docT5query\_parallel.py Below contains the old dataset card for the BEIR benchmark. Dataset Card for BEIR Benchmark =============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: URL * Point of Contact: URL@URL ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: * Fact-checking: FEVER, Climate-FEVER, SciFact * Question-Answering: NQ, HotpotQA, FiQA-2018 * Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus * News Retrieval: TREC-NEWS, Robust04 * Argument Retrieval: Touche-2020, ArguAna * Duplicate Question Retrieval: Quora, CqaDupstack * Citation-Prediction: SCIDOCS * Tweet Retrieval: Signal-1M * Entity Retrieval: DBPedia All these datasets have been preprocessed and can be used for your experiments. ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found here. ### Languages All tasks are in English ('en'). Dataset Structure ----------------- All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: * 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{"\_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}' * 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\_id' with unique query identifier and 'text' with query text. For example: '{"\_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}' * 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1' ### Data Instances A high level example of any beir dataset: ### Data Fields Examples from all configurations have the following features: ### Corpus * 'corpus': a 'dict' feature representing the document title and passage text, made up of: + '\_id': a 'string' feature representing the unique document id - 'title': a 'string' feature, denoting the title of the document. - 'text': a 'string' feature, denoting the text of the document. 
### Queries * 'queries': a 'dict' feature representing the query, made up of: + '\_id': a 'string' feature representing the unique query id + 'text': a 'string' feature, denoting the text of the query. ### Qrels * 'qrels': a 'dict' feature representing the query document relevance judgements, made up of: + '\_id': a 'string' feature representing the query id - '\_id': a 'string' feature, denoting the document id. - 'score': a 'int32' feature, denoting the relevance judgement between query and document. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Cite as: ### Contributions Thanks to @Nthakur20 for adding this dataset.Top-20 generated queries for every passage in NFCorpus Dataset Card for BEIR Benchmark =============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: URL * Point of Contact: URL@URL ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: * Fact-checking: FEVER, Climate-FEVER, SciFact * Question-Answering: NQ, HotpotQA, FiQA-2018 * Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus * News Retrieval: TREC-NEWS, Robust04 * Argument Retrieval: Touche-2020, ArguAna * Duplicate Question Retrieval: Quora, CqaDupstack * Citation-Prediction: SCIDOCS * Tweet Retrieval: Signal-1M * Entity Retrieval: DBPedia All these datasets have been preprocessed and can be used for your experiments. ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found here. ### Languages All tasks are in English ('en'). Dataset Structure ----------------- All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: * 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{"\_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}' * 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\_id' with unique query identifier and 'text' with query text. 
For example: '{"\_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}' * 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1' ### Data Instances A high level example of any beir dataset: ### Data Fields Examples from all configurations have the following features: ### Corpus * 'corpus': a 'dict' feature representing the document title and passage text, made up of: + '\_id': a 'string' feature representing the unique document id - 'title': a 'string' feature, denoting the title of the document. - 'text': a 'string' feature, denoting the text of the document. ### Queries * 'queries': a 'dict' feature representing the query, made up of: + '\_id': a 'string' feature representing the unique query id + 'text': a 'string' feature, denoting the text of the query. ### Qrels * 'qrels': a 'dict' feature representing the query document relevance judgements, made up of: + '\_id': a 'string' feature representing the query id - '\_id': a 'string' feature, denoting the document id. - 'score': a 'int32' feature, denoting the relevance judgement between query and document. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Cite as: ### Contributions Thanks to @Nthakur20 for adding this dataset.
[ "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. 
For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset.Top-20 generated queries for every passage in NFCorpus\n\n\nDataset Card for BEIR Benchmark\n===============================\n\n\nTable of Contents\n-----------------\n\n\n* Dataset Description\n\t+ Dataset Summary\n\t+ Supported Tasks and Leaderboards\n\t+ Languages\n* Dataset Structure\n\t+ Data Instances\n\t+ Data Fields\n\t+ Data Splits\n* Dataset Creation\n\t+ Curation Rationale\n\t+ Source Data\n\t+ Annotations\n\t+ Personal and Sensitive Information\n* Considerations for Using the Data\n\t+ Social Impact of Dataset\n\t+ Discussion of Biases\n\t+ Other Known Limitations\n* Additional Information\n\t+ Dataset Curators\n\t+ Licensing Information\n\t+ Citation Information\n\t+ Contributions\n\n\nDataset Description\n-------------------\n\n\n* Homepage: URL\n* Repository: URL\n* Paper: URL\n* Leaderboard: URL\n* Point of Contact: URL@URL", "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset 
Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset." ]
[ "TAGS\n#task_categories-text-retrieval #multilinguality-monolingual #language-English #license-cc-by-sa-4.0 #region-us \n", "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. 
For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset.Top-20 generated queries for every passage in NFCorpus\n\n\nDataset Card for BEIR Benchmark\n===============================\n\n\nTable of Contents\n-----------------\n\n\n* Dataset Description\n\t+ Dataset Summary\n\t+ Supported Tasks and Leaderboards\n\t+ Languages\n* Dataset Structure\n\t+ Data Instances\n\t+ Data Fields\n\t+ Data Splits\n* Dataset Creation\n\t+ Curation Rationale\n\t+ Source Data\n\t+ Annotations\n\t+ Personal and Sensitive Information\n* Considerations for Using the Data\n\t+ Social Impact of Dataset\n\t+ Discussion of Biases\n\t+ Other Known Limitations\n* Additional Information\n\t+ Dataset Curators\n\t+ Licensing Information\n\t+ Citation Information\n\t+ Contributions\n\n\nDataset Description\n-------------------\n\n\n* Homepage: URL\n* Repository: URL\n* Paper: URL\n* Leaderboard: URL\n* Point of Contact: URL@URL", "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset 
Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset." ]
6f8523361c6b87bd3f9f949b4cb65c323f703de1
# NFCorpus: 20 generated queries (BEIR Benchmark) This HF dataset contains the top-20 synthetic queries generated for each passage in the above BEIR benchmark dataset. - DocT5query model used: [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1) - id (str): unique document id in NFCorpus in the BEIR benchmark (`corpus.jsonl`). - Questions generated: 20 - Code used for generation: [evaluate_anserini_docT5query_parallel.py](https://github.com/beir-cellar/beir/blob/main/examples/retrieval/evaluation/sparse/evaluate_anserini_docT5query_parallel.py) Below contains the old dataset card for the BEIR benchmark. # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** [email protected] ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. 
```python ``` ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-seperated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1` ### Data Instances A high level example of any beir dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." }, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. 
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| 
## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

[Needs More Information]

### Citation Information

Cite as:

```
@inproceedings{
    thakur2021beir,
    title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
    author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
    booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
    year={2021},
    url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```

### Contributions

Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
# NFCorpus: 20 generated queries (BEIR Benchmark) This HF dataset contains the top-20 synthetic queries generated for each passage in the above BEIR benchmark dataset. - DocT5query model used: [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1) - id (str): unique document id in NFCorpus in the BEIR benchmark (`corpus.jsonl`). - Questions generated: 20 - Code used for generation: [evaluate_anserini_docT5query_parallel.py](https://github.com/beir-cellar/beir/blob/main/examples/retrieval/evaluation/sparse/evaluate_anserini_docT5query_parallel.py) Below contains the old dataset card for the BEIR benchmark. # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** [email protected] ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. 
```python ``` ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-seperated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1` ### Data Instances A high level example of any beir dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." }, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. 
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| 
``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.Top-20 generated queries for every passage in NFCorpus # Dataset Card for BEIR Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional 
Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UKPLab/beir - **Repository:** https://github.com/UKPLab/beir - **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ - **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns - **Point of Contact:** [email protected] ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: - Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact) - Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/) - Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) - News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html) - Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data) - Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) - Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs) - Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html) - Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/) All these datasets have been preprocessed and can be used for your experiments. ```python ``` ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/). ### Languages All tasks are in English (`en`). ## Dataset Structure All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: - `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}` - `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}` - `qrels` file: a `.tsv` file (tab-seperated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep 1st row as header. For example: `q1 doc1 1` ### Data Instances A high level example of any beir dataset: ```python corpus = { "doc1" : { "title": "Albert Einstein", "text": "Albert Einstein was a German-born theoretical physicist. 
who developed the theory of relativity, \ one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \ its influence on the philosophy of science. He is best known to the general public for his mass–energy \ equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \ Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \ of the photoelectric effect', a pivotal step in the development of quantum theory." }, "doc2" : { "title": "", # Keep title an empty string if not present "text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \ malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\ with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)." }, } queries = { "q1" : "Who developed the mass-energy equivalence formula?", "q2" : "Which beer is brewed with a large proportion of wheat?" } qrels = { "q1" : {"doc1": 1}, "q2" : {"doc2": 1}, } ``` ### Data Fields Examples from all configurations have the following features: ### Corpus - `corpus`: a `dict` feature representing the document title and passage text, made up of: - `_id`: a `string` feature representing the unique document id - `title`: a `string` feature, denoting the title of the document. - `text`: a `string` feature, denoting the text of the document. ### Queries - `queries`: a `dict` feature representing the query, made up of: - `_id`: a `string` feature representing the unique query id - `text`: a `string` feature, denoting the text of the query. ### Qrels - `qrels`: a `dict` feature representing the query document relevance judgements, made up of: - `_id`: a `string` feature representing the query id - `_id`: a `string` feature, denoting the document id. - `score`: a `int32` feature, denoting the relevance judgement between query and document. 
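As an informal illustration of the corpus, queries and qrels files described under Dataset Structure above, the three files can be read with nothing but the Python standard library. The file paths below are assumptions (they depend on where a downloaded dataset is unzipped), and this sketch is not part of the official BEIR tooling:

```python
import csv
import json

# Assumed locations after unzipping one of the datasets listed in the table below.
corpus_path, queries_path, qrels_path = "corpus.jsonl", "queries.jsonl", "qrels/test.tsv"

# corpus.jsonl and queries.jsonl hold one JSON object per line.
with open(corpus_path, encoding="utf-8") as f:
    corpus = {doc["_id"]: doc for doc in map(json.loads, f)}

with open(queries_path, encoding="utf-8") as f:
    queries = {q["_id"]: q["text"] for q in map(json.loads, f)}

# The qrels file is tab-separated with a header row: query-id, corpus-id, score.
qrels = {}
with open(qrels_path, encoding="utf-8") as f:
    reader = csv.reader(f, delimiter="\t")
    next(reader)  # skip the header row
    for query_id, corpus_id, score in reader:
        qrels.setdefault(query_id, {})[corpus_id] = int(score)
```

The resulting `corpus`, `queries` and `qrels` dictionaries have the same shape as the instances shown in the example above.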
### Data Splits | Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 | | -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:| | MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` | | TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` | | NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` | | BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) | | NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` | | HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` | | FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` | | Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) | | TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) | | ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` | | Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` | | CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` | | Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` | | DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| 
``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` | | SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` | | FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` | | Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` | | SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` | | Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information Cite as: ``` @inproceedings{ thakur2021beir, title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models}, author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021}, url={https://openreview.net/forum?id=wCu6T5xFjeJ} } ``` ### Contributions Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
income/cqadupstack-wordpress-top-20-gen-queries
[ "task_categories:text-retrieval", "multilinguality:monolingual", "language:en", "license:cc-by-sa-4.0", "region:us" ]
2023-01-24T19:53:30+00:00
{"annotations_creators": [], "language_creators": [], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": {"msmarco": ["1M<n<10M"], "trec-covid": ["100k<n<1M"], "nfcorpus": ["1K<n<10K"], "nq": ["1M<n<10M"], "hotpotqa": ["1M<n<10M"], "fiqa": ["10K<n<100K"], "arguana": ["1K<n<10K"], "touche-2020": ["100K<n<1M"], "cqadupstack": ["100K<n<1M"], "quora": ["100K<n<1M"], "dbpedia": ["1M<n<10M"], "scidocs": ["10K<n<100K"], "fever": ["1M<n<10M"], "climate-fever": ["1M<n<10M"], "scifact": ["1K<n<10K"]}, "source_datasets": [], "task_categories": ["text-retrieval"], "paperswithcode_id": "beir", "pretty_name": "BEIR Benchmark"}
2023-01-24T19:53:33+00:00
[]
[ "en" ]
TAGS #task_categories-text-retrieval #multilinguality-monolingual #language-English #license-cc-by-sa-4.0 #region-us
NFCorpus: 20 generated queries (BEIR Benchmark) =============================================== This HF dataset contains the top-20 synthetic queries generated for each passage in the above BEIR benchmark dataset. * DocT5query model used: BeIR/query-gen-msmarco-t5-base-v1 * id (str): unique document id in NFCorpus in the BEIR benchmark ('URL'). * Questions generated: 20 * Code used for generation: evaluate\_anserini\_docT5query\_parallel.py Below contains the old dataset card for the BEIR benchmark. Dataset Card for BEIR Benchmark =============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: URL * Point of Contact: URL@URL ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: * Fact-checking: FEVER, Climate-FEVER, SciFact * Question-Answering: NQ, HotpotQA, FiQA-2018 * Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus * News Retrieval: TREC-NEWS, Robust04 * Argument Retrieval: Touche-2020, ArguAna * Duplicate Question Retrieval: Quora, CqaDupstack * Citation-Prediction: SCIDOCS * Tweet Retrieval: Signal-1M * Entity Retrieval: DBPedia All these datasets have been preprocessed and can be used for your experiments. ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found here. ### Languages All tasks are in English ('en'). Dataset Structure ----------------- All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: * 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{"\_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}' * 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\_id' with unique query identifier and 'text' with query text. For example: '{"\_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}' * 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1' ### Data Instances A high level example of any beir dataset: ### Data Fields Examples from all configurations have the following features: ### Corpus * 'corpus': a 'dict' feature representing the document title and passage text, made up of: + '\_id': a 'string' feature representing the unique document id - 'title': a 'string' feature, denoting the title of the document. - 'text': a 'string' feature, denoting the text of the document. 
### Queries * 'queries': a 'dict' feature representing the query, made up of: + '\_id': a 'string' feature representing the unique query id + 'text': a 'string' feature, denoting the text of the query. ### Qrels * 'qrels': a 'dict' feature representing the query document relevance judgements, made up of: + '\_id': a 'string' feature representing the query id - '\_id': a 'string' feature, denoting the document id. - 'score': a 'int32' feature, denoting the relevance judgement between query and document. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Cite as: ### Contributions Thanks to @Nthakur20 for adding this dataset.Top-20 generated queries for every passage in NFCorpus Dataset Card for BEIR Benchmark =============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: URL * Point of Contact: URL@URL ### Dataset Summary BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks: * Fact-checking: FEVER, Climate-FEVER, SciFact * Question-Answering: NQ, HotpotQA, FiQA-2018 * Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus * News Retrieval: TREC-NEWS, Robust04 * Argument Retrieval: Touche-2020, ArguAna * Duplicate Question Retrieval: Quora, CqaDupstack * Citation-Prediction: SCIDOCS * Tweet Retrieval: Signal-1M * Entity Retrieval: DBPedia All these datasets have been preprocessed and can be used for your experiments. ### Supported Tasks and Leaderboards The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia. The current best performing models can be found here. ### Languages All tasks are in English ('en'). Dataset Structure ----------------- All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format: * 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{"\_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}' * 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\_id' with unique query identifier and 'text' with query text. 
For example: '{"\_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}' * 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1' ### Data Instances A high level example of any beir dataset: ### Data Fields Examples from all configurations have the following features: ### Corpus * 'corpus': a 'dict' feature representing the document title and passage text, made up of: + '\_id': a 'string' feature representing the unique document id - 'title': a 'string' feature, denoting the title of the document. - 'text': a 'string' feature, denoting the text of the document. ### Queries * 'queries': a 'dict' feature representing the query, made up of: + '\_id': a 'string' feature representing the unique query id + 'text': a 'string' feature, denoting the text of the query. ### Qrels * 'qrels': a 'dict' feature representing the query document relevance judgements, made up of: + '\_id': a 'string' feature representing the query id - '\_id': a 'string' feature, denoting the document id. - 'score': a 'int32' feature, denoting the relevance judgement between query and document. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Cite as: ### Contributions Thanks to @Nthakur20 for adding this dataset.
[ "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. 
For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset.Top-20 generated queries for every passage in NFCorpus\n\n\nDataset Card for BEIR Benchmark\n===============================\n\n\nTable of Contents\n-----------------\n\n\n* Dataset Description\n\t+ Dataset Summary\n\t+ Supported Tasks and Leaderboards\n\t+ Languages\n* Dataset Structure\n\t+ Data Instances\n\t+ Data Fields\n\t+ Data Splits\n* Dataset Creation\n\t+ Curation Rationale\n\t+ Source Data\n\t+ Annotations\n\t+ Personal and Sensitive Information\n* Considerations for Using the Data\n\t+ Social Impact of Dataset\n\t+ Discussion of Biases\n\t+ Other Known Limitations\n* Additional Information\n\t+ Dataset Curators\n\t+ Licensing Information\n\t+ Citation Information\n\t+ Contributions\n\n\nDataset Description\n-------------------\n\n\n* Homepage: URL\n* Repository: URL\n* Paper: URL\n* Leaderboard: URL\n* Point of Contact: URL@URL", "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset 
Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset." ]
[ "TAGS\n#task_categories-text-retrieval #multilinguality-monolingual #language-English #license-cc-by-sa-4.0 #region-us \n", "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. 
For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset.Top-20 generated queries for every passage in NFCorpus\n\n\nDataset Card for BEIR Benchmark\n===============================\n\n\nTable of Contents\n-----------------\n\n\n* Dataset Description\n\t+ Dataset Summary\n\t+ Supported Tasks and Leaderboards\n\t+ Languages\n* Dataset Structure\n\t+ Data Instances\n\t+ Data Fields\n\t+ Data Splits\n* Dataset Creation\n\t+ Curation Rationale\n\t+ Source Data\n\t+ Annotations\n\t+ Personal and Sensitive Information\n* Considerations for Using the Data\n\t+ Social Impact of Dataset\n\t+ Discussion of Biases\n\t+ Other Known Limitations\n* Additional Information\n\t+ Dataset Curators\n\t+ Licensing Information\n\t+ Citation Information\n\t+ Contributions\n\n\nDataset Description\n-------------------\n\n\n* Homepage: URL\n* Repository: URL\n* Paper: URL\n* Leaderboard: URL\n* Point of Contact: URL@URL", "### Dataset Summary\n\n\nBEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:\n\n\n* Fact-checking: FEVER, Climate-FEVER, SciFact\n* Question-Answering: NQ, HotpotQA, FiQA-2018\n* Bio-Medical IR: TREC-COVID, BioASQ, NFCorpus\n* News Retrieval: TREC-NEWS, Robust04\n* Argument Retrieval: Touche-2020, ArguAna\n* Duplicate Question Retrieval: Quora, CqaDupstack\n* Citation-Prediction: SCIDOCS\n* Tweet Retrieval: Signal-1M\n* Entity Retrieval: DBPedia\n\n\nAll these datasets have been preprocessed and can be used for your experiments.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.\n\n\nThe current best performing models can be found here.", "### Languages\n\n\nAll tasks are in English ('en').\n\n\nDataset 
Structure\n-----------------\n\n\nAll BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:\n\n\n* 'corpus' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with three fields '\\_id' with unique document identifier, 'title' with document title (optional) and 'text' with document paragraph or passage. For example: '{\"\\_id\": \"doc1\", \"title\": \"Albert Einstein\", \"text\": \"Albert Einstein was a German-born....\"}'\n* 'queries' file: a '.jsonl' file (jsonlines) that contains a list of dictionaries, each with two fields '\\_id' with unique query identifier and 'text' with query text. For example: '{\"\\_id\": \"q1\", \"text\": \"Who developed the mass-energy equivalence formula?\"}'\n* 'qrels' file: a '.tsv' file (tab-seperated) that contains three columns, i.e. the 'query-id', 'corpus-id' and 'score' in this order. Keep 1st row as header. For example: 'q1 doc1 1'", "### Data Instances\n\n\nA high level example of any beir dataset:", "### Data Fields\n\n\nExamples from all configurations have the following features:", "### Corpus\n\n\n* 'corpus': a 'dict' feature representing the document title and passage text, made up of:\n\t+ '\\_id': a 'string' feature representing the unique document id\n\t\t- 'title': a 'string' feature, denoting the title of the document.\n\t\t- 'text': a 'string' feature, denoting the text of the document.", "### Queries\n\n\n* 'queries': a 'dict' feature representing the query, made up of:\n\t+ '\\_id': a 'string' feature representing the unique query id\n\t+ 'text': a 'string' feature, denoting the text of the query.", "### Qrels\n\n\n* 'qrels': a 'dict' feature representing the query document relevance judgements, made up of:\n\t+ '\\_id': a 'string' feature representing the query id\n\t\t- '\\_id': a 'string' feature, denoting the document id.\n\t\t- 'score': a 'int32' feature, denoting the relevance judgement between query and document.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCite as:", "### Contributions\n\n\nThanks to @Nthakur20 for adding this dataset." ]
17af9355ef89fba60de966eabaeba797c695f86e
Original dataset introduced by Jin et al. in [What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams](https://paperswithcode.com/paper/what-disease-does-this-patient-have-a-large) #### Citation information: @article{jin2020disease, title={What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams}, author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter}, journal={arXiv preprint arXiv:2009.13081}, year={2020} }
GBaker/MedQA-USMLE-4-options-hf
[ "license:cc-by-sa-4.0", "region:us" ]
2023-01-24T20:32:54+00:00
{"license": "cc-by-sa-4.0"}
2023-01-30T22:57:33+00:00
[]
[]
TAGS #license-cc-by-sa-4.0 #region-us
Original dataset introduced by Jin et al. in What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams #### Citation information: @article{jin2020disease, title={What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams}, author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter}, journal={arXiv preprint arXiv:2009.13081}, year={2020} }
[]
[ "TAGS\n#license-cc-by-sa-4.0 #region-us \n" ]
23ff83b18ed76882f3af1a403a7c464b72efcd86
https://github.com/allenai/csqa2

```
@article{talmor2022commonsenseqa,
  title={CommonsenseQA 2.0: Exposing the limits of AI through gamification},
  author={Talmor, Alon and Yoran, Ori and Bras, Ronan Le and Bhagavatula, Chandra and Goldberg, Yoav and Choi, Yejin and Berant, Jonathan},
  journal={arXiv preprint arXiv:2201.05320},
  year={2022}
}
```
tasksource/commonsense_qa_2.0
[ "task_categories:question-answering", "language:en", "license:cc-by-4.0", "region:us" ]
2023-01-24T21:45:47+00:00
{"language": ["en"], "license": "cc-by-4.0", "task_categories": ["question-answering"]}
2023-06-21T11:48:33+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #language-English #license-cc-by-4.0 #region-us
URL
[]
[ "TAGS\n#task_categories-question-answering #language-English #license-cc-by-4.0 #region-us \n" ]
e548eae008a43d71df22d230476db873b7ad7228
# DMV-Plates This dataset contains various plates and their DMV responses. Props to avery for making this jsonl file!
DarwinAnim8or/DMV-Plate-Review
[ "license:mit", "region:us" ]
2023-01-24T22:13:04+00:00
{"license": "mit"}
2023-01-24T22:17:51+00:00
[]
[]
TAGS #license-mit #region-us
# DMV-Plates This dataset contains various plates and their DMV responses. Props to avery for making this jsonl file!
[ "# DMV-Plates\nThis datasets contains various plates and their DMV responses. \nProps to avery for making this jsonl file!" ]
[ "TAGS\n#license-mit #region-us \n", "# DMV-Plates\nThis datasets contains various plates and their DMV responses. \nProps to avery for making this jsonl file!" ]
d130cde11d599ea68b91ec6bb9d2e87ae2724d78
# Dataset Card for "blimp" HuggingFace Hub Upload of BLiMP: The Benchmark of Linguistic Minimal Pairs from https://github.com/alexwarstadt/blimp If you use this dataset in your work, please cite the original authors and paper. ``` @article{warstadt2020blimp, author = {Warstadt, Alex and Parrish, Alicia and Liu, Haokun and Mohananey, Anhad and Peng, Wei and Wang, Sheng-Fu and Bowman, Samuel R.}, title = {BLiMP: The Benchmark of Linguistic Minimal Pairs for English}, journal = {Transactions of the Association for Computational Linguistics}, volume = {8}, number = {}, pages = {377-392}, year = {2020}, doi = {10.1162/tacl\_a\_00321}, URL = {https://doi.org/10.1162/tacl_a_00321}, eprint = {https://doi.org/10.1162/tacl_a_00321}, abstract = { We introduce The Benchmark of Linguistic Minimal Pairs (BLiMP),1 a challenge set for evaluating the linguistic knowledge of language models (LMs) on major grammatical phenomena in English. BLiMP consists of 67 individual datasets, each containing 1,000 minimal pairs—that is, pairs of minimally different sentences that contrast in grammatical acceptability and isolate specific phenomenon in syntax, morphology, or semantics. We generate the data according to linguist-crafted grammar templates, and human aggregate agreement with the labels is 96.4\%. We evaluate n-gram, LSTM, and Transformer (GPT-2 and Transformer-XL) LMs by observing whether they assign a higher probability to the acceptable sentence in each minimal pair. We find that state-of-the-art models identify morphological contrasts related to agreement reliably, but they struggle with some subtle semantic and syntactic phenomena, such as negative polarity items and extraction islands. } } ```
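The abstract above describes the evaluation protocol: a language model is judged by whether it assigns a higher probability to the acceptable sentence of each minimal pair. A minimal sketch of that comparison, assuming the `transformers` library is installed and using GPT-2 purely as a stand-in scorer (the sentence pair is illustrative and not drawn from the dataset):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sentence_log_prob(sentence: str) -> float:
    """Total log-probability of the sentence's tokens (after the first) under the LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # Labels are shifted internally; .loss is the mean NLL over the predicted tokens.
        mean_nll = model(ids, labels=ids).loss.item()
    return -mean_nll * (ids.shape[1] - 1)

# Illustrative pair: the first sentence is acceptable, the second is not.
good, bad = "The cats sleep on the sofa.", "The cats sleeps on the sofa."
print(sentence_log_prob(good) > sentence_log_prob(bad))
```

Summing (rather than averaging) log-probabilities matters here because the two sentences of a pair can tokenize to different lengths.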
WillHeld/blimp
[ "region:us" ]
2023-01-24T22:33:00+00:00
{"dataset_info": {"features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "two_prefix_prefix_good", "dtype": "string"}, {"name": "two_prefix_prefix_bad", "dtype": "string"}, {"name": "two_prefix_word", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pairID", "dtype": "string"}, {"name": "feature_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15550503, "num_examples": 67000}], "download_size": 4374212, "dataset_size": 15550503}}
2023-01-24T22:34:34+00:00
[]
[]
TAGS #region-us
# Dataset Card for "blimp" HuggingFace Hub Upload of BLiMP: The Benchmark of Linguistic Minimal Pairs from URL If you use this dataset in your work, please cite the original authors and paper.
[ "# Dataset Card for \"blimp\"\n\nHuggingFace Hub Upload of BLiMP: The Benchmark of Linguistic Minimal Pairs from URL\n\nIf you use this dataset in your work, please cite the original authors and paper." ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"blimp\"\n\nHuggingFace Hub Upload of BLiMP: The Benchmark of Linguistic Minimal Pairs from URL\n\nIf you use this dataset in your work, please cite the original authors and paper." ]
ddb1f829b4c693bd29f91cf466d76e0065593358
# Dataset Card for "OxfordPets_test_facebook_opt_125m_Attributes_ns_3669" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordPets_test_facebook_opt_125m_Attributes_ns_3669
[ "region:us" ]
2023-01-25T02:40:56+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 121000492.375, "num_examples": 3669}, {"name": "fewshot_1_bs_16", "num_bytes": 121909173.375, "num_examples": 3669}, {"name": "fewshot_3_bs_16", "num_bytes": 123709349.375, "num_examples": 3669}, {"name": "fewshot_5_bs_16", "num_bytes": 125501892.375, "num_examples": 3669}, {"name": "fewshot_8_bs_16", "num_bytes": 128203231.375, "num_examples": 3669}], "download_size": 602523943, "dataset_size": 620324138.875}}
2023-01-25T02:56:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OxfordPets_test_facebook_opt_125m_Attributes_ns_3669" More Information needed
[ "# Dataset Card for \"OxfordPets_test_facebook_opt_125m_Attributes_ns_3669\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OxfordPets_test_facebook_opt_125m_Attributes_ns_3669\"\n\nMore Information needed" ]
7bd99fb29d7ce6498fd5b4ab15880c36d5179cd7
# Dataset Card for "OxfordPets_test_facebook_opt_350m_Attributes_ns_3669" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordPets_test_facebook_opt_350m_Attributes_ns_3669
[ "region:us" ]
2023-01-25T03:02:29+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 121000490.375, "num_examples": 3669}, {"name": "fewshot_1_bs_16", "num_bytes": 121909413.375, "num_examples": 3669}, {"name": "fewshot_3_bs_16", "num_bytes": 123709541.375, "num_examples": 3669}, {"name": "fewshot_5_bs_16", "num_bytes": 125502094.375, "num_examples": 3669}, {"name": "fewshot_8_bs_16", "num_bytes": 128203377.375, "num_examples": 3669}], "download_size": 602524552, "dataset_size": 620324916.875}}
2023-01-25T03:24:23+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OxfordPets_test_facebook_opt_350m_Attributes_ns_3669" More Information needed
[ "# Dataset Card for \"OxfordPets_test_facebook_opt_350m_Attributes_ns_3669\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OxfordPets_test_facebook_opt_350m_Attributes_ns_3669\"\n\nMore Information needed" ]
4126b85d57349927a4b886d5924ab4735fffa6a1
# Perturbed Faces This dataset contains 1000 images from the [CelebA dataset](https://www.kaggle.com/datasets/jessicali9530/celeba-dataset). For each of the thousand images, the dataset also has a [LowKey](https://openreview.net/forum?id=hJmtwocEqzc) perturbed version and a [Fawkes](https://sandlab.cs.uchicago.edu/fawkes/) perturbed version. LowKey and Fawkes perturbed images have `_attacked` and `_cloaked` at the end of the filename, respectively. | File Name | Version | |---------------------|--------------------------| | 000001.jpg | Original | | 000001_cloaked.png | Fawkes perturbed version | | 000001_attacked.png | LowKey perturbed version | The Fawkes perturbed images are created using the CLI provided in the [github repository](https://github.com/Shawn-Shan/fawkes) with protection mode set to mid. The LowKey versions of the images are created using the Python code provided with the paper. ## Citation If you found this work helpful for your research, please cite it as follows: ``` @misc{2301.07315, Author = {Aaditya Bhat and Shrey Jain}, Title = {Face Recognition in the age of CLIP & Billion image datasets}, Year = {2023}, Eprint = {arXiv:2301.07315}, } ```
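Given the naming scheme above, pairing each original image with its perturbed versions only requires inspecting filename suffixes. A small sketch, where the directory name is an assumption about where the downloaded files are stored:

```python
from pathlib import Path

root = Path("perturbed_faces")  # assumed download directory

triplets = {}
for path in sorted(root.iterdir()):
    stem = path.stem  # e.g. "000001", "000001_cloaked", "000001_attacked"
    if stem.endswith("_cloaked"):
        triplets.setdefault(stem[: -len("_cloaked")], {})["fawkes"] = path
    elif stem.endswith("_attacked"):
        triplets.setdefault(stem[: -len("_attacked")], {})["lowkey"] = path
    else:
        triplets.setdefault(stem, {})["original"] = path

# triplets["000001"] -> {"original": ..., "fawkes": ..., "lowkey": ...}
```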
aadityaubhat/perturbed_faces
[ "task_categories:feature-extraction", "task_categories:image-classification", "task_categories:zero-shot-image-classification", "size_categories:1K<n<10K", "arxiv:2301.07315", "region:us" ]
2023-01-25T03:43:47+00:00
{"size_categories": ["1K<n<10K"], "task_categories": ["feature-extraction", "image-classification", "zero-shot-image-classification"], "pretty_name": "Perturbed Faces"}
2023-01-25T04:29:39+00:00
[ "2301.07315" ]
[]
TAGS #task_categories-feature-extraction #task_categories-image-classification #task_categories-zero-shot-image-classification #size_categories-1K<n<10K #arxiv-2301.07315 #region-us
Perturbed Faces =============== This dataset contains 1000 images from CelebA dataset. For each of the thousand images dataset also has LowKey perturbed version and Fawkes perturbed version. LowKey and Fawkes perturbed images have \_attacked & \_cloaked at the end of the filename respectively. The Fawkes perturbed images are created using CLI provided in the github repository with protection mode set to mid. The LowKey version of images are created using Python code provided with the paper. If you found this work helpful for your research, please cite it as following:
[]
[ "TAGS\n#task_categories-feature-extraction #task_categories-image-classification #task_categories-zero-shot-image-classification #size_categories-1K<n<10K #arxiv-2301.07315 #region-us \n" ]
a435eb8767b9a5667e56d53bbb7c3a154c1b9670
# Dataset Card for "OxfordPets_test_facebook_opt_1.3b_Attributes_ns_3669" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordPets_test_facebook_opt_1.3b_Attributes_ns_3669
[ "region:us" ]
2023-01-25T04:23:01+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 121038204.375, "num_examples": 3669}, {"name": "fewshot_1_bs_16", "num_bytes": 121909027.375, "num_examples": 3669}, {"name": "fewshot_3_bs_16", "num_bytes": 123709262.375, "num_examples": 3669}, {"name": "fewshot_5_bs_16", "num_bytes": 125502039.375, "num_examples": 3669}, {"name": "fewshot_8_bs_16", "num_bytes": 128203307.375, "num_examples": 3669}], "download_size": 602521012, "dataset_size": 620361840.875}}
2023-01-25T04:55:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OxfordPets_test_facebook_opt_1.3b_Attributes_ns_3669" More Information needed
[ "# Dataset Card for \"OxfordPets_test_facebook_opt_1.3b_Attributes_ns_3669\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OxfordPets_test_facebook_opt_1.3b_Attributes_ns_3669\"\n\nMore Information needed" ]
17f66ff52eb1a7c8bebca582ee6567d29ba17308
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
samkenxstream/turnkey-triumph-326606_SamKenX-imdb
[ "task_categories:token-classification", "task_categories:text-classification", "size_categories:100K<n<1M", "language:aa", "language:an", "language:av", "license:bsl-1.0", "region:us" ]
2023-01-25T05:03:32+00:00
{"language": ["aa", "an", "av"], "license": "bsl-1.0", "size_categories": ["100K<n<1M"], "task_categories": ["token-classification", "text-classification"]}
2023-02-11T18:28:49+00:00
[]
[ "aa", "an", "av" ]
TAGS #task_categories-token-classification #task_categories-text-classification #size_categories-100K<n<1M #language-Afar #language-Aragonese #language-Avaric #license-bsl-1.0 #region-us
# Dataset Card for Dataset Name ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Dataset Name", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#task_categories-token-classification #task_categories-text-classification #size_categories-100K<n<1M #language-Afar #language-Aragonese #language-Avaric #license-bsl-1.0 #region-us \n", "# Dataset Card for Dataset Name", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
2235745e5e53eac10295334dfb18ef829a34705b
# Dataset Card for "noto-emoji-dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kuotient/noto-emoji-dataset
[ "region:us" ]
2023-01-25T06:10:35+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24395327.773, "num_examples": 1001}], "download_size": 24218126, "dataset_size": 24395327.773}}
2023-02-01T08:15:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for "noto-emoji-dataset" More Information needed
[ "# Dataset Card for \"noto-emoji-dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"noto-emoji-dataset\"\n\nMore Information needed" ]
9f01f21d68e292dc1f272a055ec7aa1c964f9a6f
# Dataset Card for "Uniref90_large_temp" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Oshan/Uniref90_large_temp
[ "region:us" ]
2023-01-25T06:11:58+00:00
{"dataset_info": {"features": [{"name": "cluster_id", "dtype": "string"}, {"name": "cluster_size", "dtype": "int64"}, {"name": "taxon_id", "dtype": "int64"}, {"name": "aa_len", "dtype": "int64"}, {"name": "aa_seq", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15035559, "num_examples": 500}], "download_size": 0, "dataset_size": 15035559}}
2023-01-25T06:20:03+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Uniref90_large_temp" More Information needed
[ "# Dataset Card for \"Uniref90_large_temp\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Uniref90_large_temp\"\n\nMore Information needed" ]
e03d3b8b37d4f3959c556843869f7fba5e3d0020
# Dataset Card for "skin" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
vgg/skin
[ "region:us" ]
2023-01-25T07:36:42+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1751364.0, "num_examples": 80}], "download_size": 1758461, "dataset_size": 1751364.0}}
2023-03-23T12:55:41+00:00
[]
[]
TAGS #region-us
# Dataset Card for "skin" More Information needed
[ "# Dataset Card for \"skin\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"skin\"\n\nMore Information needed" ]
2dbb4c1be6a481a4d5ef52fdc4cbd4f09f52379c
# LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization A collaboration between [reciTAL](https://recital.ai/en/), [MLIA](https://mlia.lip6.fr/) (ISIR, Sorbonne Université), [Meta AI](https://ai.facebook.com/), and [Università di Trento](https://www.unitn.it/) ## Arxiv-Lay dataset for summarization ArXiv-Lay is an enhanced version of the arXiv summarization dataset, for which layout information is provided. ### Data Fields - `article_id`: article id - `article_words`: sequence of words constituting the body of the article - `article_bboxes`: sequence of corresponding word bounding boxes - `norm_article_bboxes`: sequence of corresponding normalized word bounding boxes - `abstract`: a string containing the abstract of the article - `article_pdf_url`: URL of the article's PDF ### Data Splits This dataset has 3 splits: _train_, _validation_, and _test_. | Dataset Split | Number of Instances | | ------------- | --------------------| | Train | 122,189 | | Validation | 4,374 | | Test | 4,356 | ## Citation ``` latex @article{nguyen2023loralay, title={LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization}, author={Nguyen, Laura and Scialom, Thomas and Piwowarski, Benjamin and Staiano, Jacopo}, journal={arXiv preprint arXiv:2301.11312}, year={2023} } ```
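A minimal usage sketch, assuming the dataset can be loaded directly from the Hub under the id shown on this page and that the loader exposes the fields and splits listed above:

```python
from datasets import load_dataset

dataset = load_dataset("nglaura/arxivlay-summarization")  # assumed Hub id for this card

example = dataset["train"][0]
print(example["article_id"])
print(len(example["article_words"]), "words,", len(example["article_bboxes"]), "bounding boxes")
print(example["abstract"][:200])
```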
nglaura/arxivlay-summarization
[ "task_categories:summarization", "language:en", "license:apache-2.0", "region:us" ]
2023-01-25T10:56:42+00:00
{"language": ["en"], "license": "apache-2.0", "task_categories": ["summarization"], "pretty_name": "arXiv-Lay"}
2023-04-11T09:08:36+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #language-English #license-apache-2.0 #region-us
LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization ============================================================================================ A collaboration between reciTAL, MLIA (ISIR, Sorbonne Université), Meta AI, and Università di Trento Arxiv-Lay dataset for summarization ----------------------------------- ArXiv-Lay is an enhanced version of the arXiv summarization dataset, for which layout information is provided. ### Data Fields * 'article\_id': article id * 'article\_words': sequence of words constituting the body of the article * 'article\_bboxes': sequence of corresponding word bounding boxes * 'norm\_article\_bboxes': sequence of corresponding normalized word bounding boxes * 'abstract': a string containing the abstract of the article * 'article\_pdf\_url': URL of the article's PDF ### Data Splits This dataset has 3 splits: *train*, *validation*, and *test*.
[ "### Data Fields\n\n\n* 'article\\_id': article id\n* 'article\\_words': sequence of words constituting the body of the article\n* 'article\\_bboxes': sequence of corresponding word bounding boxes\n* 'norm\\_article\\_bboxes': sequence of corresponding normalized word bounding boxes\n* 'abstract': a string containing the abstract of the article\n* 'article\\_pdf\\_url': URL of the article's PDF", "### Data Splits\n\n\nThis dataset has 3 splits: *train*, *validation*, and *test*." ]
[ "TAGS\n#task_categories-summarization #language-English #license-apache-2.0 #region-us \n", "### Data Fields\n\n\n* 'article\\_id': article id\n* 'article\\_words': sequence of words constituting the body of the article\n* 'article\\_bboxes': sequence of corresponding word bounding boxes\n* 'norm\\_article\\_bboxes': sequence of corresponding normalized word bounding boxes\n* 'abstract': a string containing the abstract of the article\n* 'article\\_pdf\\_url': URL of the article's PDF", "### Data Splits\n\n\nThis dataset has 3 splits: *train*, *validation*, and *test*." ]
05d4e370515ef41f534d3f87d5c54da0e66e2d1c
# LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization A collaboration between [reciTAL](https://recital.ai/en/), [MLIA](https://mlia.lip6.fr/) (ISIR, Sorbonne Université), [Meta AI](https://ai.facebook.com/), and [Università di Trento](https://www.unitn.it/) ## HAL dataset for summarization HAL is a dataset for summarization of research papers written in French, for which layout information is provided. ### Data Fields - `article_id`: article id - `article_words`: sequence of words constituting the body of the article - `article_bboxes`: sequence of corresponding word bounding boxes - `norm_article_bboxes`: sequence of corresponding normalized word bounding boxes - `abstract`: a string containing the abstract of the article - `article_pdf_url`: URL of the article's PDF ### Data Splits This dataset has 3 splits: _train_, _validation_, and _test_. | Dataset Split | Number of Instances | | ------------- | --------------------| | Train | 43,379 | | Validation | 1,384 | | Test | 1,385 | ## Citation ``` latex @article{nguyen2023loralay, title={LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization}, author={Nguyen, Laura and Scialom, Thomas and Piwowarski, Benjamin and Staiano, Jacopo}, journal={arXiv preprint arXiv:2301.11312}, year={2023} } ```
nglaura/hal-summarization
[ "task_categories:summarization", "language:fr", "license:apache-2.0", "region:us" ]
2023-01-25T11:55:33+00:00
{"language": ["fr"], "license": "apache-2.0", "task_categories": ["summarization"], "pretty_name": "HAL"}
2023-04-11T09:15:37+00:00
[]
[ "fr" ]
TAGS #task_categories-summarization #language-French #license-apache-2.0 #region-us
LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization ============================================================================================ A collaboration between reciTAL, MLIA (ISIR, Sorbonne Université), Meta AI, and Università di Trento HAL dataset for summarization ----------------------------- HAL is a dataset for summarization of research papers written in French, for which layout information is provided. ### Data Fields * 'article\_id': article id * 'article\_words': sequence of words constituting the body of the article * 'article\_bboxes': sequence of corresponding word bounding boxes * 'norm\_article\_bboxes': sequence of corresponding normalized word bounding boxes * 'abstract': a string containing the abstract of the article * 'article\_pdf\_url': URL of the article's PDF ### Data Splits This dataset has 3 splits: *train*, *validation*, and *test*.
[ "### Data Fields\n\n\n* 'article\\_id': article id\n* 'article\\_words': sequence of words constituting the body of the article\n* 'article\\_bboxes': sequence of corresponding word bounding boxes\n* 'norm\\_article\\_bboxes': sequence of corresponding normalized word bounding boxes\n* 'abstract': a string containing the abstract of the article\n* 'article\\_pdf\\_url': URL of the article's PDF", "### Data Splits\n\n\nThis dataset has 3 splits: *train*, *validation*, and *test*." ]
[ "TAGS\n#task_categories-summarization #language-French #license-apache-2.0 #region-us \n", "### Data Fields\n\n\n* 'article\\_id': article id\n* 'article\\_words': sequence of words constituting the body of the article\n* 'article\\_bboxes': sequence of corresponding word bounding boxes\n* 'norm\\_article\\_bboxes': sequence of corresponding normalized word bounding boxes\n* 'abstract': a string containing the abstract of the article\n* 'article\\_pdf\\_url': URL of the article's PDF", "### Data Splits\n\n\nThis dataset has 3 splits: *train*, *validation*, and *test*." ]
e99e680bf47316b8d3f9445c1d449434ab02ef8a
# LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization A collaboration between [reciTAL](https://recital.ai/en/), [MLIA](https://mlia.lip6.fr/) (ISIR, Sorbonne Université), [Meta AI](https://ai.facebook.com/), and [Università di Trento](https://www.unitn.it/) ## SciELO dataset for summarization SciELO is a dataset for summarization of research papers written in Spanish and Portuguese, for which layout information is provided. ### Data Fields - `article_id`: article id - `article_words`: sequence of words constituting the body of the article - `article_bboxes`: sequence of corresponding word bounding boxes - `norm_article_bboxes`: sequence of corresponding normalized word bounding boxes - `abstract`: a string containing the abstract of the article - `article_pdf_url`: URL of the article's PDF ### Data Splits This dataset has 3 splits: _train_, _validation_, and _test_. | Dataset Split | Number of Instances (ES/PT) | | ------------- | ----------------------------| | Train | 20,853 / 19,407 | | Validation | 1,158 / 1,078 | | Test | 1,159 / 1,078 | ## Citation ``` latex @article{nguyen2023loralay, title={LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization}, author={Nguyen, Laura and Scialom, Thomas and Piwowarski, Benjamin and Staiano, Jacopo}, journal={arXiv preprint arXiv:2301.11312}, year={2023} } ```
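Before training a summarizer on these long documents, it can help to sanity-check input and output lengths. The sketch below is a hedged example: it assumes a single default configuration (adjust if the Spanish and Portuguese subsets are exposed as separate configs), the field names listed above, and naive whitespace splitting for the abstract; the same pattern applies to ArXiv-Lay and HAL.

```python
# Hedged sketch: rough length statistics for a few validation examples.
# Assumes the field names above and a single default configuration; abstract
# length uses naive whitespace splitting.
from datasets import load_dataset

ds = load_dataset("nglaura/scielo-summarization", split="validation")

for example in ds.select(range(5)):
    n_article_words = len(example["article_words"])
    n_abstract_words = len(example["abstract"].split())
    ratio = n_article_words / max(n_abstract_words, 1)
    print(f"{example['article_id']}: {n_article_words} article words, "
          f"{n_abstract_words} abstract words, compression ~{ratio:.1f}x")
```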
nglaura/scielo-summarization
[ "task_categories:summarization", "language:fr", "license:apache-2.0", "region:us" ]
2023-01-25T12:02:33+00:00
{"language": ["fr"], "license": "apache-2.0", "task_categories": ["summarization"], "pretty_name": "SciELO"}
2023-04-11T09:21:45+00:00
[]
[ "fr" ]
TAGS #task_categories-summarization #language-French #license-apache-2.0 #region-us
LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization ============================================================================================ A collaboration between reciTAL, MLIA (ISIR, Sorbonne Université), Meta AI, and Università di Trento SciELO dataset for summarization -------------------------------- SciELO is a dataset for summarization of research papers written in Spanish and Portuguese, for which layout information is provided. ### Data Fields * 'article\_id': article id * 'article\_words': sequence of words constituting the body of the article * 'article\_bboxes': sequence of corresponding word bounding boxes * 'norm\_article\_bboxes': sequence of corresponding normalized word bounding boxes * 'abstract': a string containing the abstract of the article * 'article\_pdf\_url': URL of the article's PDF ### Data Splits This dataset has 3 splits: *train*, *validation*, and *test*.
[ "### Data Fields\n\n\n* 'article\\_id': article id\n* 'article\\_words': sequence of words constituting the body of the article\n* 'article\\_bboxes': sequence of corresponding word bounding boxes\n* 'norm\\_article\\_bboxes': sequence of corresponding normalized word bounding boxes\n* 'abstract': a string containing the abstract of the article\n* 'article\\_pdf\\_url': URL of the article's PDF", "### Data Splits\n\n\nThis dataset has 3 splits: *train*, *validation*, and *test*." ]
[ "TAGS\n#task_categories-summarization #language-French #license-apache-2.0 #region-us \n", "### Data Fields\n\n\n* 'article\\_id': article id\n* 'article\\_words': sequence of words constituting the body of the article\n* 'article\\_bboxes': sequence of corresponding word bounding boxes\n* 'norm\\_article\\_bboxes': sequence of corresponding normalized word bounding boxes\n* 'abstract': a string containing the abstract of the article\n* 'article\\_pdf\\_url': URL of the article's PDF", "### Data Splits\n\n\nThis dataset has 3 splits: *train*, *validation*, and *test*." ]
7fea0ce76dc9deb26bc27027b9c143701b6d5030
# Dataset Card for "bookcorpus_stochastic_subset_compact_1024" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
saibo/bookcorpus_stochastic_subset_compact_1024
[ "region:us" ]
2023-01-25T13:32:09+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 27536698, "num_examples": 6160}], "download_size": 16803321, "dataset_size": 27536698}}
2023-01-25T13:32:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bookcorpus_stochastic_subset_compact_1024" More Information needed
[ "# Dataset Card for \"bookcorpus_stochastic_subset_compact_1024\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bookcorpus_stochastic_subset_compact_1024\"\n\nMore Information needed" ]
96e6ad0a454751aad0eeb07705aca1b5574eb8dc
polinaeterna/audio_configs2
[ "region:us" ]
2023-01-25T14:29:15+00:00
{"configs_kwargs": [{"config_name": "v1", "data_dir": "v1", "drop_labels": true}, {"config_name": "v2", "data_dir": "v2", "drop_labels": false}], "duplicated_from": "polinaeterna/audio_configs"}
2023-01-25T14:29:16+00:00
[]
[]
TAGS #region-us
[]
[ "TAGS\n#region-us \n" ]
5ba05938d8d038270614705397b9306ee7716ff0
# Dataset Card for "OxfordPets_test_facebook_opt_2.7b_Attributes_ns_3669" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordPets_test_facebook_opt_2.7b_Attributes_ns_3669
[ "region:us" ]
2023-01-25T14:35:11+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 121046369.375, "num_examples": 3669}, {"name": "fewshot_1_bs_16", "num_bytes": 121909353.375, "num_examples": 3669}, {"name": "fewshot_3_bs_16", "num_bytes": 123709332.375, "num_examples": 3669}, {"name": "fewshot_5_bs_16", "num_bytes": 125501830.375, "num_examples": 3669}, {"name": "fewshot_8_bs_16", "num_bytes": 128203042.375, "num_examples": 3669}], "download_size": 602512072, "dataset_size": 620369927.875}}
2023-01-25T15:22:28+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OxfordPets_test_facebook_opt_2.7b_Attributes_ns_3669" More Information needed
[ "# Dataset Card for \"OxfordPets_test_facebook_opt_2.7b_Attributes_ns_3669\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OxfordPets_test_facebook_opt_2.7b_Attributes_ns_3669\"\n\nMore Information needed" ]
d36b28325622ab4865ca64225fa38df9d219599b
# Dataset Card for "gesture_pred" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Jsevisal/gesture_pred
[ "task_categories:token-classification", "language:en", "region:us" ]
2023-01-25T14:48:47+00:00
{"language": ["en"], "task_categories": ["token-classification"], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "tokens", "sequence": "string"}, {"name": "gestures", "sequence": "string"}, {"name": "label", "sequence": {"class_label": {"names": {"0": "B-BUT", "1": "I-BUT", "2": "B-CALM_DOWN", "3": "I-CALM_DOWN", "4": "B-COME_ON", "5": "I-COME_ON", "6": "B-EMPHATIC", "7": "I-EMPHATIC", "8": "B-ENTHUSIASTIC", "9": "I-ENTHUSIASTIC", "10": "B-EXPLAIN", "11": "I-EXPLAIN", "12": "B-FRONT", "13": "I-FRONT", "14": "B-GREET", "15": "I-GREET", "16": "B-ITERATE", "17": "I-ITERATE", "18": "B-NEUTRAL", "19": "I-NEUTRAL", "20": "B-NO", "21": "I-NO", "22": "B-NO_GESTURE", "23": "I-NO_GESTURE", "24": "B-OTHER_PEER", "25": "I-OTHER_PEER", "26": "B-PLEASE", "27": "I-PLEASE", "28": "B-QUESTION", "29": "I-QUESTION", "30": "B-SELF", "31": "I-SELF", "32": "B-SORRY", "33": "I-SORRY", "34": "B-THANKS", "35": "I-THANKS", "36": "B-THINKING", "37": "I-THINKING", "38": "B-THIRD_PERSON", "39": "I-THIRD_PERSON", "40": "B-YES", "41": "I-YES"}}}}], "splits": [{"name": "train", "num_bytes": 714214, "num_examples": 2339}, {"name": "test", "num_bytes": 40730, "num_examples": 130}, {"name": "validation", "num_bytes": 41891, "num_examples": 130}], "download_size": 140110, "dataset_size": 796835}}
2023-09-14T10:31:43+00:00
[]
[ "en" ]
TAGS #task_categories-token-classification #language-English #region-us
# Dataset Card for "gesture_pred" More Information needed
[ "# Dataset Card for \"gesture_pred\"\n\nMore Information needed" ]
[ "TAGS\n#task_categories-token-classification #language-English #region-us \n", "# Dataset Card for \"gesture_pred\"\n\nMore Information needed" ]
19144fa7c99ff5185443bd425c5ee9fe4ce950d7
# Dataset Card for "yuvalkirstain-sd_15_pexel_people-eval-random-prompts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yuvalkirstain/yuvalkirstain-sd_15_pexel_people-eval-random-prompts
[ "region:us" ]
2023-01-25T14:54:26+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32792, "num_examples": 200}], "download_size": 11301, "dataset_size": 32792}}
2023-01-25T14:54:34+00:00
[]
[]
TAGS #region-us
# Dataset Card for "yuvalkirstain-sd_15_pexel_people-eval-random-prompts" More Information needed
[ "# Dataset Card for \"yuvalkirstain-sd_15_pexel_people-eval-random-prompts\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"yuvalkirstain-sd_15_pexel_people-eval-random-prompts\"\n\nMore Information needed" ]
9f45f40d711d83695172619bf59b82135a520678
# Dataset Card for "wild-rabbit" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
purplebear/wild-rabbit
[ "region:us" ]
2023-01-25T14:59:21+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 77534644.0, "num_examples": 20}], "download_size": 0, "dataset_size": 77534644.0}}
2023-01-25T15:09:13+00:00
[]
[]
TAGS #region-us
# Dataset Card for "wild-rabbit" More Information needed
[ "# Dataset Card for \"wild-rabbit\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"wild-rabbit\"\n\nMore Information needed" ]
38ff443244c1b496c33ed237d3d4468daf24265c
# Dataset Card for DocLayNet large ## About this card (01/27/2023) ### Property and license All information from this page but the content of this paragraph "About this card (01/27/2023)" has been copied/pasted from [Dataset Card for DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet). DocLayNet is a dataset created by Deep Search (IBM Research) published under [license CDLA-Permissive-1.0](https://huggingface.co/datasets/ds4sd/DocLayNet#licensing-information). I do not claim any rights to the data taken from this dataset and published on this page. ### DocLayNet dataset [DocLayNet dataset](https://github.com/DS4SD/DocLayNet) (IBM) provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. To date, the dataset can be downloaded through direct links or as a dataset from Hugging Face datasets: - direct links: [doclaynet_core.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_core.zip) (28 GiB), [doclaynet_extra.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_extra.zip) (7.5 GiB) - Hugging Face dataset library: [dataset DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet) Paper: [DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis](https://arxiv.org/abs/2206.01062) (06/02/2022) ### Processing into a format facilitating its use by HF notebooks These 2 options require downloading all the data (approximately 30 GiB), which takes time (about 45 min in Google Colab) and a large amount of disk space. This could limit experimentation for people with low resources. Moreover, even when downloading via the HF datasets library, it is necessary to download the EXTRA zip separately ([doclaynet_extra.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_extra.zip), 7.5 GiB) to associate the annotated bounding boxes with the text extracted by OCR from the PDFs. This operation also requires additional code because the bounding boxes of the texts do not necessarily correspond to those annotated (computing the percentage of overlapping area between the annotated bounding boxes and those of the texts makes it possible to compare them). Finally, in order to use Hugging Face notebooks for fine-tuning layout models like LayoutLMv3 or LiLT, DocLayNet data must be processed into a proper format. For all these reasons, I decided to process the DocLayNet dataset: - into 3 datasets of different sizes: - [DocLayNet small](https://huggingface.co/datasets/pierreguillou/DocLayNet-small) (about 1% of DocLayNet) < 1.000k document images (691 train, 64 val, 49 test) - [DocLayNet base](https://huggingface.co/datasets/pierreguillou/DocLayNet-base) (about 10% of DocLayNet) < 10.000k document images (6910 train, 648 val, 499 test) - [DocLayNet large](https://huggingface.co/datasets/pierreguillou/DocLayNet-large) (about 100% of DocLayNet) < 100.000k document images (69.103 train, 6.480 val, 4.994 test) - with associated texts and PDFs (base64 format), - and in a format facilitating their use by HF notebooks.
*Note: the layout HF notebooks will greatly help participants of the IBM [ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents](https://ds4sd.github.io/icdar23-doclaynet/)!* ### About PDFs languages Quotation from page 3 of the [DocLayNet paper](https://arxiv.org/abs/2206.01062): "We did not control the document selection with regard to language. **The vast majority of documents contained in DocLayNet (close to 95%) are published in English language.** However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features." ### About PDFs categories distribution Quotation from page 3 of the [DocLayNet paper](https://arxiv.org/abs/2206.01062): "The pages in DocLayNet can be grouped into **six distinct categories**, namely Financial Reports, Manuals, Scientific Articles, Laws & Regulations, Patents and Government Tenders. Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories (Financial Reports and Manuals) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes." ![DocLayNet PDFs categories distribution (source: DocLayNet paper)](https://huggingface.co/datasets/pierreguillou/DocLayNet-large/resolve/main/DocLayNet_PDFs_categories_distribution.png) ### Download & overview The size of DocLayNet large is about 100% of the DocLayNet dataset. **WARNING** The following code downloads DocLayNet large, but it cannot run to completion in Google Colab because of the disk space needed to store cache data and the CPU RAM needed to download the data (for example, the cache data in /home/ubuntu/.cache/huggingface/datasets/ needs almost 120 GB during the download process). And even with a suitable instance, the download time of the DocLayNet large dataset is around 1h50.
This is one more reason to test your fine-tuning code on [DocLayNet small](https://huggingface.co/datasets/pierreguillou/DocLayNet-small) and/or [DocLayNet base](https://huggingface.co/datasets/pierreguillou/DocLayNet-base) 😊 ``` # !pip install -q datasets from datasets import load_dataset dataset_large = load_dataset("pierreguillou/DocLayNet-large") # overview of dataset_large DatasetDict({ train: Dataset({ features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'], num_rows: 69103 }) validation: Dataset({ features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'], num_rows: 6480 }) test: Dataset({ features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'], num_rows: 4994 }) }) ``` ### Annotated bounding boxes The DocLayNet base makes it easy to display a document image with the annotated bounding boxes of paragraphs or lines. Check the notebook [processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb](https://github.com/piegu/language-models/blob/master/processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb) in order to get the code. #### Paragraphs ![Annotated DocLayNet document image with bounding boxes and categories of paragraphs](https://huggingface.co/datasets/pierreguillou/DocLayNet-large/resolve/main/DocLayNet_image_annotated_bounding_boxes_paragraph.png) #### Lines ![Annotated DocLayNet document image with bounding boxes and categories of lines](https://huggingface.co/datasets/pierreguillou/DocLayNet-large/resolve/main/DocLayNet_image_annotated_bounding_boxes_line.png) ### HF notebooks - [notebooks LayoutLM](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLM) (Niels Rogge) - [notebooks LayoutLMv2](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv2) (Niels Rogge) - [notebooks LayoutLMv3](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv3) (Niels Rogge) - [notebooks LiLT](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LiLT) (Niels Rogge) - [Document AI: Fine-tuning LiLT for document-understanding using Hugging Face Transformers](https://github.com/philschmid/document-ai-transformers/blob/main/training/lilt_funsd.ipynb) ([post](https://www.philschmid.de/fine-tuning-lilt#3-fine-tune-and-evaluate-lilt) by Phil Schmid) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Annotations](#annotations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://developer.ibm.com/exchanges/data/all/doclaynet/ -
**Repository:** https://github.com/DS4SD/DocLayNet - **Paper:** https://doi.org/10.1145/3534678.3539043 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank: 1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout 2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals 3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail. 4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models 5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets. ### Supported Tasks and Leaderboards We are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see https://ds4sd.github.io/icdar23-doclaynet/. ## Dataset Structure ### Data Fields DocLayNet provides four types of data assets: 1. PNG images of all pages, resized to square `1025 x 1025px` 2. Bounding-box annotations in COCO format for each PNG image 3. Extra: Single-page PDF files matching each PNG image 4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content The COCO image records are defined like this example ```js ... { "id": 1, "width": 1025, "height": 1025, "file_name": "132a855ee8b23533d8ae69af0049c038171a06ddfcac892c3c6d7e6b4091c642.png", // Custom fields: "doc_category": "financial_reports" // high-level document category "collection": "ann_reports_00_04_fancy", // sub-collection name "doc_name": "NASDAQ_FFIN_2002.pdf", // original document filename "page_no": 9, // page number in original document "precedence": 0, // Annotation order, non-zero in case of redundant double- or triple-annotation }, ... ``` The `doc_category` field uses one of the following constants: ``` financial_reports, scientific_articles, laws_and_regulations, government_tenders, manuals, patents ``` ### Data Splits The dataset provides three splits - `train` - `val` - `test` ## Dataset Creation ### Annotations #### Annotation process The labeling guidelines used for training the annotation experts are available at [DocLayNet_Labeling_Guide_Public.pdf](https://raw.githubusercontent.com/DS4SD/DocLayNet/main/assets/DocLayNet_Labeling_Guide_Public.pdf). #### Who are the annotators? Annotations are crowdsourced. ## Additional Information ### Dataset Curators The dataset is curated by the [Deep Search team](https://ds4sd.github.io/) at IBM Research. You can contact us at [[email protected]](mailto:[email protected]).
Curators: - Christoph Auer, [@cau-git](https://github.com/cau-git) - Michele Dolfi, [@dolfim-ibm](https://github.com/dolfim-ibm) - Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial) - Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM) ### Licensing Information License: [CDLA-Permissive-1.0](https://cdla.io/permissive-1-0/) ### Citation Information ```bib @article{doclaynet2022, title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation}, doi = {10.1145/3534678.3539043}, url = {https://doi.org/10.1145/3534678.3539043}, author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J}, year = {2022}, isbn = {9781450393850}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining}, pages = {3743–3751}, numpages = {9}, location = {Washington DC, USA}, series = {KDD '22} } ``` ### Contributions Thanks to [@dolfim-ibm](https://github.com/dolfim-ibm), [@cau-git](https://github.com/cau-git) for adding this dataset.
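For a quick visual check of the annotations without going through the notebook linked above, a small PIL sketch can draw the block-level boxes on one page. This is a hedged example, not the notebook's code: the field names ('image', 'bboxes_block', 'categories') come from the dataset overview above, DocLayNet small is used to keep the download manageable, and the [x0, y0, x1, y1] box layout plus the alignment of 'categories' with 'bboxes_block' are assumptions; see the processing notebook for the exact format.

```python
# Hedged sketch: draw block-level boxes on one page, using DocLayNet small
# (as the card recommends for quick tests). Field names come from the dataset
# overview above; the [x0, y0, x1, y1] box layout and the alignment of
# 'categories' with 'bboxes_block' are assumptions.
from datasets import load_dataset
from PIL import ImageDraw

ds = load_dataset("pierreguillou/DocLayNet-small", split="test")
example = ds[0]

image = example["image"].convert("RGB")  # decoded as a PIL image by `datasets`
draw = ImageDraw.Draw(image)
for bbox, category in zip(example["bboxes_block"], example["categories"]):
    draw.rectangle(bbox, outline="red", width=2)
    draw.text((bbox[0], bbox[1]), str(category), fill="red")

image.save("doclaynet_page_with_blocks.png")
```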
pierreguillou/DocLayNet-large
[ "task_categories:object-detection", "task_categories:image-segmentation", "task_categories:token-classification", "task_ids:instance-segmentation", "annotations_creators:crowdsourced", "size_categories:10K<n<100K", "language:en", "language:de", "language:fr", "language:ja", "license:other", "DocLayNet", "COCO", "PDF", "IBM", "Financial-Reports", "Finance", "Manuals", "Scientific-Articles", "Science", "Laws", "Law", "Regulations", "Patents", "Government-Tenders", "object-detection", "image-segmentation", "token-classification", "arxiv:2206.01062", "region:us" ]
2023-01-25T15:14:52+00:00
{"annotations_creators": ["crowdsourced"], "language": ["en", "de", "fr", "ja"], "license": "other", "size_categories": ["10K<n<100K"], "task_categories": ["object-detection", "image-segmentation", "token-classification"], "task_ids": ["instance-segmentation"], "pretty_name": "DocLayNet large", "tags": ["DocLayNet", "COCO", "PDF", "IBM", "Financial-Reports", "Finance", "Manuals", "Scientific-Articles", "Science", "Laws", "Law", "Regulations", "Patents", "Government-Tenders", "object-detection", "image-segmentation", "token-classification"]}
2023-05-17T07:56:48+00:00
[ "2206.01062" ]
[ "en", "de", "fr", "ja" ]
TAGS #task_categories-object-detection #task_categories-image-segmentation #task_categories-token-classification #task_ids-instance-segmentation #annotations_creators-crowdsourced #size_categories-10K<n<100K #language-English #language-German #language-French #language-Japanese #license-other #DocLayNet #COCO #PDF #IBM #Financial-Reports #Finance #Manuals #Scientific-Articles #Science #Laws #Law #Regulations #Patents #Government-Tenders #object-detection #image-segmentation #token-classification #arxiv-2206.01062 #region-us
# Dataset Card for DocLayNet large ## About this card (01/27/2023) ### Property and license All information from this page but the content of this paragraph "About this card (01/27/2023)" has been copied/pasted from Dataset Card for DocLayNet. DocLayNet is a dataset created by Deep Search (IBM Research) published under license CDLA-Permissive-1.0. I do not claim any rights to the data taken from this dataset and published on this page. ### DocLayNet dataset DocLayNet dataset (IBM) provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. Until today, the dataset can be downloaded through direct links or as a dataset from Hugging Face datasets: - direct links: doclaynet_core.zip (28 GiB), doclaynet_extra.zip (7.5 GiB) - Hugging Face dataset library: dataset DocLayNet Paper: DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis (06/02/2022) ### Processing into a format facilitating its use by HF notebooks These 2 options require the downloading of all the data (approximately 30GBi), which requires downloading time (about 45 mn in Google Colab) and a large space on the hard disk. These could limit experimentation for people with low resources. Moreover, even when using the download via HF datasets library, it is necessary to download the EXTRA zip separately (doclaynet_extra.zip, 7.5 GiB) to associate the annotated bounding boxes with the text extracted by OCR from the PDFs. This operation also requires additional code because the boundings boxes of the texts do not necessarily correspond to those annotated (a calculation of the percentage of area in common between the boundings boxes annotated and those of the texts makes it possible to make a comparison between them). At last, in order to use Hugging Face notebooks on fine-tuning layout models like LayoutLMv3 or LiLT, DocLayNet data must be processed in a proper format. For all these reasons, I decided to process the DocLayNet dataset: - into 3 datasets of different sizes: - DocLayNet small (about 1% of DocLayNet) < 1.000k document images (691 train, 64 val, 49 test) - DocLayNet base (about 10% of DocLayNet) < 10.000k document images (6910 train, 648 val, 499 test) - DocLayNet large (about 100% of DocLayNet) < 100.000k document images (69.103 train, 6.480 val, 4.994 test) - with associated texts and PDFs (base64 format), - and in a format facilitating their use by HF notebooks. *Note: the layout HF notebooks will greatly help participants of the IBM ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents!* ### About PDFs languages Citation of the page 3 of the DocLayNet paper: "We did not control the document selection with regard to language. The vast majority of documents contained in DocLayNet (close to 95%) are published in English language. However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features." ### About PDFs categories distribution Citation of the page 3 of the DocLayNet paper: "The pages in DocLayNet can be grouped into six distinct categories, namely Financial Reports, Manuals, Scientific Articles, Laws & Regulations, Patents and Government Tenders. 
Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories (Financial Reports and Manuals) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes." !DocLayNet PDFs categories distribution (source: DocLayNet paper) ### Download & overview The size of the DocLayNet large is about 100% of the DocLayNet dataset. WARNING The following code allows to download DocLayNet large but it can not run until the end in Google Colab because of the size needed to store cache data and the CPU RAM to download the data (for example, the cache data in /home/ubuntu/.cache/huggingface/datasets/ needs almost 120 GB during the downloading process). And even with a suitable instance, the download time of the DocLayNet large dataset is around 1h50. This is one more reason to test your fine-tuning code on DocLayNet small and/or DocLayNet base ### Annotated bounding boxes The DocLayNet base makes easy to display document image with the annotaed bounding boxes of paragraphes or lines. Check the notebook processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb in order to get the code. #### Paragraphes !Annotated DocLayNet document image with bounding boxes and categories of paragraphes #### Lines !Annotated DocLayNet document image with bounding boxes and categories of lines ### HF notebooks - notebooks LayoutLM (Niels Rogge) - notebooks LayoutLMv2 (Niels Rogge) - notebooks LayoutLMv3 (Niels Rogge) - notebooks LiLT (Niels Rogge) - Document AI: Fine-tuning LiLT for document-understanding using Hugging Face Transformers (post of Phil Schmid) ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Dataset Structure - Data Fields - Data Splits - Dataset Creation - Annotations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank: 1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout 2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals 3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail. 4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models 5. 
*Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets. ### Supported Tasks and Leaderboards We are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see URL ## Dataset Structure ### Data Fields DocLayNet provides four types of data assets: 1. PNG images of all pages, resized to square '1025 x 1025px' 2. Bounding-box annotations in COCO format for each PNG image 3. Extra: Single-page PDF files matching each PNG image 4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content The COCO image record are defined like this example The 'doc_category' field uses one of the following constants: ### Data Splits The dataset provides three splits - 'train' - 'val' - 'test' ## Dataset Creation ### Annotations #### Annotation process The labeling guideline used for training of the annotation experts are available at DocLayNet_Labeling_Guide_Public.pdf. #### Who are the annotators? Annotations are crowdsourced. ## Additional Information ### Dataset Curators The dataset is curated by the Deep Search team at IBM Research. You can contact us at deepsearch-core@URL. Curators: - Christoph Auer, @cau-git - Michele Dolfi, @dolfim-ibm - Ahmed Nassar, @nassarofficial - Peter Staar, @PeterStaar-IBM ### Licensing Information License: CDLA-Permissive-1.0 ### Contributions Thanks to @dolfim-ibm, @cau-git for adding this dataset.
[ "# Dataset Card for DocLayNet large", "## About this card (01/27/2023)", "### Property and license\n\nAll information from this page but the content of this paragraph \"About this card (01/27/2023)\" has been copied/pasted from Dataset Card for DocLayNet.\n\nDocLayNet is a dataset created by Deep Search (IBM Research) published under license CDLA-Permissive-1.0. \n\nI do not claim any rights to the data taken from this dataset and published on this page.", "### DocLayNet dataset\n\nDocLayNet dataset (IBM) provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. \n\nUntil today, the dataset can be downloaded through direct links or as a dataset from Hugging Face datasets:\n- direct links: doclaynet_core.zip (28 GiB), doclaynet_extra.zip (7.5 GiB)\n- Hugging Face dataset library: dataset DocLayNet\n\nPaper: DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis (06/02/2022)", "### Processing into a format facilitating its use by HF notebooks\n\nThese 2 options require the downloading of all the data (approximately 30GBi), which requires downloading time (about 45 mn in Google Colab) and a large space on the hard disk. These could limit experimentation for people with low resources.\n\nMoreover, even when using the download via HF datasets library, it is necessary to download the EXTRA zip separately (doclaynet_extra.zip, 7.5 GiB) to associate the annotated bounding boxes with the text extracted by OCR from the PDFs. This operation also requires additional code because the boundings boxes of the texts do not necessarily correspond to those annotated (a calculation of the percentage of area in common between the boundings boxes annotated and those of the texts makes it possible to make a comparison between them).\n\nAt last, in order to use Hugging Face notebooks on fine-tuning layout models like LayoutLMv3 or LiLT, DocLayNet data must be processed in a proper format.\n\nFor all these reasons, I decided to process the DocLayNet dataset:\n- into 3 datasets of different sizes:\n - DocLayNet small (about 1% of DocLayNet) < 1.000k document images (691 train, 64 val, 49 test)\n - DocLayNet base (about 10% of DocLayNet) < 10.000k document images (6910 train, 648 val, 499 test)\n - DocLayNet large (about 100% of DocLayNet) < 100.000k document images (69.103 train, 6.480 val, 4.994 test)\n- with associated texts and PDFs (base64 format),\n- and in a format facilitating their use by HF notebooks.\n\n*Note: the layout HF notebooks will greatly help participants of the IBM ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents!*", "### About PDFs languages\n\nCitation of the page 3 of the DocLayNet paper: \n\"We did not control the document selection with regard to language. The vast majority of documents contained in DocLayNet (close to 95%) are published in English language. However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). 
While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features.\"", "### About PDFs categories distribution\n\nCitation of the page 3 of the DocLayNet paper: \n\"The pages in DocLayNet can be grouped into six distinct categories, namely Financial Reports, Manuals, Scientific Articles, Laws & Regulations, Patents and Government Tenders. Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories (Financial Reports and Manuals) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes.\"\n\n!DocLayNet PDFs categories distribution (source: DocLayNet paper)", "### Download & overview\n\nThe size of the DocLayNet large is about 100% of the DocLayNet dataset.\n\nWARNING The following code allows to download DocLayNet large but it can not run until the end in Google Colab because of the size needed to store cache data and the CPU RAM to download the data (for example, the cache data in /home/ubuntu/.cache/huggingface/datasets/ needs almost 120 GB during the downloading process). And even with a suitable instance, the download time of the DocLayNet large dataset is around 1h50. This is one more reason to test your fine-tuning code on DocLayNet small and/or DocLayNet base", "### Annotated bounding boxes\n\nThe DocLayNet base makes easy to display document image with the annotaed bounding boxes of paragraphes or lines.\n\nCheck the notebook processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb in order to get the code.", "#### Paragraphes\n\n!Annotated DocLayNet document image with bounding boxes and categories of paragraphes", "#### Lines\n\n!Annotated DocLayNet document image with bounding boxes and categories of lines", "### HF notebooks\n\n- notebooks LayoutLM (Niels Rogge)\n- notebooks LayoutLMv2 (Niels Rogge)\n- notebooks LayoutLMv3 (Niels Rogge)\n- notebooks LiLT (Niels Rogge)\n- Document AI: Fine-tuning LiLT for document-understanding using Hugging Face Transformers (post of Phil Schmid)", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Annotations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nDocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:\n\n1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout\n2. 
*Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals\n3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.\n4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models\n5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets.", "### Supported Tasks and Leaderboards\n\nWe are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see URL", "## Dataset Structure", "### Data Fields\n\nDocLayNet provides four types of data assets:\n\n1. PNG images of all pages, resized to square '1025 x 1025px'\n2. Bounding-box annotations in COCO format for each PNG image\n3. Extra: Single-page PDF files matching each PNG image\n4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content\n\nThe COCO image record are defined like this example\n\n\n\nThe 'doc_category' field uses one of the following constants:", "### Data Splits\n\nThe dataset provides three splits\n- 'train'\n- 'val'\n- 'test'", "## Dataset Creation", "### Annotations", "#### Annotation process\n\nThe labeling guideline used for training of the annotation experts are available at DocLayNet_Labeling_Guide_Public.pdf.", "#### Who are the annotators?\n\nAnnotations are crowdsourced.", "## Additional Information", "### Dataset Curators\n\nThe dataset is curated by the Deep Search team at IBM Research.\nYou can contact us at deepsearch-core@URL.\n\nCurators:\n- Christoph Auer, @cau-git\n- Michele Dolfi, @dolfim-ibm\n- Ahmed Nassar, @nassarofficial\n- Peter Staar, @PeterStaar-IBM", "### Licensing Information\n\nLicense: CDLA-Permissive-1.0", "### Contributions\n\nThanks to @dolfim-ibm, @cau-git for adding this dataset." ]
[ "TAGS\n#task_categories-object-detection #task_categories-image-segmentation #task_categories-token-classification #task_ids-instance-segmentation #annotations_creators-crowdsourced #size_categories-10K<n<100K #language-English #language-German #language-French #language-Japanese #license-other #DocLayNet #COCO #PDF #IBM #Financial-Reports #Finance #Manuals #Scientific-Articles #Science #Laws #Law #Regulations #Patents #Government-Tenders #object-detection #image-segmentation #token-classification #arxiv-2206.01062 #region-us \n", "# Dataset Card for DocLayNet large", "## About this card (01/27/2023)", "### Property and license\n\nAll information from this page but the content of this paragraph \"About this card (01/27/2023)\" has been copied/pasted from Dataset Card for DocLayNet.\n\nDocLayNet is a dataset created by Deep Search (IBM Research) published under license CDLA-Permissive-1.0. \n\nI do not claim any rights to the data taken from this dataset and published on this page.", "### DocLayNet dataset\n\nDocLayNet dataset (IBM) provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. \n\nUntil today, the dataset can be downloaded through direct links or as a dataset from Hugging Face datasets:\n- direct links: doclaynet_core.zip (28 GiB), doclaynet_extra.zip (7.5 GiB)\n- Hugging Face dataset library: dataset DocLayNet\n\nPaper: DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis (06/02/2022)", "### Processing into a format facilitating its use by HF notebooks\n\nThese 2 options require the downloading of all the data (approximately 30GBi), which requires downloading time (about 45 mn in Google Colab) and a large space on the hard disk. These could limit experimentation for people with low resources.\n\nMoreover, even when using the download via HF datasets library, it is necessary to download the EXTRA zip separately (doclaynet_extra.zip, 7.5 GiB) to associate the annotated bounding boxes with the text extracted by OCR from the PDFs. This operation also requires additional code because the boundings boxes of the texts do not necessarily correspond to those annotated (a calculation of the percentage of area in common between the boundings boxes annotated and those of the texts makes it possible to make a comparison between them).\n\nAt last, in order to use Hugging Face notebooks on fine-tuning layout models like LayoutLMv3 or LiLT, DocLayNet data must be processed in a proper format.\n\nFor all these reasons, I decided to process the DocLayNet dataset:\n- into 3 datasets of different sizes:\n - DocLayNet small (about 1% of DocLayNet) < 1.000k document images (691 train, 64 val, 49 test)\n - DocLayNet base (about 10% of DocLayNet) < 10.000k document images (6910 train, 648 val, 499 test)\n - DocLayNet large (about 100% of DocLayNet) < 100.000k document images (69.103 train, 6.480 val, 4.994 test)\n- with associated texts and PDFs (base64 format),\n- and in a format facilitating their use by HF notebooks.\n\n*Note: the layout HF notebooks will greatly help participants of the IBM ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents!*", "### About PDFs languages\n\nCitation of the page 3 of the DocLayNet paper: \n\"We did not control the document selection with regard to language. The vast majority of documents contained in DocLayNet (close to 95%) are published in English language. 
However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features.\"", "### About PDFs categories distribution\n\nCitation of the page 3 of the DocLayNet paper: \n\"The pages in DocLayNet can be grouped into six distinct categories, namely Financial Reports, Manuals, Scientific Articles, Laws & Regulations, Patents and Government Tenders. Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories (Financial Reports and Manuals) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes.\"\n\n!DocLayNet PDFs categories distribution (source: DocLayNet paper)", "### Download & overview\n\nThe size of the DocLayNet large is about 100% of the DocLayNet dataset.\n\nWARNING The following code allows to download DocLayNet large but it can not run until the end in Google Colab because of the size needed to store cache data and the CPU RAM to download the data (for example, the cache data in /home/ubuntu/.cache/huggingface/datasets/ needs almost 120 GB during the downloading process). And even with a suitable instance, the download time of the DocLayNet large dataset is around 1h50. This is one more reason to test your fine-tuning code on DocLayNet small and/or DocLayNet base", "### Annotated bounding boxes\n\nThe DocLayNet base makes easy to display document image with the annotaed bounding boxes of paragraphes or lines.\n\nCheck the notebook processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb in order to get the code.", "#### Paragraphes\n\n!Annotated DocLayNet document image with bounding boxes and categories of paragraphes", "#### Lines\n\n!Annotated DocLayNet document image with bounding boxes and categories of lines", "### HF notebooks\n\n- notebooks LayoutLM (Niels Rogge)\n- notebooks LayoutLMv2 (Niels Rogge)\n- notebooks LayoutLMv3 (Niels Rogge)\n- notebooks LiLT (Niels Rogge)\n- Document AI: Fine-tuning LiLT for document-understanding using Hugging Face Transformers (post of Phil Schmid)", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Annotations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nDocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:\n\n1. 
*Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout\n2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals\n3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.\n4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models\n5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets.", "### Supported Tasks and Leaderboards\n\nWe are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see URL", "## Dataset Structure", "### Data Fields\n\nDocLayNet provides four types of data assets:\n\n1. PNG images of all pages, resized to square '1025 x 1025px'\n2. Bounding-box annotations in COCO format for each PNG image\n3. Extra: Single-page PDF files matching each PNG image\n4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content\n\nThe COCO image record are defined like this example\n\n\n\nThe 'doc_category' field uses one of the following constants:", "### Data Splits\n\nThe dataset provides three splits\n- 'train'\n- 'val'\n- 'test'", "## Dataset Creation", "### Annotations", "#### Annotation process\n\nThe labeling guideline used for training of the annotation experts are available at DocLayNet_Labeling_Guide_Public.pdf.", "#### Who are the annotators?\n\nAnnotations are crowdsourced.", "## Additional Information", "### Dataset Curators\n\nThe dataset is curated by the Deep Search team at IBM Research.\nYou can contact us at deepsearch-core@URL.\n\nCurators:\n- Christoph Auer, @cau-git\n- Michele Dolfi, @dolfim-ibm\n- Ahmed Nassar, @nassarofficial\n- Peter Staar, @PeterStaar-IBM", "### Licensing Information\n\nLicense: CDLA-Permissive-1.0", "### Contributions\n\nThanks to @dolfim-ibm, @cau-git for adding this dataset." ]
cdf40085b54db16615c767e1c840054a249d7dfb
# Dataset Card for aiornot Dataset for the [aiornot competition](https://hf.co/spaces/competitions/aiornot). By accessing this dataset, you accept the rules of the AI or Not competition. Please note that the dataset may contain images which are not considered safe for work. ## Usage ### With Hugging Face Datasets 🤗 You can download and use this dataset using the `datasets` library. 📝 **Note:** You must be logged in to your Hugging Face account for the snippet below to work. You can do this with `huggingface-cli login` or `huggingface_hub.notebook_login` if you have the `huggingface_hub` python library installed (`pip install huggingface_hub`). ```python from datasets import load_dataset ds = load_dataset('competitions/aiornot') ``` ### From Original Files The original files and sample submission can be found in the `.extras` folder (under the files and versions tab of this repo). Feel free to download them and use them directly if you don't wish to use the `datasets` library.
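To complement the snippet in the card, a short sketch of accessing one example follows; the feature names (`id`, `image`, `label`) are taken from this record's dataset metadata, while the `train` split name is an assumption.

```python
# Follows on from the card's load_dataset snippet. Feature names (id, image,
# label) come from this record's metadata; the "train" split name is an assumption.
from datasets import load_dataset

ds = load_dataset("competitions/aiornot")

example = ds["train"][0]
print(example["id"], example["label"])   # string id and integer label
print(example["image"].size)             # decoded as a PIL image
```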
competitions/aiornot
[ "task_categories:image-classification", "image-classification", "autotrain", "competitions", "region:us" ]
2023-01-25T15:22:37+00:00
{"task_categories": ["image-classification"], "tags": ["image-classification", "autotrain", "competitions"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "label", "dtype": "int64"}]}}
2023-03-30T11:32:32+00:00
[]
[]
TAGS #task_categories-image-classification #image-classification #autotrain #competitions #region-us
# Dataset Card for aiornot Dataset for the aiornot competition. By accessing this dataset, you accept the rules of the AI or Not competition. Please note that dataset may contain images which are not considered safe for work. ## Usage ### With Hugging Face Datasets You can download and use this dataset using the 'datasets' library. Note: You must be logged in to you Hugging Face account for the snippet below to work. You can do this with 'huggingface-cli login' or 'huggingface_hub.notebook_login' if you have the 'huggingface_hub' python library installed ('pip install huggingface_hub'). ### From Original Files The original files and sample submission can be found in the '.extras' folder (under the files and versions tab of this repo). Feel free to download them and use them directly if you don't wish to use the 'datasets' library.
[ "# Dataset Card for aiornot\n\nDataset for the aiornot competition. \n\nBy accessing this dataset, you accept the rules of the AI or Not competition.\nPlease note that dataset may contain images which are not considered safe for work.", "## Usage", "### With Hugging Face Datasets \nYou can download and use this dataset using the 'datasets' library.\n\n Note: You must be logged in to you Hugging Face account for the snippet below to work. You can do this with 'huggingface-cli login' or 'huggingface_hub.notebook_login' if you have the 'huggingface_hub' python library installed ('pip install huggingface_hub').", "### From Original Files\n\nThe original files and sample submission can be found in the '.extras' folder (under the files and versions tab of this repo). Feel free to download them and use them directly if you don't wish to use the 'datasets' library." ]
[ "TAGS\n#task_categories-image-classification #image-classification #autotrain #competitions #region-us \n", "# Dataset Card for aiornot\n\nDataset for the aiornot competition. \n\nBy accessing this dataset, you accept the rules of the AI or Not competition.\nPlease note that dataset may contain images which are not considered safe for work.", "## Usage", "### With Hugging Face Datasets \nYou can download and use this dataset using the 'datasets' library.\n\n Note: You must be logged in to you Hugging Face account for the snippet below to work. You can do this with 'huggingface-cli login' or 'huggingface_hub.notebook_login' if you have the 'huggingface_hub' python library installed ('pip install huggingface_hub').", "### From Original Files\n\nThe original files and sample submission can be found in the '.extras' folder (under the files and versions tab of this repo). Feel free to download them and use them directly if you don't wish to use the 'datasets' library." ]
177973aad7fd299f63ccde74824c5a0233998d8b
# LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization A collaboration between [reciTAL](https://recital.ai/en/), [MLIA](https://mlia.lip6.fr/) (ISIR, Sorbonne Université), [Meta AI](https://ai.facebook.com/), and [Università di Trento](https://www.unitn.it/) ## KoreaScience dataset for summarization KoreaScience is a dataset for summarization of research papers written in Korean, for which layout information is provided. ### Data Fields - `article_id`: article id - `article_words`: sequence of words constituting the body of the article - `article_bboxes`: sequence of corresponding word bounding boxes - `norm_article_bboxes`: sequence of corresponding normalized word bounding boxes - `abstract`: a string containing the abstract of the article - `article_pdf_url`: URL of the article's PDF ### Data Splits This dataset has 3 splits: _train_, _validation_, and _test_. | Dataset Split | Number of Instances | | ------------- | --------------------| | Train | 35,248 | | Validation | 1,125 | | Test | 1,125 | ## Citation ``` latex @article{nguyen2023loralay, title={LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization}, author={Nguyen, Laura and Scialom, Thomas and Piwowarski, Benjamin and Staiano, Jacopo}, journal={arXiv preprint arXiv:2301.11312}, year={2023} } ```
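The fields and splits above can be inspected with a short sketch like the following, assuming the repository loads with the standard `datasets` library; the field names are taken from this card and the split name from its Data Splits table.

```python
# Minimal sketch under the assumptions stated above; the exact feature layout
# may differ from what the loader returns.
from datasets import load_dataset

ds = load_dataset("nglaura/koreascience-summarization", split="train")

example = ds[0]
print(example["article_id"])            # article id
print(len(example["article_words"]))    # number of words in the article body
print(example["article_bboxes"][0])     # bounding box of the first word
print(example["abstract"][:200])        # reference summary (the abstract)
```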
nglaura/koreascience-summarization
[ "task_categories:summarization", "language:fr", "license:apache-2.0", "region:us" ]
2023-01-25T15:27:10+00:00
{"language": ["fr"], "license": "apache-2.0", "task_categories": ["summarization"], "pretty_name": "KoreaScience"}
2023-04-11T09:23:00+00:00
[]
[ "fr" ]
TAGS #task_categories-summarization #language-French #license-apache-2.0 #region-us
LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization ============================================================================================ A collaboration between reciTAL, MLIA (ISIR, Sorbonne Université), Meta AI, and Università di Trento KoreaScience dataset for summarization -------------------------------------- KoreaScience is a dataset for summarization of research papers written in Korean, for which layout information is provided. ### Data Fields * 'article\_id': article id * 'article\_words': sequence of words constituting the body of the article * 'article\_bboxes': sequence of corresponding word bounding boxes * 'norm\_article\_bboxes': sequence of corresponding normalized word bounding boxes * 'abstract': a string containing the abstract of the article * 'article\_pdf\_url': URL of the article's PDF ### Data Splits This dataset has 3 splits: *train*, *validation*, and *test*.
[ "### Data Fields\n\n\n* 'article\\_id': article id\n* 'article\\_words': sequence of words constituting the body of the article\n* 'article\\_bboxes': sequence of corresponding word bounding boxes\n* 'norm\\_article\\_bboxes': sequence of corresponding normalized word bounding boxes\n* 'abstract': a string containing the abstract of the article\n* 'article\\_pdf\\_url': URL of the article's PDF", "### Data Splits\n\n\nThis dataset has 3 splits: *train*, *validation*, and *test*." ]
[ "TAGS\n#task_categories-summarization #language-French #license-apache-2.0 #region-us \n", "### Data Fields\n\n\n* 'article\\_id': article id\n* 'article\\_words': sequence of words constituting the body of the article\n* 'article\\_bboxes': sequence of corresponding word bounding boxes\n* 'norm\\_article\\_bboxes': sequence of corresponding normalized word bounding boxes\n* 'abstract': a string containing the abstract of the article\n* 'article\\_pdf\\_url': URL of the article's PDF", "### Data Splits\n\n\nThis dataset has 3 splits: *train*, *validation*, and *test*." ]
d06f14028e58ec8177c13dbacad981b7efca5c21
# Multiclass Semantic Segmentation Duckietown Dataset A dataset of multiclass semantic segmentation image annotations for the first 250 images of the ["Duckietown Object Detection Dataset"](https://docs.duckietown.org/daffy/AIDO/out/object_detection_dataset.html). | Raw Image | Segmentated Image | | --- | --- | | <img width="915" alt="raw_image" src="https://user-images.githubusercontent.com/42655977/211690204-301193c3-a651-4a3a-bd66-6458cf3a8778.png"> | <img width="915" alt="segmentation_mask" src="https://user-images.githubusercontent.com/42655977/211690212-2c9ca63a-f3ae-4d65-a4e0-ea76b20a616f.png"> | # Semantic Classes This dataset defines 8 semantic classes (7 distinct classes + implicit background class): | Class | XML Label | Description | Color (RGB) | | --- | --- | --- | --- | | Ego Lane | `Ego Lane` | The lane the agent is supposed to be driving in (default right-hand traffic assumed) | `[102,255,102]` | | Opposite Lane | `Opposite Lane` | The lane opposite to the one the agent is supposed to be driving in (default right-hand traffic assumed) | `[245,147,49]` | | Road End | `Road End` | Perpendicular red indicator found in Duckietown indicating the end of the road or the beginning of an intersection | `[184,61,245]` | | Intersection | `Intersection` | Road tile with no lane markings that has either 3 (T-intersection) or 4 (X-intersection) adjacent road tiles | `[50,183,250]` | | Middle Lane | `Middle Lane` | Broken yellow lane in the middle of the road separating lanes | `[255,255,0]` | | Side Lane | `Side Lane` | Solid white lane marking the road boundary | `[255,255,255]` | | Background | `Background` | Unclassified | - (implicit class) | ### **Notice**: (1) The color assignment is purely a suggestion as the color information encoded in the annotation file is not used by the `cvat_preprocessor.py` and can therefore be overwritten by any other mapping. The specified color mapping is mentioned here for explanatory and consistency reasons as this mapping is used in `dataloader.py` (see [Usage](#usage) for more information). (2) `[Ego Lane, Opposite Lane, Intersection]` are three semantic classes for essentially the same road tiles - the three classes were added to introduce more information for some use cases. Keep in mind, that some semantic segmentation neural network have a hard time learning the difference between these classes, leading to a poor performance on detecting these classes. In such case, treating these three classes as one *"Road"* class helps improving the segmentation performance. (3) The `Middle Lane` and `Side Lane` classes were added later and thus only the first 125 images were annotated. If you want to use these, use the `segmentation_annotation.xml` annotation file. Otherwise, `segmentation_annotation_old.xml` stores 250 images (including the 125 images from the other annotation file) but without these two classes. (4) `Background` is a special semantic class as it is not stored in the annotation file. This class is assigned to all pixels that don't have any other class (see `dataloader.py` for a reference solution for that). # Usage [](#usage) Due to the rather large size of the original dataset *(~750MB)*, this repository only contains annotations file stored in `CVAT for Images 1.1` format as well as two python files: - `cvat_preprocessor.py`: A collection of helper functions to read the annotations file and extract the annotation masks stored as polygons. 
- `dataloader.py`: A [_PyTorch_](https://pytorch.org)-specific example implementation of a wrapper-dataset to use with PyTorch machine learning models.
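As a purely illustrative sketch of the suggested color mapping from the Semantic Classes table above, the snippet below converts a per-pixel class-index mask into an RGB image. The class-to-index order, the Background color, and the helper itself are assumptions introduced here for illustration; the repository's own `dataloader.py` is the reference implementation.

```python
# Illustrative only: class names and RGB values come from the table above; the
# index order, Background color, and this helper are assumptions, not the
# repository's actual dataloader.py API.
import numpy as np

CLASS_COLORS = {
    "Ego Lane":      (102, 255, 102),
    "Opposite Lane": (245, 147, 49),
    "Road End":      (184, 61, 245),
    "Intersection":  (50, 183, 250),
    "Middle Lane":   (255, 255, 0),
    "Side Lane":     (255, 255, 255),
    "Background":    (0, 0, 0),  # implicit class; this color is an arbitrary choice
}

def colorize(class_mask: np.ndarray) -> np.ndarray:
    """Map an (H, W) array of class indices to an (H, W, 3) RGB image."""
    names = list(CLASS_COLORS)  # assumed index order: table order, Background last
    rgb = np.zeros((*class_mask.shape, 3), dtype=np.uint8)
    for idx, name in enumerate(names):
        rgb[class_mask == idx] = CLASS_COLORS[name]
    return rgb
```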
hamnaanaa/Duckietown-Multiclass-Semantic-Segmentation-Dataset
[ "task_categories:image-segmentation", "size_categories:n<1K", "license:openrail", "Duckietown", "Lane Following", "Autonomous Driving", "region:us" ]
2023-01-25T15:56:22+00:00
{"license": "openrail", "size_categories": ["n<1K"], "task_categories": ["image-segmentation"], "pretty_name": "Duckietown Multiclass Semantic Segmentation Dataset", "tags": ["Duckietown", "Lane Following", "Autonomous Driving"]}
2023-01-25T16:03:13+00:00
[]
[]
TAGS #task_categories-image-segmentation #size_categories-n<1K #license-openrail #Duckietown #Lane Following #Autonomous Driving #region-us
Multiclass Semantic Segmentation Duckietown Dataset =================================================== A dataset of multiclass semantic segmentation image annotations for the first 250 images of the "Duckietown Object Detection Dataset". Semantic Classes ================ This dataset defines 8 semantic classes (7 distinct classes + implicit background class): ### Notice: (1) The color assignment is purely a suggestion as the color information encoded in the annotation file is not used by the 'cvat\_preprocessor.py' and can therefore be overwritten by any other mapping. The specified color mapping is mentioned here for explanatory and consistency reasons as this mapping is used in 'URL' (see Usage for more information). (2) '[Ego Lane, Opposite Lane, Intersection]' are three semantic classes for essentially the same road tiles - the three classes were added to introduce more information for some use cases. Keep in mind, that some semantic segmentation neural network have a hard time learning the difference between these classes, leading to a poor performance on detecting these classes. In such case, treating these three classes as one *"Road"* class helps improving the segmentation performance. (3) The 'Middle Lane' and 'Side Lane' classes were added later and thus only the first 125 images were annotated. If you want to use these, use the 'segmentation\_annotation.xml' annotation file. Otherwise, 'segmentation\_annotation\_old.xml' stores 250 images (including the 125 images from the other annotation file) but without these two classes. (4) 'Background' is a special semantic class as it is not stored in the annotation file. This class is assigned to all pixels that don't have any other class (see 'URL' for a reference solution for that). Usage ===== Due to the rather large size of the original dataset *(~750MB)*, this repository only contains annotations file stored in 'CVAT for Images 1.1' format as well as two python files: * 'cvat\_preprocessor.py': A collection of helper functions to read the annotations file and extract the annotation masks stored as polygons. * 'URL': A *PyTorch*-specific example implementation of a wrapper-dataset to use with PyTorch machine learning models.
[ "### Notice:\n\n\n(1) The color assignment is purely a suggestion as the color information encoded in the annotation file is not used by the 'cvat\\_preprocessor.py' and can therefore be overwritten by any other mapping. The specified color mapping is mentioned here for explanatory and consistency reasons as this mapping is used in 'URL' (see Usage for more information).\n\n\n(2) '[Ego Lane, Opposite Lane, Intersection]' are three semantic classes for essentially the same road tiles - the three classes were added to introduce more information for some use cases. Keep in mind, that some semantic segmentation neural network have a hard time learning the difference between these classes, leading to a poor performance on detecting these classes. In such case, treating these three classes as one *\"Road\"* class helps improving the segmentation performance.\n\n\n(3) The 'Middle Lane' and 'Side Lane' classes were added later and thus only the first 125 images were annotated. If you want to use these, use the 'segmentation\\_annotation.xml' annotation file. Otherwise, 'segmentation\\_annotation\\_old.xml' stores 250 images (including the 125 images from the other annotation file) but without these two classes.\n\n\n(4) 'Background' is a special semantic class as it is not stored in the annotation file. This class is assigned to all pixels that don't have any other class (see 'URL' for a reference solution for that).\n\n\nUsage\n=====\n\n\n\nDue to the rather large size of the original dataset *(~750MB)*, this repository only contains annotations file stored in 'CVAT for Images 1.1' format as well as two python files:\n\n\n* 'cvat\\_preprocessor.py': A collection of helper functions to read the annotations file and extract the annotation masks stored as polygons.\n* 'URL': A *PyTorch*-specific example implementation of a wrapper-dataset to use with PyTorch machine learning models." ]
[ "TAGS\n#task_categories-image-segmentation #size_categories-n<1K #license-openrail #Duckietown #Lane Following #Autonomous Driving #region-us \n", "### Notice:\n\n\n(1) The color assignment is purely a suggestion as the color information encoded in the annotation file is not used by the 'cvat\\_preprocessor.py' and can therefore be overwritten by any other mapping. The specified color mapping is mentioned here for explanatory and consistency reasons as this mapping is used in 'URL' (see Usage for more information).\n\n\n(2) '[Ego Lane, Opposite Lane, Intersection]' are three semantic classes for essentially the same road tiles - the three classes were added to introduce more information for some use cases. Keep in mind, that some semantic segmentation neural network have a hard time learning the difference between these classes, leading to a poor performance on detecting these classes. In such case, treating these three classes as one *\"Road\"* class helps improving the segmentation performance.\n\n\n(3) The 'Middle Lane' and 'Side Lane' classes were added later and thus only the first 125 images were annotated. If you want to use these, use the 'segmentation\\_annotation.xml' annotation file. Otherwise, 'segmentation\\_annotation\\_old.xml' stores 250 images (including the 125 images from the other annotation file) but without these two classes.\n\n\n(4) 'Background' is a special semantic class as it is not stored in the annotation file. This class is assigned to all pixels that don't have any other class (see 'URL' for a reference solution for that).\n\n\nUsage\n=====\n\n\n\nDue to the rather large size of the original dataset *(~750MB)*, this repository only contains annotations file stored in 'CVAT for Images 1.1' format as well as two python files:\n\n\n* 'cvat\\_preprocessor.py': A collection of helper functions to read the annotations file and extract the annotation masks stored as polygons.\n* 'URL': A *PyTorch*-specific example implementation of a wrapper-dataset to use with PyTorch machine learning models." ]
2728275fd4936eff670cd0e558946883e9c1b4c0
# Dataset Card for "OxfordPets_test_facebook_opt_125m_Attributes_Caption_ns_3669" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordPets_test_facebook_opt_125m_Attributes_Caption_ns_3669
[ "region:us" ]
2023-01-25T16:06:40+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 121139246.375, "num_examples": 3669}, {"name": "fewshot_1_bs_16", "num_bytes": 122186317.375, "num_examples": 3669}, {"name": "fewshot_3_bs_16", "num_bytes": 124265700.375, "num_examples": 3669}, {"name": "fewshot_5_bs_16", "num_bytes": 126336927.375, "num_examples": 3669}, {"name": "fewshot_8_bs_16", "num_bytes": 129454684.375, "num_examples": 3669}], "download_size": 603084427, "dataset_size": 623382875.875}}
2023-01-25T19:42:28+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OxfordPets_test_facebook_opt_125m_Attributes_Caption_ns_3669" More Information needed
[ "# Dataset Card for \"OxfordPets_test_facebook_opt_125m_Attributes_Caption_ns_3669\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OxfordPets_test_facebook_opt_125m_Attributes_Caption_ns_3669\"\n\nMore Information needed" ]
2efd6f74ea9c8e62ae0f1cd5199aebac0251142b
# Dataset Card for "OxfordPets_test_facebook_opt_350m_Attributes_Caption_ns_3669" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordPets_test_facebook_opt_350m_Attributes_Caption_ns_3669
[ "region:us" ]
2023-01-25T16:10:06+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 121139633.375, "num_examples": 3669}, {"name": "fewshot_1_bs_16", "num_bytes": 122187541.375, "num_examples": 3669}, {"name": "fewshot_3_bs_16", "num_bytes": 124265948.375, "num_examples": 3669}, {"name": "fewshot_5_bs_16", "num_bytes": 126337212.375, "num_examples": 3669}, {"name": "fewshot_8_bs_16", "num_bytes": 129454918.375, "num_examples": 3669}], "download_size": 603082667, "dataset_size": 623385253.875}}
2023-01-25T19:49:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OxfordPets_test_facebook_opt_350m_Attributes_Caption_ns_3669" More Information needed
[ "# Dataset Card for \"OxfordPets_test_facebook_opt_350m_Attributes_Caption_ns_3669\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OxfordPets_test_facebook_opt_350m_Attributes_Caption_ns_3669\"\n\nMore Information needed" ]
f4dd676a131bdc3f7c689438e1c4bd1f81d7c057
# Dataset Card for "WHU-RS19" ## Dataset Description - **Paper:** [Structural high-resolution satellite image indexing](https://hal.science/hal-00458685/document) - **Paper:** [Satellite image classification via two-layer sparse coding with biased image representation](https://ieeexplore.ieee.org/iel5/8859/4357975/05545358.pdf) ### Licensing Information Public Domain ## Citation Information [Structural high-resolution satellite image indexing](https://hal.science/hal-00458685/document) [Satellite image classification via two-layer sparse coding with biased image representation](https://ieeexplore.ieee.org/iel5/8859/4357975/05545358.pdf) ``` @article{xia2009structural, title={Structural high-resolution satellite image indexing}, author={Xia, Gui-Song and Yang, Wen and Delon, Julie and Gousseau, Yann and Sun, Hong and Ma{\^\i}tre, Henri}, year={2009} } @article{dai2010satellite, title={Satellite image classification via two-layer sparse coding with biased image representation}, author={Dai, Dengxin and Yang, Wen}, journal={IEEE Geoscience and remote sensing letters}, volume={8}, number={1}, pages={173--176}, year={2010}, publisher={IEEE} } ```
jonathan-roberts1/WHU-RS19
[ "license:cc-by-4.0", "region:us" ]
2023-01-25T16:10:10+00:00
{"license": "cc-by-4.0", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "airport", "1": "beach", "2": "bridge", "3": "commercial", "4": "desert", "5": "farmland", "6": "football field", "7": "forest", "8": "industrial", "9": "meadow", "10": "mountain", "11": "park", "12": "parking", "13": "pond", "14": "port", "15": "railway station", "16": "residential", "17": "river", "18": "viaduct"}}}}], "splits": [{"name": "train", "num_bytes": 115362308.8, "num_examples": 1005}], "download_size": 113327264, "dataset_size": 115362308.8}}
2023-03-26T10:22:05+00:00
[]
[]
TAGS #license-cc-by-4.0 #region-us
# Dataset Card for "WHU-RS19" ## Dataset Description - Paper: Structural high-resolution satellite image indexing - Paper: Satellite image classification via two-layer sparse coding with biased image representation ### Licensing Information Public Domain Structural high-resolution satellite image indexing Satellite image classification via two-layer sparse coding with biased image representation
[ "# Dataset Card for \"WHU-RS19\"", "## Dataset Description\n\n- Paper: Structural high-resolution satellite image indexing\n- Paper: Satellite image classification via two-layer sparse coding with biased image representation", "### Licensing Information\n\nPublic Domain\n\n\n\nStructural high-resolution satellite image indexing\n\nSatellite image classification via two-layer sparse coding with biased image representation" ]
[ "TAGS\n#license-cc-by-4.0 #region-us \n", "# Dataset Card for \"WHU-RS19\"", "## Dataset Description\n\n- Paper: Structural high-resolution satellite image indexing\n- Paper: Satellite image classification via two-layer sparse coding with biased image representation", "### Licensing Information\n\nPublic Domain\n\n\n\nStructural high-resolution satellite image indexing\n\nSatellite image classification via two-layer sparse coding with biased image representation" ]
022634c2043cdd0487c5e13a874454535ab26c4b
# Dataset Card for "OxfordPets_test_facebook_opt_1.3b_Attributes_Caption_ns_3669" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordPets_test_facebook_opt_1.3b_Attributes_Caption_ns_3669
[ "region:us" ]
2023-01-25T16:14:14+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 121144412.375, "num_examples": 3669}, {"name": "fewshot_1_bs_16", "num_bytes": 122187081.375, "num_examples": 3669}, {"name": "fewshot_3_bs_16", "num_bytes": 124265835.375, "num_examples": 3669}, {"name": "fewshot_5_bs_16", "num_bytes": 126337236.375, "num_examples": 3669}, {"name": "fewshot_8_bs_16", "num_bytes": 129454816.375, "num_examples": 3669}], "download_size": 603079760, "dataset_size": 623389381.875}}
2023-01-25T20:02:57+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OxfordPets_test_facebook_opt_1.3b_Attributes_Caption_ns_3669" More Information needed
[ "# Dataset Card for \"OxfordPets_test_facebook_opt_1.3b_Attributes_Caption_ns_3669\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OxfordPets_test_facebook_opt_1.3b_Attributes_Caption_ns_3669\"\n\nMore Information needed" ]
024c3202dd522d1fec98d154895ad8cbaedb74fb
# Dataset Card for the High-Level Dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Supported Tasks](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description The High-Level (HL) dataset aligns **object-centric descriptions** from [COCO](https://arxiv.org/pdf/1405.0312.pdf) with **high-level descriptions** crowdsourced along 3 axes: **_scene_, _action_, _rationale_** The HL dataset contains 14997 images from COCO and a total of 134973 crowdsourced captions (3 captions for each axis) aligned with ~749984 object-centric captions from COCO. Each axis is collected by asking the following 3 questions: 1) Where is the picture taken? 2) What is the subject doing? 3) Why is the subject doing it? **The high-level descriptions capture the human interpretations of the images**. These interpretations contain abstract concepts not directly linked to physical objects. Each high-level description is provided with a _confidence score_, crowdsourced by an independent worker measuring the extent to which the high-level description is likely given the corresponding image, question, and caption. The higher the score, the more the high-level caption is close to the commonsense (in a Likert scale from 1-5). - **🗃️ Repository:** [github.com/michelecafagna26/HL-dataset](https://github.com/michelecafagna26/HL-dataset) - **📜 Paper:** [HL Dataset: Visually-grounded Description of Scenes, Actions and Rationales](https://arxiv.org/abs/2302.12189?context=cs.CL) - **🧭 Spaces:** [Dataset explorer](https://huggingface.co/spaces/michelecafagna26/High-Level-Dataset-explorer) - **🖊️ Contact:** [email protected] ### Supported Tasks - image captioning - visual question answering - multimodal text-scoring - zero-shot evaluation ### Languages English ## Dataset Structure The dataset is provided with images from COCO and two metadata jsonl files containing the annotations ### Data Instances An instance looks like this: ```json { "file_name": "COCO_train2014_000000138878.jpg", "captions": { "scene": [ "in a car", "the picture is taken in a car", "in an office." ], "action": [ "posing for a photo", "the person is posing for a photo", "he's sitting in an armchair." ], "rationale": [ "to have a picture of himself", "he wants to share it with his friends", "he's working and took a professional photo." ], "object": [ "A man sitting in a car while wearing a shirt and tie.", "A man in a car wearing a dress shirt and tie.", "a man in glasses is wearing a tie", "Man sitting in the car seat with button up and tie", "A man in glasses and a tie is near a window." 
] }, "confidence": { "scene": [ 5, 5, 4 ], "action": [ 5, 5, 4 ], "rationale": [ 5, 5, 4 ] }, "purity": { "scene": [ -1.1760284900665283, -1.0889461040496826, -1.442818284034729 ], "action": [ -1.0115827322006226, -0.5917857885360718, -1.6931917667388916 ], "rationale": [ -1.0546956062316895, -0.9740906357765198, -1.2204363346099854 ] }, "diversity": { "scene": 25.965358893403383, "action": 32.713305568898775, "rationale": 2.658757840479801 } } ``` ### Data Fields - ```file_name```: original COCO filename - ```captions```: Dict containing all the captions for the image. Each axis can be accessed with the axis name and it contains a list of captions. - ```confidence```: Dict containing the captions confidence scores. Each axis can be accessed with the axis name and it contains a list of captions. Confidence scores are not provided for the _object_ axis (COCO captions).t - ```purity score```: Dict containing the captions purity scores. The purity score measures the semantic similarity of the captions within the same axis (Bleurt-based). - ```diversity score```: Dict containing the captions diversity scores. The diversity score measures the lexical diversity of the captions within the same axis (Self-BLEU-based). ### Data Splits There are 14997 images and 134973 high-level captions split into: - Train-val: 13498 images and 121482 high-level captions - Test: 1499 images and 13491 high-level captions ## Dataset Creation The dataset has been crowdsourced on Amazon Mechanical Turk. From the paper: >We randomly select 14997 images from the COCO 2014 train-val split. In order to answer questions related to _actions_ and _rationales_ we need to > ensure the presence of a subject in the image. Therefore, we leverage the entity annotation provided in COCO to select images containing > at least one person. The whole annotation is conducted on Amazon Mechanical Turk (AMT). We split the workload into batches in order to ease >the monitoring of the quality of the data collected. Each image is annotated by three different annotators, therefore we collect three annotations per axis. ### Curation Rationale From the paper: >In this work, we tackle the issue of **grounding high-level linguistic concepts in the visual modality**, proposing the High-Level (HL) Dataset: a V\&L resource aligning existing object-centric captions with human-collected high-level descriptions of images along three different axes: _scenes_, _actions_ and _rationales_. The high-level captions capture the human interpretation of the scene, providing abstract linguistic concepts complementary to object-centric captions >used in current V\&L datasets, e.g. in COCO. We take a step further, and we collect _confidence scores_ to distinguish commonsense assumptions >from subjective interpretations and we characterize our data under a variety of semantic and lexical aspects. ### Source Data - Images: COCO - object axis annotations: COCO - scene, action, rationale annotations: crowdsourced - confidence scores: crowdsourced - purity score and diversity score: automatically computed #### Annotation process From the paper: >**Pilot:** We run a pilot study with the double goal of collecting feedback and defining the task instructions. >With the results from the pilot we design a beta version of the task and we run a small batch of cases on the crowd-sourcing platform. >We manually inspect the results and we further refine the instructions and the formulation of the task before finally proceeding with the >annotation in bulk. 
The final annotation form is shown in Appendix D. >***Procedure:*** The participants are shown an image and three questions regarding three aspects or axes: _scene_, _actions_ and _rationales_ > i,e. _Where is the picture taken?_, _What is the subject doing?_, _Why is the subject doing it?_. We explicitly ask the participants to use >their personal interpretation of the scene and add examples and suggestions in the instructions to further guide the annotators. Moreover, >differently from other VQA datasets like (Antol et al., 2015) and (Zhu et al., 2016), where each question can refer to different entities >in the image, we systematically ask the same three questions about the same subject for each image. The full instructions are reported >in Figure 1. For details regarding the annotation costs see Appendix A. #### Who are the annotators? Turkers from Amazon Mechanical Turk ### Personal and Sensitive Information There is no personal or sensitive information ## Considerations for Using the Data [More Information Needed] ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations From the paper: >**Quantitying grammatical errors:** We ask two expert annotators to correct grammatical errors in a sample of 9900 captions, 900 of which are shared between the two annotators. > The annotators are shown the image caption pairs and they are asked to edit the caption whenever they identify a grammatical error. >The most common errors reported by the annotators are: >- Misuse of prepositions >- Wrong verb conjugation >- Pronoun omissions >In order to quantify the extent to which the corrected captions differ from the original ones, we compute the Levenshtein distance (Levenshtein, 1966) between them. >We observe that 22.5\% of the sample has been edited and only 5\% with a Levenshtein distance greater than 10. This suggests a reasonable >level of grammatical quality overall, with no substantial grammatical problems. This can also be observed from the Levenshtein distance >distribution reported in Figure 2. Moreover, the human evaluation is quite reliable as we observe a moderate inter-annotator agreement >(alpha = 0.507, (Krippendorff, 2018) computed over the shared sample. ### Dataset Curators Michele Cafagna ### Licensing Information The Images and the object-centric captions follow the [COCO terms of Use](https://cocodataset.org/#termsofuse) The remaining annotations are licensed under Apache-2.0 license. ### Citation Information ```BibTeX @inproceedings{cafagna2023hl, title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales}, author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert}, booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)}, address = {Prague, Czech Republic}, year={2023} } ```
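Assuming the standard `datasets` loader works for this repository, a record with the structure shown under Data Instances could be inspected roughly as follows; the split name comes from this record's metadata, and the exact feature layout returned by the loader should be treated as an assumption.

```python
# Minimal sketch under the assumptions stated above; keys follow the example
# record shown in the "Data Instances" section of the card.
from datasets import load_dataset

ds = load_dataset("michelecafagna26/hl", split="train")

record = ds[0]
print(record["file_name"])                 # original COCO filename
print(record["captions"]["scene"])         # 3 crowdsourced scene captions
print(record["confidence"]["action"])      # Likert confidence scores (1-5)
print(record["diversity"]["rationale"])    # lexical diversity for the axis
```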
michelecafagna26/hl
[ "task_categories:image-to-text", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_ids:text-scoring", "annotations_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "arxiv:1405.0312", "arxiv:2302.12189", "region:us" ]
2023-01-25T16:15:17+00:00
{"annotations_creators": ["crowdsourced"], "language": ["en"], "license": "apache-2.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["image-to-text", "question-answering", "zero-shot-classification"], "task_ids": ["text-scoring"], "pretty_name": "HL (High-Level Dataset)", "annotations_origin": ["crowdsourced"], "dataset_info": {"splits": [{"name": "train", "num_examples": 13498}, {"name": "test", "num_examples": 1499}]}}
2023-08-02T10:50:20+00:00
[ "1405.0312", "2302.12189" ]
[ "en" ]
TAGS #task_categories-image-to-text #task_categories-question-answering #task_categories-zero-shot-classification #task_ids-text-scoring #annotations_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-apache-2.0 #arxiv-1405.0312 #arxiv-2302.12189 #region-us
# Dataset Card for the High-Level Dataset ## Table of Contents - Table of Contents - Dataset Description - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description The High-Level (HL) dataset aligns object-centric descriptions from COCO with high-level descriptions crowdsourced along 3 axes: _scene_, _action_, _rationale_ The HL dataset contains 14997 images from COCO and a total of 134973 crowdsourced captions (3 captions for each axis) aligned with ~749984 object-centric captions from COCO. Each axis is collected by asking the following 3 questions: 1) Where is the picture taken? 2) What is the subject doing? 3) Why is the subject doing it? The high-level descriptions capture the human interpretations of the images. These interpretations contain abstract concepts not directly linked to physical objects. Each high-level description is provided with a _confidence score_, crowdsourced by an independent worker measuring the extent to which the high-level description is likely given the corresponding image, question, and caption. The higher the score, the more the high-level caption is close to the commonsense (in a Likert scale from 1-5). - ️ Repository: URL - Paper: HL Dataset: Visually-grounded Description of Scenes, Actions and Rationales - Spaces: Dataset explorer - ️ Contact: michele.cafagna@URL ### Supported Tasks - image captioning - visual question answering - multimodal text-scoring - zero-shot evaluation ### Languages English ## Dataset Structure The dataset is provided with images from COCO and two metadata jsonl files containing the annotations ### Data Instances An instance looks like this: ### Data Fields - : original COCO filename - : Dict containing all the captions for the image. Each axis can be accessed with the axis name and it contains a list of captions. - : Dict containing the captions confidence scores. Each axis can be accessed with the axis name and it contains a list of captions. Confidence scores are not provided for the _object_ axis (COCO captions).t - : Dict containing the captions purity scores. The purity score measures the semantic similarity of the captions within the same axis (Bleurt-based). - : Dict containing the captions diversity scores. The diversity score measures the lexical diversity of the captions within the same axis (Self-BLEU-based). ### Data Splits There are 14997 images and 134973 high-level captions split into: - Train-val: 13498 images and 121482 high-level captions - Test: 1499 images and 13491 high-level captions ## Dataset Creation The dataset has been crowdsourced on Amazon Mechanical Turk. From the paper: >We randomly select 14997 images from the COCO 2014 train-val split. In order to answer questions related to _actions_ and _rationales_ we need to > ensure the presence of a subject in the image. Therefore, we leverage the entity annotation provided in COCO to select images containing > at least one person. The whole annotation is conducted on Amazon Mechanical Turk (AMT). We split the workload into batches in order to ease >the monitoring of the quality of the data collected. 
Each image is annotated by three different annotators, therefore we collect three annotations per axis. ### Curation Rationale From the paper: >In this work, we tackle the issue of grounding high-level linguistic concepts in the visual modality, proposing the High-Level (HL) Dataset: a V\&L resource aligning existing object-centric captions with human-collected high-level descriptions of images along three different axes: _scenes_, _actions_ and _rationales_. The high-level captions capture the human interpretation of the scene, providing abstract linguistic concepts complementary to object-centric captions >used in current V\&L datasets, e.g. in COCO. We take a step further, and we collect _confidence scores_ to distinguish commonsense assumptions >from subjective interpretations and we characterize our data under a variety of semantic and lexical aspects. ### Source Data - Images: COCO - object axis annotations: COCO - scene, action, rationale annotations: crowdsourced - confidence scores: crowdsourced - purity score and diversity score: automatically computed #### Annotation process From the paper: >Pilot: We run a pilot study with the double goal of collecting feedback and defining the task instructions. >With the results from the pilot we design a beta version of the task and we run a small batch of cases on the crowd-sourcing platform. >We manually inspect the results and we further refine the instructions and the formulation of the task before finally proceeding with the >annotation in bulk. The final annotation form is shown in Appendix D. >*Procedure:* The participants are shown an image and three questions regarding three aspects or axes: _scene_, _actions_ and _rationales_ > i,e. _Where is the picture taken?_, _What is the subject doing?_, _Why is the subject doing it?_. We explicitly ask the participants to use >their personal interpretation of the scene and add examples and suggestions in the instructions to further guide the annotators. Moreover, >differently from other VQA datasets like (Antol et al., 2015) and (Zhu et al., 2016), where each question can refer to different entities >in the image, we systematically ask the same three questions about the same subject for each image. The full instructions are reported >in Figure 1. For details regarding the annotation costs see Appendix A. #### Who are the annotators? Turkers from Amazon Mechanical Turk ### Personal and Sensitive Information There is no personal or sensitive information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations From the paper: >Quantitying grammatical errors: We ask two expert annotators to correct grammatical errors in a sample of 9900 captions, 900 of which are shared between the two annotators. > The annotators are shown the image caption pairs and they are asked to edit the caption whenever they identify a grammatical error. >The most common errors reported by the annotators are: >- Misuse of prepositions >- Wrong verb conjugation >- Pronoun omissions >In order to quantify the extent to which the corrected captions differ from the original ones, we compute the Levenshtein distance (Levenshtein, 1966) between them. >We observe that 22.5\% of the sample has been edited and only 5\% with a Levenshtein distance greater than 10. This suggests a reasonable >level of grammatical quality overall, with no substantial grammatical problems. This can also be observed from the Levenshtein distance >distribution reported in Figure 2. 
Moreover, the human evaluation is quite reliable as we observe a moderate inter-annotator agreement >(alpha = 0.507, (Krippendorff, 2018) computed over the shared sample. ### Dataset Curators Michele Cafagna ### Licensing Information The Images and the object-centric captions follow the COCO terms of Use The remaining annotations are licensed under Apache-2.0 license.
[ "# Dataset Card for the High-Level Dataset", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\nThe High-Level (HL) dataset aligns object-centric descriptions from COCO \nwith high-level descriptions crowdsourced along 3 axes: _scene_, _action_, _rationale_\n\nThe HL dataset contains 14997 images from COCO and a total of 134973 crowdsourced captions (3 captions for each axis) aligned with ~749984 object-centric captions from COCO.\n\nEach axis is collected by asking the following 3 questions:\n\n1) Where is the picture taken?\n2) What is the subject doing?\n3) Why is the subject doing it?\n\nThe high-level descriptions capture the human interpretations of the images. These interpretations contain abstract concepts not directly linked to physical objects.\nEach high-level description is provided with a _confidence score_, crowdsourced by an independent worker measuring the extent to which\nthe high-level description is likely given the corresponding image, question, and caption. The higher the score, the more the high-level caption is close to the commonsense (in a Likert scale from 1-5).\n\n- ️ Repository: URL\n- Paper: HL Dataset: Visually-grounded Description of Scenes, Actions and Rationales\n- Spaces: Dataset explorer\n- ️ Contact: michele.cafagna@URL", "### Supported Tasks\n\n- image captioning\n- visual question answering\n- multimodal text-scoring\n- zero-shot evaluation", "### Languages\n\nEnglish", "## Dataset Structure\n\nThe dataset is provided with images from COCO and two metadata jsonl files containing the annotations", "### Data Instances\n\nAn instance looks like this:", "### Data Fields\n\n- : original COCO filename\n- : Dict containing all the captions for the image. Each axis can be accessed with the axis name and it contains a list of captions.\n- : Dict containing the captions confidence scores. Each axis can be accessed with the axis name and it contains a list of captions. Confidence scores are not provided for the _object_ axis (COCO captions).t\n- : Dict containing the captions purity scores. The purity score measures the semantic similarity of the captions within the same axis (Bleurt-based).\n- : Dict containing the captions diversity scores. The diversity score measures the lexical diversity of the captions within the same axis (Self-BLEU-based).", "### Data Splits\n\nThere are 14997 images and 134973 high-level captions split into:\n- Train-val: 13498 images and 121482 high-level captions\n- Test: 1499 images and 13491 high-level captions", "## Dataset Creation\n\nThe dataset has been crowdsourced on Amazon Mechanical Turk.\nFrom the paper:\n\n>We randomly select 14997 images from the COCO 2014 train-val split. In order to answer questions related to _actions_ and _rationales_ we need to\n> ensure the presence of a subject in the image. Therefore, we leverage the entity annotation provided in COCO to select images containing\n> at least one person. The whole annotation is conducted on Amazon Mechanical Turk (AMT). 
We split the workload into batches in order to ease\n>the monitoring of the quality of the data collected. Each image is annotated by three different annotators, therefore we collect three annotations per axis.", "### Curation Rationale\n\nFrom the paper:\n\n>In this work, we tackle the issue of grounding high-level linguistic concepts in the visual modality, proposing the High-Level (HL) Dataset: a \nV\\&L resource aligning existing object-centric captions with human-collected high-level descriptions of images along three different axes: _scenes_, _actions_ and _rationales_. \nThe high-level captions capture the human interpretation of the scene, providing abstract linguistic concepts complementary to object-centric captions\n>used in current V\\&L datasets, e.g. in COCO. We take a step further, and we collect _confidence scores_ to distinguish commonsense assumptions\n>from subjective interpretations and we characterize our data under a variety of semantic and lexical aspects.", "### Source Data\n\n- Images: COCO\n- object axis annotations: COCO\n- scene, action, rationale annotations: crowdsourced\n- confidence scores: crowdsourced\n- purity score and diversity score: automatically computed", "#### Annotation process\n\nFrom the paper:\n\n>Pilot: We run a pilot study with the double goal of collecting feedback and defining the task instructions.\n>With the results from the pilot we design a beta version of the task and we run a small batch of cases on the crowd-sourcing platform.\n>We manually inspect the results and we further refine the instructions and the formulation of the task before finally proceeding with the\n>annotation in bulk. The final annotation form is shown in Appendix D.\n\n>*Procedure:* The participants are shown an image and three questions regarding three aspects or axes: _scene_, _actions_ and _rationales_\n> i,e. _Where is the picture taken?_, _What is the subject doing?_, _Why is the subject doing it?_. We explicitly ask the participants to use\n>their personal interpretation of the scene and add examples and suggestions in the instructions to further guide the annotators. Moreover,\n>differently from other VQA datasets like (Antol et al., 2015) and (Zhu et al., 2016), where each question can refer to different entities\n>in the image, we systematically ask the same three questions about the same subject for each image. The full instructions are reported\n>in Figure 1. For details regarding the annotation costs see Appendix A.", "#### Who are the annotators?\n\nTurkers from Amazon Mechanical Turk", "### Personal and Sensitive Information\n\nThere is no personal or sensitive information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\nFrom the paper:\n\n>Quantitying grammatical errors: We ask two expert annotators to correct grammatical errors in a sample of 9900 captions, 900 of which are shared between the two annotators.\n> The annotators are shown the image caption pairs and they are asked to edit the caption whenever they identify a grammatical error.\n>The most common errors reported by the annotators are:\n>- Misuse of prepositions\n>- Wrong verb conjugation\n>- Pronoun omissions\n\n>In order to quantify the extent to which the corrected captions differ from the original ones, we compute the Levenshtein distance (Levenshtein, 1966) between them.\n>We observe that 22.5\\% of the sample has been edited and only 5\\% with a Levenshtein distance greater than 10. 
This suggests a reasonable \n>level of grammatical quality overall, with no substantial grammatical problems. This can also be observed from the Levenshtein distance \n>distribution reported in Figure 2. Moreover, the human evaluation is quite reliable as we observe a moderate inter-annotator agreement \n>(alpha = 0.507, (Krippendorff, 2018) computed over the shared sample.", "### Dataset Curators\n\nMichele Cafagna", "### Licensing Information\n\nThe Images and the object-centric captions follow the COCO terms of Use\nThe remaining annotations are licensed under Apache-2.0 license." ]
[ "TAGS\n#task_categories-image-to-text #task_categories-question-answering #task_categories-zero-shot-classification #task_ids-text-scoring #annotations_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-apache-2.0 #arxiv-1405.0312 #arxiv-2302.12189 #region-us \n", "# Dataset Card for the High-Level Dataset", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\nThe High-Level (HL) dataset aligns object-centric descriptions from COCO \nwith high-level descriptions crowdsourced along 3 axes: _scene_, _action_, _rationale_\n\nThe HL dataset contains 14997 images from COCO and a total of 134973 crowdsourced captions (3 captions for each axis) aligned with ~749984 object-centric captions from COCO.\n\nEach axis is collected by asking the following 3 questions:\n\n1) Where is the picture taken?\n2) What is the subject doing?\n3) Why is the subject doing it?\n\nThe high-level descriptions capture the human interpretations of the images. These interpretations contain abstract concepts not directly linked to physical objects.\nEach high-level description is provided with a _confidence score_, crowdsourced by an independent worker measuring the extent to which\nthe high-level description is likely given the corresponding image, question, and caption. The higher the score, the more the high-level caption is close to the commonsense (in a Likert scale from 1-5).\n\n- ️ Repository: URL\n- Paper: HL Dataset: Visually-grounded Description of Scenes, Actions and Rationales\n- Spaces: Dataset explorer\n- ️ Contact: michele.cafagna@URL", "### Supported Tasks\n\n- image captioning\n- visual question answering\n- multimodal text-scoring\n- zero-shot evaluation", "### Languages\n\nEnglish", "## Dataset Structure\n\nThe dataset is provided with images from COCO and two metadata jsonl files containing the annotations", "### Data Instances\n\nAn instance looks like this:", "### Data Fields\n\n- : original COCO filename\n- : Dict containing all the captions for the image. Each axis can be accessed with the axis name and it contains a list of captions.\n- : Dict containing the captions confidence scores. Each axis can be accessed with the axis name and it contains a list of captions. Confidence scores are not provided for the _object_ axis (COCO captions).t\n- : Dict containing the captions purity scores. The purity score measures the semantic similarity of the captions within the same axis (Bleurt-based).\n- : Dict containing the captions diversity scores. The diversity score measures the lexical diversity of the captions within the same axis (Self-BLEU-based).", "### Data Splits\n\nThere are 14997 images and 134973 high-level captions split into:\n- Train-val: 13498 images and 121482 high-level captions\n- Test: 1499 images and 13491 high-level captions", "## Dataset Creation\n\nThe dataset has been crowdsourced on Amazon Mechanical Turk.\nFrom the paper:\n\n>We randomly select 14997 images from the COCO 2014 train-val split. 
In order to answer questions related to _actions_ and _rationales_ we need to\n> ensure the presence of a subject in the image. Therefore, we leverage the entity annotation provided in COCO to select images containing\n> at least one person. The whole annotation is conducted on Amazon Mechanical Turk (AMT). We split the workload into batches in order to ease\n>the monitoring of the quality of the data collected. Each image is annotated by three different annotators, therefore we collect three annotations per axis.", "### Curation Rationale\n\nFrom the paper:\n\n>In this work, we tackle the issue of grounding high-level linguistic concepts in the visual modality, proposing the High-Level (HL) Dataset: a \nV\\&L resource aligning existing object-centric captions with human-collected high-level descriptions of images along three different axes: _scenes_, _actions_ and _rationales_. \nThe high-level captions capture the human interpretation of the scene, providing abstract linguistic concepts complementary to object-centric captions\n>used in current V\\&L datasets, e.g. in COCO. We take a step further, and we collect _confidence scores_ to distinguish commonsense assumptions\n>from subjective interpretations and we characterize our data under a variety of semantic and lexical aspects.", "### Source Data\n\n- Images: COCO\n- object axis annotations: COCO\n- scene, action, rationale annotations: crowdsourced\n- confidence scores: crowdsourced\n- purity score and diversity score: automatically computed", "#### Annotation process\n\nFrom the paper:\n\n>Pilot: We run a pilot study with the double goal of collecting feedback and defining the task instructions.\n>With the results from the pilot we design a beta version of the task and we run a small batch of cases on the crowd-sourcing platform.\n>We manually inspect the results and we further refine the instructions and the formulation of the task before finally proceeding with the\n>annotation in bulk. The final annotation form is shown in Appendix D.\n\n>*Procedure:* The participants are shown an image and three questions regarding three aspects or axes: _scene_, _actions_ and _rationales_\n> i,e. _Where is the picture taken?_, _What is the subject doing?_, _Why is the subject doing it?_. We explicitly ask the participants to use\n>their personal interpretation of the scene and add examples and suggestions in the instructions to further guide the annotators. Moreover,\n>differently from other VQA datasets like (Antol et al., 2015) and (Zhu et al., 2016), where each question can refer to different entities\n>in the image, we systematically ask the same three questions about the same subject for each image. The full instructions are reported\n>in Figure 1. 
For details regarding the annotation costs see Appendix A.", "#### Who are the annotators?\n\nTurkers from Amazon Mechanical Turk", "### Personal and Sensitive Information\n\nThere is no personal or sensitive information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\nFrom the paper:\n\n>Quantitying grammatical errors: We ask two expert annotators to correct grammatical errors in a sample of 9900 captions, 900 of which are shared between the two annotators.\n> The annotators are shown the image caption pairs and they are asked to edit the caption whenever they identify a grammatical error.\n>The most common errors reported by the annotators are:\n>- Misuse of prepositions\n>- Wrong verb conjugation\n>- Pronoun omissions\n\n>In order to quantify the extent to which the corrected captions differ from the original ones, we compute the Levenshtein distance (Levenshtein, 1966) between them.\n>We observe that 22.5\\% of the sample has been edited and only 5\\% with a Levenshtein distance greater than 10. This suggests a reasonable \n>level of grammatical quality overall, with no substantial grammatical problems. This can also be observed from the Levenshtein distance \n>distribution reported in Figure 2. Moreover, the human evaluation is quite reliable as we observe a moderate inter-annotator agreement \n>(alpha = 0.507, (Krippendorff, 2018) computed over the shared sample.", "### Dataset Curators\n\nMichele Cafagna", "### Licensing Information\n\nThe Images and the object-centric captions follow the COCO terms of Use\nThe remaining annotations are licensed under Apache-2.0 license." ]
0ea08bf8ff41c8ea54d6671411ce0005fb46113a
# Dataset Card for "RSSCN7" ## Dataset Description - **Paper** [Deep Learning Based Feature Selection for Remote Sensing Scene Classification](https://ieeexplore.ieee.org/iel7/8859/7305891/07272047.pdf) ### Licensing Information For research and academic purposes. ## Citation Information [Deep Learning Based Feature Selection for Remote Sensing Scene Classification](https://ieeexplore.ieee.org/iel7/8859/7305891/07272047.pdf) ``` @article{7272047, title = {Deep Learning Based Feature Selection for Remote Sensing Scene Classification}, author = {Zou, Qin and Ni, Lihao and Zhang, Tong and Wang, Qian}, year = 2015, journal = {IEEE Geoscience and Remote Sensing Letters}, volume = 12, number = 11, pages = {2321--2325}, doi = {10.1109/LGRS.2015.2475299} } ```
jonathan-roberts1/RSSCN7
[ "task_categories:image-classification", "task_categories:zero-shot-image-classification", "license:other", "region:us" ]
2023-01-25T16:16:29+00:00
{"license": "other", "task_categories": ["image-classification", "zero-shot-image-classification"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "field", "1": "forest", "2": "grass", "3": "industry", "4": "parking", "5": "resident", "6": "river or lake"}}}}], "splits": [{"name": "train", "num_bytes": 345895442.4, "num_examples": 2800}], "download_size": 367257922, "dataset_size": 345895442.4}}
2023-03-31T16:20:53+00:00
[]
[]
TAGS #task_categories-image-classification #task_categories-zero-shot-image-classification #license-other #region-us
# Dataset Card for "RSSCN7" ## Dataset Description - Paper Deep Learning Based Feature Selection for Remote Sensing Scene Classification ### Licensing Information For research and academic purposes. Deep Learning Based Feature Selection for Remote Sensing Scene Classification
[ "# Dataset Card for \"RSSCN7\"", "## Dataset Description\n\n- Paper Deep Learning Based Feature Selection for Remote Sensing Scene Classification", "### Licensing Information\n\nFor research and academic purposes.\n\n\n\nDeep Learning Based Feature Selection for Remote Sensing Scene Classification" ]
[ "TAGS\n#task_categories-image-classification #task_categories-zero-shot-image-classification #license-other #region-us \n", "# Dataset Card for \"RSSCN7\"", "## Dataset Description\n\n- Paper Deep Learning Based Feature Selection for Remote Sensing Scene Classification", "### Licensing Information\n\nFor research and academic purposes.\n\n\n\nDeep Learning Based Feature Selection for Remote Sensing Scene Classification" ]
212f53dc625a4caaefa8f105679d3434381158c1
# Dataset Card for "OxfordPets_test_facebook_opt_2.7b_Attributes_Caption_ns_3669" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordPets_test_facebook_opt_2.7b_Attributes_Caption_ns_3669
[ "region:us" ]
2023-01-25T16:20:51+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 121189501.375, "num_examples": 3669}, {"name": "fewshot_1_bs_16", "num_bytes": 122187449.375, "num_examples": 3669}, {"name": "fewshot_3_bs_16", "num_bytes": 124265920.375, "num_examples": 3669}, {"name": "fewshot_5_bs_16", "num_bytes": 126336943.375, "num_examples": 3669}, {"name": "fewshot_8_bs_16", "num_bytes": 129454684.375, "num_examples": 3669}], "download_size": 603074119, "dataset_size": 623434498.875}}
2023-01-25T20:23:48+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OxfordPets_test_facebook_opt_2.7b_Attributes_Caption_ns_3669" More Information needed
[ "# Dataset Card for \"OxfordPets_test_facebook_opt_2.7b_Attributes_Caption_ns_3669\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OxfordPets_test_facebook_opt_2.7b_Attributes_Caption_ns_3669\"\n\nMore Information needed" ]
ac46d216ebaf87e36a4dae607253e6985e6e5a75
# Dataset Card for "OxfordPets_test_facebook_opt_125m_Visclues_ns_3669" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordPets_test_facebook_opt_125m_Visclues_ns_3669
[ "region:us" ]
2023-01-25T16:23:25+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 121460903.375, "num_examples": 3669}, {"name": "fewshot_1_bs_16", "num_bytes": 122822438.375, "num_examples": 3669}, {"name": "fewshot_3_bs_16", "num_bytes": 125536937.375, "num_examples": 3669}, {"name": "fewshot_5_bs_16", "num_bytes": 128243714.375, "num_examples": 3669}, {"name": "fewshot_8_bs_16", "num_bytes": 132312290.375, "num_examples": 3669}], "download_size": 604694650, "dataset_size": 630376283.875}}
2023-01-25T20:30:55+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OxfordPets_test_facebook_opt_125m_Visclues_ns_3669" More Information needed
[ "# Dataset Card for \"OxfordPets_test_facebook_opt_125m_Visclues_ns_3669\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OxfordPets_test_facebook_opt_125m_Visclues_ns_3669\"\n\nMore Information needed" ]
bcb88fa457c2bea86e317aa0fc22e177f1ce49b1
# Dataset Card for "OxfordPets_test_facebook_opt_350m_Visclues_ns_3669" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordPets_test_facebook_opt_350m_Visclues_ns_3669
[ "region:us" ]
2023-01-25T16:27:08+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 121460915.375, "num_examples": 3669}, {"name": "fewshot_1_bs_16", "num_bytes": 122822636.375, "num_examples": 3669}, {"name": "fewshot_3_bs_16", "num_bytes": 125537076.375, "num_examples": 3669}, {"name": "fewshot_5_bs_16", "num_bytes": 128243735.375, "num_examples": 3669}, {"name": "fewshot_8_bs_16", "num_bytes": 132312128.375, "num_examples": 3669}], "download_size": 604694442, "dataset_size": 630376491.875}}
2023-01-25T20:41:27+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OxfordPets_test_facebook_opt_350m_Visclues_ns_3669" More Information needed
[ "# Dataset Card for \"OxfordPets_test_facebook_opt_350m_Visclues_ns_3669\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OxfordPets_test_facebook_opt_350m_Visclues_ns_3669\"\n\nMore Information needed" ]
80b92e231adc6c0cd9314ab5de5e9a3997c0be16
# Dataset Card for "yuvalkirstain-pickapic-ft-eval-random-prompts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yuvalkirstain/yuvalkirstain-pickapic-ft-eval-random-prompts
[ "region:us" ]
2023-01-25T16:28:58+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 31392, "num_examples": 200}], "download_size": 11259, "dataset_size": 31392}}
2023-01-25T16:29:05+00:00
[]
[]
TAGS #region-us
# Dataset Card for "yuvalkirstain-pickapic-ft-eval-random-prompts" More Information needed
[ "# Dataset Card for \"yuvalkirstain-pickapic-ft-eval-random-prompts\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"yuvalkirstain-pickapic-ft-eval-random-prompts\"\n\nMore Information needed" ]
c6912d3c9b04c0edc0857e7c4c458b0e3fef1b4b
# Dataset Card for "OxfordPets_test_facebook_opt_1.3b_Visclues_ns_3669" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordPets_test_facebook_opt_1.3b_Visclues_ns_3669
[ "region:us" ]
2023-01-25T16:32:27+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 121477284.375, "num_examples": 3669}, {"name": "fewshot_1_bs_16", "num_bytes": 122822944.375, "num_examples": 3669}, {"name": "fewshot_3_bs_16", "num_bytes": 125537165.375, "num_examples": 3669}, {"name": "fewshot_5_bs_16", "num_bytes": 128243890.375, "num_examples": 3669}, {"name": "fewshot_8_bs_16", "num_bytes": 132312524.375, "num_examples": 3669}], "download_size": 604685676, "dataset_size": 630393808.875}}
2023-01-25T21:01:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OxfordPets_test_facebook_opt_1.3b_Visclues_ns_3669" More Information needed
[ "# Dataset Card for \"OxfordPets_test_facebook_opt_1.3b_Visclues_ns_3669\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OxfordPets_test_facebook_opt_1.3b_Visclues_ns_3669\"\n\nMore Information needed" ]
3e793948e63e15c2ada57984aff1e8848c55c560
# Dataset Card for "OxfordPets_test_facebook_opt_2.7b_Visclues_ns_3669" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordPets_test_facebook_opt_2.7b_Visclues_ns_3669
[ "region:us" ]
2023-01-25T16:39:10+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 121488865.375, "num_examples": 3669}, {"name": "fewshot_1_bs_16", "num_bytes": 122822889.375, "num_examples": 3669}, {"name": "fewshot_3_bs_16", "num_bytes": 125537183.375, "num_examples": 3669}, {"name": "fewshot_5_bs_16", "num_bytes": 128243845.375, "num_examples": 3669}, {"name": "fewshot_8_bs_16", "num_bytes": 132312365.375, "num_examples": 3669}], "download_size": 604681164, "dataset_size": 630405148.875}}
2023-01-25T21:31:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OxfordPets_test_facebook_opt_2.7b_Visclues_ns_3669" More Information needed
[ "# Dataset Card for \"OxfordPets_test_facebook_opt_2.7b_Visclues_ns_3669\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OxfordPets_test_facebook_opt_2.7b_Visclues_ns_3669\"\n\nMore Information needed" ]
4228afe8a630ba39652b20c3f12cf34eb80a0cd6
# Dataset Card for "BusinessNewsDataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
LIDIA-HESSEN/vencortex-BusinessNewsDataset
[ "region:us" ]
2023-01-25T17:09:47+00:00
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "image", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 290733891, "num_examples": 469361}], "download_size": 123671926, "dataset_size": 290733891}}
2023-01-25T17:09:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for "BusinessNewsDataset" More Information needed
[ "# Dataset Card for \"BusinessNewsDataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"BusinessNewsDataset\"\n\nMore Information needed" ]
d26e9511bf4570dcba1ea244f93807bdf14750c6
# Dataset Card for "FAQ_student_accesiblity_for_UTD" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Rami/FAQ_student_accesiblity_for_UTD
[ "region:us" ]
2023-01-25T17:15:01+00:00
{"dataset_info": {"features": [{"name": "Question", "dtype": "string"}, {"name": "Answering", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "Label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 86308, "num_examples": 156}], "download_size": 44389, "dataset_size": 86308}}
2023-03-25T22:17:38+00:00
[]
[]
TAGS #region-us
# Dataset Card for "FAQ_student_accesiblity_for_UTD" More Information needed
[ "# Dataset Card for \"FAQ_student_accesiblity_for_UTD\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"FAQ_student_accesiblity_for_UTD\"\n\nMore Information needed" ]
763c2084a6b03532f4b6277818b03e5263d229d3
# This repository contains the dataset of the weather forecasting competition - Datavidia 2022 ## File Description - train.csv - Data used to train the model, containing the features and the target - train_hourly.csv - Additional data containing features for every hour - test.csv - Test data containing the features for predicting the target - test_hourly.csv - Additional data containing features for every hour on the dates included in test.csv - sample_submission.csv - File containing a sample submission for this competition ## Feature Description ### train.csv - time – Date of the record - temperature_2m_max (°C) – Maximum air temperature at 2 m above the surface - temperature_2m_min (°C) – Minimum air temperature at 2 m above the surface - apparent_temperature_max (°C) – Maximum apparent (felt) temperature - apparent_temperature_min (°C) – Minimum apparent (felt) temperature - sunrise (iso8601) – Sunrise time on that day in ISO 8601 format - sunset (iso8601) – Sunset time on that day in ISO 8601 format - shortwave_radiation_sum (MJ/m²) – Total solar radiation on that day - rain_sum (mm) – Total rainfall on that day - snowfall_sum (cm) – Total snowfall on that day - windspeed_10m_max (km/h) – Maximum wind speed at a height of 10 m - windgusts_10m_max (km/h) – Maximum wind gust speed at a height of 10 m - winddirection_10m_dominant (°) – Dominant wind direction on that day - et0_fao_evapotranspiration (mm) – Total evaporation and transpiration on that day - elevation – Elevation of the recorded city - city – Name of the recorded city ### train_hourly.csv - time – Date and hour of the record - temperature_2m (°C) – Temperature at a height of 2 m - relativehumidity_2m (%) – Relative humidity at a height of 2 m - dewpoint_2m (°C) – Dew point; the temperature threshold at which the air condenses - apparent_temperature (°C) – Apparent (felt) temperature - pressure_msl (hPa) – Air pressure at mean sea level - surface_pressure (hPa) – Air pressure at the surface elevation of that area - snowfall (cm) – Snowfall during that hour - cloudcover (%) – Percentage of the sky covered by clouds - cloudcover_low (%) – Cloud cover percentage for clouds up to 2 km altitude - cloudcover_mid (%) – Cloud cover percentage at 2-6 km altitude - cloudcover_high (%) – Cloud cover percentage above 6 km altitude - shortwave_radiation (W/m²) – Average solar radiation energy over the infrared to ultraviolet wavelengths - direct_radiation (W/m²) – Average direct solar radiation on a 1 m² ground surface - diffuse_radiation (W/m²) – Average solar radiation scattered by the surface and the atmosphere - direct_normal_irradiance (W/m²) – Average direct solar radiation on a 1 m² area perpendicular to the direction of the radiation - windspeed_10m (km/h) – Wind speed at a height of 10 m - windspeed_100m (km/h) – Wind speed at a height of 100 m - winddirection_10m (°) – Wind direction at a height of 10 m - winddirection_100m (°) – Wind direction at a height of 100 m - windgusts_10m (km/h) – Wind speed during wind gusts - et0_fao_evapotranspiration (mm) – Total evapotranspiration (evaporation and transpiration) during that hour - vapor_pressure_deficit (kPa) – Difference between the actual water vapour pressure of the air and the vapour pressure when the air is saturated - soil_temperature_0_to_7cm (°C) – Average soil temperature at a depth of 0-7 cm - soil_temperature_7_to_28cm (°C) – Average soil temperature at a depth of 7-28 cm - soil_temperature_28_to_100cm (°C) – Average soil temperature at a depth of 28-100 cm - soil_temperature_100_to_255cm (°C) – Average soil temperature at a depth of 100-255 cm - soil_moisture_0_to_7cm (m³/m³) – Average soil moisture at a depth of 0-7 cm - soil_moisture_7_to_28cm (m³/m³) – Average soil moisture at a depth of 7-28 cm - soil_moisture_28_to_100cm (m³/m³) – Average soil moisture at a depth of 28-100 cm - soil_moisture_100_to_255cm (m³/m³) – Average soil moisture at a depth of 100-255 cm - city – City name
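As a quick illustration (not part of the original competition materials), the files described above could be loaded and joined with pandas along the following lines; the exact column headers (including the unit suffixes) and file locations are assumptions taken from the feature list, so adjust them to the real CSVs.

```python
# Hedged sketch: load the daily training data and attach simple daily aggregates
# of the hourly features. Column names (with unit suffixes) and file paths are
# assumptions based on the feature list above.
import pandas as pd

daily = pd.read_csv("train.csv", parse_dates=["time"])
hourly = pd.read_csv("train_hourly.csv", parse_dates=["time"])

# Collapse the hourly records to one row per city and day, e.g. daily means
# of 2 m temperature and relative humidity.
hourly["date"] = hourly["time"].dt.floor("D")
agg = (
    hourly.groupby(["city", "date"])[["temperature_2m (°C)", "relativehumidity_2m (%)"]]
    .mean()
    .add_prefix("daily_mean_")
    .reset_index()
)

merged = daily.merge(agg, left_on=["city", "time"], right_on=["city", "date"], how="left")
print(merged.head())
```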
elskow/Weather4cast
[ "license:unlicense", "region:us" ]
2023-01-25T17:31:20+00:00
{"license": "unlicense"}
2023-01-25T17:58:10+00:00
[]
[]
TAGS #license-unlicense #region-us
# This repository contains the dataset of weather forecasting competition - Datavidia 2022 ## Deskripsi File - URL - Data yang digunakan untuk melatih model berisi fitur-fitur dan target - train_hourly.csv - Data tambahan berisi fitur-fitur untuk setiap jam - URL - Data uji yang berisi fitur-fitur untuk prediksi target - test_hourly.csv - Data tambahan berisi fitur-fitur untuk setiap jam pada tanggal-tanggal yang termasuk dalam URL - sample_submission.csv - File berisi contoh submisi untuk kompetisi ini ## Deskripsi Fitur ### URL - time – Tanggal pencatatan - temperature_2m_max (°C) – Temperatur udara tertinggi pada ketinggian 2 m di atas permukaan - temperature_2m_min (°C) – Temperatur udara terendah pada ketinggian 2 m di atas permukaan - apparent_temperature_max (°C) – Temperatur semu maksimum yang terasa - apparent_temperature_min (°C) – Temperatur semu minimum yang terasa - sunrise (iso8601) – Waktu matahari terbit pada hari itu dengan format ISO 8601 - sunset (iso8601) – Waktu matahari tenggelam pada hari itu dengan format ISO 8601 - shortwave_radiation_sum (MJ/m²) – Total radiasi matahari pada hari tersebut - rain_sum (mm) – Jumlah curah hujan pada hari tersebut - snowfall_sum (cm) – Jumlah hujan salju pada hari tersebut - windspeed_10m_max (km/h) – Kecepatan angin maksimum pada ketinggian 10 m - windgusts_10m_max (km/h) - Kecepatan angin minimum pada ketinggian 10 m - winddirection_10m_dominant (°) – Arah angin dominan pada hari tersebut - et0_fao_evapotranspiration (mm) – Jumlah evaporasi dan transpirasi pada hari tersebut - elevation – Ketinggian kota yang tercatat - city – Nama kota yang tercatat ### train_hourly.csv - time – Tanggal dan jam pencatatan - temperature_2m (°C) – Temperatur pada ketinggian 2 m - relativehumidity_2m (%) – Kelembapan pada ketinggian 2 m - dewpoint_2m (°C) – Titik embun; suhu ambang udara mengembun - apparent_temperature (°C) – Temperatur semu yang dirasakan - pressure_msl (hPa) – Tekanan udara pada ketinggian permukaan air laut rata-rata (mean sea level) - surface_pressure (hPa) – Tekanan udara pada ketinggian permukaan daerah tersebut - snowfall (cm) – Jumlah hujan salju pada jam tersebut - cloudcover (%) – Persentase awan yang menutupi langit - cloudcover_low (%) – Persentase cloud cover pada awan sampai ketinggian 2 km - cloudcover_mid (%) – Persentase cloud cover pada ketinggian 2-6 km - cloudcover_high (%) – Persentase cloud cover pada ketinggian di atas 6 km - shortwave_radiation (W/m²) – Rata-rata energi pancaran matahari pada gelombang inframerah hingga ultraviolet - direct_radiation (W/m²) – Rata-rata pancaran matahari langsung pada permukaan tanah seluas 1 m2 - diffuse_radiation (W/m²) – Rata-rata pancaran matahari yang dihamburkan oleh permukaan dan atmosfer - direct_normal_irradiance (W/m²) – Rata-rata pancaran matahari langsung pada luas 1 m2 tegak lurus dengan arah pancaran - windspeed_10m (km/h) – Kecepatan angin pada ketinggian 10 m - windspeed_100m (km/h) – Kecepatan angin pada ketinggian 100 m - winddirection_10m (°) – Arah angin pada ketinggian 10 m - winddirection_100m (°) – Arah angin pada ketinggian 100 m - windgusts_10m (km/h) – Kecepatan angin ketika terdapat angin kencang - et0_fao_evapotranspiration (mm) – Jumlah evapotranspirasi (evaporasi dan transpirasi) pada jam tersebut - vapor_pressure_deficit (kPa) – Perbedaan tekanan uap air dari udara dengan tekanan uap air ketika udara tersaturasi - soil_temperature_0_to_7cm (°C) – Rata-rata temperatur tanah pada kedalaman 0-7 cm - soil_temperature_7_to_28cm (°C) – Rata-rata 
temperatur tanah pada kedalaman 7-28 cm - soil_temperature_28_to_100cm (°C) – Rata-rata temperatur tanah pada kedalaman 28-100 cm - soil_temperature_100_to_255cm (°C) – Rata-rata temperatur tanah pada kedalaman 100-255 cm - soil_moisture_0_to_7cm (m³/m³) – Rata-rata kelembapan air pada tanah untuk kedalaman 0-7 cm - soil_moisture_7_to_28cm (m³/m³) – Rata-rata kelembapan air pada tanah untuk kedalaman 7-28 cm - soil_moisture_28_to_100cm (m³/m³) – Rata-rata kelembapan air pada tanah untuk kedalaman 28-100 cm - soil_moisture_100_to_255cm (m³/m³) – Rata-rata kelembapan air pada tanah untuk kedalaman 100-255 cm - city – Nama kota
[ "# This repository contains the dataset of weather forecasting competition - Datavidia 2022", "## Deskripsi File\n- URL - Data yang digunakan untuk melatih model berisi fitur-fitur dan target\n- train_hourly.csv - Data tambahan berisi fitur-fitur untuk setiap jam\n- URL - Data uji yang berisi fitur-fitur untuk prediksi target\n- test_hourly.csv - Data tambahan berisi fitur-fitur untuk setiap jam pada tanggal-tanggal yang termasuk dalam URL\n- sample_submission.csv - File berisi contoh submisi untuk kompetisi ini", "## Deskripsi Fitur", "### URL\n- time – Tanggal pencatatan\n- temperature_2m_max (°C) – Temperatur udara tertinggi pada ketinggian 2 m di atas permukaan\n- temperature_2m_min (°C) – Temperatur udara terendah pada ketinggian 2 m di atas permukaan\n- apparent_temperature_max (°C) – Temperatur semu maksimum yang terasa\n- apparent_temperature_min (°C) – Temperatur semu minimum yang terasa\n- sunrise (iso8601) – Waktu matahari terbit pada hari itu dengan format ISO 8601\n- sunset (iso8601) – Waktu matahari tenggelam pada hari itu dengan format ISO 8601\n- shortwave_radiation_sum (MJ/m²) – Total radiasi matahari pada hari tersebut\n- rain_sum (mm) – Jumlah curah hujan pada hari tersebut\n- snowfall_sum (cm) – Jumlah hujan salju pada hari tersebut\n- windspeed_10m_max (km/h) – Kecepatan angin maksimum pada ketinggian 10 m\n- windgusts_10m_max (km/h) - Kecepatan angin minimum pada ketinggian 10 m\n- winddirection_10m_dominant (°) – Arah angin dominan pada hari tersebut\n- et0_fao_evapotranspiration (mm) – Jumlah evaporasi dan transpirasi pada hari tersebut\n- elevation – Ketinggian kota yang tercatat\n- city – Nama kota yang tercatat", "### train_hourly.csv\n- time – Tanggal dan jam pencatatan\n- temperature_2m (°C) – Temperatur pada ketinggian 2 m\n- relativehumidity_2m (%) – Kelembapan pada ketinggian 2 m\n- dewpoint_2m (°C) – Titik embun; suhu ambang udara mengembun\n- apparent_temperature (°C) – Temperatur semu yang dirasakan\n- pressure_msl (hPa) – Tekanan udara pada ketinggian permukaan air laut rata-rata (mean sea level)\n- surface_pressure (hPa) – Tekanan udara pada ketinggian permukaan daerah tersebut\n- snowfall (cm) – Jumlah hujan salju pada jam tersebut\n- cloudcover (%) – Persentase awan yang menutupi langit\n- cloudcover_low (%) – Persentase cloud cover pada awan sampai ketinggian 2 km\n- cloudcover_mid (%) – Persentase cloud cover pada ketinggian 2-6 km\n- cloudcover_high (%) – Persentase cloud cover pada ketinggian di atas 6 km\n- shortwave_radiation (W/m²) – Rata-rata energi pancaran matahari pada gelombang inframerah hingga ultraviolet\n- direct_radiation (W/m²) – Rata-rata pancaran matahari langsung pada permukaan tanah seluas 1 m2\n- diffuse_radiation (W/m²) – Rata-rata pancaran matahari yang dihamburkan oleh permukaan dan atmosfer\n- direct_normal_irradiance (W/m²) – Rata-rata pancaran matahari langsung pada luas 1 m2 tegak lurus dengan arah pancaran\n- windspeed_10m (km/h) – Kecepatan angin pada ketinggian 10 m\n- windspeed_100m (km/h) – Kecepatan angin pada ketinggian 100 m\n- winddirection_10m (°) – Arah angin pada ketinggian 10 m\n- winddirection_100m (°) – Arah angin pada ketinggian 100 m\n- windgusts_10m (km/h) – Kecepatan angin ketika terdapat angin kencang\n- et0_fao_evapotranspiration (mm) – Jumlah evapotranspirasi (evaporasi dan transpirasi) pada jam tersebut\n- vapor_pressure_deficit (kPa) – Perbedaan tekanan uap air dari udara dengan tekanan uap air ketika udara tersaturasi\n- soil_temperature_0_to_7cm (°C) – Rata-rata temperatur tanah pada kedalaman 
0-7 cm\n- soil_temperature_7_to_28cm (°C) – Rata-rata temperatur tanah pada kedalaman 7-28 cm\n- soil_temperature_28_to_100cm (°C) – Rata-rata temperatur tanah pada kedalaman 28-100 cm\n- soil_temperature_100_to_255cm (°C) – Rata-rata temperatur tanah pada kedalaman 100-255 cm\n- soil_moisture_0_to_7cm (m³/m³) – Rata-rata kelembapan air pada tanah untuk kedalaman 0-7 cm\n- soil_moisture_7_to_28cm (m³/m³) – Rata-rata kelembapan air pada tanah untuk kedalaman 7-28 cm\n- soil_moisture_28_to_100cm (m³/m³) – Rata-rata kelembapan air pada tanah untuk kedalaman 28-100 cm\n- soil_moisture_100_to_255cm (m³/m³) – Rata-rata kelembapan air pada tanah untuk kedalaman 100-255 cm\n- city – Nama kota" ]
[ "TAGS\n#license-unlicense #region-us \n", "# This repository contains the dataset of weather forecasting competition - Datavidia 2022", "## Deskripsi File\n- URL - Data yang digunakan untuk melatih model berisi fitur-fitur dan target\n- train_hourly.csv - Data tambahan berisi fitur-fitur untuk setiap jam\n- URL - Data uji yang berisi fitur-fitur untuk prediksi target\n- test_hourly.csv - Data tambahan berisi fitur-fitur untuk setiap jam pada tanggal-tanggal yang termasuk dalam URL\n- sample_submission.csv - File berisi contoh submisi untuk kompetisi ini", "## Deskripsi Fitur", "### URL\n- time – Tanggal pencatatan\n- temperature_2m_max (°C) – Temperatur udara tertinggi pada ketinggian 2 m di atas permukaan\n- temperature_2m_min (°C) – Temperatur udara terendah pada ketinggian 2 m di atas permukaan\n- apparent_temperature_max (°C) – Temperatur semu maksimum yang terasa\n- apparent_temperature_min (°C) – Temperatur semu minimum yang terasa\n- sunrise (iso8601) – Waktu matahari terbit pada hari itu dengan format ISO 8601\n- sunset (iso8601) – Waktu matahari tenggelam pada hari itu dengan format ISO 8601\n- shortwave_radiation_sum (MJ/m²) – Total radiasi matahari pada hari tersebut\n- rain_sum (mm) – Jumlah curah hujan pada hari tersebut\n- snowfall_sum (cm) – Jumlah hujan salju pada hari tersebut\n- windspeed_10m_max (km/h) – Kecepatan angin maksimum pada ketinggian 10 m\n- windgusts_10m_max (km/h) - Kecepatan angin minimum pada ketinggian 10 m\n- winddirection_10m_dominant (°) – Arah angin dominan pada hari tersebut\n- et0_fao_evapotranspiration (mm) – Jumlah evaporasi dan transpirasi pada hari tersebut\n- elevation – Ketinggian kota yang tercatat\n- city – Nama kota yang tercatat", "### train_hourly.csv\n- time – Tanggal dan jam pencatatan\n- temperature_2m (°C) – Temperatur pada ketinggian 2 m\n- relativehumidity_2m (%) – Kelembapan pada ketinggian 2 m\n- dewpoint_2m (°C) – Titik embun; suhu ambang udara mengembun\n- apparent_temperature (°C) – Temperatur semu yang dirasakan\n- pressure_msl (hPa) – Tekanan udara pada ketinggian permukaan air laut rata-rata (mean sea level)\n- surface_pressure (hPa) – Tekanan udara pada ketinggian permukaan daerah tersebut\n- snowfall (cm) – Jumlah hujan salju pada jam tersebut\n- cloudcover (%) – Persentase awan yang menutupi langit\n- cloudcover_low (%) – Persentase cloud cover pada awan sampai ketinggian 2 km\n- cloudcover_mid (%) – Persentase cloud cover pada ketinggian 2-6 km\n- cloudcover_high (%) – Persentase cloud cover pada ketinggian di atas 6 km\n- shortwave_radiation (W/m²) – Rata-rata energi pancaran matahari pada gelombang inframerah hingga ultraviolet\n- direct_radiation (W/m²) – Rata-rata pancaran matahari langsung pada permukaan tanah seluas 1 m2\n- diffuse_radiation (W/m²) – Rata-rata pancaran matahari yang dihamburkan oleh permukaan dan atmosfer\n- direct_normal_irradiance (W/m²) – Rata-rata pancaran matahari langsung pada luas 1 m2 tegak lurus dengan arah pancaran\n- windspeed_10m (km/h) – Kecepatan angin pada ketinggian 10 m\n- windspeed_100m (km/h) – Kecepatan angin pada ketinggian 100 m\n- winddirection_10m (°) – Arah angin pada ketinggian 10 m\n- winddirection_100m (°) – Arah angin pada ketinggian 100 m\n- windgusts_10m (km/h) – Kecepatan angin ketika terdapat angin kencang\n- et0_fao_evapotranspiration (mm) – Jumlah evapotranspirasi (evaporasi dan transpirasi) pada jam tersebut\n- vapor_pressure_deficit (kPa) – Perbedaan tekanan uap air dari udara dengan tekanan uap air ketika udara tersaturasi\n- soil_temperature_0_to_7cm (°C) – 
Rata-rata temperatur tanah pada kedalaman 0-7 cm\n- soil_temperature_7_to_28cm (°C) – Rata-rata temperatur tanah pada kedalaman 7-28 cm\n- soil_temperature_28_to_100cm (°C) – Rata-rata temperatur tanah pada kedalaman 28-100 cm\n- soil_temperature_100_to_255cm (°C) – Rata-rata temperatur tanah pada kedalaman 100-255 cm\n- soil_moisture_0_to_7cm (m³/m³) – Rata-rata kelembapan air pada tanah untuk kedalaman 0-7 cm\n- soil_moisture_7_to_28cm (m³/m³) – Rata-rata kelembapan air pada tanah untuk kedalaman 7-28 cm\n- soil_moisture_28_to_100cm (m³/m³) – Rata-rata kelembapan air pada tanah untuk kedalaman 28-100 cm\n- soil_moisture_100_to_255cm (m³/m³) – Rata-rata kelembapan air pada tanah untuk kedalaman 100-255 cm\n- city – Nama kota" ]
a4c789887a5064ddb505b642b381c347ac0c6964
# Dataset Card for pile-detoxify ## Dataset Description - **Repository: https://github.com/tomekkorbak/aligned-pretraining-objectives** - **Paper: Arxiv link to be added** ### Dataset Summary This dataset contains text from [The Pile](https://huggingface.co/datasets/the_pile), annotated based on the toxicity of each sentence. Each document (row in the dataset) is segmented into sentences, and each sentence is given a score: the toxicity predicted by [Detoxify](https://github.com/unitaryai/detoxify). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages This dataset is taken from [The Pile](https://huggingface.co/datasets/the_pile), which is English text. ## Dataset Structure ### Data Instances 1949977 ### Data Fields - texts (sequence): a list of the sentences in the document, segmented using SpaCy - meta (dict): the section of [The Pile](https://huggingface.co/datasets/the_pile) from which it originated - scores (sequence): a score for each sentence in the `texts` column indicating the toxicity predicted by [Detoxify](https://github.com/unitaryai/detoxify) - avg_score (float64): the average of the scores listed in the `scores` column - num_sents (int64): the number of sentences (and scores) in that document ### Data Splits Training set only ## Dataset Creation ### Curation Rationale This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile), a large dataset of text in English. The text is scored for toxicity so that generative language models can be trained to avoid generating toxic text. ### Source Data #### Initial Data Collection and Normalization This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile). #### Who are the source language producers? Please see [The Pile](https://huggingface.co/datasets/the_pile) for the source of the dataset. ### Annotations #### Annotation process Each sentence was scored using [Detoxify](https://github.com/unitaryai/detoxify), which is a toxic comment classifier. We used the `unbiased` model which is based on the 124M parameter [RoBERTa](https://arxiv.org/abs/1907.11692) and trained on the [Jigsaw Unintended Bias in Toxicity Classification dataset](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification). #### Who are the annotators? [Detoxify](https://github.com/unitaryai/detoxify) ### Personal and Sensitive Information This dataset contains all personally identifiable information and toxic text that was originally contained in [The Pile](https://huggingface.co/datasets/the_pile). ## Considerations for Using the Data ### Social Impact of Dataset This dataset contains examples of toxic text and personally identifiable information. (A version of this dataset with personally identifiable information annotated is [available here](https://huggingface.co/datasets/tomekkorbak/pile-pii-scrubadub).) Please take care to avoid misusing the toxic text or putting anybody in danger by publicizing their information. This dataset is intended for research purposes only. We cannot guarantee that all toxic text has been detected, and we cannot guarantee that models trained using it will avoid generating toxic text. We do not recommend deploying models trained on this data. ### Discussion of Biases This dataset contains all biases from The Pile discussed in their paper: https://arxiv.org/abs/2101.00027 ### Other Known Limitations The toxic text in this dataset was detected using imperfect automated detection methods. We cannot guarantee that the labels are 100% accurate.
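As a rough, non-authoritative illustration of the scoring procedure described under "Annotation process" above, a row with the same shape (texts, scores, avg_score, num_sents) could be reproduced roughly as follows; the spaCy model name and Detoxify version are assumptions, and the authors' exact pipeline may differ.

```python
# Hedged sketch: segment a document into sentences with spaCy and score each
# sentence with the Detoxify 'unbiased' model, mirroring the dataset columns.
# Model/pipeline versions are assumptions, not the authors' exact setup.
import spacy                   # pip install spacy && python -m spacy download en_core_web_sm
from detoxify import Detoxify  # pip install detoxify

nlp = spacy.load("en_core_web_sm")
model = Detoxify("unbiased")

document = "This is a perfectly ordinary sentence. This other sentence might score differently."
texts = [sent.text.strip() for sent in nlp(document).sents]
scores = [float(s) for s in model.predict(texts)["toxicity"]]

row = {
    "texts": texts,
    "scores": scores,
    "avg_score": sum(scores) / len(scores),
    "num_sents": len(texts),
}
print(row)
```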
## Additional Information ### Dataset Curators [The Pile](https://huggingface.co/datasets/the_pile) ### Licensing Information From [The Pile](https://huggingface.co/datasets/the_pile): PubMed Central: [MIT License](https://github.com/EleutherAI/pile-pubmedcentral/blob/master/LICENSE) ### Citation Information Paper information to be added ### Contributions [The Pile](https://huggingface.co/datasets/the_pile)
tomekkorbak/pile-detoxify
[ "task_categories:text-classification", "task_categories:other", "task_ids:acceptability-classification", "task_ids:hate-speech-detection", "task_ids:text-scoring", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:extended|the_pile", "language:en", "license:mit", "toxicity", "pretraining-with-human-feedback", "arxiv:1907.11692", "arxiv:2101.00027", "region:us" ]
2023-01-25T17:32:30+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["extended|the_pile"], "task_categories": ["text-classification", "other"], "task_ids": ["acceptability-classification", "hate-speech-detection", "text-scoring"], "pretty_name": "pile-detoxify", "tags": ["toxicity", "pretraining-with-human-feedback"]}
2023-02-07T15:31:11+00:00
[ "1907.11692", "2101.00027" ]
[ "en" ]
TAGS #task_categories-text-classification #task_categories-other #task_ids-acceptability-classification #task_ids-hate-speech-detection #task_ids-text-scoring #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-extended|the_pile #language-English #license-mit #toxicity #pretraining-with-human-feedback #arxiv-1907.11692 #arxiv-2101.00027 #region-us
# Dataset Card for pile-pii-scrubadub ## Dataset Description - Repository: URL - Paper: Arxiv link to be added ### Dataset Summary This dataset contains text from The Pile, annotated based on the toxicity of each sentence. Each document (row in the dataset) is segmented into sentences, and each sentence is given a score: the toxicity predicted by the Detoxify. ### Supported Tasks and Leaderboards ### Languages This dataset is taken from The Pile, which is English text. ## Dataset Structure ### Data Instances 1949977 ### Data Fields - texts (sequence): a list of the sentences in the document, segmented using SpaCy - meta (dict): the section of The Pile from which it originated - scores (sequence): a score for each sentence in the 'texts' column indicating the toxicity predicted by Detoxify - avg_score (float64): the average of the scores listed in the 'scores' column - num_sents (int64): the number of sentences (and scores) in that document ### Data Splits Training set only ## Dataset Creation ### Curation Rationale This is labeled text from The Pile, a large dataset of text in English. The text is scored for toxicity so that generative language models can be trained to avoid generating toxic text. ### Source Data #### Initial Data Collection and Normalization This is labeled text from The Pile. #### Who are the source language producers? Please see The Pile for the source of the dataset. ### Annotations #### Annotation process Each sentence was scored using Detoxify, which is a toxic comment classifier. We used the 'unbiased' model which is based on the 124M parameter RoBERTa and trained on the Jigsaw Unintended Bias in Toxicity Classification dataset. #### Who are the annotators? Detoxify ### Personal and Sensitive Information This dataset contains all personal identifable information and toxic text that was originally contained in The Pile. ## Considerations for Using the Data ### Social Impact of Dataset This dataset contains examples of toxic text and personal identifiable information. (A version of this datatset with personal identifiable information annotated is available here.) Please take care to avoid misusing the toxic text or putting anybody in danger by publicizing their information. This dataset is intended for research purposes only. We cannot guarantee that all toxic text has been detected, and we cannot guarantee that models trained using it will avoid generating toxic text. We do not recommend deploying models trained on this data. ### Discussion of Biases This dataset contains all biases from The Pile discussed in their paper: URL ### Other Known Limitations The toxic text in this dataset was detected using imperfect automated detection methods. We cannot guarantee that the labels are 100% accurate. ## Additional Information ### Dataset Curators The Pile ### Licensing Information From The Pile: PubMed Central: MIT License Paper information to be added ### Contributions The Pile
[ "# Dataset Card for pile-pii-scrubadub", "## Dataset Description\n\n- Repository: URL \n- Paper: Arxiv link to be added", "### Dataset Summary\n\nThis dataset contains text from The Pile, annotated based on the toxicity of each sentence.\nEach document (row in the dataset) is segmented into sentences, and each sentence is given a score: the toxicity predicted by the Detoxify.", "### Supported Tasks and Leaderboards", "### Languages\n\nThis dataset is taken from The Pile, which is English text.", "## Dataset Structure", "### Data Instances\n\n1949977", "### Data Fields\n\n- texts (sequence): a list of the sentences in the document, segmented using SpaCy\n- meta (dict): the section of The Pile from which it originated\n- scores (sequence): a score for each sentence in the 'texts' column indicating the toxicity predicted by Detoxify\n- avg_score (float64): the average of the scores listed in the 'scores' column\n- num_sents (int64): the number of sentences (and scores) in that document", "### Data Splits\n\nTraining set only", "## Dataset Creation", "### Curation Rationale\n\nThis is labeled text from The Pile, a large dataset of text in English. The text is scored for toxicity so that generative language models can be trained to avoid generating toxic text.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThis is labeled text from The Pile.", "#### Who are the source language producers?\n\nPlease see The Pile for the source of the dataset.", "### Annotations", "#### Annotation process\n\nEach sentence was scored using Detoxify, which is a toxic comment classifier.\nWe used the 'unbiased' model which is based on the 124M parameter RoBERTa and trained on the Jigsaw Unintended Bias in Toxicity Classification dataset.", "#### Who are the annotators?\n\nDetoxify", "### Personal and Sensitive Information\n\nThis dataset contains all personal identifable information and toxic text that was originally contained in The Pile.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset contains examples of toxic text and personal identifiable information.\n(A version of this datatset with personal identifiable information annotated is available here.)\nPlease take care to avoid misusing the toxic text or putting anybody in danger by publicizing their information.\nThis dataset is intended for research purposes only. We cannot guarantee that all toxic text has been detected, and we cannot guarantee that models trained using it will avoid generating toxic text.\nWe do not recommend deploying models trained on this data.", "### Discussion of Biases\n\nThis dataset contains all biases from The Pile discussed in their paper: URL", "### Other Known Limitations\n\nThe toxic text in this dataset was detected using imperfect automated detection methods. We cannot guarantee that the labels are 100% accurate.", "## Additional Information", "### Dataset Curators\n\nThe Pile", "### Licensing Information\n\nFrom The Pile: PubMed Central: MIT License\n\n\n\nPaper information to be added", "### Contributions\n\nThe Pile" ]
[ "TAGS\n#task_categories-text-classification #task_categories-other #task_ids-acceptability-classification #task_ids-hate-speech-detection #task_ids-text-scoring #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-extended|the_pile #language-English #license-mit #toxicity #pretraining-with-human-feedback #arxiv-1907.11692 #arxiv-2101.00027 #region-us \n", "# Dataset Card for pile-pii-scrubadub", "## Dataset Description\n\n- Repository: URL \n- Paper: Arxiv link to be added", "### Dataset Summary\n\nThis dataset contains text from The Pile, annotated based on the toxicity of each sentence.\nEach document (row in the dataset) is segmented into sentences, and each sentence is given a score: the toxicity predicted by the Detoxify.", "### Supported Tasks and Leaderboards", "### Languages\n\nThis dataset is taken from The Pile, which is English text.", "## Dataset Structure", "### Data Instances\n\n1949977", "### Data Fields\n\n- texts (sequence): a list of the sentences in the document, segmented using SpaCy\n- meta (dict): the section of The Pile from which it originated\n- scores (sequence): a score for each sentence in the 'texts' column indicating the toxicity predicted by Detoxify\n- avg_score (float64): the average of the scores listed in the 'scores' column\n- num_sents (int64): the number of sentences (and scores) in that document", "### Data Splits\n\nTraining set only", "## Dataset Creation", "### Curation Rationale\n\nThis is labeled text from The Pile, a large dataset of text in English. The text is scored for toxicity so that generative language models can be trained to avoid generating toxic text.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThis is labeled text from The Pile.", "#### Who are the source language producers?\n\nPlease see The Pile for the source of the dataset.", "### Annotations", "#### Annotation process\n\nEach sentence was scored using Detoxify, which is a toxic comment classifier.\nWe used the 'unbiased' model which is based on the 124M parameter RoBERTa and trained on the Jigsaw Unintended Bias in Toxicity Classification dataset.", "#### Who are the annotators?\n\nDetoxify", "### Personal and Sensitive Information\n\nThis dataset contains all personal identifable information and toxic text that was originally contained in The Pile.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset contains examples of toxic text and personal identifiable information.\n(A version of this datatset with personal identifiable information annotated is available here.)\nPlease take care to avoid misusing the toxic text or putting anybody in danger by publicizing their information.\nThis dataset is intended for research purposes only. We cannot guarantee that all toxic text has been detected, and we cannot guarantee that models trained using it will avoid generating toxic text.\nWe do not recommend deploying models trained on this data.", "### Discussion of Biases\n\nThis dataset contains all biases from The Pile discussed in their paper: URL", "### Other Known Limitations\n\nThe toxic text in this dataset was detected using imperfect automated detection methods. We cannot guarantee that the labels are 100% accurate.", "## Additional Information", "### Dataset Curators\n\nThe Pile", "### Licensing Information\n\nFrom The Pile: PubMed Central: MIT License\n\n\n\nPaper information to be added", "### Contributions\n\nThe Pile" ]
b61d29f477163034001472614dc97fb9614dddea
# Dataset Card for DocLayNet small ## About this card (01/27/2023) ### Property and license All information from this page but the content of this paragraph "About this card (01/27/2023)" has been copied/pasted from [Dataset Card for DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet). DocLayNet is a dataset created by Deep Search (IBM Research) published under [license CDLA-Permissive-1.0](https://huggingface.co/datasets/ds4sd/DocLayNet#licensing-information). I do not claim any rights to the data taken from this dataset and published on this page. ### DocLayNet dataset [DocLayNet dataset](https://github.com/DS4SD/DocLayNet) (IBM) provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. To date, the dataset can be downloaded through direct links or as a dataset from Hugging Face datasets: - direct links: [doclaynet_core.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_core.zip) (28 GiB), [doclaynet_extra.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_extra.zip) (7.5 GiB) - Hugging Face dataset library: [dataset DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet) Paper: [DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis](https://arxiv.org/abs/2206.01062) (06/02/2022) ### Processing into a format facilitating its use by HF notebooks These 2 options require downloading all the data (approximately 30 GiB), which takes time (about 45 min in Google Colab) and a large amount of disk space. This could limit experimentation for people with low resources. Moreover, even when downloading via the HF datasets library, it is necessary to download the EXTRA zip separately ([doclaynet_extra.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_extra.zip), 7.5 GiB) to associate the annotated bounding boxes with the text extracted by OCR from the PDFs. This operation also requires additional code because the bounding boxes of the texts do not necessarily correspond to those annotated (a calculation of the percentage of area in common between the annotated bounding boxes and those of the texts makes it possible to compare them; a small illustrative sketch of this overlap calculation is given below). Finally, in order to use Hugging Face notebooks on fine-tuning layout models like LayoutLMv3 or LiLT, DocLayNet data must be processed in a proper format. For all these reasons, I decided to process the DocLayNet dataset: - into 3 datasets of different sizes: - [DocLayNet small](https://huggingface.co/datasets/pierreguillou/DocLayNet-small) (about 1% of DocLayNet) < 1,000 document images (691 train, 64 val, 49 test) - [DocLayNet base](https://huggingface.co/datasets/pierreguillou/DocLayNet-base) (about 10% of DocLayNet) < 10,000 document images (6910 train, 648 val, 499 test) - [DocLayNet large](https://huggingface.co/datasets/pierreguillou/DocLayNet-large) (about 100% of DocLayNet) < 100,000 document images (69,103 train, 6,480 val, 4,994 test) - with associated texts and PDFs (base64 format), - and in a format facilitating their use by HF notebooks.
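To make the area-overlap idea mentioned above concrete, here is a small, hedged sketch (not the code actually used for the processing, which lives in the notebook referenced further below): it assumes boxes given as (x0, y0, x1, y1) pixel coordinates and measures how much of an OCR text box lies inside an annotated layout box.

```python
# Hedged sketch: fraction of an OCR text box covered by an annotated layout box.
# The (x0, y0, x1, y1) box format is an assumption for illustration only.
def overlap_ratio(annotated_box, text_box):
    ax0, ay0, ax1, ay1 = annotated_box
    tx0, ty0, tx1, ty1 = text_box
    # Intersection rectangle between the two boxes
    ix0, iy0 = max(ax0, tx0), max(ay0, ty0)
    ix1, iy1 = min(ax1, tx1), min(ay1, ty1)
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    text_area = max(1e-9, (tx1 - tx0) * (ty1 - ty0))
    return inter / text_area  # 1.0 means the text box lies fully inside the annotation

# A text cell can then be assigned to the annotated box with the highest ratio,
# typically only when that ratio exceeds a chosen threshold.
print(overlap_ratio((0, 0, 100, 100), (10, 10, 50, 30)))  # -> 1.0
```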
*Note: the layout HF notebooks will greatly help participants of the IBM [ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents](https://ds4sd.github.io/icdar23-doclaynet/)!* ### About PDF languages Quotation from page 3 of the [DocLayNet paper](https://arxiv.org/abs/2206.01062): "We did not control the document selection with regard to language. **The vast majority of documents contained in DocLayNet (close to 95%) are published in English language.** However, **DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%)**. While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features." ### About PDF categories distribution Quotation from page 3 of the [DocLayNet paper](https://arxiv.org/abs/2206.01062): "The pages in DocLayNet can be grouped into **six distinct categories**, namely **Financial Reports, Manuals, Scientific Articles, Laws & Regulations, Patents and Government Tenders**. Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories (Financial Reports and Manuals) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes." ![DocLayNet PDFs categories distribution (source: DocLayNet paper)](https://huggingface.co/datasets/pierreguillou/DocLayNet-small/resolve/main/DocLayNet_PDFs_categories_distribution.png) ### Download & overview DocLayNet small is about 1% of the size of the DocLayNet dataset (random selection within the train, val and test files, respectively). ``` # !pip install -q datasets from datasets import load_dataset dataset_small = load_dataset("pierreguillou/DocLayNet-small") # overview of dataset_small DatasetDict({ train: Dataset({ features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'], num_rows: 691 }) validation: Dataset({ features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'], num_rows: 64 }) test: Dataset({ features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'], num_rows: 49 }) }) ``` ### Annotated bounding boxes The DocLayNet small dataset makes it easy to display a document image with the annotated bounding boxes of paragraphs or lines. Check the notebook [processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb](https://github.com/piegu/language-models/blob/master/processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb) in order to get the code.
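For readers who want a quick idea before opening the notebook, here is a minimal, hedged sketch of such a display; the bounding-box format and the category encoding are assumptions (taken here as (x0, y0, x1, y1) pixel coordinates with integer category ids), and the notebook linked above remains the authoritative reference.

```python
# Hedged sketch: draw the block-level annotations of one page over its image.
# Field semantics (bbox format, category encoding) are assumed; see the notebook
# referenced above for the authoritative version.
from datasets import load_dataset
from PIL import ImageDraw

dataset_small = load_dataset("pierreguillou/DocLayNet-small")
example = dataset_small["train"][0]

image = example["image"].copy()  # the page as a PIL image
draw = ImageDraw.Draw(image)
for bbox, category in zip(example["bboxes_block"], example["categories"]):
    x0, y0, x1, y1 = bbox  # assumed (x0, y0, x1, y1) in pixels
    draw.rectangle([x0, y0, x1, y1], outline="red", width=2)
    draw.text((x0, max(0, y0 - 12)), str(category), fill="red")

image.save("annotated_page.png")
```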
#### Paragraphs ![Annotated DocLayNet document image with bounding boxes and categories of paragraphs](https://huggingface.co/datasets/pierreguillou/DocLayNet-small/resolve/main/DocLayNet_image_annotated_bounding_boxes_paragraph.png) #### Lines ![Annotated DocLayNet document image with bounding boxes and categories of lines](https://huggingface.co/datasets/pierreguillou/DocLayNet-small/resolve/main/DocLayNet_image_annotated_bounding_boxes_line.png) ### HF notebooks - [notebooks LayoutLM](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLM) (Niels Rogge) - [notebooks LayoutLMv2](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv2) (Niels Rogge) - [notebooks LayoutLMv3](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv3) (Niels Rogge) - [notebooks LiLT](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LiLT) (Niels Rogge) - [Document AI: Fine-tuning LiLT for document-understanding using Hugging Face Transformers](https://github.com/philschmid/document-ai-transformers/blob/main/training/lilt_funsd.ipynb) ([post](https://www.philschmid.de/fine-tuning-lilt#3-fine-tune-and-evaluate-lilt) by Phil Schmid) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Annotations](#annotations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://developer.ibm.com/exchanges/data/all/doclaynet/ - **Repository:** https://github.com/DS4SD/DocLayNet - **Paper:** https://doi.org/10.1145/3534678.3539043 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank: 1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout 2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals 3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail. 4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models 5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets. ### Supported Tasks and Leaderboards We are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see https://ds4sd.github.io/icdar23-doclaynet/. ## Dataset Structure ### Data Fields DocLayNet provides four types of data assets: 1.
PNG images of all pages, resized to square `1025 x 1025px` 2. Bounding-box annotations in COCO format for each PNG image 3. Extra: Single-page PDF files matching each PNG image 4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content The COCO image records are defined like this example: ```js ... { "id": 1, "width": 1025, "height": 1025, "file_name": "132a855ee8b23533d8ae69af0049c038171a06ddfcac892c3c6d7e6b4091c642.png", // Custom fields: "doc_category": "financial_reports" // high-level document category "collection": "ann_reports_00_04_fancy", // sub-collection name "doc_name": "NASDAQ_FFIN_2002.pdf", // original document filename "page_no": 9, // page number in original document "precedence": 0, // Annotation order, non-zero in case of redundant double- or triple-annotation }, ... ``` The `doc_category` field uses one of the following constants: ``` financial_reports, scientific_articles, laws_and_regulations, government_tenders, manuals, patents ``` ### Data Splits The dataset provides three splits - `train` - `val` - `test` ## Dataset Creation ### Annotations #### Annotation process The labeling guidelines used for training the annotation experts are available at [DocLayNet_Labeling_Guide_Public.pdf](https://raw.githubusercontent.com/DS4SD/DocLayNet/main/assets/DocLayNet_Labeling_Guide_Public.pdf). #### Who are the annotators? Annotations are crowdsourced. ## Additional Information ### Dataset Curators The dataset is curated by the [Deep Search team](https://ds4sd.github.io/) at IBM Research. You can contact us at [[email protected]](mailto:[email protected]). Curators: - Christoph Auer, [@cau-git](https://github.com/cau-git) - Michele Dolfi, [@dolfim-ibm](https://github.com/dolfim-ibm) - Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial) - Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM) ### Licensing Information License: [CDLA-Permissive-1.0](https://cdla.io/permissive-1-0/) ### Citation Information ```bib @article{doclaynet2022, title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation}, doi = {10.1145/3534678.3539043}, url = {https://doi.org/10.1145/3534678.3539043}, author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J}, year = {2022}, isbn = {9781450393850}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining}, pages = {3743–3751}, numpages = {9}, location = {Washington DC, USA}, series = {KDD '22} } ``` ### Contributions Thanks to [@dolfim-ibm](https://github.com/dolfim-ibm), [@cau-git](https://github.com/cau-git) for adding this dataset.
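As a small usage note (my own addition, not part of the original card), the `doc_category` field documented in the Data Fields section above can be used to keep only one category of pages; a sketch with the Hugging Face `datasets` filter API:

```python
# Sketch: keep only the financial-report pages of DocLayNet small, using the
# 'doc_category' field described in the Data Fields section above.
from datasets import load_dataset

dataset_small = load_dataset("pierreguillou/DocLayNet-small")

financial_train = dataset_small["train"].filter(
    lambda example: example["doc_category"] == "financial_reports"
)
print(len(financial_train), "financial-report pages in the train split")
```

The same call works on the `validation` and `test` splits, and with any of the other constants listed above.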
pierreguillou/DocLayNet-small
[ "task_categories:object-detection", "task_categories:image-segmentation", "task_categories:token-classification", "task_ids:instance-segmentation", "annotations_creators:crowdsourced", "size_categories:1K<n<10K", "language:en", "language:de", "language:fr", "language:ja", "license:other", "DocLayNet", "COCO", "PDF", "IBM", "Financial-Reports", "Finance", "Manuals", "Scientific-Articles", "Science", "Laws", "Law", "Regulations", "Patents", "Government-Tenders", "object-detection", "image-segmentation", "token-classification", "arxiv:2206.01062", "region:us" ]
2023-01-25T17:47:43+00:00
{"annotations_creators": ["crowdsourced"], "language": ["en", "de", "fr", "ja"], "license": "other", "size_categories": ["1K<n<10K"], "task_categories": ["object-detection", "image-segmentation", "token-classification"], "task_ids": ["instance-segmentation"], "pretty_name": "DocLayNet small", "tags": ["DocLayNet", "COCO", "PDF", "IBM", "Financial-Reports", "Finance", "Manuals", "Scientific-Articles", "Science", "Laws", "Law", "Regulations", "Patents", "Government-Tenders", "object-detection", "image-segmentation", "token-classification"]}
2023-05-17T07:56:10+00:00
[ "2206.01062" ]
[ "en", "de", "fr", "ja" ]
TAGS #task_categories-object-detection #task_categories-image-segmentation #task_categories-token-classification #task_ids-instance-segmentation #annotations_creators-crowdsourced #size_categories-1K<n<10K #language-English #language-German #language-French #language-Japanese #license-other #DocLayNet #COCO #PDF #IBM #Financial-Reports #Finance #Manuals #Scientific-Articles #Science #Laws #Law #Regulations #Patents #Government-Tenders #object-detection #image-segmentation #token-classification #arxiv-2206.01062 #region-us
# Dataset Card for DocLayNet small ## About this card (01/27/2023) ### Property and license All information from this page but the content of this paragraph "About this card (01/27/2023)" has been copied/pasted from Dataset Card for DocLayNet. DocLayNet is a dataset created by Deep Search (IBM Research) published under license CDLA-Permissive-1.0. I do not claim any rights to the data taken from this dataset and published on this page. ### DocLayNet dataset DocLayNet dataset (IBM) provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. Until today, the dataset can be downloaded through direct links or as a dataset from Hugging Face datasets: - direct links: doclaynet_core.zip (28 GiB), doclaynet_extra.zip (7.5 GiB) - Hugging Face dataset library: dataset DocLayNet Paper: DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis (06/02/2022) ### Processing into a format facilitating its use by HF notebooks These 2 options require the downloading of all the data (approximately 30GBi), which requires downloading time (about 45 mn in Google Colab) and a large space on the hard disk. These could limit experimentation for people with low resources. Moreover, even when using the download via HF datasets library, it is necessary to download the EXTRA zip separately (doclaynet_extra.zip, 7.5 GiB) to associate the annotated bounding boxes with the text extracted by OCR from the PDFs. This operation also requires additional code because the boundings boxes of the texts do not necessarily correspond to those annotated (a calculation of the percentage of area in common between the boundings boxes annotated and those of the texts makes it possible to make a comparison between them). At last, in order to use Hugging Face notebooks on fine-tuning layout models like LayoutLMv3 or LiLT, DocLayNet data must be processed in a proper format. For all these reasons, I decided to process the DocLayNet dataset: - into 3 datasets of different sizes: - DocLayNet small (about 1% of DocLayNet) < 1.000k document images (691 train, 64 val, 49 test) - DocLayNet base (about 10% of DocLayNet) < 10.000k document images (6910 train, 648 val, 499 test) - DocLayNet large (about 100% of DocLayNet) < 100.000k document images (69.103 train, 6.480 val, 4.994 test) - with associated texts and PDFs (base64 format), - and in a format facilitating their use by HF notebooks. *Note: the layout HF notebooks will greatly help participants of the IBM ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents!* ### About PDFs languages Citation of the page 3 of the DocLayNet paper: "We did not control the document selection with regard to language. The vast majority of documents contained in DocLayNet (close to 95%) are published in English language. However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features." ### About PDFs categories distribution Citation of the page 3 of the DocLayNet paper: "The pages in DocLayNet can be grouped into six distinct categories, namely Financial Reports, Manuals, Scientific Articles, Laws & Regulations, Patents and Government Tenders. 
Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories (Financial Reports and Manuals) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes." !DocLayNet PDFs categories distribution (source: DocLayNet paper) ### Download & overview The size of the DocLayNet small is about 1% of the DocLayNet dataset (random selection respectively in the train, val and test files). ### Annotated bounding boxes The DocLayNet base makes easy to display document image with the annotaed bounding boxes of paragraphes or lines. Check the notebook processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb in order to get the code. #### Paragraphes !Annotated DocLayNet document image with bounding boxes and categories of paragraphes #### Lines !Annotated DocLayNet document image with bounding boxes and categories of lines ### HF notebooks - notebooks LayoutLM (Niels Rogge) - notebooks LayoutLMv2 (Niels Rogge) - notebooks LayoutLMv3 (Niels Rogge) - notebooks LiLT (Niels Rogge) - Document AI: Fine-tuning LiLT for document-understanding using Hugging Face Transformers (post of Phil Schmid) ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Dataset Structure - Data Fields - Data Splits - Dataset Creation - Annotations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank: 1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout 2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals 3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail. 4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models 5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets. ### Supported Tasks and Leaderboards We are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see URL ## Dataset Structure ### Data Fields DocLayNet provides four types of data assets: 1. PNG images of all pages, resized to square '1025 x 1025px' 2. Bounding-box annotations in COCO format for each PNG image 3. 
Extra: Single-page PDF files matching each PNG image 4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content The COCO image record are defined like this example The 'doc_category' field uses one of the following constants: ### Data Splits The dataset provides three splits - 'train' - 'val' - 'test' ## Dataset Creation ### Annotations #### Annotation process The labeling guideline used for training of the annotation experts are available at DocLayNet_Labeling_Guide_Public.pdf. #### Who are the annotators? Annotations are crowdsourced. ## Additional Information ### Dataset Curators The dataset is curated by the Deep Search team at IBM Research. You can contact us at deepsearch-core@URL. Curators: - Christoph Auer, @cau-git - Michele Dolfi, @dolfim-ibm - Ahmed Nassar, @nassarofficial - Peter Staar, @PeterStaar-IBM ### Licensing Information License: CDLA-Permissive-1.0 ### Contributions Thanks to @dolfim-ibm, @cau-git for adding this dataset.
[ "# Dataset Card for DocLayNet small", "## About this card (01/27/2023)", "### Property and license\n\nAll information from this page but the content of this paragraph \"About this card (01/27/2023)\" has been copied/pasted from Dataset Card for DocLayNet.\n\nDocLayNet is a dataset created by Deep Search (IBM Research) published under license CDLA-Permissive-1.0. \n\nI do not claim any rights to the data taken from this dataset and published on this page.", "### DocLayNet dataset\n\nDocLayNet dataset (IBM) provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. \n\nUntil today, the dataset can be downloaded through direct links or as a dataset from Hugging Face datasets:\n- direct links: doclaynet_core.zip (28 GiB), doclaynet_extra.zip (7.5 GiB)\n- Hugging Face dataset library: dataset DocLayNet\n\nPaper: DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis (06/02/2022)", "### Processing into a format facilitating its use by HF notebooks\n\nThese 2 options require the downloading of all the data (approximately 30GBi), which requires downloading time (about 45 mn in Google Colab) and a large space on the hard disk. These could limit experimentation for people with low resources.\n\nMoreover, even when using the download via HF datasets library, it is necessary to download the EXTRA zip separately (doclaynet_extra.zip, 7.5 GiB) to associate the annotated bounding boxes with the text extracted by OCR from the PDFs. This operation also requires additional code because the boundings boxes of the texts do not necessarily correspond to those annotated (a calculation of the percentage of area in common between the boundings boxes annotated and those of the texts makes it possible to make a comparison between them).\n\nAt last, in order to use Hugging Face notebooks on fine-tuning layout models like LayoutLMv3 or LiLT, DocLayNet data must be processed in a proper format.\n\nFor all these reasons, I decided to process the DocLayNet dataset:\n- into 3 datasets of different sizes:\n - DocLayNet small (about 1% of DocLayNet) < 1.000k document images (691 train, 64 val, 49 test)\n - DocLayNet base (about 10% of DocLayNet) < 10.000k document images (6910 train, 648 val, 499 test)\n - DocLayNet large (about 100% of DocLayNet) < 100.000k document images (69.103 train, 6.480 val, 4.994 test)\n- with associated texts and PDFs (base64 format),\n- and in a format facilitating their use by HF notebooks.\n\n*Note: the layout HF notebooks will greatly help participants of the IBM ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents!*", "### About PDFs languages\n\nCitation of the page 3 of the DocLayNet paper: \n\"We did not control the document selection with regard to language. The vast majority of documents contained in DocLayNet (close to 95%) are published in English language. However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). 
While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features.\"", "### About PDFs categories distribution\n\nCitation of the page 3 of the DocLayNet paper: \n\"The pages in DocLayNet can be grouped into six distinct categories, namely Financial Reports, Manuals, Scientific Articles, Laws & Regulations, Patents and Government Tenders. Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories (Financial Reports and Manuals) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes.\"\n\n!DocLayNet PDFs categories distribution (source: DocLayNet paper)", "### Download & overview\n\nThe size of the DocLayNet small is about 1% of the DocLayNet dataset (random selection respectively in the train, val and test files).", "### Annotated bounding boxes\n\nThe DocLayNet base makes easy to display document image with the annotaed bounding boxes of paragraphes or lines.\n\nCheck the notebook processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb in order to get the code.", "#### Paragraphes\n\n!Annotated DocLayNet document image with bounding boxes and categories of paragraphes", "#### Lines\n\n!Annotated DocLayNet document image with bounding boxes and categories of lines", "### HF notebooks\n\n- notebooks LayoutLM (Niels Rogge)\n- notebooks LayoutLMv2 (Niels Rogge)\n- notebooks LayoutLMv3 (Niels Rogge)\n- notebooks LiLT (Niels Rogge)\n- Document AI: Fine-tuning LiLT for document-understanding using Hugging Face Transformers (post of Phil Schmid)", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Annotations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nDocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:\n\n1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout\n2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals\n3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.\n4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models\n5. 
*Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets.", "### Supported Tasks and Leaderboards\n\nWe are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see URL", "## Dataset Structure", "### Data Fields\n\nDocLayNet provides four types of data assets:\n\n1. PNG images of all pages, resized to square '1025 x 1025px'\n2. Bounding-box annotations in COCO format for each PNG image\n3. Extra: Single-page PDF files matching each PNG image\n4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content\n\nThe COCO image record are defined like this example\n\n\n\nThe 'doc_category' field uses one of the following constants:", "### Data Splits\n\nThe dataset provides three splits\n- 'train'\n- 'val'\n- 'test'", "## Dataset Creation", "### Annotations", "#### Annotation process\n\nThe labeling guideline used for training of the annotation experts are available at DocLayNet_Labeling_Guide_Public.pdf.", "#### Who are the annotators?\n\nAnnotations are crowdsourced.", "## Additional Information", "### Dataset Curators\n\nThe dataset is curated by the Deep Search team at IBM Research.\nYou can contact us at deepsearch-core@URL.\n\nCurators:\n- Christoph Auer, @cau-git\n- Michele Dolfi, @dolfim-ibm\n- Ahmed Nassar, @nassarofficial\n- Peter Staar, @PeterStaar-IBM", "### Licensing Information\n\nLicense: CDLA-Permissive-1.0", "### Contributions\n\nThanks to @dolfim-ibm, @cau-git for adding this dataset." ]
[ "TAGS\n#task_categories-object-detection #task_categories-image-segmentation #task_categories-token-classification #task_ids-instance-segmentation #annotations_creators-crowdsourced #size_categories-1K<n<10K #language-English #language-German #language-French #language-Japanese #license-other #DocLayNet #COCO #PDF #IBM #Financial-Reports #Finance #Manuals #Scientific-Articles #Science #Laws #Law #Regulations #Patents #Government-Tenders #object-detection #image-segmentation #token-classification #arxiv-2206.01062 #region-us \n", "# Dataset Card for DocLayNet small", "## About this card (01/27/2023)", "### Property and license\n\nAll information from this page but the content of this paragraph \"About this card (01/27/2023)\" has been copied/pasted from Dataset Card for DocLayNet.\n\nDocLayNet is a dataset created by Deep Search (IBM Research) published under license CDLA-Permissive-1.0. \n\nI do not claim any rights to the data taken from this dataset and published on this page.", "### DocLayNet dataset\n\nDocLayNet dataset (IBM) provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. \n\nUntil today, the dataset can be downloaded through direct links or as a dataset from Hugging Face datasets:\n- direct links: doclaynet_core.zip (28 GiB), doclaynet_extra.zip (7.5 GiB)\n- Hugging Face dataset library: dataset DocLayNet\n\nPaper: DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis (06/02/2022)", "### Processing into a format facilitating its use by HF notebooks\n\nThese 2 options require the downloading of all the data (approximately 30GBi), which requires downloading time (about 45 mn in Google Colab) and a large space on the hard disk. These could limit experimentation for people with low resources.\n\nMoreover, even when using the download via HF datasets library, it is necessary to download the EXTRA zip separately (doclaynet_extra.zip, 7.5 GiB) to associate the annotated bounding boxes with the text extracted by OCR from the PDFs. This operation also requires additional code because the boundings boxes of the texts do not necessarily correspond to those annotated (a calculation of the percentage of area in common between the boundings boxes annotated and those of the texts makes it possible to make a comparison between them).\n\nAt last, in order to use Hugging Face notebooks on fine-tuning layout models like LayoutLMv3 or LiLT, DocLayNet data must be processed in a proper format.\n\nFor all these reasons, I decided to process the DocLayNet dataset:\n- into 3 datasets of different sizes:\n - DocLayNet small (about 1% of DocLayNet) < 1.000k document images (691 train, 64 val, 49 test)\n - DocLayNet base (about 10% of DocLayNet) < 10.000k document images (6910 train, 648 val, 499 test)\n - DocLayNet large (about 100% of DocLayNet) < 100.000k document images (69.103 train, 6.480 val, 4.994 test)\n- with associated texts and PDFs (base64 format),\n- and in a format facilitating their use by HF notebooks.\n\n*Note: the layout HF notebooks will greatly help participants of the IBM ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents!*", "### About PDFs languages\n\nCitation of the page 3 of the DocLayNet paper: \n\"We did not control the document selection with regard to language. The vast majority of documents contained in DocLayNet (close to 95%) are published in English language. 
However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features.\"", "### About PDFs categories distribution\n\nCitation of the page 3 of the DocLayNet paper: \n\"The pages in DocLayNet can be grouped into six distinct categories, namely Financial Reports, Manuals, Scientific Articles, Laws & Regulations, Patents and Government Tenders. Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories (Financial Reports and Manuals) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes.\"\n\n!DocLayNet PDFs categories distribution (source: DocLayNet paper)", "### Download & overview\n\nThe size of the DocLayNet small is about 1% of the DocLayNet dataset (random selection respectively in the train, val and test files).", "### Annotated bounding boxes\n\nThe DocLayNet base makes easy to display document image with the annotaed bounding boxes of paragraphes or lines.\n\nCheck the notebook processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb in order to get the code.", "#### Paragraphes\n\n!Annotated DocLayNet document image with bounding boxes and categories of paragraphes", "#### Lines\n\n!Annotated DocLayNet document image with bounding boxes and categories of lines", "### HF notebooks\n\n- notebooks LayoutLM (Niels Rogge)\n- notebooks LayoutLMv2 (Niels Rogge)\n- notebooks LayoutLMv3 (Niels Rogge)\n- notebooks LiLT (Niels Rogge)\n- Document AI: Fine-tuning LiLT for document-understanding using Hugging Face Transformers (post of Phil Schmid)", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Annotations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nDocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:\n\n1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout\n2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals\n3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.\n4. 
*Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models\n5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets.", "### Supported Tasks and Leaderboards\n\nWe are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see URL", "## Dataset Structure", "### Data Fields\n\nDocLayNet provides four types of data assets:\n\n1. PNG images of all pages, resized to square '1025 x 1025px'\n2. Bounding-box annotations in COCO format for each PNG image\n3. Extra: Single-page PDF files matching each PNG image\n4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content\n\nThe COCO image record are defined like this example\n\n\n\nThe 'doc_category' field uses one of the following constants:", "### Data Splits\n\nThe dataset provides three splits\n- 'train'\n- 'val'\n- 'test'", "## Dataset Creation", "### Annotations", "#### Annotation process\n\nThe labeling guideline used for training of the annotation experts are available at DocLayNet_Labeling_Guide_Public.pdf.", "#### Who are the annotators?\n\nAnnotations are crowdsourced.", "## Additional Information", "### Dataset Curators\n\nThe dataset is curated by the Deep Search team at IBM Research.\nYou can contact us at deepsearch-core@URL.\n\nCurators:\n- Christoph Auer, @cau-git\n- Michele Dolfi, @dolfim-ibm\n- Ahmed Nassar, @nassarofficial\n- Peter Staar, @PeterStaar-IBM", "### Licensing Information\n\nLicense: CDLA-Permissive-1.0", "### Contributions\n\nThanks to @dolfim-ibm, @cau-git for adding this dataset." ]
86fa5ebffa3d336210ee1eeeec349b2c7f07899b
# Dataset Card for DocLayNet base ## About this card (01/27/2023) ### Property and license All information from this page but the content of this paragraph "About this card (01/27/2023)" has been copied/pasted from [Dataset Card for DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet). DocLayNet is a dataset created by Deep Search (IBM Research) published under [license CDLA-Permissive-1.0](https://huggingface.co/datasets/ds4sd/DocLayNet#licensing-information). I do not claim any rights to the data taken from this dataset and published on this page. ### DocLayNet dataset [DocLayNet dataset](https://github.com/DS4SD/DocLayNet) (IBM) provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. To date, the dataset can be downloaded through direct links or as a dataset from Hugging Face datasets: - direct links: [doclaynet_core.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_core.zip) (28 GiB), [doclaynet_extra.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_extra.zip) (7.5 GiB) - Hugging Face dataset library: [dataset DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet) Paper: [DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis](https://arxiv.org/abs/2206.01062) (06/02/2022) ### Processing into a format facilitating its use by HF notebooks Both download options require getting all the data (approximately 30 GiB), which takes time (about 45 min in Google Colab) and a large amount of disk space. This could limit experimentation for people with limited resources. Moreover, even when downloading via the HF datasets library, it is necessary to download the EXTRA zip separately ([doclaynet_extra.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_extra.zip), 7.5 GiB) to associate the annotated bounding boxes with the text extracted by OCR from the PDFs. This operation also requires additional code because the bounding boxes of the texts do not necessarily correspond to those annotated (computing the percentage of area shared between the annotated bounding boxes and those of the texts makes it possible to match them). Finally, in order to use Hugging Face notebooks on fine-tuning layout models like LayoutLMv3 or LiLT, DocLayNet data must be processed into a proper format. For all these reasons, I decided to process the DocLayNet dataset: - into 3 datasets of different sizes: - [DocLayNet small](https://huggingface.co/datasets/pierreguillou/DocLayNet-small) (about 1% of DocLayNet) < 1,000 document images (691 train, 64 val, 49 test) - [DocLayNet base](https://huggingface.co/datasets/pierreguillou/DocLayNet-base) (about 10% of DocLayNet) < 10,000 document images (6910 train, 648 val, 499 test) - [DocLayNet large](https://huggingface.co/datasets/pierreguillou/DocLayNet-large) (about 100% of DocLayNet) < 100,000 document images (69,103 train, 6,480 val, 4,994 test) - with associated texts and PDFs (base64 format), - and in a format facilitating their use by HF notebooks.
*Note: the layout HF notebooks will greatly help participants of the IBM [ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents](https://ds4sd.github.io/icdar23-doclaynet/)!* ### About PDFs languages Citation from page 3 of the [DocLayNet paper](https://arxiv.org/abs/2206.01062): "We did not control the document selection with regard to language. **The vast majority of documents contained in DocLayNet (close to 95%) are published in English language.** However, **DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%)**. While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features." ### About PDFs categories distribution Citation from page 3 of the [DocLayNet paper](https://arxiv.org/abs/2206.01062): "The pages in DocLayNet can be grouped into **six distinct categories**, namely **Financial Reports, Manuals, Scientific Articles, Laws & Regulations, Patents and Government Tenders**. Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories (Financial Reports and Manuals) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes." ![DocLayNet PDFs categories distribution (source: DocLayNet paper)](https://huggingface.co/datasets/pierreguillou/DocLayNet-base/resolve/main/DocLayNet_PDFs_categories_distribution.png) ### Download & overview The size of DocLayNet base is about 10% of the DocLayNet dataset (random selections made respectively within the train, val and test files). ``` # !pip install -q datasets from datasets import load_dataset dataset_base = load_dataset("pierreguillou/DocLayNet-base") # overview of dataset_base DatasetDict({ train: Dataset({ features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'], num_rows: 6910 }) validation: Dataset({ features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'], num_rows: 648 }) test: Dataset({ features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'], num_rows: 499 }) }) ``` ### Annotated bounding boxes DocLayNet base makes it easy to display a document image with the annotated bounding boxes of paragraphs or lines. Check the notebook [processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb](https://github.com/piegu/language-models/blob/master/processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb) in order to get the code.
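As with the small variant, here is a minimal sketch (mine, not the reference notebook) of pairing each extracted text segment with its line-level bounding box, assuming the `texts` and `bboxes_line` fields listed in the overview above are parallel lists:

```python
# Sketch: pair each text segment of a DocLayNet-base page with its line box.
# Assumption: 'texts' and 'bboxes_line' are parallel lists (one box per entry).
from datasets import load_dataset

dataset_base = load_dataset("pierreguillou/DocLayNet-base")
example = dataset_base["train"][0]

for text, bbox in list(zip(example["texts"], example["bboxes_line"]))[:5]:
    print(bbox, "->", text)
```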
#### Paragraphs ![Annotated DocLayNet document image with bounding boxes and categories of paragraphs](https://huggingface.co/datasets/pierreguillou/DocLayNet-base/resolve/main/DocLayNet_image_annotated_bounding_boxes_paragraph.png) #### Lines ![Annotated DocLayNet document image with bounding boxes and categories of lines](https://huggingface.co/datasets/pierreguillou/DocLayNet-base/resolve/main/DocLayNet_image_annotated_bounding_boxes_line.png) ### HF notebooks - [notebooks LayoutLM](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLM) (Niels Rogge) - [notebooks LayoutLMv2](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv2) (Niels Rogge) - [notebooks LayoutLMv3](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv3) (Niels Rogge) - [notebooks LiLT](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LiLT) (Niels Rogge) - [Document AI: Fine-tuning LiLT for document-understanding using Hugging Face Transformers](https://github.com/philschmid/document-ai-transformers/blob/main/training/lilt_funsd.ipynb) ([post](https://www.philschmid.de/fine-tuning-lilt#3-fine-tune-and-evaluate-lilt) of Phil Schmid) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Annotations](#annotations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://developer.ibm.com/exchanges/data/all/doclaynet/ - **Repository:** https://github.com/DS4SD/DocLayNet - **Paper:** https://doi.org/10.1145/3534678.3539043 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank: 1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout 2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals 3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail. 4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models 5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets. ### Supported Tasks and Leaderboards We are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see https://ds4sd.github.io/icdar23-doclaynet/. ## Dataset Structure ### Data Fields DocLayNet provides four types of data assets: 1.
PNG images of all pages, resized to square `1025 x 1025px` 2. Bounding-box annotations in COCO format for each PNG image 3. Extra: Single-page PDF files matching each PNG image 4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content The COCO image records are defined like this example: ```js ... { "id": 1, "width": 1025, "height": 1025, "file_name": "132a855ee8b23533d8ae69af0049c038171a06ddfcac892c3c6d7e6b4091c642.png", // Custom fields: "doc_category": "financial_reports" // high-level document category "collection": "ann_reports_00_04_fancy", // sub-collection name "doc_name": "NASDAQ_FFIN_2002.pdf", // original document filename "page_no": 9, // page number in original document "precedence": 0, // Annotation order, non-zero in case of redundant double- or triple-annotation }, ... ``` The `doc_category` field uses one of the following constants: ``` financial_reports, scientific_articles, laws_and_regulations, government_tenders, manuals, patents ``` ### Data Splits The dataset provides three splits - `train` - `val` - `test` ## Dataset Creation ### Annotations #### Annotation process The labeling guidelines used for training the annotation experts are available at [DocLayNet_Labeling_Guide_Public.pdf](https://raw.githubusercontent.com/DS4SD/DocLayNet/main/assets/DocLayNet_Labeling_Guide_Public.pdf). #### Who are the annotators? Annotations are crowdsourced. ## Additional Information ### Dataset Curators The dataset is curated by the [Deep Search team](https://ds4sd.github.io/) at IBM Research. You can contact us at [[email protected]](mailto:[email protected]). Curators: - Christoph Auer, [@cau-git](https://github.com/cau-git) - Michele Dolfi, [@dolfim-ibm](https://github.com/dolfim-ibm) - Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial) - Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM) ### Licensing Information License: [CDLA-Permissive-1.0](https://cdla.io/permissive-1-0/) ### Citation Information ```bib @article{doclaynet2022, title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation}, doi = {10.1145/3534678.3539043}, url = {https://doi.org/10.1145/3534678.3539043}, author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J}, year = {2022}, isbn = {9781450393850}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining}, pages = {3743–3751}, numpages = {9}, location = {Washington DC, USA}, series = {KDD '22} } ``` ### Contributions Thanks to [@dolfim-ibm](https://github.com/dolfim-ibm), [@cau-git](https://github.com/cau-git) for adding this dataset.
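To complement the Data Splits and `doc_category` documentation above, here is a short sketch (my own illustration, assuming the Hugging Face `datasets` API) of checking how the pages of DocLayNet base spread over the category constants:

```python
# Sketch: count DocLayNet-base train pages per 'doc_category' value.
from collections import Counter
from datasets import load_dataset

dataset_base = load_dataset("pierreguillou/DocLayNet-base")
counts = Counter(dataset_base["train"]["doc_category"])
for category, n_pages in counts.most_common():
    print(f"{category}: {n_pages} pages")
```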
pierreguillou/DocLayNet-base
[ "task_categories:object-detection", "task_categories:image-segmentation", "task_categories:token-classification", "task_ids:instance-segmentation", "annotations_creators:crowdsourced", "size_categories:1K<n<10K", "language:en", "language:de", "language:fr", "language:ja", "license:other", "DocLayNet", "COCO", "PDF", "IBM", "Financial-Reports", "Finance", "Manuals", "Scientific-Articles", "Science", "Laws", "Law", "Regulations", "Patents", "Government-Tenders", "object-detection", "image-segmentation", "token-classification", "arxiv:2206.01062", "region:us" ]
2023-01-25T17:53:26+00:00
{"annotations_creators": ["crowdsourced"], "language": ["en", "de", "fr", "ja"], "license": "other", "size_categories": ["1K<n<10K"], "task_categories": ["object-detection", "image-segmentation", "token-classification"], "task_ids": ["instance-segmentation"], "pretty_name": "DocLayNet base", "tags": ["DocLayNet", "COCO", "PDF", "IBM", "Financial-Reports", "Finance", "Manuals", "Scientific-Articles", "Science", "Laws", "Law", "Regulations", "Patents", "Government-Tenders", "object-detection", "image-segmentation", "token-classification"]}
2023-05-17T07:56:30+00:00
[ "2206.01062" ]
[ "en", "de", "fr", "ja" ]
TAGS #task_categories-object-detection #task_categories-image-segmentation #task_categories-token-classification #task_ids-instance-segmentation #annotations_creators-crowdsourced #size_categories-1K<n<10K #language-English #language-German #language-French #language-Japanese #license-other #DocLayNet #COCO #PDF #IBM #Financial-Reports #Finance #Manuals #Scientific-Articles #Science #Laws #Law #Regulations #Patents #Government-Tenders #object-detection #image-segmentation #token-classification #arxiv-2206.01062 #region-us
# Dataset Card for DocLayNet base ## About this card (01/27/2023) ### Property and license All information from this page but the content of this paragraph "About this card (01/27/2023)" has been copied/pasted from Dataset Card for DocLayNet. DocLayNet is a dataset created by Deep Search (IBM Research) published under license CDLA-Permissive-1.0. I do not claim any rights to the data taken from this dataset and published on this page. ### DocLayNet dataset DocLayNet dataset (IBM) provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. Until today, the dataset can be downloaded through direct links or as a dataset from Hugging Face datasets: - direct links: doclaynet_core.zip (28 GiB), doclaynet_extra.zip (7.5 GiB) - Hugging Face dataset library: dataset DocLayNet Paper: DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis (06/02/2022) ### Processing into a format facilitating its use by HF notebooks These 2 options require the downloading of all the data (approximately 30GBi), which requires downloading time (about 45 mn in Google Colab) and a large space on the hard disk. These could limit experimentation for people with low resources. Moreover, even when using the download via HF datasets library, it is necessary to download the EXTRA zip separately (doclaynet_extra.zip, 7.5 GiB) to associate the annotated bounding boxes with the text extracted by OCR from the PDFs. This operation also requires additional code because the boundings boxes of the texts do not necessarily correspond to those annotated (a calculation of the percentage of area in common between the boundings boxes annotated and those of the texts makes it possible to make a comparison between them). At last, in order to use Hugging Face notebooks on fine-tuning layout models like LayoutLMv3 or LiLT, DocLayNet data must be processed in a proper format. For all these reasons, I decided to process the DocLayNet dataset: - into 3 datasets of different sizes: - DocLayNet small (about 1% of DocLayNet) < 1.000k document images (691 train, 64 val, 49 test) - DocLayNet base (about 10% of DocLayNet) < 10.000k document images (6910 train, 648 val, 499 test) - DocLayNet large (about 100% of DocLayNet) < 100.000k document images (69.103 train, 6.480 val, 4.994 test) - with associated texts and PDFs (base64 format), - and in a format facilitating their use by HF notebooks. *Note: the layout HF notebooks will greatly help participants of the IBM ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents!* ### About PDFs languages Citation of the page 3 of the DocLayNet paper: "We did not control the document selection with regard to language. The vast majority of documents contained in DocLayNet (close to 95%) are published in English language. However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features." ### About PDFs categories distribution Citation of the page 3 of the DocLayNet paper: "The pages in DocLayNet can be grouped into six distinct categories, namely Financial Reports, Manuals, Scientific Articles, Laws & Regulations, Patents and Government Tenders. 
Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories (Financial Reports and Manuals) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes." !DocLayNet PDFs categories distribution (source: DocLayNet paper) ### Download & overview The size of the DocLayNet small is about 10% of the DocLayNet dataset (random selection respectively in the train, val and test files). ### Annotated bounding boxes The DocLayNet base makes easy to display document image with the annotaed bounding boxes of paragraphes or lines. Check the notebook processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb in order to get the code. #### Paragraphes !Annotated DocLayNet document image with bounding boxes and categories of paragraphes #### Lines !Annotated DocLayNet document image with bounding boxes and categories of lines ### HF notebooks - notebooks LayoutLM (Niels Rogge) - notebooks LayoutLMv2 (Niels Rogge) - notebooks LayoutLMv3 (Niels Rogge) - notebooks LiLT (Niels Rogge) - Document AI: Fine-tuning LiLT for document-understanding using Hugging Face Transformers (post of Phil Schmid) ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Dataset Structure - Data Fields - Data Splits - Dataset Creation - Annotations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank: 1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout 2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals 3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail. 4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models 5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets. ### Supported Tasks and Leaderboards We are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see URL ## Dataset Structure ### Data Fields DocLayNet provides four types of data assets: 1. PNG images of all pages, resized to square '1025 x 1025px' 2. Bounding-box annotations in COCO format for each PNG image 3. 
Extra: Single-page PDF files matching each PNG image 4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content The COCO image record are defined like this example The 'doc_category' field uses one of the following constants: ### Data Splits The dataset provides three splits - 'train' - 'val' - 'test' ## Dataset Creation ### Annotations #### Annotation process The labeling guideline used for training of the annotation experts are available at DocLayNet_Labeling_Guide_Public.pdf. #### Who are the annotators? Annotations are crowdsourced. ## Additional Information ### Dataset Curators The dataset is curated by the Deep Search team at IBM Research. You can contact us at deepsearch-core@URL. Curators: - Christoph Auer, @cau-git - Michele Dolfi, @dolfim-ibm - Ahmed Nassar, @nassarofficial - Peter Staar, @PeterStaar-IBM ### Licensing Information License: CDLA-Permissive-1.0 ### Contributions Thanks to @dolfim-ibm, @cau-git for adding this dataset.
[ "# Dataset Card for DocLayNet base", "## About this card (01/27/2023)", "### Property and license\n\nAll information from this page but the content of this paragraph \"About this card (01/27/2023)\" has been copied/pasted from Dataset Card for DocLayNet.\n\nDocLayNet is a dataset created by Deep Search (IBM Research) published under license CDLA-Permissive-1.0. \n\nI do not claim any rights to the data taken from this dataset and published on this page.", "### DocLayNet dataset\n\nDocLayNet dataset (IBM) provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. \n\nUntil today, the dataset can be downloaded through direct links or as a dataset from Hugging Face datasets:\n- direct links: doclaynet_core.zip (28 GiB), doclaynet_extra.zip (7.5 GiB)\n- Hugging Face dataset library: dataset DocLayNet\n\nPaper: DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis (06/02/2022)", "### Processing into a format facilitating its use by HF notebooks\n\nThese 2 options require the downloading of all the data (approximately 30GBi), which requires downloading time (about 45 mn in Google Colab) and a large space on the hard disk. These could limit experimentation for people with low resources.\n\nMoreover, even when using the download via HF datasets library, it is necessary to download the EXTRA zip separately (doclaynet_extra.zip, 7.5 GiB) to associate the annotated bounding boxes with the text extracted by OCR from the PDFs. This operation also requires additional code because the boundings boxes of the texts do not necessarily correspond to those annotated (a calculation of the percentage of area in common between the boundings boxes annotated and those of the texts makes it possible to make a comparison between them).\n\nAt last, in order to use Hugging Face notebooks on fine-tuning layout models like LayoutLMv3 or LiLT, DocLayNet data must be processed in a proper format.\n\nFor all these reasons, I decided to process the DocLayNet dataset:\n- into 3 datasets of different sizes:\n - DocLayNet small (about 1% of DocLayNet) < 1.000k document images (691 train, 64 val, 49 test)\n - DocLayNet base (about 10% of DocLayNet) < 10.000k document images (6910 train, 648 val, 499 test)\n - DocLayNet large (about 100% of DocLayNet) < 100.000k document images (69.103 train, 6.480 val, 4.994 test)\n- with associated texts and PDFs (base64 format),\n- and in a format facilitating their use by HF notebooks.\n\n*Note: the layout HF notebooks will greatly help participants of the IBM ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents!*", "### About PDFs languages\n\nCitation of the page 3 of the DocLayNet paper: \n\"We did not control the document selection with regard to language. The vast majority of documents contained in DocLayNet (close to 95%) are published in English language. However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). 
While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features.\"", "### About PDFs categories distribution\n\nCitation of the page 3 of the DocLayNet paper: \n\"The pages in DocLayNet can be grouped into six distinct categories, namely Financial Reports, Manuals, Scientific Articles, Laws & Regulations, Patents and Government Tenders. Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories (Financial Reports and Manuals) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes.\"\n\n!DocLayNet PDFs categories distribution (source: DocLayNet paper)", "### Download & overview\n\nThe size of the DocLayNet small is about 10% of the DocLayNet dataset (random selection respectively in the train, val and test files).", "### Annotated bounding boxes\n\nThe DocLayNet base makes easy to display document image with the annotaed bounding boxes of paragraphes or lines.\n\nCheck the notebook processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb in order to get the code.", "#### Paragraphes\n\n!Annotated DocLayNet document image with bounding boxes and categories of paragraphes", "#### Lines\n\n!Annotated DocLayNet document image with bounding boxes and categories of lines", "### HF notebooks\n\n- notebooks LayoutLM (Niels Rogge)\n- notebooks LayoutLMv2 (Niels Rogge)\n- notebooks LayoutLMv3 (Niels Rogge)\n- notebooks LiLT (Niels Rogge)\n- Document AI: Fine-tuning LiLT for document-understanding using Hugging Face Transformers (post of Phil Schmid)", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Annotations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nDocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:\n\n1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout\n2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals\n3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.\n4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models\n5. 
*Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets.", "### Supported Tasks and Leaderboards\n\nWe are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see URL", "## Dataset Structure", "### Data Fields\n\nDocLayNet provides four types of data assets:\n\n1. PNG images of all pages, resized to square '1025 x 1025px'\n2. Bounding-box annotations in COCO format for each PNG image\n3. Extra: Single-page PDF files matching each PNG image\n4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content\n\nThe COCO image record are defined like this example\n\n\n\nThe 'doc_category' field uses one of the following constants:", "### Data Splits\n\nThe dataset provides three splits\n- 'train'\n- 'val'\n- 'test'", "## Dataset Creation", "### Annotations", "#### Annotation process\n\nThe labeling guideline used for training of the annotation experts are available at DocLayNet_Labeling_Guide_Public.pdf.", "#### Who are the annotators?\n\nAnnotations are crowdsourced.", "## Additional Information", "### Dataset Curators\n\nThe dataset is curated by the Deep Search team at IBM Research.\nYou can contact us at deepsearch-core@URL.\n\nCurators:\n- Christoph Auer, @cau-git\n- Michele Dolfi, @dolfim-ibm\n- Ahmed Nassar, @nassarofficial\n- Peter Staar, @PeterStaar-IBM", "### Licensing Information\n\nLicense: CDLA-Permissive-1.0", "### Contributions\n\nThanks to @dolfim-ibm, @cau-git for adding this dataset." ]
[ "TAGS\n#task_categories-object-detection #task_categories-image-segmentation #task_categories-token-classification #task_ids-instance-segmentation #annotations_creators-crowdsourced #size_categories-1K<n<10K #language-English #language-German #language-French #language-Japanese #license-other #DocLayNet #COCO #PDF #IBM #Financial-Reports #Finance #Manuals #Scientific-Articles #Science #Laws #Law #Regulations #Patents #Government-Tenders #object-detection #image-segmentation #token-classification #arxiv-2206.01062 #region-us \n", "# Dataset Card for DocLayNet base", "## About this card (01/27/2023)", "### Property and license\n\nAll information from this page but the content of this paragraph \"About this card (01/27/2023)\" has been copied/pasted from Dataset Card for DocLayNet.\n\nDocLayNet is a dataset created by Deep Search (IBM Research) published under license CDLA-Permissive-1.0. \n\nI do not claim any rights to the data taken from this dataset and published on this page.", "### DocLayNet dataset\n\nDocLayNet dataset (IBM) provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. \n\nUntil today, the dataset can be downloaded through direct links or as a dataset from Hugging Face datasets:\n- direct links: doclaynet_core.zip (28 GiB), doclaynet_extra.zip (7.5 GiB)\n- Hugging Face dataset library: dataset DocLayNet\n\nPaper: DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis (06/02/2022)", "### Processing into a format facilitating its use by HF notebooks\n\nThese 2 options require the downloading of all the data (approximately 30GBi), which requires downloading time (about 45 mn in Google Colab) and a large space on the hard disk. These could limit experimentation for people with low resources.\n\nMoreover, even when using the download via HF datasets library, it is necessary to download the EXTRA zip separately (doclaynet_extra.zip, 7.5 GiB) to associate the annotated bounding boxes with the text extracted by OCR from the PDFs. This operation also requires additional code because the boundings boxes of the texts do not necessarily correspond to those annotated (a calculation of the percentage of area in common between the boundings boxes annotated and those of the texts makes it possible to make a comparison between them).\n\nAt last, in order to use Hugging Face notebooks on fine-tuning layout models like LayoutLMv3 or LiLT, DocLayNet data must be processed in a proper format.\n\nFor all these reasons, I decided to process the DocLayNet dataset:\n- into 3 datasets of different sizes:\n - DocLayNet small (about 1% of DocLayNet) < 1.000k document images (691 train, 64 val, 49 test)\n - DocLayNet base (about 10% of DocLayNet) < 10.000k document images (6910 train, 648 val, 499 test)\n - DocLayNet large (about 100% of DocLayNet) < 100.000k document images (69.103 train, 6.480 val, 4.994 test)\n- with associated texts and PDFs (base64 format),\n- and in a format facilitating their use by HF notebooks.\n\n*Note: the layout HF notebooks will greatly help participants of the IBM ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents!*", "### About PDFs languages\n\nCitation of the page 3 of the DocLayNet paper: \n\"We did not control the document selection with regard to language. The vast majority of documents contained in DocLayNet (close to 95%) are published in English language. 
However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features.\"", "### About PDFs categories distribution\n\nCitation of the page 3 of the DocLayNet paper: \n\"The pages in DocLayNet can be grouped into six distinct categories, namely Financial Reports, Manuals, Scientific Articles, Laws & Regulations, Patents and Government Tenders. Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories (Financial Reports and Manuals) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes.\"\n\n!DocLayNet PDFs categories distribution (source: DocLayNet paper)", "### Download & overview\n\nThe size of the DocLayNet small is about 10% of the DocLayNet dataset (random selection respectively in the train, val and test files).", "### Annotated bounding boxes\n\nThe DocLayNet base makes easy to display document image with the annotaed bounding boxes of paragraphes or lines.\n\nCheck the notebook processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb in order to get the code.", "#### Paragraphes\n\n!Annotated DocLayNet document image with bounding boxes and categories of paragraphes", "#### Lines\n\n!Annotated DocLayNet document image with bounding boxes and categories of lines", "### HF notebooks\n\n- notebooks LayoutLM (Niels Rogge)\n- notebooks LayoutLMv2 (Niels Rogge)\n- notebooks LayoutLMv3 (Niels Rogge)\n- notebooks LiLT (Niels Rogge)\n- Document AI: Fine-tuning LiLT for document-understanding using Hugging Face Transformers (post of Phil Schmid)", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Annotations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nDocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:\n\n1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout\n2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals\n3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.\n4. 
*Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models\n5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets.", "### Supported Tasks and Leaderboards\n\nWe are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see URL", "## Dataset Structure", "### Data Fields\n\nDocLayNet provides four types of data assets:\n\n1. PNG images of all pages, resized to square '1025 x 1025px'\n2. Bounding-box annotations in COCO format for each PNG image\n3. Extra: Single-page PDF files matching each PNG image\n4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content\n\nThe COCO image record are defined like this example\n\n\n\nThe 'doc_category' field uses one of the following constants:", "### Data Splits\n\nThe dataset provides three splits\n- 'train'\n- 'val'\n- 'test'", "## Dataset Creation", "### Annotations", "#### Annotation process\n\nThe labeling guideline used for training of the annotation experts are available at DocLayNet_Labeling_Guide_Public.pdf.", "#### Who are the annotators?\n\nAnnotations are crowdsourced.", "## Additional Information", "### Dataset Curators\n\nThe dataset is curated by the Deep Search team at IBM Research.\nYou can contact us at deepsearch-core@URL.\n\nCurators:\n- Christoph Auer, @cau-git\n- Michele Dolfi, @dolfim-ibm\n- Ahmed Nassar, @nassarofficial\n- Peter Staar, @PeterStaar-IBM", "### Licensing Information\n\nLicense: CDLA-Permissive-1.0", "### Contributions\n\nThanks to @dolfim-ibm, @cau-git for adding this dataset." ]
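The card above notes that the OCR text cells shipped in doclaynet_extra.zip do not exactly coincide with the annotated bounding boxes, so the two are matched by the percentage of area they share. The Python sketch below illustrates one way such a matching could be implemented; the field names (`bbox`, `category`), the box format, and the 0.5 threshold are illustrative assumptions, not part of the official DocLayNet tooling.

```python
# Illustrative sketch only: match OCR text cells to annotated layout boxes by
# the fraction of the text cell's area covered by the annotation.

def overlap_ratio(text_box, ann_box):
    """Boxes are (x0, y0, x1, y1) in the same page coordinate system."""
    x0, y0 = max(text_box[0], ann_box[0]), max(text_box[1], ann_box[1])
    x1, y1 = min(text_box[2], ann_box[2]), min(text_box[3], ann_box[3])
    inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)
    cell_area = (text_box[2] - text_box[0]) * (text_box[3] - text_box[1])
    return inter / cell_area if cell_area > 0 else 0.0

def assign_labels(text_cells, annotations, threshold=0.5):
    """Attach to each text cell the category of the annotation it overlaps most."""
    labeled = []
    for cell in text_cells:
        best = max(annotations, key=lambda a: overlap_ratio(cell["bbox"], a["bbox"]), default=None)
        if best is not None and overlap_ratio(cell["bbox"], best["bbox"]) >= threshold:
            labeled.append({"text": cell["text"], "category": best["category"]})
    return labeled
```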
ade0789f658fd356185f9cc1438d268835b99204
<h1 style="text-align: center;">MPSC Multi-view Dataset</h1> <p style='text-align: justify;'> Deep video representation learning has recently attained state-of-the-art performance in video action recognition. However, when used with video clips from varied perspectives, the performance of these models degrades significantly. Existing VAR models frequently contain both view information and action attributes simultaneously, making it difficult to learn a view-invariant representation. Therefore, to study the attributes of multiview representation, we collected a large-scale time-synchronized multiview video dataset from 10 subjects in both indoor and outdoor settings performing 10 different actions with three horizontal and vertical viewpoints using a smartphone, an action camera, and a drone camera. We provide the multiview video dataset with various meta-data information to facilitate further research for robust VAR systems. </p> ### Collecting multiview videos <p style='text-align: justify;'> In our data collection strategy, we chose regular sensors (smartphone camera), wide-angle sensors (GoPro, action camera), and drone cameras covering front-view, side-view, and top-view positions to receive three simultaneous 2D projections of ten action events. To collect multi-angular and positional projections of the same actions, smartphones (Samsung S8 plus, flat-angle sensor), action cameras (Dragon touch EK700, wide-angle sensor), and a drone (Parrot Anafi, flat-angle sensor) capture the action events simultaneously from different positions in 1080p at 30 FPS. Among the cameras, the smartphone was hand-held and tracked the events. The action camera was placed in a stationary position and captured the events using its wide-view sensor. Both of them were positioned approximately 6 feet away from the participants to capture two completely different side-views of the actions from a horizontal position. Lastly, the drone captured the events' top view while flying at a low altitude varying from 8 feet to 15 feet. Although we positioned the cameras to capture events from a particular angular position with some occasional movement, this setup effectively captured an almost complete view of the actions, as the volunteers turned in different directions to perform different actions without any constraints. </p> <p style='text-align: justify;'> We have selected ten regular micro-actions in our dataset with both static (poses: sitting, standing, lying with face up, lying with face down) and dynamic actions (temporal patterns: walking, push up, waving hand, leg exercise, object carrying, object pick/drop). We hypothesize this would provide a foundation for complex action recognition, since some complex actions require sequentially performing a subset of these micro-actions. In our selection of target actions, some actions have only minor differences to distinguish and require contextual knowledge (walking and object carrying, push-ups and lying down, lying with face down and lying with face up, standing and hand waving in standing position). Further, we have collected background-only data without any human present to provide a no-action/no-human dataset for the identical backgrounds. </p> <p style='text-align: justify;'> We collected these data [samples shown in the following figure] from 12 volunteer participants with varying traits. Each participant performs all ten actions for 30 seconds while being recorded from the three positional cameras simultaneously in each session. 
The participants provided data multiple times, under different environments with different clothing, amassing 30 sessions and yielding approximately ten hours of total video data in a time-controlled and safe setup. </p> ![plot](./fig/dataset.png) <p style='text-align: justify;'> Further, the videos are collected under varying realistic lighting conditions: natural lighting, artificial lighting, and a mix of both, in indoor and outdoor environments, with multiple realistic backgrounds like walls, doors, windows, grasses, roads, and reflective tiles, and with varying camera settings like zoom, brightness and contrast filters, and relative motions. Environments and lighting conditions are presented in the above figure. We also provide the videos containing only background to enable further research. </p> ### Data Preprocessing and AI readiness <p style='text-align: justify;'> We align each session's simultaneously recorded videos from the starting time-stamp, and at any given time, all three sensors of any particular session capture their corresponding positional projection of the same event. The alignment allows us to annotate one video file per session for the underlying action in the time duration and receive action annotation for the other two videos, significantly reducing the annotation burden for these multiview videos. </p> <p style='text-align: justify;'> Besides action information, each video is also tagged with the following meta-information: the subjects' IDs, background environments, lighting conditions, camera specifications, settings (varying zoom, brightness), camera-subject distances, and relative movements, for various research directions. Additionally, other information such as the date, time, and the trial number was also listed for each video. Multiple human volunteers manually annotated the video files, and these annotations went through multiple rounds of cross-checking. Finally, we prepare the video data in pickle file format for quick loading using python/C++/Matlab libraries. </p> ### Dataset Statistics Here we provide insight into the characteristics of our collected dataset. <p style='text-align: justify;'> <strong> 1) Inter and Intra action variations:</strong> We ensure fine-grained inter- and intra-action variation in our dataset by requesting the participants to perform similar actions in freestyle. Further, we take multiple sessions on different dates and times to incorporate inter-personal variation in the dataset. 80% of our participants provided data in multiple sessions. 58% of the participants provided their data from multiple backgrounds. We have 20% female participants, who provided data in multiple sessions. Among the actions, we have 40% stable poses as actions and 60% dynamic simple actions in our collected dataset. Further, 10% of our volunteers are athletes. Moreover, our dataset is relatively balanced, with almost equal duration for each action. </p> ![plot](./fig/actionvar.png) <p style='text-align: justify;'> <strong> 2) Background Variations:</strong> We considered different realistic backgrounds for our data collection while ensuring safety for the participants. We have 75% of the data in the indoor laboratory environment. Among that, we have 60% of the data with a white wall background with regular inventories like computers, bookshelves, doors, and windows, 25% with reflective tiles and sunny windows, and 5% under a messy laboratory background with multiple office tables and carpets. Among the 25% outdoor data, we collected 50% of the outdoor data in green fields and concrete parking spaces. 
We have about 60% of the data under artificial lighting, and the rest is in natural sunlight conditions. We also provide the backgrounds without the subjects from the three sensors' viewpoints for reference. </p> ![plot](./fig/tdist.png) <p style='text-align: justify;'> <strong>3) Viewpoint and sensor Variations:</strong> We have collected 67% of the data from the horizontal view and 33% from the top-angular positional viewpoints. 67% of our data are captured by the flat lens from an angular viewpoint, and 33% are captured via the wide-angle view from the horizontal position. 40% of the data are recorded from a stable camera position, and 60% are captured via moving camera sensors. We have 20% of the data from the subject-focused zoomed camera lens. Further, the subjects perform the actions while facing away from the sensors 20% of the time. </p> ### Reference Please refer to the following paper to cite the dataset. - Hasan, Z., Ahmed, M., Faridee, A. Z. M., Purushotham, S., Kwon, H., Lee, H., & Roy, N. (2023). NEV-NCD: Negative Learning, Entropy, and Variance regularization based novel action categories discovery. arXiv preprint arXiv:2304.07354. ### Acknowledgement <p style='text-align: justify;'> We acknowledge the support of DEVCOM Army Research Laboratory (ARL) and U.S. Army Grant No. W911NF21-20076. </p>
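As noted above, the processed videos are distributed as pickle files for quick loading from Python/C++/Matlab. A minimal Python loading sketch is shown below; the file name and the layout of the stored record are assumptions for illustration, since the card does not spell them out.

```python
import pickle

# Assumed file name; the actual MPSC release may organize sessions differently.
with open("session_01_smartphone.pkl", "rb") as f:
    record = pickle.load(f)

# Each record is expected to bundle the frames with their meta-data
# (action label, subject ID, camera, background, lighting, ...).
print(type(record))
if hasattr(record, "keys"):
    print(list(record.keys()))
```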
mahmed10/MPSC_MV
[ "task_categories:video-classification", "Video Acitvity Recognition", "region:us" ]
2023-01-25T17:53:36+00:00
{"task_categories": ["video-classification"], "tags": ["Video Acitvity Recognition"]}
2023-04-28T14:25:02+00:00
[]
[]
TAGS #task_categories-video-classification #Video Acitvity Recognition #region-us
<h1 style="text-align: center;">MPSC Multi-view Dataset</h1> <p style='text-align: justify;'> Deep video representation learning has recently attained state-of-the-art performance in video action recognition. However, when used with video clips from varied perspectives, the performance of these models degrades significantly. Existing VAR models frequently simultaneously contain both view information and action attributes, making it difficult to learn a view-invariant representation. Therefore, to study the attribute of multiview representation, we collected a large-scale time synchronous multiview video dataset from 10 subjects in both indoor and outdoor settings performing 10 different actions with three horizontal and vertical viewpoints using a smartphone, an action camera, and a drone camera. We provide the multiview video dataset with various meta-data information to facilitate further research for robust VAR systems. </p> ### Collecting multiview videos <p style='text-align: justify;'> In our data collection strategy, we choose regular sensors (smartphone camera), wide-angle sensors (go-pro, action camera), and drone cameras covering front views, side views, and top view positions to receive simultaneous three 2D projections of ten action events. To collect multi-angular and positional projections of the same actions, smartphones (Samsung S8 plus, flat-angle sensor), action cameras (Dragon touch EK700, wide-angle sensor), and a drone (Parrot Anafi, flat-angle sensor) capture the action events simultaneously from different positions in 1080p at 30 FPS. Among the cameras, the smartphone was hand-held and tracked the events. The action camera was placed in a stationary position and captured the events using its wide-view sensor. Both of them were positions approximately 6 feet away from the participants to capture two completely different side-views of the actions from horizontal position. Lastly, the drone captures the events' top view while flying at a low altitude of varying distances from 8 feet to 15 feet. Although we positioned the cameras to capture events from a particular angular position with some occasional movement, it effectively captured an almost complete-view of actions, as the volunteers turn in different directions to perform different actions without any constraints. </p> <p style='text-align: justify;'> We have selected ten regular micro-actions in our dataset with both static (poses: sitting, standing, lying with face up, lying with face down) and dynamic actions (temporal patterns: walking, push up, waving hand, leg exercise, object carrying, object pick/drop). We hypothesize this would further foundation for complex action recognition since some complex actions require sequentially performing a subset of these micro-actions. In our target actions selection, some actions have only minor differences to distinguish and require contextual knowledge (walking and object carrying, push-ups and lying down, lying with face down and lying with face up, standing and hand waving in standing position). Further, we have collected the background-only data without the human to provide a no-action/human dataset for the identical backgrounds. </p> <p style='text-align: justify;'> We collect these data [sampled shown in the follwing figure] from 12 volunteer participants with varying traits. The participant performs all ten actions for 30 seconds while being recorded from three-positional cameras simultaneously in each session. 
The participants provided data multiple times, under different environments with different clothing amassing 30 sessions, yielding approximately ten hours of total video data in a time-controlled and safe setup. </p> !plot <p style='text-align: justify;'> Further, the videos are collected under varying realistic lighting conditions; natural lighting, artificial lighting, and a mix of both indoors, and outdoor environments, and multiple realistic backgrounds like walls, doors, windows, grasses, roads, and reflective tiles with varying camera settings like zoom, brightness and contrast filter, relative motions. Environments and lighting conditions are presented in the above figure. We also provide the videos containing only background to avail further research. </p> ### Data Preprocessing and AI readiness <p style='text-align: justify;'> We align each session's simultaneously recorded videos from the starting time-stamp, and at any given time, all three sensors of any particular session capture their corresponding positional projection of the same event. The alignment allows us to annotate one video file per session for the underlying action in the time duration and receive action annotation for the other two videos, significantly reducing the annotation burden for these multiview videos. </p> <p style='text-align: justify;'> Besides action information, each video is also tagged with the following meta-information: the subjects' ID, backgrounds environments, lighting conditions, camera specifications, settings (varying zoom, brightness), camera-subject distances, and relative movements, for various research directions. Additionally, other information such as the date, time, and the trial number were also listed for each video. Multiple human volunteers manually annotated video files, and these annotations went through multiple cross-checking. Finally, we prepare the video data in pickle file format for quick loading using python/C++/Matlab libraries. </p> ### Dataset Statistics Here we provide the our collected dataset characteristics insight. <p style='text-align: justify;'> <strong> 1) Inter and Intra action variations:</strong> We ensure fine-grain inter and intra-action variation in our dataset by requesting the participants to perform similar actions in freestyle. Further, we take multiple sessions on different dates and times to incorporate inter-personal variation in the dataset. 80% of our participants provided data in multiple sessions. 58% of the participant provides their data from multiple backgrounds. We have 20% of female participants in for multiple sessions. In actions, we have 40% stable pose as action and 60% dynamic simple actions in our collected dataset. Further, 10% of our volunteers are athletes. Moreover, our dataset are relatively balanced with almost equal duration of each actions. </p> !plot <p style='text-align: justify;'> <strong> 2) Background Variations:</strong> We considered different realistic backgrounds for our data collection while ensuring safety for the participants. We have 75% data for the indoor laboratory environment. Among that, we have 60% of data with white wall background with regular inventories like computers, bookshelves, doors, and windows, 25% with reflective tiles, sunny windows, and 5% under a messy laboratory background with multiple office tables and carpets. Among the 25% outdoor data, we collected 50% of the outdoor data in green fields and concrete parking spaces. 
We have about 60% of the data in the artificial lighting, and the rest are in natural sunlight conditions. We also provide the backgrounds without the subjects from the three sensors' viewpoints for reference. </p> !plot <p style='text-align: justify;'> <strong>3) Viewpoint and sensor Variations:</strong> We have collected 67% data from the horizontal view and 33% from the top-angular positional viewpoints. Our 67% data are captured by the flat lens from a angular viewpoint, and 33% are captured via the wide angular view from the horizontal position. 40% data are recorded from the stable camera position, and 60% data are captured via moving camera sensors. We have 20% data from the subject-focused zoomed camera lens. Further, the subjects perform the actions while facing away from the sensors 20% of the time. </p> ### Reference Please refer to the following papers to cite the dataset. - Hasan, Z., Ahmed, M., Faridee, A. Z. M., Purushotham, S., Kwon, H., Lee, H., & Roy, N. (2023). NEV-NCD: Negative Learning, Entropy, and Variance regularization based novel action categories discovery. arXiv preprint arXiv:2304.07354. ### Acknowledgement <p style='text-align: justify;'> We acknowledge the support of DEVCOM Army Research Laboratory (ARL) and U.S. Army Grant No. W911NF21-20076. </p>
[ "### Collecting multiview videos\n\n<p style='text-align: justify;'> \n In our data collection strategy, we choose regular sensors (smartphone camera), wide-angle sensors (go-pro, action camera), and drone cameras covering front views, side views, and top view positions to receive simultaneous three 2D projections of ten action events. To collect multi-angular and positional projections of the same actions, smartphones (Samsung S8 plus, flat-angle sensor), action cameras (Dragon touch EK700, wide-angle sensor), and a drone (Parrot Anafi, flat-angle sensor) capture the action events simultaneously from different positions in 1080p at 30 FPS. Among the cameras, the smartphone was hand-held and tracked the events. The action camera was placed in a stationary position and captured the events using its wide-view sensor. Both of them were positions approximately 6 feet away from the participants to capture two completely different side-views of the actions from horizontal position. Lastly, the drone captures the events' top view while flying at a low altitude of varying distances from 8 feet to 15 feet. Although we positioned the cameras to capture events from a particular angular position with some occasional movement, it effectively captured an almost complete-view of actions, as the volunteers turn in different directions to perform different actions without any constraints.\n</p>\n\n<p style='text-align: justify;'>\n We have selected ten regular micro-actions in our dataset with both static (poses: sitting, standing, lying with face up, lying with face down) and dynamic actions (temporal patterns: walking, push up, waving hand, leg exercise, object carrying, object pick/drop). We hypothesize this would further foundation for complex action recognition since some complex actions require sequentially performing a subset of these micro-actions. In our target actions selection, some actions have only minor differences to distinguish and require contextual knowledge (walking and object carrying, push-ups and lying down, lying with face down and lying with face up, standing and hand waving in standing position). Further, we have collected the background-only data without the human to provide a no-action/human dataset for the identical backgrounds.\n</p>\n\n<p style='text-align: justify;'>\n We collect these data [sampled shown in the follwing figure] from 12 volunteer participants with varying traits. The participant performs all ten actions for 30 seconds while being recorded from three-positional cameras simultaneously in each session. The participants provided data multiple times, under different environments with different clothing amassing 30 sessions, yielding approximately ten hours of total video data in a time-controlled and safe setup. \n</p>\n\n\n!plot\n\n<p style='text-align: justify;'>\n Further, the videos are collected under varying realistic lighting conditions; natural lighting, artificial lighting, and a mix of both indoors, and outdoor environments, and multiple realistic backgrounds like walls, doors, windows, grasses, roads, and reflective tiles with varying camera settings like zoom, brightness and contrast filter, relative motions. Environments and lighting conditions are presented in the above figure. 
We also provide the videos containing only background to avail further research.\n</p>", "### Data Preprocessing and AI readiness\n<p style='text-align: justify;'>\n We align each session's simultaneously recorded videos from the starting time-stamp, and at any given time, all three sensors of any particular session capture their corresponding positional projection of the same event. The alignment allows us to annotate one video file per session for the underlying action in the time duration and receive action annotation for the other two videos, significantly reducing the annotation burden for these multiview videos.\n</p>\n\n<p style='text-align: justify;'>\n Besides action information, each video is also tagged with the following meta-information: the subjects' ID, backgrounds environments, lighting conditions, camera specifications, settings (varying zoom, brightness), camera-subject distances, and relative movements, for various research directions. Additionally, other information such as the date, time, and the trial number were also listed for each video. Multiple human volunteers manually annotated video files, and these annotations went through multiple cross-checking. Finally, we prepare the video data in pickle file format for quick loading using python/C++/Matlab libraries.\n</p>", "### Dataset Statistics\n\nHere we provide the our collected dataset characteristics insight.\n\n<p style='text-align: justify;'>\n <strong> 1) Inter and Intra action variations:</strong> We ensure fine-grain inter and intra-action variation in our dataset by requesting the participants to perform similar actions in freestyle. Further, we take multiple sessions on different dates and times to incorporate inter-personal variation in the dataset. 80% of our participants provided data in multiple sessions. 58% of the participant provides their data from multiple backgrounds. We have 20% of female participants in for multiple sessions. In actions, we have 40% stable pose as action and 60% dynamic simple actions in our collected dataset. Further, 10% of our volunteers are athletes. Moreover, our dataset are relatively balanced with almost equal duration of each actions.\n</p>\n\n!plot\n\n<p style='text-align: justify;'>\n <strong> 2) Background Variations:</strong> We considered different realistic backgrounds for our data collection while ensuring safety for the participants. We have 75% data for the indoor laboratory environment. Among that, we have 60% of data with white wall background with regular inventories like computers, bookshelves, doors, and windows, 25% with reflective tiles, sunny windows, and 5% under a messy laboratory background with multiple office tables and carpets. Among the 25% outdoor data, we collected 50% of the outdoor data in green fields and concrete parking spaces. We have about 60% of the data in the artificial lighting, and the rest are in natural sunlight conditions. We also provide the backgrounds without the subjects from the three sensors' viewpoints for reference. \n</p>\n\n!plot\n\n<p style='text-align: justify;'>\n <strong>3) Viewpoint and sensor Variations:</strong> We have collected 67% data from the horizontal view and 33% from the top-angular positional viewpoints. Our 67% data are captured by the flat lens from a angular viewpoint, and 33% are captured via the wide angular view from the horizontal position. 40% data are recorded from the stable camera position, and 60% data are captured via moving camera sensors. 
We have 20% data from the subject-focused zoomed camera lens. Further, the subjects perform the actions while facing away from the sensors 20% of the time. \n</p>", "### Reference\n\nPlease refer to the following papers to cite the dataset.\n\n- Hasan, Z., Ahmed, M., Faridee, A. Z. M., Purushotham, S., Kwon, H., Lee, H., & Roy, N. (2023). NEV-NCD: Negative Learning, Entropy, and Variance regularization based novel action categories discovery. arXiv preprint arXiv:2304.07354.", "### Acknowledgement\n<p style='text-align: justify;'>\n We acknowledge the support of DEVCOM Army Research Laboratory (ARL) and U.S. Army Grant No. W911NF21-20076. \n</p>" ]
[ "TAGS\n#task_categories-video-classification #Video Acitvity Recognition #region-us \n", "### Collecting multiview videos\n\n<p style='text-align: justify;'> \n In our data collection strategy, we choose regular sensors (smartphone camera), wide-angle sensors (go-pro, action camera), and drone cameras covering front views, side views, and top view positions to receive simultaneous three 2D projections of ten action events. To collect multi-angular and positional projections of the same actions, smartphones (Samsung S8 plus, flat-angle sensor), action cameras (Dragon touch EK700, wide-angle sensor), and a drone (Parrot Anafi, flat-angle sensor) capture the action events simultaneously from different positions in 1080p at 30 FPS. Among the cameras, the smartphone was hand-held and tracked the events. The action camera was placed in a stationary position and captured the events using its wide-view sensor. Both of them were positions approximately 6 feet away from the participants to capture two completely different side-views of the actions from horizontal position. Lastly, the drone captures the events' top view while flying at a low altitude of varying distances from 8 feet to 15 feet. Although we positioned the cameras to capture events from a particular angular position with some occasional movement, it effectively captured an almost complete-view of actions, as the volunteers turn in different directions to perform different actions without any constraints.\n</p>\n\n<p style='text-align: justify;'>\n We have selected ten regular micro-actions in our dataset with both static (poses: sitting, standing, lying with face up, lying with face down) and dynamic actions (temporal patterns: walking, push up, waving hand, leg exercise, object carrying, object pick/drop). We hypothesize this would further foundation for complex action recognition since some complex actions require sequentially performing a subset of these micro-actions. In our target actions selection, some actions have only minor differences to distinguish and require contextual knowledge (walking and object carrying, push-ups and lying down, lying with face down and lying with face up, standing and hand waving in standing position). Further, we have collected the background-only data without the human to provide a no-action/human dataset for the identical backgrounds.\n</p>\n\n<p style='text-align: justify;'>\n We collect these data [sampled shown in the follwing figure] from 12 volunteer participants with varying traits. The participant performs all ten actions for 30 seconds while being recorded from three-positional cameras simultaneously in each session. The participants provided data multiple times, under different environments with different clothing amassing 30 sessions, yielding approximately ten hours of total video data in a time-controlled and safe setup. \n</p>\n\n\n!plot\n\n<p style='text-align: justify;'>\n Further, the videos are collected under varying realistic lighting conditions; natural lighting, artificial lighting, and a mix of both indoors, and outdoor environments, and multiple realistic backgrounds like walls, doors, windows, grasses, roads, and reflective tiles with varying camera settings like zoom, brightness and contrast filter, relative motions. Environments and lighting conditions are presented in the above figure. 
We also provide the videos containing only background to avail further research.\n</p>", "### Data Preprocessing and AI readiness\n<p style='text-align: justify;'>\n We align each session's simultaneously recorded videos from the starting time-stamp, and at any given time, all three sensors of any particular session capture their corresponding positional projection of the same event. The alignment allows us to annotate one video file per session for the underlying action in the time duration and receive action annotation for the other two videos, significantly reducing the annotation burden for these multiview videos.\n</p>\n\n<p style='text-align: justify;'>\n Besides action information, each video is also tagged with the following meta-information: the subjects' ID, backgrounds environments, lighting conditions, camera specifications, settings (varying zoom, brightness), camera-subject distances, and relative movements, for various research directions. Additionally, other information such as the date, time, and the trial number were also listed for each video. Multiple human volunteers manually annotated video files, and these annotations went through multiple cross-checking. Finally, we prepare the video data in pickle file format for quick loading using python/C++/Matlab libraries.\n</p>", "### Dataset Statistics\n\nHere we provide the our collected dataset characteristics insight.\n\n<p style='text-align: justify;'>\n <strong> 1) Inter and Intra action variations:</strong> We ensure fine-grain inter and intra-action variation in our dataset by requesting the participants to perform similar actions in freestyle. Further, we take multiple sessions on different dates and times to incorporate inter-personal variation in the dataset. 80% of our participants provided data in multiple sessions. 58% of the participant provides their data from multiple backgrounds. We have 20% of female participants in for multiple sessions. In actions, we have 40% stable pose as action and 60% dynamic simple actions in our collected dataset. Further, 10% of our volunteers are athletes. Moreover, our dataset are relatively balanced with almost equal duration of each actions.\n</p>\n\n!plot\n\n<p style='text-align: justify;'>\n <strong> 2) Background Variations:</strong> We considered different realistic backgrounds for our data collection while ensuring safety for the participants. We have 75% data for the indoor laboratory environment. Among that, we have 60% of data with white wall background with regular inventories like computers, bookshelves, doors, and windows, 25% with reflective tiles, sunny windows, and 5% under a messy laboratory background with multiple office tables and carpets. Among the 25% outdoor data, we collected 50% of the outdoor data in green fields and concrete parking spaces. We have about 60% of the data in the artificial lighting, and the rest are in natural sunlight conditions. We also provide the backgrounds without the subjects from the three sensors' viewpoints for reference. \n</p>\n\n!plot\n\n<p style='text-align: justify;'>\n <strong>3) Viewpoint and sensor Variations:</strong> We have collected 67% data from the horizontal view and 33% from the top-angular positional viewpoints. Our 67% data are captured by the flat lens from a angular viewpoint, and 33% are captured via the wide angular view from the horizontal position. 40% data are recorded from the stable camera position, and 60% data are captured via moving camera sensors. 
We have 20% data from the subject-focused zoomed camera lens. Further, the subjects perform the actions while facing away from the sensors 20% of the time. \n</p>", "### Reference\n\nPlease refer to the following papers to cite the dataset.\n\n- Hasan, Z., Ahmed, M., Faridee, A. Z. M., Purushotham, S., Kwon, H., Lee, H., & Roy, N. (2023). NEV-NCD: Negative Learning, Entropy, and Variance regularization based novel action categories discovery. arXiv preprint arXiv:2304.07354.", "### Acknowledgement\n<p style='text-align: justify;'>\n We acknowledge the support of DEVCOM Army Research Laboratory (ARL) and U.S. Army Grant No. W911NF21-20076. \n</p>" ]
297227012467386b09e6bf7d270d277d0e2b9325
# Dataset Card for pile-pii-scrubadub ## Dataset Description - **Repository: https://github.com/tomekkorbak/aligned-pretraining-objectives** - **Paper: Arxiv link to be added** ### Dataset Summary This dataset contains text from [The Pile](https://huggingface.co/datasets/the_pile), annotated based on the personally identifiable information (PII) in each sentence. Each document (row in the dataset) is segmented into sentences, and each sentence is given a score: the percentage of words in it that are classified as PII by [Scrubadub](https://scrubadub.readthedocs.io/en/stable/). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages This dataset is taken from [The Pile](https://huggingface.co/datasets/the_pile), which is English text. ## Dataset Structure ### Data Instances 1949977 ### Data Fields - texts (sequence): a list of the sentences in the document (segmented using [SpaCy](https://spacy.io/)) - meta (dict): the section of [The Pile](https://huggingface.co/datasets/the_pile) from which it originated - scores (sequence): a score for each sentence in the `texts` column indicating the percent of words that are detected as PII by [Scrubadub](https://scrubadub.readthedocs.io/en/stable/) - avg_score (float64): the average of the scores listed in the `scores` column - num_sents (int64): the number of sentences (and scores) in that document ### Data Splits Training set only ## Dataset Creation ### Curation Rationale This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile), a large dataset of text in English. The PII is labeled so that generative language models can be trained to avoid generating PII. ### Source Data #### Initial Data Collection and Normalization This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile). #### Who are the source language producers? Please see [The Pile](https://huggingface.co/datasets/the_pile) for the source of the dataset. ### Annotations #### Annotation process For each sentence, [Scrubadub](https://scrubadub.readthedocs.io/en/stable/) was used to detect: - email addresses - addresses and postal codes - phone numbers - credit card numbers - US social security numbers - vehicle plate numbers - dates of birth - URLs - login credentials #### Who are the annotators? [Scrubadub](https://scrubadub.readthedocs.io/en/stable/) ### Personal and Sensitive Information This dataset contains all PII that was originally contained in [The Pile](https://huggingface.co/datasets/the_pile), with all detected PII annotated. ## Considerations for Using the Data ### Social Impact of Dataset This dataset contains examples of real PII (conveniently annotated in the text!). Please take care to avoid misusing it or putting anybody in danger by publicizing their information. This dataset is intended for research purposes only. We cannot guarantee that all PII has been detected, and we cannot guarantee that models trained using it will avoid generating PII. We do not recommend deploying models trained on this data. ### Discussion of Biases This dataset contains all biases from The Pile discussed in their paper: https://arxiv.org/abs/2101.00027 ### Other Known Limitations The PII in this dataset was detected using imperfect automated detection methods. We cannot guarantee that the labels are 100% accurate. 
## Additional Information ### Dataset Curators [The Pile](https://huggingface.co/datasets/the_pile) ### Licensing Information From [The Pile](https://huggingface.co/datasets/the_pile): PubMed Central: [MIT License](https://github.com/EleutherAI/pile-pubmedcentral/blob/master/LICENSE) ### Citation Information Paper information to be added ### Contributions [The Pile](https://huggingface.co/datasets/the_pile)
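Based on the fields documented above (texts, scores, avg_score, num_sents), a minimal usage sketch with the Hugging Face datasets library might look as follows; streaming is used here only to avoid downloading the full ~1.9M-document training split at once.

```python
from datasets import load_dataset

# Stream the training split (the only split provided) rather than downloading it all.
ds = load_dataset("tomekkorbak/pile-pii-scrubadub", split="train", streaming=True)

for doc in ds.take(5):
    # Keep only sentences with a zero PII score, e.g. to assemble a scrubbed sample.
    clean = [s for s, score in zip(doc["texts"], doc["scores"]) if score == 0.0]
    print(doc["avg_score"], doc["num_sents"], len(clean))
```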
tomekkorbak/pile-pii-scrubadub
[ "task_categories:text-classification", "task_categories:other", "task_ids:acceptability-classification", "task_ids:text-scoring", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:extended|the_pile", "language:en", "license:mit", "pii", "personal", "identifiable", "information", "pretraining-with-human-feedback", "arxiv:2101.00027", "region:us" ]
2023-01-25T18:00:01+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["extended|the_pile"], "task_categories": ["text-classification", "other"], "task_ids": ["acceptability-classification", "text-scoring"], "pretty_name": "pile-pii-scrubadub", "tags": ["pii", "personal", "identifiable", "information", "pretraining-with-human-feedback"]}
2023-02-07T15:26:41+00:00
[ "2101.00027" ]
[ "en" ]
TAGS #task_categories-text-classification #task_categories-other #task_ids-acceptability-classification #task_ids-text-scoring #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-extended|the_pile #language-English #license-mit #pii #personal #identifiable #information #pretraining-with-human-feedback #arxiv-2101.00027 #region-us
# Dataset Card for pile-pii-scrubadub ## Dataset Description - Repository: URL - Paper: Arxiv link to be added ### Dataset Summary This dataset contains text from The Pile, annotated based on the personal idenfitiable information (PII) in each sentence. Each document (row in the dataset) is segmented into sentences, and each sentence is given a score: the percentage of words in it that are classified as PII by Scrubadub. ### Supported Tasks and Leaderboards ### Languages This dataset is taken from The Pile, which is English text. ## Dataset Structure ### Data Instances 1949977 ### Data Fields - texts (sequence): a list of the sentences in the document (segmented using SpaCy) - meta (dict): the section of The Pile from which it originated - scores (sequence): a score for each sentence in the 'texts' column indicating the percent of words that are detected as PII by Scrubadub - avg_score (float64): the average of the scores listed in the 'scores' column - num_sents (int64): the number of sentences (and scores) in that document ### Data Splits Training set only ## Dataset Creation ### Curation Rationale This is labeled text from The Pile, a large dataset of text in English. The PII is labeled so that generative language models can be trained to avoid generating PII. ### Source Data #### Initial Data Collection and Normalization This is labeled text from The Pile. #### Who are the source language producers? Please see The Pile for the source of the dataset. ### Annotations #### Annotation process For each sentence, Scrubadub was used to detect: - email addresses - addresses and postal codes - phone numbers - credit card numbers - US social security numbers - vehicle plates numbers - dates of birth - URLs - login credentials #### Who are the annotators? Scrubadub ### Personal and Sensitive Information This dataset contains all PII that was originally contained in The Pile, with all detected PII annotated. ## Considerations for Using the Data ### Social Impact of Dataset This dataset contains examples of real PII (conveniently annotated in the text!). Please take care to avoid misusing it or putting anybody in danger by publicizing their information. This dataset is intended for research purposes only. We cannot guarantee that all PII has been detected, and we cannot guarantee that models trained using it will avoid generating PII. We do not recommend deploying models trained on this data. ### Discussion of Biases This dataset contains all biases from The Pile discussed in their paper: URL ### Other Known Limitations The PII in this dataset was detected using imperfect automated detection methods. We cannot guarantee that the labels are 100% accurate. ## Additional Information ### Dataset Curators The Pile ### Licensing Information From The Pile: PubMed Central: MIT License Paper information to be added ### Contributions The Pile
[ "# Dataset Card for pile-pii-scrubadub", "## Dataset Description\n\n- Repository: URL \n- Paper: Arxiv link to be added", "### Dataset Summary\n\nThis dataset contains text from The Pile, annotated based on the personal idenfitiable information (PII) in each sentence.\nEach document (row in the dataset) is segmented into sentences, and each sentence is given a score: the percentage of words in it that are classified as PII by Scrubadub.", "### Supported Tasks and Leaderboards", "### Languages\n\nThis dataset is taken from The Pile, which is English text.", "## Dataset Structure", "### Data Instances\n\n1949977", "### Data Fields\n\n- texts (sequence): a list of the sentences in the document (segmented using SpaCy)\n- meta (dict): the section of The Pile from which it originated\n- scores (sequence): a score for each sentence in the 'texts' column indicating the percent of words that are detected as PII by Scrubadub\n- avg_score (float64): the average of the scores listed in the 'scores' column\n- num_sents (int64): the number of sentences (and scores) in that document", "### Data Splits\n\nTraining set only", "## Dataset Creation", "### Curation Rationale\n\nThis is labeled text from The Pile, a large dataset of text in English. The PII is labeled so that generative language models can be trained to avoid generating PII.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThis is labeled text from The Pile.", "#### Who are the source language producers?\n\nPlease see The Pile for the source of the dataset.", "### Annotations", "#### Annotation process\n\nFor each sentence, Scrubadub was used to detect:\n\n- email addresses\n- addresses and postal codes\n- phone numbers\n- credit card numbers\n- US social security numbers\n- vehicle plates numbers\n- dates of birth\n- URLs\n- login credentials", "#### Who are the annotators?\n\nScrubadub", "### Personal and Sensitive Information\n\nThis dataset contains all PII that was originally contained in The Pile, with all detected PII annotated.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset contains examples of real PII (conveniently annotated in the text!). Please take care to avoid misusing it or putting anybody in danger by publicizing their information.\nThis dataset is intended for research purposes only. We cannot guarantee that all PII has been detected, and we cannot guarantee that models trained using it will avoid generating PII.\nWe do not recommend deploying models trained on this data.", "### Discussion of Biases\n\nThis dataset contains all biases from The Pile discussed in their paper: URL", "### Other Known Limitations\n\nThe PII in this dataset was detected using imperfect automated detection methods. We cannot guarantee that the labels are 100% accurate.", "## Additional Information", "### Dataset Curators\n\nThe Pile", "### Licensing Information\n\nFrom The Pile: PubMed Central: MIT License\n\n\n\nPaper information to be added", "### Contributions\n\nThe Pile" ]
[ "TAGS\n#task_categories-text-classification #task_categories-other #task_ids-acceptability-classification #task_ids-text-scoring #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-extended|the_pile #language-English #license-mit #pii #personal #identifiable #information #pretraining-with-human-feedback #arxiv-2101.00027 #region-us \n", "# Dataset Card for pile-pii-scrubadub", "## Dataset Description\n\n- Repository: URL \n- Paper: Arxiv link to be added", "### Dataset Summary\n\nThis dataset contains text from The Pile, annotated based on the personal idenfitiable information (PII) in each sentence.\nEach document (row in the dataset) is segmented into sentences, and each sentence is given a score: the percentage of words in it that are classified as PII by Scrubadub.", "### Supported Tasks and Leaderboards", "### Languages\n\nThis dataset is taken from The Pile, which is English text.", "## Dataset Structure", "### Data Instances\n\n1949977", "### Data Fields\n\n- texts (sequence): a list of the sentences in the document (segmented using SpaCy)\n- meta (dict): the section of The Pile from which it originated\n- scores (sequence): a score for each sentence in the 'texts' column indicating the percent of words that are detected as PII by Scrubadub\n- avg_score (float64): the average of the scores listed in the 'scores' column\n- num_sents (int64): the number of sentences (and scores) in that document", "### Data Splits\n\nTraining set only", "## Dataset Creation", "### Curation Rationale\n\nThis is labeled text from The Pile, a large dataset of text in English. The PII is labeled so that generative language models can be trained to avoid generating PII.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThis is labeled text from The Pile.", "#### Who are the source language producers?\n\nPlease see The Pile for the source of the dataset.", "### Annotations", "#### Annotation process\n\nFor each sentence, Scrubadub was used to detect:\n\n- email addresses\n- addresses and postal codes\n- phone numbers\n- credit card numbers\n- US social security numbers\n- vehicle plates numbers\n- dates of birth\n- URLs\n- login credentials", "#### Who are the annotators?\n\nScrubadub", "### Personal and Sensitive Information\n\nThis dataset contains all PII that was originally contained in The Pile, with all detected PII annotated.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset contains examples of real PII (conveniently annotated in the text!). Please take care to avoid misusing it or putting anybody in danger by publicizing their information.\nThis dataset is intended for research purposes only. We cannot guarantee that all PII has been detected, and we cannot guarantee that models trained using it will avoid generating PII.\nWe do not recommend deploying models trained on this data.", "### Discussion of Biases\n\nThis dataset contains all biases from The Pile discussed in their paper: URL", "### Other Known Limitations\n\nThe PII in this dataset was detected using imperfect automated detection methods. We cannot guarantee that the labels are 100% accurate.", "## Additional Information", "### Dataset Curators\n\nThe Pile", "### Licensing Information\n\nFrom The Pile: PubMed Central: MIT License\n\n\n\nPaper information to be added", "### Contributions\n\nThe Pile" ]
a06d4250163274a43e10baad618e61f097583d27
# Dataset Card for "lat_en_loeb_morph" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
grosenthal/lat_en_loeb_morph
[ "region:us" ]
2023-01-25T18:11:22+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "la", "dtype": "string"}, {"name": "en", "dtype": "string"}, {"name": "file", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 60797479, "num_examples": 99343}, {"name": "test", "num_bytes": 628768, "num_examples": 1014}, {"name": "valid", "num_bytes": 605889, "num_examples": 1014}], "download_size": 31059812, "dataset_size": 62032136}}
2023-02-28T18:49:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for "lat_en_loeb_morph" More Information needed
[ "# Dataset Card for \"lat_en_loeb_morph\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"lat_en_loeb_morph\"\n\nMore Information needed" ]
7c87c5db319dab81696ecb1b7e9ea2eb92c8f6dd
# Dataset for training Russian language models Overall: 75G Scripts: https://github.com/IlyaGusev/rulm/tree/master/data_processing | Website | Char count (M) | Word count (M) | |-----------------|---------------|---------------| | pikabu | 14938 | 2161 | | lenta | 1008 | 135 | | stihi | 2994 | 393 | | stackoverflow | 1073 | 228 | | habr | 5112 | 753 | | taiga_fontanka | 419 | 55 | | librusec | 10149 | 1573 | | buriy | 2646 | 352 | | ods_tass | 1908 | 255 | | wiki | 3473 | 469 | | math | 987 | 177 |
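For readers who want to poke at the corpus without materialising all ~75G on disk, here is a minimal sketch of streaming it with the Hugging Face `datasets` library; the repository id and the single `text` column follow the metadata below, and the snippet is illustrative rather than part of the original scripts.

```python
from datasets import load_dataset

# Stream the training split so the ~75G corpus is not fully downloaded up front.
# The dataset exposes a single "text" column and train/validation/test splits
# (see the dataset_info metadata below).
stream = load_dataset("IlyaGusev/rulm", split="train", streaming=True)

# Peek at a few documents; purely illustrative.
for i, example in enumerate(stream):
    text = example["text"]
    print(f"doc {i}: {len(text)} chars | {text[:60]!r}")
    if i >= 2:
        break
```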
IlyaGusev/rulm
[ "task_categories:text-generation", "size_categories:10M<n<100M", "language:ru", "region:us" ]
2023-01-25T18:14:38+00:00
{"language": ["ru"], "size_categories": ["10M<n<100M"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 78609111353, "num_examples": 14811026}, {"name": "test", "num_bytes": 397130292, "num_examples": 74794}, {"name": "validation", "num_bytes": 395354867, "num_examples": 74691}], "download_size": 24170140196, "dataset_size": 79401596512}}
2023-03-20T23:53:53+00:00
[]
[ "ru" ]
TAGS #task_categories-text-generation #size_categories-10M<n<100M #language-Russian #region-us
Dataset for training Russian language models ============================================ Overall: 75G Scripts: URL Website: pikabu, Char count (M): 14938, Word count (M): 2161 Website: lenta, Char count (M): 1008, Word count (M): 135 Website: stihi, Char count (M): 2994, Word count (M): 393 Website: stackoverflow, Char count (M): 1073, Word count (M): 228 Website: habr, Char count (M): 5112, Word count (M): 753 Website: taiga\_fontanka, Char count (M): 419, Word count (M): 55 Website: librusec, Char count (M): 10149, Word count (M): 1573 Website: buriy, Char count (M): 2646, Word count (M): 352 Website: ods\_tass, Char count (M): 1908, Word count (M): 255 Website: wiki, Char count (M): 3473, Word count (M): 469 Website: math, Char count (M): 987, Word count (M): 177
[]
[ "TAGS\n#task_categories-text-generation #size_categories-10M<n<100M #language-Russian #region-us \n" ]
aa69145a9f971d214419ee3eba2838f3b4522fd0
# Dataset Card for "lat_en_loeb_split" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
grosenthal/lat_en_loeb_split
[ "region:us" ]
2023-01-25T18:27:37+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "la", "dtype": "string"}, {"name": "en", "dtype": "string"}, {"name": "file", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 46936015, "num_examples": 99343}, {"name": "test", "num_bytes": 484664, "num_examples": 1014}, {"name": "valid", "num_bytes": 468616, "num_examples": 1014}], "download_size": 26225698, "dataset_size": 47889295}}
2023-03-25T00:31:49+00:00
[]
[]
TAGS #region-us
# Dataset Card for "lat_en_loeb_split" More Information needed
[ "# Dataset Card for \"lat_en_loeb_split\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"lat_en_loeb_split\"\n\nMore Information needed" ]
9bc3c0c62180045ce419b1a58c9cf14666ece180
Jupyter notebooks and supporting code
SDbiaseval/notebooks
[ "license:apache-2.0", "region:us" ]
2023-01-25T18:31:00+00:00
{"license": "apache-2.0", "viewer": false}
2023-01-31T16:17:43+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
Jupyter notebooks and supporting code
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
62c181fb787a3e753e52abc328a7c4fd83af4f00
# Dataset Card for "methods2test_raw_grouped" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dembastu/methods2test_raw_grouped
[ "region:us" ]
2023-01-25T18:41:08+00:00
{"dataset_info": {"features": [{"name": "focal_method_test_case", "dtype": "string"}, {"name": "length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 854772444.2611823, "num_examples": 631120}], "download_size": 339684184, "dataset_size": 854772444.2611823}}
2023-01-26T23:08:46+00:00
[]
[]
TAGS #region-us
# Dataset Card for "methods2test_raw_grouped" More Information needed
[ "# Dataset Card for \"methods2test_raw_grouped\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"methods2test_raw_grouped\"\n\nMore Information needed" ]
3f6210838290d43d58d7fe5a7148b8c489a7fd28
This is the repository of a Turkish fake news dataset, which consists of Zaytung posts and Hurriyet news articles. The Code folder contains the web scraper Python files. The Raw folder contains txt files downloaded from the sources. The Clean folder contains txt files in lowercase, with punctuation and numbers removed.
emreisik/news
[ "task_categories:text-generation", "size_categories:1K<n<10K", "language:tr", "license:bsd", "region:us" ]
2023-01-25T18:48:18+00:00
{"language": ["tr"], "license": "bsd", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"], "pretty_name": "News"}
2023-01-25T18:50:02+00:00
[]
[ "tr" ]
TAGS #task_categories-text-generation #size_categories-1K<n<10K #language-Turkish #license-bsd #region-us
This is the repository of a Turkish fake news dataset, which consists of Zaytung posts and Hurriyet news articles. The Code folder contains the web scraper Python files. The Raw folder contains txt files downloaded from the sources. The Clean folder contains txt files in lowercase, with punctuation and numbers removed.
[]
[ "TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-Turkish #license-bsd #region-us \n" ]
b480a7d68113a3870224a5c024c642a95ec496e9
# Dataset Card for "pl-text-images-new-5000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Zombely/pl-text-images-new-5000
[ "region:us" ]
2023-01-25T18:59:51+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2529163044.192, "num_examples": 4036}, {"name": "test", "num_bytes": 314971657.0, "num_examples": 459}, {"name": "validation", "num_bytes": 330087667.0, "num_examples": 505}], "download_size": 3146813826, "dataset_size": 3174222368.192}}
2023-01-25T19:02:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for "pl-text-images-new-5000" More Information needed
[ "# Dataset Card for \"pl-text-images-new-5000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"pl-text-images-new-5000\"\n\nMore Information needed" ]
2a7fc8d44c1d363df612fe81e61809ed4e1254d5
This minimal pair data comes from "Learning to Recognize Dialect Features" by Dorottya Demszky, Devyani Sharma, Jonathan H. Clark, Vinodkumar Prabhakaran, and Jacob Eisenstein. Please cite the original work if you make use of this data: ``` @inproceedings{demszky2021learning, title={Learning to Recognize Dialect Features}, author={Demszky, Dorottya and Sharma, Devyani and Clark, Jonathan H and Prabhakaran, Vinodkumar and Eisenstein, Jacob}, booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, pages={2315--2338}, year={2021} } ```
WillHeld/demszky_pairs
[ "region:us" ]
2023-01-25T19:15:28+00:00
{"dataset_info": {"features": [{"name": "phrase_ID", "dtype": "int64"}, {"name": "feature", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "feature_present", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 23146, "num_examples": 266}], "download_size": 8919, "dataset_size": 23146}}
2023-01-25T19:19:46+00:00
[]
[]
TAGS #region-us
This minimal pair data comes from "Learning to Recognize Dialect Features" by Dorottya Demszky, Devyani Sharma, Jonathan H. Clark, Vinodkumar Prabhakaran, and Jacob Eisenstein. Please cite the original work if you make use of this data:
[]
[ "TAGS\n#region-us \n" ]
784aed5465aa215e30fdc7f12e51880c3f73d149
year_#ofpeople_location 1833_4_SriLanka 1943_1_USMarine 1952_1_Penang 1966_1_Rabaul 1973_1_Hawaii 1991_1_SriLanka 2001_1_Malaysia 2002_1_Malaysia 2003_1_Malaysia 2009_1_Thailand 2010_1_India 2010_1_Colombia 2013_1_Colombia 2021_1_India 2021_1_Philippines 2022_1_India
CoconutData/Coconutmortalityrate
[ "license:openrail", "region:us" ]
2023-01-25T21:20:17+00:00
{"license": "openrail"}
2023-01-25T21:26:00+00:00
[]
[]
TAGS #license-openrail #region-us
year_#ofpeople_location 1833_4_SriLanka 1943_1_USMarine 1952_1_Penang 1966_1_Rabaul 1973_1_Hawaii 1991_1_SriLanka 2001_1_Malaysia 2002_1_Malaysia 2003_1_Malaysia 2009_1_Thailand 2010_1_India 2010_1_Colombia 2013_1_Colombia 2021_1_India 2021_1_Philippines 2022_1_India
[]
[ "TAGS\n#license-openrail #region-us \n" ]
a17a37e5e4abde4c6a920d1ca9abfd18b1356c07
# Dataset Card for "relbert/t_rex" ## Dataset Description - **Repository:** [https://hadyelsahar.github.io/t-rex/](https://hadyelsahar.github.io/t-rex/) - **Paper:** [https://aclanthology.org/L18-1544/](https://aclanthology.org/L18-1544/) - **Dataset:** Cleaned T-REX for link prediction. ## Dataset Summary This is the T-REX dataset proposed in [https://aclanthology.org/L18-1544/](https://aclanthology.org/L18-1544/). The test split is universal across different version, which is manually checked by the author of [relbert/t_rex](https://huggingface.co/datasets/relbert/t_rex), and the test split contains predicates that is not included in the train/validation split. The number of triples in each split is summarized in the table below. ***Note:*** To make it consistent with other datasets ([nell](https://huggingface.co/datasets/relbert/nell) and [conceptnet](https://huggingface.co/datasets/relbert/conceptnet)), we rename predicate/subject/object as relation/head/tail. - Number of instances | | train | validation | test | |:--------------------------------|--------:|-------------:|-------:| | number of triples | 1,274,264 | 318,566 | 122 | | number of unique relation types (predicate) | 759 | 676 | 34 | ### Filtering to Remove Noise We apply filtering to keep triples with named-entities in either of head or tail (`named-entity filter`). Then, we remove predicates if they have less than three triples (`rare-predicate filter`). After the filtering, we manually remove too vague and noisy predicate, and unify same predicates with different names (see the annotation [here](https://huggingface.co/datasets/relbert/t_rex/raw/main/predicate_manual_check.csv)). Finally, we remove triples that contain enties that has frequency less than 5 (`frequnecy`). | Dataset | `raw` | `named-entity filter` | `rare-predicate` | `unify-denoise-predicate` | `frequnecy` | |:----------|-----------:|-----------------------:|-----------------:|--------------------------:|------------:| | Triples | 20,877,472 | 12,561,573 | 12,561,250 | 12,410,726 | 1,616,065 | | Predicate | 1,616 | 1,470 | 1,237 | 839 | 839 | ## Dataset Structure An example looks as follows. ```shell { "tail": "Persian", "head": "Tajik", "title": "Tandoor bread", "text": "Tandoor bread (Arabic: \u062e\u0628\u0632 \u062a\u0646\u0648\u0631 khubz tannoor, Armenian: \u0569\u0578\u0576\u056b\u0580 \u0570\u0561\u0581 tonir hats, Azerbaijani: T\u0259ndir \u00e7\u00f6r\u0259yi, Georgian: \u10d7\u10dd\u10dc\u10d8\u10e1 \u10de\u10e3\u10e0\u10d8 tonis puri, Kazakh: \u0442\u0430\u043d\u0434\u044b\u0440 \u043d\u0430\u043d tandyr nan, Kyrgyz: \u0442\u0430\u043d\u0434\u044b\u0440 \u043d\u0430\u043d tandyr nan, Persian: \u0646\u0627\u0646 \u062a\u0646\u0648\u0631\u06cc nan-e-tanuri, Tajik: \u043d\u043e\u043d\u0438 \u0442\u0430\u043d\u0443\u0440\u0439 noni tanuri, Turkish: Tand\u0131r ekme\u011fi, Uyghur: ) is a type of leavened bread baked in a clay oven called a tandoor, similar to naan. In Pakistan, tandoor breads are popular especially in the Khyber Pakhtunkhwa and Punjab regions, where naan breads are baked in tandoor clay ovens fired by wood or charcoal. 
These tandoor-prepared naans are known as tandoori naan.", "relation": "[Artifact] is a type of [Type]" } ``` ## Reproduce the Dataset ```shell git clone https://huggingface.co/datasets/relbert/t_rex cd t_rex mkdir data_raw cd data_raw cd data_raw wget https://figshare.com/ndownloader/files/8760241 unzip 8760241 cd ../ python process.py python unify_predicate.py python min_entity_filter.py python create_split.py ``` ## Citation Information ``` @inproceedings{elsahar2018t, title={T-rex: A large scale alignment of natural language with knowledge base triples}, author={Elsahar, Hady and Vougiouklis, Pavlos and Remaci, Arslen and Gravier, Christophe and Hare, Jonathon and Laforest, Frederique and Simperl, Elena}, booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)}, year={2018} } ```
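As a complement to the reproduction script above, the following is a minimal sketch of loading the processed splits straight from the Hub and tallying the test predicates; the `relation` field name follows the renaming described earlier, and it is assumed that the default configuration of `relbert/t_rex` loads without extra arguments.

```python
from collections import Counter
from datasets import load_dataset

# Assumes the processed splits are hosted under relbert/t_rex and that the
# default configuration loads without extra arguments.
t_rex = load_dataset("relbert/t_rex")

# Count how many manually checked test triples each relation (predicate) has.
relation_counts = Counter(row["relation"] for row in t_rex["test"])
for relation, count in relation_counts.most_common(5):
    print(f"{count:4d}  {relation}")
```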
relbert/t_rex
[ "multilinguality:monolingual", "size_categories:n<1K", "language:en", "license:other", "region:us" ]
2023-01-25T21:47:54+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "pretty_name": "relbert/t_rex"}
2023-03-31T20:02:35+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #size_categories-n<1K #language-English #license-other #region-us
Dataset Card for "relbert/t\_rex" ================================= Dataset Description ------------------- * Repository: URL * Paper: URL * Dataset: Cleaned T-REX for link prediction. Dataset Summary --------------- This is the T-REX dataset proposed in URL The test split is universal across different version, which is manually checked by the author of relbert/t\_rex, and the test split contains predicates that is not included in the train/validation split. The number of triples in each split is summarized in the table below. *Note:* To make it consistent with other datasets (nell and conceptnet), we rename predicate/subject/object as relation/head/tail. * Number of instances ### Filtering to Remove Noise We apply filtering to keep triples with named-entities in either of head or tail ('named-entity filter'). Then, we remove predicates if they have less than three triples ('rare-predicate filter'). After the filtering, we manually remove too vague and noisy predicate, and unify same predicates with different names (see the annotation here). Finally, we remove triples that contain enties that has frequency less than 5 ('frequnecy'). Dataset Structure ----------------- An example looks as follows. Reproduce the Dataset ---------------------
[ "### Filtering to Remove Noise\n\n\nWe apply filtering to keep triples with named-entities in either of head or tail ('named-entity filter').\nThen, we remove predicates if they have less than three triples ('rare-predicate filter').\nAfter the filtering, we manually remove too vague and noisy predicate, and unify same predicates with different names (see the annotation here).\nFinally, we remove triples that contain enties that has frequency less than 5 ('frequnecy').\n\n\n\nDataset Structure\n-----------------\n\n\nAn example looks as follows.\n\n\nReproduce the Dataset\n---------------------" ]
[ "TAGS\n#multilinguality-monolingual #size_categories-n<1K #language-English #license-other #region-us \n", "### Filtering to Remove Noise\n\n\nWe apply filtering to keep triples with named-entities in either of head or tail ('named-entity filter').\nThen, we remove predicates if they have less than three triples ('rare-predicate filter').\nAfter the filtering, we manually remove too vague and noisy predicate, and unify same predicates with different names (see the annotation here).\nFinally, we remove triples that contain enties that has frequency less than 5 ('frequnecy').\n\n\n\nDataset Structure\n-----------------\n\n\nAn example looks as follows.\n\n\nReproduce the Dataset\n---------------------" ]
a318ff293895050b848a95de5c108eeef7528ab3
# Dataset Card for Dataset Name UFSAC: Unification of Sense Annotated Corpora and Tools ## Dataset Description - **Homepage:** https://github.com/getalp/UFSAC - **Repository:** https://github.com/getalp/UFSAC - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary ### Supported Tasks and Leaderboards WSD: Word Sense Disambiguation ### Languages English ## Dataset Structure ### Data Instances ``` {'lemmas': ['_', 'be', 'quite', '_', 'hefty', 'spade', '_', '_', 'bicycle', '_', 'type', 'handlebar', '_', '_', 'spring', 'lever', '_', '_', 'rear', '_', '_', '_', 'step', 'on', '_', 'activate', '_', '_'], 'pos_tags': ['PRP', 'VBZ', 'RB', 'DT', 'JJ', 'NN', ',', 'IN', 'NN', ':', 'NN', 'NNS', 'CC', 'DT', 'VBN', 'NN', 'IN', 'DT', 'NN', ',', 'WDT', 'PRP', 'VBP', 'RP', 'TO', 'VB', 'PRP', '.'], 'sense_keys': ['activate%2:36:00::'], 'target_idx': 25, 'tokens': ['It', 'is', 'quite', 'a', 'hefty', 'spade', ',', 'with', 'bicycle', '-', 'type', 'handlebars', 'and', 'a', 'sprung', 'lever', 'at', 'the', 'rear', ',', 'which', 'you', 'step', 'on', 'to', 'activate', 'it', '.']} ``` ### Data Fields ``` {'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'lemmas': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'pos_tags': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'target_idx': Value(dtype='int32', id=None), 'sense_keys': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)} ``` ### Data Splits Not split. Use `train` split directly.
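For illustration, a minimal sketch (assuming the default configuration and the fields shown above) of recovering the sense-annotated target word and its WordNet sense key from one instance:

```python
from datasets import load_dataset

# The card notes there is only a "train" split; the default configuration is assumed here.
ufsac = load_dataset("liyucheng/UFSAC", split="train")

example = ufsac[0]
idx = example["target_idx"]                 # index of the sense-annotated token
target_token = example["tokens"][idx]       # surface form, e.g. "activate"
target_lemma = example["lemmas"][idx]       # lemma of the target token
sense_keys = example["sense_keys"]          # WordNet sense keys, e.g. ["activate%2:36:00::"]

print(target_token, target_lemma, sense_keys)
```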
liyucheng/UFSAC
[ "task_categories:token-classification", "size_categories:1M<n<10M", "language:en", "license:cc-by-2.0", "region:us" ]
2023-01-25T22:17:54+00:00
{"language": ["en"], "license": "cc-by-2.0", "size_categories": ["1M<n<10M"], "task_categories": ["token-classification"]}
2023-01-26T15:41:19+00:00
[]
[ "en" ]
TAGS #task_categories-token-classification #size_categories-1M<n<10M #language-English #license-cc-by-2.0 #region-us
# Dataset Card for Dataset Name UFSAC: Unification of Sense Annotated Corpora and Tools ## Dataset Description - Homepage: URL - Repository: URL - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards WSD: Word Sense Disambiguation ### Languages English ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits Not split. Use 'train' split directly.
[ "# Dataset Card for Dataset Name\n\nUFSAC: Unification of Sense Annotated Corpora and Tools", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards\n\nWSD: Word Sense Disambiguation", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits\n\nNot split. Use 'train' split directly." ]
[ "TAGS\n#task_categories-token-classification #size_categories-1M<n<10M #language-English #license-cc-by-2.0 #region-us \n", "# Dataset Card for Dataset Name\n\nUFSAC: Unification of Sense Annotated Corpora and Tools", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards\n\nWSD: Word Sense Disambiguation", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits\n\nNot split. Use 'train' split directly." ]
4cc9063942e93200777c789215afde03f8bf44e0
# Readme
smearle/pcglm
[ "region:us" ]
2023-01-25T22:30:41+00:00
{}
2023-03-03T17:53:46+00:00
[]
[]
TAGS #region-us
# Readme
[ "# Readme" ]
[ "TAGS\n#region-us \n", "# Readme" ]
012e0c16a562a127ff5f8d13d9e1ac2c786dc406
# Dataset Card for "semantic" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
000alen/semantic
[ "region:us" ]
2023-01-26T00:47:03+00:00
{"dataset_info": {"features": [{"name": "text1", "dtype": "string"}, {"name": "text2", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 318338808, "num_examples": 834836}, {"name": "test", "num_bytes": 41559777, "num_examples": 99893}], "download_size": 38916398, "dataset_size": 359898585}}
2023-01-26T00:47:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "semantic" More Information needed
[ "# Dataset Card for \"semantic\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"semantic\"\n\nMore Information needed" ]
ab54d9af682a2052a4345b85dccf9de89afd3674
# Dataset Card for "bloom-dialogue-generate-ds-en" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
svjack/bloom-dialogue-generate-ds-en
[ "region:us" ]
2023-01-26T03:05:06+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "dialogue_text", "dtype": "string"}, {"name": "dialogue", "sequence": "string"}, {"name": "repo", "dtype": "string"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 33783729, "num_examples": 8378}], "download_size": 34957337, "dataset_size": 33783729}}
2023-01-26T03:08:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bloom-dialogue-generate-ds-en" More Information needed
[ "# Dataset Card for \"bloom-dialogue-generate-ds-en\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bloom-dialogue-generate-ds-en\"\n\nMore Information needed" ]
70476aa96efc5b7136f95eb81703ee2e20ee11fc
# BabyLM Dataset This download includes LM Pretraining data for the 2023 CoNLL/CMCL shared task, [The BabyLM Challenge](https://babylm.github.io/). The (unzipped) data is not large, only ~700MB. ## Contents of this download - `10M`: 10M-word training set for the *strict-small* track. - `dev`: Development set for both tracks (10M words) - `test`: Test set for both tracks (10M words) Each directory above contains a single `.txt` file from each of the 10 domains listed below. ## Composition of the data All datasets are sampled from a mixture of 10 data domains, shown below, along with their respective weights in the distributed dataset. | Source | Weight | Domain | Citation | Website | License | | --- | --- | --- | --- | --- | --- | | OpenSubtitles | 30% | Dialogue, Scripted | Lison & Tiedermann (2016) | [link](https://opus.nlpl.eu/OpenSubtitles-v2018.php) | Open source | | Simple English Wikipedia | 15% | Nonfiction | -- | [link](https://dumps.wikimedia.org/simplewiki/20221201/) | [link](https://dumps.wikimedia.org/legal.html) | | BNC | 10% | Dialogue | BNC Consortium (2007) | [link](http://www.natcorp.ox.ac.uk/) | [link](http://www.natcorp.ox.ac.uk/docs/licence.html) <sup>1</sup> | | Project Gutenberg | 10% | Fiction, Nonfiction | Gerlach & Font-Clos (2020) | [link](https://github.com/pgcorpus/gutenberg) | [link](https://www.gutenberg.org/policy/license.html) | | QED | 10% | Dialogue, Education | Abdelali et al. (2014) | [link](https://opus.nlpl.eu/QED.php) | [link](https://opus.nlpl.eu/QED.php) | | Wikipedia | 10% | Nonfiction | -- | [link](https://dumps.wikimedia.org/enwiki/20221220/) | [link](https://dumps.wikimedia.org/legal.html) | | Children's Book Test | 6% | Fiction, Child-Directed | Hill et al. (2016) | [link](https://research.facebook.com/downloads/babi/) | Public domain | | CHILDES | 4% | Dialogue, Child-Directed | MacWhinney (2000) | | [link](https://talkbank.org/share/rules.html) | | Children's Stories | 4% | Fiction, Child-Directed | -- | [link](https://www.kaggle.com/datasets/edenbd/children-stories-text-corpus) | Public domain | | Switchboard | 1% | Dialogue | Godfrey et al. (1992), Stolcke et al., (2000) | [link](http://compprag.christopherpotts.net/swda.html) | [link](http://compprag.christopherpotts.net/swda.html) | <sup>1</sup> Our distribution of part of the BNC Texts is permitted under the fair dealings provision of copyright law (see term (2g) in the BNC license). ## Data preprocessing Data was minimally preprocessed to conform to a plain text format. We did not tokenize the data. Documents are not necessarily complete are newline separated. For documentation of the preprocessing pipeline, consult the following repo: https://github.com/babylm/babylm_data_preprocessing ## References Abdelali, A., Guzman, F., Sajjad, H., & Vogel, S. (2014). The AMARA Corpus: Building parallel language resources for the educational domain. In Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC 2014). 1856-1862. BNC Consortium. (2007). The British National Corpus, XML Edition. Oxford Text Archive, http://hdl.handle.net/20.500.12024/2554. Gerlach, M., & Font-Clos, F. (2020). A standardized Project Gutenberg corpus for statistical analysis of natural language and quantitative linguistics. Entropy, 22(1), 126. Godfrey, J. J., Holliman, E. C., & McDaniel, J. (1992). SWITCHBOARD: Telephone speech corpus for research and development. In Acoustics, Speech, and Signal Processing, IEEE International Conference on (Vol. 1, pp. 517-520). IEEE Computer Society. 
Hill, F., Bordes, A., Chopra, S., Weston, J. (2016). The Goldilocks principle: Reading children’s books with explicit memory representations. In Proceedings of the 4th International Conference on Learning Representations (ICLR 2016). Lison, P. & Tiedemann, J. (2016). OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016). MacWhinney, B. (2000). The CHILDES Project: Tools for analyzing talk. Third Edition. Mahwah, NJ: Lawrence Erlbaum Associates. Stolcke, A., Ries, K., Coccaro, N., Shriberg, E., Bates, R., Jurafsky, D., Taylor, P., Martin, R., Van Ess-Dykema, C., & Meteer, M. (2000). Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational linguistics, 26(3), 339-373. Tiedemann, J. (2012). Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012).
cambridge-climb/BabyLM
[ "size_categories:10M<n<100M", "language:en", "language modeling", "cognitive modeling", "region:us" ]
2023-01-26T03:05:31+00:00
{"language": ["en"], "size_categories": ["10M<n<100M"], "pretty_name": "Baby Language Modeling Dataset", "tags": ["language modeling", "cognitive modeling"]}
2023-11-01T12:11:06+00:00
[]
[ "en" ]
TAGS #size_categories-10M<n<100M #language-English #language modeling #cognitive modeling #region-us
BabyLM Dataset ============== This download includes LM Pretraining data for the 2023 CoNLL/CMCL shared task, The BabyLM Challenge. The (unzipped) data is not large, only ~700MB. Contents of this download ------------------------- * '10M': 10M-word training set for the *strict-small* track. * 'dev': Development set for both tracks (10M words) * 'test': Test set for both tracks (10M words) Each directory above contains a single '.txt' file from each of the 10 domains listed below. Composition of the data ----------------------- All datasets are sampled from a mixture of 10 data domains, shown below, along with their respective weights in the distributed dataset. 1 Our distribution of part of the BNC Texts is permitted under the fair dealings provision of copyright law (see term (2g) in the BNC license). Data preprocessing ------------------ Data was minimally preprocessed to conform to a plain text format. We did not tokenize the data. Documents are not necessarily complete are newline separated. For documentation of the preprocessing pipeline, consult the following repo: URL References ---------- Abdelali, A., Guzman, F., Sajjad, H., & Vogel, S. (2014). The AMARA Corpus: Building parallel language resources for the educational domain. In Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC 2014). 1856-1862. BNC Consortium. (2007). The British National Corpus, XML Edition. Oxford Text Archive, URL Gerlach, M., & Font-Clos, F. (2020). A standardized Project Gutenberg corpus for statistical analysis of natural language and quantitative linguistics. Entropy, 22(1), 126. Godfrey, J. J., Holliman, E. C., & McDaniel, J. (1992). SWITCHBOARD: Telephone speech corpus for research and development. In Acoustics, Speech, and Signal Processing, IEEE International Conference on (Vol. 1, pp. 517-520). IEEE Computer Society. Hill, F., Bordes, A., Chopra, S., Weston, J. (2016). The Goldilocks principle: Reading children’s books with explicit memory representations. In Proceedings of the 4th International Conference on Learning Representations (ICLR 2016). Lison, P. & Tiedemann, J. (2016). OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016). MacWhinney, B. (2000). The CHILDES Project: Tools for analyzing talk. Third Edition. Mahwah, NJ: Lawrence Erlbaum Associates. Stolcke, A., Ries, K., Coccaro, N., Shriberg, E., Bates, R., Jurafsky, D., Taylor, P., Martin, R., Van Ess-Dykema, C., & Meteer, M. (2000). Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational linguistics, 26(3), 339-373. Tiedemann, J. (2012). Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012).
[]
[ "TAGS\n#size_categories-10M<n<100M #language-English #language modeling #cognitive modeling #region-us \n" ]
58e2a8898d8ddd5c72e9906077d6392359588d86
# Dataset Card for "bloom-dialogue-generate-ds-zh" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
svjack/bloom-dialogue-generate-ds-zh
[ "region:us" ]
2023-01-26T03:52:16+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "dialogue_text", "dtype": "string"}, {"name": "dialogue", "sequence": "string"}, {"name": "repo", "dtype": "string"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 98021681, "num_examples": 24297}], "download_size": 101459282, "dataset_size": 98021681}}
2023-01-26T03:53:12+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bloom-dialogue-generate-ds-zh" More Information needed
[ "# Dataset Card for \"bloom-dialogue-generate-ds-zh\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bloom-dialogue-generate-ds-zh\"\n\nMore Information needed" ]
762a774972db5f16a27057ac7516a5fee2cf2fcc
# Dataset Card for "OxfordPets_test_text_davinci_002_Visclues_ns_10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordPets_test_text_davinci_002_Visclues_ns_10
[ "region:us" ]
2023-01-26T04:48:02+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "raw_prediction", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_1", "num_bytes": 128960.0, "num_examples": 10}], "download_size": 127751, "dataset_size": 128960.0}}
2023-01-26T04:51:57+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OxfordPets_test_text_davinci_002_Visclues_ns_10" More Information needed
[ "# Dataset Card for \"OxfordPets_test_text_davinci_002_Visclues_ns_10\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OxfordPets_test_text_davinci_002_Visclues_ns_10\"\n\nMore Information needed" ]
5a033fda33afbde2223d2d28ce396e4c74315ac6
# Dataset Card for "OxfordPets_test_text_davinci_002_Visclues_ns_3669" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordPets_test_text_davinci_002_Visclues_ns_3669
[ "region:us" ]
2023-01-26T05:06:37+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "raw_prediction", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_5", "num_bytes": 129773283.375, "num_examples": 3669}], "download_size": 120461779, "dataset_size": 129773283.375}}
2023-01-26T05:06:42+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OxfordPets_test_text_davinci_002_Visclues_ns_3669" More Information needed
[ "# Dataset Card for \"OxfordPets_test_text_davinci_002_Visclues_ns_3669\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OxfordPets_test_text_davinci_002_Visclues_ns_3669\"\n\nMore Information needed" ]
172cd7d323f128722ce38308b76fc8b2d34edd8a
# Dataset Card for "food_asia_2017" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
chaeso/food_asia_2017
[ "region:us" ]
2023-01-26T05:40:01+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "100", "1": "101", "2": "102", "3": "103", "4": "104", "5": "105", "6": "106", "7": "107", "8": "108", "9": "109", "10": "110", "11": "111", "12": "112", "13": "113", "14": "114", "15": "115", "16": "116", "17": "117", "18": "118", "19": "119", "20": "12", "21": "120", "22": "121", "23": "122", "24": "123", "25": "124", "26": "125", "27": "126", "28": "127", "29": "128", "30": "129", "31": "13", "32": "130", "33": "131", "34": "132", "35": "133", "36": "134", "37": "135", "38": "136", "39": "137", "40": "138", "41": "139", "42": "14", "43": "140", "44": "141", "45": "142", "46": "143", "47": "144", "48": "145", "49": "146", "50": "147", "51": "148", "52": "149", "53": "15", "54": "150", "55": "151", "56": "152", "57": "153", "58": "154", "59": "155", "60": "156", "61": "157", "62": "158", "63": "159", "64": "16", "65": "160", "66": "161", "67": "162", "68": "163", "69": "164", "70": "165", "71": "166", "72": "167", "73": "168", "74": "169", "75": "17", "76": "170", "77": "171", "78": "172", "79": "173", "80": "174", "81": "175", "82": "176", "83": "177", "84": "178", "85": "179", "86": "18", "87": "180", "88": "181", "89": "182", "90": "183", "91": "184", "92": "185", "93": "186", "94": "187", "95": "188", "96": "189", "97": "19", "98": "190", "99": "191", "100": "192", "101": "193", "102": "194", "103": "195", "104": "196", "105": "197", "106": "198", "107": "199", "108": "20", "109": "200", "110": "201", "111": "202", "112": "203", "113": "204", "114": "205", "115": "206", "116": "207", "117": "208", "118": "209", "119": "21", "120": "210", "121": "211", "122": "212", "123": "213", "124": "214", "125": "215", "126": "216", "127": "217", "128": "218", "129": "219", "130": "22", "131": "220", "132": "221", "133": "222", "134": "223", "135": "224", "136": "225", "137": "226", "138": "227", "139": "228", "140": "229", "141": "23", "142": "230", "143": "231", "144": "232", "145": "233", "146": "234", "147": "235", "148": "236", "149": "237", "150": "238", "151": "239", "152": "24", "153": "240", "154": "241", "155": "242", "156": "243", "157": "244", "158": "245", "159": "246", "160": "247", "161": "248", "162": "249", "163": "25", "164": "250", "165": "251", "166": "252", "167": "253", "168": "254", "169": "255", "170": "256", "171": "26", "172": "27", "173": "28", "174": "29", "175": "3", "176": "30", "177": "31", "178": "32", "179": "33", "180": "34", "181": "35", "182": "36", "183": "37", "184": "38", "185": "39", "186": "4", "187": "40", "188": "41", "189": "42", "190": "43", "191": "44", "192": "45", "193": "46", "194": "47", "195": "48", "196": "49", "197": "50", "198": "51", "199": "52", "200": "53", "201": "54", "202": "55", "203": "56", "204": "57", "205": "58", "206": "59", "207": "60", "208": "61", "209": "62", "210": "63", "211": "64", "212": "65", "213": "66", "214": "67", "215": "68", "216": "69", "217": "70", "218": "71", "219": "72", "220": "73", "221": "74", "222": "75", "223": "76", "224": "77", "225": "78", "226": "79", "227": "8", "228": "80", "229": "81", "230": "82", "231": "83", "232": "84", "233": "85", "234": "86", "235": "87", "236": "88", "237": "89", "238": "9", "239": "90", "240": "91", "241": "92", "242": "93", "243": "94", "244": "95", "245": "96", "246": "97", "247": "98", "248": "99", "249": "beef_currie", "250": "bibimbob", "251": "donburi", "252": "grilled_eel", "253": "rice", "254": "sushi", "255": "tendong"}}}}], "splits": 
[{"name": "train", "num_bytes": 408215938.23, "num_examples": 31395}], "download_size": 0, "dataset_size": 408215938.23}}
2023-01-26T07:26:32+00:00
[]
[]
TAGS #region-us
# Dataset Card for "food_asia_2017" More Information needed
[ "# Dataset Card for \"food_asia_2017\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"food_asia_2017\"\n\nMore Information needed" ]
d72f52c5745553ed03a0b3ea3c3421585746e867
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Point of Contact:** [Vishal Burman](mailto:[email protected]) ### Dataset Summary This dataset comprises of open-domain question-answer pairs obtained from extracting 150K FAQ URLs from C4 dataset. Please refer to the original [`paper`](https://arxiv.org/abs/1910.10683) and [`dataset card`](https://huggingface.co/datasets/c4) for more details. You can load C4-FAQs as follows: ```python from datasets import load_dataset c4_faqs_dataset = load_dataset("vishal-burman/c4-faqs") ``` ### Supported Tasks and Leaderboards C4-FAQs is mainly intended for open-domain end-to-end question generation. It can also be used for open-domain question answering. ### Languages C4-FAQs only supports English language. ## Dataset Structure ### Data Instances An example of a single dataset point: ```python {'url': 'https://www.brusselsghosts.com/things-to-do-brussels/faq.html', 'faq_pairs': [{'question': 'What should I bring for the tour?', 'answer': 'Nothing special, just be ready to walk for bit and potentially something to protect you from poltergeists and rain. Any kind of amulet or protection stone is also welcome.'}, {'question': 'Can kids join too ?', 'answer': 'Yes, we accept kids from 6 years old and on! We also have a family discount, if you book for 2 adults and 2 kids!'}, {'question': 'Where is the meeting point ?', 'answer': 'Brussels has many paved roads and those are hardly accessible with a wheelchair, for that reason we have to unfortunately label our tour as not wheelchair accessible.'}]} ``` ### Data Fields The data have several fields: - `url`: URL of the webpage containing the FAQs - `faq_pairs`: A list of question-answer pairs extracted from the webpage - `question`: A single question as a string - `answer`: A single answer to the above question as a string ### Data Splits | subset | total | |:-------|:------| | train | 150K | ## Dataset Creation ### Curation Rationale The dataset was curated to create end-to-end Question Generation pipelines. A large amount of open-source models use [`SQuAD`](https://huggingface.co/datasets/squad) dataset to create answer-agnostic question generation models. While the questions are valid, they often are short factoid in nature. This dataset is curated from FAQs of websites, which are generally hand-crafted and can be used to further improve generated question quality. 
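To make the nested structure concrete, here is a minimal sketch that flattens each row's `faq_pairs` into individual question-answer records, e.g. as a preprocessing step for question generation; it assumes `faq_pairs` comes back as a list of dicts, exactly as in the example instance above.

```python
from datasets import load_dataset

c4_faqs = load_dataset("vishal-burman/c4-faqs", split="train")

# Flatten the nested faq_pairs into one (url, question, answer) record per pair.
# Assumes each row exposes faq_pairs as a list of {"question", "answer"} dicts,
# as in the example instance shown above.
flat_pairs = [
    {"url": row["url"], "question": pair["question"], "answer": pair["answer"]}
    for row in c4_faqs
    for pair in row["faq_pairs"]
]

print(len(flat_pairs))
print(flat_pairs[0])
```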
## Additional Information ### Dataset Curators Original data by [Common Crawl](https://commoncrawl.org/). ### Licensing Information The original dataset was released under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset. ### Citation Information If you use this dataset, I would love to hear about it! Reach out on GitHub, twitter or shoot me an email. To cite the original `c4` dataset: ```bibtex @article{2019t5, author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu}, title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer}, journal = {arXiv e-prints}, year = {2019}, archivePrefix = {arXiv}, eprint = {1910.10683}, } ```
vishal-burman/c4-faqs
[ "task_categories:text2text-generation", "task_categories:text-generation", "task_categories:question-answering", "task_ids:text-simplification", "task_ids:language-modeling", "task_ids:open-domain-qa", "annotations_creators:no-annotation", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|c4", "language:en", "license:odc-by", "question-generation", "question_generation", "open-domain-qg", "qg", "arxiv:1910.10683", "region:us" ]
2023-01-26T06:15:58+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["odc-by"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|c4"], "task_categories": ["text2text-generation", "text-generation", "question-answering"], "task_ids": ["text-simplification", "language-modeling", "open-domain-qa"], "pretty_name": "C4-FAQs", "tags": ["question-generation", "question_generation", "open-domain-qg", "qg"]}
2023-02-06T04:35:16+00:00
[ "1910.10683" ]
[ "en" ]
TAGS #task_categories-text2text-generation #task_categories-text-generation #task_categories-question-answering #task_ids-text-simplification #task_ids-language-modeling #task_ids-open-domain-qa #annotations_creators-no-annotation #language_creators-machine-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|c4 #language-English #license-odc-by #question-generation #question_generation #open-domain-qg #qg #arxiv-1910.10683 #region-us
Dataset Card for [Dataset Name] =============================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Point of Contact: Vishal Burman ### Dataset Summary This dataset comprises of open-domain question-answer pairs obtained from extracting 150K FAQ URLs from C4 dataset. Please refer to the original 'paper' and 'dataset card' for more details. You can load C4-FAQs as follows: ### Supported Tasks and Leaderboards C4-FAQs is mainly intended for open-domain end-to-end question generation. It can also be used for open-domain question answering. ### Languages C4-FAQs only supports English language. Dataset Structure ----------------- ### Data Instances An example of a single dataset point: ### Data Fields The data have several fields: * 'url': URL of the webpage containing the FAQs * 'faq\_pairs': A list of question-answer pairs extracted from the webpage + 'question': A single question as a string + 'answer': A single answer to the above question as a string ### Data Splits Dataset Creation ---------------- ### Curation Rationale The dataset was curated to create end-to-end Question Generation pipelines. A large amount of open-source models use 'SQuAD' dataset to create answer-agnostic question generation models. While the questions are valid, they often are short factoid in nature. This dataset is curated from FAQs of websites, which are generally hand-crafted and can be used to further improve generated question quality. Additional Information ---------------------- ### Dataset Curators Original data by Common Crawl. ### Licensing Information The original dataset was released under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset. If you use this dataset, I would love to hear about it! Reach out on GitHub, twitter or shoot me an email. To cite the original 'c4' dataset:
[ "### Dataset Summary\n\n\nThis dataset comprises of open-domain question-answer pairs obtained from extracting 150K FAQ URLs from C4 dataset. Please refer to the original 'paper' and 'dataset card' for more details.\n\n\nYou can load C4-FAQs as follows:", "### Supported Tasks and Leaderboards\n\n\nC4-FAQs is mainly intended for open-domain end-to-end question generation. It can also be used for open-domain question answering.", "### Languages\n\n\nC4-FAQs only supports English language.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of a single dataset point:", "### Data Fields\n\n\nThe data have several fields:\n\n\n* 'url': URL of the webpage containing the FAQs\n* 'faq\\_pairs': A list of question-answer pairs extracted from the webpage\n\t+ 'question': A single question as a string\n\t+ 'answer': A single answer to the above question as a string", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was curated to create end-to-end Question Generation pipelines. A large amount of open-source models use 'SQuAD' dataset to create answer-agnostic question generation models. While the questions are valid, they often are short factoid in nature. This dataset is curated from FAQs of websites, which are generally hand-crafted and can be used to further improve generated question quality.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nOriginal data by Common Crawl.", "### Licensing Information\n\n\nThe original dataset was released under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.\n\n\nIf you use this dataset, I would love to hear about it! Reach out on GitHub, twitter or shoot me an email.\n\n\nTo cite the original 'c4' dataset:" ]
[ "TAGS\n#task_categories-text2text-generation #task_categories-text-generation #task_categories-question-answering #task_ids-text-simplification #task_ids-language-modeling #task_ids-open-domain-qa #annotations_creators-no-annotation #language_creators-machine-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|c4 #language-English #license-odc-by #question-generation #question_generation #open-domain-qg #qg #arxiv-1910.10683 #region-us \n", "### Dataset Summary\n\n\nThis dataset comprises of open-domain question-answer pairs obtained from extracting 150K FAQ URLs from C4 dataset. Please refer to the original 'paper' and 'dataset card' for more details.\n\n\nYou can load C4-FAQs as follows:", "### Supported Tasks and Leaderboards\n\n\nC4-FAQs is mainly intended for open-domain end-to-end question generation. It can also be used for open-domain question answering.", "### Languages\n\n\nC4-FAQs only supports English language.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of a single dataset point:", "### Data Fields\n\n\nThe data have several fields:\n\n\n* 'url': URL of the webpage containing the FAQs\n* 'faq\\_pairs': A list of question-answer pairs extracted from the webpage\n\t+ 'question': A single question as a string\n\t+ 'answer': A single answer to the above question as a string", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was curated to create end-to-end Question Generation pipelines. A large amount of open-source models use 'SQuAD' dataset to create answer-agnostic question generation models. While the questions are valid, they often are short factoid in nature. This dataset is curated from FAQs of websites, which are generally hand-crafted and can be used to further improve generated question quality.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nOriginal data by Common Crawl.", "### Licensing Information\n\n\nThe original dataset was released under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.\n\n\nIf you use this dataset, I would love to hear about it! Reach out on GitHub, twitter or shoot me an email.\n\n\nTo cite the original 'c4' dataset:" ]
61419d7f2cec9ca67324f28f6a077582643a037c
# SciNLI: A Corpus for Natural Language Inference on Scientific Text https://github.com/msadat3/SciNLI ```bib @inproceedings{sadat-caragea-2022-scinli, title = "{S}ci{NLI}: A Corpus for Natural Language Inference on Scientific Text", author = "Sadat, Mobashir and Caragea, Cornelia", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.acl-long.511", pages = "7399--7409", } ```
tasksource/scinli
[ "license:apache-2.0", "region:us" ]
2023-01-26T08:35:52+00:00
{"license": "apache-2.0"}
2023-01-26T09:34:08+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
# SciNLI: A Corpus for Natural Language Inference on Scientific Text URL
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
c13f83ea610d0f04c8b6ea50a59339b8204dcd44
# The Argument Reasoning Comprehension Task: Identification and Reconstruction of Implicit Warrants https://github.com/UKPLab/argument-reasoning-comprehension-task ```bib @InProceedings{Habernal.et.al.2018.NAACL.ARCT, title = {The Argument Reasoning Comprehension Task: Identification and Reconstruction of Implicit Warrants}, author = {Habernal, Ivan and Wachsmuth, Henning and Gurevych, Iryna and Stein, Benno}, publisher = {Association for Computational Linguistics}, booktitle = {Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)}, pages = {1930--1940}, month = jun, year = {2018}, address = {New Orleans, Louisiana}, url = {http://aclweb.org/anthology/N18-1175} } ```
tasksource/arct
[ "license:apache-2.0", "region:us" ]
2023-01-26T08:41:15+00:00
{"license": "apache-2.0"}
2023-05-15T07:19:50+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
# The Argument Reasoning Comprehension Task: Identification and Reconstruction of Implicit Warrants URL
[ "# The Argument Reasoning Comprehension Task: Identification and Reconstruction of Implicit Warrants\nURL" ]
[ "TAGS\n#license-apache-2.0 #region-us \n", "# The Argument Reasoning Comprehension Task: Identification and Reconstruction of Implicit Warrants\nURL" ]
7b722853d985102b6ab6fe1dd7e10473945de4d0
# Dataset Card for "food_chinese_2017" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
chaeso/food_chinese_2017
[ "region:us" ]
2023-01-26T08:59:59+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "100", "1": "101", "2": "102", "3": "103", "4": "104", "5": "105", "6": "106", "7": "107", "8": "108", "9": "109", "10": "110", "11": "111", "12": "112", "13": "113", "14": "114", "15": "115", "16": "116", "17": "117", "18": "118", "19": "119", "20": "12", "21": "120", "22": "121", "23": "122", "24": "123", "25": "124", "26": "125", "27": "126", "28": "127", "29": "128", "30": "129", "31": "13", "32": "130", "33": "131", "34": "132", "35": "133", "36": "134", "37": "135", "38": "136", "39": "137", "40": "138", "41": "139", "42": "14", "43": "140", "44": "141", "45": "142", "46": "143", "47": "144", "48": "145", "49": "146", "50": "147", "51": "148", "52": "149", "53": "15", "54": "150", "55": "151", "56": "152", "57": "153", "58": "154", "59": "155", "60": "156", "61": "157", "62": "158", "63": "159", "64": "16", "65": "160", "66": "161", "67": "162", "68": "163", "69": "164", "70": "165", "71": "166", "72": "167", "73": "168", "74": "169", "75": "17", "76": "170", "77": "171", "78": "172", "79": "173", "80": "174", "81": "175", "82": "176", "83": "177", "84": "178", "85": "179", "86": "18", "87": "180", "88": "181", "89": "182", "90": "183", "91": "184", "92": "185", "93": "186", "94": "187", "95": "188", "96": "189", "97": "19", "98": "190", "99": "191", "100": "192", "101": "193", "102": "194", "103": "195", "104": "196", "105": "197", "106": "198", "107": "199", "108": "20", "109": "200", "110": "201", "111": "202", "112": "203", "113": "204", "114": "205", "115": "206", "116": "207", "117": "208", "118": "209", "119": "21", "120": "210", "121": "211", "122": "212", "123": "213", "124": "214", "125": "215", "126": "216", "127": "217", "128": "218", "129": "219", "130": "22", "131": "220", "132": "221", "133": "222", "134": "223", "135": "224", "136": "225", "137": "226", "138": "227", "139": "228", "140": "229", "141": "23", "142": "230", "143": "231", "144": "232", "145": "233", "146": "234", "147": "235", "148": "236", "149": "237", "150": "238", "151": "239", "152": "24", "153": "240", "154": "241", "155": "242", "156": "243", "157": "244", "158": "245", "159": "246", "160": "247", "161": "248", "162": "249", "163": "25", "164": "250", "165": "251", "166": "252", "167": "253", "168": "254", "169": "255", "170": "256", "171": "26", "172": "27", "173": "28", "174": "29", "175": "3", "176": "30", "177": "31", "178": "32", "179": "33", "180": "34", "181": "35", "182": "36", "183": "37", "184": "38", "185": "39", "186": "4", "187": "40", "188": "41", "189": "42", "190": "43", "191": "44", "192": "45", "193": "46", "194": "47", "195": "48", "196": "49", "197": "50", "198": "51", "199": "52", "200": "53", "201": "54", "202": "55", "203": "56", "204": "57", "205": "58", "206": "59", "207": "60", "208": "61", "209": "62", "210": "63", "211": "64", "212": "65", "213": "66", "214": "67", "215": "68", "216": "69", "217": "70", "218": "71", "219": "72", "220": "73", "221": "74", "222": "75", "223": "76", "224": "77", "225": "78", "226": "79", "227": "8", "228": "80", "229": "81", "230": "82", "231": "83", "232": "84", "233": "85", "234": "86", "235": "87", "236": "88", "237": "89", "238": "9", "239": "90", "240": "91", "241": "92", "242": "93", "243": "94", "244": "95", "245": "96", "246": "97", "247": "98", "248": "99", "249": "beef_currie", "250": "bibimbob", "251": "donburi", "252": "grilled_eel", "253": "rice", "254": "sushi", "255": "tendong"}}}}], "splits": 
[{"name": "train", "num_bytes": 408076826.985, "num_examples": 31395}, {"name": "test", "num_bytes": 135802193.08, "num_examples": 6660}, {"name": "validation", "num_bytes": 137529971.372, "num_examples": 6734}], "download_size": 677961805, "dataset_size": 681408991.437}}
2023-01-26T09:21:46+00:00
[]
[]
TAGS #region-us
# Dataset Card for "food_chinese_2017" More Information needed
[ "# Dataset Card for \"food_chinese_2017\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"food_chinese_2017\"\n\nMore Information needed" ]
746f83a1f110ee7cd4ca4267f28a9ef044fb8d4b
https://github.com/feng-yufei/Neural-Natural-Logic ```bib @inproceedings{feng2020exploring, title={Exploring End-to-End Differentiable Natural Logic Modeling}, author={Feng, Yufei and Zheng, Ziou and Liu, Quan and Greenspan, Michael and Zhu, Xiaodan}, booktitle={Proceedings of the 28th International Conference on Computational Linguistics}, pages={1172--1185}, year={2020} } ```
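A minimal loading sketch with the Hugging Face `datasets` library is given below for quick orientation; it is not part of the original card. The repository id and the default train-only config come from this entry's metadata, which also shows that the stored column names contain stray whitespace (e.g. ` sent1 `), so the snippet only prints the schema rather than indexing columns by name.

```python
from datasets import load_dataset

# Load the single train split of the natural-logic dataset (default config).
ds = load_dataset("tasksource/naturallogic", split="train")

# Per this entry's metadata the column names include stray spaces
# (e.g. "original_id ", " sent1 ", " relation 1to2 "), so inspect them first.
print(ds.column_names)
print(ds[0])  # one sentence pair with its natural-logic relation fields
```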
tasksource/naturallogic
[ "task_categories:text-classification", "language:en", "license:apache-2.0", "region:us" ]
2023-01-26T09:49:49+00:00
{"language": ["en"], "license": "apache-2.0", "task_categories": ["text-classification"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "original_id ", "dtype": "int64"}, {"name": " sent1 ", "dtype": "string"}, {"name": " sent2 ", "dtype": "string"}, {"name": " keyword_before ", "dtype": "string"}, {"name": " relation 1to2 ", "dtype": "string"}, {"name": " pattern ", "dtype": "string"}, {"name": " original_label ", "dtype": "string"}, {"name": " original_genre ", "dtype": "string"}, {"name": " consistent ", "dtype": "bool"}, {"name": " formula ", "dtype": "string"}, {"name": " start_ends ", "dtype": "string"}, {"name": " new_label ", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2011728.0534709194, "num_examples": 6390}], "download_size": 227618, "dataset_size": 2011728.0534709194}}
2023-12-06T08:23:46+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #language-English #license-apache-2.0 #region-us
URL
[]
[ "TAGS\n#task_categories-text-classification #language-English #license-apache-2.0 #region-us \n" ]
8da1ab1711a5f6d7127391f3d68c449daa5bd540
https://github.com/IKMLab/arct2 ```bib @inproceedings{niven-kao-2019-probing, title = "Probing Neural Network Comprehension of Natural Language Arguments", author = "Niven, Timothy and Kao, Hung-Yu", booktitle = "Proceedings of the 57th Conference of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P19-1459", pages = "4658--4664", abstract = "We are surprised to find that BERT{'}s peak performance of 77{\%} on the Argument Reasoning Comprehension Task reaches just three points below the average untrained human baseline. However, we show that this result is entirely accounted for by exploitation of spurious statistical cues in the dataset. We analyze the nature of these cues and demonstrate that a range of models all exploit them. This analysis informs the construction of an adversarial dataset on which all models achieve random accuracy. Our adversarial dataset provides a more robust assessment of argument comprehension and should be adopted as the standard in future work.", } ```
tasksource/arct2
[ "task_categories:text-classification", "language:en", "license:apache-2.0", "region:us" ]
2023-01-26T10:11:15+00:00
{"language": ["en"], "license": "apache-2.0", "task_categories": ["text-classification"]}
2023-01-26T10:15:21+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #language-English #license-apache-2.0 #region-us
URL
[]
[ "TAGS\n#task_categories-text-classification #language-English #license-apache-2.0 #region-us \n" ]
d14fc1e72fa656736f2330a1f9250d4080b69aaa
# Dataset Card for "concatenated_librispeech" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sanchit-gandhi/concatenated_librispeech
[ "region:us" ]
2023-01-26T10:26:12+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 707889.0, "num_examples": 1}], "download_size": 0, "dataset_size": 707889.0}}
2023-01-26T11:45:39+00:00
[]
[]
TAGS #region-us
# Dataset Card for "concatenated_librispeech" More Information needed
[ "# Dataset Card for \"concatenated_librispeech\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"concatenated_librispeech\"\n\nMore Information needed" ]
c5d7b1bd3da912bb0b3c1ab5c5e619b23103dc32
# Dataset Card for "USTC_SmokeRS" ## Dataset Description - **Paper:** [SmokeNet: Satellite smoke scene detection using convolutional neural network with spatial and channel-wise attention](https://www.mdpi.com/2072-4292/11/14/1702/pdf) ### Licensing Information For research/education purposes. ## Citation Information [SmokeNet: Satellite smoke scene detection using convolutional neural network with spatial and channel-wise attention](https://www.mdpi.com/2072-4292/11/14/1702/pdf) ``` @article{ba2019smokenet, title = {SmokeNet: Satellite smoke scene detection using convolutional neural network with spatial and channel-wise attention}, author = {Ba, Rui and Chen, Chen and Yuan, Jing and Song, Weiguo and Lo, Siuming}, year = 2019, journal = {Remote Sensing}, publisher = {MDPI}, volume = 11, number = 14, pages = 1702 } ```
jonathan-roberts1/USTC_SmokeRS
[ "license:other", "region:us" ]
2023-01-26T10:45:45+00:00
{"license": "other", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "cloud", "1": "dust", "2": "haze", "3": "land", "4": "seaside", "5": "smoke"}}}}], "splits": [{"name": "train", "num_bytes": 1229029078.725, "num_examples": 6225}], "download_size": 1115042620, "dataset_size": 1229029078.725}}
2023-03-31T13:56:13+00:00
[]
[]
TAGS #license-other #region-us
# Dataset Card for "USTC_SmokeRS" ## Dataset Description - Paper: SmokeNet: Satellite smoke scene detection using convolutional neural network with spatial and channel-wise attention ### Licensing Information For research/education purposes. SmokeNet: Satellite smoke scene detection using convolutional neural network with spatial and channel-wise attention
[ "# Dataset Card for \"USTC_SmokeRS\"", "## Dataset Description\n\n- Paper: SmokeNet: Satellite smoke scene detection using convolutional neural network with spatial and channel-wise attention", "### Licensing Information\n\nFor research/education purposes.\n\n\n\nSmokeNet: Satellite smoke scene detection using convolutional neural network with spatial and channel-wise attention" ]
[ "TAGS\n#license-other #region-us \n", "# Dataset Card for \"USTC_SmokeRS\"", "## Dataset Description\n\n- Paper: SmokeNet: Satellite smoke scene detection using convolutional neural network with spatial and channel-wise attention", "### Licensing Information\n\nFor research/education purposes.\n\n\n\nSmokeNet: Satellite smoke scene detection using convolutional neural network with spatial and channel-wise attention" ]
d2eedc6a97dd6af5d46ef0eecfbab37b8d9575cb
# Dataset Card for "identities-dalle-2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
SDbiaseval/identities-dalle-2
[ "region:us" ]
2023-01-26T11:15:25+00:00
{"dataset_info": {"features": [{"name": "ethnicity", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "no", "dtype": "int32"}, {"name": "image_path", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 535524743.0, "num_examples": 680}], "download_size": 416250866, "dataset_size": 535524743.0}}
2023-01-26T22:33:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for "identities-dalle-2" More Information needed
[ "# Dataset Card for \"identities-dalle-2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"identities-dalle-2\"\n\nMore Information needed" ]
28f10b2c257704f348e2b6241106d4206c218206
# Dataset Card for "identities-sd-2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
SDbiaseval/identities-sd-2
[ "region:us" ]
2023-01-26T11:20:38+00:00
{"dataset_info": {"features": [{"name": "ethnicity", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "no", "dtype": "int32"}, {"name": "image_path", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 22563834.0, "num_examples": 680}], "download_size": 22470423, "dataset_size": 22563834.0}}
2023-01-26T22:39:17+00:00
[]
[]
TAGS #region-us
# Dataset Card for "identities-sd-2" More Information needed
[ "# Dataset Card for \"identities-sd-2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"identities-sd-2\"\n\nMore Information needed" ]
e4fd4d399c521d7b805c926205db6f8e2cbfc420
# Dataset Card for "jobs-sd-2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
SDbiaseval/jobs-sd-2
[ "region:us" ]
2023-01-26T11:36:22+00:00
{"dataset_info": {"features": [{"name": "adjective", "dtype": "string"}, {"name": "profession", "dtype": "string"}, {"name": "no", "dtype": "int32"}, {"name": "image_path", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1061811457.5, "num_examples": 31500}], "download_size": 1040536722, "dataset_size": 1061811457.5}}
2023-01-26T12:35:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "jobs-sd-2" More Information needed
[ "# Dataset Card for \"jobs-sd-2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"jobs-sd-2\"\n\nMore Information needed" ]
390b69fe0abd969dd351d0637b18b1b6e0dd8e26
# Dataset Card for "default_config" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
polinaeterna/default_config
[ "region:us" ]
2023-01-26T12:02:23+00:00
{"pretty_name": "traktor_dodik", "dataset_info": [{"config_name": "default", "features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 93, "num_examples": 6}, {"name": "test", "num_bytes": 28, "num_examples": 2}], "download_size": 1703, "dataset_size": 121}, {"config_name": "v2", "features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 56, "num_examples": 4}, {"name": "test", "num_bytes": 14, "num_examples": 1}], "download_size": 0, "dataset_size": 70}]}
2023-01-26T16:18:29+00:00
[]
[]
TAGS #region-us
# Dataset Card for "default_config" More Information needed
[ "# Dataset Card for \"default_config\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"default_config\"\n\nMore Information needed" ]
e275e545ec573b19bf183bdc566d02ef5e9c3065
# Dataset Card for "hh_eval_ilql" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
reciprocate/hh_eval_ilql
[ "region:us" ]
2023-01-26T12:48:28+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "ilql_hh_125M", "dtype": "string"}, {"name": "ilql_hh_1B", "dtype": "string"}, {"name": "ilql_hh_6B", "dtype": "string"}, {"name": "ilql_hh_20B", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 170467, "num_examples": 100}], "download_size": 108160, "dataset_size": 170467}}
2023-01-26T12:48:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for "hh_eval_ilql" More Information needed
[ "# Dataset Card for \"hh_eval_ilql\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"hh_eval_ilql\"\n\nMore Information needed" ]
48a4df4d99673944db3eaabbf12d2a31348e98f3
# Dataset Card for "Teamp" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aryanlath/Teamp
[ "region:us" ]
2023-01-26T13:06:09+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 85286.0, "num_examples": 1}], "download_size": 0, "dataset_size": 85286.0}}
2023-01-26T13:09:34+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Teamp" More Information needed
[ "# Dataset Card for \"Teamp\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Teamp\"\n\nMore Information needed" ]
f91928829ad1815bfd8393a17eeeaed1ffe51993
# Dataset Card for "Unlabelled_Seg" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aryanlath/Unlabelled_Seg
[ "region:us" ]
2023-01-26T14:26:29+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 17399091.0, "num_examples": 138}], "download_size": 17243457, "dataset_size": 17399091.0}}
2023-01-26T14:26:39+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Unlabelled_Seg" More Information needed
[ "# Dataset Card for \"Unlabelled_Seg\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Unlabelled_Seg\"\n\nMore Information needed" ]
fc75b07adbf817d7b8875ac23f3542ace8c00c6f
# Dataset Card for "summarize_eval_ilql" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
reciprocate/summarize_eval_ilql
[ "region:us" ]
2023-01-26T14:51:13+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "ilql_summarize_125M", "dtype": "string"}, {"name": "ilql_summarize_1B", "dtype": "string"}, {"name": "ilql_summarize_6B", "dtype": "string"}, {"name": "ilql_summarize_20B", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 190740, "num_examples": 100}], "download_size": 131602, "dataset_size": 190740}}
2023-01-26T14:51:42+00:00
[]
[]
TAGS #region-us
# Dataset Card for "summarize_eval_ilql" More Information needed
[ "# Dataset Card for \"summarize_eval_ilql\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"summarize_eval_ilql\"\n\nMore Information needed" ]
3e948872ae78b18cf93370d7eaaa0a2579715a55
# Dataset Card for "OxfordPets_test_text_davinci_002_Attributes_Caption_ns_300" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordPets_test_text_davinci_002_Attributes_Caption_ns_300
[ "region:us" ]
2023-01-26T15:08:19+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "raw_prediction", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_5", "num_bytes": 10666447.0, "num_examples": 300}], "download_size": 10031431, "dataset_size": 10666447.0}}
2023-01-26T15:08:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OxfordPets_test_text_davinci_002_Attributes_Caption_ns_300" More Information needed
[ "# Dataset Card for \"OxfordPets_test_text_davinci_002_Attributes_Caption_ns_300\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OxfordPets_test_text_davinci_002_Attributes_Caption_ns_300\"\n\nMore Information needed" ]
a16da9840a55b9ac85e737090a6a1b1ea44f4bc8
# Dataset Card for "OxfordPets_test_text_davinci_002_Visclues_ns_300" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordPets_test_text_davinci_002_Visclues_ns_300
[ "region:us" ]
2023-01-26T15:09:48+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "raw_prediction", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_10", "num_bytes": 11471423.0, "num_examples": 300}, {"name": "fewshot_15", "num_bytes": 12083140.0, "num_examples": 300}, {"name": "fewshot_12", "num_bytes": 11719304.0, "num_examples": 300}, {"name": "fewshot_5", "num_bytes": 10858509.0, "num_examples": 300}], "download_size": 40683194, "dataset_size": 46132376.0}}
2023-01-26T15:46:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OxfordPets_test_text_davinci_002_Visclues_ns_300" More Information needed
[ "# Dataset Card for \"OxfordPets_test_text_davinci_002_Visclues_ns_300\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OxfordPets_test_text_davinci_002_Visclues_ns_300\"\n\nMore Information needed" ]
12cb2b33a749314f2180384154f9d541f65687f2
# Dataset Summary **hystoclass** (hybrid social text and tabular classification) has been collected from Instagram stories with privacy in mind. In addition to the texts published in the stories, this dataset has graphic features such as background color, text color, and font. It also has a textual feature named 'content' in the Persian language. # Classes This dataset is divided into **18 classes** by human supervision: Event, Political, Advertising and business, Romantic, Motivational, Literature, Social Networks, Scientific, Social, IT, Advices, Academic, Cosmetic and Feminine, Religious, Sport, Property and housing, Tourism and Medical. [Github](https://github.com/pooyaphoenix/hystoclass) [Email](https://[email protected])
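As a rough sketch of how this dataset might be pulled (not from the original card): the card does not document the on-disk format, so whether `load_dataset` can auto-detect it, and the split name, are assumptions.

```python
from datasets import load_dataset

# Attempt to load the hybrid text + tabular Instagram-story dataset.
# NOTE: the card does not document the file layout, so format auto-detection
# and the presence of a "train" split are assumptions; pass data_files
# explicitly if this call fails.
ds = load_dataset("pooyaphoenix/hystoclass")

print(ds)              # splits and columns (text 'content' plus graphic features)
print(ds["train"][0])  # one story with its class label
```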
pooyaphoenix/hystoclass
[ "task_categories:text-classification", "task_categories:token-classification", "size_categories:1K<n<10K", "language:fa", "license:openrail", "tabular_data", "Text Classification", "Social Networks", "Ensemble Learning", "region:us" ]
2023-01-26T15:12:55+00:00
{"language": ["fa"], "license": "openrail", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification", "token-classification"], "pretty_name": "hystoclass", "tags": ["tabular_data", "Text Classification", "Social Networks", "Ensemble Learning"]}
2023-02-10T09:55:36+00:00
[]
[ "fa" ]
TAGS #task_categories-text-classification #task_categories-token-classification #size_categories-1K<n<10K #language-Persian #license-openrail #tabular_data #Text Classification #Social Networks #Ensemble Learning #region-us
# Dataset Summary hystoclass (hybrid social text and tabular classification) has been collected from Instagram stories with privacy in mind. In addition to the texts published in the stories, this dataset has graphic features such as background color, text color, and font. It also has a textual feature named 'content' in the Persian language. # Classes This dataset is divided into 18 classes by human supervision: Event, Political, Advertising and business, Romantic, Motivational, Literature, Social Networks, Scientific, Social, IT, Advices, Academic, Cosmetic and Feminine, Religious, Sport, Property and housing, Tourism and Medical. Github Email
[ "# Dataset Summary \nhystoclass (hybrid social text and tabular classification)has been collected from Instagram stories with privacy in mind. In addition to the texts published in the stories, this dataset has graphic features such as background color, text color, and font. also has a Textual feature named 'content' in the Persian language.", "# Classes\nThis dataset is divided into 18 classes by human supervision:\nEvent, Political, Advertising and business, Romantic, Motivational, Literature, Social Networks, Scientific, Social, IT, Advices, Academic, Cosmetic and Feminine, Religious, Sport, Property and housing, Tourism and Medical.\n\nGithub\nEmail" ]
[ "TAGS\n#task_categories-text-classification #task_categories-token-classification #size_categories-1K<n<10K #language-Persian #license-openrail #tabular_data #Text Classification #Social Networks #Ensemble Learning #region-us \n", "# Dataset Summary \nhystoclass (hybrid social text and tabular classification)has been collected from Instagram stories with privacy in mind. In addition to the texts published in the stories, this dataset has graphic features such as background color, text color, and font. also has a Textual feature named 'content' in the Persian language.", "# Classes\nThis dataset is divided into 18 classes by human supervision:\nEvent, Political, Advertising and business, Romantic, Motivational, Literature, Social Networks, Scientific, Social, IT, Advices, Academic, Cosmetic and Feminine, Religious, Sport, Property and housing, Tourism and Medical.\n\nGithub\nEmail" ]
90763dfee177aca8ee44f0a750e8124119b61d39
# Dataset Card for "OxfordPets_test_text_davinci_003_Visclues_ns_300" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordPets_test_text_davinci_003_Visclues_ns_300
[ "region:us" ]
2023-01-26T15:43:17+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "raw_prediction", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "fewshot_12", "num_bytes": 11719655.0, "num_examples": 300}, {"name": "fewshot_5", "num_bytes": 10858951.0, "num_examples": 300}], "download_size": 20270915, "dataset_size": 22578606.0}}
2023-01-26T15:44:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OxfordPets_test_text_davinci_003_Visclues_ns_300" More Information needed
[ "# Dataset Card for \"OxfordPets_test_text_davinci_003_Visclues_ns_300\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OxfordPets_test_text_davinci_003_Visclues_ns_300\"\n\nMore Information needed" ]
9ebf0a6a4f50fb99f6fb9e47f80f0dc79ae4deb8
# Dataset Card for "nllb-eng-tgl-12k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Ramos-Ramos/nllb-eng-tgl-12k
[ "region:us" ]
2023-01-26T15:50:21+00:00
{"dataset_info": {"features": [{"name": "translation", "dtype": {"translation": {"languages": ["eng_Latn", "tgl_Latn"]}}}, {"name": "laser_score", "dtype": "float32"}, {"name": "source_sentence_lid", "dtype": "float32"}, {"name": "target_sentence_lid", "dtype": "float32"}, {"name": "source_sentence_source", "dtype": "string"}, {"name": "source_sentence_url", "dtype": "string"}, {"name": "target_sentence_source", "dtype": "string"}, {"name": "target_sentence_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5795415, "num_examples": 12000}], "download_size": 2811921, "dataset_size": 5795415}}
2023-01-26T15:50:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for "nllb-eng-tgl-12k" More Information needed
[ "# Dataset Card for \"nllb-eng-tgl-12k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"nllb-eng-tgl-12k\"\n\nMore Information needed" ]
47c04abe372718066681b600f65db85cf8a4ff4b
# VUA20 ## Dataset Description - **Paper:** [A Report on the 2020 VUA and TOEFL Metaphor Detection Shared Task](https://aclanthology.org/2020.figlang-1.3/) ### Dataset Summary Creative Language Toolkit (CLTK) Metadata - CL Type: Metaphor - Task Type: detection - Size: 200k - Created time: 2020 VUA20 is (**perhaps**) the largest metaphor detection dataset, used in the Figlang2020 workshop. For the details of this dataset, we refer you to the release [paper](https://aclanthology.org/2020.figlang-1.3/). The annotation method of VUA20 is elaborated in the [MIP](https://www.tandfonline.com/doi/abs/10.1080/10926480709336752) paper. ### Citation Information If you find this dataset helpful, please cite: ``` @inproceedings{Leong2020ARO, title={A Report on the 2020 VUA and TOEFL Metaphor Detection Shared Task}, author={Chee Wee Leong and Beata Beigman Klebanov and Chris Hamill and Egon W. Stemle and Rutuja Ubale and Xianyang Chen}, booktitle={FIGLANG}, year={2020} } ``` ### Contributions If you have any queries, please open an issue or direct your queries to [mail](mailto:[email protected]).
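A minimal loading sketch (not part of the original card) is shown below; it assumes the repository's files are in a format the `datasets` library can auto-detect, which the card does not confirm.

```python
from datasets import load_dataset

# Token-level metaphor detection data from the Figlang2020 shared task.
# Format auto-detection and the existence of a "train" split are assumptions;
# the card does not document the file layout or column names.
vua20 = load_dataset("CreativeLang/vua20_metaphor")
print(vua20)
print(vua20["train"][0])
```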
CreativeLang/vua20_metaphor
[ "license:cc-by-2.0", "region:us" ]
2023-01-26T16:18:53+00:00
{"license": "cc-by-2.0"}
2023-06-27T12:51:59+00:00
[]
[]
TAGS #license-cc-by-2.0 #region-us
# VUA20 ## Dataset Description - Paper: A Report on the 2020 VUA and TOEFL Metaphor Detection Shared Task ### Dataset Summary Creative Language Toolkit (CLTK) Metadata - CL Type: Metaphor - Task Type: detection - Size: 200k - Created time: 2020 VUA20 is (perhaps) the largest metaphor detection dataset, used in the Figlang2020 workshop. For the details of this dataset, we refer you to the release paper. The annotation method of VUA20 is elaborated in the MIP paper. If you find this dataset helpful, please cite: ### Contributions If you have any queries, please open an issue or direct your queries to mail.
[ "# VUA20", "## Dataset Description\n\n- Paper: A Report on the 2020 VUA and TOEFL Metaphor Detection Shared Task", "### Dataset Summary\n\nCreative Language Toolkit (CLTK) Metadata\n- CL Type: Metaphor\n- Task Type: detection\n- Size: 200k\n- Created time: 2020\n\nVUA20 is (perhaps) the largest dataset of metaphor detection used in Figlang2020 workshop.\n\nFor the details of this dataset, we refer you to the release paper.\n\nThe annotation method of VUA20 is elabrated in the paper of MIP.\n\n\n\nIf you find this dataset helpful, please cite:", "### Contributions\n\nIf you have any queries, please open an issue or direct your queries to mail." ]
[ "TAGS\n#license-cc-by-2.0 #region-us \n", "# VUA20", "## Dataset Description\n\n- Paper: A Report on the 2020 VUA and TOEFL Metaphor Detection Shared Task", "### Dataset Summary\n\nCreative Language Toolkit (CLTK) Metadata\n- CL Type: Metaphor\n- Task Type: detection\n- Size: 200k\n- Created time: 2020\n\nVUA20 is (perhaps) the largest dataset of metaphor detection used in Figlang2020 workshop.\n\nFor the details of this dataset, we refer you to the release paper.\n\nThe annotation method of VUA20 is elabrated in the paper of MIP.\n\n\n\nIf you find this dataset helpful, please cite:", "### Contributions\n\nIf you have any queries, please open an issue or direct your queries to mail." ]
96d1eed0e41ef32a091c000098cb47a0dc226d65
This repository contains various Tamazight language datasets created by [Col·lectivaT](https://www.collectivat.cat) in collaboration with CIEMEN and with funding from the Municipality of Barcelona and the Government of Catalonia. Under `mono` you can find monolingual sentences. - `tc_wajdm_v1.txt` - Texts from the language learning material “tc wawjdm” - `IRCAM-clean-tifinagh.txt` - Tifinagh-script sentences extracted from [IRCAM's text corpus](https://tal.ircam.ma/talam/corpus.php) Under `parallel` you can find sentences with translations in Catalan, English and Spanish. - `tatoeba-translit` contains parallel sentences from Tatoeba.org transliterated into Tifinagh. - `proverbs` contains Tamazight proverbs with translations in Catalan.
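A sketch of fetching one of the listed files directly from the Hub follows (not part of the original card); the exact relative path under `mono` is an assumption inferred from the folder and file names above.

```python
from huggingface_hub import hf_hub_download

# Download one monolingual file from the dataset repository.
# The relative path "mono/tc_wajdm_v1.txt" is assumed from the names listed above.
path = hf_hub_download(
    repo_id="collectivat/amazic",
    repo_type="dataset",
    filename="mono/tc_wajdm_v1.txt",
)

with open(path, encoding="utf-8") as f:
    sentences = [line.strip() for line in f if line.strip()]

print(len(sentences), sentences[:3])
```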
collectivat/amazic
[ "task_categories:translation", "task_categories:text-generation", "size_categories:100K<n<1M", "language:zgh", "language:fr", "language:ca", "language:en", "language:es", "license:cc-by-2.0", "region:us" ]
2023-01-26T16:33:26+00:00
{"language": ["zgh", "fr", "ca", "en", "es"], "license": "cc-by-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["translation", "text-generation"], "pretty_name": "Tamazight language data"}
2023-07-27T09:56:40+00:00
[]
[ "zgh", "fr", "ca", "en", "es" ]
TAGS #task_categories-translation #task_categories-text-generation #size_categories-100K<n<1M #language-Standard Moroccan Tamazight #language-French #language-Catalan #language-English #language-Spanish #license-cc-by-2.0 #region-us
This repository contains various Tamazight language datasets created by Col·lectivaT in collaboration with CIEMEN and with funding from the Municipality of Barcelona and the Government of Catalonia. Under 'mono' you can find monolingual sentences. - 'tc_wajdm_v1.txt' - Texts from the language learning material “tc wawjdm” - 'URL' - Tifinagh-script sentences extracted from IRCAM's text corpus Under 'parallel' you can find sentences with translations in Catalan, English and Spanish. - 'tatoeba-translit' contains parallel sentences from URL transliterated into Tifinagh. - 'proverbs' contains Tamazight proverbs with translations in Catalan.
[]
[ "TAGS\n#task_categories-translation #task_categories-text-generation #size_categories-100K<n<1M #language-Standard Moroccan Tamazight #language-French #language-Catalan #language-English #language-Spanish #license-cc-by-2.0 #region-us \n" ]
10e2ecc2882a108819959a062ca7b1a528d6999f
# Dataset Card for "Caltech101_not_background_test_facebook_opt_125m_Attributes_ns_5647" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/Caltech101_not_background_test_facebook_opt_125m_Attributes_ns_5647
[ "region:us" ]
2023-01-26T17:08:17+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}, {"name": "scores", "sequence": "float64"}], "splits": [{"name": "fewshot_0_bs_16", "num_bytes": 84088557.125, "num_examples": 5647}, {"name": "fewshot_1_bs_16", "num_bytes": 85276022.125, "num_examples": 5647}, {"name": "fewshot_3_bs_16", "num_bytes": 87656291.125, "num_examples": 5647}, {"name": "fewshot_5_bs_16", "num_bytes": 90034037.125, "num_examples": 5647}, {"name": "fewshot_8_bs_16", "num_bytes": 93580093.125, "num_examples": 5647}], "download_size": 415553691, "dataset_size": 440635000.625}}
2023-01-27T09:38:09+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Caltech101_not_background_test_facebook_opt_125m_Attributes_ns_5647" More Information needed
[ "# Dataset Card for \"Caltech101_not_background_test_facebook_opt_125m_Attributes_ns_5647\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Caltech101_not_background_test_facebook_opt_125m_Attributes_ns_5647\"\n\nMore Information needed" ]