jorses committed on
Commit 1d252ef · 1 Parent(s): 9d54dd4

Update README.md

Files changed (1)
  1. README.md +26 -26
README.md CHANGED
@@ -685,6 +685,32 @@ answering systems on tabular data, they are not large and diverse enough to eval
  To this end, we provide a corpus of 65 real-world datasets, with 3,269,975 rows and 1615 columns in total, and 1300 questions to evaluate your models for the task of QA over Tabular Data.


+ ## Usage
+
+ ```python
+ from datasets import load_dataset
+
+ # Load all QA pairs
+ all_qa = load_dataset("cardiffnlp/databench", name="qa", split="full")
+
+ # Load the SemEval 2025 Task 8 question-answer splits
+ semeval_train_qa = load_dataset("cardiffnlp/databench", name="semeval", split="train")
+ semeval_dev_qa = load_dataset("cardiffnlp/databench", name="semeval", split="dev")
+
+
+ # "001_Forbes", the id of the dataset where the information to answer the question is located
+ all_qa['dataset'][0]
+
+ # This id can be used to load a specific question-answer pair collection from the splits
+ forbes_qa = load_dataset("cardiffnlp/databench", name="qa", split=all_qa['dataset'][0])
+
+ # You can load the specific dataset containing the "answer" for a QA pair using this id
+ forbes_full = load_dataset("cardiffnlp/databench", name=all_qa['dataset'][0], split="full")
+
+ # or load the DataBench lite equivalent dataset, used to answer the "sample_answer"
+ forbes_sample = load_dataset("cardiffnlp/databench", name=all_qa['dataset'][0], split="lite")
+ ```
+
  ## 📚 Datasets
  By clicking on each name in the table below, you will be able to explore each dataset.

@@ -797,30 +823,4 @@ If you use this resource, please use the following reference:
  year = "2024",
  address = "Turin, Italy"
  }
- ```
-
- # Usage
-
- ```python
- from datasets import load_dataset
-
- # Load all QA pairs
- all_qa = load_dataset("cardiffnlp/databench", name="qa", split="full")
-
- # Load the SemEval 2025 Task 8 question-answer splits
- semeval_train_qa = load_dataset("cardiffnlp/databench", name="semeval", split="train")
- semeval_dev_qa = load_dataset("cardiffnlp/databench", name="semeval", split="dev")
-
-
- # is "001_Forbes", the id of the dataset where the information to answer the question is located
- all_qa['dataset'][0]
-
- # This id can be used to load a specific question-answer pair collection from the splits
- forbes_qa = load_dataset("cardiffnlp/databench", name="qa", split=all_qa['dataset'][0])
-
- # You can load the specific dataset containing the "answer" for a QA pair using this id
- forbes_full = load_dataset("cardiffnlp/databench", name=all_qa['dataset'][0], split="full")
-
- # or load the DataBench lite equivalent dataset, used to answer the "sample_answer"
- forbes_sample = load_dataset("cardiffnlp/databench", name=all_qa['dataset'][0], split="lite")
  ```
 
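The Usage snippet added by this commit stops at loading the data. As a minimal sketch of how the pieces fit together, the example below pairs one QA entry with its source table via `datasets.Dataset.to_pandas()`; the `question`, `answer`, and `sample_answer` field names are inferred from the comments above and are assumptions about the schema, not a documented API.

```python
from datasets import load_dataset

# Load every QA pair (config and split names as in the Usage snippet above)
all_qa = load_dataset("cardiffnlp/databench", name="qa", split="full")

# Indexing a datasets.Dataset with an integer returns a plain dict
row = all_qa[0]
table_id = row["dataset"]  # e.g. "001_Forbes"

# Load the full table that answers this question and convert it to pandas
table = load_dataset("cardiffnlp/databench", name=table_id, split="full").to_pandas()

# 'question', 'answer', and 'sample_answer' are assumed field names (see comments above)
print("Question:", row.get("question"))
print("Reference answer on the full table:", row.get("answer"))
print("Table shape:", table.shape)
```

Whether the reference is `answer` or `sample_answer` depends on whether the full table or its DataBench lite counterpart (`split="lite"`) is used, as the comments in the snippet indicate.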