topic (string, 3-96 chars) | wiki (string, 33-127 chars) | url (string, 101-106 chars) | action (string, 7 classes) | sent (string, 34-223 chars) | annotation (string, 74-227 chars) | logic (string, 207-5.45k chars) | logic_str (string, 37-493 chars) | interpret (string, 43-471 chars) | num_func (string, 15 classes) | nid (string, 13 classes) | g_ids (string, 70-455 chars) | g_ids_features (string, 98-670 chars) | g_adj (string, 79-515 chars) | table_header (string, 40-458 chars) | table_cont (large_string, 135-4.41k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dwkt | https://en.wikipedia.org/wiki/DWKT | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-23915973-1.html.csv | majority | most of the radio stations have power kw of 10 kw . | {'scope': 'all', 'col': '4', 'most_or_all': 'most', 'criterion': 'equal', 'value': '10 kw', 'subset': None} | {'func': 'most_str_eq', 'args': ['all_rows', 'power kw', '10 kw'], 'result': True, 'ind': 0, 'tointer': 'for the power kw records of all rows , most of them fuzzily match to 10 kw .', 'tostr': 'most_eq { all_rows ; power kw ; 10 kw } = true'} | most_eq { all_rows ; power kw ; 10 kw } = true | for the power kw records of all rows , most of them fuzzily match to 10 kw . | 1 | 1 | {'most_str_eq_0': 0, 'result_1': 1, 'all_rows_2': 2, 'power kw_3': 3, '10 kw_4': 4} | {'most_str_eq_0': 'most_str_eq', 'result_1': 'true', 'all_rows_2': 'all_rows', 'power kw_3': 'power kw', '10 kw_4': '10 kw'} | {'most_str_eq_0': [1], 'result_1': [], 'all_rows_2': [0], 'power kw_3': [0], '10 kw_4': [0]} | ['branding', 'callsign', 'frequency', 'power kw', 'coverage'] | [['106.7 energy fm', 'dwet - fm', '106.7 mhz', '25 kw', 'mega manila'], ['106.3 energy fm naga', 'dwbq - fm', '106.3 mhz', '10 kw', 'naga bicol region'], ['94.7 energy fm cebu', 'dykt - fm', '94.7 mhz', '10 kw', 'cebu visayas region'], ['93.7 energy fm dumaguete', 'dymd - fm', '93.7 mhz', '10 kw', 'dumaguete central visayas region'], ['103.7 energy fm dipolog', 'dxru - fm', '103.7 mhz', '5 kw', 'dipolog western mindanao region'], ['88.3 energy fm davao', 'dxdr - fm', '88.3 mhz', '10 kw', 'davao mindanao region']] |
list of oregon ballot measures | https://en.wikipedia.org/wiki/List_of_Oregon_ballot_measures | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-256286-5.html.csv | count | there are 11 meas num in the list of oregon ballot . | {'scope': 'all', 'criterion': 'all', 'value': 'n/a', 'result': '11', 'col': '1', 'subset': None} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_all', 'args': ['all_rows', 'meas num'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose meas num record is arbitrary .', 'tostr': 'filter_all { all_rows ; meas num }'}], 'result': '11', 'ind': 1, 'tostr': 'count { filter_all { all_rows ; meas num } }', 'tointer': 'select the rows whose meas num record is arbitrary . the number of such rows is 11 .'}, '11'], 'result': True, 'ind': 2, 'tostr': 'eq { count { filter_all { all_rows ; meas num } } ; 11 } = true', 'tointer': 'select the rows whose meas num record is arbitrary . the number of such rows is 11 .'} | eq { count { filter_all { all_rows ; meas num } } ; 11 } = true | select the rows whose meas num record is arbitrary . the number of such rows is 11 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'count_1': 1, 'filter_all_0': 0, 'all_rows_4': 4, 'meas num_5': 5, '11_6': 6} | {'eq_2': 'eq', 'result_3': 'true', 'count_1': 'count', 'filter_all_0': 'filter_all', 'all_rows_4': 'all_rows', 'meas num_5': 'meas num', '11_6': '11'} | {'eq_2': [3], 'result_3': [], 'count_1': [2], 'filter_all_0': [1], 'all_rows_4': [0], 'meas num_5': [0], '11_6': [2]} | ['meas num', 'passed', 'yes votes', 'no votes', '% yes', 'const amd', 'type', 'description'] | [['1', 'no', '35270', '59065', '37.39 %', 'yes', 'init', 'permitting female taxpayers to vote'], ['3', 'no', '23143', '59974', '27.84 %', 'no', 'leg', 'calling convention to revise state constitution'], ['10', 'yes', '50191', '40044', '55.62 %', 'no', 'init', 'to establish a state normal school at monmouth'], ['12', 'no', '16250', '69002', '19.06 %', 'no', 'init', 'annexing part of clackamas county to multnomah'], ['17', 'no', '15664', '62712', '19.99 %', 'no', 'init', 'creating orchard county from part of umatilla'], ['18', 'no', '15613', '61704', '20.19 %', 'no', 'init', 'creating clark county from part of grant'], ['19', 'no', '40898', '46201', '46.96 %', 'no', 'init', 'to establish state normal school at weston'], ['20', 'no', '14047', '68221', '17.07 %', 'no', 'init', 'to annex part of washington county to multnomah'], ['21', 'no', '38473', '48655', '44.16 %', 'no', 'init', 'to establish state normal school at ashland'], ['22', 'no', '43540', '61221', '41.56 %', 'yes', 'init', 'prohibiting liquor traffic'], ['26', 'no', '17592', '60486', '22.53 %', 'no', 'init', 'creating deschutes county out of part of crook']] |
marianne werdel | https://en.wikipedia.org/wiki/Marianne_Werdel | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-15113825-5.html.csv | superlative | marianne werdel 's first match with tami whitlinger-jones as her partner took place on a carpet surface . | {'scope': 'subset', 'col_superlative': '2', 'row_superlative': '4', 'value_mentioned': 'no', 'max_or_min': 'min', 'other_col': '4,5', 'subset': {'col': '5', 'criterion': 'equal', 'value': 'tami whitlinger - jones'}} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'argmin', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'partner', 'tami whitlinger - jones'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; partner ; tami whitlinger - jones }', 'tointer': 'select the rows whose partner record fuzzily matches to tami whitlinger - jones .'}, 'date'], 'result': None, 'ind': 1, 'tostr': 'argmin { filter_eq { all_rows ; partner ; tami whitlinger - jones } ; date }'}, 'surface'], 'result': 'carpet', 'ind': 2, 'tostr': 'hop { argmin { filter_eq { all_rows ; partner ; tami whitlinger - jones } ; date } ; surface }'}, 'carpet'], 'result': True, 'ind': 3, 'tostr': 'eq { hop { argmin { filter_eq { all_rows ; partner ; tami whitlinger - jones } ; date } ; surface } ; carpet } = true', 'tointer': 'select the rows whose partner record fuzzily matches to tami whitlinger - jones . select the row whose date record of these rows is minimum . the surface record of this row is carpet .'} | eq { hop { argmin { filter_eq { all_rows ; partner ; tami whitlinger - jones } ; date } ; surface } ; carpet } = true | select the rows whose partner record fuzzily matches to tami whitlinger - jones . select the row whose date record of these rows is minimum . the surface record of this row is carpet . | 4 | 4 | {'str_eq_3': 3, 'result_4': 4, 'str_hop_2': 2, 'argmin_1': 1, 'filter_str_eq_0': 0, 'all_rows_5': 5, 'partner_6': 6, 'tami whitlinger - jones_7': 7, 'date_8': 8, 'surface_9': 9, 'carpet_10': 10} | {'str_eq_3': 'str_eq', 'result_4': 'true', 'str_hop_2': 'str_hop', 'argmin_1': 'argmin', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_5': 'all_rows', 'partner_6': 'partner', 'tami whitlinger - jones_7': 'tami whitlinger - jones', 'date_8': 'date', 'surface_9': 'surface', 'carpet_10': 'carpet'} | {'str_eq_3': [4], 'result_4': [], 'str_hop_2': [3], 'argmin_1': [2], 'filter_str_eq_0': [1], 'all_rows_5': [0], 'partner_6': [0], 'tami whitlinger - jones_7': [0], 'date_8': [1], 'surface_9': [2], 'carpet_10': [3]} | ['outcome', 'date', 'tournament', 'surface', 'partner', 'opponents', 'score'] | [['runner - up', 'may 24 , 1992', 'european open , switzerland', 'clay', 'karina habšudová', 'amy frazier elna reinach', '5 - 7 , 2 - 6'], ['runner - up', 'may 23 , 1993', 'european open , switzerland', 'clay', 'lindsay davenport', 'mary joe fernandez helena suková', '2 - 6 , 4 - 6'], ['runner - up', 'september 19 , 1993', 'hong kong', 'hard', 'debbie graham', 'karin kschwendt rachel mcquillan', '6 - 4 , 4 - 6 , 2 - 6'], ['runner - up', 'february 12 , 1995', 'chicago , illinois , usa', 'carpet', 'tami whitlinger - jones', 'gabriela sabatini brenda shultz', '7 - 5 , 6 - 7 , 4 - 6'], ['runner - up', 'may 25 , 1996', 'strasbourg , france', 'clay', 'tami whitlinger - jones', 'yayuk basuki nicole bradtke', '7 - 5 , 4 - 6 , 4 - 6'], ['runner - up', 'february 23 , 1997', 'oklahoma city , oklahoma , usa', 'hard', 'tami whitlinger - jones', 'rika hiraki nana miyagi', '4 - 6 , 1 - 6']] |
high - temperature superconductivity | https://en.wikipedia.org/wiki/High-temperature_superconductivity | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-101336-1.html.csv | unique | the only orthorhombic superconductive formula is yba 2 cu 3 o 7 . | {'scope': 'all', 'row': '1', 'col': '5', 'col_other': '1', 'criterion': 'equal', 'value': 'orthorhombic', 'subset': None} | {'func': 'and', 'args': [{'func': 'only', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'crystal structure', 'orthorhombic'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose crystal structure record fuzzily matches to orthorhombic .', 'tostr': 'filter_eq { all_rows ; crystal structure ; orthorhombic }'}], 'result': True, 'ind': 1, 'tostr': 'only { filter_eq { all_rows ; crystal structure ; orthorhombic } }', 'tointer': 'select the rows whose crystal structure record fuzzily matches to orthorhombic . there is only one such row in the table .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'crystal structure', 'orthorhombic'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose crystal structure record fuzzily matches to orthorhombic .', 'tostr': 'filter_eq { all_rows ; crystal structure ; orthorhombic }'}, 'formula'], 'result': 'yba 2 cu 3 o 7', 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; crystal structure ; orthorhombic } ; formula }'}, 'yba 2 cu 3 o 7'], 'result': True, 'ind': 3, 'tostr': 'eq { hop { filter_eq { all_rows ; crystal structure ; orthorhombic } ; formula } ; yba 2 cu 3 o 7 }', 'tointer': 'the formula record of this unqiue row is yba 2 cu 3 o 7 .'}], 'result': True, 'ind': 4, 'tostr': 'and { only { filter_eq { all_rows ; crystal structure ; orthorhombic } } ; eq { hop { filter_eq { all_rows ; crystal structure ; orthorhombic } ; formula } ; yba 2 cu 3 o 7 } } = true', 'tointer': 'select the rows whose crystal structure record fuzzily matches to orthorhombic . there is only one such row in the table . the formula record of this unqiue row is yba 2 cu 3 o 7 .'} | and { only { filter_eq { all_rows ; crystal structure ; orthorhombic } } ; eq { hop { filter_eq { all_rows ; crystal structure ; orthorhombic } ; formula } ; yba 2 cu 3 o 7 } } = true | select the rows whose crystal structure record fuzzily matches to orthorhombic . there is only one such row in the table . the formula record of this unqiue row is yba 2 cu 3 o 7 . | 6 | 5 | {'and_4': 4, 'result_5': 5, 'only_1': 1, 'filter_str_eq_0': 0, 'all_rows_6': 6, 'crystal structure_7': 7, 'orthorhombic_8': 8, 'str_eq_3': 3, 'str_hop_2': 2, 'formula_9': 9, 'yba 2 cu 3 o 7_10': 10} | {'and_4': 'and', 'result_5': 'true', 'only_1': 'only', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_6': 'all_rows', 'crystal structure_7': 'crystal structure', 'orthorhombic_8': 'orthorhombic', 'str_eq_3': 'str_eq', 'str_hop_2': 'str_hop', 'formula_9': 'formula', 'yba 2 cu 3 o 7_10': 'yba 2 cu 3 o 7'} | {'and_4': [5], 'result_5': [], 'only_1': [4], 'filter_str_eq_0': [1, 2], 'all_rows_6': [0], 'crystal structure_7': [0], 'orthorhombic_8': [0], 'str_eq_3': [4], 'str_hop_2': [3], 'formula_9': [2], 'yba 2 cu 3 o 7_10': [3]} | ['formula', 'notation', 't c ( k )', 'no of cu - o planes in unit cell', 'crystal structure'] | [['yba 2 cu 3 o 7', '123', '92', '2', 'orthorhombic'], ['bi 2 sr 2 cuo 6', 'bi - 2201', '20', '1', 'tetragonal'], ['bi 2 sr 2 cacu 2 o 8', 'bi - 2212', '85', '2', 'tetragonal'], ['bi 2 sr 2 ca 2 cu 3 o 6', 'bi - 2223', '110', '3', 'tetragonal'], ['tl 2 ba 2 cuo 6', 'tl - 2201', '80', '1', 'tetragonal'], ['tl 2 ba 2 cacu 2 o 8', 'tl - 2212', '108', '2', 'tetragonal'], ['tl 2 ba 2 ca 2 cu 3 o 10', 'tl - 2223', '125', '3', 'tetragonal'], ['tlba 2 ca 3 cu 4 o 11', 'tl - 1234', '122', '4', 'tetragonal'], ['hgba 2 cuo 4', 'hg - 1201', '94', '1', 'tetragonal'], ['hgba 2 cacu 2 o 6', 'hg - 1212', '128', '2', 'tetragonal']] |
list of mr. belvedere episodes | https://en.wikipedia.org/wiki/List_of_Mr._Belvedere_episodes | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-20967430-3.html.csv | comparative | the mr. belvedere episode titled ' college bound ' aired earlier than the episode titled ' the cadet ' . | {'row_1': '12', 'row_2': '16', 'col': '6', 'col_other': '3', 'relation': 'less', 'record_mentioned': 'no', 'diff_result': None} | {'func': 'less', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'title', 'college bound'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose title record fuzzily matches to college bound .', 'tostr': 'filter_eq { all_rows ; title ; college bound }'}, 'original air date'], 'result': None, 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; title ; college bound } ; original air date }', 'tointer': 'select the rows whose title record fuzzily matches to college bound . take the original air date record of this row .'}, {'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'title', 'the cadet'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose title record fuzzily matches to the cadet .', 'tostr': 'filter_eq { all_rows ; title ; the cadet }'}, 'original air date'], 'result': None, 'ind': 3, 'tostr': 'hop { filter_eq { all_rows ; title ; the cadet } ; original air date }', 'tointer': 'select the rows whose title record fuzzily matches to the cadet . take the original air date record of this row .'}], 'result': True, 'ind': 4, 'tostr': 'less { hop { filter_eq { all_rows ; title ; college bound } ; original air date } ; hop { filter_eq { all_rows ; title ; the cadet } ; original air date } } = true', 'tointer': 'select the rows whose title record fuzzily matches to college bound . take the original air date record of this row . select the rows whose title record fuzzily matches to the cadet . take the original air date record of this row . the first record is less than the second record .'} | less { hop { filter_eq { all_rows ; title ; college bound } ; original air date } ; hop { filter_eq { all_rows ; title ; the cadet } ; original air date } } = true | select the rows whose title record fuzzily matches to college bound . take the original air date record of this row . select the rows whose title record fuzzily matches to the cadet . take the original air date record of this row . the first record is less than the second record . | 5 | 5 | {'less_4': 4, 'result_5': 5, 'str_hop_2': 2, 'filter_str_eq_0': 0, 'all_rows_6': 6, 'title_7': 7, 'college bound_8': 8, 'original air date_9': 9, 'str_hop_3': 3, 'filter_str_eq_1': 1, 'all_rows_10': 10, 'title_11': 11, 'the cadet_12': 12, 'original air date_13': 13} | {'less_4': 'less', 'result_5': 'true', 'str_hop_2': 'str_hop', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_6': 'all_rows', 'title_7': 'title', 'college bound_8': 'college bound', 'original air date_9': 'original air date', 'str_hop_3': 'str_hop', 'filter_str_eq_1': 'filter_str_eq', 'all_rows_10': 'all_rows', 'title_11': 'title', 'the cadet_12': 'the cadet', 'original air date_13': 'original air date'} | {'less_4': [5], 'result_5': [], 'str_hop_2': [4], 'filter_str_eq_0': [2], 'all_rows_6': [0], 'title_7': [0], 'college bound_8': [0], 'original air date_9': [2], 'str_hop_3': [4], 'filter_str_eq_1': [3], 'all_rows_10': [1], 'title_11': [1], 'the cadet_12': [1], 'original air date_13': [3]} | ['ep', 'season', 'title', 'directed by', 'written by', 'original air date', 'prod code'] | [['30', '1', 'the thief', 'noam pitlik', 'jeffrey ferro & fredric weiss', 'september 26 , 1986', '5a03'], ['31', '2', 'grandma', 'noam pitlik', 'frank dungan & jeff stein & tony sheehan', 'october 03 , 1986', '5a04'], ['32', '3', 'debut', 'noam pitlik', 'fredric weiss & jeffrey ferro', 'october 17 , 1986', '5a05'], ['33', '4', "kevin 's date", 'noam pitlik', 'tony sheehan', 'october 24 , 1986', '5a07'], ['34', '5', 'halloween', 'noam pitlik', 'jeffrey ferro & fredric weiss', 'october 31 , 1986', '5a08'], ['35', '6', 'deportation : part 1', 'noam pitlik', 'frank dungan & jeff stein & tony sheehan', 'november 07 , 1986', '5a01'], ['36', '7', 'deportation : part 2', 'noam pitlik', 'frank dungan & jeff stein & tony sheehan', 'november 14 , 1986', '5a02'], ['37', '8', 'reunion', 'noam pitlik', 'frank dungan & jeff stein', 'november 21 , 1986', '5a10'], ['38', '9', 'the spelling bee', 'noam pitlik', 'fredric weiss & jeffrey ferro', 'december 05 , 1986', '5a11'], ['39', '10', 'pills', 'noam pitlik', 'gene braunstein & bob perlow', 'december 12 , 1986', '5a09'], ['40', '11', 'the ticket', 'noam pitlik', 'tony sheehan', 'january 30 , 1987', '5a12'], ['41', '12', 'college bound', 'noam pitlik', 'jeffrey ferro & fredric weiss', 'january 09 , 1987', '5a13'], ['42', '13', 'inky', 'noam pitlik', 'frank dungan & jeff stein', 'january 16 , 1987', '5a14'], ['43', '14', 'jobless', 'noam pitlik', 'frank dungan & jeff stein & tony sheehan', 'january 23 , 1987', '5a15'], ['44', '15', 'the crush', 'noam pitlik', 'fredric weiss & jeffrey ferro', 'february 06 , 1987', '5a16'], ['46', '17', 'the cadet', 'noam pitlik', 'jeffrey ferro & fredric weiss', 'february 20 , 1987', '5a18'], ['47', '18', "kevin 's older woman", 'noam pitlik', 'frank dungan & jeff stein & tony sheehan', 'february 27 , 1987', '5a19'], ['48', '19', 'baby', 'noam pitlik', 'lisa albert', 'march 06 , 1987', '5a06'], ['49', '20', 'separation', 'noam pitlik', 'frank dungan & jeff stein', 'may 01 , 1987', '5a21'], ['50', '21', 'the mogul', 'noam pitlik', 'frank dungan & jeff stein & tony sheehan', 'may 08 , 1987', '5a22']] |
washington redskins draft history | https://en.wikipedia.org/wiki/Washington_Redskins_draft_history | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-17100961-5.html.csv | comparative | in the washington redskins draft history , larry lutz was picked one round before don irwin . | {'row_1': '6', 'row_2': '7', 'col': '1', 'col_other': '4', 'relation': 'diff', 'record_mentioned': 'no', 'diff_result': {'diff_value': '1', 'bigger': 'row2'}} | {'func': 'eq', 'args': [{'func': 'diff', 'args': [{'func': 'num_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'name', 'larry lutz'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose name record fuzzily matches to larry lutz .', 'tostr': 'filter_eq { all_rows ; name ; larry lutz }'}, 'round'], 'result': None, 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; name ; larry lutz } ; round }', 'tointer': 'select the rows whose name record fuzzily matches to larry lutz . take the round record of this row .'}, {'func': 'num_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'name', 'don irwin'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose name record fuzzily matches to don irwin .', 'tostr': 'filter_eq { all_rows ; name ; don irwin }'}, 'round'], 'result': None, 'ind': 3, 'tostr': 'hop { filter_eq { all_rows ; name ; don irwin } ; round }', 'tointer': 'select the rows whose name record fuzzily matches to don irwin . take the round record of this row .'}], 'result': '-1', 'ind': 4, 'tostr': 'diff { hop { filter_eq { all_rows ; name ; larry lutz } ; round } ; hop { filter_eq { all_rows ; name ; don irwin } ; round } }'}, '-1'], 'result': True, 'ind': 5, 'tostr': 'eq { diff { hop { filter_eq { all_rows ; name ; larry lutz } ; round } ; hop { filter_eq { all_rows ; name ; don irwin } ; round } } ; -1 } = true', 'tointer': 'select the rows whose name record fuzzily matches to larry lutz . take the round record of this row . select the rows whose name record fuzzily matches to don irwin . take the round record of this row . the second record is 1 larger than the first record .'} | eq { diff { hop { filter_eq { all_rows ; name ; larry lutz } ; round } ; hop { filter_eq { all_rows ; name ; don irwin } ; round } } ; -1 } = true | select the rows whose name record fuzzily matches to larry lutz . take the round record of this row . select the rows whose name record fuzzily matches to don irwin . take the round record of this row . the second record is 1 larger than the first record . | 6 | 6 | {'eq_5': 5, 'result_6': 6, 'diff_4': 4, 'num_hop_2': 2, 'filter_str_eq_0': 0, 'all_rows_7': 7, 'name_8': 8, 'larry lutz_9': 9, 'round_10': 10, 'num_hop_3': 3, 'filter_str_eq_1': 1, 'all_rows_11': 11, 'name_12': 12, 'don irwin_13': 13, 'round_14': 14, '-1_15': 15} | {'eq_5': 'eq', 'result_6': 'true', 'diff_4': 'diff', 'num_hop_2': 'num_hop', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_7': 'all_rows', 'name_8': 'name', 'larry lutz_9': 'larry lutz', 'round_10': 'round', 'num_hop_3': 'num_hop', 'filter_str_eq_1': 'filter_str_eq', 'all_rows_11': 'all_rows', 'name_12': 'name', 'don irwin_13': 'don irwin', 'round_14': 'round', '-1_15': '-1'} | {'eq_5': [6], 'result_6': [], 'diff_4': [5], 'num_hop_2': [4], 'filter_str_eq_0': [2], 'all_rows_7': [0], 'name_8': [0], 'larry lutz_9': [0], 'round_10': [2], 'num_hop_3': [4], 'filter_str_eq_1': [3], 'all_rows_11': [1], 'name_12': [1], 'don irwin_13': [1], 'round_14': [3], '-1_15': [5]} | ['round', 'pick', 'overall', 'name', 'position', 'college'] | [['1', '2', '2', 'riley smith', 'fb', 'alabama'], ['2', '2', '11', 'keith topping', 'e', 'stanford'], ['3', '2', '20', 'ed smith', 'fb', 'new york'], ['4', '2', '29', 'paul tangora', 'g', 'northwestern'], ['5', '2', '38', 'wilson groseclose', 'ot', 'texas christian'], ['6', '2', '47', 'larry lutz', 'ot', 'california'], ['7', '2', '56', 'don irwin', 'fb', 'colgate'], ['8', '2', '65', 'wayne millner', 'e', 'notre dame'], ['9', '2', '74', 'marcel saunders', 'g', 'loyola']] |
chad little | https://en.wikipedia.org/wiki/Chad_Little | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-1875157-2.html.csv | comparative | chad little 's winnings were higher in the year 1996 than they were in 1998 . | {'row_1': '5', 'row_2': '6', 'col': '9', 'col_other': '1', 'relation': 'greater', 'record_mentioned': 'no', 'diff_result': None} | {'func': 'greater', 'args': [{'func': 'num_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'year', '1996'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose year record fuzzily matches to 1996 .', 'tostr': 'filter_eq { all_rows ; year ; 1996 }'}, 'winnings'], 'result': None, 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; year ; 1996 } ; winnings }', 'tointer': 'select the rows whose year record fuzzily matches to 1996 . take the winnings record of this row .'}, {'func': 'num_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'year', '1998'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose year record fuzzily matches to 1998 .', 'tostr': 'filter_eq { all_rows ; year ; 1998 }'}, 'winnings'], 'result': None, 'ind': 3, 'tostr': 'hop { filter_eq { all_rows ; year ; 1998 } ; winnings }', 'tointer': 'select the rows whose year record fuzzily matches to 1998 . take the winnings record of this row .'}], 'result': True, 'ind': 4, 'tostr': 'greater { hop { filter_eq { all_rows ; year ; 1996 } ; winnings } ; hop { filter_eq { all_rows ; year ; 1998 } ; winnings } } = true', 'tointer': 'select the rows whose year record fuzzily matches to 1996 . take the winnings record of this row . select the rows whose year record fuzzily matches to 1998 . take the winnings record of this row . the first record is greater than the second record .'} | greater { hop { filter_eq { all_rows ; year ; 1996 } ; winnings } ; hop { filter_eq { all_rows ; year ; 1998 } ; winnings } } = true | select the rows whose year record fuzzily matches to 1996 . take the winnings record of this row . select the rows whose year record fuzzily matches to 1998 . take the winnings record of this row . the first record is greater than the second record . | 5 | 5 | {'greater_4': 4, 'result_5': 5, 'num_hop_2': 2, 'filter_str_eq_0': 0, 'all_rows_6': 6, 'year_7': 7, '1996_8': 8, 'winnings_9': 9, 'num_hop_3': 3, 'filter_str_eq_1': 1, 'all_rows_10': 10, 'year_11': 11, '1998_12': 12, 'winnings_13': 13} | {'greater_4': 'greater', 'result_5': 'true', 'num_hop_2': 'num_hop', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_6': 'all_rows', 'year_7': 'year', '1996_8': '1996', 'winnings_9': 'winnings', 'num_hop_3': 'num_hop', 'filter_str_eq_1': 'filter_str_eq', 'all_rows_10': 'all_rows', 'year_11': 'year', '1998_12': '1998', 'winnings_13': 'winnings'} | {'greater_4': [5], 'result_5': [], 'num_hop_2': [4], 'filter_str_eq_0': [2], 'all_rows_6': [0], 'year_7': [0], '1996_8': [0], 'winnings_9': [2], 'num_hop_3': [4], 'filter_str_eq_1': [3], 'all_rows_10': [1], 'year_11': [1], '1998_12': [1], 'winnings_13': [3]} | ['year', 'starts', 'wins', 'top 5', 'top 10', 'poles', 'avg start', 'avg finish', 'winnings', 'position'] | [['1992', '1', '0', '0', '0', '0', '29.0', '29.0', '1400', '120th'], ['1993', '12', '0', '2', '3', '0', '22.1', '22.6', '56508', '32nd'], ['1994', '28', '0', '10', '14', '0', '21.0', '11.9', '234022', '3rd'], ['1995', '26', '6', '11', '13', '0', '15.5', '14.5', '529056', '2nd'], ['1996', '26', '0', '2', '7', '1', '15.3', '16.5', '317394', '5th'], ['1998', '1', '0', '0', '0', '0', '6.0', '30.0', '4380', '108th'], ['2000', '4', '0', '0', '1', '0', '24.5', '21.2', '48510', '66th'], ['2001', '33', '0', '2', '6', '0', '24.8', '16.0', '690321', '9th'], ['2002', '3', '0', '0', '0', '0', '25.0', '19.7', '69565', '69th']] |
2008 - 09 los angeles clippers season | https://en.wikipedia.org/wiki/2008%E2%80%9309_Los_Angeles_Clippers_season | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-17323529-5.html.csv | ordinal | the los angeles clippers ' game against la lakers recorded their 2nd highest attendance of the 2008 - 09 season . | {'row': '3', 'col': '8', 'order': '2', 'col_other': '3', 'max_or_min': 'max_to_min', 'value_mentioned': 'no', 'scope': 'all', 'subset': None} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'nth_argmax', 'args': ['all_rows', 'location attendance', '2'], 'result': None, 'ind': 0, 'tostr': 'nth_argmax { all_rows ; location attendance ; 2 }'}, 'team'], 'result': 'la lakers', 'ind': 1, 'tostr': 'hop { nth_argmax { all_rows ; location attendance ; 2 } ; team }'}, 'la lakers'], 'result': True, 'ind': 2, 'tostr': 'eq { hop { nth_argmax { all_rows ; location attendance ; 2 } ; team } ; la lakers } = true', 'tointer': 'select the row whose location attendance record of all rows is 2nd maximum . the team record of this row is la lakers .'} | eq { hop { nth_argmax { all_rows ; location attendance ; 2 } ; team } ; la lakers } = true | select the row whose location attendance record of all rows is 2nd maximum . the team record of this row is la lakers . | 3 | 3 | {'str_eq_2': 2, 'result_3': 3, 'str_hop_1': 1, 'nth_argmax_0': 0, 'all_rows_4': 4, 'location attendance_5': 5, '2_6': 6, 'team_7': 7, 'la lakers_8': 8} | {'str_eq_2': 'str_eq', 'result_3': 'true', 'str_hop_1': 'str_hop', 'nth_argmax_0': 'nth_argmax', 'all_rows_4': 'all_rows', 'location attendance_5': 'location attendance', '2_6': '2', 'team_7': 'team', 'la lakers_8': 'la lakers'} | {'str_eq_2': [3], 'result_3': [], 'str_hop_1': [2], 'nth_argmax_0': [1], 'all_rows_4': [0], 'location attendance_5': [0], '2_6': [0], 'team_7': [1], 'la lakers_8': [2]} | ['game', 'date', 'team', 'score', 'high points', 'high rebounds', 'high assists', 'location attendance', 'record'] | [['3', 'november 1', 'utah', 'l 79 - 101 ( ot )', 'cuttino mobley ( 20 )', 'chris kaman ( 12 )', 'mike taylor ( 4 )', 'energysolutions arena 19602', '0 - 3'], ['4', 'november 3', 'utah', 'l 73 - 89 ( ot )', 'chris kaman ( 19 )', 'chris kaman ( 10 )', 'baron davis ( 9 )', 'staples center 12712', '0 - 4'], ['5', 'november 5', 'la lakers', 'l 88 - 106 ( ot )', 'al thornton ( 22 )', 'tim thomas , chris kaman ( 11 )', 'baron davis ( 7 )', 'staples center 18997', '0 - 5'], ['6', 'november 7', 'houston', 'l 83 - 92 ( ot )', 'baron davis , chris kaman ( 23 )', 'marcus camby ( 13 )', 'baron davis ( 8 )', 'staples center 14670', '0 - 6'], ['7', 'november 9', 'dallas', 'w 103 - 92 ( ot )', 'baron davis ( 22 )', 'marcus camby ( 14 )', 'baron davis ( 10 )', 'staples center 14249', '1 - 6'], ['8', 'november 12', 'sacramento', 'l 98 - 103 ( ot )', 'al thornton ( 20 )', 'chris kaman ( 6 )', 'baron davis ( 11 )', 'staples center 13266', '1 - 7'], ['9', 'november 15', 'golden state', 'l 103 - 121 ( ot )', 'baron davis ( 25 )', 'chris kaman ( 13 )', 'baron davis ( 11 )', 'staples center 12823', '1 - 8'], ['10', 'november 17', 'san antonio', 'l 83 - 86 ( ot )', 'cuttino mobley ( 18 )', 'chris kaman ( 13 )', 'baron davis ( 8 )', 'staples center 14962', '1 - 9'], ['11', 'november 19', 'oklahoma city', 'w 108 - 88 ( ot )', 'chris kaman ( 25 )', 'chris kaman ( 14 )', 'baron davis ( 8 )', 'ford center 18312', '2 - 9'], ['12', 'november 21', 'philadelphia', 'l 88 - 89 ( ot )', 'al thornton ( 22 )', 'al thornton , chris kaman , marcus camby ( 9 )', 'baron davis ( 6 )', 'wachovia center 13474', '2 - 10'], ['13', 'november 22', 'new jersey', 'l 95 - 112 ( ot )', 'baron davis ( 30 )', 'marcus camby ( 13 )', 'baron davis ( 10 )', 'izod center 17677', '2 - 11'], ['14', 'november 24', 'new orleans', 'l 87 - 99 ( ot )', 'eric gordon ( 25 )', 'marcus camby ( 11 )', 'baron davis ( 8 )', 'staples center 14956', '2 - 12'], ['15', 'november 26', 'denver', 'l 105 - 106 ( ot )', 'eric gordon ( 24 )', 'marcus camby ( 11 )', 'baron davis ( 10 )', 'staples center 14934', '2 - 13'], ['16', 'november 29', 'miami', 'w 97 - 96 ( ot )', 'al thornton , zach randolph ( 27 )', 'zach randolph ( 13 )', 'baron davis ( 9 )', 'staples center 16245', '3 - 13']] |
united states house of representatives elections , 1926 | https://en.wikipedia.org/wiki/United_States_House_of_Representatives_elections%2C_1926 | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-1342379-45.html.csv | ordinal | of the incumbents in the 1926 election for the united states house of representatives , the one with the 2nd earliest date of first election was s otis bland . | {'row': '1', 'col': '4', 'order': '2', 'col_other': '2', 'max_or_min': 'min_to_max', 'value_mentioned': 'no', 'scope': 'all', 'subset': None} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'nth_argmin', 'args': ['all_rows', 'first elected', '2'], 'result': None, 'ind': 0, 'tostr': 'nth_argmin { all_rows ; first elected ; 2 }'}, 'incumbent'], 'result': 's otis bland', 'ind': 1, 'tostr': 'hop { nth_argmin { all_rows ; first elected ; 2 } ; incumbent }'}, 's otis bland'], 'result': True, 'ind': 2, 'tostr': 'eq { hop { nth_argmin { all_rows ; first elected ; 2 } ; incumbent } ; s otis bland } = true', 'tointer': 'select the row whose first elected record of all rows is 2nd minimum . the incumbent record of this row is s otis bland .'} | eq { hop { nth_argmin { all_rows ; first elected ; 2 } ; incumbent } ; s otis bland } = true | select the row whose first elected record of all rows is 2nd minimum . the incumbent record of this row is s otis bland . | 3 | 3 | {'str_eq_2': 2, 'result_3': 3, 'str_hop_1': 1, 'nth_argmin_0': 0, 'all_rows_4': 4, 'first elected_5': 5, '2_6': 6, 'incumbent_7': 7, 's otis bland_8': 8} | {'str_eq_2': 'str_eq', 'result_3': 'true', 'str_hop_1': 'str_hop', 'nth_argmin_0': 'nth_argmin', 'all_rows_4': 'all_rows', 'first elected_5': 'first elected', '2_6': '2', 'incumbent_7': 'incumbent', 's otis bland_8': 's otis bland'} | {'str_eq_2': [3], 'result_3': [], 'str_hop_1': [2], 'nth_argmin_0': [1], 'all_rows_4': [0], 'first elected_5': [0], '2_6': [0], 'incumbent_7': [1], 's otis bland_8': [2]} | ['district', 'incumbent', 'party', 'first elected', 'result', 'candidates'] | [['virginia 1', 's otis bland', 'democratic', '1918', 're - elected', 's otis bland ( d ) unopposed'], ['virginia 2', 'joseph t deal', 'democratic', '1920', 're - elected', 'joseph t deal ( d ) 65.4 % l s parsons ( r ) 34.6 %'], ['virginia 3', 'andrew jackson montague', 'democratic', '1912', 're - elected', 'andrew jackson montague ( d ) unopposed'], ['virginia 4', 'patrick h drewry', 'democratic', '1920', 're - elected', 'patrick h drewry ( d ) unopposed'], ['virginia 5', 'joseph whitehead', 'democratic', '1924', 're - elected', 'joseph whitehead ( d ) unopposed'], ['virginia 6', 'clifton a woodrum', 'democratic', '1922', 're - elected', 'clifton a woodrum ( d ) unopposed'], ['virginia 8', 'r walton moore', 'democratic', '1919', 're - elected', 'r walton moore ( d ) 95.5 % j w leedy ( r ) 4.5 %'], ['virginia 9', 'george c peery', 'democratic', '1922', 're - elected', 'george c peery ( d ) 53.4 % s r hurley ( r ) 46.6 %']] |
water resources management in chile | https://en.wikipedia.org/wiki/Water_resources_management_in_Chile | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-22854436-1.html.csv | count | in water resources management in chile , 2 of the ones with average annual rainfall ( mm ) of less than 100 has an average annual runoff ( mm ) of less than 1 . | {'scope': 'subset', 'criterion': 'less_than', 'value': '1', 'result': '2', 'col': '6', 'subset': {'col': '5', 'criterion': 'less_than', 'value': '100'}} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_less', 'args': [{'func': 'filter_less', 'args': ['all_rows', 'average annual rainfall ( mm )', '100'], 'result': None, 'ind': 0, 'tostr': 'filter_less { all_rows ; average annual rainfall ( mm ) ; 100 }', 'tointer': 'select the rows whose average annual rainfall ( mm ) record is less than 100 .'}, 'average annual runoff ( mm )', '1'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose average annual rainfall ( mm ) record is less than 100 . among these rows , select the rows whose average annual runoff ( mm ) record is less than 1 .', 'tostr': 'filter_less { filter_less { all_rows ; average annual rainfall ( mm ) ; 100 } ; average annual runoff ( mm ) ; 1 }'}], 'result': '2', 'ind': 2, 'tostr': 'count { filter_less { filter_less { all_rows ; average annual rainfall ( mm ) ; 100 } ; average annual runoff ( mm ) ; 1 } }', 'tointer': 'select the rows whose average annual rainfall ( mm ) record is less than 100 . among these rows , select the rows whose average annual runoff ( mm ) record is less than 1 . the number of such rows is 2 .'}, '2'], 'result': True, 'ind': 3, 'tostr': 'eq { count { filter_less { filter_less { all_rows ; average annual rainfall ( mm ) ; 100 } ; average annual runoff ( mm ) ; 1 } } ; 2 } = true', 'tointer': 'select the rows whose average annual rainfall ( mm ) record is less than 100 . among these rows , select the rows whose average annual runoff ( mm ) record is less than 1 . the number of such rows is 2 .'} | eq { count { filter_less { filter_less { all_rows ; average annual rainfall ( mm ) ; 100 } ; average annual runoff ( mm ) ; 1 } } ; 2 } = true | select the rows whose average annual rainfall ( mm ) record is less than 100 . among these rows , select the rows whose average annual runoff ( mm ) record is less than 1 . the number of such rows is 2 . | 4 | 4 | {'eq_3': 3, 'result_4': 4, 'count_2': 2, 'filter_less_1': 1, 'filter_less_0': 0, 'all_rows_5': 5, 'average annual rainfall (mm)_6': 6, '100_7': 7, 'average annual runoff (mm)_8': 8, '1_9': 9, '2_10': 10} | {'eq_3': 'eq', 'result_4': 'true', 'count_2': 'count', 'filter_less_1': 'filter_less', 'filter_less_0': 'filter_less', 'all_rows_5': 'all_rows', 'average annual rainfall (mm)_6': 'average annual rainfall ( mm )', '100_7': '100', 'average annual runoff (mm)_8': 'average annual runoff ( mm )', '1_9': '1', '2_10': '2'} | {'eq_3': [4], 'result_4': [], 'count_2': [3], 'filter_less_1': [2], 'filter_less_0': [1], 'all_rows_5': [0], 'average annual rainfall (mm)_6': [0], '100_7': [0], 'average annual runoff (mm)_8': [1], '1_9': [1], '2_10': [3]} | ['administrative region', 'population ( 2002 census data )', 'surface km 2', 'main rivers', 'average annual rainfall ( mm )', 'average annual runoff ( mm )', 'per capita average annual renewable water resources m 3'] | [['i - tarapacá', '428594', '58698', 'azapa river , vítor river and camarones river', '93.6', '7.1', '972'], ['ii - antofagasta', '493984', '126444', 'loa river', '44.5', '0.2', '51'], ['iii - atacama', '254336', '75573', 'salado river', '82.4', '0.7', '208'], ['iv - coquimbo', '603210', '40656', 'elqui river , choapa river and limarí river', '222', '18', '1213'], ['v - valparaíso', '1539852', '16396', 'petorca river , la ligua river and aconcagua river', '434', '84', '894'], ['metro region ( mr ) - santiago metropolitan', '7003122', '15349', 'maipo river', '650', '200', '438'], ['vii - maule', '908097', '30325', 'mataquito river and maule river', '1377', '784', '26181'], ['viii - biobío', '1861562', '36929', 'itata river , biobío river and laja river', '1766', '1173', '23270']] |
50 metre running target mixed | https://en.wikipedia.org/wiki/50_metre_running_target_mixed | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-18938213-3.html.csv | ordinal | among the nations who 've won no gold medals in the 50 metre running target mixed , slovakia has the second most total medals . | {'scope': 'subset', 'row': '11', 'col': '6', 'order': '2', 'col_other': '2', 'max_or_min': 'max_to_min', 'value_mentioned': 'no', 'subset': {'col': '3', 'criterion': 'equal', 'value': '0'}} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'nth_argmax', 'args': [{'func': 'filter_eq', 'args': ['all_rows', 'gold', '0'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; gold ; 0 }', 'tointer': 'select the rows whose gold record is equal to 0 .'}, 'total', '2'], 'result': None, 'ind': 1, 'tostr': 'nth_argmax { filter_eq { all_rows ; gold ; 0 } ; total ; 2 }'}, 'nation'], 'result': 'slovakia', 'ind': 2, 'tostr': 'hop { nth_argmax { filter_eq { all_rows ; gold ; 0 } ; total ; 2 } ; nation }'}, 'slovakia'], 'result': True, 'ind': 3, 'tostr': 'eq { hop { nth_argmax { filter_eq { all_rows ; gold ; 0 } ; total ; 2 } ; nation } ; slovakia } = true', 'tointer': 'select the rows whose gold record is equal to 0 . select the row whose total record of these rows is 2nd maximum . the nation record of this row is slovakia .'} | eq { hop { nth_argmax { filter_eq { all_rows ; gold ; 0 } ; total ; 2 } ; nation } ; slovakia } = true | select the rows whose gold record is equal to 0 . select the row whose total record of these rows is 2nd maximum . the nation record of this row is slovakia . | 4 | 4 | {'str_eq_3': 3, 'result_4': 4, 'str_hop_2': 2, 'nth_argmax_1': 1, 'filter_eq_0': 0, 'all_rows_5': 5, 'gold_6': 6, '0_7': 7, 'total_8': 8, '2_9': 9, 'nation_10': 10, 'slovakia_11': 11} | {'str_eq_3': 'str_eq', 'result_4': 'true', 'str_hop_2': 'str_hop', 'nth_argmax_1': 'nth_argmax', 'filter_eq_0': 'filter_eq', 'all_rows_5': 'all_rows', 'gold_6': 'gold', '0_7': '0', 'total_8': 'total', '2_9': '2', 'nation_10': 'nation', 'slovakia_11': 'slovakia'} | {'str_eq_3': [4], 'result_4': [], 'str_hop_2': [3], 'nth_argmax_1': [2], 'filter_eq_0': [1], 'all_rows_5': [0], 'gold_6': [0], '0_7': [0], 'total_8': [1], '2_9': [1], 'nation_10': [2], 'slovakia_11': [3]} | ['rank', 'nation', 'gold', 'silver', 'bronze', 'total'] | [['1', 'ussr', '13', '10', '2', '25'], ['2', 'czech republic', '4', '0', '3', '7'], ['3', 'russia', '3', '3', '1', '7'], ['4', 'hungary', '2', '4', '4', '10'], ['5', 'sweden', '2', '2', '5', '9'], ['6', 'china', '2', '2', '4', '8'], ['7', 'italy', '2', '0', '1', '3'], ['8', 'poland', '1', '1', '2', '4'], ['9', 'ukraine', '1', '1', '1', '3'], ['10', 'finland', '0', '3', '2', '5'], ['11', 'slovakia', '0', '2', '1', '3'], ['12', 'west germany', '0', '2', '0', '2'], ['13', 'united states', '0', '0', '2', '2'], ['14', 'colombia', '0', '0', '1', '1'], ['14', 'east germany', '0', '0', '1', '1'], ['total', 'total', '30', '30', '30', '90']] |
main line broadcasting | https://en.wikipedia.org/wiki/Main_Line_Broadcasting | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-19131921-1.html.csv | unique | wing-am is the only main line broadcasting radio station with a sports format . | {'scope': 'all', 'row': '14', 'col': '6', 'col_other': '3', 'criterion': 'equal', 'value': 'sports', 'subset': None} | {'func': 'and', 'args': [{'func': 'only', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'format', 'sports'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose format record fuzzily matches to sports .', 'tostr': 'filter_eq { all_rows ; format ; sports }'}], 'result': True, 'ind': 1, 'tostr': 'only { filter_eq { all_rows ; format ; sports } }', 'tointer': 'select the rows whose format record fuzzily matches to sports . there is only one such row in the table .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'format', 'sports'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose format record fuzzily matches to sports .', 'tostr': 'filter_eq { all_rows ; format ; sports }'}, 'station'], 'result': 'wing - am', 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; format ; sports } ; station }'}, 'wing - am'], 'result': True, 'ind': 3, 'tostr': 'eq { hop { filter_eq { all_rows ; format ; sports } ; station } ; wing - am }', 'tointer': 'the station record of this unqiue row is wing - am .'}], 'result': True, 'ind': 4, 'tostr': 'and { only { filter_eq { all_rows ; format ; sports } } ; eq { hop { filter_eq { all_rows ; format ; sports } ; station } ; wing - am } } = true', 'tointer': 'select the rows whose format record fuzzily matches to sports . there is only one such row in the table . the station record of this unqiue row is wing - am .'} | and { only { filter_eq { all_rows ; format ; sports } } ; eq { hop { filter_eq { all_rows ; format ; sports } ; station } ; wing - am } } = true | select the rows whose format record fuzzily matches to sports . there is only one such row in the table . the station record of this unqiue row is wing - am . | 6 | 5 | {'and_4': 4, 'result_5': 5, 'only_1': 1, 'filter_str_eq_0': 0, 'all_rows_6': 6, 'format_7': 7, 'sports_8': 8, 'str_eq_3': 3, 'str_hop_2': 2, 'station_9': 9, 'wing - am_10': 10} | {'and_4': 'and', 'result_5': 'true', 'only_1': 'only', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_6': 'all_rows', 'format_7': 'format', 'sports_8': 'sports', 'str_eq_3': 'str_eq', 'str_hop_2': 'str_hop', 'station_9': 'station', 'wing - am_10': 'wing - am'} | {'and_4': [5], 'result_5': [], 'only_1': [4], 'filter_str_eq_0': [1, 2], 'all_rows_6': [0], 'format_7': [0], 'sports_8': [0], 'str_eq_3': [4], 'str_hop_2': [3], 'station_9': [2], 'wing - am_10': [3]} | ['dma', 'market', 'station', 'frequency', 'branding', 'format'] | [['53', 'louisville , ky', 'wgzb - fm', '96.5', 'b96 .5', 'urban'], ['53', 'louisville , ky', 'wdjx - fm', '99.7', '99.7 djx', 'contemporary hit radio'], ['53', 'louisville , ky', 'wmjm - fm', '101.3', 'magic 101.3', 'urban ac'], ['53', 'louisville , ky', 'wxma - fm', '102.3', '102.3 the max', 'hot ac'], ['53', 'louisville , ky', 'wesi', '105.1', 'easy rock 105.1', 'soft adult contemporary'], ['56', 'richmond - petersburg , va', 'wlfv - fm', '93.1', '93.1 the wolf', 'southern country'], ['56', 'richmond - petersburg , va', 'wwlb - fm', '98.9', '98.9 liberty', 'variety hits'], ['56', 'richmond - petersburg , va', 'warv - fm', '100.3', 'big oldies 107.3', 'oldies'], ['56', 'richmond - petersburg , va', 'wbbt - fm', '107.3', 'big oldies 107.3', 'oldies'], ['60', 'dayton , oh', 'wrou - fm', '92.1', '92.1 wrou', 'urban ac'], ['60', 'dayton , oh', 'wgtz - fm', '92.9', 'fly 92.9', 'variety hits'], ['60', 'dayton , oh', 'wcli - fm', '101.5', 'click 101.5', 'modern hit music'], ['60', 'dayton , oh', 'wdht - fm', '102.9', 'hot 102.9', 'rhythmic contemporary'], ['60', 'dayton , oh', 'wing - am', '1410', 'espn 1410', 'sports'], ['166', 'hagerstown , md - chambersburg , pa', 'wqcm - fm', '94.3', '94.3 wqcm', 'rock'], ['166', 'hagerstown , md - chambersburg , pa', 'wikz - fm', '95.1', 'mix 95.1', 'adult contemporary'], ['166', 'hagerstown , md - chambersburg , pa', 'wdld - fm', '96.7', 'wild 96.7', 'rhythmic contemporary hit radio'], ['166', 'hagerstown , md - chambersburg , pa', 'wcha - am', '800', 'true oldies 96.3', 'oldies'], ['166', 'hagerstown , md - chambersburg , pa', 'whag - am', '1410', 'true oldies 96.3', 'oldies']] |
1983 formula one season | https://en.wikipedia.org/wiki/1983_Formula_One_season | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-1140074-2.html.csv | unique | in the 1983 formula one season , the only race that was won by michele alboreto was on june 5th . | {'scope': 'all', 'row': '7', 'col': '7', 'col_other': '3', 'criterion': 'equal', 'value': 'michele alboreto', 'subset': None} | {'func': 'and', 'args': [{'func': 'only', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'race winner', 'michele alboreto'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose race winner record fuzzily matches to michele alboreto .', 'tostr': 'filter_eq { all_rows ; race winner ; michele alboreto }'}], 'result': True, 'ind': 1, 'tostr': 'only { filter_eq { all_rows ; race winner ; michele alboreto } }', 'tointer': 'select the rows whose race winner record fuzzily matches to michele alboreto . there is only one such row in the table .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'race winner', 'michele alboreto'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose race winner record fuzzily matches to michele alboreto .', 'tostr': 'filter_eq { all_rows ; race winner ; michele alboreto }'}, 'date'], 'result': '5 june', 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; race winner ; michele alboreto } ; date }'}, '5 june'], 'result': True, 'ind': 3, 'tostr': 'eq { hop { filter_eq { all_rows ; race winner ; michele alboreto } ; date } ; 5 june }', 'tointer': 'the date record of this unqiue row is 5 june .'}], 'result': True, 'ind': 4, 'tostr': 'and { only { filter_eq { all_rows ; race winner ; michele alboreto } } ; eq { hop { filter_eq { all_rows ; race winner ; michele alboreto } ; date } ; 5 june } } = true', 'tointer': 'select the rows whose race winner record fuzzily matches to michele alboreto . there is only one such row in the table . the date record of this unqiue row is 5 june .'} | and { only { filter_eq { all_rows ; race winner ; michele alboreto } } ; eq { hop { filter_eq { all_rows ; race winner ; michele alboreto } ; date } ; 5 june } } = true | select the rows whose race winner record fuzzily matches to michele alboreto . there is only one such row in the table . the date record of this unqiue row is 5 june . | 6 | 5 | {'and_4': 4, 'result_5': 5, 'only_1': 1, 'filter_str_eq_0': 0, 'all_rows_6': 6, 'race winner_7': 7, 'michele alboreto_8': 8, 'str_eq_3': 3, 'str_hop_2': 2, 'date_9': 9, '5 june_10': 10} | {'and_4': 'and', 'result_5': 'true', 'only_1': 'only', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_6': 'all_rows', 'race winner_7': 'race winner', 'michele alboreto_8': 'michele alboreto', 'str_eq_3': 'str_eq', 'str_hop_2': 'str_hop', 'date_9': 'date', '5 june_10': '5 june'} | {'and_4': [5], 'result_5': [], 'only_1': [4], 'filter_str_eq_0': [1, 2], 'all_rows_6': [0], 'race winner_7': [0], 'michele alboreto_8': [0], 'str_eq_3': [4], 'str_hop_2': [3], 'date_9': [2], '5 june_10': [3]} | ['rnd', 'race', 'date', 'location', 'pole position', 'fastest lap', 'race winner', 'constructor', 'report'] | [['1', 'brazilian grand prix', '13 march', 'jacarepaguá', 'keke rosberg', 'nelson piquet', 'nelson piquet', 'brabham - bmw', 'report'], ['2', 'united states grand prix west', '27 march', 'long beach', 'patrick tambay', 'niki lauda', 'john watson', 'mclaren - ford', 'report'], ['3', 'french grand prix', '17 april', 'paul ricard', 'alain prost', 'alain prost', 'alain prost', 'renault', 'report'], ['4', 'san marino grand prix', '1 may', 'imola', 'rené arnoux', 'riccardo patrese', 'patrick tambay', 'ferrari', 'report'], ['5', 'monaco grand prix', '15 may', 'monaco', 'alain prost', 'nelson piquet', 'keke rosberg', 'williams - ford', 'report'], ['6', 'belgian grand prix', '22 may', 'spa - francorchamps', 'alain prost', 'andrea de cesaris', 'alain prost', 'renault', 'report'], ['7', 'detroit grand prix', '5 june', 'detroit', 'rené arnoux', 'john watson', 'michele alboreto', 'tyrrell - ford', 'report'], ['8', 'canadian grand prix', '12 june', 'circuit gilles villeneuve', 'rené arnoux', 'patrick tambay', 'rené arnoux', 'ferrari', 'report'], ['9', 'british grand prix', '16 july', 'silverstone', 'rené arnoux', 'alain prost', 'alain prost', 'renault', 'report'], ['10', 'german grand prix', '7 august', 'hockenheimring', 'patrick tambay', 'rené arnoux', 'rené arnoux', 'ferrari', 'report'], ['11', 'austrian grand prix', '14 august', 'österreichring', 'patrick tambay', 'alain prost', 'alain prost', 'renault', 'report'], ['12', 'dutch grand prix', '28 august', 'zandvoort', 'nelson piquet', 'rené arnoux', 'rené arnoux', 'ferrari', 'report'], ['13', 'italian grand prix', '11 september', 'monza', 'riccardo patrese', 'nelson piquet', 'nelson piquet', 'brabham - bmw', 'report'], ['14', 'european grand prix', '25 september', 'brands hatch', 'elio de angelis', 'nigel mansell', 'nelson piquet', 'brabham - bmw', 'report']] |
1952 vfl season | https://en.wikipedia.org/wiki/1952_VFL_season | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-10750694-2.html.csv | count | the away team had a score of 13.11 twice when there were more than 20000 in the crowd , during round two of the 1952 vfl season . | {'scope': 'subset', 'criterion': 'equal', 'value': '13.11', 'result': '2', 'col': '4', 'subset': {'col': '6', 'criterion': 'greater_than', 'value': '20000'}} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_eq', 'args': [{'func': 'filter_greater', 'args': ['all_rows', 'crowd', '20000'], 'result': None, 'ind': 0, 'tostr': 'filter_greater { all_rows ; crowd ; 20000 }', 'tointer': 'select the rows whose crowd record is greater than 20000 .'}, 'away team score', '13.11'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose crowd record is greater than 20000 . among these rows , select the rows whose away team score record is equal to 13.11 .', 'tostr': 'filter_eq { filter_greater { all_rows ; crowd ; 20000 } ; away team score ; 13.11 }'}], 'result': '2', 'ind': 2, 'tostr': 'count { filter_eq { filter_greater { all_rows ; crowd ; 20000 } ; away team score ; 13.11 } }', 'tointer': 'select the rows whose crowd record is greater than 20000 . among these rows , select the rows whose away team score record is equal to 13.11 . the number of such rows is 2 .'}, '2'], 'result': True, 'ind': 3, 'tostr': 'eq { count { filter_eq { filter_greater { all_rows ; crowd ; 20000 } ; away team score ; 13.11 } } ; 2 } = true', 'tointer': 'select the rows whose crowd record is greater than 20000 . among these rows , select the rows whose away team score record is equal to 13.11 . the number of such rows is 2 .'} | eq { count { filter_eq { filter_greater { all_rows ; crowd ; 20000 } ; away team score ; 13.11 } } ; 2 } = true | select the rows whose crowd record is greater than 20000 . among these rows , select the rows whose away team score record is equal to 13.11 . the number of such rows is 2 . | 4 | 4 | {'eq_3': 3, 'result_4': 4, 'count_2': 2, 'filter_eq_1': 1, 'filter_greater_0': 0, 'all_rows_5': 5, 'crowd_6': 6, '20000_7': 7, 'away team score_8': 8, '13.11_9': 9, '2_10': 10} | {'eq_3': 'eq', 'result_4': 'true', 'count_2': 'count', 'filter_eq_1': 'filter_eq', 'filter_greater_0': 'filter_greater', 'all_rows_5': 'all_rows', 'crowd_6': 'crowd', '20000_7': '20000', 'away team score_8': 'away team score', '13.11_9': '13.11', '2_10': '2'} | {'eq_3': [4], 'result_4': [], 'count_2': [3], 'filter_eq_1': [2], 'filter_greater_0': [1], 'all_rows_5': [0], 'crowd_6': [0], '20000_7': [0], 'away team score_8': [1], '13.11_9': [1], '2_10': [3]} | ['home team', 'home team score', 'away team', 'away team score', 'venue', 'crowd', 'date'] | [['essendon', '14.12 ( 96 )', 'north melbourne', '13.11 ( 89 )', 'windy hill', '21000', '26 april 1952'], ['collingwood', '13.17 ( 95 )', 'fitzroy', '8.14 ( 62 )', 'victoria park', '26000', '26 april 1952'], ['carlton', '12.12 ( 84 )', 'south melbourne', '13.11 ( 89 )', 'princes park', '31500', '26 april 1952'], ['st kilda', '13.11 ( 89 )', 'melbourne', '12.19 ( 91 )', 'junction oval', '19000', '26 april 1952'], ['richmond', '12.10 ( 82 )', 'hawthorn', '8.14 ( 62 )', 'punt road oval', '13000', '26 april 1952'], ['geelong', '10.4 ( 64 )', 'footscray', '7.15 ( 57 )', 'kardinia park', '30000', '26 april 1952']] |
list of england national rugby union team results 1960 - 69 | https://en.wikipedia.org/wiki/List_of_England_national_rugby_union_team_results_1960%E2%80%9369 | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-18179114-2.html.csv | count | four of the england national rugby union team matches in 1961 had a five nations status . | {'scope': 'all', 'criterion': 'equal', 'value': 'five nations', 'result': '4', 'col': '5', 'subset': None} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'status', 'five nations'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose status record fuzzily matches to five nations .', 'tostr': 'filter_eq { all_rows ; status ; five nations }'}], 'result': '4', 'ind': 1, 'tostr': 'count { filter_eq { all_rows ; status ; five nations } }', 'tointer': 'select the rows whose status record fuzzily matches to five nations . the number of such rows is 4 .'}, '4'], 'result': True, 'ind': 2, 'tostr': 'eq { count { filter_eq { all_rows ; status ; five nations } } ; 4 } = true', 'tointer': 'select the rows whose status record fuzzily matches to five nations . the number of such rows is 4 .'} | eq { count { filter_eq { all_rows ; status ; five nations } } ; 4 } = true | select the rows whose status record fuzzily matches to five nations . the number of such rows is 4 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'count_1': 1, 'filter_str_eq_0': 0, 'all_rows_4': 4, 'status_5': 5, 'five nations_6': 6, '4_7': 7} | {'eq_2': 'eq', 'result_3': 'true', 'count_1': 'count', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_4': 'all_rows', 'status_5': 'status', 'five nations_6': 'five nations', '4_7': '4'} | {'eq_2': [3], 'result_3': [], 'count_1': [2], 'filter_str_eq_0': [1], 'all_rows_4': [0], 'status_5': [0], 'five nations_6': [0], '4_7': [2]} | ['opposing teams', 'against', 'date', 'venue', 'status'] | [['south africa', '5', '07 / 01 / 1961', 'twickenham , london', 'test match'], ['wales', '6', '21 / 01 / 1961', 'cardiff arms park , cardiff', 'five nations'], ['ireland', '11', '11 / 02 / 1961', 'lansdowne road , dublin', 'five nations'], ['france', '5', '25 / 02 / 1961', 'twickenham , london', 'five nations'], ['scotland', '0', '18 / 03 / 1961', 'twickenham , london', 'five nations']] |
list of prime ministers of albania | https://en.wikipedia.org/wiki/List_of_Prime_Ministers_of_Albania | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-167235-4.html.csv | comparative | pandeli evangjeli 's term as prime minister of albania began earlier in the calendar year than mehdi bej frashëri 's . | {'row_1': '3', 'row_2': '4', 'col': '3', 'col_other': '1', 'relation': 'less', 'record_mentioned': 'yes', 'diff_result': None} | {'func': 'and', 'args': [{'func': 'less', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'name', 'pandeli evangjeli ( 2nd time )'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose name record fuzzily matches to pandeli evangjeli ( 2nd time ) .', 'tostr': 'filter_eq { all_rows ; name ; pandeli evangjeli ( 2nd time ) }'}, 'term start'], 'result': None, 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; name ; pandeli evangjeli ( 2nd time ) } ; term start }', 'tointer': 'select the rows whose name record fuzzily matches to pandeli evangjeli ( 2nd time ) . take the term start record of this row .'}, {'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'name', 'mehdi bej frashëri ( 1st time )'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose name record fuzzily matches to mehdi bej frashëri ( 1st time ) .', 'tostr': 'filter_eq { all_rows ; name ; mehdi bej frashëri ( 1st time ) }'}, 'term start'], 'result': None, 'ind': 3, 'tostr': 'hop { filter_eq { all_rows ; name ; mehdi bej frashëri ( 1st time ) } ; term start }', 'tointer': 'select the rows whose name record fuzzily matches to mehdi bej frashëri ( 1st time ) . take the term start record of this row .'}], 'result': True, 'ind': 4, 'tostr': 'less { hop { filter_eq { all_rows ; name ; pandeli evangjeli ( 2nd time ) } ; term start } ; hop { filter_eq { all_rows ; name ; mehdi bej frashëri ( 1st time ) } ; term start } }', 'tointer': 'select the rows whose name record fuzzily matches to pandeli evangjeli ( 2nd time ) . take the term start record of this row . select the rows whose name record fuzzily matches to mehdi bej frashëri ( 1st time ) . take the term start record of this row . the first record is less than the second record .'}, {'func': 'and', 'args': [{'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'name', 'pandeli evangjeli ( 2nd time )'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose name record fuzzily matches to pandeli evangjeli ( 2nd time ) .', 'tostr': 'filter_eq { all_rows ; name ; pandeli evangjeli ( 2nd time ) }'}, 'term start'], 'result': None, 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; name ; pandeli evangjeli ( 2nd time ) } ; term start }', 'tointer': 'select the rows whose name record fuzzily matches to pandeli evangjeli ( 2nd time ) . take the term start record of this row .'}, '6 march 1930'], 'result': True, 'ind': 5, 'tostr': 'eq { hop { filter_eq { all_rows ; name ; pandeli evangjeli ( 2nd time ) } ; term start } ; 6 march 1930 }', 'tointer': 'the term start record of the first row is 6 march 1930 .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'name', 'mehdi bej frashëri ( 1st time )'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose name record fuzzily matches to mehdi bej frashëri ( 1st time ) .', 'tostr': 'filter_eq { all_rows ; name ; mehdi bej frashëri ( 1st time ) }'}, 'term start'], 'result': None, 'ind': 3, 'tostr': 'hop { filter_eq { all_rows ; name ; mehdi bej frashëri ( 1st time ) } ; term start }', 'tointer': 'select the rows whose name record fuzzily matches to mehdi bej frashëri ( 1st time ) . take the term start record of this row .'}, '22 october 1935'], 'result': True, 'ind': 6, 'tostr': 'eq { hop { filter_eq { all_rows ; name ; mehdi bej frashëri ( 1st time ) } ; term start } ; 22 october 1935 }', 'tointer': 'the term start record of the second row is 22 october 1935 .'}], 'result': True, 'ind': 7, 'tostr': 'and { eq { hop { filter_eq { all_rows ; name ; pandeli evangjeli ( 2nd time ) } ; term start } ; 6 march 1930 } ; eq { hop { filter_eq { all_rows ; name ; mehdi bej frashëri ( 1st time ) } ; term start } ; 22 october 1935 } }', 'tointer': 'the term start record of the first row is 6 march 1930 . the term start record of the second row is 22 october 1935 .'}], 'result': True, 'ind': 8, 'tostr': 'and { less { hop { filter_eq { all_rows ; name ; pandeli evangjeli ( 2nd time ) } ; term start } ; hop { filter_eq { all_rows ; name ; mehdi bej frashëri ( 1st time ) } ; term start } } ; and { eq { hop { filter_eq { all_rows ; name ; pandeli evangjeli ( 2nd time ) } ; term start } ; 6 march 1930 } ; eq { hop { filter_eq { all_rows ; name ; mehdi bej frashëri ( 1st time ) } ; term start } ; 22 october 1935 } } } = true', 'tointer': 'select the rows whose name record fuzzily matches to pandeli evangjeli ( 2nd time ) . take the term start record of this row . select the rows whose name record fuzzily matches to mehdi bej frashëri ( 1st time ) . take the term start record of this row . the first record is less than the second record . the term start record of the first row is 6 march 1930 . the term start record of the second row is 22 october 1935 .'} | and { less { hop { filter_eq { all_rows ; name ; pandeli evangjeli ( 2nd time ) } ; term start } ; hop { filter_eq { all_rows ; name ; mehdi bej frashëri ( 1st time ) } ; term start } } ; and { eq { hop { filter_eq { all_rows ; name ; pandeli evangjeli ( 2nd time ) } ; term start } ; 6 march 1930 } ; eq { hop { filter_eq { all_rows ; name ; mehdi bej frashëri ( 1st time ) } ; term start } ; 22 october 1935 } } } = true | select the rows whose name record fuzzily matches to pandeli evangjeli ( 2nd time ) . take the term start record of this row . select the rows whose name record fuzzily matches to mehdi bej frashëri ( 1st time ) . take the term start record of this row . the first record is less than the second record . the term start record of the first row is 6 march 1930 . the term start record of the second row is 22 october 1935 . | 13 | 9 | {'and_8': 8, 'result_9': 9, 'less_4': 4, 'str_hop_2': 2, 'filter_str_eq_0': 0, 'all_rows_10': 10, 'name_11': 11, 'pandeli evangjeli (2nd time)_12': 12, 'term start_13': 13, 'str_hop_3': 3, 'filter_str_eq_1': 1, 'all_rows_14': 14, 'name_15': 15, 'mehdi bej frashëri (1st time)_16': 16, 'term start_17': 17, 'and_7': 7, 'str_eq_5': 5, '6 march 1930_18': 18, 'str_eq_6': 6, '22 october 1935_19': 19} | {'and_8': 'and', 'result_9': 'true', 'less_4': 'less', 'str_hop_2': 'str_hop', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_10': 'all_rows', 'name_11': 'name', 'pandeli evangjeli (2nd time)_12': 'pandeli evangjeli ( 2nd time )', 'term start_13': 'term start', 'str_hop_3': 'str_hop', 'filter_str_eq_1': 'filter_str_eq', 'all_rows_14': 'all_rows', 'name_15': 'name', 'mehdi bej frashëri (1st time)_16': 'mehdi bej frashëri ( 1st time )', 'term start_17': 'term start', 'and_7': 'and', 'str_eq_5': 'str_eq', '6 march 1930_18': '6 march 1930', 'str_eq_6': 'str_eq', '22 october 1935_19': '22 october 1935'} | {'and_8': [9], 'result_9': [], 'less_4': [8], 'str_hop_2': [4, 5], 'filter_str_eq_0': [2], 'all_rows_10': [0], 'name_11': [0], 'pandeli evangjeli (2nd time)_12': [0], 'term start_13': [2], 'str_hop_3': [4, 6], 'filter_str_eq_1': [3], 'all_rows_14': [1], 'name_15': [1], 'mehdi bej frashëri (1st time)_16': [1], 'term start_17': [3], 'and_7': [8], 'str_eq_5': [7], '6 march 1930_18': [5], 'str_eq_6': [7], '22 october 1935_19': [6]} | ['name', 'born - died', 'term start', 'term end', 'political party'] | [['prime ministers 1928 - 1939', 'prime ministers 1928 - 1939', 'prime ministers 1928 - 1939', 'prime ministers 1928 - 1939', 'prime ministers 1928 - 1939'], ['kostaq kota ( 1st time )', '1889 - 1949', '5 september 1928', '5 march 1930', 'non - party'], ['pandeli evangjeli ( 2nd time )', '1859 - 1939', '6 march 1930', '16 october 1935', 'non - party'], ['mehdi bej frashëri ( 1st time )', '1872 - 1963', '22 october 1935', '9 november 1936', 'non - party'], ['kostaq kota ( 2nd time )', '1889 - 1949', '9 november 1936', '8 april 1939', 'non - party']]
the apprentice new zealand | https://en.wikipedia.org/wiki/The_Apprentice_New_Zealand | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-26263322-1.html.csv | aggregation | the average age for candidates in the apprentice new zealand was 29 . | {'scope': 'all', 'col': '4', 'type': 'average', 'result': '29', 'subset': None} | {'func': 'round_eq', 'args': [{'func': 'avg', 'args': ['all_rows', 'age'], 'result': '29', 'ind': 0, 'tostr': 'avg { all_rows ; age }'}, '29'], 'result': True, 'ind': 1, 'tostr': 'round_eq { avg { all_rows ; age } ; 29 } = true', 'tointer': 'the average of the age record of all rows is 29 .'} | round_eq { avg { all_rows ; age } ; 29 } = true | the average of the age record of all rows is 29 . | 2 | 2 | {'eq_1': 1, 'result_2': 2, 'avg_0': 0, 'all_rows_3': 3, 'age_4': 4, '29_5': 5} | {'eq_1': 'eq', 'result_2': 'true', 'avg_0': 'avg', 'all_rows_3': 'all_rows', 'age_4': 'age', '29_5': '29'} | {'eq_1': [2], 'result_2': [], 'avg_0': [1], 'all_rows_3': [0], 'age_4': [0], '29_5': [1]} | ['candidate', 'background', 'original team', 'age', 'hometown', 'result'] | [['thomas ben', 'divisional manager', 'number 8', '34', 'auckland', 'hired by serepisos'], ['david wyatt', 'self - employed - media agency', 'number 8', '27', 'auckland', 'fired in the season finale'], ['catherine livingstone', 'self - employed - concierge service', 'athena', '33', 'auckland', 'fired in week 12'], ['karen reid', 'self - employed - practices in alternative medicine', 'athena', '33', 'auckland', 'fired in week 11'], ['linda slade', 'university student', 'athena', '21', 'christchurch', 'fired in week 10'], ['nicky clarke', 'pr specialist', 'athena', '28', 'auckland', 'fired in week 9'], ['daniel phillips', 'advertising account manager', 'number 8', '31', 'auckland', 'fired in week 8'], ['meena chhagan', 'accountant', 'athena', '24', 'wellington', 'fired in week 7'], ['richard henry', 'getfrank founder', 'number 8', '26', 'auckland', 'fired in week 7'], ['paul natac', 'infringement relationship manager', 'number 8', '28', 'auckland', 'fired in week 6'], ['chris whiteside', 'accountant', 'number 8', '28', 'christchurch', 'fired in week 4'], ['kirsty parkhill', 'business development manager', 'athena', '35', 'wellington', 'fired in week 3']] |
2007 bombardier learjet 550 | https://en.wikipedia.org/wiki/2007_Bombardier_Learjet_550 | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-17319931-1.html.csv | comparative | sarah fisher completed more laps than dan wheldon in the 2007 bombardier learjet 550 race . | {'row_1': '10', 'row_2': '15', 'col': '5', 'col_other': '3', 'relation': 'greater', 'record_mentioned': 'no', 'diff_result': None} | {'func': 'greater', 'args': [{'func': 'num_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'driver', 'sarah fisher'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose driver record fuzzily matches to sarah fisher .', 'tostr': 'filter_eq { all_rows ; driver ; sarah fisher }'}, 'laps'], 'result': None, 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; driver ; sarah fisher } ; laps }', 'tointer': 'select the rows whose driver record fuzzily matches to sarah fisher . take the laps record of this row .'}, {'func': 'num_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'driver', 'dan wheldon'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose driver record fuzzily matches to dan wheldon .', 'tostr': 'filter_eq { all_rows ; driver ; dan wheldon }'}, 'laps'], 'result': None, 'ind': 3, 'tostr': 'hop { filter_eq { all_rows ; driver ; dan wheldon } ; laps }', 'tointer': 'select the rows whose driver record fuzzily matches to dan wheldon . take the laps record of this row .'}], 'result': True, 'ind': 4, 'tostr': 'greater { hop { filter_eq { all_rows ; driver ; sarah fisher } ; laps } ; hop { filter_eq { all_rows ; driver ; dan wheldon } ; laps } } = true', 'tointer': 'select the rows whose driver record fuzzily matches to sarah fisher . take the laps record of this row . select the rows whose driver record fuzzily matches to dan wheldon . take the laps record of this row . the first record is greater than the second record .'} | greater { hop { filter_eq { all_rows ; driver ; sarah fisher } ; laps } ; hop { filter_eq { all_rows ; driver ; dan wheldon } ; laps } } = true | select the rows whose driver record fuzzily matches to sarah fisher . take the laps record of this row . select the rows whose driver record fuzzily matches to dan wheldon . take the laps record of this row . the first record is greater than the second record . | 5 | 5 | {'greater_4': 4, 'result_5': 5, 'num_hop_2': 2, 'filter_str_eq_0': 0, 'all_rows_6': 6, 'driver_7': 7, 'sarah fisher_8': 8, 'laps_9': 9, 'num_hop_3': 3, 'filter_str_eq_1': 1, 'all_rows_10': 10, 'driver_11': 11, 'dan wheldon_12': 12, 'laps_13': 13} | {'greater_4': 'greater', 'result_5': 'true', 'num_hop_2': 'num_hop', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_6': 'all_rows', 'driver_7': 'driver', 'sarah fisher_8': 'sarah fisher', 'laps_9': 'laps', 'num_hop_3': 'num_hop', 'filter_str_eq_1': 'filter_str_eq', 'all_rows_10': 'all_rows', 'driver_11': 'driver', 'dan wheldon_12': 'dan wheldon', 'laps_13': 'laps'} | {'greater_4': [5], 'result_5': [], 'num_hop_2': [4], 'filter_str_eq_0': [2], 'all_rows_6': [0], 'driver_7': [0], 'sarah fisher_8': [0], 'laps_9': [2], 'num_hop_3': [4], 'filter_str_eq_1': [3], 'all_rows_10': [1], 'driver_11': [1], 'dan wheldon_12': [1], 'laps_13': [3]} | ['fin pos', 'car no', 'driver', 'team', 'laps', 'time / retired', 'grid', 'laps led', 'points'] | [['1', '6', 'sam hornish , jr', 'team penske', '228', '1:52:15.2873', '2', '159', '50 + 3'], ['2', '11', 'tony kanaan', 'andretti green', '228', '+ 0.0786', '4', '1', '40'], ['3', '7', 'danica patrick', 'andretti green', '228', '+ 0.3844', '6', '2', '35'], ['4', '27', 'dario franchitti', 'andretti green', '228', '+ 3.9765', '3', '0', '32'], ['5', '4', 'vitor meira', 'panther racing', '228', '+ 4.0019', '13', '3', '30'], ['6', '17', 'jeff simmons', 'rahal letterman', '228', '+ 4.6340', '8', '5', '28'], ['7', '8', 'scott sharp', 'rahal letterman', '227', '+ 1 lap', '1', '0', '26'], ['8', '15', 'buddy rice', 'dreyer & reinbold racing', '225', '+ 3 laps', '16', '0', '24'], ['9', '55', 'kosuke matsuura', 'panther racing', '225', '+ 3 laps', '15', '0', '22'], ['10', '5', 'sarah fisher', 'dreyer & reinbold racing', '221', '+ 7 laps', '18', '0', '20'], ['11', '23', 'milka duno ( r )', 'samax motorsport', '221', '+ 7 laps', '19', '0', '19'], ['12', '9', 'scott dixon', 'target chip ganassi', '206', '+ 22 laps', '7', '6', '18'], ['13', '14', 'darren manning', 'aj foyt racing', '200', 'mechanical', '11', '0', '17'], ['14', '2', 'tomas scheckter', 'vision racing', '199', '+ 29 laps', '9', '0', '16'], ['15', '10', 'dan wheldon', 'target chip ganassi', '196', 'collision', '5', '52', '15'], ['16', '3', 'hãlio castroneves', 'team penske', '196', 'collision', '10', '0', '14'], ['17', '22', 'a j foyt iv', 'vision racing', '195', 'tire', '17', '0', '13'], ['18', '20', 'ed carpenter', 'vision racing', '195', 'collision', '20', '0', '12'], ['19', '26', 'marco andretti', 'andretti green', '140', 'mechanical', '12', '0', '12'], ['20', '19', 'jon herb', 'racing professionals', '44', 'accident', '14', '0', '12']]
list of amusement park rankings | https://en.wikipedia.org/wiki/List_of_amusement_park_rankings | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-16578883-3.html.csv | superlative | an amusement park in lake buena vista , florida , usa drew the highest amount of attendance in the year 2012 . | {'scope': 'all', 'col_superlative': '7', 'row_superlative': '1', 'value_mentioned': 'no', 'max_or_min': 'max', 'other_col': '2', 'subset': None} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'argmax', 'args': ['all_rows', '2012'], 'result': None, 'ind': 0, 'tostr': 'argmax { all_rows ; 2012 }'}, 'location'], 'result': 'lake buena vista , florida , usa', 'ind': 1, 'tostr': 'hop { argmax { all_rows ; 2012 } ; location }'}, 'lake buena vista , florida , usa'], 'result': True, 'ind': 2, 'tostr': 'eq { hop { argmax { all_rows ; 2012 } ; location } ; lake buena vista , florida , usa } = true', 'tointer': 'select the row whose 2012 record of all rows is maximum . the location record of this row is lake buena vista , florida , usa .'} | eq { hop { argmax { all_rows ; 2012 } ; location } ; lake buena vista , florida , usa } = true | select the row whose 2012 record of all rows is maximum . the location record of this row is lake buena vista , florida , usa . | 3 | 3 | {'str_eq_2': 2, 'result_3': 3, 'str_hop_1': 1, 'argmax_0': 0, 'all_rows_4': 4, '2012_5': 5, 'location_6': 6, 'lake buena vista , florida , usa_7': 7} | {'str_eq_2': 'str_eq', 'result_3': 'true', 'str_hop_1': 'str_hop', 'argmax_0': 'argmax', 'all_rows_4': 'all_rows', '2012_5': '2012', 'location_6': 'location', 'lake buena vista , florida , usa_7': 'lake buena vista , florida , usa'} | {'str_eq_2': [3], 'result_3': [], 'str_hop_1': [2], 'argmax_0': [1], 'all_rows_4': [0], '2012_5': [0], 'location_6': [1], 'lake buena vista , florida , usa_7': [2]} | ['rank', 'location', '2008', '2009', '2010', '2011', '2012'] | [['1', 'lake buena vista , florida , usa', '17063000', '17233000', '16972000', '17142000', '17536000'], ['2', 'anaheim , california , usa', '14721000', '15900000', '15980000', '16140000', '15963000'], ['3', 'lake buena vista , florida , usa', '10935000', '10990000', '10825000', '10825000', '11063000'], ['4', 'lake buena vista , florida , usa', '9540000', '9590000', '9686000', '9783000', '9998000'], ['5', 'lake buena vista , florida , usa', '9608000', '9700000', '9603000', '9699000', '9912000'], ['6', 'orlando , florida , usa', '5297000', '4627000', '5949000', '7674000', '7981000'], ['7', 'anaheim , california , usa', '5566000', '6095000', '6278000', '6341000', '7775000'], ['8', 'orlando , florida , usa', '6231000', '5530000', '5925000', '6044000', '6195000'], ['9', 'universal city , california , usa', '4583000', '4308000', '5040000', '5141000', '5912000'], ['10', 'orlando , florida , usa', '5926000', '5800000', '5100000', '5202000', '5358000']] |
1996 pga tour | https://en.wikipedia.org/wiki/1996_PGA_Tour | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-14611511-3.html.csv | aggregation | in the 1996 pga tour , the top 5 players earned a combined $ 7,546,842 . | {'scope': 'all', 'col': '4', 'type': 'sum', 'result': '7546842', 'subset': None} | {'func': 'round_eq', 'args': [{'func': 'sum', 'args': ['all_rows', 'earnings'], 'result': '7546842', 'ind': 0, 'tostr': 'sum { all_rows ; earnings }'}, '7546842'], 'result': True, 'ind': 1, 'tostr': 'round_eq { sum { all_rows ; earnings } ; 7546842 } = true', 'tointer': 'the sum of the earnings record of all rows is 7546842 .'} | round_eq { sum { all_rows ; earnings } ; 7546842 } = true | the sum of the earnings record of all rows is 7546842 . | 2 | 2 | {'eq_1': 1, 'result_2': 2, 'sum_0': 0, 'all_rows_3': 3, 'earnings_4': 4, '7546842_5': 5} | {'eq_1': 'eq', 'result_2': 'true', 'sum_0': 'sum', 'all_rows_3': 'all_rows', 'earnings_4': 'earnings', '7546842_5': '7546842'} | {'eq_1': [2], 'result_2': [], 'sum_0': [1], 'all_rows_3': [0], 'earnings_4': [0], '7546842_5': [1]} | ['rank', 'player', 'country', 'earnings', 'events', 'wins'] | [['1', 'tom lehman', 'united states', '1780159', '22', '2'], ['2', 'phil mickelson', 'united states', '1697799', '21', '4'], ['3', 'mark brooks', 'united states', '1429396', '29', '3'], ['4', 'steve stricker', 'united states', '1383739', '22', '2'], ['5', "mark o'meara", 'united states', '1255749', '21', '2']] |
central collegiate lacrosse association | https://en.wikipedia.org/wiki/Central_Collegiate_Lacrosse_Association | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-28211213-2.html.csv | superlative | university of dayton was founded before all other institutions in the central collegiate lacrosse association . | {'scope': 'all', 'col_superlative': '3', 'row_superlative': '4', 'value_mentioned': 'no', 'max_or_min': 'min', 'other_col': '1', 'subset': None} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'argmin', 'args': ['all_rows', 'founded'], 'result': None, 'ind': 0, 'tostr': 'argmin { all_rows ; founded }'}, 'institution'], 'result': 'university of dayton', 'ind': 1, 'tostr': 'hop { argmin { all_rows ; founded } ; institution }'}, 'university of dayton'], 'result': True, 'ind': 2, 'tostr': 'eq { hop { argmin { all_rows ; founded } ; institution } ; university of dayton } = true', 'tointer': 'select the row whose founded record of all rows is minimum . the institution record of this row is university of dayton .'} | eq { hop { argmin { all_rows ; founded } ; institution } ; university of dayton } = true | select the row whose founded record of all rows is minimum . the institution record of this row is university of dayton . | 3 | 3 | {'str_eq_2': 2, 'result_3': 3, 'str_hop_1': 1, 'argmin_0': 0, 'all_rows_4': 4, 'founded_5': 5, 'institution_6': 6, 'university of dayton_7': 7} | {'str_eq_2': 'str_eq', 'result_3': 'true', 'str_hop_1': 'str_hop', 'argmin_0': 'argmin', 'all_rows_4': 'all_rows', 'founded_5': 'founded', 'institution_6': 'institution', 'university of dayton_7': 'university of dayton'} | {'str_eq_2': [3], 'result_3': [], 'str_hop_1': [2], 'argmin_0': [1], 'all_rows_4': [0], 'founded_5': [0], 'institution_6': [1], 'university of dayton_7': [2]} | ['institution', 'location', 'founded', 'affiliation', 'enrollment', 'team nickname', 'primary conference'] | [['aquinas college', 'grand rapids , michigan', '1886', 'private', '2159', 'saints', 'whac ( naia )'], ['butler university', 'indianapolis , indiana', '1855', 'private', '4512', 'bulldogs', 'horizon ( division i )'], ['carnegie mellon university', 'pittsburgh , pennsylvania', '1900', 'private / nonsectarian', '10875', 'tartans', 'uaa ( division iii )'], ['university of dayton', 'dayton , ohio', '1850', 'private / catholic', '10569', 'flyers', 'atlantic 10 ( division i )'], ['ferris state university', 'big rapids , michigan', '1884', 'public', '13865', 'bulldogs', 'gliac ( division ii )'], ['grand valley state university', 'allendale , michigan', '1960', 'public', '24408', 'lakers', 'gliac ( division ii )'], ['grove city college', 'grove city , pennsylvania', '1876', 'private / christian', '2500', 'wolverines', 'pac ( division iii )'], ['indiana institute of technology', 'fort wayne , indiana', '1930', 'private', '3207', 'warriors', 'whac ( naia )'], ['john carroll university', 'university heights , ohio', '1886', 'private / catholic', '3709', 'blue streaks', 'oac ( division iii )'], ['lawrence technological university', 'southfield , mi', '1932', 'private', '4000', 'blue devils', 'whac ( naia )'], ['lourdes college', 'sylvania , oh', '1958', 'private / catholic', '2616', 'gray wolves', 'whac ( naia )'], ['university of michigan - dearborn', 'dearborn , michigan', '1959', 'public', '8634', 'wolves', 'wolverine - hoosier ( naia )'], ['northern michigan university', 'marquette , michigan', '1899', 'public', '8578', 'wildcats', 'gliac ( division ii )'], ['northwood university', 'midland , michigan', '1961', 'private', '1987', 'timberwolves', 'gliac ( division ii )'], ['oakland university', 'rochester , michigan', '1957', 'public', '18553', 'grizzlies', 'the summit league ( division i )'], ['siena heights university', 'adrian , michigan', '1919', 'private / catholic', '2274', 'saints', 'wolverine - hoosier ( naia )']]
peak water | https://en.wikipedia.org/wiki/Peak_water | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-15909409-3.html.csv | superlative | turkmenistan has the highest amount of per capita withdrawal of freshwater . | {'scope': 'all', 'col_superlative': '3', 'row_superlative': '1', 'value_mentioned': 'no', 'max_or_min': 'max', 'other_col': '1', 'subset': None} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'argmax', 'args': ['all_rows', 'per capita withdrawal'], 'result': None, 'ind': 0, 'tostr': 'argmax { all_rows ; per capita withdrawal }'}, ''], 'result': 'turkmenistan', 'ind': 1, 'tostr': 'hop { argmax { all_rows ; per capita withdrawal } ; }'}, 'turkmenistan'], 'result': True, 'ind': 2, 'tostr': 'eq { hop { argmax { all_rows ; per capita withdrawal } ; } ; turkmenistan } = true', 'tointer': 'select the row whose per capita withdrawal record of all rows is maximum . the record of this row is turkmenistan .'} | eq { hop { argmax { all_rows ; per capita withdrawal } ; } ; turkmenistan } = true | select the row whose per capita withdrawal record of all rows is maximum . the record of this row is turkmenistan . | 3 | 3 | {'str_eq_2': 2, 'result_3': 3, 'str_hop_1': 1, 'argmax_0': 0, 'all_rows_4': 4, 'per capita withdrawal_5': 5, '_6': 6, 'turkmenistan_7': 7} | {'str_eq_2': 'str_eq', 'result_3': 'true', 'str_hop_1': 'str_hop', 'argmax_0': 'argmax', 'all_rows_4': 'all_rows', 'per capita withdrawal_5': 'per capita withdrawal', '_6': '', 'turkmenistan_7': 'turkmenistan'} | {'str_eq_2': [3], 'result_3': [], 'str_hop_1': [2], 'argmax_0': [1], 'all_rows_4': [0], 'per capita withdrawal_5': [0], '_6': [1], 'turkmenistan_7': [2]} | ['', 'total freshwater withdrawal', 'per capita withdrawal', 'domestic use', 'industrial use', 'agricultural use'] | [['turkmenistan', '24.65', '5104', '2', '1', '98'], ['kazakhstan', '35', '2360', '2', '17', '82'], ['uzbekistan', '58.34', '2194', '5', '2', '93'], ['guyana', '1.64', '2187', '2', '1', '98'], ['hungary', '21.03', '2082', '9', '59', '32'], ['azerbaijan', '17.25', '2051', '5', '28', '68'], ['kyrgyzstan', '10.08', '1916', '3', '3', '94'], ['tajikistan', '11.96', '1837', '4', '5', '92'], ['usa', '477', '1600', '13', '46', '41'], ['suriname', '0.67', '1489', '4', '3', '93'], ['iraq', '42.7', '1482', '3', '5', '92'], ['canada', '44.72', '1386', '20', '69', '12'], ['thailand', '82.75', '1288', '2', '2', '95'], ['ecuador', '16.98', '1283', '12', '5', '82']] |
chala kelele | https://en.wikipedia.org/wiki/Chala_Kelele | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-12736407-1.html.csv | unique | the 1988 tournament was the only one that chala kelele competed in that was held in new zealand . | {'scope': 'all', 'row': '1', 'col': '3', 'col_other': '1', 'criterion': 'equal', 'value': 'new zealand', 'subset': None} | {'func': 'and', 'args': [{'func': 'only', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'venue', 'new zealand'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose venue record fuzzily matches to new zealand .', 'tostr': 'filter_eq { all_rows ; venue ; new zealand }'}], 'result': True, 'ind': 1, 'tostr': 'only { filter_eq { all_rows ; venue ; new zealand } }', 'tointer': 'select the rows whose venue record fuzzily matches to new zealand . there is only one such row in the table .'}, {'func': 'eq', 'args': [{'func': 'num_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'venue', 'new zealand'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose venue record fuzzily matches to new zealand .', 'tostr': 'filter_eq { all_rows ; venue ; new zealand }'}, 'year'], 'result': '1988', 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; venue ; new zealand } ; year }'}, '1988'], 'result': True, 'ind': 3, 'tostr': 'eq { hop { filter_eq { all_rows ; venue ; new zealand } ; year } ; 1988 }', 'tointer': 'the year record of this unqiue row is 1988 .'}], 'result': True, 'ind': 4, 'tostr': 'and { only { filter_eq { all_rows ; venue ; new zealand } } ; eq { hop { filter_eq { all_rows ; venue ; new zealand } ; year } ; 1988 } } = true', 'tointer': 'select the rows whose venue record fuzzily matches to new zealand . there is only one such row in the table . the year record of this unqiue row is 1988 .'} | and { only { filter_eq { all_rows ; venue ; new zealand } } ; eq { hop { filter_eq { all_rows ; venue ; new zealand } ; year } ; 1988 } } = true | select the rows whose venue record fuzzily matches to new zealand . there is only one such row in the table . the year record of this unqiue row is 1988 . | 6 | 5 | {'and_4': 4, 'result_5': 5, 'only_1': 1, 'filter_str_eq_0': 0, 'all_rows_6': 6, 'venue_7': 7, 'new zealand_8': 8, 'eq_3': 3, 'num_hop_2': 2, 'year_9': 9, '1988_10': 10} | {'and_4': 'and', 'result_5': 'true', 'only_1': 'only', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_6': 'all_rows', 'venue_7': 'venue', 'new zealand_8': 'new zealand', 'eq_3': 'eq', 'num_hop_2': 'num_hop', 'year_9': 'year', '1988_10': '1988'} | {'and_4': [5], 'result_5': [], 'only_1': [4], 'filter_str_eq_0': [1, 2], 'all_rows_6': [0], 'venue_7': [0], 'new zealand_8': [0], 'eq_3': [4], 'num_hop_2': [3], 'year_9': [2], '1988_10': [3]} | ['year', 'tournament', 'venue', 'result', 'extra'] | [['1988', 'world cross country championships', 'auckland , new zealand', '2nd', 'team competition'], ['1991', 'world cross country championships', 'antwerp , belgium', '7th', 'long race'], ['1991', 'world cross country championships', 'antwerp , belgium', '2nd', 'team competition'], ['1995', 'world cross country championships', 'durham , england', '27th', 'long race'], ['1995', 'world cross country championships', 'durham , england', '5th', 'team competition'], ['1996', 'world cross country championships', 'stellenbosch , south africa', '3rd', 'team competition']] |
canoeing at the 2008 summer olympics - men 's k - 1 500 metres | https://en.wikipedia.org/wiki/Canoeing_at_the_2008_Summer_Olympics_%E2%80%93_Men%27s_K-1_500_metres | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-18646681-3.html.csv | superlative | in the 2008 summer olympics canoeing men 's 1500 metres , the fastest time by a cuban racer was 1:42:803 . | {'scope': 'subset', 'col_superlative': '4', 'row_superlative': '7', 'value_mentioned': 'yes', 'max_or_min': 'max', 'other_col': '3', 'subset': {'col': '3', 'criterion': 'equal', 'value': 'cuba'}} | {'func': 'eq', 'args': [{'func': 'max', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'country', 'cuba'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; country ; cuba }', 'tointer': 'select the rows whose country record fuzzily matches to cuba .'}, 'time'], 'result': '1:42.803', 'ind': 1, 'tostr': 'max { filter_eq { all_rows ; country ; cuba } ; time }', 'tointer': 'select the rows whose country record fuzzily matches to cuba . the maximum time record of these rows is 1:42.803 .'}, '1:42.803'], 'result': True, 'ind': 2, 'tostr': 'eq { max { filter_eq { all_rows ; country ; cuba } ; time } ; 1:42.803 } = true', 'tointer': 'select the rows whose country record fuzzily matches to cuba . the maximum time record of these rows is 1:42.803 .'} | eq { max { filter_eq { all_rows ; country ; cuba } ; time } ; 1:42.803 } = true | select the rows whose country record fuzzily matches to cuba . the maximum time record of these rows is 1:42.803 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'max_1': 1, 'filter_str_eq_0': 0, 'all_rows_4': 4, 'country_5': 5, 'cuba_6': 6, 'time_7': 7, '1:42.803_8': 8} | {'eq_2': 'eq', 'result_3': 'true', 'max_1': 'max', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_4': 'all_rows', 'country_5': 'country', 'cuba_6': 'cuba', 'time_7': 'time', '1:42.803_8': '1:42.803'} | {'eq_2': [3], 'result_3': [], 'max_1': [2], 'filter_str_eq_0': [1], 'all_rows_4': [0], 'country_5': [0], 'cuba_6': [0], 'time_7': [1], '1:42.803_8': [2]} | ['rank', 'athletes', 'country', 'time', 'notes'] | [['1', 'adam van koeverden', 'canada', '1:35.554 wb', 'qs'], ['2', 'eirik verãs larsen', 'norway', '1:36.439', 'qs'], ['3', 'michele zerial', 'italy', '1:36.950', 'qs'], ['4', 'steven ferguson', 'new zealand', '1:37.538', 'qs'], ['5', 'shaun rubenstein', 'south africa', '1:37.687', 'qs'], ['6', 'dmitriy torlopov', 'kazakhstan', '1:39.892', 'qs'], ['7', 'jorge garcia', 'cuba', '1:42.803', 'qs'], ['8', 'rudolph berking - williams', 'samoa', '1:47.839', 'qs']] |
2010 - 11 temple owls men 's basketball team | https://en.wikipedia.org/wiki/2010%E2%80%9311_Temple_Owls_men%27s_basketball_team | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-29556461-9.html.csv | majority | the majority of the games in march were won by temple owls men 's basketball team . | {'scope': 'all', 'col': '4', 'most_or_all': 'most', 'criterion': 'fuzzily_match', 'value': 'w', 'subset': None} | {'func': 'most_str_eq', 'args': ['all_rows', 'score', 'w'], 'result': True, 'ind': 0, 'tointer': 'for the score records of all rows , most of them fuzzily match to w .', 'tostr': 'most_eq { all_rows ; score ; w } = true'} | most_eq { all_rows ; score ; w } = true | for the score records of all rows , most of them fuzzily match to w . | 1 | 1 | {'most_str_eq_0': 0, 'result_1': 1, 'all_rows_2': 2, 'score_3': 3, 'w_4': 4} | {'most_str_eq_0': 'most_str_eq', 'result_1': 'true', 'all_rows_2': 'all_rows', 'score_3': 'score', 'w_4': 'w'} | {'most_str_eq_0': [1], 'result_1': [], 'all_rows_2': [0], 'score_3': [0], 'w_4': [0]} | ['game', 'date', 'team', 'score', 'high points', 'high rebounds', 'high assists', 'location attendance', 'record'] | [['29', 'march 2', 'umass', 'w 73 - 67 ( ot )', 'fernandez - 19', 'allen - 18', 'moore - 7', 'mullins center , amherst , ma ( 3641 )', '23 - 6 ( 13 - 2 )'], ['30', 'march 5', 'la salle', 'w 90 - 82', 'allen - 24', 'allen - 11', 'fernandez - 7', 'liacouras center , philadelphia , pa ( 8154 )', '24 - 6 ( 14 - 2 )'], ['31', 'march 11', 'la salle', 'w 96 - 76', 'moore - 23', 'allen - 12', 'allen - 6', 'boardwalk hall , atlantic city , nj', '25 - 6'], ['32', 'march 12', 'richmond', 'l 58 - 54', 'wyatt - 15', 'allen - 10', 'fernandez - 10', 'boardwalk hall , atlantic city , nj ( 8285 )', '25 - 7'], ['33', 'march 17', '( 10 seed ) penn state', 'w 66 - 64', 'moore / fernandez - 23', 'allen - 11', 'allen / fernandez - 3', 'mckale center , tucson , az', '26 - 7']] |
carlos andino | https://en.wikipedia.org/wiki/Carlos_Andino | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-16055831-1.html.csv | majority | the majority of carlos andino 's fights ended with a win result for carlos andino . | {'scope': 'all', 'col': '1', 'most_or_all': 'most', 'criterion': 'equal', 'value': 'win', 'subset': None} | {'func': 'most_str_eq', 'args': ['all_rows', 'result', 'win'], 'result': True, 'ind': 0, 'tointer': 'for the result records of all rows , most of them fuzzily match to win .', 'tostr': 'most_eq { all_rows ; result ; win } = true'} | most_eq { all_rows ; result ; win } = true | for the result records of all rows , most of them fuzzily match to win . | 1 | 1 | {'most_str_eq_0': 0, 'result_1': 1, 'all_rows_2': 2, 'result_3': 3, 'win_4': 4} | {'most_str_eq_0': 'most_str_eq', 'result_1': 'true', 'all_rows_2': 'all_rows', 'result_3': 'result', 'win_4': 'win'} | {'most_str_eq_0': [1], 'result_1': [], 'all_rows_2': [0], 'result_3': [0], 'win_4': [0]} | ['result', 'record', 'opponent', 'method', 'date', 'round', 'notes'] | [['win', '1 - 0 - 0', 'tony swamp rat santaigo', 'knockout', '1980 dec 31', '1', 'kickboxing'], ['win', '2 - 0 - 0', 'tony swamp rat santaigo', 'knockout', '1981', '1', 'kickboxing'], ['win', '3 - 0 - 0', 'tony swamp rat santaigo', 'knockout', '1981', '1', 'kickboxing'], ['win', '4 - 0 - 0', 'tony swamp rat santiago', 'knockout', '1981', '1', 'kickboxing'], ['win', '5 - 0 - 0', 'alberto rodriguez', 'knockout', '1982', '1', 'kickboxing'], ['win', '6 - 0 - 0', 'manny morales', 'knockout', '1987', '2', 'kickboxing'], ['win', '7 - 0 - 0', 'sherman bergman', 'knockout', '1989 may 7', '2', 'kickboxing'], ['loss', '7 - 1 - 0', 'sherman bergman', 'knockout', '1990', '1', 'kickboxing'], ['loss', '7 - 2 - 0', 'sherman bergman', 'knockout', '20 may 1990', '1', 'kickboxing'], ['loss', '7 - 3 - 0', 'sherman bergman', 'knockout', '15 july 1990', '1', 'kickboxing']] |
gary medel | https://en.wikipedia.org/wiki/Gary_Medel | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-11535001-1.html.csv | superlative | in estado national de chile , gary medel scored the most points , 4 , than any other cup he has played . | {'scope': 'all', 'col_superlative': '4', 'row_superlative': '4', 'value_mentioned': 'yes', 'max_or_min': 'max', 'other_col': '2', 'subset': None} | {'func': 'and', 'args': [{'func': 'eq', 'args': [{'func': 'max', 'args': ['all_rows', 'result'], 'result': '4 - 2', 'ind': 0, 'tostr': 'max { all_rows ; result }', 'tointer': 'the maximum result record of all rows is 4 - 2 .'}, '4 - 2'], 'result': True, 'ind': 1, 'tostr': 'eq { max { all_rows ; result } ; 4 - 2 }', 'tointer': 'the maximum result record of all rows is 4 - 2 .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'argmax', 'args': ['all_rows', 'result'], 'result': None, 'ind': 2, 'tostr': 'argmax { all_rows ; result }'}, 'venue'], 'result': 'estadio monumental , santiago , chile', 'ind': 3, 'tostr': 'hop { argmax { all_rows ; result } ; venue }'}, 'estadio monumental , santiago , chile'], 'result': True, 'ind': 4, 'tostr': 'eq { hop { argmax { all_rows ; result } ; venue } ; estadio monumental , santiago , chile }', 'tointer': 'the venue record of the row with superlative result record is estadio monumental , santiago , chile .'}], 'result': True, 'ind': 5, 'tostr': 'and { eq { max { all_rows ; result } ; 4 - 2 } ; eq { hop { argmax { all_rows ; result } ; venue } ; estadio monumental , santiago , chile } } = true', 'tointer': 'the maximum result record of all rows is 4 - 2 . the venue record of the row with superlative result record is estadio monumental , santiago , chile .'} | and { eq { max { all_rows ; result } ; 4 - 2 } ; eq { hop { argmax { all_rows ; result } ; venue } ; estadio monumental , santiago , chile } } = true | the maximum result record of all rows is 4 - 2 . the venue record of the row with superlative result record is estadio monumental , santiago , chile . | 6 | 6 | {'and_5': 5, 'result_6': 6, 'eq_1': 1, 'max_0': 0, 'all_rows_7': 7, 'result_8': 8, '4 - 2_9': 9, 'str_eq_4': 4, 'str_hop_3': 3, 'argmax_2': 2, 'all_rows_10': 10, 'result_11': 11, 'venue_12': 12, 'estadio monumental , santiago , chile_13': 13} | {'and_5': 'and', 'result_6': 'true', 'eq_1': 'eq', 'max_0': 'max', 'all_rows_7': 'all_rows', 'result_8': 'result', '4 - 2_9': '4 - 2', 'str_eq_4': 'str_eq', 'str_hop_3': 'str_hop', 'argmax_2': 'argmax', 'all_rows_10': 'all_rows', 'result_11': 'result', 'venue_12': 'venue', 'estadio monumental , santiago , chile_13': 'estadio monumental , santiago , chile'} | {'and_5': [6], 'result_6': [], 'eq_1': [5], 'max_0': [1], 'all_rows_7': [0], 'result_8': [0], '4 - 2_9': [1], 'str_eq_4': [5], 'str_hop_3': [4], 'argmax_2': [3], 'all_rows_10': [2], 'result_11': [2], 'venue_12': [3], 'estadio monumental , santiago , chile_13': [4]} | ['date', 'venue', 'score', 'result', 'competition'] | [['15 june 2008', 'estadio hernando siles , la paz , bolivia', '0 - 1', '0 - 2', '2010 fifa world cup qualification'], ['15 june 2008', 'estadio hernando siles , la paz , bolivia', '0 - 2', '0 - 2', '2010 fifa world cup qualification'], ['29 may 2009', 'fukuda denshi arena , chiba , japan', '1 - 1', '1 - 1', 'friendly'], ['11 october 2011', 'estadio monumental , santiago , chile', '3 - 0', '4 - 2', '2014 fifa world cup qualification'], ['15 october 2013', 'estadio monumental , santiago , chile', '2 - 0', '2 - 1', '2014 fifa world cup qualification']]
2004 - 05 toronto raptors season | https://en.wikipedia.org/wiki/2004%E2%80%9305_Toronto_Raptors_season | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-15872814-6.html.csv | unique | in the 2004-05 toronto raptors season , when the game was at the air canada centre , the only time the opponent was dallas was on february 6 . | {'scope': 'subset', 'row': '3', 'col': '3', 'col_other': '2', 'criterion': 'equal', 'value': 'dallas', 'subset': {'col': '8', 'criterion': 'fuzzily_match', 'value': 'air canada centre'}} | {'func': 'and', 'args': [{'func': 'only', 'args': [{'func': 'filter_str_eq', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'location attendance', 'air canada centre'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; location attendance ; air canada centre }', 'tointer': 'select the rows whose location attendance record fuzzily matches to air canada centre .'}, 'team', 'dallas'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose location attendance record fuzzily matches to air canada centre . among these rows , select the rows whose team record fuzzily matches to dallas .', 'tostr': 'filter_eq { filter_eq { all_rows ; location attendance ; air canada centre } ; team ; dallas }'}], 'result': True, 'ind': 2, 'tostr': 'only { filter_eq { filter_eq { all_rows ; location attendance ; air canada centre } ; team ; dallas } }', 'tointer': 'select the rows whose location attendance record fuzzily matches to air canada centre . among these rows , select the rows whose team record fuzzily matches to dallas . there is only one such row in the table .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'location attendance', 'air canada centre'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; location attendance ; air canada centre }', 'tointer': 'select the rows whose location attendance record fuzzily matches to air canada centre .'}, 'team', 'dallas'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose location attendance record fuzzily matches to air canada centre . among these rows , select the rows whose team record fuzzily matches to dallas .', 'tostr': 'filter_eq { filter_eq { all_rows ; location attendance ; air canada centre } ; team ; dallas }'}, 'date'], 'result': 'february 6', 'ind': 3, 'tostr': 'hop { filter_eq { filter_eq { all_rows ; location attendance ; air canada centre } ; team ; dallas } ; date }'}, 'february 6'], 'result': True, 'ind': 4, 'tostr': 'eq { hop { filter_eq { filter_eq { all_rows ; location attendance ; air canada centre } ; team ; dallas } ; date } ; february 6 }', 'tointer': 'the date record of this unqiue row is february 6 .'}], 'result': True, 'ind': 5, 'tostr': 'and { only { filter_eq { filter_eq { all_rows ; location attendance ; air canada centre } ; team ; dallas } } ; eq { hop { filter_eq { filter_eq { all_rows ; location attendance ; air canada centre } ; team ; dallas } ; date } ; february 6 } } = true', 'tointer': 'select the rows whose location attendance record fuzzily matches to air canada centre . among these rows , select the rows whose team record fuzzily matches to dallas . there is only one such row in the table . the date record of this unqiue row is february 6 .'} | and { only { filter_eq { filter_eq { all_rows ; location attendance ; air canada centre } ; team ; dallas } } ; eq { hop { filter_eq { filter_eq { all_rows ; location attendance ; air canada centre } ; team ; dallas } ; date } ; february 6 } } = true | select the rows whose location attendance record fuzzily matches to air canada centre . among these rows , select the rows whose team record fuzzily matches to dallas . there is only one such row in the table . the date record of this unqiue row is february 6 . | 8 | 6 | {'and_5': 5, 'result_6': 6, 'only_2': 2, 'filter_str_eq_1': 1, 'filter_str_eq_0': 0, 'all_rows_7': 7, 'location attendance_8': 8, 'air canada centre_9': 9, 'team_10': 10, 'dallas_11': 11, 'str_eq_4': 4, 'str_hop_3': 3, 'date_12': 12, 'february 6_13': 13} | {'and_5': 'and', 'result_6': 'true', 'only_2': 'only', 'filter_str_eq_1': 'filter_str_eq', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_7': 'all_rows', 'location attendance_8': 'location attendance', 'air canada centre_9': 'air canada centre', 'team_10': 'team', 'dallas_11': 'dallas', 'str_eq_4': 'str_eq', 'str_hop_3': 'str_hop', 'date_12': 'date', 'february 6_13': 'february 6'} | {'and_5': [6], 'result_6': [], 'only_2': [5], 'filter_str_eq_1': [2, 3], 'filter_str_eq_0': [1], 'all_rows_7': [0], 'location attendance_8': [0], 'air canada centre_9': [0], 'team_10': [1], 'dallas_11': [1], 'str_eq_4': [5], 'str_hop_3': [4], 'date_12': [3], 'february 6_13': [4]} | ['game', 'date', 'team', 'score', 'high points', 'high rebounds', 'high assists', 'location attendance', 'record'] | [['46', 'february 2', 'indiana', 'w 98 - 97 ( ot )', 'chris bosh ( 25 )', 'chris bosh ( 15 )', 'milt palacio ( 7 )', 'conseco fieldhouse 14783', '19 - 27'], ['47', 'february 4', 'washington', 'w 103 - 100 ( ot )', 'jalen rose ( 26 )', 'chris bosh ( 11 )', 'rafer alston ( 8 )', 'air canada centre 15546', '20 - 27'], ['48', 'february 6', 'dallas', 'l 113 - 122 ( ot )', 'chris bosh ( 29 )', 'chris bosh ( 8 )', 'rafer alston ( 8 )', 'air canada centre 17896', '20 - 28'], ['49', 'february 8', 'cleveland', 'l 91 - 104 ( ot )', 'jalen rose ( 21 )', 'loren woods ( 9 )', 'milt palacio ( 9 )', 'gund arena 17036', '20 - 29'], ['50', 'february 9', 'milwaukee', 'l 107 - 110 ( ot )', 'jalen rose ( 26 )', 'chris bosh ( 9 )', 'rafer alston ( 12 )', 'air canada centre 14269', '20 - 30'], ['51', 'february 11', 'philadelphia', 'l 91 - 106 ( ot )', 'jalen rose ( 23 )', 'jalen rose ( 10 )', 'chris bosh , morris peterson , jalen rose ( 5 )', 'air canada centre 19800', '20 - 31'], ['52', 'february 13', 'la clippers', 'w 109 - 106 ( ot )', 'chris bosh ( 26 )', 'chris bosh ( 10 )', 'rafer alston ( 8 )', 'air canada centre 15721', '21 - 31'], ['53', 'february 16', 'chicago', 'l 115 - 121 ( ot )', 'chris bosh ( 28 )', 'chris bosh , jalen rose ( 7 )', 'rafer alston ( 8 )', 'air canada centre 15881', '21 - 32'], ['54', 'february 22', 'new jersey', 'w 100 - 82 ( ot )', 'jalen rose ( 30 )', 'chris bosh ( 12 )', 'rafer alston ( 6 )', 'continental airlines arena 14080', '22 - 32'], ['55', 'february 25', 'milwaukee', 'w 106 - 102 ( ot )', 'chris bosh ( 27 )', 'chris bosh , donyell marshall ( 8 )', 'rafer alston ( 7 )', 'bradley center 15883', '23 - 32']]
1932 vfl season | https://en.wikipedia.org/wiki/1932_VFL_season | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-10790099-13.html.csv | unique | the away team windy hill is the only team to score 15 .2 points in these competitions . | {'scope': 'all', 'row': '3', 'col': '4', 'col_other': '5', 'criterion': 'fuzzily_match', 'value': '15.2', 'subset': None} | {'func': 'and', 'args': [{'func': 'only', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'away team score', '15.2'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose away team score record fuzzily matches to 15.2 .', 'tostr': 'filter_eq { all_rows ; away team score ; 15.2 }'}], 'result': True, 'ind': 1, 'tostr': 'only { filter_eq { all_rows ; away team score ; 15.2 } }', 'tointer': 'select the rows whose away team score record fuzzily matches to 15.2 . there is only one such row in the table .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'away team score', '15.2'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose away team score record fuzzily matches to 15.2 .', 'tostr': 'filter_eq { all_rows ; away team score ; 15.2 }'}, 'venue'], 'result': 'windy hill', 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; away team score ; 15.2 } ; venue }'}, 'windy hill'], 'result': True, 'ind': 3, 'tostr': 'eq { hop { filter_eq { all_rows ; away team score ; 15.2 } ; venue } ; windy hill }', 'tointer': 'the venue record of this unqiue row is windy hill .'}], 'result': True, 'ind': 4, 'tostr': 'and { only { filter_eq { all_rows ; away team score ; 15.2 } } ; eq { hop { filter_eq { all_rows ; away team score ; 15.2 } ; venue } ; windy hill } } = true', 'tointer': 'select the rows whose away team score record fuzzily matches to 15.2 . there is only one such row in the table . the venue record of this unqiue row is windy hill .'} | and { only { filter_eq { all_rows ; away team score ; 15.2 } } ; eq { hop { filter_eq { all_rows ; away team score ; 15.2 } ; venue } ; windy hill } } = true | select the rows whose away team score record fuzzily matches to 15.2 . there is only one such row in the table . the venue record of this unqiue row is windy hill . | 6 | 5 | {'and_4': 4, 'result_5': 5, 'only_1': 1, 'filter_str_eq_0': 0, 'all_rows_6': 6, 'away team score_7': 7, '15.2_8': 8, 'str_eq_3': 3, 'str_hop_2': 2, 'venue_9': 9, 'windy hill_10': 10} | {'and_4': 'and', 'result_5': 'true', 'only_1': 'only', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_6': 'all_rows', 'away team score_7': 'away team score', '15.2_8': '15.2', 'str_eq_3': 'str_eq', 'str_hop_2': 'str_hop', 'venue_9': 'venue', 'windy hill_10': 'windy hill'} | {'and_4': [5], 'result_5': [], 'only_1': [4], 'filter_str_eq_0': [1, 2], 'all_rows_6': [0], 'away team score_7': [0], '15.2_8': [0], 'str_eq_3': [4], 'str_hop_2': [3], 'venue_9': [2], 'windy hill_10': [3]} | ['home team', 'home team score', 'away team', 'away team score', 'venue', 'crowd', 'date'] | [['hawthorn', '8.13 ( 61 )', 'st kilda', '7.8 ( 50 )', 'glenferrie oval', '6000', '30 july 1932'], ['fitzroy', '6.13 ( 49 )', 'footscray', '10.14 ( 74 )', 'brunswick street oval', '10000', '30 july 1932'], ['essendon', '14.14 ( 98 )', 'north melbourne', '15.2 ( 92 )', 'windy hill', '9000', '30 july 1932'], ['richmond', '12.18 ( 90 )', 'melbourne', '10.6 ( 66 )', 'punt road oval', '11000', '30 july 1932'], ['geelong', '13.17 ( 95 )', 'collingwood', '10.10 ( 70 )', 'corio oval', '11000', '30 july 1932'], ['south melbourne', '11.14 ( 80 )', 'carlton', '12.17 ( 89 )', 'lake oval', '41000', '30 july 1932']]
best of + | https://en.wikipedia.org/wiki/Best_of_%2B | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-18079773-1.html.csv | count | among the previously unreleased hits of best of + , 2 of them are longer than 4:00 . | {'scope': 'subset', 'criterion': 'greater_than', 'value': '4:00', 'result': '2', 'col': '5', 'subset': {'col': '2', 'criterion': 'equal', 'value': 'previously unreleased'}} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_greater', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'original album', 'previously unreleased'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; original album ; previously unreleased }', 'tointer': 'select the rows whose original album record fuzzily matches to previously unreleased .'}, 'time', '4:00'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose original album record fuzzily matches to previously unreleased . among these rows , select the rows whose time record is greater than 4:00 .', 'tostr': 'filter_greater { filter_eq { all_rows ; original album ; previously unreleased } ; time ; 4:00 }'}], 'result': '2', 'ind': 2, 'tostr': 'count { filter_greater { filter_eq { all_rows ; original album ; previously unreleased } ; time ; 4:00 } }', 'tointer': 'select the rows whose original album record fuzzily matches to previously unreleased . among these rows , select the rows whose time record is greater than 4:00 . the number of such rows is 2 .'}, '2'], 'result': True, 'ind': 3, 'tostr': 'eq { count { filter_greater { filter_eq { all_rows ; original album ; previously unreleased } ; time ; 4:00 } } ; 2 } = true', 'tointer': 'select the rows whose original album record fuzzily matches to previously unreleased . among these rows , select the rows whose time record is greater than 4:00 . the number of such rows is 2 .'} | eq { count { filter_greater { filter_eq { all_rows ; original album ; previously unreleased } ; time ; 4:00 } } ; 2 } = true | select the rows whose original album record fuzzily matches to previously unreleased . among these rows , select the rows whose time record is greater than 4:00 . the number of such rows is 2 . | 4 | 4 | {'eq_3': 3, 'result_4': 4, 'count_2': 2, 'filter_greater_1': 1, 'filter_str_eq_0': 0, 'all_rows_5': 5, 'original album_6': 6, 'previously unreleased_7': 7, 'time_8': 8, '4:00_9': 9, '2_10': 10} | {'eq_3': 'eq', 'result_4': 'true', 'count_2': 'count', 'filter_greater_1': 'filter_greater', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_5': 'all_rows', 'original album_6': 'original album', 'previously unreleased_7': 'previously unreleased', 'time_8': 'time', '4:00_9': '4:00', '2_10': '2'} | {'eq_3': [4], 'result_4': [], 'count_2': [3], 'filter_greater_1': [2], 'filter_str_eq_0': [1], 'all_rows_5': [0], 'original album_6': [0], 'previously unreleased_7': [0], 'time_8': [1], '4:00_9': [1], '2_10': [3]} | ['english translation', 'original album', 'lyricist ( s )', 'composer ( s )', 'time'] | [['the summer', 'previously unreleased', 'giorgos moukidis', 'giorgos moukidis', '3:44'], ['goodbye', 'previously unreleased', 'giorgos moukidis', 'giorgos moukidis', '4:07'], ['surrender', 'previously unreleased', 'giorgos moukidis', 'giorgos moukidis', '4:03'], ['what else will i hear', 'ena hadi', 'ilias filippou', 'kyriakos papadopoulos', '3:22'], ["i 'm bleeding", 'matono', 'vasilis giannopoulos', 'christos dantis', '3:33'], ['nowhere', 'matono', 'giorgos moukidis', 'giorgos moukidis', '4:09'], ["i 'm fine", 'mazi sou', 'stelios hronis', 'stelios hronis', '4:03'], ['meaning', 'noima', 'giorgos theofanous', 'giorgos theofanous', '4:45'], ['a caress', 'ena hadi', 'ilias filippou', 'kyriakos papadopoulos', '3:57'], ['the last cup', 'ena ( new edition )', 'christos dantis', 'christos dantis', '4:13'], ['you boldnesses and you have complaint , you', 'vres enan tropo', 'natalia germanou', 'petros imvrios', '3:31'], ['you left', 'matono', 'giorgos moukidis', 'giorgos moukidis', '3:58'], ["do n't ask", 'noima', 'giorgos theofanous', 'giorgos theofanous', '4:33'], ['one', 'ena', 'vicky gerothodorou', 'takis bougas', '3:39'], ['we lived', 'ena', 'eleana vrahali', 'dimitris kontopoulos', '3:41'], ['run', 'trekse', 'eleana vrahali', 'christos dantis', '4:02'], ['with you', 'mazi sou', 'giorgos moukidis', 'giorgos moukidis', '3:33'], ['the paths', 'matono', 'vicky gerothodorou', 'antonios vardis', '5:03'], ['secret', 'ena ( new edition )', 'giorgos moukidis', 'giorgos moukidis', '5:44']]
tiffany joh | https://en.wikipedia.org/wiki/Tiffany_Joh | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-15870501-2.html.csv | superlative | from 2007 - 2012 , tiffany joh had her lowest scoring average in 2007 . | {'scope': 'all', 'col_superlative': '7', 'row_superlative': '1', 'value_mentioned': 'no', 'max_or_min': 'min', 'other_col': '1', 'subset': None} | {'func': 'eq', 'args': [{'func': 'num_hop', 'args': [{'func': 'argmin', 'args': ['all_rows', 'scoring average'], 'result': None, 'ind': 0, 'tostr': 'argmin { all_rows ; scoring average }'}, 'year'], 'result': '2007', 'ind': 1, 'tostr': 'hop { argmin { all_rows ; scoring average } ; year }'}, '2007'], 'result': True, 'ind': 2, 'tostr': 'eq { hop { argmin { all_rows ; scoring average } ; year } ; 2007 } = true', 'tointer': 'select the row whose scoring average record of all rows is minimum . the year record of this row is 2007 .'} | eq { hop { argmin { all_rows ; scoring average } ; year } ; 2007 } = true | select the row whose scoring average record of all rows is minimum . the year record of this row is 2007 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'num_hop_1': 1, 'argmin_0': 0, 'all_rows_4': 4, 'scoring average_5': 5, 'year_6': 6, '2007_7': 7} | {'eq_2': 'eq', 'result_3': 'true', 'num_hop_1': 'num_hop', 'argmin_0': 'argmin', 'all_rows_4': 'all_rows', 'scoring average_5': 'scoring average', 'year_6': 'year', '2007_7': '2007'} | {'eq_2': [3], 'result_3': [], 'num_hop_1': [2], 'argmin_0': [1], 'all_rows_4': [0], 'scoring average_5': [0], 'year_6': [1], '2007_7': [2]} | ['year', 'tournaments played', 'cuts made', 'wins', 'best finish', 'earnings', 'scoring average'] | [['2007', '1', '1', '0', 't22', 'n / a', '71.66'], ['2009', '1', '1', '0', 't21', 'n / a', '72.50'], ['2010', '2', '0', '0', 'mc', '0', '79.00'], ['2011', '14', '12', '0', '2', '237365', '72.75'], ['2012', '20', '10', '0', 't33', '48695', '74.09']] |
list of wards in plymouth | https://en.wikipedia.org/wiki/List_of_wards_in_Plymouth | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-18506586-1.html.csv | majority | according to the list of wards in plymouth , the majority of wards that belong to south west devon constituency have more than 9000 in electorate . | {'scope': 'subset', 'col': '5', 'most_or_all': 'most', 'criterion': 'greater_than', 'value': '9000', 'subset': {'col': '6', 'criterion': 'equal', 'value': 'south west devon'}} | {'func': 'most_greater', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'constituency', 'south west devon'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; constituency ; south west devon }', 'tointer': 'select the rows whose constituency record fuzzily matches to south west devon .'}, 'electorate', '9000'], 'result': True, 'ind': 1, 'tointer': 'select the rows whose constituency record fuzzily matches to south west devon . for the electorate records of these rows , most of them are greater than 9000 .', 'tostr': 'most_greater { filter_eq { all_rows ; constituency ; south west devon } ; electorate ; 9000 } = true'} | most_greater { filter_eq { all_rows ; constituency ; south west devon } ; electorate ; 9000 } = true | select the rows whose constituency record fuzzily matches to south west devon . for the electorate records of these rows , most of them are greater than 9000 . | 2 | 2 | {'most_greater_1': 1, 'result_2': 2, 'filter_str_eq_0': 0, 'all_rows_3': 3, 'constituency_4': 4, 'south west devon_5': 5, 'electorate_6': 6, '9000_7': 7} | {'most_greater_1': 'most_greater', 'result_2': 'true', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_3': 'all_rows', 'constituency_4': 'constituency', 'south west devon_5': 'south west devon', 'electorate_6': 'electorate', '9000_7': '9000'} | {'most_greater_1': [2], 'result_2': [], 'filter_str_eq_0': [1], 'all_rows_3': [0], 'constituency_4': [0], 'south west devon_5': [0], 'electorate_6': [1], '9000_7': [1]} | ['name', 'labour', 'conservative', 'political party', 'electorate', 'constituency'] | [['budshead', '2', '1', 'no overall', '9697', 'devonport'], ['compton', '0', '3', 'conservative', '9270', 'sutton'], ['devonport', '3', '0', 'labour', '9880', 'devonport'], ['drake', '0', '2', 'conservative', '6362', 'sutton'], ['efford and lipson', '3', '0', 'labour', '9716', 'sutton'], ['eggbuckland', '0', '3', 'conservative', '10050', 'devonport'], ['ham', '3', '0', 'labour', '9870', 'devonport'], ['honicknowle', '3', '0', 'labour', '10306', 'devonport'], ['moor view', '3', '0', 'labour', '9592', 'devonport'], ['peverell', '0', '3', 'conservative', '9853', 'sutton'], ['plympton chaddlewood', '0', '2', 'conservative', '6156', 'south west devon'], ['plympton erle', '0', '2', 'conservative', '6989', 'south west devon'], ['plympton st mary', '0', '3', 'conservative', '9740', 'south west devon'], ['plymstock dunstone', '0', '3', 'conservative', '9973', 'south west devon'], ['plymstock radford', '0', '3', 'conservative', '9335', 'south west devon'], ['st budeaux', '3', '0', 'labour', '9673', 'devonport'], ['st peter and the waterfront', '3', '0', 'labour', '9715', 'sutton'], ['southway', '0', '3', 'conservative', '9493', 'devonport'], ['stoke', '1', '2', 'no overall', '9691', 'sutton'], ['sutton and mount gould', '3', '0', 'labour', '9595', 'sutton']] |
interwar unemployment and poverty in the united kingdom | https://en.wikipedia.org/wiki/Interwar_unemployment_and_poverty_in_the_United_Kingdom | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-11042765-1.html.csv | majority | the majority of prime ministers of the united kingdom served under george v. | {'scope': 'all', 'col': '5', 'most_or_all': 'most', 'criterion': 'equal', 'value': 'george v', 'subset': None} | {'func': 'most_str_eq', 'args': ['all_rows', 'monarchs served', 'george v'], 'result': True, 'ind': 0, 'tointer': 'for the monarchs served records of all rows , most of them fuzzily match to george v .', 'tostr': 'most_eq { all_rows ; monarchs served ; george v } = true'} | most_eq { all_rows ; monarchs served ; george v } = true | for the monarchs served records of all rows , most of them fuzzily match to george v . | 1 | 1 | {'most_str_eq_0': 0, 'result_1': 1, 'all_rows_2': 2, 'monarchs served_3': 3, 'george v_4': 4} | {'most_str_eq_0': 'most_str_eq', 'result_1': 'true', 'all_rows_2': 'all_rows', 'monarchs served_3': 'monarchs served', 'george v_4': 'george v'} | {'most_str_eq_0': [1], 'result_1': [], 'all_rows_2': [0], 'monarchs served_3': [0], 'george v_4': [0]} | ['name', 'entered office', 'left office', 'political party', 'monarchs served', 'birth place'] | [['david lloyd george', '7 december 1916', '19 october 1922', 'national liberal', 'george v', 'manchester'], ['andrew bonar law', '23 october 1922', '20 may 1923', 'conservative', 'george v', 'rexton , kent county , new brunswick , canada'], ['stanley baldwin ( 1st ministry )', '23 may 1923', '16 january 1924', 'conservative', 'george v', 'bewdley , worcestershire'], ['ramsay macdonald ( 1st ministry )', '22 january 1924', '4 november 1924', 'labour', 'george v', 'lossiemouth , moray'], ['stanley baldwin ( 2nd ministry )', '4 november 1924', '5 june 1929', 'conservative', 'george v', 'bewdley , worcestershire'], ['ramsay macdonald ( 2nd ministry )', '5 june 1929', '24 august 1931', 'labour', 'george v', 'lossiemouth , moray'], ['ramsay macdonald ( 3rd ministry )', '24 august 1931', '7 june 1935', 'national labour ( national government )', 'george v', 'lossiemouth , moray'], ['stanley baldwin ( 3rd ministry )', '7 june 1935', '28 may 1937', 'conservative ( national government )', 'george v , edward viii , george vi', 'bewdley , worcestershire'], ['neville chamberlain', '28 may 1937', '10 may 1940', 'conservative ( national government )', 'george vi', 'birmingham , west midlands']] |
list of how it 's made episodes | https://en.wikipedia.org/wiki/List_of_How_It%27s_Made_episodes | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-15187735-12.html.csv | count | there are 13 episodes in the drama series titled ' how it 's made ' . | {'scope': 'all', 'criterion': 'all', 'value': 'n/a', 'result': '13', 'col': '2', 'subset': None} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_all', 'args': ['all_rows', 'episode'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose episode record is arbitrary .', 'tostr': 'filter_all { all_rows ; episode }'}], 'result': '13', 'ind': 1, 'tostr': 'count { filter_all { all_rows ; episode } }', 'tointer': 'select the rows whose episode record is arbitrary . the number of such rows is 13 .'}, '13'], 'result': True, 'ind': 2, 'tostr': 'eq { count { filter_all { all_rows ; episode } } ; 13 } = true', 'tointer': 'select the rows whose episode record is arbitrary . the number of such rows is 13 .'} | eq { count { filter_all { all_rows ; episode } } ; 13 } = true | select the rows whose episode record is arbitrary . the number of such rows is 13 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'count_1': 1, 'filter_all_0': 0, 'all_rows_4': 4, 'episode_5': 5, '13_6': 6} | {'eq_2': 'eq', 'result_3': 'true', 'count_1': 'count', 'filter_all_0': 'filter_all', 'all_rows_4': 'all_rows', 'episode_5': 'episode', '13_6': '13'} | {'eq_2': [3], 'result_3': [], 'count_1': [2], 'filter_all_0': [1], 'all_rows_4': [0], 'episode_5': [0], '13_6': [2]} | ['series ep', 'episode', 'netflix', 'segment a', 'segment b', 'segment c', 'segment d'] | [['12 - 01', '144', 's06e14', 'pneumatic impact wrenches', 'cultured marble sinks', 'plantain chips', 'nascar stock cars'], ['12 - 02', '145', 's06e15', 'jaws of life', 'artificial christmas trees', 'soda crackers', 'ratchets'], ['12 - 03', '146', 's06e16', 's thermometer', 'produce scales', 'aircraft painting', 'luxury s chocolate'], ['12 - 04', '147', 's06e17', 'carburetors', 'air conditioners', 'sugar ( part 1 )', 'sugar ( part 2 )'], ['12 - 05', '148', 's06e18', 'combination wrenches', 'deli meats', 'golf cars', 'airships'], ['12 - 06', '149', 's06e19', 'carbon fibre car parts', 'hand dryers', 'recycled polyester yarn', 'fleece'], ['12 - 07', '150', 's06e20', 'police badges', 'muffins', 'car washes', 'pressure gauges'], ['12 - 08', '151', 's06e21', 'metal detectors', 'rum', 'tiffany reproductions', 'aircraft engines'], ['12 - 09', '152', 's06e22', 'riding mowers', 'popcorn', 'adjustable beds', 'cultured diamonds'], ['12 - 10', '153', 's06e23', 'airstream trailers', 'horseradish', 'industrial steam s boiler', 'deodorant'], ['12 - 11', '154', 's06e24', 's screwdriver', 'compact track loaders', 'physician scales', 'carbon fibre bats'], ['12 - 12', '155', 's06e25', 's escalator', 'kevlar s canoe', 'goat cheese', 'disc music boxes'], ['12 - 13', '156', 's06e26', 'motorcycle engines', 'glass enamel sculptures', 'hand - made paper', 'vaulting poles']] |
somerset county cricket club in 2009 | https://en.wikipedia.org/wiki/Somerset_County_Cricket_Club_in_2009 | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-27922491-11.html.csv | aggregation | all of the players in the 2009 somerset country cricket club played an average of 9 matches . | {'scope': 'all', 'col': '2', 'type': 'average', 'result': '8.71', 'subset': None} | {'func': 'round_eq', 'args': [{'func': 'avg', 'args': ['all_rows', 'matches'], 'result': '8.71', 'ind': 0, 'tostr': 'avg { all_rows ; matches }'}, '8.71'], 'result': True, 'ind': 1, 'tostr': 'round_eq { avg { all_rows ; matches } ; 8.71 } = true', 'tointer': 'the average of the matches record of all rows is 8.71 .'} | round_eq { avg { all_rows ; matches } ; 8.71 } = true | the average of the matches record of all rows is 8.71 . | 2 | 2 | {'eq_1': 1, 'result_2': 2, 'avg_0': 0, 'all_rows_3': 3, 'matches_4': 4, '8.71_5': 5} | {'eq_1': 'eq', 'result_2': 'true', 'avg_0': 'avg', 'all_rows_3': 'all_rows', 'matches_4': 'matches', '8.71_5': '8.71'} | {'eq_1': [2], 'result_2': [], 'avg_0': [1], 'all_rows_3': [0], 'matches_4': [0], '8.71_5': [1]} | ['player', 'matches', 'innings', 'runs', 'average', 'highest score', '100s', '50s'] | [['zander de bruyn', '9', '6', '388', '97.00', '96', '0', '5'], ['craig kieswetter', '8', '8', '395', '65.83', '138', '2', '0'], ['justin langer', '8', '4', '195', '65.00', '78', '0', '2'], ['marcus trescothick', '9', '9', '476', '59.50', '144', '1', '4'], ['peter trego', '9', '6', '171', '57.00', '74', '0', '2'], ['james hildreth', '9', '9', '313', '34.77', '151', '1', '1'], ['arul suppiah', '9', '5', '101', '33.66', '48', '0', '0']] |
1998 cfl draft | https://en.wikipedia.org/wiki/1998_CFL_Draft | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-16441561-6.html.csv | unique | adam kossack is the only player that was drafted from hastings college . | {'scope': 'all', 'row': '5', 'col': '5', 'col_other': '3', 'criterion': 'equal', 'value': 'hastings college', 'subset': None} | {'func': 'and', 'args': [{'func': 'only', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'college', 'hastings college'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose college record fuzzily matches to hastings college .', 'tostr': 'filter_eq { all_rows ; college ; hastings college }'}], 'result': True, 'ind': 1, 'tostr': 'only { filter_eq { all_rows ; college ; hastings college } }', 'tointer': 'select the rows whose college record fuzzily matches to hastings college . there is only one such row in the table .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'college', 'hastings college'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose college record fuzzily matches to hastings college .', 'tostr': 'filter_eq { all_rows ; college ; hastings college }'}, 'player'], 'result': 'adam kossack', 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; college ; hastings college } ; player }'}, 'adam kossack'], 'result': True, 'ind': 3, 'tostr': 'eq { hop { filter_eq { all_rows ; college ; hastings college } ; player } ; adam kossack }', 'tointer': 'the player record of this unqiue row is adam kossack .'}], 'result': True, 'ind': 4, 'tostr': 'and { only { filter_eq { all_rows ; college ; hastings college } } ; eq { hop { filter_eq { all_rows ; college ; hastings college } ; player } ; adam kossack } } = true', 'tointer': 'select the rows whose college record fuzzily matches to hastings college . there is only one such row in the table . the player record of this unqiue row is adam kossack .'} | and { only { filter_eq { all_rows ; college ; hastings college } } ; eq { hop { filter_eq { all_rows ; college ; hastings college } ; player } ; adam kossack } } = true | select the rows whose college record fuzzily matches to hastings college . there is only one such row in the table . the player record of this unqiue row is adam kossack . | 6 | 5 | {'and_4': 4, 'result_5': 5, 'only_1': 1, 'filter_str_eq_0': 0, 'all_rows_6': 6, 'college_7': 7, 'hastings college_8': 8, 'str_eq_3': 3, 'str_hop_2': 2, 'player_9': 9, 'adam kossack_10': 10} | {'and_4': 'and', 'result_5': 'true', 'only_1': 'only', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_6': 'all_rows', 'college_7': 'college', 'hastings college_8': 'hastings college', 'str_eq_3': 'str_eq', 'str_hop_2': 'str_hop', 'player_9': 'player', 'adam kossack_10': 'adam kossack'} | {'and_4': [5], 'result_5': [], 'only_1': [4], 'filter_str_eq_0': [1, 2], 'all_rows_6': [0], 'college_7': [0], 'hastings college_8': [0], 'str_eq_3': [4], 'str_hop_2': [3], 'player_9': [2], 'adam kossack_10': [3]} | ['pick', 'cfl team', 'player', 'position', 'college'] | [['36', 'hamilton', 'benjie hutchison', 'dl', 'british columbia'], ['37', 'winnipeg', 'john baunemann', 'k', 'manitoba'], ['38', 'winnipeg', 'chad vath', 'lb', 'manitoba'], ['39', 'calgary', 'jodi bednarek', 'lb', 'calgary'], ['40', 'edmonton', 'adam kossack', 'ol', 'hastings college'], ['41', 'montreal', 'kelly ireland', 'ol', "saint mary 's"], ['42', 'saskatchewan', 'james rapesse', 'lb', 'saskatchewan'], ['43', 'hamilton', 'robert yelenich', 'lb', 'york'], ['44', 'toronto', 'bill mitoulas', 'lb', 'notre dame']]
2005 connecticut sun season | https://en.wikipedia.org/wiki/2005_Connecticut_Sun_season | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-18904831-5.html.csv | comparative | the june 11 game of the 2005 connecticut sun season had the same number of assists as that of the previous day . | {'row_1': '4', 'row_2': '3', 'col': '7', 'col_other': '2', 'relation': 'equal', 'record_mentioned': 'no', 'diff_result': None} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'date', 'june 11'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose date record fuzzily matches to june 11 .', 'tostr': 'filter_eq { all_rows ; date ; june 11 }'}, 'high assists'], 'result': None, 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; date ; june 11 } ; high assists }', 'tointer': 'select the rows whose date record fuzzily matches to june 11 . take the high assists record of this row .'}, {'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'date', 'june 10'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose date record fuzzily matches to june 10 .', 'tostr': 'filter_eq { all_rows ; date ; june 10 }'}, 'high assists'], 'result': None, 'ind': 3, 'tostr': 'hop { filter_eq { all_rows ; date ; june 10 } ; high assists }', 'tointer': 'select the rows whose date record fuzzily matches to june 10 . take the high assists record of this row .'}], 'result': True, 'ind': 4, 'tostr': 'eq { hop { filter_eq { all_rows ; date ; june 11 } ; high assists } ; hop { filter_eq { all_rows ; date ; june 10 } ; high assists } } = true', 'tointer': 'select the rows whose date record fuzzily matches to june 11 . take the high assists record of this row . select the rows whose date record fuzzily matches to june 10 . take the high assists record of this row . the first record fuzzily matches to the second record .'} | eq { hop { filter_eq { all_rows ; date ; june 11 } ; high assists } ; hop { filter_eq { all_rows ; date ; june 10 } ; high assists } } = true | select the rows whose date record fuzzily matches to june 11 . take the high assists record of this row . select the rows whose date record fuzzily matches to june 10 . take the high assists record of this row . the first record fuzzily matches to the second record . | 5 | 5 | {'str_eq_4': 4, 'result_5': 5, 'str_hop_2': 2, 'filter_str_eq_0': 0, 'all_rows_6': 6, 'date_7': 7, 'june 11_8': 8, 'high assists_9': 9, 'str_hop_3': 3, 'filter_str_eq_1': 1, 'all_rows_10': 10, 'date_11': 11, 'june 10_12': 12, 'high assists_13': 13} | {'str_eq_4': 'str_eq', 'result_5': 'true', 'str_hop_2': 'str_hop', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_6': 'all_rows', 'date_7': 'date', 'june 11_8': 'june 11', 'high assists_9': 'high assists', 'str_hop_3': 'str_hop', 'filter_str_eq_1': 'filter_str_eq', 'all_rows_10': 'all_rows', 'date_11': 'date', 'june 10_12': 'june 10', 'high assists_13': 'high assists'} | {'str_eq_4': [5], 'result_5': [], 'str_hop_2': [4], 'filter_str_eq_0': [2], 'all_rows_6': [0], 'date_7': [0], 'june 11_8': [0], 'high assists_9': [2], 'str_hop_3': [4], 'filter_str_eq_1': [3], 'all_rows_10': [1], 'date_11': [1], 'june 10_12': [1], 'high assists_13': [3]} | ['game', 'date', 'opponent', 'score', 'high points', 'high rebounds', 'high assists', 'location', 'record'] | [['4', 'june 4', 'san antonio', 'w 80 - 69', 'douglas , dydek , mcwilliams - franklin ( 15 )', 'jones ( 7 )', 'douglas , whalen ( 6 )', 'mohegan sun arena 6252', '3 - 1'], ['5', 'june 7', 'seattle', 'w 81 - 69', 'douglas ( 20 )', 'dydek ( 14 )', 'whalen ( 8 )', 'mohegan sun arena 7080', '4 - 1'], ['6', 'june 10', 'houston', 'w 77 - 57', 'sales ( 21 )', 'dydek ( 12 )', 'whalen ( 6 )', 'toyota center 5736', '5 - 1'], ['7', 'june 11', 'san antonio', 'w 78 - 69', 'mcwilliams - franklin ( 17 )', 'douglas , mcwilliams - franklin ( 9 )', 'whalen ( 6 )', 'at & t center 9772', '6 - 1'], ['8', 'june 18', 'detroit', 'w 73 - 63', 'sales ( 17 )', 'mcwilliams - franklin ( 10 )', 'whalen ( 6 )', 'mohegan sun arena 7427', '7 - 1'], ['9', 'june 20', 'los angeles', 'w 90 - 70', 'sales ( 26 )', 'dydek ( 10 )', 'douglas , whalen ( 6 )', 'staples center 7246', '8 - 1'], ['10', 'june 22', 'seattle', 'l 86 - 95', 'mcwilliams - franklin ( 21 )', 'dydek , mcwilliams - franklin , whalen ( 5 )', 'whalen ( 6 )', 'keyarena 8120', '8 - 2'], ['11', 'june 24', 'sacramento', 'w 61 - 50', 'mcwilliams - franklin ( 15 )', 'mcwilliams - franklin ( 14 )', 'whalen ( 3 )', 'arco arena 10067', '9 - 2'], ['12', 'june 25', 'phoenix', 'w 77 - 69', 'sales ( 22 )', 'wyckoff ( 9 )', 'sales , wyckoff ( 3 )', 'us airways center 8091', '10 - 2'], ['13', 'june 28', 'sacramento', 'w 70 - 66', 'sales ( 19 )', 'mcwilliams - franklin ( 7 )', 'whalen ( 5 )', 'mohegan sun arena 6789', '11 - 2'], ['14', 'june 30', 'minnesota', 'w 71 - 56', 'sales ( 18 )', 'mcwilliams - franklin ( 9 )', 'whalen ( 6 )', 'mohegan sun arena 6869', '12 - 2']]
2006 east asian judo championships | https://en.wikipedia.org/wiki/2006_East_Asian_Judo_Championships | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-18991964-3.html.csv | unique | of the nations that earned 3 gold medals at the 2006 east asian judo championship , the only one that earned 4 silver was china . | {'scope': 'subset', 'row': '2', 'col': '4', 'col_other': '2', 'criterion': 'equal', 'value': '4', 'subset': {'col': '3', 'criterion': 'equal', 'value': '3'}} | {'func': 'and', 'args': [{'func': 'only', 'args': [{'func': 'filter_eq', 'args': [{'func': 'filter_eq', 'args': ['all_rows', 'gold', '3'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; gold ; 3 }', 'tointer': 'select the rows whose gold record is equal to 3 .'}, 'silver', '4'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose gold record is equal to 3 . among these rows , select the rows whose silver record is equal to 4 .', 'tostr': 'filter_eq { filter_eq { all_rows ; gold ; 3 } ; silver ; 4 }'}], 'result': True, 'ind': 2, 'tostr': 'only { filter_eq { filter_eq { all_rows ; gold ; 3 } ; silver ; 4 } }', 'tointer': 'select the rows whose gold record is equal to 3 . among these rows , select the rows whose silver record is equal to 4 . there is only one such row in the table .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_eq', 'args': [{'func': 'filter_eq', 'args': ['all_rows', 'gold', '3'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; gold ; 3 }', 'tointer': 'select the rows whose gold record is equal to 3 .'}, 'silver', '4'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose gold record is equal to 3 . among these rows , select the rows whose silver record is equal to 4 .', 'tostr': 'filter_eq { filter_eq { all_rows ; gold ; 3 } ; silver ; 4 }'}, 'nation'], 'result': 'china', 'ind': 3, 'tostr': 'hop { filter_eq { filter_eq { all_rows ; gold ; 3 } ; silver ; 4 } ; nation }'}, 'china'], 'result': True, 'ind': 4, 'tostr': 'eq { hop { filter_eq { filter_eq { all_rows ; gold ; 3 } ; silver ; 4 } ; nation } ; china }', 'tointer': 'the nation record of this unqiue row is china .'}], 'result': True, 'ind': 5, 'tostr': 'and { only { filter_eq { filter_eq { all_rows ; gold ; 3 } ; silver ; 4 } } ; eq { hop { filter_eq { filter_eq { all_rows ; gold ; 3 } ; silver ; 4 } ; nation } ; china } } = true', 'tointer': 'select the rows whose gold record is equal to 3 . among these rows , select the rows whose silver record is equal to 4 . there is only one such row in the table . the nation record of this unqiue row is china .'} | and { only { filter_eq { filter_eq { all_rows ; gold ; 3 } ; silver ; 4 } } ; eq { hop { filter_eq { filter_eq { all_rows ; gold ; 3 } ; silver ; 4 } ; nation } ; china } } = true | select the rows whose gold record is equal to 3 . among these rows , select the rows whose silver record is equal to 4 . there is only one such row in the table . the nation record of this unqiue row is china . | 8 | 6 | {'and_5': 5, 'result_6': 6, 'only_2': 2, 'filter_eq_1': 1, 'filter_eq_0': 0, 'all_rows_7': 7, 'gold_8': 8, '3_9': 9, 'silver_10': 10, '4_11': 11, 'str_eq_4': 4, 'str_hop_3': 3, 'nation_12': 12, 'china_13': 13} | {'and_5': 'and', 'result_6': 'true', 'only_2': 'only', 'filter_eq_1': 'filter_eq', 'filter_eq_0': 'filter_eq', 'all_rows_7': 'all_rows', 'gold_8': 'gold', '3_9': '3', 'silver_10': 'silver', '4_11': '4', 'str_eq_4': 'str_eq', 'str_hop_3': 'str_hop', 'nation_12': 'nation', 'china_13': 'china'} | {'and_5': [6], 'result_6': [], 'only_2': [5], 'filter_eq_1': [2, 3], 'filter_eq_0': [1], 'all_rows_7': [0], 'gold_8': [0], '3_9': [0], 'silver_10': [1], '4_11': [1], 'str_eq_4': [5], 'str_hop_3': [4], 'nation_12': [3], 'china_13': [4]} | ['rank', 'nation', 'gold', 'silver', 'bronze', 'total'] | [['1', 'japan', '6', '1', '6', '13'], ['2', 'china', '3', '4', '4', '11'], ['3', 'south korea', '3', '3', '3', '9'], ['4', 'mongolia', '1', '5', '12', '18'], ['5', 'north korea', '1', '1', '2', '4'], ['6', 'chinese taipei', '0', '0', '1', '1'], ['total', 'total', '14', '14', '28', '56']]
list of great central railway locomotives and rolling stock | https://en.wikipedia.org/wiki/List_of_Great_Central_Railway_locomotives_and_rolling_stock | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-11913905-6.html.csv | unique | no e50266 is the only car with a br unlined green with half yellow end . | {'scope': 'all', 'row': '3', 'col': '3', 'col_other': '1', 'criterion': 'equal', 'value': 'br unlined green with half yellow end', 'subset': None} | {'func': 'and', 'args': [{'func': 'only', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'livery', 'br unlined green with half yellow end'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose livery record fuzzily matches to br unlined green with half yellow end .', 'tostr': 'filter_eq { all_rows ; livery ; br unlined green with half yellow end }'}], 'result': True, 'ind': 1, 'tostr': 'only { filter_eq { all_rows ; livery ; br unlined green with half yellow end } }', 'tointer': 'select the rows whose livery record fuzzily matches to br unlined green with half yellow end . there is only one such row in the table .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'livery', 'br unlined green with half yellow end'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose livery record fuzzily matches to br unlined green with half yellow end .', 'tostr': 'filter_eq { all_rows ; livery ; br unlined green with half yellow end }'}, 'number & name'], 'result': 'no e50266', 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; livery ; br unlined green with half yellow end } ; number & name }'}, 'no e50266'], 'result': True, 'ind': 3, 'tostr': 'eq { hop { filter_eq { all_rows ; livery ; br unlined green with half yellow end } ; number & name } ; no e50266 }', 'tointer': 'the number & name record of this unqiue row is no e50266 .'}], 'result': True, 'ind': 4, 'tostr': 'and { only { filter_eq { all_rows ; livery ; br unlined green with half yellow end } } ; eq { hop { filter_eq { all_rows ; livery ; br unlined green with half yellow end } ; number & name } ; no e50266 } } = true', 'tointer': 'select the rows whose livery record fuzzily matches to br unlined green with half yellow end . there is only one such row in the table . the number & name record of this unqiue row is no e50266 .'} | and { only { filter_eq { all_rows ; livery ; br unlined green with half yellow end } } ; eq { hop { filter_eq { all_rows ; livery ; br unlined green with half yellow end } ; number & name } ; no e50266 } } = true | select the rows whose livery record fuzzily matches to br unlined green with half yellow end . there is only one such row in the table . the number & name record of this unqiue row is no e50266 . | 6 | 5 | {'and_4': 4, 'result_5': 5, 'only_1': 1, 'filter_str_eq_0': 0, 'all_rows_6': 6, 'livery_7': 7, 'br unlined green with half yellow end_8': 8, 'str_eq_3': 3, 'str_hop_2': 2, 'number & name_9': 9, 'no e50266_10': 10} | {'and_4': 'and', 'result_5': 'true', 'only_1': 'only', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_6': 'all_rows', 'livery_7': 'livery', 'br unlined green with half yellow end_8': 'br unlined green with half yellow end', 'str_eq_3': 'str_eq', 'str_hop_2': 'str_hop', 'number & name_9': 'number & name', 'no e50266_10': 'no e50266'} | {'and_4': [5], 'result_5': [], 'only_1': [4], 'filter_str_eq_0': [1, 2], 'all_rows_6': [0], 'livery_7': [0], 'br unlined green with half yellow end_8': [0], 'str_eq_3': [4], 'str_hop_2': [3], 'number & name_9': [2], 'no e50266_10': [3]} | ['number & name', 'description', 'livery', 'owner ( s )', 'date'] | [['operational', 'operational', 'operational', 'operational', 'operational'], ['nos 50321 51427', 'british rail class 101 dmcl dmbs', 'br lined green', 'renaissance railcars', '1958 / 59'], ['no e50266', 'british rail class 101 dmcl', 'br unlined green with half yellow end', 'renaissance railcars', '1957'], ['no m51616 m51622 m59276', 'british rail class 127 dmbs dms tslrb', 'br lined green', 'red triangle society', '1959'], ['undergoing overhaul , restoration or repairs', 'undergoing overhaul , restoration or repairs', 'undergoing overhaul , restoration or repairs', 'undergoing overhaul , restoration or repairs', 'undergoing overhaul , restoration or repairs'], ['no e53645 w53926', 'british rail class 108 dmcl dmbs', 'br blue and grey', 'nottingham ( gc ) dmu group', '1958 / 1959'], ['no e59575', 'british rail class 111 tslrb', 'br lined green', 'renaissance railcars', '1960'], ['no w79976', 'british railways ac cars railbus', 'br lined green', 'david clarke railway trust', '1958'], ['stored or static', 'stored or static', 'stored or static', 'stored or static', 'stored or static'], ['no e50193 e50203', 'british rail class 101 dmbs', 'br blue and grey', 'renaissance railcars', '1957'], ['no w51138 w51151 w59501', 'british rail class 116 dmbs dms tcl', 'br lined green', 'pressed steel heritage ltd', '1958 / 1960']]
1980 vfl season | https://en.wikipedia.org/wiki/1980_VFL_season | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-10809823-18.html.csv | count | there were 6 game venues used during the 1980 vfl season . | {'scope': 'all', 'criterion': 'all', 'value': 'venue', 'result': '6', 'col': '5', 'subset': None} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_all', 'args': ['all_rows', 'venue'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose venue record is arbitrary .', 'tostr': 'filter_all { all_rows ; venue }'}], 'result': '6', 'ind': 1, 'tostr': 'count { filter_all { all_rows ; venue } }', 'tointer': 'select the rows whose venue record is arbitrary . the number of such rows is 6 .'}, '6'], 'result': True, 'ind': 2, 'tostr': 'eq { count { filter_all { all_rows ; venue } } ; 6 } = true', 'tointer': 'select the rows whose venue record is arbitrary . the number of such rows is 6 .'} | eq { count { filter_all { all_rows ; venue } } ; 6 } = true | select the rows whose venue record is arbitrary . the number of such rows is 6 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'count_1': 1, 'filter_all_0': 0, 'all_rows_4': 4, 'venue_5': 5, '6_6': 6} | {'eq_2': 'eq', 'result_3': 'true', 'count_1': 'count', 'filter_all_0': 'filter_all', 'all_rows_4': 'all_rows', 'venue_5': 'venue', '6_6': '6'} | {'eq_2': [3], 'result_3': [], 'count_1': [2], 'filter_all_0': [1], 'all_rows_4': [0], 'venue_5': [0], '6_6': [2]} | ['home team', 'home team score', 'away team', 'away team score', 'venue', 'crowd', 'date'] | [['fitzroy', '15.14 ( 104 )', 'st kilda', '12.8 ( 80 )', 'junction oval', '6836', '2 august 1980'], ['collingwood', '11.24 ( 90 )', 'south melbourne', '8.9 ( 57 )', 'victoria park', '24739', '2 august 1980'], ['north melbourne', '12.15 ( 87 )', 'melbourne', '7.15 ( 57 )', 'arden street oval', '7544', '2 august 1980'], ['geelong', '15.14 ( 104 )', 'essendon', '13.12 ( 90 )', 'kardinia park', '24738', '2 august 1980'], ['hawthorn', '9.15 ( 69 )', 'carlton', '16.17 ( 113 )', 'princes park', '15046', '2 august 1980'], ['richmond', '23.18 ( 156 )', 'footscray', '6.5 ( 41 )', 'vfl park', '18282', '2 august 1980']] |
list of kraft nabisco championship champions | https://en.wikipedia.org/wiki/List_of_Kraft_Nabisco_Championship_champions | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-27864661-6.html.csv | count | there are three nations on the list of kraft nabisco championship champions that had one total win . | {'scope': 'all', 'criterion': 'equal', 'value': '1', 'result': '3', 'col': '7', 'subset': None} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_eq', 'args': ['all_rows', 'total wins', '1'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose total wins record is equal to 1 .', 'tostr': 'filter_eq { all_rows ; total wins ; 1 }'}], 'result': '3', 'ind': 1, 'tostr': 'count { filter_eq { all_rows ; total wins ; 1 } }', 'tointer': 'select the rows whose total wins record is equal to 1 . the number of such rows is 3 .'}, '3'], 'result': True, 'ind': 2, 'tostr': 'eq { count { filter_eq { all_rows ; total wins ; 1 } } ; 3 } = true', 'tointer': 'select the rows whose total wins record is equal to 1 . the number of such rows is 3 .'} | eq { count { filter_eq { all_rows ; total wins ; 1 } } ; 3 } = true | select the rows whose total wins record is equal to 1 . the number of such rows is 3 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'count_1': 1, 'filter_eq_0': 0, 'all_rows_4': 4, 'total wins_5': 5, '1_6': 6, '3_7': 7} | {'eq_2': 'eq', 'result_3': 'true', 'count_1': 'count', 'filter_eq_0': 'filter_eq', 'all_rows_4': 'all_rows', 'total wins_5': 'total wins', '1_6': '1', '3_7': '3'} | {'eq_2': [3], 'result_3': [], 'count_1': [2], 'filter_eq_0': [1], 'all_rows_4': [0], 'total wins_5': [0], '1_6': [0], '3_7': [2]} | ['rank', 'nationality', 'non - major wins', 'non - major winners', 'major wins', 'major winners', 'total wins', 'total winners', 'first title', 'last title'] | [['1', 'united states', '8', '8', '19', '13', '27', '21', '1972', '2011'], ['2', 'sweden', '0', '0', '4', '2', '4', '2', '1993', '2005'], ['3', 'south korea', '0', '0', '3', '3', '3', '3', '2004', '2013'], ['t4', 'australia', '0', '0', '2', '1', '2', '1', '2000', '2006'], ['t4', 'canada', '2', '1', '0', '0', '2', '1', '1978', '1979'], ['t6', 'france', '0', '0', '1', '1', '1', '1', '2003', '2003'], ['t6', 'mexico', '0', '0', '1', '1', '1', '1', '2008', '2008'], ['t6', 'south africa', '1', '1', '0', '0', '1', '1', '1982', '1982']] |
1979 - 80 philadelphia flyers season | https://en.wikipedia.org/wiki/1979%E2%80%9380_Philadelphia_Flyers_season | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-14208862-4.html.csv | ordinal | in the 1979 - 80 philadelphia flyers season , the game with the 2nd lowest attendance was on december 22nd . | {'row': '10', 'col': '6', 'order': '2', 'col_other': '1', 'max_or_min': 'min_to_max', 'value_mentioned': 'no', 'scope': 'all', 'subset': None} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'nth_argmin', 'args': ['all_rows', 'attendance', '2'], 'result': None, 'ind': 0, 'tostr': 'nth_argmin { all_rows ; attendance ; 2 }'}, 'date'], 'result': 'december 22', 'ind': 1, 'tostr': 'hop { nth_argmin { all_rows ; attendance ; 2 } ; date }'}, 'december 22'], 'result': True, 'ind': 2, 'tostr': 'eq { hop { nth_argmin { all_rows ; attendance ; 2 } ; date } ; december 22 } = true', 'tointer': 'select the row whose attendance record of all rows is 2nd minimum . the date record of this row is december 22 .'} | eq { hop { nth_argmin { all_rows ; attendance ; 2 } ; date } ; december 22 } = true | select the row whose attendance record of all rows is 2nd minimum . the date record of this row is december 22 . | 3 | 3 | {'str_eq_2': 2, 'result_3': 3, 'str_hop_1': 1, 'nth_argmin_0': 0, 'all_rows_4': 4, 'attendance_5': 5, '2_6': 6, 'date_7': 7, 'december 22_8': 8} | {'str_eq_2': 'str_eq', 'result_3': 'true', 'str_hop_1': 'str_hop', 'nth_argmin_0': 'nth_argmin', 'all_rows_4': 'all_rows', 'attendance_5': 'attendance', '2_6': '2', 'date_7': 'date', 'december 22_8': 'december 22'} | {'str_eq_2': [3], 'result_3': [], 'str_hop_1': [2], 'nth_argmin_0': [1], 'all_rows_4': [0], 'attendance_5': [0], '2_6': [0], 'date_7': [1], 'december 22_8': [2]} | ['date', 'visitor', 'score', 'home', 'decision', 'attendance', 'record'] | [['december 1', 'philadelphia', '4 - 4', 'toronto', 'myre', '16485', '17 - 1 - 4'], ['december 2', 'detroit', '4 - 4', 'philadelphia', 'peeters', '17077', '17 - 1 - 5'], ['december 4', 'boston', '2 - 2', 'philadelphia', 'myre', '17077', '17 - 1 - 6'], ['december 6', 'los angeles', '4 - 9', 'philadelphia', 'peeters', '17077', '18 - 1 - 6'], ['december 9', 'chicago', '4 - 4', 'philadelphia', 'myre', '17077', '18 - 1 - 7'], ['december 13', 'quebec', '4 - 6', 'philadelphia', 'peeters', '17077', '19 - 1 - 7'], ['december 15', 'buffalo', '2 - 3', 'philadelphia', 'peeters', '17077', '20 - 1 - 7'], ['december 16', 'philadelphia', '1 - 1', 'ny rangers', 'myre', '17404', '20 - 1 - 8'], ['december 20', 'pittsburgh', '1 - 1', 'philadelphia', 'peeters', '17077', '20 - 1 - 9'], ['december 22', 'philadelphia', '5 - 2', 'boston', 'myre', '14673', '21 - 1 - 9'], ['december 23', 'hartford', '2 - 4', 'philadelphia', 'peeters', '17077', '22 - 1 - 9'], ['december 26', 'philadelphia', '4 - 4', 'hartford', 'myre', '7627', '22 - 1 - 10'], ['december 28', 'philadelphia', '5 - 3', 'winnipeg', 'peeters', '16038', '23 - 1 - 10'], ['december 29', 'philadelphia', '3 - 2', 'colorado', 'myre', '16452', '24 - 1 - 10']] |
solomon islands national football team results | https://en.wikipedia.org/wiki/Solomon_Islands_national_football_team_results | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-12276713-1.html.csv | unique | the game played on 11 may 1996 was the only game that the solomon islands national team played in tahiti . | {'scope': 'all', 'row': '2', 'col': '2', 'col_other': '1', 'criterion': 'fuzzily_match', 'value': 'tahiti', 'subset': None} | {'func': 'and', 'args': [{'func': 'only', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'venue', 'tahiti'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose venue record fuzzily matches to tahiti .', 'tostr': 'filter_eq { all_rows ; venue ; tahiti }'}], 'result': True, 'ind': 1, 'tostr': 'only { filter_eq { all_rows ; venue ; tahiti } }', 'tointer': 'select the rows whose venue record fuzzily matches to tahiti . there is only one such row in the table .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'venue', 'tahiti'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose venue record fuzzily matches to tahiti .', 'tostr': 'filter_eq { all_rows ; venue ; tahiti }'}, 'date'], 'result': '11 may 1996', 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; venue ; tahiti } ; date }'}, '11 may 1996'], 'result': True, 'ind': 3, 'tostr': 'eq { hop { filter_eq { all_rows ; venue ; tahiti } ; date } ; 11 may 1996 }', 'tointer': 'the date record of this unqiue row is 11 may 1996 .'}], 'result': True, 'ind': 4, 'tostr': 'and { only { filter_eq { all_rows ; venue ; tahiti } } ; eq { hop { filter_eq { all_rows ; venue ; tahiti } ; date } ; 11 may 1996 } } = true', 'tointer': 'select the rows whose venue record fuzzily matches to tahiti . there is only one such row in the table . the date record of this unqiue row is 11 may 1996 .'} | and { only { filter_eq { all_rows ; venue ; tahiti } } ; eq { hop { filter_eq { all_rows ; venue ; tahiti } ; date } ; 11 may 1996 } } = true | select the rows whose venue record fuzzily matches to tahiti . there is only one such row in the table . the date record of this unqiue row is 11 may 1996 . | 6 | 5 | {'and_4': 4, 'result_5': 5, 'only_1': 1, 'filter_str_eq_0': 0, 'all_rows_6': 6, 'venue_7': 7, 'tahiti_8': 8, 'str_eq_3': 3, 'str_hop_2': 2, 'date_9': 9, '11 may 1996_10': 10} | {'and_4': 'and', 'result_5': 'true', 'only_1': 'only', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_6': 'all_rows', 'venue_7': 'venue', 'tahiti_8': 'tahiti', 'str_eq_3': 'str_eq', 'str_hop_2': 'str_hop', 'date_9': 'date', '11 may 1996_10': '11 may 1996'} | {'and_4': [5], 'result_5': [], 'only_1': [4], 'filter_str_eq_0': [1, 2], 'all_rows_6': [0], 'venue_7': [0], 'tahiti_8': [0], 'str_eq_3': [4], 'str_hop_2': [3], 'date_9': [2], '11 may 1996_10': [3]} | ['date', 'venue', 'score', 'competition', 'att'] | [['17 november 1995', 'solomon islands ( h )', '0 - 1', '1996 ofc nations cup', '-'], ['11 may 1996', 'tahiti ( a )', '1 - 2', '1996 ofc nations cup', '-'], ['16 september 1996', 'papua new guinea ( a )', '1 - 1', '1998 fifa world cup qualification', '-'], ['18 september 1996', 'papua new guinea ( n )', '1 - 1', '1998 fifa world cup qualification', '-'], ['15 february 1997', 'tonga ( a )', '4 - 0', '1998 fifa world cup qualification', '-'], ['17 february 1997', 'fiji ( a )', '2 - 1', 'friendly', '-'], ['21 february 1997', 'fiji ( a )', '2 - 3', 'friendly', '-'], ['1 march 1997', 'solomon islands ( h )', '9 - 0', '1998 fifa world cup qualification', '-'], ['11 june 1997', 'australia ( n )', '0 - 13', '1998 fifa world cup qualification', '-'], ['15 june 1997', 'australia ( n )', '4 - 1', '1998 fifa world cup qualification', '-'], ['17 june 1997', 'australia ( a )', '2 - 6', '1998 fifa world cup qualification', '-'], ['21 june 1997', 'australia ( n )', '1 - 1', '1998 fifa world cup qualification', '-'], ['5 september 1998', 'vanuatu ( n )', '3 - 1', 'melanesia cup 1998', '-'], ['8 september 1998', 'vanuatu ( n )', '3 - 2', 'melanesia cup 1998', '-'], ['10 september 1998', 'vanuatu ( a )', '1 - 3', 'melanesia cup 1998', '-'], ['12 september 1998', 'vanuatu ( n )', '1 - 1', 'melanesia cup 1998', '-'], ['19 september 1998', 'solomon islands ( h )', '2 - 1', 'friendly', '-']]
2007 - 08 portland trail blazers season | https://en.wikipedia.org/wiki/2007%E2%80%9308_Portland_Trail_Blazers_season | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-11964047-5.html.csv | aggregation | in the 2007 - 08 portland trail blazers season , for games where roy was at least one of the leading scorers , his average number of points was 25.75 . | {'scope': 'subset', 'col': '5', 'type': 'average', 'result': '25.75', 'subset': {'col': '5', 'criterion': 'equal', 'value': 'roy :'}} | {'func': 'round_eq', 'args': [{'func': 'avg', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'leading scorer', 'roy :'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; leading scorer ; roy : }', 'tointer': 'select the rows whose leading scorer record fuzzily matches to roy : .'}, 'leading scorer'], 'result': '25.75', 'ind': 1, 'tostr': 'avg { filter_eq { all_rows ; leading scorer ; roy : } ; leading scorer }'}, '25.75'], 'result': True, 'ind': 2, 'tostr': 'round_eq { avg { filter_eq { all_rows ; leading scorer ; roy : } ; leading scorer } ; 25.75 } = true', 'tointer': 'select the rows whose leading scorer record fuzzily matches to roy : . the average of the leading scorer record of these rows is 25.75 .'} | round_eq { avg { filter_eq { all_rows ; leading scorer ; roy : } ; leading scorer } ; 25.75 } = true | select the rows whose leading scorer record fuzzily matches to roy : . the average of the leading scorer record of these rows is 25.75 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'avg_1': 1, 'filter_str_eq_0': 0, 'all_rows_4': 4, 'leading scorer_5': 5, 'roy:_6': 6, 'leading scorer_7': 7, '25.75_8': 8} | {'eq_2': 'eq', 'result_3': 'true', 'avg_1': 'avg', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_4': 'all_rows', 'leading scorer_5': 'leading scorer', 'roy:_6': 'roy :', 'leading scorer_7': 'leading scorer', '25.75_8': '25.75'} | {'eq_2': [3], 'result_3': [], 'avg_1': [2], 'filter_str_eq_0': [1], 'all_rows_4': [0], 'leading scorer_5': [0], 'roy:_6': [0], 'leading scorer_7': [1], '25.75_8': [2]} | ['date', 'visitor', 'score', 'home', 'leading scorer', 'attendance', 'record', 'streak'] | [['november 2', 'portland trail blazers', 'l 93 - 113', 'new orleans hornets', 'roy : 23', 'new orleans arena 9817', '0 - 2', 'l2'], ['november 3', 'portland trail blazers', 'l 80 - 89', 'houston rockets', 'roy : 23', 'toyota center 18232', '0 - 3', 'l3'], ['november 7', 'new orleans', 'w 90 - 93', 'portland trail blazers', 'west : 34', 'rose garden 19980', '1 - 3', 'w1'], ['november 9', 'memphis grizzlies', 'w 98 - 110', 'portland trail blazers', 'gay : 31', 'rose garden 18112', '2 - 3', 'w2'], ['november 10', 'dallas mavericks', 'w 82 - 91', 'portland trail blazers', 'roy : 32', 'rose garden 19255', '3 - 3', 'w3'], ['november 13', 'detroit pistons', 'w 94 - 102', 'portland trail blazers', 'aldridge : 22', 'rose garden 19980', '4 - 3', 'w4'], ['november 14', 'portland trail blazers', 'l 93 - 110', 'denver nuggets', 'anthony : 25', 'pepsi center 13289', '4 - 4', 'l1'], ['november 16', 'portland trail blazers', 'l 88 - 92', 'philadelphia 76ers', 'roy : 25 aldridge : 25', 'wachovia center 11483', '4 - 5', 'l2'], ['november 17', 'portland trail blazers', 'l 90 - 109', 'washington wizards', 'jamison : 30', 'verizon center 20173', '4 - 6', 'l3'], ['november 19', 'portland trail blazers', 'l 92 - 101', 'charlotte bobcats', 'wallace : 27', 'charlotte bobcats arena 10612', '4 - 7', 'l4'], ['november 21', 'new jersey nets', 'l 106 - 101', 'portland trail blazers', 'jefferson : 30', 'rose garden 18423', '4 - 8', 'l5'], ['november 23', 'sacramento kings', 'w 84 - 87', 'portland trail blazers', 'aldridge : 28', 'rose garden 19980', '5 - 8', 'w1'], ['november 26', 'orlando magic', 'l 85 - 74', 'portland trail blazers', 'turkoglu : 21', 'rose garden 15922', '5 - 9', 'l1'], ['november 28', 'indiana pacers', 'l 95 - 89', 'portland trail blazers', 'outlaw : 26', 'rose garden 16168', '5 - 10', 'l2'], ['november 30', 'portland trail blazers', 'l 80 - 91', 'dallas mavericks', 'howard : 23', 'american airlines center 20301', '5 - 11', 'l3']]
1966 dutch grand prix | https://en.wikipedia.org/wiki/1966_Dutch_Grand_Prix | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-1122356-1.html.csv | count | there were 3 drivers with a +6 laps completion time during the 1966 dutch grand prix . | {'scope': 'all', 'criterion': 'equal', 'value': '+ 6 laps', 'result': '3', 'col': '4', 'subset': None} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'time / retired', '+ 6 laps'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose time / retired record fuzzily matches to + 6 laps .', 'tostr': 'filter_eq { all_rows ; time / retired ; + 6 laps }'}], 'result': '3', 'ind': 1, 'tostr': 'count { filter_eq { all_rows ; time / retired ; + 6 laps } }', 'tointer': 'select the rows whose time / retired record fuzzily matches to + 6 laps . the number of such rows is 3 .'}, '3'], 'result': True, 'ind': 2, 'tostr': 'eq { count { filter_eq { all_rows ; time / retired ; + 6 laps } } ; 3 } = true', 'tointer': 'select the rows whose time / retired record fuzzily matches to + 6 laps . the number of such rows is 3 .'} | eq { count { filter_eq { all_rows ; time / retired ; + 6 laps } } ; 3 } = true | select the rows whose time / retired record fuzzily matches to + 6 laps . the number of such rows is 3 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'count_1': 1, 'filter_str_eq_0': 0, 'all_rows_4': 4, 'time / retired_5': 5, '+ 6 laps_6': 6, '3_7': 7} | {'eq_2': 'eq', 'result_3': 'true', 'count_1': 'count', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_4': 'all_rows', 'time / retired_5': 'time / retired', '+ 6 laps_6': '+ 6 laps', '3_7': '3'} | {'eq_2': [3], 'result_3': [], 'count_1': [2], 'filter_str_eq_0': [1], 'all_rows_4': [0], 'time / retired_5': [0], '+ 6 laps_6': [0], '3_7': [2]} | ['driver', 'constructor', 'laps', 'time / retired', 'grid'] | [['jack brabham', 'brabham - repco', '90', '2:20:32.5', '1'], ['graham hill', 'brm', '89', '+ 1 lap', '7'], ['jim clark', 'lotus - climax', '88', '+ 2 laps', '3'], ['jackie stewart', 'brm', '88', '+ 2 laps', '8'], ['mike spence', 'lotus - brm', '87', '+ 3 laps', '12'], ['lorenzo bandini', 'ferrari', '87', '+ 3 laps', '9'], ['jo bonnier', 'cooper - maserati', '84', '+ 6 laps', '13'], ['john taylor', 'brabham - brm', '84', '+ 6 laps', '17'], ['guy ligier', 'cooper - maserati', '84', '+ 6 laps', '16'], ['jo siffert', 'cooper - maserati', '79', 'engine', '11'], ['bob anderson', 'brabham - climax', '73', 'suspension', '14'], ['john surtees', 'cooper - maserati', '44', 'electrical', '10'], ['denny hulme', 'brabham - repco', '37', 'ignition', '2'], ['peter arundell', 'lotus - brm', '28', 'ignition', '15'], ['dan gurney', 'eagle - climax', '26', 'oil leak', '4'], ['mike parkes', 'ferrari', '10', 'accident', '5'], ['jochen rindt', 'cooper - maserati', '2', 'accident', '6']] |
united states house of representatives elections , 1926 | https://en.wikipedia.org/wiki/United_States_House_of_Representatives_elections%2C_1926 | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-1342379-20.html.csv | comparative | george h. tinkham and james a. gallivan had been in the same office the longest . | {'row_1': '5', 'row_2': '6', 'col': '4', 'col_other': '2', 'relation': 'equal', 'record_mentioned': 'no', 'diff_result': None} | {'func': 'eq', 'args': [{'func': 'num_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'incumbent', 'george h tinkham'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose incumbent record fuzzily matches to george h tinkham .', 'tostr': 'filter_eq { all_rows ; incumbent ; george h tinkham }'}, 'first elected'], 'result': None, 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; incumbent ; george h tinkham } ; first elected }', 'tointer': 'select the rows whose incumbent record fuzzily matches to george h tinkham . take the first elected record of this row .'}, {'func': 'num_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'incumbent', 'james a gallivan'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose incumbent record fuzzily matches to james a gallivan .', 'tostr': 'filter_eq { all_rows ; incumbent ; james a gallivan }'}, 'first elected'], 'result': None, 'ind': 3, 'tostr': 'hop { filter_eq { all_rows ; incumbent ; james a gallivan } ; first elected }', 'tointer': 'select the rows whose incumbent record fuzzily matches to james a gallivan . take the first elected record of this row .'}], 'result': True, 'ind': 4, 'tostr': 'eq { hop { filter_eq { all_rows ; incumbent ; george h tinkham } ; first elected } ; hop { filter_eq { all_rows ; incumbent ; james a gallivan } ; first elected } } = true', 'tointer': 'select the rows whose incumbent record fuzzily matches to george h tinkham . take the first elected record of this row . select the rows whose incumbent record fuzzily matches to james a gallivan . take the first elected record of this row . the first record is equal to the second record .'} | eq { hop { filter_eq { all_rows ; incumbent ; george h tinkham } ; first elected } ; hop { filter_eq { all_rows ; incumbent ; james a gallivan } ; first elected } } = true | select the rows whose incumbent record fuzzily matches to george h tinkham . take the first elected record of this row . select the rows whose incumbent record fuzzily matches to james a gallivan . take the first elected record of this row . the first record is equal to the second record . | 5 | 5 | {'eq_4': 4, 'result_5': 5, 'num_hop_2': 2, 'filter_str_eq_0': 0, 'all_rows_6': 6, 'incumbent_7': 7, 'george h tinkham_8': 8, 'first elected_9': 9, 'num_hop_3': 3, 'filter_str_eq_1': 1, 'all_rows_10': 10, 'incumbent_11': 11, 'james a gallivan_12': 12, 'first elected_13': 13} | {'eq_4': 'eq', 'result_5': 'true', 'num_hop_2': 'num_hop', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_6': 'all_rows', 'incumbent_7': 'incumbent', 'george h tinkham_8': 'george h tinkham', 'first elected_9': 'first elected', 'num_hop_3': 'num_hop', 'filter_str_eq_1': 'filter_str_eq', 'all_rows_10': 'all_rows', 'incumbent_11': 'incumbent', 'james a gallivan_12': 'james a gallivan', 'first elected_13': 'first elected'} | {'eq_4': [5], 'result_5': [], 'num_hop_2': [4], 'filter_str_eq_0': [2], 'all_rows_6': [0], 'incumbent_7': [0], 'george h tinkham_8': [0], 'first elected_9': [2], 'num_hop_3': [4], 'filter_str_eq_1': [3], 'all_rows_10': [1], 'incumbent_11': [1], 'james a gallivan_12': [1], 'first elected_13': [3]} | ['district', 'incumbent', 'party', 'first elected', 'result', 'candidates'] | [['massachusetts 2', 'henry l bowles', 'republican', '1925', 're - elected', 'henry l bowles ( r ) 64.0 % john hall ( d ) 36.0 %'], ['massachusetts 3', 'frank h foss', 'republican', '1924', 're - elected', 'frank h foss ( r ) 62.8 % joseph e casey ( d ) 37.2 %'], ['massachusetts 6', 'abram andrew', 'republican', '1921', 're - elected', 'abram andrew ( r ) 76.9 % james mcpherson ( d ) 23.1 %'], ['massachusetts 10', 'john j douglass', 'democratic', '1924', 're - elected', 'john j douglass ( d ) unopposed'], ['massachusetts 11', 'george h tinkham', 'republican', '1914', 're - elected', 'george h tinkham ( r ) unopposed'], ['massachusetts 12', 'james a gallivan', 'democratic', '1914', 're - elected', 'james a gallivan ( d ) unopposed']]
1979 - 80 segunda división | https://en.wikipedia.org/wiki/1979%E2%80%9380_Segunda_Divisi%C3%B3n | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-12193971-2.html.csv | superlative | ca osasuna was the team that had the highest number of wins in the 79-80 segunda division . | {'scope': 'all', 'col_superlative': '5', 'row_superlative': '3', 'value_mentioned': 'no', 'max_or_min': 'max', 'other_col': '2', 'subset': None} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'argmax', 'args': ['all_rows', 'wins'], 'result': None, 'ind': 0, 'tostr': 'argmax { all_rows ; wins }'}, 'club'], 'result': 'ca osasuna', 'ind': 1, 'tostr': 'hop { argmax { all_rows ; wins } ; club }'}, 'ca osasuna'], 'result': True, 'ind': 2, 'tostr': 'eq { hop { argmax { all_rows ; wins } ; club } ; ca osasuna } = true', 'tointer': 'select the row whose wins record of all rows is maximum . the club record of this row is ca osasuna .'} | eq { hop { argmax { all_rows ; wins } ; club } ; ca osasuna } = true | select the row whose wins record of all rows is maximum . the club record of this row is ca osasuna . | 3 | 3 | {'str_eq_2': 2, 'result_3': 3, 'str_hop_1': 1, 'argmax_0': 0, 'all_rows_4': 4, 'wins_5': 5, 'club_6': 6, 'ca osasuna_7': 7} | {'str_eq_2': 'str_eq', 'result_3': 'true', 'str_hop_1': 'str_hop', 'argmax_0': 'argmax', 'all_rows_4': 'all_rows', 'wins_5': 'wins', 'club_6': 'club', 'ca osasuna_7': 'ca osasuna'} | {'str_eq_2': [3], 'result_3': [], 'str_hop_1': [2], 'argmax_0': [1], 'all_rows_4': [0], 'wins_5': [0], 'club_6': [1], 'ca osasuna_7': [2]} | ['position', 'club', 'played', 'points', 'wins', 'draws', 'losses', 'goals for', 'goals against', 'goal difference'] | [['1', 'real murcia', '38', '47 + 9', '19', '9', '10', '58', '39', '+ 19'], ['2', 'real valladolid', '38', '45 + 7', '17', '11', '10', '53', '40', '+ 13'], ['3', 'ca osasuna', '38', '44 + 6', '20', '4', '14', '74', '49', '+ 25'], ['4', 'elche cf', '38', '42 + 4', '15', '12', '11', '46', '41', '+ 5'], ['5', 'cd castellón', '38', '42 + 4', '16', '10', '12', '47', '42', '+ 5'], ['6', 'ce sabadell fc', '38', '41 + 3', '17', '7', '14', '47', '49', '- 2'], ['7', 'castilla cf', '38', '40 + 2', '15', '10', '13', '46', '39', '+ 7'], ['8', 'cádiz cf', '38', '39 + 1', '15', '9', '14', '43', '42', '+ 1'], ['9', 'deportivo alavés', '38', '38', '13', '12', '13', '37', '39', '- 2'], ['10', 'levante ud', '38', '38', '15', '8', '15', '47', '51', '- 4'], ['11', 'real oviedo', '38', '38', '13', '12', '13', '40', '45', '- 5'], ['12', 'recreativo de huelva', '38', '37 - 1', '12', '13', '13', '38', '42', '- 4'], ['13', 'granada cf', '38', '37 - 1', '13', '11', '14', '42', '42', '0'], ['14', 'getafe deportivo', '38', '37 - 1', '13', '11', '14', '44', '50', '- 6'], ['15', 'palencia cf', '38', '36 - 2', '11', '14', '13', '49', '49', '0'], ['16', 'racing de santander', '38', '36 - 2', '10', '16', '12', '34', '39', '- 5'], ['17', 'celta de vigo', '38', '35 - 3', '12', '11', '15', '48', '43', '+ 5'], ['18', 'deportivo de la coruña', '38', '35 - 3', '15', '5', '18', '49', '50', '- 1'], ['19', 'gimnàstic de tarragona', '38', '27 - 11', '8', '11', '19', '29', '61', '- 32'], ['20', 'algeciras cf', '38', '26 - 12', '7', '12', '19', '29', '48', '- 19']] |
list of kent first - class cricket records | https://en.wikipedia.org/wiki/List_of_Kent_first-class_cricket_records | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-11204543-17.html.csv | count | there are 5 seasons listed in the kent first - class cricket records . | {'scope': 'all', 'criterion': 'all', 'value': 'n/a', 'result': '5', 'col': '5', 'subset': None} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_all', 'args': ['all_rows', 'season'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose season record is arbitrary .', 'tostr': 'filter_all { all_rows ; season }'}], 'result': '5', 'ind': 1, 'tostr': 'count { filter_all { all_rows ; season } }', 'tointer': 'select the rows whose season record is arbitrary . the number of such rows is 5 .'}, '5'], 'result': True, 'ind': 2, 'tostr': 'eq { count { filter_all { all_rows ; season } } ; 5 } = true', 'tointer': 'select the rows whose season record is arbitrary . the number of such rows is 5 .'} | eq { count { filter_all { all_rows ; season } } ; 5 } = true | select the rows whose season record is arbitrary . the number of such rows is 5 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'count_1': 1, 'filter_all_0': 0, 'all_rows_4': 4, 'season_5': 5, '5_6': 6} | {'eq_2': 'eq', 'result_3': 'true', 'count_1': 'count', 'filter_all_0': 'filter_all', 'all_rows_4': 'all_rows', 'season_5': 'season', '5_6': '5'} | {'eq_2': [3], 'result_3': [], 'count_1': [2], 'filter_all_0': [1], 'all_rows_4': [0], 'season_5': [0], '5_6': [2]} | ['bowling', 'player', 'opponent', 'venue', 'season'] | [['17 - 48', 'colin blythe', 'v northamptonshire', 'county ground , northampton', '1907'], ['17 - 67', 'tich freeman', 'v sussex', 'county ground , hove', '1922'], ['17 - 92', 'tich freeman', 'v warwickshire', 'cheriton road , folkestone', '1932'], ['16 - 80', 'doug wright', 'v somerset', 'recreation ground , bath', '1939'], ['16 - 82', 'tich freeman', 'v northamptonshire', 'nevill ground , tunbridge wells', '1932']] |
weightlifting at the 2007 pan american games | https://en.wikipedia.org/wiki/Weightlifting_at_the_2007_Pan_American_Games | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-17703223-11.html.csv | ordinal | the lowest bodyweight of a lifter who achieved a total of over 195 kg was the 57.05 kg of rusmeris villar . | {'scope': 'subset', 'row': '2', 'col': '2', 'order': '1', 'col_other': '1', 'max_or_min': 'min_to_max', 'value_mentioned': 'yes', 'subset': {'col': '5', 'criterion': 'greater_than', 'value': '195.0'}} | {'func': 'and', 'args': [{'func': 'eq', 'args': [{'func': 'nth_min', 'args': [{'func': 'filter_greater', 'args': ['all_rows', 'total ( kg )', '195.0'], 'result': None, 'ind': 0, 'tostr': 'filter_greater { all_rows ; total ( kg ) ; 195.0 }', 'tointer': 'select the rows whose total ( kg ) record is greater than 195.0 .'}, 'bodyweight', '1'], 'result': '57.05', 'ind': 1, 'tostr': 'nth_min { filter_greater { all_rows ; total ( kg ) ; 195.0 } ; bodyweight ; 1 }', 'tointer': 'select the rows whose total ( kg ) record is greater than 195.0 . the 1st minimum bodyweight record of these rows is 57.05 .'}, '57.05'], 'result': True, 'ind': 2, 'tostr': 'eq { nth_min { filter_greater { all_rows ; total ( kg ) ; 195.0 } ; bodyweight ; 1 } ; 57.05 }', 'tointer': 'select the rows whose total ( kg ) record is greater than 195.0 . the 1st minimum bodyweight record of these rows is 57.05 .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'nth_argmin', 'args': [{'func': 'filter_greater', 'args': ['all_rows', 'total ( kg )', '195.0'], 'result': None, 'ind': 0, 'tostr': 'filter_greater { all_rows ; total ( kg ) ; 195.0 }', 'tointer': 'select the rows whose total ( kg ) record is greater than 195.0 .'}, 'bodyweight', '1'], 'result': None, 'ind': 3, 'tostr': 'nth_argmin { filter_greater { all_rows ; total ( kg ) ; 195.0 } ; bodyweight ; 1 }'}, 'name'], 'result': 'rusmeris villar ( col )', 'ind': 4, 'tostr': 'hop { nth_argmin { filter_greater { all_rows ; total ( kg ) ; 195.0 } ; bodyweight ; 1 } ; name }'}, 'rusmeris villar ( col )'], 'result': True, 'ind': 5, 'tostr': 'eq { hop { nth_argmin { filter_greater { all_rows ; total ( kg ) ; 195.0 } ; bodyweight ; 1 } ; name } ; rusmeris villar ( col ) }', 'tointer': 'the name record of the row with 1st minimum bodyweight record is rusmeris villar ( col ) .'}], 'result': True, 'ind': 6, 'tostr': 'and { eq { nth_min { filter_greater { all_rows ; total ( kg ) ; 195.0 } ; bodyweight ; 1 } ; 57.05 } ; eq { hop { nth_argmin { filter_greater { all_rows ; total ( kg ) ; 195.0 } ; bodyweight ; 1 } ; name } ; rusmeris villar ( col ) } } = true', 'tointer': 'select the rows whose total ( kg ) record is greater than 195.0 . the 1st minimum bodyweight record of these rows is 57.05 . the name record of the row with 1st minimum bodyweight record is rusmeris villar ( col ) .'} | and { eq { nth_min { filter_greater { all_rows ; total ( kg ) ; 195.0 } ; bodyweight ; 1 } ; 57.05 } ; eq { hop { nth_argmin { filter_greater { all_rows ; total ( kg ) ; 195.0 } ; bodyweight ; 1 } ; name } ; rusmeris villar ( col ) } } = true | select the rows whose total ( kg ) record is greater than 195.0 . the 1st minimum bodyweight record of these rows is 57.05 . the name record of the row with 1st minimum bodyweight record is rusmeris villar ( col ) . | 8 | 7 | {'and_6': 6, 'result_7': 7, 'eq_2': 2, 'nth_min_1': 1, 'filter_greater_0': 0, 'all_rows_8': 8, 'total (kg)_9': 9, '195.0_10': 10, 'bodyweight_11': 11, '1_12': 12, '57.05_13': 13, 'str_eq_5': 5, 'str_hop_4': 4, 'nth_argmin_3': 3, 'bodyweight_14': 14, '1_15': 15, 'name_16': 16, 'rusmeris villar ( col )_17': 17} | {'and_6': 'and', 'result_7': 'true', 'eq_2': 'eq', 'nth_min_1': 'nth_min', 'filter_greater_0': 'filter_greater', 'all_rows_8': 'all_rows', 'total (kg)_9': 'total ( kg )', '195.0_10': '195.0', 'bodyweight_11': 'bodyweight', '1_12': '1', '57.05_13': '57.05', 'str_eq_5': 'str_eq', 'str_hop_4': 'str_hop', 'nth_argmin_3': 'nth_argmin', 'bodyweight_14': 'bodyweight', '1_15': '1', 'name_16': 'name', 'rusmeris villar ( col )_17': 'rusmeris villar ( col )'} | {'and_6': [7], 'result_7': [], 'eq_2': [6], 'nth_min_1': [2], 'filter_greater_0': [1, 3], 'all_rows_8': [0], 'total (kg)_9': [0], '195.0_10': [0], 'bodyweight_11': [1], '1_12': [1], '57.05_13': [2], 'str_eq_5': [6], 'str_hop_4': [5], 'nth_argmin_3': [4], 'bodyweight_14': [3], '1_15': [3], 'name_16': [4], 'rusmeris villar ( col )_17': [5]} | ['name', 'bodyweight', 'snatch', 'clean & jerk', 'total ( kg )'] | [['alexandra escobar ( ecu )', '57.35', '98.0', '125.0', '223.0'], ['rusmeris villar ( col )', '57.05', '90.0', '114.0', '204.0'], ['maria cecilia floriddia ( arg )', '57.45', '90.0', '110.0', '200.0'], ['gretty lugo ( ven )', '57.55', '88.0', '108.0', '196.0'], ['geralee vega ( pur )', '57.80', '85.0', '110.0', '195.0'], ['jackie berube ( usa )', '57.85', '86.0', '105.0', '191.0'], ['wildri de los santos ( dom )', '57.35', '84.0', '106.0', '190.0'], ['quisia yaneli ( mex )', '58.00', '82.0', '103.0', '185.0'], ['valérie lefebvre ( can )', '57.75', '78.0', '98.0', '176.0'], ['eliane silva ( bra )', '57.00', '75.0', '90.0', '165.0']]
stateless ( band ) | https://en.wikipedia.org/wiki/Stateless_%28band%29 | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-11816737-3.html.csv | superlative | the earliest album stateless recorded with ! k7 was on may 14 , 2007 . | {'scope': 'subset', 'col_superlative': '2', 'row_superlative': '2', 'value_mentioned': 'yes', 'max_or_min': 'min', 'other_col': '1,3,4', 'subset': {'col': '4', 'criterion': 'equal', 'value': '! k7'}} | {'func': 'and', 'args': [{'func': 'eq', 'args': [{'func': 'min', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'label', '! k7'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; label ; ! k7 }', 'tointer': 'select the rows whose label record fuzzily matches to ! k7 .'}, 'date'], 'result': '14 may', 'ind': 1, 'tostr': 'min { filter_eq { all_rows ; label ; ! k7 } ; date }', 'tointer': 'select the rows whose label record fuzzily matches to ! k7 . the minimum date record of these rows is 14 may .'}, '14 may'], 'result': True, 'ind': 2, 'tostr': 'eq { min { filter_eq { all_rows ; label ; ! k7 } ; date } ; 14 may }', 'tointer': 'select the rows whose label record fuzzily matches to ! k7 . the minimum date record of these rows is 14 may .'}, {'func': 'and', 'args': [{'func': 'eq', 'args': [{'func': 'num_hop', 'args': [{'func': 'argmin', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'label', '! k7'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; label ; ! k7 }', 'tointer': 'select the rows whose label record fuzzily matches to ! k7 .'}, 'date'], 'result': None, 'ind': 3, 'tostr': 'argmin { filter_eq { all_rows ; label ; ! k7 } ; date }'}, 'year'], 'result': '2007', 'ind': 4, 'tostr': 'hop { argmin { filter_eq { all_rows ; label ; ! k7 } ; date } ; year }'}, '2007'], 'result': True, 'ind': 5, 'tostr': 'eq { hop { argmin { filter_eq { all_rows ; label ; ! k7 } ; date } ; year } ; 2007 }', 'tointer': 'the year record of the row with superlative date record is 2007 .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'argmin', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'label', '! k7'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; label ; ! k7 }', 'tointer': 'select the rows whose label record fuzzily matches to ! k7 .'}, 'date'], 'result': None, 'ind': 3, 'tostr': 'argmin { filter_eq { all_rows ; label ; ! k7 } ; date }'}, 'album'], 'result': 'stateless', 'ind': 6, 'tostr': 'hop { argmin { filter_eq { all_rows ; label ; ! k7 } ; date } ; album }'}, 'stateless'], 'result': True, 'ind': 7, 'tostr': 'eq { hop { argmin { filter_eq { all_rows ; label ; ! k7 } ; date } ; album } ; stateless }', 'tointer': 'the album record of the row with superlative date record is stateless .'}], 'result': True, 'ind': 8, 'tostr': 'and { eq { hop { argmin { filter_eq { all_rows ; label ; ! k7 } ; date } ; year } ; 2007 } ; eq { hop { argmin { filter_eq { all_rows ; label ; ! k7 } ; date } ; album } ; stateless } }', 'tointer': 'the year record of the row with superlative date record is 2007 . the album record of the row with superlative date record is stateless .'}], 'result': True, 'ind': 9, 'tostr': 'and { eq { min { filter_eq { all_rows ; label ; ! k7 } ; date } ; 14 may } ; and { eq { hop { argmin { filter_eq { all_rows ; label ; ! k7 } ; date } ; year } ; 2007 } ; eq { hop { argmin { filter_eq { all_rows ; label ; ! k7 } ; date } ; album } ; stateless } } } = true', 'tointer': 'select the rows whose label record fuzzily matches to ! k7 . the minimum date record of these rows is 14 may . the year record of the row with superlative date record is 2007 . the album record of the row with superlative date record is stateless .'} | and { eq { min { filter_eq { all_rows ; label ; ! k7 } ; date } ; 14 may } ; and { eq { hop { argmin { filter_eq { all_rows ; label ; ! k7 } ; date } ; year } ; 2007 } ; eq { hop { argmin { filter_eq { all_rows ; label ; ! k7 } ; date } ; album } ; stateless } } } = true | select the rows whose label record fuzzily matches to ! k7 . the minimum date record of these rows is 14 may . the year record of the row with superlative date record is 2007 . the album record of the row with superlative date record is stateless . | 13 | 10 | {'and_9': 9, 'result_10': 10, 'eq_2': 2, 'min_1': 1, 'filter_str_eq_0': 0, 'all_rows_11': 11, 'label_12': 12, '!k7_13': 13, 'date_14': 14, '14 may_15': 15, 'and_8': 8, 'eq_5': 5, 'num_hop_4': 4, 'argmin_3': 3, 'date_16': 16, 'year_17': 17, '2007_18': 18, 'str_eq_7': 7, 'str_hop_6': 6, 'album_19': 19, 'stateless_20': 20} | {'and_9': 'and', 'result_10': 'true', 'eq_2': 'eq', 'min_1': 'min', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_11': 'all_rows', 'label_12': 'label', '!k7_13': '! k7', 'date_14': 'date', '14 may_15': '14 may', 'and_8': 'and', 'eq_5': 'eq', 'num_hop_4': 'num_hop', 'argmin_3': 'argmin', 'date_16': 'date', 'year_17': 'year', '2007_18': '2007', 'str_eq_7': 'str_eq', 'str_hop_6': 'str_hop', 'album_19': 'album', 'stateless_20': 'stateless'} | {'and_9': [10], 'result_10': [], 'eq_2': [9], 'min_1': [2], 'filter_str_eq_0': [1, 3], 'all_rows_11': [0], 'label_12': [0], '!k7_13': [0], 'date_14': [1], '14 may_15': [2], 'and_8': [9], 'eq_5': [8], 'num_hop_4': [5], 'argmin_3': [4, 6], 'date_16': [3], 'year_17': [4], '2007_18': [5], 'str_eq_7': [8], 'str_hop_6': [7], 'album_19': [6], 'stateless_20': [7]} | ['year', 'date', 'album', 'label', 'format ( s )'] | [['2004', '5 april', 'stateless', 'sony music', '7 , cds'], ['2007', '14 may', 'stateless', '! k7', '7'], ['2007', '30 july', 'stateless', '! k7', '12 , cds'], ['2007', '29 october', 'stateless', '! k7', '12 , cds'], ['2008', '20 july', 'single - only', 'first word excursions', '7'], ['2010', '22 november', 'matilda', 'ninja tune', 'digital'], ['2011', '14 february', 'matilda', 'ninja tune', 'digital']]
list of tallest buildings in mobile | https://en.wikipedia.org/wiki/List_of_tallest_buildings_in_Mobile | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-17961233-2.html.csv | aggregation | the average height in feet of the tallest buildings in mobile is 325.4 feet . | {'scope': 'all', 'col': '4', 'type': 'average', 'result': '325.4', 'subset': None} | {'func': 'round_eq', 'args': [{'func': 'avg', 'args': ['all_rows', 'height ft ( m )'], 'result': '325.4', 'ind': 0, 'tostr': 'avg { all_rows ; height ft ( m ) }'}, '325.4'], 'result': True, 'ind': 1, 'tostr': 'round_eq { avg { all_rows ; height ft ( m ) } ; 325.4 } = true', 'tointer': 'the average of the height ft ( m ) record of all rows is 325.4 .'} | round_eq { avg { all_rows ; height ft ( m ) } ; 325.4 } = true | the average of the height ft ( m ) record of all rows is 325.4 . | 2 | 2 | {'eq_1': 1, 'result_2': 2, 'avg_0': 0, 'all_rows_3': 3, 'height ft (m)_4': 4, '325.4_5': 5} | {'eq_1': 'eq', 'result_2': 'true', 'avg_0': 'avg', 'all_rows_3': 'all_rows', 'height ft (m)_4': 'height ft ( m )', '325.4_5': '325.4'} | {'eq_1': [2], 'result_2': [], 'avg_0': [1], 'all_rows_3': [0], 'height ft (m)_4': [0], '325.4_5': [1]} | ['name', 'street address', 'years as tallest', 'height ft ( m )', 'floors'] | [['cathedral basilica of the immaculate conception', '400 government street', '1850 - 1907', '102 ( 31 )', '2'], ['van antwerp building', '103 dauphin street', '1907 - 1929', '120 ( 37 )', '11'], ['regions bank building', '56 saint joseph street', '1929 - 1965', '236 ( 72 )', '18'], ['rsa - banktrust building', '107 saint francis street', '1965 - 2006', '424 ( 129 )', '34'], ['rsa battle house tower', '11 north water street', '2006 - present', '745 ( 227 )', '35']] |
list of how it 's made episodes | https://en.wikipedia.org/wiki/List_of_How_It%27s_Made_episodes | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-15187735-12.html.csv | superlative | of the how it 's made episodes , the most recent episode was the one where segment c was goat cheese . | {'scope': 'all', 'col_superlative': '2', 'row_superlative': '12', 'value_mentioned': 'no', 'max_or_min': 'max', 'other_col': '6', 'subset': None} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'argmax', 'args': ['all_rows', 'episode'], 'result': None, 'ind': 0, 'tostr': 'argmax { all_rows ; episode }'}, 'segment c'], 'result': 'goat cheese', 'ind': 1, 'tostr': 'hop { argmax { all_rows ; episode } ; segment c }'}, 'goat cheese'], 'result': True, 'ind': 2, 'tostr': 'eq { hop { argmax { all_rows ; episode } ; segment c } ; goat cheese } = true', 'tointer': 'select the row whose episode record of all rows is maximum . the segment c record of this row is goat cheese .'} | eq { hop { argmax { all_rows ; episode } ; segment c } ; goat cheese } = true | select the row whose episode record of all rows is maximum . the segment c record of this row is goat cheese . | 3 | 3 | {'str_eq_2': 2, 'result_3': 3, 'str_hop_1': 1, 'argmax_0': 0, 'all_rows_4': 4, 'episode_5': 5, 'segment c_6': 6, 'goat cheese_7': 7} | {'str_eq_2': 'str_eq', 'result_3': 'true', 'str_hop_1': 'str_hop', 'argmax_0': 'argmax', 'all_rows_4': 'all_rows', 'episode_5': 'episode', 'segment c_6': 'segment c', 'goat cheese_7': 'goat cheese'} | {'str_eq_2': [3], 'result_3': [], 'str_hop_1': [2], 'argmax_0': [1], 'all_rows_4': [0], 'episode_5': [0], 'segment c_6': [1], 'goat cheese_7': [2]} | ['series ep', 'episode', 'netflix', 'segment a', 'segment b', 'segment c', 'segment d'] | [['12 - 01', '144', 's06e14', 'pneumatic impact wrenches', 'cultured marble sinks', 'plantain chips', 'nascar stock cars'], ['12 - 02', '145', 's06e15', 'jaws of life', 'artificial christmas trees', 'soda crackers', 'ratchets'], ['12 - 03', '146', 's06e16', 's thermometer', 'produce scales', 'aircraft painting', 'luxury s chocolate'], ['12 - 04', '147', 's06e17', 'carburetors', 'air conditioners', 'sugar ( part 1 )', 'sugar ( part 2 )'], ['12 - 05', '148', 's06e18', 'combination wrenches', 'deli meats', 'golf cars', 'airships'], ['12 - 06', '149', 's06e19', 'carbon fibre car parts', 'hand dryers', 'recycled polyester yarn', 'fleece'], ['12 - 07', '150', 's06e20', 'police badges', 'muffins', 'car washes', 'pressure gauges'], ['12 - 08', '151', 's06e21', 'metal detectors', 'rum', 'tiffany reproductions', 'aircraft engines'], ['12 - 09', '152', 's06e22', 'riding mowers', 'popcorn', 'adjustable beds', 'cultured diamonds'], ['12 - 10', '153', 's06e23', 'airstream trailers', 'horseradish', 'industrial steam s boiler', 'deodorant'], ['12 - 11', '154', 's06e24', 's screwdriver', 'compact track loaders', 'physician scales', 'carbon fibre bats'], ['12 - 12', '155', 's06e25', 's escalator', 'kevlar s canoe', 'goat cheese', 'disc music boxes']] |
new york state election , 1934 | https://en.wikipedia.org/wiki/New_York_state_election%2C_1934 | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-15563403-1.html.csv | comparative | frederick e crane was the chief judge in both the democratic and republican ticket of the 1934 new york state elections . | {'row_1': '5', 'row_2': '5', 'col': '2', 'col_other': '3', 'relation': 'equal', 'record_mentioned': 'yes', 'diff_result': None} | {'func': 'and', 'args': [{'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'republican ticket', 'frederick e crane'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose republican ticket record fuzzily matches to frederick e crane .', 'tostr': 'filter_eq { all_rows ; republican ticket ; frederick e crane }'}, 'democratic ticket'], 'result': None, 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; republican ticket ; frederick e crane } ; democratic ticket }', 'tointer': 'select the rows whose republican ticket record fuzzily matches to frederick e crane . take the democratic ticket record of this row .'}, {'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'republican ticket', 'frederick e crane'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose republican ticket record fuzzily matches to frederick e crane .', 'tostr': 'filter_eq { all_rows ; republican ticket ; frederick e crane }'}, 'democratic ticket'], 'result': None, 'ind': 3, 'tostr': 'hop { filter_eq { all_rows ; republican ticket ; frederick e crane } ; democratic ticket }', 'tointer': 'select the rows whose republican ticket record fuzzily matches to frederick e crane . take the democratic ticket record of this row .'}], 'result': True, 'ind': 4, 'tostr': 'eq { hop { filter_eq { all_rows ; republican ticket ; frederick e crane } ; democratic ticket } ; hop { filter_eq { all_rows ; republican ticket ; frederick e crane } ; democratic ticket } }', 'tointer': 'select the rows whose republican ticket record fuzzily matches to frederick e crane . take the democratic ticket record of this row . select the rows whose republican ticket record fuzzily matches to frederick e crane . take the democratic ticket record of this row . the first record fuzzily matches to the second record .'}, {'func': 'and', 'args': [{'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'republican ticket', 'frederick e crane'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose republican ticket record fuzzily matches to frederick e crane .', 'tostr': 'filter_eq { all_rows ; republican ticket ; frederick e crane }'}, 'democratic ticket'], 'result': None, 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; republican ticket ; frederick e crane } ; democratic ticket }', 'tointer': 'select the rows whose republican ticket record fuzzily matches to frederick e crane . take the democratic ticket record of this row .'}, 'frederick e crane'], 'result': True, 'ind': 5, 'tostr': 'eq { hop { filter_eq { all_rows ; republican ticket ; frederick e crane } ; democratic ticket } ; frederick e crane }', 'tointer': 'the democratic ticket record of the first row is frederick e crane .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'republican ticket', 'frederick e crane'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose republican ticket record fuzzily matches to frederick e crane .', 'tostr': 'filter_eq { all_rows ; republican ticket ; frederick e crane }'}, 'democratic ticket'], 'result': None, 'ind': 3, 'tostr': 'hop { filter_eq { all_rows ; republican ticket ; frederick e crane } ; democratic ticket }', 'tointer': 'select the rows whose republican ticket record fuzzily matches to frederick e crane . take the democratic ticket record of this row .'}, 'frederick e crane'], 'result': True, 'ind': 6, 'tostr': 'eq { hop { filter_eq { all_rows ; republican ticket ; frederick e crane } ; democratic ticket } ; frederick e crane }', 'tointer': 'the democratic ticket record of the second row is frederick e crane .'}], 'result': True, 'ind': 7, 'tostr': 'and { eq { hop { filter_eq { all_rows ; republican ticket ; frederick e crane } ; democratic ticket } ; frederick e crane } ; eq { hop { filter_eq { all_rows ; republican ticket ; frederick e crane } ; democratic ticket } ; frederick e crane } }', 'tointer': 'the democratic ticket record of the first row is frederick e crane . the democratic ticket record of the second row is frederick e crane .'}], 'result': True, 'ind': 8, 'tostr': 'and { eq { hop { filter_eq { all_rows ; republican ticket ; frederick e crane } ; democratic ticket } ; hop { filter_eq { all_rows ; republican ticket ; frederick e crane } ; democratic ticket } } ; and { eq { hop { filter_eq { all_rows ; republican ticket ; frederick e crane } ; democratic ticket } ; frederick e crane } ; eq { hop { filter_eq { all_rows ; republican ticket ; frederick e crane } ; democratic ticket } ; frederick e crane } } } = true', 'tointer': 'select the rows whose republican ticket record fuzzily matches to frederick e crane . take the democratic ticket record of this row . select the rows whose republican ticket record fuzzily matches to frederick e crane . take the democratic ticket record of this row . the first record fuzzily matches to the second record . the democratic ticket record of the first row is frederick e crane . the democratic ticket record of the second row is frederick e crane .'} | and { eq { hop { filter_eq { all_rows ; republican ticket ; frederick e crane } ; democratic ticket } ; hop { filter_eq { all_rows ; republican ticket ; frederick e crane } ; democratic ticket } } ; and { eq { hop { filter_eq { all_rows ; republican ticket ; frederick e crane } ; democratic ticket } ; frederick e crane } ; eq { hop { filter_eq { all_rows ; republican ticket ; frederick e crane } ; democratic ticket } ; frederick e crane } } } = true | select the rows whose republican ticket record fuzzily matches to frederick e crane . take the democratic ticket record of this row . select the rows whose republican ticket record fuzzily matches to frederick e crane . take the democratic ticket record of this row . the first record fuzzily matches to the second record . the democratic ticket record of the first row is frederick e crane . the democratic ticket record of the second row is frederick e crane . | 13 | 9 | {'and_8': 8, 'result_9': 9, 'str_eq_4': 4, 'str_hop_2': 2, 'filter_str_eq_0': 0, 'all_rows_10': 10, 'republican ticket_11': 11, 'frederick e crane_12': 12, 'democratic ticket_13': 13, 'str_hop_3': 3, 'filter_str_eq_1': 1, 'all_rows_14': 14, 'republican ticket_15': 15, 'frederick e crane_16': 16, 'democratic ticket_17': 17, 'and_7': 7, 'str_eq_5': 5, 'frederick e crane_18': 18, 'str_eq_6': 6, 'frederick e crane_19': 19} | {'and_8': 'and', 'result_9': 'true', 'str_eq_4': 'str_eq', 'str_hop_2': 'str_hop', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_10': 'all_rows', 'republican ticket_11': 'republican ticket', 'frederick e crane_12': 'frederick e crane', 'democratic ticket_13': 'democratic ticket', 'str_hop_3': 'str_hop', 'filter_str_eq_1': 'filter_str_eq', 'all_rows_14': 'all_rows', 'republican ticket_15': 'republican ticket', 'frederick e crane_16': 'frederick e crane', 'democratic ticket_17': 'democratic ticket', 'and_7': 'and', 'str_eq_5': 'str_eq', 'frederick e crane_18': 'frederick e crane', 'str_eq_6': 'str_eq', 'frederick e crane_19': 'frederick e crane'} | {'and_8': [9], 'result_9': [], 'str_eq_4': [8], 'str_hop_2': [4, 5], 'filter_str_eq_0': [2], 'all_rows_10': [0], 'republican ticket_11': [0], 'frederick e crane_12': [0], 'democratic ticket_13': [2], 'str_hop_3': [4, 6], 'filter_str_eq_1': [3], 'all_rows_14': [1], 'republican ticket_15': [1], 'frederick e crane_16': [1], 'democratic ticket_17': [3], 'and_7': [8], 'str_eq_5': [7], 'frederick e crane_18': [5], 'str_eq_6': [7], 'frederick e crane_19': [6]} | ['office', 'democratic ticket', 'republican ticket', 'socialist ticket', 'constitutional ticket', 'law preservation ticket'] | [['governor', 'herbert h lehman', 'robert moses', 'charles solomon', '( none )', 'william f varney'], ['lieutenant governor', 'm william bray', 'fred j douglas', 'herman kobbe', '( none )', 'james f luckey'], ['comptroller', 'morris s tremaine', 'wilson r campbell', 'fred sander', '( none )', 'fred c foster'], ['attorney general', 'john j bennett , jr', 'william t powers', 'william karlin', '( none )', 'joseph s robinson'], ['chief judge', 'frederick e crane', 'frederick e crane', 'jacob hillquit', '( none )', 'frederick e crane'], ['judge of the court of appeals', 'john t loughran', 'john t loughran', 'darwin j meserole', '( none )', 'john t loughran'], ['judge of the court of appeals', 'edward r finch', 'charles b sears', 'julian h weiss', '( none )', 'david e hartshorn'], ['us senator', 'royal s copeland', 'e harold cluett', 'norman thomas', 'henry breckinridge', 'william sheafe chase'], ['us representative - at - large', 'matthew j merritt', 'william b groat , jr', 'charles w noonan', '( none )', 'william e barron'], ['us representative - at - large', "caroline o'day", 'natalie f couch', 'august claessens', '( none )', 'dorothy frooks']]
metro baguio | https://en.wikipedia.org/wiki/Metro_Baguio | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-29289372-1.html.csv | aggregation | the average area of all cities or municipalities is 198.448 . | {'scope': 'all', 'col': '3', 'type': 'average', 'result': '198.448', 'subset': None} | {'func': 'round_eq', 'args': [{'func': 'avg', 'args': ['all_rows', 'area ( km square )'], 'result': '198.448', 'ind': 0, 'tostr': 'avg { all_rows ; area ( km square ) }'}, '198.448'], 'result': True, 'ind': 1, 'tostr': 'round_eq { avg { all_rows ; area ( km square ) } ; 198.448 } = true', 'tointer': 'the average of the area ( km square ) record of all rows is 198.448 .'} | round_eq { avg { all_rows ; area ( km square ) } ; 198.448 } = true | the average of the area ( km square ) record of all rows is 198.448 . | 2 | 2 | {'eq_1': 1, 'result_2': 2, 'avg_0': 0, 'all_rows_3': 3, 'area (km square)_4': 4, '198.448_5': 5} | {'eq_1': 'eq', 'result_2': 'true', 'avg_0': 'avg', 'all_rows_3': 'all_rows', 'area (km square)_4': 'area ( km square )', '198.448_5': '198.448'} | {'eq_1': [2], 'result_2': [], 'avg_0': [1], 'all_rows_3': [0], 'area (km square)_4': [0], '198.448_5': [1]} | ['city / municipality', 'population ( 2010 )', 'area ( km square )', 'pop density ( per km square )', 'income classification'] | [['baguio city', '318676', '57.5', 'category : pages with bad rounding precision', '1st class'], ['la trinidad , benguet', '107188', '82.74', 'category : pages with bad rounding precision', '1st class'], ['itogon , benguet', '55960', '450', 'category : pages with bad rounding precision', '1st class'], ['sablan , benguet', '10511', '106', 'category : pages with bad rounding precision', '5th class'], ['tuba , benguet', '42874', '296', 'category : pages with bad rounding precision', '1st class']] |
made ( tv series ) | https://en.wikipedia.org/wiki/Made_%28TV_series%29 | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-2140071-5.html.csv | count | four episodes of made season 5 premiered in 2004 . | {'scope': 'all', 'criterion': 'fuzzily_match', 'value': '2004', 'result': '4', 'col': '4', 'subset': None} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'premier date', '2004'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose premier date record fuzzily matches to 2004 .', 'tostr': 'filter_eq { all_rows ; premier date ; 2004 }'}], 'result': '4', 'ind': 1, 'tostr': 'count { filter_eq { all_rows ; premier date ; 2004 } }', 'tointer': 'select the rows whose premier date record fuzzily matches to 2004 . the number of such rows is 4 .'}, '4'], 'result': True, 'ind': 2, 'tostr': 'eq { count { filter_eq { all_rows ; premier date ; 2004 } } ; 4 } = true', 'tointer': 'select the rows whose premier date record fuzzily matches to 2004 . the number of such rows is 4 .'} | eq { count { filter_eq { all_rows ; premier date ; 2004 } } ; 4 } = true | select the rows whose premier date record fuzzily matches to 2004 . the number of such rows is 4 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'count_1': 1, 'filter_str_eq_0': 0, 'all_rows_4': 4, 'premier date_5': 5, '2004_6': 6, '4_7': 7} | {'eq_2': 'eq', 'result_3': 'true', 'count_1': 'count', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_4': 'all_rows', 'premier date_5': 'premier date', '2004_6': '2004', '4_7': '4'} | {'eq_2': [3], 'result_3': [], 'count_1': [2], 'filter_str_eq_0': [1], 'all_rows_4': [0], 'premier date_5': [0], '2004_6': [0], '4_7': [2]} | ['season', 'episode', 'episode summary', 'premier date', 'external link', 'coach'] | [['5', '1', 'selena is made into a surfer chick', 'october 6 , 2004', 'episode summary', 'brad'], ['5', '2', 'richard is made into boyfriend material', 'october 13 , 2004', 'episode summary', 'samantha house'], ['5', '3', 'abby is made into a hip hop dancer', 'october 20 , 2004', 'episode summary', 'cedric crowe'], ['5', '5', 'jackie is made into a talent show chowder', 'december 30 , 2004', 'episode summary', 'brian'], ['5', '6', 'krystle is made into miss junior', 'january 6 , 2005', 'episode summary', 'ceylone boothe - grooms'], ['5', '7', 'dov is made into a wrestler', 'january 13 , 2005', 'episode summary', 'gene mills , kurt angle'], ['5', '8', 'anna is made into a leading lady', 'january 20 , 2005', 'episode summary', "john o'connell"], ['5', '9', 'lawryn is made into a bmx biker', 'january 27 , 2005', 'episode summary', 'warwick stevenson'], ['5', '10', 'mack is made into a ballet dancer', 'february 3 , 2005', 'episode summary', 'christopher fleming'], ['5', '11', 'ian is made into a salsa dancer', 'february 10 , 2005', 'episode summary', 'shaun perry']] |
1940 in brazilian football | https://en.wikipedia.org/wiki/1940_in_Brazilian_football | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-15349635-2.html.csv | unique | portuguesa was the only team with thirty points . | {'scope': 'all', 'row': '2', 'col': '3', 'col_other': '2', 'criterion': 'equal', 'value': '30', 'subset': None} | {'func': 'and', 'args': [{'func': 'only', 'args': [{'func': 'filter_eq', 'args': ['all_rows', 'points', '30'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose points record is equal to 30 .', 'tostr': 'filter_eq { all_rows ; points ; 30 }'}], 'result': True, 'ind': 1, 'tostr': 'only { filter_eq { all_rows ; points ; 30 } }', 'tointer': 'select the rows whose points record is equal to 30 . there is only one such row in the table .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_eq', 'args': ['all_rows', 'points', '30'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose points record is equal to 30 .', 'tostr': 'filter_eq { all_rows ; points ; 30 }'}, 'team'], 'result': 'portuguesa', 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; points ; 30 } ; team }'}, 'portuguesa'], 'result': True, 'ind': 3, 'tostr': 'eq { hop { filter_eq { all_rows ; points ; 30 } ; team } ; portuguesa }', 'tointer': 'the team record of this unqiue row is portuguesa .'}], 'result': True, 'ind': 4, 'tostr': 'and { only { filter_eq { all_rows ; points ; 30 } } ; eq { hop { filter_eq { all_rows ; points ; 30 } ; team } ; portuguesa } } = true', 'tointer': 'select the rows whose points record is equal to 30 . there is only one such row in the table . the team record of this unqiue row is portuguesa .'} | and { only { filter_eq { all_rows ; points ; 30 } } ; eq { hop { filter_eq { all_rows ; points ; 30 } ; team } ; portuguesa } } = true | select the rows whose points record is equal to 30 . there is only one such row in the table . the team record of this unqiue row is portuguesa . | 6 | 5 | {'and_4': 4, 'result_5': 5, 'only_1': 1, 'filter_eq_0': 0, 'all_rows_6': 6, 'points_7': 7, '30_8': 8, 'str_eq_3': 3, 'str_hop_2': 2, 'team_9': 9, 'portuguesa_10': 10} | {'and_4': 'and', 'result_5': 'true', 'only_1': 'only', 'filter_eq_0': 'filter_eq', 'all_rows_6': 'all_rows', 'points_7': 'points', '30_8': '30', 'str_eq_3': 'str_eq', 'str_hop_2': 'str_hop', 'team_9': 'team', 'portuguesa_10': 'portuguesa'} | {'and_4': [5], 'result_5': [], 'only_1': [4], 'filter_eq_0': [1, 2], 'all_rows_6': [0], 'points_7': [0], '30_8': [0], 'str_eq_3': [4], 'str_hop_2': [3], 'team_9': [2], 'portuguesa_10': [3]} | ['position', 'team', 'points', 'played', 'drawn', 'lost', 'against', 'difference'] | [['1', 'palestra itália - sp', '33', '20', '3', '2', '19', '34'], ['2', 'portuguesa', '30', '20', '4', '3', '24', '22'], ['3', 'ypiranga - sp', '27', '20', '1', '6', '37', '19'], ['4', 'corinthians', '26', '20', '2', '6', '31', '23'], ['5', 'portuguesa santista', '25', '20', '3', '6', '40', '13'], ['6', 'são paulo', '19', '20', '1', '10', '41', '1'], ['7', 'santos', '18', '20', '4', '9', '49', '2'], ['8', 'são paulo railway', '16', '20', '6', '9', '50', '- 6'], ['9', 'hespanha', '10', '20', '0', '15', '47', '- 22'], ['10', 'comercial - sp', '9', '20', '3', '14', '72', '- 47'], ['11', 'juventus', '7', '20', '1', '16', '68', '- 39']] |
2007 - 08 atlanta hawks season | https://en.wikipedia.org/wiki/2007%E2%80%9308_Atlanta_Hawks_season | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-11961582-10.html.csv | unique | the game on may 2nd was the only game where m. williams had high points . | {'scope': 'all', 'row': '6', 'col': '5', 'col_other': '2', 'criterion': 'fuzzily_match', 'value': 'm williams', 'subset': None} | {'func': 'and', 'args': [{'func': 'only', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'high points', 'm williams'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose high points record fuzzily matches to m williams .', 'tostr': 'filter_eq { all_rows ; high points ; m williams }'}], 'result': True, 'ind': 1, 'tostr': 'only { filter_eq { all_rows ; high points ; m williams } }', 'tointer': 'select the rows whose high points record fuzzily matches to m williams . there is only one such row in the table .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'high points', 'm williams'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose high points record fuzzily matches to m williams .', 'tostr': 'filter_eq { all_rows ; high points ; m williams }'}, 'date'], 'result': 'may 2', 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; high points ; m williams } ; date }'}, 'may 2'], 'result': True, 'ind': 3, 'tostr': 'eq { hop { filter_eq { all_rows ; high points ; m williams } ; date } ; may 2 }', 'tointer': 'the date record of this unqiue row is may 2 .'}], 'result': True, 'ind': 4, 'tostr': 'and { only { filter_eq { all_rows ; high points ; m williams } } ; eq { hop { filter_eq { all_rows ; high points ; m williams } ; date } ; may 2 } } = true', 'tointer': 'select the rows whose high points record fuzzily matches to m williams . there is only one such row in the table . the date record of this unqiue row is may 2 .'} | and { only { filter_eq { all_rows ; high points ; m williams } } ; eq { hop { filter_eq { all_rows ; high points ; m williams } ; date } ; may 2 } } = true | select the rows whose high points record fuzzily matches to m williams . there is only one such row in the table . the date record of this unqiue row is may 2 . | 6 | 5 | {'and_4': 4, 'result_5': 5, 'only_1': 1, 'filter_str_eq_0': 0, 'all_rows_6': 6, 'high points_7': 7, 'm williams_8': 8, 'str_eq_3': 3, 'str_hop_2': 2, 'date_9': 9, 'may 2_10': 10} | {'and_4': 'and', 'result_5': 'true', 'only_1': 'only', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_6': 'all_rows', 'high points_7': 'high points', 'm williams_8': 'm williams', 'str_eq_3': 'str_eq', 'str_hop_2': 'str_hop', 'date_9': 'date', 'may 2_10': 'may 2'} | {'and_4': [5], 'result_5': [], 'only_1': [4], 'filter_str_eq_0': [1, 2], 'all_rows_6': [0], 'high points_7': [0], 'm williams_8': [0], 'str_eq_3': [4], 'str_hop_2': [3], 'date_9': [2], 'may 2_10': [3]} | ['game', 'date', 'team', 'score', 'high points', 'high rebounds', 'high assists', 'location attendance', 'series'] | [['1', 'april 20', 'boston', '81 - 104', 'a horford ( 20 )', 'a horford ( 10 )', 'j johnson ( 7 )', 'td banknorth garden 18624', '0 - 1'], ['2', 'april 23', 'boston', '77 - 96', 'two - way tie ( 13 )', 'a horford ( 9 )', 'two - way tie ( 3 )', 'td banknorth garden 18624', '0 - 2'], ['3', 'april 26', 'boston', '102 - 93', 'j smith ( 27 )', 'a horford ( 10 )', 'm bibby ( 8 )', 'philips arena 19725', '1 - 2'], ['4', 'april 28', 'boston', '97 - 92', 'j johnson ( 35 )', 'a horford ( 13 )', 'j johnson ( 6 )', 'philips arena 20016', '2 - 2'], ['5', 'april 30', 'boston', '85 - 110', 'j johnson ( 21 )', 'a horford ( 10 )', 'a horford ( 5 )', 'td banknorth garden 18624', '2 - 3'], ['6', 'may 2', 'boston', '103 - 100', 'm williams ( 18 )', 'four - way tie ( 6 )', 'm bibby ( 7 )', 'philips arena 20425', '3 - 3'], ['7', 'may 4', 'boston', '99 - 65', 'j johnson ( 16 )', 'a horford ( 12 )', 'a horford ( 3 )', 'td banknorth garden 18624', '3 - 4']]
athletics at the 2008 summer olympics - women 's 100 metres hurdles | https://en.wikipedia.org/wiki/Athletics_at_the_2008_Summer_Olympics_%E2%80%93_Women%27s_100_metres_hurdles | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-18578883-3.html.csv | count | among athlets from united states at the 2008 summer olympics - women 's 100 metres hurdles , 2 of them had their results higher than 12.60 . | {'scope': 'subset', 'criterion': 'greater_than', 'value': '12.6', 'result': '2', 'col': '5', 'subset': {'col': '3', 'criterion': 'equal', 'value': 'united states'}} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_greater', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'nationality', 'united states'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; nationality ; united states }', 'tointer': 'select the rows whose nationality record fuzzily matches to united states .'}, 'result', '12.6'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose nationality record fuzzily matches to united states . among these rows , select the rows whose result record is greater than 12.6 .', 'tostr': 'filter_greater { filter_eq { all_rows ; nationality ; united states } ; result ; 12.6 }'}], 'result': '2', 'ind': 2, 'tostr': 'count { filter_greater { filter_eq { all_rows ; nationality ; united states } ; result ; 12.6 } }', 'tointer': 'select the rows whose nationality record fuzzily matches to united states . among these rows , select the rows whose result record is greater than 12.6 . the number of such rows is 2 .'}, '2'], 'result': True, 'ind': 3, 'tostr': 'eq { count { filter_greater { filter_eq { all_rows ; nationality ; united states } ; result ; 12.6 } } ; 2 } = true', 'tointer': 'select the rows whose nationality record fuzzily matches to united states . among these rows , select the rows whose result record is greater than 12.6 . the number of such rows is 2 .'} | eq { count { filter_greater { filter_eq { all_rows ; nationality ; united states } ; result ; 12.6 } } ; 2 } = true | select the rows whose nationality record fuzzily matches to united states . among these rows , select the rows whose result record is greater than 12.6 . the number of such rows is 2 . | 4 | 4 | {'eq_3': 3, 'result_4': 4, 'count_2': 2, 'filter_greater_1': 1, 'filter_str_eq_0': 0, 'all_rows_5': 5, 'nationality_6': 6, 'united states_7': 7, 'result_8': 8, '12.6_9': 9, '2_10': 10} | {'eq_3': 'eq', 'result_4': 'true', 'count_2': 'count', 'filter_greater_1': 'filter_greater', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_5': 'all_rows', 'nationality_6': 'nationality', 'united states_7': 'united states', 'result_8': 'result', '12.6_9': '12.6', '2_10': '2'} | {'eq_3': [4], 'result_4': [], 'count_2': [3], 'filter_greater_1': [2], 'filter_str_eq_0': [1], 'all_rows_5': [0], 'nationality_6': [0], 'united states_7': [0], 'result_8': [1], '12.6_9': [1], '2_10': [3]} | ['heat', 'name', 'nationality', 'reaction', 'result'] | [['1', 'lolo jones', 'united states', '0.172', '12.43'], ['2', 'damu cherry', 'united states', '0.189', '12.62'], ['2', 'dawn harper', 'united states', '0.191', '12.66'], ['1', 'delloreen ennis - london', 'jamaica', '0.145', '12.67'], ['1', 'priscilla lopes - schliep', 'canada', '0.159', '12.68'], ['1', 'sally mclellan', 'australia', '0.140', '12.70'], ['2', 'brigitte foster - hylton', 'jamaica', '0.162', '12.76'], ['2', 'sarah claxton', 'great britain', '0.145', '12.84'], ['2', 'vonette dixon', 'jamaica', '0.237', '12.86'], ['1', 'josephine onyia', 'spain', '0.203', '12.86'], ['1', 'aurelia trywiańska - kollasch', 'poland', '0.118', '12.96'], ['1', 'carolin nytra', 'germany', '0.144', '12.99'], ['2', 'reïna - flor okori', 'france', '0.153', '13.05'], ['1', 'nevin yanit', 'turkey', '0.201', '13.28'], ['2', 'susanna kallur', 'sweden', '0.198', 'dnf'], ['2', 'anay tejeda', 'cuba', '0.156', 'dnf']]
zlatan ljubijankić | https://en.wikipedia.org/wiki/Zlatan_Ljubijanki%C4%87 | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-17368230-1.html.csv | aggregation | for zlatan ljubijankic the matches that were friendly competition the total combined result was 4 . | {'scope': 'subset', 'col': '4', 'type': 'sum', 'result': '4', 'subset': {'col': '5', 'criterion': 'equal', 'value': 'friendly'}} | {'func': 'round_eq', 'args': [{'func': 'sum', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'competition', 'friendly'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; competition ; friendly }', 'tointer': 'select the rows whose competition record fuzzily matches to friendly .'}, 'result'], 'result': '4', 'ind': 1, 'tostr': 'sum { filter_eq { all_rows ; competition ; friendly } ; result }'}, '4'], 'result': True, 'ind': 2, 'tostr': 'round_eq { sum { filter_eq { all_rows ; competition ; friendly } ; result } ; 4 } = true', 'tointer': 'select the rows whose competition record fuzzily matches to friendly . the sum of the result record of these rows is 4 .'} | round_eq { sum { filter_eq { all_rows ; competition ; friendly } ; result } ; 4 } = true | select the rows whose competition record fuzzily matches to friendly . the sum of the result record of these rows is 4 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'sum_1': 1, 'filter_str_eq_0': 0, 'all_rows_4': 4, 'competition_5': 5, 'friendly_6': 6, 'result_7': 7, '4_8': 8} | {'eq_2': 'eq', 'result_3': 'true', 'sum_1': 'sum', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_4': 'all_rows', 'competition_5': 'competition', 'friendly_6': 'friendly', 'result_7': 'result', '4_8': '4'} | {'eq_2': [3], 'result_3': [], 'sum_1': [2], 'filter_str_eq_0': [1], 'all_rows_4': [0], 'competition_5': [0], 'friendly_6': [0], 'result_7': [1], '4_8': [2]} | ['date', 'venue', 'score', 'result', 'competition'] | [['28 february 2006', 'gsz stadium , larnaka , cyprus', '1 - 0', '1 - 0', 'friendly'], ['11 october 2008', 'ljudski vrt , maribor , slovenia', '2 - 0', '2 - 0', '2010 world cup qualification'], ['12 august 2009', 'ljudski vrt , maribor , slovenia', '5 - 0', '5 - 0', '2010 world cup qualification'], ['5 september 2009', 'wembley , london , uk', '1 - 2', '1 - 2', 'friendly'], ['18 june 2010', 'ellis park stadium , johannesburg , south africa', '2 - 0', '2 - 2', '2010 fifa world cup'], ['11 august 2010', 'stožice stadium , ljubljana', '2 - 0', '2 - 0', 'friendly match']] |
whrz - lp | https://en.wikipedia.org/wiki/WHRZ-LP | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-11966193-1.html.csv | count | there are five different locations that can pick up radio station whrz - lp . | {'scope': 'all', 'criterion': 'all', 'value': 'n/a', 'result': '5', 'col': '3', 'subset': None} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_all', 'args': ['all_rows', 'city of license'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose city of license record is arbitrary .', 'tostr': 'filter_all { all_rows ; city of license }'}], 'result': '5', 'ind': 1, 'tostr': 'count { filter_all { all_rows ; city of license } }', 'tointer': 'select the rows whose city of license record is arbitrary . the number of such rows is 5 .'}, '5'], 'result': True, 'ind': 2, 'tostr': 'eq { count { filter_all { all_rows ; city of license } } ; 5 } = true', 'tointer': 'select the rows whose city of license record is arbitrary . the number of such rows is 5 .'} | eq { count { filter_all { all_rows ; city of license } } ; 5 } = true | select the rows whose city of license record is arbitrary . the number of such rows is 5 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'count_1': 1, 'filter_all_0': 0, 'all_rows_4': 4, 'city of license_5': 5, '5_6': 6} | {'eq_2': 'eq', 'result_3': 'true', 'count_1': 'count', 'filter_all_0': 'filter_all', 'all_rows_4': 'all_rows', 'city of license_5': 'city of license', '5_6': '5'} | {'eq_2': [3], 'result_3': [], 'count_1': [2], 'filter_all_0': [1], 'all_rows_4': [0], 'city of license_5': [0], '5_6': [2]} | ['call sign', 'frequency mhz', 'city of license', 'erp w', 'class', 'fcc info'] | [['w238aw', '95.5', 'west view , south carolina', '55', 'd', 'fcc'], ['w242bx', '96.3', 'greenville , south carolina', '100', 'd', 'fcc'], ['w289ao', '105.9', 'anderson , south carolina', '27', 'd', 'fcc'], ['w216bj', '91.1', 'wando , south carolina', '10', 'd', 'fcc'], ['w220cn', '91.9', 'charleston , south carolina', '10', 'd', 'fcc']] |
imperial vicar | https://en.wikipedia.org/wiki/Imperial_vicar | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-11071897-1.html.csv | comparative | the interregnum that began on the 20 february 1790 death of joseph ii lasted longer than the interregnum that began on the 1 march 1792 death of leopold ii . | {'row_1': '10', 'row_2': '11', 'col': '3', 'col_other': '1', 'relation': 'greater', 'record_mentioned': 'no', 'diff_result': None} | {'func': 'greater', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'interregnum began', '20 february 1790 death of joseph ii'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose interregnum began record fuzzily matches to 20 february 1790 death of joseph ii .', 'tostr': 'filter_eq { all_rows ; interregnum began ; 20 february 1790 death of joseph ii }'}, 'duration'], 'result': None, 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; interregnum began ; 20 february 1790 death of joseph ii } ; duration }', 'tointer': 'select the rows whose interregnum began record fuzzily matches to 20 february 1790 death of joseph ii . take the duration record of this row .'}, {'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'interregnum began', '1 march 1792 death of leopold ii'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose interregnum began record fuzzily matches to 1 march 1792 death of leopold ii .', 'tostr': 'filter_eq { all_rows ; interregnum began ; 1 march 1792 death of leopold ii }'}, 'duration'], 'result': None, 'ind': 3, 'tostr': 'hop { filter_eq { all_rows ; interregnum began ; 1 march 1792 death of leopold ii } ; duration }', 'tointer': 'select the rows whose interregnum began record fuzzily matches to 1 march 1792 death of leopold ii . take the duration record of this row .'}], 'result': True, 'ind': 4, 'tostr': 'greater { hop { filter_eq { all_rows ; interregnum began ; 20 february 1790 death of joseph ii } ; duration } ; hop { filter_eq { all_rows ; interregnum began ; 1 march 1792 death of leopold ii } ; duration } } = true', 'tointer': 'select the rows whose interregnum began record fuzzily matches to 20 february 1790 death of joseph ii . take the duration record of this row . select the rows whose interregnum began record fuzzily matches to 1 march 1792 death of leopold ii . take the duration record of this row . the first record is greater than the second record .'} | greater { hop { filter_eq { all_rows ; interregnum began ; 20 february 1790 death of joseph ii } ; duration } ; hop { filter_eq { all_rows ; interregnum began ; 1 march 1792 death of leopold ii } ; duration } } = true | select the rows whose interregnum began record fuzzily matches to 20 february 1790 death of joseph ii . take the duration record of this row . select the rows whose interregnum began record fuzzily matches to 1 march 1792 death of leopold ii . take the duration record of this row . the first record is greater than the second record . | 5 | 5 | {'greater_4': 4, 'result_5': 5, 'str_hop_2': 2, 'filter_str_eq_0': 0, 'all_rows_6': 6, 'interregnum began_7': 7, '20 february 1790 death of joseph ii_8': 8, 'duration_9': 9, 'str_hop_3': 3, 'filter_str_eq_1': 1, 'all_rows_10': 10, 'interregnum began_11': 11, '1 march 1792 death of leopold ii_12': 12, 'duration_13': 13} | {'greater_4': 'greater', 'result_5': 'true', 'str_hop_2': 'str_hop', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_6': 'all_rows', 'interregnum began_7': 'interregnum began', '20 february 1790 death of joseph ii_8': '20 february 1790 death of joseph ii', 'duration_9': 'duration', 'str_hop_3': 'str_hop', 'filter_str_eq_1': 'filter_str_eq', 'all_rows_10': 'all_rows', 'interregnum began_11': 'interregnum began', '1 march 1792 death of leopold ii_12': '1 march 1792 death of leopold ii', 'duration_13': 'duration'} | {'greater_4': [5], 'result_5': [], 'str_hop_2': [4], 'filter_str_eq_0': [2], 'all_rows_6': [0], 'interregnum began_7': [0], '20 february 1790 death of joseph ii_8': [0], 'duration_9': [2], 'str_hop_3': [4], 'filter_str_eq_1': [3], 'all_rows_10': [1], 'interregnum began_11': [1], '1 march 1792 death of leopold ii_12': [1], 'duration_13': [3]} | ['interregnum began', 'interregnum ended', 'duration', 'count palatine of saxony', 'count palatine of the rhine'] | [['9 december 1437 death of sigismund', '18 march 1438 election of albert ii', '3 months , 9 days', 'frederick ii , elector of saxony', 'louis iv , elector palatine'], ['27 october 1439 death of albert ii', '2 february 1440 election of frederick iii', '3 months , 6 days', 'frederick ii , elector of saxony', 'louis iv , elector palatine'], ['12 january 1519 death of maximilian i', '17 june 1519 election of charles v', '5 months , 5 days', 'frederick iii , elector of saxony', 'louis v , elector palatine'], ['20 january 1612 death of rudolph ii', '13 june 1612 election of matthias', '4 months , 24 days', 'john george i , elector of saxony', 'frederick v , elector palatine'], ['20 march 1619 death of matthias', '28 august 1619 election of ferdinand ii', '5 months , 8 days', 'john george i , elector of saxony', 'frederick v , elector palatine'], ['2 april 1657 death of ferdinand iii', '18 july 1658 election of leopold i', '15 months , 16 days', 'john george ii , elector of saxony', 'ferdinand maria , elector of bavaria'], ['17 april 1711 death of joseph i', '12 october 1711 election of charles vi', '5 months , 25 days', 'frederick augustus i , elector of saxony', 'john william , elector palatine'], ['20 october 1740 death of charles vi', '14 january 1742 election of charles vii', '14 months , 25 days', 'frederick augustus ii , elector of saxony', 'charles albert , elector of bavaria'], ['20 january 1745 death of charles vii', '13 september 1745 election of francis i', '7 months , 24 days', 'frederick augustus ii , elector of saxony', 'maximilian iii , elector of bavaria'], ['20 february 1790 death of joseph ii', '30 september 1790 election of leopold ii', '7 months , 10 days', 'frederick augustus iii , elector of saxony', 'charles theodore , elector of bavaria'], ['1 march 1792 death of leopold ii', '5 july 1792 election of francis ii', '4 months , 4 days', 'frederick augustus iii , elector of saxony', 'charles theodore , elector of bavaria']]
34th united states congress | https://en.wikipedia.org/wiki/34th_United_States_Congress | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-2417308-3.html.csv | superlative | john parker hale was the earliest congressman of the 34th congress to be installed . | {'scope': 'all', 'col_superlative': '5', 'row_superlative': '1', 'value_mentioned': 'no', 'max_or_min': 'min', 'other_col': '4', 'subset': None} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'argmin', 'args': ['all_rows', 'date of successors formal installation'], 'result': None, 'ind': 0, 'tostr': 'argmin { all_rows ; date of successors formal installation }'}, 'successor'], 'result': 'john parker hale ( r )', 'ind': 1, 'tostr': 'hop { argmin { all_rows ; date of successors formal installation } ; successor }'}, 'john parker hale ( r )'], 'result': True, 'ind': 2, 'tostr': 'eq { hop { argmin { all_rows ; date of successors formal installation } ; successor } ; john parker hale ( r ) } = true', 'tointer': 'select the row whose date of successors formal installation record of all rows is minimum . the successor record of this row is john parker hale ( r ) .'} | eq { hop { argmin { all_rows ; date of successors formal installation } ; successor } ; john parker hale ( r ) } = true | select the row whose date of successors formal installation record of all rows is minimum . the successor record of this row is john parker hale ( r ) . | 3 | 3 | {'str_eq_2': 2, 'result_3': 3, 'str_hop_1': 1, 'argmin_0': 0, 'all_rows_4': 4, 'date of successors formal installation_5': 5, 'successor_6': 6, 'john parker hale (r)_7': 7} | {'str_eq_2': 'str_eq', 'result_3': 'true', 'str_hop_1': 'str_hop', 'argmin_0': 'argmin', 'all_rows_4': 'all_rows', 'date of successors formal installation_5': 'date of successors formal installation', 'successor_6': 'successor', 'john parker hale (r)_7': 'john parker hale ( r )'} | {'str_eq_2': [3], 'result_3': [], 'str_hop_1': [2], 'argmin_0': [1], 'all_rows_4': [0], 'date of successors formal installation_5': [0], 'successor_6': [1], 'john parker hale (r)_7': [2]} | ['state ( class )', 'vacator', 'reason for change', 'successor', 'date of successors formal installation'] | [['new hampshire ( 2 )', 'vacant', 'legislature failed to elect on time', 'john parker hale ( r )', 'july 30 , 1855'], ['alabama ( 3 )', 'vacant', 'legislature failed to elect on time', 'benjamin fitzpatrick ( d )', 'november 26 , 1855'], ['pennsylvania ( 3 )', 'vacant', 'legislature failed to elect on time', 'william bigler ( d )', 'january 14 , 1856'], ['california ( 3 )', 'vacant', 'legislature failed to elect on time', 'william m gwin ( d )', 'january 13 , 1857'], ['indiana ( 3 )', 'vacant', 'legislature failed to elect on time', 'graham n fitch ( d )', 'february 4 , 1857']] |
records of members of parliament of the united kingdom | https://en.wikipedia.org/wiki/Records_of_members_of_parliament_of_the_United_Kingdom | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-11921877-1.html.csv | comparative | the member of parliment that was born on 30 november 1874 left the house before the member born on 16 january 1905 . | {'row_1': '3', 'row_2': '9', 'col': '3', 'col_other': '1', 'relation': 'less', 'record_mentioned': 'no', 'diff_result': None} | {'func': 'less', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'born', '30 november 1874'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose born record fuzzily matches to 30 november 1874 .', 'tostr': 'filter_eq { all_rows ; born ; 30 november 1874 }'}, 'left house'], 'result': None, 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; born ; 30 november 1874 } ; left house }', 'tointer': 'select the rows whose born record fuzzily matches to 30 november 1874 . take the left house record of this row .'}, {'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'born', '16 january 1905'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose born record fuzzily matches to 16 january 1905 .', 'tostr': 'filter_eq { all_rows ; born ; 16 january 1905 }'}, 'left house'], 'result': None, 'ind': 3, 'tostr': 'hop { filter_eq { all_rows ; born ; 16 january 1905 } ; left house }', 'tointer': 'select the rows whose born record fuzzily matches to 16 january 1905 . take the left house record of this row .'}], 'result': True, 'ind': 4, 'tostr': 'less { hop { filter_eq { all_rows ; born ; 30 november 1874 } ; left house } ; hop { filter_eq { all_rows ; born ; 16 january 1905 } ; left house } } = true', 'tointer': 'select the rows whose born record fuzzily matches to 30 november 1874 . take the left house record of this row . select the rows whose born record fuzzily matches to 16 january 1905 . take the left house record of this row . the first record is less than the second record .'} | less { hop { filter_eq { all_rows ; born ; 30 november 1874 } ; left house } ; hop { filter_eq { all_rows ; born ; 16 january 1905 } ; left house } } = true | select the rows whose born record fuzzily matches to 30 november 1874 . take the left house record of this row . select the rows whose born record fuzzily matches to 16 january 1905 . take the left house record of this row . the first record is less than the second record . | 5 | 5 | {'less_4': 4, 'result_5': 5, 'str_hop_2': 2, 'filter_str_eq_0': 0, 'all_rows_6': 6, 'born_7': 7, '30 november 1874_8': 8, 'left house_9': 9, 'str_hop_3': 3, 'filter_str_eq_1': 1, 'all_rows_10': 10, 'born_11': 11, '16 january 1905_12': 12, 'left house_13': 13} | {'less_4': 'less', 'result_5': 'true', 'str_hop_2': 'str_hop', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_6': 'all_rows', 'born_7': 'born', '30 november 1874_8': '30 november 1874', 'left house_9': 'left house', 'str_hop_3': 'str_hop', 'filter_str_eq_1': 'filter_str_eq', 'all_rows_10': 'all_rows', 'born_11': 'born', '16 january 1905_12': '16 january 1905', 'left house_13': 'left house'} | {'less_4': [5], 'result_5': [], 'str_hop_2': [4], 'filter_str_eq_0': [2], 'all_rows_6': [0], 'born_7': [0], '30 november 1874_8': [0], 'left house_9': [2], 'str_hop_3': [4], 'filter_str_eq_1': [3], 'all_rows_10': [1], 'born_11': [1], '16 january 1905_12': [1], 'left house_13': [3]} | ['born', 'became oldest mp', 'left house', 'age on leaving', 'died', 'political party'] | [['6 may 1866', '1945', '1950', '83 2', '24 april 1957', 'liberal party'], ['22 november 1871', '1950', 'feb 1964', '92 1', '25 february 1964', 'labour party'], ['30 november 1874', 'feb 1964', 'sep 1964', '89 2', '24 january 1965', 'conservative'], ['18 october 1884', 'sep 1964', '1970', '85 2', '8 may 1986', 'labour party'], ['probably 9 november 1879', '1970', '1972', '92 1', '25 february 1972', 'labour party'], ['1 february 1890', '1972', '1973', '83 1', '8 october 1973', 'labour party'], ['23 february 1895', '1973', 'feb 1974', '79 2', '26 april 1980', 'conservative'], ['18 june 1898', 'feb 1974', '1979', '80 2', '6 may 1987', 'labour party'], ['16 january 1905', '1979', '1987', '82 2', '4 june 1990', 'labour party'], ['23 july 1913', '1987', '1992', '78 2', '3 march 2010', 'labour party'], ['9 july 1916', '1992', '2001', '84 2', '17 july 2005', 'conservative'], ['20 november 1921', '2001', '2007', '85 1', '21 june 2007', 'labour party'], ['6 april 1926', '2007', '2010', '84 2', 'living', 'democratic unionist party'], ['1 february 1930', '2010', 'n / a', 'n / a', 'living', 'conservative']]
robert earnshaw | https://en.wikipedia.org/wiki/Robert_Earnshaw | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-1102279-1.html.csv | majority | most of the robert earnshaw competitions have been held in millennium stadium . | {'scope': 'all', 'col': '2', 'most_or_all': 'most', 'criterion': 'equal', 'value': 'millennium stadium', 'subset': None} | {'func': 'most_str_eq', 'args': ['all_rows', 'venue', 'millennium stadium'], 'result': True, 'ind': 0, 'tointer': 'for the venue records of all rows , most of them fuzzily match to millennium stadium .', 'tostr': 'most_eq { all_rows ; venue ; millennium stadium } = true'} | most_eq { all_rows ; venue ; millennium stadium } = true | for the venue records of all rows , most of them fuzzily match to millennium stadium . | 1 | 1 | {'most_str_eq_0': 0, 'result_1': 1, 'all_rows_2': 2, 'venue_3': 3, 'millennium stadium_4': 4} | {'most_str_eq_0': 'most_str_eq', 'result_1': 'true', 'all_rows_2': 'all_rows', 'venue_3': 'venue', 'millennium stadium_4': 'millennium stadium'} | {'most_str_eq_0': [1], 'result_1': [], 'all_rows_2': [0], 'venue_3': [0], 'millennium stadium_4': [0]} | ['date', 'venue', 'score', 'result', 'competition'] | [['14 may 2002', 'millennium stadium , cardiff , wales', '1 - 0', '1 - 0', 'friendly'], ['12 february 2003', 'millennium stadium , cardiff , wales', '1 - 0', '2 - 2', 'friendly'], ['11 october 2003', 'millennium stadium , cardiff , wales', '2 - 3', '2 - 3', 'uefa euro 2004 qual'], ['18 february 2004', 'millennium stadium , cardiff , wales', '1 - 0', '4 - 0', 'friendly'], ['18 february 2004', 'millennium stadium , cardiff , wales', '2 - 0', '4 - 0', 'friendly'], ['18 february 2004', 'millennium stadium , cardiff , wales', '3 - 0', '4 - 0', 'friendly'], ['31 march 2004', 'stadium puskás ferenc , budapest , hungary', '1 - 2', '1 - 2', 'friendly'], ['8 september 2004', 'millennium stadium , cardiff , wales', '2 - 2', '2 - 2', '2006 fifa world cup qual'], ['13 october 2004', 'millennium stadium , cardiff , wales', '1 - 0', '2 - 3', '2006 fifa world cup qual'], ['27 may 2006', 'upc - arena , graz , austria', '1 - 1', '1 - 2', 'friendly'], ['27 may 2006', 'upc - arena , graz , austria', '1 - 2', '1 - 2', 'friendly'], ['11 october 2006', 'millennium stadium , cardiff , wales', '2 - 0', '3 - 1', 'uefa euro 2008 qual'], ['17 october 2007', 'stadio olimpico , serravalle , san marino', '1 - 0', '1 - 2', 'uefa euro 2008 qual'], ['29 may 2009', 'parc y scarlets , llanelli , wales', '1 - 0', '1 - 0', 'friendly'], ['25 may 2011', 'aviva stadium , dublin , ireland', '1 - 0', '1 - 3', '2011 nations cup'], ['27 may 2011', 'aviva stadium , dublin , ireland', '2 - 0', '2 - 0', '2011 nations cup']] |
1999 u.s. open ( golf ) | https://en.wikipedia.org/wiki/1999_U.S._Open_%28golf%29 | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-17162128-2.html.csv | aggregation | at the 1999 u.s. open , the average strokes to par was 15.6 . | {'scope': 'all', 'col': '5', 'type': 'average', 'result': '15.6', 'subset': None} | {'func': 'round_eq', 'args': [{'func': 'avg', 'args': ['all_rows', 'to par'], 'result': '15.6', 'ind': 0, 'tostr': 'avg { all_rows ; to par }'}, '15.6'], 'result': True, 'ind': 1, 'tostr': 'round_eq { avg { all_rows ; to par } ; 15.6 } = true', 'tointer': 'the average of the to par record of all rows is 15.6 .'} | round_eq { avg { all_rows ; to par } ; 15.6 } = true | the average of the to par record of all rows is 15.6 . | 2 | 2 | {'eq_1': 1, 'result_2': 2, 'avg_0': 0, 'all_rows_3': 3, 'to par_4': 4, '15.6_5': 5} | {'eq_1': 'eq', 'result_2': 'true', 'avg_0': 'avg', 'all_rows_3': 'all_rows', 'to par_4': 'to par', '15.6_5': '15.6'} | {'eq_1': [2], 'result_2': [], 'avg_0': [1], 'all_rows_3': [0], 'to par_4': [0], '15.6_5': [1]} | ['player', 'country', 'year ( s ) won', 'total', 'to par', 'finish'] | [['payne stewart', 'united states', '1991', '279', '1', '1'], ['corey pavin', 'united states', '1995', '296', '+ 16', 't34'], ['lee janzen', 'united states', '1993 , 1998', '298', '+ 18', 't46'], ['tom watson', 'united states', '1982', '301', '+ 21', 't57'], ['tom kite', 'united states', '1992', '302', '+ 22', 't60']] |
igor andreev | https://en.wikipedia.org/wiki/Igor_Andreev | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-1724440-3.html.csv | count | among the games where igor andreev became a runner-up , 5 of them were played on clay surface . | {'scope': 'subset', 'criterion': 'equal', 'value': 'clay', 'result': '5', 'col': '4', 'subset': {'col': '1', 'criterion': 'equal', 'value': 'runner - up'}} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_str_eq', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'outcome', 'runner - up'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; outcome ; runner - up }', 'tointer': 'select the rows whose outcome record fuzzily matches to runner - up .'}, 'surface', 'clay'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose outcome record fuzzily matches to runner - up . among these rows , select the rows whose surface record fuzzily matches to clay .', 'tostr': 'filter_eq { filter_eq { all_rows ; outcome ; runner - up } ; surface ; clay }'}], 'result': '5', 'ind': 2, 'tostr': 'count { filter_eq { filter_eq { all_rows ; outcome ; runner - up } ; surface ; clay } }', 'tointer': 'select the rows whose outcome record fuzzily matches to runner - up . among these rows , select the rows whose surface record fuzzily matches to clay . the number of such rows is 5 .'}, '5'], 'result': True, 'ind': 3, 'tostr': 'eq { count { filter_eq { filter_eq { all_rows ; outcome ; runner - up } ; surface ; clay } } ; 5 } = true', 'tointer': 'select the rows whose outcome record fuzzily matches to runner - up . among these rows , select the rows whose surface record fuzzily matches to clay . the number of such rows is 5 .'} | eq { count { filter_eq { filter_eq { all_rows ; outcome ; runner - up } ; surface ; clay } } ; 5 } = true | select the rows whose outcome record fuzzily matches to runner - up . among these rows , select the rows whose surface record fuzzily matches to clay . the number of such rows is 5 . 
| 4 | 4 | {'eq_3': 3, 'result_4': 4, 'count_2': 2, 'filter_str_eq_1': 1, 'filter_str_eq_0': 0, 'all_rows_5': 5, 'outcome_6': 6, 'runner - up_7': 7, 'surface_8': 8, 'clay_9': 9, '5_10': 10} | {'eq_3': 'eq', 'result_4': 'true', 'count_2': 'count', 'filter_str_eq_1': 'filter_str_eq', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_5': 'all_rows', 'outcome_6': 'outcome', 'runner - up_7': 'runner - up', 'surface_8': 'surface', 'clay_9': 'clay', '5_10': '5'} | {'eq_3': [4], 'result_4': [], 'count_2': [3], 'filter_str_eq_1': [2], 'filter_str_eq_0': [1], 'all_rows_5': [0], 'outcome_6': [0], 'runner - up_7': [0], 'surface_8': [1], 'clay_9': [1], '5_10': [3]} | ['outcome', 'date', 'championship', 'surface', 'opponent', 'score'] | [['runner - up', '12 july 2004', 'gstaad , switzerland', 'clay', 'roger federer', '2 - 6 , 3 - 6 , 7 - 5 , 3 - 6'], ['runner - up', '19 september 2004', 'bucharest , romania', 'clay', 'josé acasuso', '3 - 6 , 0 - 6'], ['winner', '4 april 2005', 'valencia , spain', 'clay', 'david ferrer', '6 - 3 , 5 - 7 , 6 - 3'], ['winner', '26 september 2005', 'palermo , italy', 'clay', 'filippo volandri', '0 - 6 , 6 - 1 , 6 - 3'], ['runner - up', '18 september 2005', 'bucharest , romania', 'clay', 'florent serra', '4 - 6 , 3 - 6'], ['winner', '10 october 2005', 'moscow , russia', 'carpet ( i )', 'nicolas kiefer', '5 - 7 , 7 - 6 ( 7 - 3 ) , 6 - 2'], ['runner - up', '16 january 2006', 'sydney , australia', 'hard', 'james blake', '2 - 6 , 6 - 3 , 6 - 7 ( 3 - 7 )'], ['runner - up', '13 july 2008', 'gstaad , switzerland', 'clay', 'victor hănescu', '3 - 6 , 4 - 6'], ['runner - up', '20 july 2008', 'umag , croatia', 'clay', 'fernando verdasco', '6 - 3 , 4 - 6 , 6 - 7 ( 4 - 7 )']] |
list of awards and nominations received by grey 's anatomy | https://en.wikipedia.org/wiki/List_of_awards_and_nominations_received_by_Grey%27s_Anatomy | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-12372406-1.html.csv | ordinal | of the awards and nominations received by grey 's anatomy , the second to last one was in the category of favorite tv actress - leading role in a drama . | {'row': '4', 'col': '1', 'order': '2', 'col_other': '4', 'max_or_min': 'max_to_min', 'value_mentioned': 'no', 'scope': 'all', 'subset': None} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'nth_argmax', 'args': ['all_rows', 'year', '2'], 'result': None, 'ind': 0, 'tostr': 'nth_argmax { all_rows ; year ; 2 }'}, 'category'], 'result': 'favorite tv actress - leading role in a drama', 'ind': 1, 'tostr': 'hop { nth_argmax { all_rows ; year ; 2 } ; category }'}, 'favorite tv actress - leading role in a drama'], 'result': True, 'ind': 2, 'tostr': 'eq { hop { nth_argmax { all_rows ; year ; 2 } ; category } ; favorite tv actress - leading role in a drama } = true', 'tointer': 'select the row whose year record of all rows is 2nd maximum . the category record of this row is favorite tv actress - leading role in a drama .'} | eq { hop { nth_argmax { all_rows ; year ; 2 } ; category } ; favorite tv actress - leading role in a drama } = true | select the row whose year record of all rows is 2nd maximum . the category record of this row is favorite tv actress - leading role in a drama . | 3 | 3 | {'str_eq_2': 2, 'result_3': 3, 'str_hop_1': 1, 'nth_argmax_0': 0, 'all_rows_4': 4, 'year_5': 5, '2_6': 6, 'category_7': 7, 'favorite tv actress - leading role in a drama_8': 8} | {'str_eq_2': 'str_eq', 'result_3': 'true', 'str_hop_1': 'str_hop', 'nth_argmax_0': 'nth_argmax', 'all_rows_4': 'all_rows', 'year_5': 'year', '2_6': '2', 'category_7': 'category', 'favorite tv actress - leading role in a drama_8': 'favorite tv actress - leading role in a drama'} | {'str_eq_2': [3], 'result_3': [], 'str_hop_1': [2], 'nth_argmax_0': [1], 'all_rows_4': [0], 'year_5': [0], '2_6': [0], 'category_7': [1], 'favorite tv actress - leading role in a drama_8': [2]} | ['year', 'recipient', 'role', 'category', 'result'] | [['2007', 'sara ramirez', 'callie torres', 'outstanding actress - television series', 'nominated'], ['2008', 'sara ramirez', 'callie torres', 'outstanding actress - drama television series', 'nominated'], ['2009', 'sara ramirez', 'callie torres', 'outstanding actress - drama television series', 'nominated'], ['2011', 'sara ramirez', 'callie torres', 'favorite tv actress - leading role in a drama', 'nominated'], ['2012', 'sara ramirez', 'callie torres', 'favorite tv actress - supporting role', 'nominated']] |
t.o.p ( entertainer ) | https://en.wikipedia.org/wiki/T.O.P_%28entertainer%29 | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-18180883-6.html.csv | count | eight total awards have been won by t.o.p. | {'scope': 'all', 'criterion': 'fuzzily_match', 'value': 'won', 'result': '8', 'col': '5', 'subset': None} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'result', 'won'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose result record fuzzily matches to won .', 'tostr': 'filter_eq { all_rows ; result ; won }'}], 'result': '8', 'ind': 1, 'tostr': 'count { filter_eq { all_rows ; result ; won } }', 'tointer': 'select the rows whose result record fuzzily matches to won . the number of such rows is 8 .'}, '8'], 'result': True, 'ind': 2, 'tostr': 'eq { count { filter_eq { all_rows ; result ; won } } ; 8 } = true', 'tointer': 'select the rows whose result record fuzzily matches to won . the number of such rows is 8 .'} | eq { count { filter_eq { all_rows ; result ; won } } ; 8 } = true | select the rows whose result record fuzzily matches to won . the number of such rows is 8 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'count_1': 1, 'filter_str_eq_0': 0, 'all_rows_4': 4, 'result_5': 5, 'won_6': 6, '8_7': 7} | {'eq_2': 'eq', 'result_3': 'true', 'count_1': 'count', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_4': 'all_rows', 'result_5': 'result', 'won_6': 'won', '8_7': '8'} | {'eq_2': [3], 'result_3': [], 'count_1': [2], 'filter_str_eq_0': [1], 'all_rows_4': [0], 'result_5': [0], 'won_6': [0], '8_7': [2]} | ['year', 'event', 'category', 'nominated work', 'result'] | [['2010', '47th grand bell awards', 'hallyu popularity', '71 : into the fire', 'won'], ['2010', '47th grand bell awards', 'best new actor', '71 : into the fire', 'nominated'], ['2010', '8th korea film awards', 'best new actor', '71 : into the fire', 'nominated'], ['2010', 'style icon awards', 'new icon ( movie ) ( korean )', '71 : into the fire', 'won'], ['2010', '31st blue dragon film awards', 'best new actor', '71 : into the fire', 'won'], ['2010', '31st blue dragon film awards', 'popularity', '71 : into the fire', 'won'], ['2010', 'max movie award', 'best new actor', '71 : into the fire', 'won'], ['2010', '5th asian film awards', 'best new actor', '71 : into the fire', 'nominated'], ['2010', '47th paeksang arts award', 'best new actor', '71 : into the fire', 'won'], ['2010', '47th paeksang arts award', 'popularity award ( actor in a motion picture )', '71 : into the fire', 'won'], ['2013', '17th biff asia star awards', 'rookie awards', 'commitment', 'won']] |
casey martin | https://en.wikipedia.org/wiki/Casey_Martin | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-1697190-2.html.csv | unique | there is only one year in which casey martin played in more than 20 tournaments . | {'scope': 'all', 'row': '2', 'col': '2', 'col_other': 'n/a', 'criterion': 'greater_than', 'value': '20', 'subset': None} | {'func': 'only', 'args': [{'func': 'filter_greater', 'args': ['all_rows', 'tournaments played', '20'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose tournaments played record is greater than 20 .', 'tostr': 'filter_greater { all_rows ; tournaments played ; 20 }'}], 'result': True, 'ind': 1, 'tostr': 'only { filter_greater { all_rows ; tournaments played ; 20 } } = true', 'tointer': 'select the rows whose tournaments played record is greater than 20 . there is only one such row in the table .'} | only { filter_greater { all_rows ; tournaments played ; 20 } } = true | select the rows whose tournaments played record is greater than 20 . there is only one such row in the table . | 2 | 2 | {'only_1': 1, 'result_2': 2, 'filter_greater_0': 0, 'all_rows_3': 3, 'tournaments played_4': 4, '20_5': 5} | {'only_1': 'only', 'result_2': 'true', 'filter_greater_0': 'filter_greater', 'all_rows_3': 'all_rows', 'tournaments played_4': 'tournaments played', '20_5': '20'} | {'only_1': [2], 'result_2': [], 'filter_greater_0': [1], 'all_rows_3': [0], 'tournaments played_4': [0], '20_5': [0]} | ['year', 'tournaments played', 'cuts made', 'wins', 'best finish', 'earnings', 'money list rank'] | [['1998', '3', '2', '0', 't - 23', '37221', '221'], ['2000', '29', '14', '0', 't - 17', '143248', '179'], ['2001', '2', '0', '0', 'cut', '0', 'n / a'], ['2002', '3', '0', '0', 'cut', '0', 'n / a'], ['2003', '1', '0', '0', 'cut', '0', 'n / a'], ['2004', '2', '2', '0', 't - 69', '15858', 'n / a'], ['2005', '1', '1', '0', 't - 65', '10547', 'n / a'], ['2012', '2', '0', '0', 'cut', '0', 'n / a']] |
1985 denver broncos season | https://en.wikipedia.org/wiki/1985_Denver_Broncos_season | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-17972193-1.html.csv | aggregation | the 1985 denver broncos scored forty points when playing against the chargers . | {'scope': 'subset', 'col': '4', 'type': 'sum', 'result': '40', 'subset': {'col': '3', 'criterion': 'equal', 'value': 'chargers'}} | {'func': 'round_eq', 'args': [{'func': 'sum', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'opponent', 'chargers'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; opponent ; chargers }', 'tointer': 'select the rows whose opponent record fuzzily matches to chargers .'}, 'result'], 'result': '40', 'ind': 1, 'tostr': 'sum { filter_eq { all_rows ; opponent ; chargers } ; result }'}, '40'], 'result': True, 'ind': 2, 'tostr': 'round_eq { sum { filter_eq { all_rows ; opponent ; chargers } ; result } ; 40 } = true', 'tointer': 'select the rows whose opponent record fuzzily matches to chargers . the sum of the result record of these rows is 40 .'} | round_eq { sum { filter_eq { all_rows ; opponent ; chargers } ; result } ; 40 } = true | select the rows whose opponent record fuzzily matches to chargers . the sum of the result record of these rows is 40 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'sum_1': 1, 'filter_str_eq_0': 0, 'all_rows_4': 4, 'opponent_5': 5, 'chargers_6': 6, 'result_7': 7, '40_8': 8} | {'eq_2': 'eq', 'result_3': 'true', 'sum_1': 'sum', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_4': 'all_rows', 'opponent_5': 'opponent', 'chargers_6': 'chargers', 'result_7': 'result', '40_8': '40'} | {'eq_2': [3], 'result_3': [], 'sum_1': [2], 'filter_str_eq_0': [1], 'all_rows_4': [0], 'opponent_5': [0], 'chargers_6': [0], 'result_7': [1], '40_8': [2]} | ['week', 'date', 'opponent', 'result', 'game site', 'record', 'attendance'] | [['1', 'september 8', 'los angeles rams', 'l 16 - 20', 'anaheim stadium', '0 - 1', '52522'], ['2', 'september 15', 'new orleans saints', 'w 34 - 23', 'mile high stadium', '1 - 1', '74488'], ['3', 'september 22', 'atlanta falcons', 'w 44 - 28', 'atlanta - fulton county stadium', '2 - 1', '37903'], ['4', 'september 29', 'miami dolphins', 'l 26 - 30', 'mile high stadium', '2 - 2', '73614'], ['5', 'october 6', 'houston oilers', 'w 31 - 20', 'mile high stadium', '3 - 2', '74699'], ['6', 'october 13', 'indianapolis colts', 'w 15 - 10', 'hoosier dome', '4 - 2', '60128'], ['7', 'october 20', 'seattle seahawks', 'w 13 - 10 ( ot )', 'mile high stadium', '5 - 2', '74899'], ['8', 'october 27', 'kansas city chiefs', 'w 30 - 10', 'arrowhead stadium', '6 - 2', '68246'], ['9', 'november 3', 'san diego chargers', 'l 10 - 30', 'jack murphy stadium', '6 - 3', '57312'], ['10', 'november 11', 'san francisco 49ers', 'w 17 - 16', 'mile high stadium', '7 - 3', '73173'], ['11', 'november 17', 'san diego chargers', 'w 30 - 24 ( ot )', 'mile high stadium', '8 - 3', '74376'], ['12', 'november 24', 'los angeles raiders', 'l 28 - 31 ( ot )', 'los angeles memorial coliseum', '8 - 4', '63181'], ['13', 'december 1', 'pittsburgh steelers', 'w 31 - 23', 'three rivers stadium', '9 - 4', '56797'], ['14', 'december 8', 'los angeles raiders', 'l 14 - 17 ( ot )', 'mile high stadium', '9 - 5', '75042'], ['15', 'december 14', 'kansas city chiefs', 'w 14 - 13', 'mile high stadium', '10 - 5', '69209']] |
delaware valley collegiate hockey conference | https://en.wikipedia.org/wiki/Delaware_Valley_Collegiate_Hockey_Conference | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-16432543-2.html.csv | count | there are 6 institutions which participated in the delaware valley collegiate hockey conference . | {'scope': 'all', 'criterion': 'all', 'value': 'n/a', 'result': '6', 'col': '1', 'subset': None} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_all', 'args': ['all_rows', 'institution'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose institution record is arbitrary .', 'tostr': 'filter_all { all_rows ; institution }'}], 'result': '6', 'ind': 1, 'tostr': 'count { filter_all { all_rows ; institution } }', 'tointer': 'select the rows whose institution record is arbitrary . the number of such rows is 6 .'}, '6'], 'result': True, 'ind': 2, 'tostr': 'eq { count { filter_all { all_rows ; institution } } ; 6 } = true', 'tointer': 'select the rows whose institution record is arbitrary . the number of such rows is 6 .'} | eq { count { filter_all { all_rows ; institution } } ; 6 } = true | select the rows whose institution record is arbitrary . the number of such rows is 6 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'count_1': 1, 'filter_all_0': 0, 'all_rows_4': 4, 'institution_5': 5, '6_6': 6} | {'eq_2': 'eq', 'result_3': 'true', 'count_1': 'count', 'filter_all_0': 'filter_all', 'all_rows_4': 'all_rows', 'institution_5': 'institution', '6_6': '6'} | {'eq_2': [3], 'result_3': [], 'count_1': [2], 'filter_all_0': [1], 'all_rows_4': [0], 'institution_5': [0], '6_6': [2]} | ['institution', 'location', 'nickname', 'enrollment', 'established'] | [['alvernia university', 'reading , pa', 'crusaders', '2900', '1958'], ['bryn athyn college', 'bryn athyn , pa', 'lions', '218', '1877'], ['east stroudsburg university', 'east stroudsburg , pa', 'warriors', '7000', '1893'], ['neumann university', 'aston , pa', 'knights', '2012', '1965'], ['penn state brandywine', 'media , pa', 'nittany lions', '1700', '1967'], ['richard stockton college', 'galloway township , nj', 'ospreys', '7243', '1969']] |
provinces of korea | https://en.wikipedia.org/wiki/Provinces_of_Korea | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-160510-5.html.csv | superlative | gyeongsangbuk has the largest area of any province in korea . | {'scope': 'all', 'col_superlative': '6', 'row_superlative': '6', 'value_mentioned': 'no', 'max_or_min': 'max', 'other_col': '1', 'subset': None} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'argmax', 'args': ['all_rows', 'area'], 'result': None, 'ind': 0, 'tostr': 'argmax { all_rows ; area }'}, 'rr romaja'], 'result': 'gyeongsangbuk', 'ind': 1, 'tostr': 'hop { argmax { all_rows ; area } ; rr romaja }'}, 'gyeongsangbuk'], 'result': True, 'ind': 2, 'tostr': 'eq { hop { argmax { all_rows ; area } ; rr romaja } ; gyeongsangbuk } = true', 'tointer': 'select the row whose area record of all rows is maximum . the rr romaja record of this row is gyeongsangbuk .'} | eq { hop { argmax { all_rows ; area } ; rr romaja } ; gyeongsangbuk } = true | select the row whose area record of all rows is maximum . the rr romaja record of this row is gyeongsangbuk . | 3 | 3 | {'str_eq_2': 2, 'result_3': 3, 'str_hop_1': 1, 'argmax_0': 0, 'all_rows_4': 4, 'area_5': 5, 'rr romaja_6': 6, 'gyeongsangbuk_7': 7} | {'str_eq_2': 'str_eq', 'result_3': 'true', 'str_hop_1': 'str_hop', 'argmax_0': 'argmax', 'all_rows_4': 'all_rows', 'area_5': 'area', 'rr romaja_6': 'rr romaja', 'gyeongsangbuk_7': 'gyeongsangbuk'} | {'str_eq_2': [3], 'result_3': [], 'str_hop_1': [2], 'argmax_0': [1], 'all_rows_4': [0], 'area_5': [0], 'rr romaja_6': [1], 'gyeongsangbuk_7': [2]} | ['rr romaja', 'm - r romaja', 'hangul / chosongul', 'hanja', 'iso', 'area', 'capital', 'region', 'country'] | [['chungcheongbuk', "ch ' ungch ' ŏngbuk", '충청북도', '忠清北道', 'kr - 43', '7436', 'cheongju', 'hoseo', 'south korea'], ['chungcheongnam', "ch ' ungch ' ŏngnam", '충청남도', '忠清南道', 'kr - 44', '8352', 'hongseong', 'hoseo', 'south korea'], ['gangwon', 'kangwŏn', '강원도', '江原道', 'kr - 44', '16894', 'chuncheon', 'gwandong', 'south korea'], ['gangwon', 'kangwŏn', '강원도', '江原道', 'kp - 07', '11091', 'wonsan', 'gwandong', 'north korea'], ['gyeonggi', 'kyŏnggi', '경기도', '京畿道', 'kr - 41', '10131', 'suwon', 'sudogwon', 'south korea'], ['gyeongsangbuk', 'kyŏngsangbuk', '경상북도', '慶尙北道', 'kr - 47', '19440', 'daegu', 'yeongnam', 'south korea'], ['gyeongsangnam', 'kyŏngsangnam', '경상남도', '慶尙南道', 'kr - 48', '11859', 'changwon', 'yeongnam', 'south korea'], ['hamgyeongbuk', 'hamgyŏngbuk', '함경북도', '咸鏡北道', 'kp - 09', '15980', 'chongjin', 'kwanbuk', 'north korea'], ['hamgyeongnam', 'hamgyŏngnam', '함경남도', '咸鏡南道', 'kp - 08', '18534', 'hamhung', 'kwannam', 'north korea'], ['hwanghaebuk', 'hwanghaebuk', '황해북도', '黃海北道', 'kp - 06', '8154', 'sariwon', 'haeso', 'north korea'], ['hwanghaenam', 'hwanghaenam', '황해남도', '黃海南道', 'kp - 05', '8450', 'sariwon', 'haeso', 'north korea'], ['jagang', 'chagang', '자강도', '慈江道', 'kp - 04', '16765', 'kanggye', 'kwanso', 'north korea'], ['jeju', 'cheju', '제주도', '濟州道', 'kr - 49', '1846', 'jeju city', 'jejudo', 'south korea'], ['jeollabuk', 'chŏllabuk', '전라북도', '全羅北道', 'kr - 45', '8043', 'jeonju', 'honam', 'south korea'], ['jeollanam', 'chŏllanam', '전라남도', '全羅南道', 'kr - 46', '11858', 'muan', 'honam', 'south korea'], ['pyeonganbuk', "p ' yŏnganbuk", '평안북도', '平安北道', 'kp - 03', '12680', 'sinuiju', 'kwanso', 'north korea'], ['pyeongannam', "p ' yŏngannam", '평안남도', '平安南道', 'kp - 02', '11891', 'pyongsong', 'kwanso', 'north korea']] |
1953 green bay packers season | https://en.wikipedia.org/wiki/1953_Green_Bay_Packers_season | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-14656212-2.html.csv | superlative | in the 1953 green bay packers season , the game with the highest attendance was the detroit lions game . | {'scope': 'all', 'col_superlative': '6', 'row_superlative': '10', 'value_mentioned': 'no', 'max_or_min': 'max', 'other_col': '3', 'subset': None} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'argmax', 'args': ['all_rows', 'attendance'], 'result': None, 'ind': 0, 'tostr': 'argmax { all_rows ; attendance }'}, 'opponent'], 'result': 'detroit lions', 'ind': 1, 'tostr': 'hop { argmax { all_rows ; attendance } ; opponent }'}, 'detroit lions'], 'result': True, 'ind': 2, 'tostr': 'eq { hop { argmax { all_rows ; attendance } ; opponent } ; detroit lions } = true', 'tointer': 'select the row whose attendance record of all rows is maximum . the opponent record of this row is detroit lions .'} | eq { hop { argmax { all_rows ; attendance } ; opponent } ; detroit lions } = true | select the row whose attendance record of all rows is maximum . the opponent record of this row is detroit lions . | 3 | 3 | {'str_eq_2': 2, 'result_3': 3, 'str_hop_1': 1, 'argmax_0': 0, 'all_rows_4': 4, 'attendance_5': 5, 'opponent_6': 6, 'detroit lions_7': 7} | {'str_eq_2': 'str_eq', 'result_3': 'true', 'str_hop_1': 'str_hop', 'argmax_0': 'argmax', 'all_rows_4': 'all_rows', 'attendance_5': 'attendance', 'opponent_6': 'opponent', 'detroit lions_7': 'detroit lions'} | {'str_eq_2': [3], 'result_3': [], 'str_hop_1': [2], 'argmax_0': [1], 'all_rows_4': [0], 'attendance_5': [0], 'opponent_6': [1], 'detroit lions_7': [2]} | ['week', 'date', 'opponent', 'result', 'venue', 'attendance'] | [['1', 'september 27 , 1953', 'cleveland browns', 'l 27 - 0', 'milwaukee county stadium', '22604'], ['2', 'october 4 , 1953', 'chicago bears', 'l 17 - 13', 'city stadium', '24835'], ['3', 'october 11 , 1953', 'los angeles rams', 'l 38 - 20', 'milwaukee county stadium', '23353'], ['4', 'october 18 , 1953', 'baltimore colts', 'w 37 - 14', 'city stadium', '18713'], ['5', 'october 24 , 1953', 'pittsburgh steelers', 'l 31 - 14', 'forbes field', '22918'], ['6', 'october 31 , 1953', 'baltimore colts', 'w 35 - 24', 'memorial stadium', '33797'], ['7', 'november 8 , 1953', 'chicago bears', 't 21 - 21', 'wrigley field', '39889'], ['8', 'november 15 , 1953', 'detroit lions', 'l 14 - 7', 'city stadium', '20834'], ['9', 'november 22 , 1953', 'san francisco 49ers', 'l 37 - 7', 'milwaukee county stadium', '16378'], ['10', 'november 26 , 1953', 'detroit lions', 'l 34 - 15', 'briggs stadium', '52607'], ['11', 'december 6 , 1953', 'san francisco 49ers', 'l 48 - 14', 'kezar stadium', '31337'], ['12', 'december 12 , 1953', 'los angeles rams', 'l 33 - 17', 'los angeles memorial coliseum', '23069']] |
comme j' ai mal | https://en.wikipedia.org/wiki/Comme_j%27ai_mal | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-14978398-2.html.csv | majority | most versions of the song comme j' ai mal were released in the year 1996 . | {'scope': 'all', 'col': '5', 'most_or_all': 'most', 'criterion': 'equal', 'value': '1996', 'subset': None} | {'func': 'most_eq', 'args': ['all_rows', 'year', '1996'], 'result': True, 'ind': 0, 'tointer': 'for the year records of all rows , most of them are equal to 1996 .', 'tostr': 'most_eq { all_rows ; year ; 1996 } = true'} | most_eq { all_rows ; year ; 1996 } = true | for the year records of all rows , most of them are equal to 1996 . | 1 | 1 | {'most_eq_0': 0, 'result_1': 1, 'all_rows_2': 2, 'year_3': 3, '1996_4': 4} | {'most_eq_0': 'most_eq', 'result_1': 'true', 'all_rows_2': 'all_rows', 'year_3': 'year', '1996_4': '1996'} | {'most_eq_0': [1], 'result_1': [], 'all_rows_2': [0], 'year_3': [0], '1996_4': [0]} | ['version', 'length', 'album', 'remixed by', 'year'] | [['album version', '3:53', 'anamorphosée , les mots', '-', '1995'], ['single version', '3:50', '-', '-', '1996'], ['instrumental', '3:50', '-', 'laurent boutonnat', '1996'], ['aches remix', '3:58', '-', 'laurent boutonnat , bertrand chtenet', '1996'], ['pain killer mix', '6:20', '-', 'laurent boutonnat , bertrand chtenet', '1996'], ['upside down remix', '6:45', '-', 'laurent boutonnat , bertrand chtenet', '1996'], ['music video', '4:00', 'music videos ii , music videos ii & iii', '-', '1996'], ['live version ( recorded in 1996 )', '4:35 ( audio ) 4:18 ( video )', 'live à bercy', '-', '1996']] |
swatch fivb world tour 2005 | https://en.wikipedia.org/wiki/Swatch_FIVB_World_Tour_2005 | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-18694694-3.html.csv | aggregation | in the beach volleyball swatch fivb world tour 2005 , countries earned an average of 5.2 silver medals . | {'scope': 'all', 'col': '4', 'type': 'average', 'result': '5.2', 'subset': None} | {'func': 'round_eq', 'args': [{'func': 'avg', 'args': ['all_rows', 'silver'], 'result': '5.2', 'ind': 0, 'tostr': 'avg { all_rows ; silver }'}, '5.2'], 'result': True, 'ind': 1, 'tostr': 'round_eq { avg { all_rows ; silver } ; 5.2 } = true', 'tointer': 'the average of the silver record of all rows is 5.2 .'} | round_eq { avg { all_rows ; silver } ; 5.2 } = true | the average of the silver record of all rows is 5.2 . | 2 | 2 | {'eq_1': 1, 'result_2': 2, 'avg_0': 0, 'all_rows_3': 3, 'silver_4': 4, '5.2_5': 5} | {'eq_1': 'eq', 'result_2': 'true', 'avg_0': 'avg', 'all_rows_3': 'all_rows', 'silver_4': 'silver', '5.2_5': '5.2'} | {'eq_1': [2], 'result_2': [], 'avg_0': [1], 'all_rows_3': [0], 'silver_4': [0], '5.2_5': [1]} | ['rank', 'nation', 'gold', 'silver', 'bronze', 'total'] | [['1', 'brazil', '19', '19', '17', '55'], ['2', 'united states', '6', '3', '4', '13'], ['4', 'germany', '3', '3', '3', '9'], ['5', 'switzerland', '2', '3', '2', '7'], ['6', 'greece', '1', '2', '1', '4'], ['7', 'china', '0', '1', '4', '5']] |
victorian railways c class ( diesel ) | https://en.wikipedia.org/wiki/Victorian_Railways_C_class_%28diesel%29 | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-14880117-1.html.csv | ordinal | the c503 was the third earliest victorian railways c class locomotive to enter service . | {'row': '3', 'col': '2', 'order': '3', 'col_other': '1', 'max_or_min': 'min_to_max', 'value_mentioned': 'no', 'scope': 'all', 'subset': None} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'nth_argmin', 'args': ['all_rows', 'entered service', '3'], 'result': None, 'ind': 0, 'tostr': 'nth_argmin { all_rows ; entered service ; 3 }'}, 'locomotive'], 'result': 'c503', 'ind': 1, 'tostr': 'hop { nth_argmin { all_rows ; entered service ; 3 } ; locomotive }'}, 'c503'], 'result': True, 'ind': 2, 'tostr': 'eq { hop { nth_argmin { all_rows ; entered service ; 3 } ; locomotive } ; c503 } = true', 'tointer': 'select the row whose entered service record of all rows is 3rd minimum . the locomotive record of this row is c503 .'} | eq { hop { nth_argmin { all_rows ; entered service ; 3 } ; locomotive } ; c503 } = true | select the row whose entered service record of all rows is 3rd minimum . the locomotive record of this row is c503 . | 3 | 3 | {'str_eq_2': 2, 'result_3': 3, 'str_hop_1': 1, 'nth_argmin_0': 0, 'all_rows_4': 4, 'entered service_5': 5, '3_6': 6, 'locomotive_7': 7, 'c503_8': 8} | {'str_eq_2': 'str_eq', 'result_3': 'true', 'str_hop_1': 'str_hop', 'nth_argmin_0': 'nth_argmin', 'all_rows_4': 'all_rows', 'entered service_5': 'entered service', '3_6': '3', 'locomotive_7': 'locomotive', 'c503_8': 'c503'} | {'str_eq_2': [3], 'result_3': [], 'str_hop_1': [2], 'nth_argmin_0': [1], 'all_rows_4': [0], 'entered service_5': [0], '3_6': [0], 'locomotive_7': [1], 'c503_8': [2]} | ['locomotive', 'entered service', 'owner', 'operator', 'livery', 'status'] | [['c501', '14 may 1977', 'seymour railway heritage centre', 'el zorro', 'vr blue & gold', 'preserved - operational'], ['c502', '22 june 1977', 'chicago freight car leasing australia', 'el zorro', 'cfcla blue & yellow', 'operational'], ['c503', '19 july 1977', 'chicago freight car leasing australia', 'southern shorthaul railroad', 'ssr yellow & black', 'operational'], ['c504', '18 august 1977', 'greentrains', 'pacific national', 'blue & yellow', 'operational'], ['c505', '13 september 1977', 'greentrains', 'pacific national', 'blue & yellow', 'operational'], ['c506', '6 october 1977', 'greentrains', 'pacific national', 'greentrains green & yellow', 'operational'], ['c507', '4 november 1977', 'greentrains', 'pacific national', 'blue & yellow', 'operational'], ['c508', '16 december 1977', 'chicago freight car leasing australia', 'el zorro', 'blue & yellow', 'operational'], ['c509', '10 march 1978', 'greentrains', 'pacific national', 'greentrains green & yellow', 'operational'], ['c510', '11 july 1978', 'greentrains', 'pacific national', 'greentrains green & yellow', 'operational']] |
list of game of the year awards | https://en.wikipedia.org/wiki/List_of_Game_of_the_Year_awards | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-1851722-8.html.csv | superlative | the game from the earliest game of the year awards was resident evil 4 . | {'scope': 'all', 'col_superlative': '1', 'row_superlative': '1', 'value_mentioned': 'no', 'max_or_min': 'min', 'other_col': '2', 'subset': None} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'argmin', 'args': ['all_rows', 'year'], 'result': None, 'ind': 0, 'tostr': 'argmin { all_rows ; year }'}, 'game'], 'result': 'resident evil 4', 'ind': 1, 'tostr': 'hop { argmin { all_rows ; year } ; game }'}, 'resident evil 4'], 'result': True, 'ind': 2, 'tostr': 'eq { hop { argmin { all_rows ; year } ; game } ; resident evil 4 } = true', 'tointer': 'select the row whose year record of all rows is minimum . the game record of this row is resident evil 4 .'} | eq { hop { argmin { all_rows ; year } ; game } ; resident evil 4 } = true | select the row whose year record of all rows is minimum . the game record of this row is resident evil 4 . | 3 | 3 | {'str_eq_2': 2, 'result_3': 3, 'str_hop_1': 1, 'argmin_0': 0, 'all_rows_4': 4, 'year_5': 5, 'game_6': 6, 'resident evil 4_7': 7} | {'str_eq_2': 'str_eq', 'result_3': 'true', 'str_hop_1': 'str_hop', 'argmin_0': 'argmin', 'all_rows_4': 'all_rows', 'year_5': 'year', 'game_6': 'game', 'resident evil 4_7': 'resident evil 4'} | {'str_eq_2': [3], 'result_3': [], 'str_hop_1': [2], 'argmin_0': [1], 'all_rows_4': [0], 'year_5': [0], 'game_6': [1], 'resident evil 4_7': [2]} | ['year', 'game', 'genre', 'platform ( s )', 'developer ( s )'] | [['2005', 'resident evil 4', 'survival horror : ( third - person ) shooter', 'gamecube , playstation 2 windows', 'capcom'], ['2006', 'gears of war', 'third - person shooter', 'xbox 360', 'epic games'], ['2007', 'call of duty 4 : modern warfare', 'action', 'playstation 3 , xbox 360 , nintendo ds , pc', 'infinity ward'], ['2008', 'littlebigplanet', 'puzzle platformer', 'playstation 3', 'media molecule'], ['2009', "assassin 's creed ii", 'third - person action - adventure', 'microsoft windows , playstation 3 , xbox 360', 'ubisoft montreal']] |
list of childrens hospital episodes | https://en.wikipedia.org/wiki/List_of_Childrens_Hospital_episodes | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-28081876-4.html.csv | ordinal | of the childrens hospital episodes , the one with the 2nd most recent original air date was the episode titled " the coffee machine paid for itself . " . | {'row': '8', 'col': '6', 'order': '2', 'col_other': '3', 'max_or_min': 'max_to_min', 'value_mentioned': 'no', 'scope': 'all', 'subset': None} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'nth_argmax', 'args': ['all_rows', 'original air date', '2'], 'result': None, 'ind': 0, 'tostr': 'nth_argmax { all_rows ; original air date ; 2 }'}, 'title'], 'result': 'the coffee machine paid for itself', 'ind': 1, 'tostr': 'hop { nth_argmax { all_rows ; original air date ; 2 } ; title }'}, 'the coffee machine paid for itself'], 'result': True, 'ind': 2, 'tostr': 'eq { hop { nth_argmax { all_rows ; original air date ; 2 } ; title } ; the coffee machine paid for itself } = true', 'tointer': 'select the row whose original air date record of all rows is 2nd maximum . the title record of this row is the coffee machine paid for itself .'} | eq { hop { nth_argmax { all_rows ; original air date ; 2 } ; title } ; the coffee machine paid for itself } = true | select the row whose original air date record of all rows is 2nd maximum . the title record of this row is the coffee machine paid for itself . | 3 | 3 | {'str_eq_2': 2, 'result_3': 3, 'str_hop_1': 1, 'nth_argmax_0': 0, 'all_rows_4': 4, 'original air date_5': 5, '2_6': 6, 'title_7': 7, 'the coffee machine paid for itself_8': 8} | {'str_eq_2': 'str_eq', 'result_3': 'true', 'str_hop_1': 'str_hop', 'nth_argmax_0': 'nth_argmax', 'all_rows_4': 'all_rows', 'original air date_5': 'original air date', '2_6': '2', 'title_7': 'title', 'the coffee machine paid for itself_8': 'the coffee machine paid for itself'} | {'str_eq_2': [3], 'result_3': [], 'str_hop_1': [2], 'nth_argmax_0': [1], 'all_rows_4': [0], 'original air date_5': [0], '2_6': [0], 'title_7': [1], 'the coffee machine paid for itself_8': [2]} | ['series no', 'season no', 'title', 'directed by', 'written by', 'original air date', 'production code'] | [['6', '1', 'i see her face everywhere', 'matt shakman', 'rob corddry', 'august 22 , 2010', '201'], ['7', '2', 'no one can replace her', 'matt shakman', 'rob corddry', 'august 29 , 2010', '202'], ['8', '3', 'i am not afraid of any ghost', 'bryan gordon', 'rob huebel', 'september 5 , 2010', '203'], ['9', '4', 'give a painted brother a break', 'rob schrab', 'paul scheer', 'september 12 , 2010', '205'], ['10', '5', 'joke overload', 'john inwood', 'jason mantzoukas', 'september 19 , 2010', '207'], ['11', '6', 'end of the middle', 'david wain', 'jonathan stern', 'september 26 , 2010', '206'], ['13', '8', 'hot enough for you', 'david wain', 'rob corddry & david wain', 'october 10 , 2010', '208'], ['14', '9', 'the coffee machine paid for itself', 'bryan gordon', 'ken marino & erica oyama', 'october 17 , 2010', '209'], ['16', '11', 'you know no one can hear you , right', 'ken marino', 'brian huskey and rob corddry', 'october 31 , 2010', '211']] |
peoria , illinois | https://en.wikipedia.org/wiki/Peoria%2C_Illinois | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-111774-2.html.csv | count | of the clubs in peoria , illinois , there were three that have carver arena as their venue . | {'scope': 'all', 'criterion': 'equal', 'value': 'carver arena', 'result': '3', 'col': '4', 'subset': None} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'venue', 'carver arena'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose venue record fuzzily matches to carver arena .', 'tostr': 'filter_eq { all_rows ; venue ; carver arena }'}], 'result': '3', 'ind': 1, 'tostr': 'count { filter_eq { all_rows ; venue ; carver arena } }', 'tointer': 'select the rows whose venue record fuzzily matches to carver arena . the number of such rows is 3 .'}, '3'], 'result': True, 'ind': 2, 'tostr': 'eq { count { filter_eq { all_rows ; venue ; carver arena } } ; 3 } = true', 'tointer': 'select the rows whose venue record fuzzily matches to carver arena . the number of such rows is 3 .'} | eq { count { filter_eq { all_rows ; venue ; carver arena } } ; 3 } = true | select the rows whose venue record fuzzily matches to carver arena . the number of such rows is 3 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'count_1': 1, 'filter_str_eq_0': 0, 'all_rows_4': 4, 'venue_5': 5, 'carver arena_6': 6, '3_7': 7} | {'eq_2': 'eq', 'result_3': 'true', 'count_1': 'count', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_4': 'all_rows', 'venue_5': 'venue', 'carver arena_6': 'carver arena', '3_7': '3'} | {'eq_2': [3], 'result_3': [], 'count_1': [2], 'filter_str_eq_0': [1], 'all_rows_4': [0], 'venue_5': [0], 'carver arena_6': [0], '3_7': [2]} | ['club', 'league', 'sport', 'venue', 'established', 'disbanded', 'championships'] | [['peoria chiefs', 'midwest league class - a', 'baseball', 'dozer park', '1983', 'n / a', '1 league title'], ['peoria rivermen', 'sphl', 'hockey', 'carver arena', '2013', 'n / a', '0'], ['peoria rivermen', 'ahl', 'ice hockey', 'carver arena', '2005', '2013', '0'], ['peoria pirates', 'af2', 'arena football', 'carver arena', '1999', '2009', '2 s arenacup'], ['peoria redwings', 'aagpbl', 'baseball', 'peoria stadium', '1946', '1951', '0']] |
fred couples | https://en.wikipedia.org/wiki/Fred_Couples | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-1221065-7.html.csv | count | fred won 3 of his championships in the month of march . | {'scope': 'all', 'criterion': 'fuzzily_match', 'value': 'mar', 'result': '3', 'col': '1', 'subset': None} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'date', 'mar'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose date record fuzzily matches to mar .', 'tostr': 'filter_eq { all_rows ; date ; mar }'}], 'result': '3', 'ind': 1, 'tostr': 'count { filter_eq { all_rows ; date ; mar } }', 'tointer': 'select the rows whose date record fuzzily matches to mar . the number of such rows is 3 .'}, '3'], 'result': True, 'ind': 2, 'tostr': 'eq { count { filter_eq { all_rows ; date ; mar } } ; 3 } = true', 'tointer': 'select the rows whose date record fuzzily matches to mar . the number of such rows is 3 .'} | eq { count { filter_eq { all_rows ; date ; mar } } ; 3 } = true | select the rows whose date record fuzzily matches to mar . the number of such rows is 3 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'count_1': 1, 'filter_str_eq_0': 0, 'all_rows_4': 4, 'date_5': 5, 'mar_6': 6, '3_7': 7} | {'eq_2': 'eq', 'result_3': 'true', 'count_1': 'count', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_4': 'all_rows', 'date_5': 'date', 'mar_6': 'mar', '3_7': '3'} | {'eq_2': [3], 'result_3': [], 'count_1': [2], 'filter_str_eq_0': [1], 'all_rows_4': [0], 'date_5': [0], 'mar_6': [0], '3_7': [2]} | ['date', 'tournament', 'winning score', 'margin of victory', 'runner ( s ) - up'] | [['feb 14 , 2010', 'the ace group classic', '17 ( 68 + 67 + 64 = 199 )', '1 stroke', 'tommy armour iii'], ['mar 7 , 2010', 'toshiba classic', '18 ( 66 + 64 + 65 = 195 )', '4 strokes', 'ronnie black'], ['mar 28 , 2010', 'cap cana championship', '21 ( 67 + 66 + 62 = 195 )', '2 strokes', 'corey pavin'], ['oct 24 , 2010', 'administaff small business classic', '17 ( 71 + 65 + 63 = 199 )', '7 strokes', 'mark wiebe'], ['aug 20 , 2011', 'constellation energy senior players championship', '11 ( 68 + 66 + 68 + 71 = 273 )', 'playoff', 'john cook'], ['oct 16 , 2011', 'at & t championship', '23 ( 65 + 62 + 66 = 193 )', '7 strokes', 'mark calcavecchia'], ['mar 25 , 2012', 'mississippi gulf resort classic', '14 ( 63 + 70 + 69 = 202 )', '1 stroke', 'michael allen'], ['jul 29 , 2012', 'the senior open championship', '9 ( 72 + 68 + 64 + 67 = 271 )', '2 strokes', 'gary hallberg'], ['nov 3 , 2013', 'charles schwab cup championship', '17 ( 65 + 65 + 68 + 69 = 267 )', '6 strokes', "bernhard langer , mark o'meara , peter senior"]] |
1981 pga championship | https://en.wikipedia.org/wiki/1981_PGA_Championship | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-18168735-3.html.csv | aggregation | for the athletes from the united states , the average number of strokes to par at the 1981 pga championship was 2.125 . | {'scope': 'all', 'col': '5', 'type': 'average', 'result': '2.125', 'subset': None} | {'func': 'round_eq', 'args': [{'func': 'avg', 'args': ['all_rows', 'to par'], 'result': '2.125', 'ind': 0, 'tostr': 'avg { all_rows ; to par }'}, '2.125'], 'result': True, 'ind': 1, 'tostr': 'round_eq { avg { all_rows ; to par } ; 2.125 } = true', 'tointer': 'the average of the to par record of all rows is 2.125 .'} | round_eq { avg { all_rows ; to par } ; 2.125 } = true | the average of the to par record of all rows is 2.125 . | 2 | 2 | {'eq_1': 1, 'result_2': 2, 'avg_0': 0, 'all_rows_3': 3, 'to par_4': 4, '2.125_5': 5} | {'eq_1': 'eq', 'result_2': 'true', 'avg_0': 'avg', 'all_rows_3': 'all_rows', 'to par_4': 'to par', '2.125_5': '2.125'} | {'eq_1': [2], 'result_2': [], 'avg_0': [1], 'all_rows_3': [0], 'to par_4': [0], '2.125_5': [1]} | ['place', 'player', 'country', 'score', 'to par', 'money'] | [['1', 'larry nelson', 'united states', '70 + 66 + 66 + 71 = 273', '7', '60000'], ['2', 'fuzzy zoeller', 'united states', '70 + 68 + 68 + 71 = 277', '3', '40000'], ['3', 'dan pohl', 'united states', '69 + 67 + 73 + 69 = 278', '2', '25000'], ['t4', 'isao aoki', 'japan', '75 + 68 + 66 + 70 = 279', '1', '13146'], ['t4', 'keith fergus', 'united states', '71 + 71 + 69 + 68 = 279', '1', '13146'], ['t4', 'bob gilder', 'united states', '74 + 69 + 70 + 66 = 279', '1', '13146'], ['t4', 'tom kite', 'united states', '71 + 67 + 69 + 72 = 279', '1', '13146'], ['t4', 'bruce lietzke', 'united states', '70 + 70 + 71 + 68 = 279', '1', '13146'], ['t4', 'jack nicklaus', 'united states', '71 + 68 + 71 + 69 = 279', '1', '13146'], ['t4', 'greg norman', 'australia', '73 + 67 + 68 + 71 = 279', '1', '13146']] |
euroleague 2007 - 08 individual statistics | https://en.wikipedia.org/wiki/Euroleague_2007%E2%80%9308_Individual_Statistics | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-16050349-9.html.csv | superlative | yotam halperin had more assists than all other players in the euroleague . | {'scope': 'all', 'col_superlative': '5', 'row_superlative': '1', 'value_mentioned': 'no', 'max_or_min': 'max', 'other_col': '2', 'subset': None} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'argmax', 'args': ['all_rows', 'assists'], 'result': None, 'ind': 0, 'tostr': 'argmax { all_rows ; assists }'}, 'name'], 'result': 'yotam halperin', 'ind': 1, 'tostr': 'hop { argmax { all_rows ; assists } ; name }'}, 'yotam halperin'], 'result': True, 'ind': 2, 'tostr': 'eq { hop { argmax { all_rows ; assists } ; name } ; yotam halperin } = true', 'tointer': 'select the row whose assists record of all rows is maximum . the name record of this row is yotam halperin .'} | eq { hop { argmax { all_rows ; assists } ; name } ; yotam halperin } = true | select the row whose assists record of all rows is maximum . the name record of this row is yotam halperin . | 3 | 3 | {'str_eq_2': 2, 'result_3': 3, 'str_hop_1': 1, 'argmax_0': 0, 'all_rows_4': 4, 'assists_5': 5, 'name_6': 6, 'yotam halperin_7': 7} | {'str_eq_2': 'str_eq', 'result_3': 'true', 'str_hop_1': 'str_hop', 'argmax_0': 'argmax', 'all_rows_4': 'all_rows', 'assists_5': 'assists', 'name_6': 'name', 'yotam halperin_7': 'yotam halperin'} | {'str_eq_2': [3], 'result_3': [], 'str_hop_1': [2], 'argmax_0': [1], 'all_rows_4': [0], 'assists_5': [0], 'name_6': [1], 'yotam halperin_7': [2]} | ['rank', 'name', 'team', 'games', 'assists'] | [['1', 'yotam halperin', 'maccabi tel aviv', '3', '16'], ['2', 'theodoros papaloukas', 'cska moscow', '3', '15'], ['2', 'lynn greer', 'olympiacos', '3', '15'], ['4', 'terrell mcintyre', 'montepaschi siena', '2', '10'], ['5', 'bootsy thornton', 'montepaschi siena', '2', '8']] |
1985 in film | https://en.wikipedia.org/wiki/1985_in_film | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-171615-1.html.csv | count | during 1985 in film , among the films produced on universal studio , 2 of them had gross over 100,000,000 . | {'scope': 'subset', 'criterion': 'greater_than', 'value': '100000000', 'result': '2', 'col': '5', 'subset': {'col': '3', 'criterion': 'equal', 'value': 'universal'}} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_greater', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'studio', 'universal'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; studio ; universal }', 'tointer': 'select the rows whose studio record fuzzily matches to universal .'}, 'gross', '100000000'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose studio record fuzzily matches to universal . among these rows , select the rows whose gross record is greater than 100000000 .', 'tostr': 'filter_greater { filter_eq { all_rows ; studio ; universal } ; gross ; 100000000 }'}], 'result': '2', 'ind': 2, 'tostr': 'count { filter_greater { filter_eq { all_rows ; studio ; universal } ; gross ; 100000000 } }', 'tointer': 'select the rows whose studio record fuzzily matches to universal . among these rows , select the rows whose gross record is greater than 100000000 . the number of such rows is 2 .'}, '2'], 'result': True, 'ind': 3, 'tostr': 'eq { count { filter_greater { filter_eq { all_rows ; studio ; universal } ; gross ; 100000000 } } ; 2 } = true', 'tointer': 'select the rows whose studio record fuzzily matches to universal . among these rows , select the rows whose gross record is greater than 100000000 . the number of such rows is 2 .'} | eq { count { filter_greater { filter_eq { all_rows ; studio ; universal } ; gross ; 100000000 } } ; 2 } = true | select the rows whose studio record fuzzily matches to universal . among these rows , select the rows whose gross record is greater than 100000000 . the number of such rows is 2 . 
| 4 | 4 | {'eq_3': 3, 'result_4': 4, 'count_2': 2, 'filter_greater_1': 1, 'filter_str_eq_0': 0, 'all_rows_5': 5, 'studio_6': 6, 'universal_7': 7, 'gross_8': 8, '100000000_9': 9, '2_10': 10} | {'eq_3': 'eq', 'result_4': 'true', 'count_2': 'count', 'filter_greater_1': 'filter_greater', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_5': 'all_rows', 'studio_6': 'studio', 'universal_7': 'universal', 'gross_8': 'gross', '100000000_9': '100000000', '2_10': '2'} | {'eq_3': [4], 'result_4': [], 'count_2': [3], 'filter_greater_1': [2], 'filter_str_eq_0': [1], 'all_rows_5': [0], 'studio_6': [0], 'universal_7': [0], 'gross_8': [1], '100000000_9': [1], '2_10': [3]} | ['rank', 'title', 'studio', 'director', 'gross'] | [['1', 'back to the future', 'universal', 'robert zemeckis', '215000000'], ['2', 'rocky iv', 'united artists', 'sylvester stallone', '127873716'], ['3', 'rambo : first blood part ii', 'tri - star / carolco', 'george pan cosmatos', '150415432'], ['4', 'out of africa', 'universal', 'sydney pollack', '128499205'], ['5', 'the color purple', 'warner bros', 'steven spielberg', '98467863'], ['6', 'the jewel of the nile', '20th century fox', 'lewis teague', '96773200'], ['7', 'cocoon', '20th century fox', 'ron howard', '85313124'], ['8', 'witness', 'paramount', 'peter weir', '68706993'], ['9', 'police academy 2 : their first assignment', 'warner bros / ladd', 'jerry paris', '61600000'], ['10', 'the goonies', 'warner bros', 'richard donner', '61389680'], ['11', 'spies like us', 'warner bros', 'john landis', '60088980'], ['12', 'a view to a kill', 'united artists', 'john glen', '50300000'], ['13', 'fletch', 'universal', 'michael ritchie', '50600000'], ['14', "national lampoon 's european vacation", 'warner bros', 'amy heckerling', '49364621'], ['15', 'mask', 'universal', 'peter bogdanovich', '48230162'], ['16', "brewster 's millions", 'universal', 'walter hill', '45833132'], ['17', 'the breakfast club', 'universal', 'john hughes', '45000000'], ['18', 'white nights', 'columbia', 'taylor hackford', '42160849'], ['19', 'pale rider', 'warner bros', 'clint eastwood', '41410568'], ['20', "pee - wee 's big adventure", 'warner bros', 'tim burton', '40940662']] |
list of multiple barrel firearms | https://en.wikipedia.org/wiki/List_of_multiple_barrel_firearms | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-29474407-11.html.csv | unique | of the multiple barrel firearms , the only one whose country of origin is colombia , is the saturn machine pistol . | {'scope': 'all', 'row': '7', 'col': '3', 'col_other': '1', 'criterion': 'equal', 'value': 'colombia', 'subset': None} | {'func': 'and', 'args': [{'func': 'only', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'country of origin', 'colombia'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose country of origin record fuzzily matches to colombia .', 'tostr': 'filter_eq { all_rows ; country of origin ; colombia }'}], 'result': True, 'ind': 1, 'tostr': 'only { filter_eq { all_rows ; country of origin ; colombia } }', 'tointer': 'select the rows whose country of origin record fuzzily matches to colombia . there is only one such row in the table .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'country of origin', 'colombia'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose country of origin record fuzzily matches to colombia .', 'tostr': 'filter_eq { all_rows ; country of origin ; colombia }'}, 'name / designation'], 'result': 'saturn machine pistol', 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; country of origin ; colombia } ; name / designation }'}, 'saturn machine pistol'], 'result': True, 'ind': 3, 'tostr': 'eq { hop { filter_eq { all_rows ; country of origin ; colombia } ; name / designation } ; saturn machine pistol }', 'tointer': 'the name / designation record of this unqiue row is saturn machine pistol .'}], 'result': True, 'ind': 4, 'tostr': 'and { only { filter_eq { all_rows ; country of origin ; colombia } } ; eq { hop { filter_eq { all_rows ; country of origin ; colombia } ; name / designation } ; saturn machine pistol } } = true', 'tointer': 'select the rows whose country of origin record fuzzily matches to colombia . there is only one such row in the table . the name / designation record of this unqiue row is saturn machine pistol .'} | and { only { filter_eq { all_rows ; country of origin ; colombia } } ; eq { hop { filter_eq { all_rows ; country of origin ; colombia } ; name / designation } ; saturn machine pistol } } = true | select the rows whose country of origin record fuzzily matches to colombia . there is only one such row in the table . the name / designation record of this unqiue row is saturn machine pistol . 
| 6 | 5 | {'and_4': 4, 'result_5': 5, 'only_1': 1, 'filter_str_eq_0': 0, 'all_rows_6': 6, 'country of origin_7': 7, 'colombia_8': 8, 'str_eq_3': 3, 'str_hop_2': 2, 'name / designation_9': 9, 'saturn machine pistol_10': 10} | {'and_4': 'and', 'result_5': 'true', 'only_1': 'only', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_6': 'all_rows', 'country of origin_7': 'country of origin', 'colombia_8': 'colombia', 'str_eq_3': 'str_eq', 'str_hop_2': 'str_hop', 'name / designation_9': 'name / designation', 'saturn machine pistol_10': 'saturn machine pistol'} | {'and_4': [5], 'result_5': [], 'only_1': [4], 'filter_str_eq_0': [1, 2], 'all_rows_6': [0], 'country of origin_7': [0], 'colombia_8': [0], 'str_eq_3': [4], 'str_hop_2': [3], 'name / designation_9': [2], 'saturn machine pistol_10': [3]} | ['name / designation', 'year of intro', 'country of origin', 'primary cartridge', 'major users'] | [['csmg', '2000', 'belgium', '9x19 mm parabellum 22 mm grenade', 'n / a'], ['flieger - doppelpistole 1919', '1919', 'switzerland', '7.65 x21 mm parabellum', 'n / a'], ['gordon close - support weapon system', '1972', 'australia', '9x19 mm parabellum', 'n / a'], ['itm model 4', '1990', 'united states', '9x19 mm parabellum', 'n / a'], ['neal submachine gun', '1948', 'united states', '22lr', 'n / a'], ['onorati smg', '1935', 'italy', '9x19 mm parabellum', 'n / a'], ['saturn machine pistol', '1985', 'colombia', '22lr', 'n / a'], ['serlea', '1990', 'lebanon', '9x19 mm parabellum', 'n / a']] |
list of tallest buildings in slovenia | https://en.wikipedia.org/wiki/List_of_tallest_buildings_in_Slovenia | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-15062421-2.html.csv | count | three of the tallest buildings in slovenia were proposed in 2010 . | {'scope': 'all', 'criterion': 'equal', 'value': '2010', 'result': '3', 'col': '6', 'subset': None} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_eq', 'args': ['all_rows', 'year proposed', '2010'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose year proposed record is equal to 2010 .', 'tostr': 'filter_eq { all_rows ; year proposed ; 2010 }'}], 'result': '3', 'ind': 1, 'tostr': 'count { filter_eq { all_rows ; year proposed ; 2010 } }', 'tointer': 'select the rows whose year proposed record is equal to 2010 . the number of such rows is 3 .'}, '3'], 'result': True, 'ind': 2, 'tostr': 'eq { count { filter_eq { all_rows ; year proposed ; 2010 } } ; 3 } = true', 'tointer': 'select the rows whose year proposed record is equal to 2010 . the number of such rows is 3 .'} | eq { count { filter_eq { all_rows ; year proposed ; 2010 } } ; 3 } = true | select the rows whose year proposed record is equal to 2010 . the number of such rows is 3 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'count_1': 1, 'filter_eq_0': 0, 'all_rows_4': 4, 'year proposed_5': 5, '2010_6': 6, '3_7': 7} | {'eq_2': 'eq', 'result_3': 'true', 'count_1': 'count', 'filter_eq_0': 'filter_eq', 'all_rows_4': 'all_rows', 'year proposed_5': 'year proposed', '2010_6': '2010', '3_7': '3'} | {'eq_2': [3], 'result_3': [], 'count_1': [2], 'filter_eq_0': [1], 'all_rows_4': [0], 'year proposed_5': [0], '2010_6': [0], '3_7': [2]} | ['rank', 'name', 'location', 'height feet / metres', 'floors', 'year proposed'] | [['1', 'dravska vrata', 'maribor', '112 m', 'n / a', 'n / a'], ['2', 'emonika', 'ljubljana', '107 m', 'n / a', '27'], ['3', 'kolizej centre', 'ljubljana', '75 m', '18', '2010'], ['4', 'severna mestna vrata', 'ljubljana', '2 x 72 m', '22', '2010'], ['5', 'hotel plaza , ljubljana', 'ljubljana', '60 m', '15', '2010']] |
1985 pga tour | https://en.wikipedia.org/wiki/1985_PGA_Tour | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-14640372-4.html.csv | count | the united states was the country of 5 players on the 1985 pga tour . | {'scope': 'all', 'criterion': 'equal', 'value': 'united states', 'result': '5', 'col': '3', 'subset': None} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'country', 'united states'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose country record fuzzily matches to united states .', 'tostr': 'filter_eq { all_rows ; country ; united states }'}], 'result': '5', 'ind': 1, 'tostr': 'count { filter_eq { all_rows ; country ; united states } }', 'tointer': 'select the rows whose country record fuzzily matches to united states . the number of such rows is 5 .'}, '5'], 'result': True, 'ind': 2, 'tostr': 'eq { count { filter_eq { all_rows ; country ; united states } } ; 5 } = true', 'tointer': 'select the rows whose country record fuzzily matches to united states . the number of such rows is 5 .'} | eq { count { filter_eq { all_rows ; country ; united states } } ; 5 } = true | select the rows whose country record fuzzily matches to united states . the number of such rows is 5 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'count_1': 1, 'filter_str_eq_0': 0, 'all_rows_4': 4, 'country_5': 5, 'united states_6': 6, '5_7': 7} | {'eq_2': 'eq', 'result_3': 'true', 'count_1': 'count', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_4': 'all_rows', 'country_5': 'country', 'united states_6': 'united states', '5_7': '5'} | {'eq_2': [3], 'result_3': [], 'count_1': [2], 'filter_str_eq_0': [1], 'all_rows_4': [0], 'country_5': [0], 'united states_6': [0], '5_7': [2]} | ['rank', 'player', 'country', 'earnings', 'wins'] | [['1', 'jack nicklaus', 'united states', '4686280', '72'], ['2', 'tom watson', 'united states', '3806940', '36'], ['3', 'lee trevino', 'united states', '3177975', '29'], ['4', 'raymond floyd', 'united states', '2868951', '19'], ['5', 'hale irwin', 'united states', '2751050', '17']] |
rural community | https://en.wikipedia.org/wiki/Rural_community | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-26321719-1.html.csv | unique | saint - andré is the only rural community located in new brunswick that has a population under 1000 in 2006 . | {'scope': 'all', 'row': '4', 'col': '3', 'col_other': '1', 'criterion': 'less_than', 'value': '1000', 'subset': None} | {'func': 'and', 'args': [{'func': 'only', 'args': [{'func': 'filter_less', 'args': ['all_rows', 'population ( 2006 )', '1000'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose population ( 2006 ) record is less than 1000 .', 'tostr': 'filter_less { all_rows ; population ( 2006 ) ; 1000 }'}], 'result': True, 'ind': 1, 'tostr': 'only { filter_less { all_rows ; population ( 2006 ) ; 1000 } }', 'tointer': 'select the rows whose population ( 2006 ) record is less than 1000 . there is only one such row in the table .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_less', 'args': ['all_rows', 'population ( 2006 )', '1000'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose population ( 2006 ) record is less than 1000 .', 'tostr': 'filter_less { all_rows ; population ( 2006 ) ; 1000 }'}, 'name'], 'result': 'saint - andré', 'ind': 2, 'tostr': 'hop { filter_less { all_rows ; population ( 2006 ) ; 1000 } ; name }'}, 'saint - andré'], 'result': True, 'ind': 3, 'tostr': 'eq { hop { filter_less { all_rows ; population ( 2006 ) ; 1000 } ; name } ; saint - andré }', 'tointer': 'the name record of this unqiue row is saint - andré .'}], 'result': True, 'ind': 4, 'tostr': 'and { only { filter_less { all_rows ; population ( 2006 ) ; 1000 } } ; eq { hop { filter_less { all_rows ; population ( 2006 ) ; 1000 } ; name } ; saint - andré } } = true', 'tointer': 'select the rows whose population ( 2006 ) record is less than 1000 . there is only one such row in the table . the name record of this unqiue row is saint - andré .'} | and { only { filter_less { all_rows ; population ( 2006 ) ; 1000 } } ; eq { hop { filter_less { all_rows ; population ( 2006 ) ; 1000 } ; name } ; saint - andré } } = true | select the rows whose population ( 2006 ) record is less than 1000 . there is only one such row in the table . the name record of this unqiue row is saint - andré . | 6 | 5 | {'and_4': 4, 'result_5': 5, 'only_1': 1, 'filter_less_0': 0, 'all_rows_6': 6, 'population (2006)_7': 7, '1000_8': 8, 'str_eq_3': 3, 'str_hop_2': 2, 'name_9': 9, 'saint - andré_10': 10} | {'and_4': 'and', 'result_5': 'true', 'only_1': 'only', 'filter_less_0': 'filter_less', 'all_rows_6': 'all_rows', 'population (2006)_7': 'population ( 2006 )', '1000_8': '1000', 'str_eq_3': 'str_eq', 'str_hop_2': 'str_hop', 'name_9': 'name', 'saint - andré_10': 'saint - andré'} | {'and_4': [5], 'result_5': [], 'only_1': [4], 'filter_less_0': [1, 2], 'all_rows_6': [0], 'population (2006)_7': [0], '1000_8': [0], 'str_eq_3': [4], 'str_hop_2': [3], 'name_9': [2], 'saint - andré_10': [3]} | ['name', 'population ( 2011 )', 'population ( 2006 )', 'change ( % )', 'area ( km square )', 'population density'] | [['beaubassin east', '6200', '6429', '- 3.6', '291.12', '21.3'], ['campobello island', '925', '1056', '- 12.4', '39.67', '23.3'], ['kedgwick', '993', '1146', '- 13.4', '4.28', '232.2'], ['saint - andré', '819', '868', '- 5.6', '8.12', '100.8'], ['upper miramichi', '2373', '2414', '- 1.7', '1835.01', '1.3']] |
1912 summer olympics | https://en.wikipedia.org/wiki/1912_Summer_Olympics | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-180200-1.html.csv | superlative | the united states , at 25 gold medals , had the highest number at the 1912 summer olympics . | {'scope': 'all', 'col_superlative': '3', 'row_superlative': '1', 'value_mentioned': 'no', 'max_or_min': 'max', 'other_col': '2', 'subset': None} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'argmax', 'args': ['all_rows', 'gold'], 'result': None, 'ind': 0, 'tostr': 'argmax { all_rows ; gold }'}, 'nation'], 'result': 'united states', 'ind': 1, 'tostr': 'hop { argmax { all_rows ; gold } ; nation }'}, 'united states'], 'result': True, 'ind': 2, 'tostr': 'eq { hop { argmax { all_rows ; gold } ; nation } ; united states } = true', 'tointer': 'select the row whose gold record of all rows is maximum . the nation record of this row is united states .'} | eq { hop { argmax { all_rows ; gold } ; nation } ; united states } = true | select the row whose gold record of all rows is maximum . the nation record of this row is united states . | 3 | 3 | {'str_eq_2': 2, 'result_3': 3, 'str_hop_1': 1, 'argmax_0': 0, 'all_rows_4': 4, 'gold_5': 5, 'nation_6': 6, 'united states_7': 7} | {'str_eq_2': 'str_eq', 'result_3': 'true', 'str_hop_1': 'str_hop', 'argmax_0': 'argmax', 'all_rows_4': 'all_rows', 'gold_5': 'gold', 'nation_6': 'nation', 'united states_7': 'united states'} | {'str_eq_2': [3], 'result_3': [], 'str_hop_1': [2], 'argmax_0': [1], 'all_rows_4': [0], 'gold_5': [0], 'nation_6': [1], 'united states_7': [2]} | ['rank', 'nation', 'gold', 'silver', 'bronze', 'total'] | [['1', 'united states', '25', '19', '19', '63'], ['2', 'sweden ( host nation )', '24', '24', '17', '65'], ['3', 'great britain', '10', '15', '16', '41'], ['4', 'finland', '9', '8', '9', '26'], ['5', 'france', '7', '4', '3', '14'], ['6', 'germany', '5', '13', '7', '25'], ['7', 'south africa', '4', '2', '0', '6'], ['8', 'norway', '4', '1', '4', '9'], ['9', 'canada', '3', '2', '3', '8'], ['9', 'hungary', '3', '2', '3', '8']] |
2005 - 06 new york rangers season | https://en.wikipedia.org/wiki/2005%E2%80%9306_New_York_Rangers_season | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-14383093-16.html.csv | majority | in the new york rangers 2005 - 06 season , most of the players playing position d were from canada . | {'scope': 'subset', 'col': '4', 'most_or_all': 'most', 'criterion': 'equal', 'value': 'canada', 'subset': {'col': '3', 'criterion': 'equal', 'value': 'd'}} | {'func': 'most_str_eq', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'position', 'd'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; position ; d }', 'tointer': 'select the rows whose position record fuzzily matches to d .'}, 'nationality', 'canada'], 'result': True, 'ind': 1, 'tointer': 'select the rows whose position record fuzzily matches to d . for the nationality records of these rows , most of them fuzzily match to canada .', 'tostr': 'most_eq { filter_eq { all_rows ; position ; d } ; nationality ; canada } = true'} | most_eq { filter_eq { all_rows ; position ; d } ; nationality ; canada } = true | select the rows whose position record fuzzily matches to d . for the nationality records of these rows , most of them fuzzily match to canada . | 2 | 2 | {'most_str_eq_1': 1, 'result_2': 2, 'filter_str_eq_0': 0, 'all_rows_3': 3, 'position_4': 4, 'd_5': 5, 'nationality_6': 6, 'canada_7': 7} | {'most_str_eq_1': 'most_str_eq', 'result_2': 'true', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_3': 'all_rows', 'position_4': 'position', 'd_5': 'd', 'nationality_6': 'nationality', 'canada_7': 'canada'} | {'most_str_eq_1': [2], 'result_2': [], 'filter_str_eq_0': [1], 'all_rows_3': [0], 'position_4': [0], 'd_5': [0], 'nationality_6': [1], 'canada_7': [1]} | ['round', 'player', 'position', 'nationality', 'college / junior / club team ( league )'] | [['1', 'marc staal', 'd', 'canada', 'sudbury wolves ( ohl )'], ['2', 'michael sauer', 'd', 'united states', 'portland winter hawks ( whl )'], ['2', 'marc - andre cliche', 'rw', 'canada', 'lewiston maineiacs ( qmjhl )'], ['3', 'brodie dupont', 'c', 'canada', 'calgary hitmen ( whl )'], ['3', 'dalyn flatt', 'd', 'canada', 'saskatoon blades ( whl )'], ['4', 'tom pyatt', 'c', 'canada', 'saginaw spirit ( ohl )'], ['5', 'trevor koverko', 'd', 'canada', 'owen sound attack ( ohl )'], ['6', 'greg beller', 'f', 'canada', 'lake of the woods ( minnesota )'], ['7', 'ryan russell', 'c', 'canada', 'kootenay ice ( whl )']] |
2005 open championship | https://en.wikipedia.org/wiki/2005_Open_Championship | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-16225902-7.html.csv | majority | most of the players ended up with 7 to par . | {'scope': 'all', 'col': '5', 'most_or_all': 'most', 'criterion': 'equal', 'value': '7', 'subset': None} | {'func': 'most_eq', 'args': ['all_rows', 'to par', '7'], 'result': True, 'ind': 0, 'tointer': 'for the to par records of all rows , most of them are equal to 7 .', 'tostr': 'most_eq { all_rows ; to par ; 7 } = true'} | most_eq { all_rows ; to par ; 7 } = true | for the to par records of all rows , most of them are equal to 7 . | 1 | 1 | {'most_eq_0': 0, 'result_1': 1, 'all_rows_2': 2, 'to par_3': 3, '7_4': 4} | {'most_eq_0': 'most_eq', 'result_1': 'true', 'all_rows_2': 'all_rows', 'to par_3': 'to par', '7_4': '7'} | {'most_eq_0': [1], 'result_1': [], 'all_rows_2': [0], 'to par_3': [0], '7_4': [0]} | ['place', 'player', 'country', 'score', 'to par', 'money'] | [['1', 'tiger woods', 'united states', '66 + 67 + 71 + 70 = 274', '14', '720000'], ['2', 'colin montgomerie', 'scotland', '71 + 66 + 70 + 72 = 279', '9', '430000'], ['t3', 'fred couples', 'united states', '68 + 71 + 73 + 68 = 280', '8', '242350'], ['t3', 'josé maría olazábal', 'spain', '68 + 70 + 68 + 74 = 280', '8', '242350'], ['t5', 'michael campbell', 'new zealand', '69 + 72 + 68 + 72 = 281', '7', '122100'], ['t5', 'sergio garcía', 'spain', '70 + 69 + 69 + 73 = 281', '7', '122100'], ['t5', 'retief goosen', 'south africa', '68 + 73 + 66 + 74 = 281', '7', '122100'], ['t5', 'bernhard langer', 'germany', '71 + 69 + 70 + 71 = 281', '7', '122100'], ['t5', 'geoff ogilvy', 'australia', '71 + 74 + 67 + 69 = 281', '7', '122100'], ['t5', 'vijay singh', 'fiji', '69 + 69 + 71 + 72 = 281', '7', '122100']] |
2000 - 01 prva hnl | https://en.wikipedia.org/wiki/2000%E2%80%9301_Prva_HNL | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-17573976-1.html.csv | ordinal | stadion gradski vrt has the 3rd largest capacity of all stadiums played at during the 2000 - 01 season . | {'row': '7', 'col': '5', 'order': '3', 'col_other': '4', 'max_or_min': 'max_to_min', 'value_mentioned': 'no', 'scope': 'all', 'subset': None} | {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'nth_argmax', 'args': ['all_rows', 'capacity', '3'], 'result': None, 'ind': 0, 'tostr': 'nth_argmax { all_rows ; capacity ; 3 }'}, 'stadium'], 'result': 'stadion gradski vrt', 'ind': 1, 'tostr': 'hop { nth_argmax { all_rows ; capacity ; 3 } ; stadium }'}, 'stadion gradski vrt'], 'result': True, 'ind': 2, 'tostr': 'eq { hop { nth_argmax { all_rows ; capacity ; 3 } ; stadium } ; stadion gradski vrt } = true', 'tointer': 'select the row whose capacity record of all rows is 3rd maximum . the stadium record of this row is stadion gradski vrt .'} | eq { hop { nth_argmax { all_rows ; capacity ; 3 } ; stadium } ; stadion gradski vrt } = true | select the row whose capacity record of all rows is 3rd maximum . the stadium record of this row is stadion gradski vrt . | 3 | 3 | {'str_eq_2': 2, 'result_3': 3, 'str_hop_1': 1, 'nth_argmax_0': 0, 'all_rows_4': 4, 'capacity_5': 5, '3_6': 6, 'stadium_7': 7, 'stadion gradski vrt_8': 8} | {'str_eq_2': 'str_eq', 'result_3': 'true', 'str_hop_1': 'str_hop', 'nth_argmax_0': 'nth_argmax', 'all_rows_4': 'all_rows', 'capacity_5': 'capacity', '3_6': '3', 'stadium_7': 'stadium', 'stadion gradski vrt_8': 'stadion gradski vrt'} | {'str_eq_2': [3], 'result_3': [], 'str_hop_1': [2], 'nth_argmax_0': [1], 'all_rows_4': [0], 'capacity_5': [0], '3_6': [0], 'stadium_7': [1], 'stadion gradski vrt_8': [2]} | ['team', 'manager', 'home city', 'stadium', 'capacity'] | [['cibalia', 'davor mladina', 'vinkovci', 'stadion hnk cibalia', '9920'], ['čakovec', 'ilija lončarević', 'čakovec', 'stadion src mladost', '8000'], ['dinamo zagreb', 'marijan vlak', 'zagreb', 'stadion maksimir', '37168'], ['hajduk split', 'petar nadoveza', 'split', 'gradski stadion u poljudu', '35000'], ['hrvatski dragovoljac', 'milivoj bračun', 'zagreb', 'stadion nšc stjepan spajić', '5000'], ['marsonia', 'stjepan deverić', 'slavonski brod', 'gradski stadion uz savu', '10000'], ['osijek', 'stanko mršić', 'osijek', 'stadion gradski vrt', '19500'], ['rijeka', 'nenad gračan', 'rijeka', 'stadion na kantridi', '10275'], ['slaven belupo', 'mladen frančić', 'koprivnica', 'gradski stadion u koprivnici', '4000'], ['šibenik', 'milo nižetić', 'šibenik', 'stadion šubićevac', '8000'], ['varteks', 'ivan katalinić', 'varaždin', 'stadion nk varteks', '10800'], ['nk zagreb', 'branko karačić', 'zagreb', 'stadion u kranjčevićevoj ulici', '8850']] |
roberta vinci | https://en.wikipedia.org/wiki/Roberta_Vinci | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-1818978-1.html.csv | count | two of these events were part of the french open championship . | {'scope': 'all', 'criterion': 'equal', 'value': 'french open', 'result': '2', 'col': '3', 'subset': None} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'championship', 'french open'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose championship record fuzzily matches to french open .', 'tostr': 'filter_eq { all_rows ; championship ; french open }'}], 'result': '2', 'ind': 1, 'tostr': 'count { filter_eq { all_rows ; championship ; french open } }', 'tointer': 'select the rows whose championship record fuzzily matches to french open . the number of such rows is 2 .'}, '2'], 'result': True, 'ind': 2, 'tostr': 'eq { count { filter_eq { all_rows ; championship ; french open } } ; 2 } = true', 'tointer': 'select the rows whose championship record fuzzily matches to french open . the number of such rows is 2 .'} | eq { count { filter_eq { all_rows ; championship ; french open } } ; 2 } = true | select the rows whose championship record fuzzily matches to french open . the number of such rows is 2 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'count_1': 1, 'filter_str_eq_0': 0, 'all_rows_4': 4, 'championship_5': 5, 'french open_6': 6, '2_7': 7} | {'eq_2': 'eq', 'result_3': 'true', 'count_1': 'count', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_4': 'all_rows', 'championship_5': 'championship', 'french open_6': 'french open', '2_7': '2'} | {'eq_2': [3], 'result_3': [], 'count_1': [2], 'filter_str_eq_0': [1], 'all_rows_4': [0], 'championship_5': [0], 'french open_6': [0], '2_7': [2]} | ['outcome', 'year', 'championship', 'surface', 'partner', 'opponents', 'score'] | [['runner - up', '2012', 'australian open', 'hard', 'sara errani', 'svetlana kuznetsova vera zvonareva', '5 - 7 , 6 - 4 , 6 - 3'], ['winner', '2012', 'french open', 'clay', 'sara errani', 'maria kirilenko nadia petrova', '4 - 6 , 6 - 4 , 6 - 2'], ['winner', '2012', 'us open', 'hard', 'sara errani', 'andrea hlaváčková lucie hradecká', '6 - 4 , 6 - 2'], ['winner', '2013', 'australian open', 'hard', 'sara errani', 'ashleigh barty casey dellacqua', '6 - 2 , 3 - 6 , 6 - 2'], ['runner - up', '2013', 'french open', 'clay', 'sara errani', 'ekaterina makarova elena vesnina', '5 - 7 , 2 - 6']] |
charleston southern buccaneers | https://en.wikipedia.org/wiki/Charleston_Southern_Buccaneers | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-26240481-1.html.csv | majority | jay mills was the charleston southerns head coach for the majority of years . | {'scope': 'all', 'col': '6', 'most_or_all': 'most', 'criterion': 'equal', 'value': 'jay mills', 'subset': None} | {'func': 'most_str_eq', 'args': ['all_rows', 'charleston southerns head coach', 'jay mills'], 'result': True, 'ind': 0, 'tointer': 'for the charleston southerns head coach records of all rows , most of them fuzzily match to jay mills .', 'tostr': 'most_eq { all_rows ; charleston southerns head coach ; jay mills } = true'} | most_eq { all_rows ; charleston southerns head coach ; jay mills } = true | for the charleston southerns head coach records of all rows , most of them fuzzily match to jay mills . | 1 | 1 | {'most_str_eq_0': 0, 'result_1': 1, 'all_rows_2': 2, 'charleston southerns head coach_3': 3, 'jay mills_4': 4} | {'most_str_eq_0': 'most_str_eq', 'result_1': 'true', 'all_rows_2': 'all_rows', 'charleston southerns head coach_3': 'charleston southerns head coach', 'jay mills_4': 'jay mills'} | {'most_str_eq_0': [1], 'result_1': [], 'all_rows_2': [0], 'charleston southerns head coach_3': [0], 'jay mills_4': [0]} | ['year', 'fbs opponent', 'result', 'opponents conference', 'opponents head coach', 'charleston southerns head coach'] | [['2014', 'georgia bulldogs', 'tbd', 'sec', 'mark richt as of 2011', 'jay mills as of 2011'], ['2012', 'illinois fighting illini', 'tbd', 'big ten', 'tim beckman as of 2012', 'jay mills as of 2011'], ['2011', 'ucf knights', 'l , 62 - 0', 'c - usa', "george o'leary", 'jay mills'], ['2011', 'florida state seminoles', 'l , 62 - 10', 'acc', 'jimbo fisher', 'jay mills'], ['2010', 'kentucky wildcats', 'l , 49 - 21', 'sec', 'joker phillips', 'jay mills'], ['2010', 'hawaii warriors', 'l , 66 - 7', 'wac', 'greg mcmackin', 'jay mills'], ['2009', 'south florida bulls', 'l , 59 - 0', 'big east', 'jim leavitt', 'jay mills'], ['2009', 'florida gators', 'l , 62 - 3', 'sec', 'urban meyer', 'jay mills'], ['2008', 'miami redhawks', 'l , 38 - 27', 'mac', 'don treadwell', 'jay mills'], ['2008', 'miami hurricanes', 'l , 52 - 7', 'acc', 'randy shannon', 'jay mills'], ['2007', 'hawaii warriors', 'l , 66 - 10', 'wac', 'june jones', 'jay mills'], ['2003', 'south florida bulls', 'l , 55 - 7', 'big east', 'jim leavitt', 'jay mills'], ['2002', 'south florida bulls', 'l , 56 - 6', 'big east', 'jim leavitt', 'david dowd']] |
2007 - 08 belize premier football league | https://en.wikipedia.org/wiki/2007%E2%80%9308_Belize_Premier_Football_League | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-13713206-1.html.csv | comparative | hankook verdes united had less goals for than fc belize . | {'row_1': '1', 'row_2': '2', 'col': '5', 'col_other': '2', 'relation': 'less', 'record_mentioned': 'no', 'diff_result': None} | {'func': 'less', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'club ( city / town )', 'hankook verdes united ( san ignacio )'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose club ( city / town ) record fuzzily matches to hankook verdes united ( san ignacio ) .', 'tostr': 'filter_eq { all_rows ; club ( city / town ) ; hankook verdes united ( san ignacio ) }'}, 'goals for / against'], 'result': None, 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; club ( city / town ) ; hankook verdes united ( san ignacio ) } ; goals for / against }', 'tointer': 'select the rows whose club ( city / town ) record fuzzily matches to hankook verdes united ( san ignacio ) . take the goals for / against record of this row .'}, {'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'club ( city / town )', 'fc belize ( belize city )'], 'result': None, 'ind': 1, 'tointer': 'select the rows whose club ( city / town ) record fuzzily matches to fc belize ( belize city ) .', 'tostr': 'filter_eq { all_rows ; club ( city / town ) ; fc belize ( belize city ) }'}, 'goals for / against'], 'result': None, 'ind': 3, 'tostr': 'hop { filter_eq { all_rows ; club ( city / town ) ; fc belize ( belize city ) } ; goals for / against }', 'tointer': 'select the rows whose club ( city / town ) record fuzzily matches to fc belize ( belize city ) . take the goals for / against record of this row .'}], 'result': True, 'ind': 4, 'tostr': 'less { hop { filter_eq { all_rows ; club ( city / town ) ; hankook verdes united ( san ignacio ) } ; goals for / against } ; hop { filter_eq { all_rows ; club ( city / town ) ; fc belize ( belize city ) } ; goals for / against } } = true', 'tointer': 'select the rows whose club ( city / town ) record fuzzily matches to hankook verdes united ( san ignacio ) . take the goals for / against record of this row . select the rows whose club ( city / town ) record fuzzily matches to fc belize ( belize city ) . take the goals for / against record of this row . the first record is less than the second record .'} | less { hop { filter_eq { all_rows ; club ( city / town ) ; hankook verdes united ( san ignacio ) } ; goals for / against } ; hop { filter_eq { all_rows ; club ( city / town ) ; fc belize ( belize city ) } ; goals for / against } } = true | select the rows whose club ( city / town ) record fuzzily matches to hankook verdes united ( san ignacio ) . take the goals for / against record of this row . select the rows whose club ( city / town ) record fuzzily matches to fc belize ( belize city ) . take the goals for / against record of this row . the first record is less than the second record . 
| 5 | 5 | {'less_4': 4, 'result_5': 5, 'str_hop_2': 2, 'filter_str_eq_0': 0, 'all_rows_6': 6, 'club (city / town)_7': 7, 'hankook verdes united ( san ignacio )_8': 8, 'goals for / against_9': 9, 'str_hop_3': 3, 'filter_str_eq_1': 1, 'all_rows_10': 10, 'club (city / town)_11': 11, 'fc belize ( belize city )_12': 12, 'goals for / against_13': 13} | {'less_4': 'less', 'result_5': 'true', 'str_hop_2': 'str_hop', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_6': 'all_rows', 'club (city / town)_7': 'club ( city / town )', 'hankook verdes united ( san ignacio )_8': 'hankook verdes united ( san ignacio )', 'goals for / against_9': 'goals for / against', 'str_hop_3': 'str_hop', 'filter_str_eq_1': 'filter_str_eq', 'all_rows_10': 'all_rows', 'club (city / town)_11': 'club ( city / town )', 'fc belize ( belize city )_12': 'fc belize ( belize city )', 'goals for / against_13': 'goals for / against'} | {'less_4': [5], 'result_5': [], 'str_hop_2': [4], 'filter_str_eq_0': [2], 'all_rows_6': [0], 'club (city / town)_7': [0], 'hankook verdes united ( san ignacio )_8': [0], 'goals for / against_9': [2], 'str_hop_3': [4], 'filter_str_eq_1': [3], 'all_rows_10': [1], 'club (city / town)_11': [1], 'fc belize ( belize city )_12': [1], 'goals for / against_13': [3]} | ['position', 'club ( city / town )', 'games played', 'w - l - d', 'goals for / against', 'points'] | [['1', 'hankook verdes united ( san ignacio )', '16', '7 - 3 - 6', '26 - 17', '27'], ['2', 'fc belize ( belize city )', '16', '8 - 5 - 3', '29 - 22', '27'], ['3', 'wagiya ( dangriga )', '16', '7 - 4 - 5', '29 - 24', '26'], ['4', 'defence force ( belize city )', '16', '6 - 2 - 8', '18 - 14', '26'], ['5', 'san pedro dolphins ( san pedro town )', '16', '6 - 4 - 6', '20 - 18', '24'], ['6', 'georgetown ibayani ( independence )', '16', '5 - 7 - 4', '28 - 32', '19'], ['7', 'juventus ( orange walk , belize )', '16', '5 - 9 - 2', '31 - 32', '17'], ['8', 'revolutionary conquerors ( dangriga )', '16', '3 - 7 - 6', '25 - 32', '15'], ['9', "santel 's ( santa elena )", '16', '4 - 10 - 2', '16 - 31', '14']] |
2008 in paleontology | https://en.wikipedia.org/wiki/2008_in_paleontology | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-15688561-4.html.csv | count | a total of four of the fossils found in 2008 have gen et sp nov novelty . | {'scope': 'all', 'criterion': 'equal', 'value': 'gen et sp nov', 'result': '4', 'col': '2', 'subset': None} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'novelty', 'gen et sp nov'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose novelty record fuzzily matches to gen et sp nov .', 'tostr': 'filter_eq { all_rows ; novelty ; gen et sp nov }'}], 'result': '4', 'ind': 1, 'tostr': 'count { filter_eq { all_rows ; novelty ; gen et sp nov } }', 'tointer': 'select the rows whose novelty record fuzzily matches to gen et sp nov . the number of such rows is 4 .'}, '4'], 'result': True, 'ind': 2, 'tostr': 'eq { count { filter_eq { all_rows ; novelty ; gen et sp nov } } ; 4 } = true', 'tointer': 'select the rows whose novelty record fuzzily matches to gen et sp nov . the number of such rows is 4 .'} | eq { count { filter_eq { all_rows ; novelty ; gen et sp nov } } ; 4 } = true | select the rows whose novelty record fuzzily matches to gen et sp nov . the number of such rows is 4 . | 3 | 3 | {'eq_2': 2, 'result_3': 3, 'count_1': 1, 'filter_str_eq_0': 0, 'all_rows_4': 4, 'novelty_5': 5, 'gen et sp nov_6': 6, '4_7': 7} | {'eq_2': 'eq', 'result_3': 'true', 'count_1': 'count', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_4': 'all_rows', 'novelty_5': 'novelty', 'gen et sp nov_6': 'gen et sp nov', '4_7': '4'} | {'eq_2': [3], 'result_3': [], 'count_1': [2], 'filter_str_eq_0': [1], 'all_rows_4': [0], 'novelty_5': [0], 'gen et sp nov_6': [0], '4_7': [2]} | ['name', 'novelty', 'status', 'authors', 'location'] | [['formosibittacus', 'gen et sp nov', 'vaild', 'li , ren & shih', 'china'], ['haidomyrmodes', 'gen et sp nov', 'vaild', 'perrichot et al', 'france'], ['jurahylobittacus', 'gen et sp nov', 'valid', 'li , ren & shih', 'china'], ['lutzomyia adiketis', 'sp nov', 'valid', 'poinar', 'dominican republic'], ['proraphidia gomezi', 'sp nov', 'valid', 'jarzembowski', 'england'], ['proraphidia hopkinsi', 'sp nov', 'valid', 'jarzembowski', 'spain'], ['sinomeganeura', 'gen et sp nov', 'valid', 'ren , nel , & prokop', 'china'], ['syndesus ambericus', 'sp nov', 'vaild', 'woodruff', 'dominican republic'], ['termitaradus avitinquilinus', 'sp nov', 'vaild', 'grimaldi & engel', 'dominican republic']] |
2005 - 06 miami heat season | https://en.wikipedia.org/wiki/2005%E2%80%9306_Miami_Heat_season | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-13762472-13.html.csv | count | during the may 2005-2006 miami heat season , in games against detroit shaquille o'neal pulled down the most rebounds in 3 of the 5 games . | {'scope': 'subset', 'criterion': 'fuzzily_match', 'value': "shaquille o'neal", 'result': '3', 'col': '6', 'subset': {'col': '3', 'criterion': 'equal', 'value': 'detroit'}} | {'func': 'eq', 'args': [{'func': 'count', 'args': [{'func': 'filter_str_eq', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'team', 'detroit'], 'result': None, 'ind': 0, 'tostr': 'filter_eq { all_rows ; team ; detroit }', 'tointer': 'select the rows whose team record fuzzily matches to detroit .'}, 'high rebounds', "shaquille o'neal"], 'result': None, 'ind': 1, 'tointer': "select the rows whose team record fuzzily matches to detroit . among these rows , select the rows whose high rebounds record fuzzily matches to shaquille o'neal .", 'tostr': "filter_eq { filter_eq { all_rows ; team ; detroit } ; high rebounds ; shaquille o'neal }"}], 'result': '3', 'ind': 2, 'tostr': "count { filter_eq { filter_eq { all_rows ; team ; detroit } ; high rebounds ; shaquille o'neal } }", 'tointer': "select the rows whose team record fuzzily matches to detroit . among these rows , select the rows whose high rebounds record fuzzily matches to shaquille o'neal . the number of such rows is 3 ."}, '3'], 'result': True, 'ind': 3, 'tostr': "eq { count { filter_eq { filter_eq { all_rows ; team ; detroit } ; high rebounds ; shaquille o'neal } } ; 3 } = true", 'tointer': "select the rows whose team record fuzzily matches to detroit . among these rows , select the rows whose high rebounds record fuzzily matches to shaquille o'neal . the number of such rows is 3 ."} | eq { count { filter_eq { filter_eq { all_rows ; team ; detroit } ; high rebounds ; shaquille o'neal } } ; 3 } = true | select the rows whose team record fuzzily matches to detroit . among these rows , select the rows whose high rebounds record fuzzily matches to shaquille o'neal . the number of such rows is 3 . 
| 4 | 4 | {'eq_3': 3, 'result_4': 4, 'count_2': 2, 'filter_str_eq_1': 1, 'filter_str_eq_0': 0, 'all_rows_5': 5, 'team_6': 6, 'detroit_7': 7, 'high rebounds_8': 8, "shaquille o'neal_9": 9, '3_10': 10} | {'eq_3': 'eq', 'result_4': 'true', 'count_2': 'count', 'filter_str_eq_1': 'filter_str_eq', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_5': 'all_rows', 'team_6': 'team', 'detroit_7': 'detroit', 'high rebounds_8': 'high rebounds', "shaquille o'neal_9": "shaquille o'neal", '3_10': '3'} | {'eq_3': [4], 'result_4': [], 'count_2': [3], 'filter_str_eq_1': [2], 'filter_str_eq_0': [1], 'all_rows_5': [0], 'team_6': [0], 'detroit_7': [0], 'high rebounds_8': [1], "shaquille o'neal_9": [1], '3_10': [3]} | ['game', 'date', 'team', 'score', 'high points', 'high rebounds', 'high assists', 'location attendance', 'series'] | [['1', 'may 23', 'detroit', 'w 91 - 86 ( ot )', 'dwyane wade ( 25 )', 'udonis haslem ( 9 )', 'dwyane wade ( 5 )', 'the palace of auburn hills 22076', '1 - 0'], ['2', 'may 25', 'detroit', 'l 88 - 92 ( ot )', 'dwyane wade ( 32 )', "shaquille o'neal ( 17 )", 'dwyane wade ( 5 )', 'the palace of auburn hills 22076', '1 - 1'], ['3', 'may 27', 'detroit', 'w 98 - 83 ( ot )', 'dwyane wade ( 35 )', "shaquille o'neal ( 12 )", 'antoine walker , dwyane wade ( 4 )', 'american airlines arena 20245', '2 - 1'], ['4', 'may 29', 'detroit', 'w 89 - 78 ( ot )', 'dwyane wade ( 31 )', "shaquille o'neal ( 9 )", 'dwyane wade ( 5 )', 'american airlines arena 20248', '3 - 1'], ['5', 'may 31', 'detroit', 'l 78 - 91 ( ot )', 'dwyane wade ( 23 )', 'udonis haslem ( 10 )', 'jason williams ( 6 )', 'the palace of auburn hills 22076', '3 - 2']] |
yuuki kondo | https://en.wikipedia.org/wiki/Yuuki_Kondo | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-12870166-2.html.csv | unique | the only draw in yuuki kondo 's career was against yoko takahashi . | {'scope': 'all', 'row': '9', 'col': '4', 'col_other': '3', 'criterion': 'equal', 'value': 'draw', 'subset': None} | {'func': 'and', 'args': [{'func': 'only', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'method', 'draw'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose method record fuzzily matches to draw .', 'tostr': 'filter_eq { all_rows ; method ; draw }'}], 'result': True, 'ind': 1, 'tostr': 'only { filter_eq { all_rows ; method ; draw } }', 'tointer': 'select the rows whose method record fuzzily matches to draw . there is only one such row in the table .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'filter_str_eq', 'args': ['all_rows', 'method', 'draw'], 'result': None, 'ind': 0, 'tointer': 'select the rows whose method record fuzzily matches to draw .', 'tostr': 'filter_eq { all_rows ; method ; draw }'}, 'opponent'], 'result': 'yoko takahashi', 'ind': 2, 'tostr': 'hop { filter_eq { all_rows ; method ; draw } ; opponent }'}, 'yoko takahashi'], 'result': True, 'ind': 3, 'tostr': 'eq { hop { filter_eq { all_rows ; method ; draw } ; opponent } ; yoko takahashi }', 'tointer': 'the opponent record of this unqiue row is yoko takahashi .'}], 'result': True, 'ind': 4, 'tostr': 'and { only { filter_eq { all_rows ; method ; draw } } ; eq { hop { filter_eq { all_rows ; method ; draw } ; opponent } ; yoko takahashi } } = true', 'tointer': 'select the rows whose method record fuzzily matches to draw . there is only one such row in the table . the opponent record of this unqiue row is yoko takahashi .'} | and { only { filter_eq { all_rows ; method ; draw } } ; eq { hop { filter_eq { all_rows ; method ; draw } ; opponent } ; yoko takahashi } } = true | select the rows whose method record fuzzily matches to draw . there is only one such row in the table . the opponent record of this unqiue row is yoko takahashi . 
| 6 | 5 | {'and_4': 4, 'result_5': 5, 'only_1': 1, 'filter_str_eq_0': 0, 'all_rows_6': 6, 'method_7': 7, 'draw_8': 8, 'str_eq_3': 3, 'str_hop_2': 2, 'opponent_9': 9, 'yoko takahashi_10': 10} | {'and_4': 'and', 'result_5': 'true', 'only_1': 'only', 'filter_str_eq_0': 'filter_str_eq', 'all_rows_6': 'all_rows', 'method_7': 'method', 'draw_8': 'draw', 'str_eq_3': 'str_eq', 'str_hop_2': 'str_hop', 'opponent_9': 'opponent', 'yoko takahashi_10': 'yoko takahashi'} | {'and_4': [5], 'result_5': [], 'only_1': [4], 'filter_str_eq_0': [1, 2], 'all_rows_6': [0], 'method_7': [0], 'draw_8': [0], 'str_eq_3': [4], 'str_hop_2': [3], 'opponent_9': [2], 'yoko takahashi_10': [3]} | ['res', 'record', 'opponent', 'method', 'event', 'round', 'time', 'location'] | [['loss', '11 - 5 - 1', 'marloes coenen', 'ko ( punch )', 'smackgirl 2005 : cool fighter last stand', '2', '0:50', 'numazu , japan'], ['loss', '11 - 4 - 1', 'amanda buckner', 'submission ( kneebar )', 'smackgirl 2004 : yuuki kondo retirement celebration', '2', '0:41', 'tokyo , japan'], ['win', '11 - 3 - 1', 'sarah boyd', 'submission ( keylock )', 'smackgirl 2004 : go west', '3', '3:35', 'osaka , japan'], ['win', '10 - 3 - 1', 'fatiha abalhaja', 'submission ( armbar )', 'smackgirl : third season iii', '3', '3:28', 'tokyo , japan'], ['win', '9 - 3 - 1', 'mayra conde', 'submission ( armbar )', 'smackgirl : third season i', '1', '2:20', 'tokyo , japan'], ['win', '8 - 3 - 1', 'keiko tamai', 'decision ( 3 - 0 )', 'smackgirl : japan cup 2002 grand final', '3', '5:00', 'tokyo , japan'], ['loss', '7 - 3 - 1', 'angela reestad', 'submission ( armbar )', 'ax vol 4', '1', '2:18', 'tokyo , japan'], ['win', '7 - 2 - 1', 'yuki morimatsu', 'submission ( armbar )', 'ax vol 3', '1', '1:23', 'tokyo , japan'], ['draw', '6 - 2 - 1', 'yoko takahashi', 'draw', "zero - one : true century creation ' 02", '3', '3:00', 'tokyo , japan'], ['win', '6 - 2 - 0', 'hiromi kanai', 'submission ( armbar )', 'smackgirl : pioneering spirit', '1', '0:34', 'tokyo , japan'], ['loss', '5 - 2 - 0', 'ikuma hoshino', 'decision ( 0 - 3 )', 'ax vol 1 : we do the justice', '3', '5:00', 'tokyo , japan'], ['win', '5 - 1 - 0', 'mika harigai', 'submission ( armbar )', 'smackgirl : alive !', '1', '1:34', 'tokyo , japan'], ['win', '4 - 1 - 0', 'megumi sato', 'submission ( armbar )', 'smackgirl : burning night', '1', '0:44', 'tokyo , japan'], ['win', '3 - 1 - 0', 'aya koyama', 'decision ( 3 - 0 )', 'smackgirl : fighting chance', '3', '5:00', 'tokyo , japan'], ['win', '2 - 1 - 0', 'mika harigai', 'submission ( armbar )', 'smackgirl : starting over', '1', '0:49', 'tokyo , japan'], ['win', '1 - 1 - 0', 'yuta dum', 'tko ( ankle injury )', 'remix golden gate 2001', '1', '5:00', 'tokyo , japan'], ['loss', '0 - 1 - 0', 'marloes coenen', 'submission ( armbar )', 'llpw : l - 1 2000 the strongest lady', '1', '2:37', 'tokyo , japan']] |
wru division four west | https://en.wikipedia.org/wiki/WRU_Division_Four_West | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-13758945-2.html.csv | aggregation | the average number of losses at the wru division four west is 10.75 . | {'scope': 'all', 'col': '4', 'type': 'average', 'result': '10.75', 'subset': None} | {'func': 'round_eq', 'args': [{'func': 'avg', 'args': ['all_rows', 'lost'], 'result': '10.75', 'ind': 0, 'tostr': 'avg { all_rows ; lost }'}, '10.75'], 'result': True, 'ind': 1, 'tostr': 'round_eq { avg { all_rows ; lost } ; 10.75 } = true', 'tointer': 'the average of the lost record of all rows is 10.75 .'} | round_eq { avg { all_rows ; lost } ; 10.75 } = true | the average of the lost record of all rows is 10.75 . | 2 | 2 | {'eq_1': 1, 'result_2': 2, 'avg_0': 0, 'all_rows_3': 3, 'lost_4': 4, '10.75_5': 5} | {'eq_1': 'eq', 'result_2': 'true', 'avg_0': 'avg', 'all_rows_3': 'all_rows', 'lost_4': 'lost', '10.75_5': '10.75'} | {'eq_1': [2], 'result_2': [], 'avg_0': [1], 'all_rows_3': [0], 'lost_4': [0], '10.75_5': [1]} | ['club', 'played', 'drawn', 'lost', 'points for', 'points against', 'tries for', 'tries against', 'try bonus', 'losing bonus', 'points'] | [['club', 'played', 'drawn', 'lost', 'points for', 'points against', 'tries for', 'tries against', 'try bonus', 'losing bonus', 'points'], ['llandeilo rfc', '22', '1', '0', '917', '119', '136', '14', '19', '0', '105'], ['brynamman rfc', '22', '1', '2', '821', '210', '116', '27', '16', '2', '96'], ['tenby united rfc', '22', '0', '8', '562', '461', '78', '61', '10', '1', '67'], ['pembroke dock harlequins rfc', '22', '0', '8', '423', '351', '56', '40', '7', '3', '66'], ['pontarddulais rfc', '22', '1', '9', '550', '503', '79', '68', '11', '5', '66'], ['betws rfc', '22', '1', '9', '528', '440', '72', '63', '9', '0', '59'], ['trimsaran rfc', '22', '0', '12', '471', '540', '68', '77', '7', '1', '48'], ['pembroke rfc', '22', '0', '13', '467', '500', '69', '66', '8', '4', '48'], ['burry port rfc', '22', '1', '14', '373', '688', '47', '99', '3', '2', '31'], ['hendy rfc', '22', '0', '17', '292', '707', '38', '109', '1', '6', '27'], ['tycroes rfc', '22', '0', '18', '267', '645', '35', '89', '3', '3', '18'], ['cwmgors rfc', '22', '1', '19', '211', '718', '28', '109', '2', '3', '15']] |
fringe ( season 1 ) | https://en.wikipedia.org/wiki/Fringe_%28season_1%29 | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/1-24648983-1.html.csv | ordinal | the episode titled " the dreamscape " had the fewest number of viewers , with 7.7 million . | {'row': '8', 'col': '7', 'order': '1', 'col_other': '2', 'max_or_min': 'min_to_max', 'value_mentioned': 'yes', 'scope': 'all', 'subset': None} | {'func': 'and', 'args': [{'func': 'eq', 'args': [{'func': 'nth_min', 'args': ['all_rows', 'us viewers ( million )', '1'], 'result': '7.70', 'ind': 0, 'tostr': 'nth_min { all_rows ; us viewers ( million ) ; 1 }', 'tointer': 'the 1st minimum us viewers ( million ) record of all rows is 7.70 .'}, '7.70'], 'result': True, 'ind': 1, 'tostr': 'eq { nth_min { all_rows ; us viewers ( million ) ; 1 } ; 7.70 }', 'tointer': 'the 1st minimum us viewers ( million ) record of all rows is 7.70 .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'nth_argmin', 'args': ['all_rows', 'us viewers ( million )', '1'], 'result': None, 'ind': 2, 'tostr': 'nth_argmin { all_rows ; us viewers ( million ) ; 1 }'}, 'title'], 'result': 'the dreamscape', 'ind': 3, 'tostr': 'hop { nth_argmin { all_rows ; us viewers ( million ) ; 1 } ; title }'}, 'the dreamscape'], 'result': True, 'ind': 4, 'tostr': 'eq { hop { nth_argmin { all_rows ; us viewers ( million ) ; 1 } ; title } ; the dreamscape }', 'tointer': 'the title record of the row with 1st minimum us viewers ( million ) record is the dreamscape .'}], 'result': True, 'ind': 5, 'tostr': 'and { eq { nth_min { all_rows ; us viewers ( million ) ; 1 } ; 7.70 } ; eq { hop { nth_argmin { all_rows ; us viewers ( million ) ; 1 } ; title } ; the dreamscape } } = true', 'tointer': 'the 1st minimum us viewers ( million ) record of all rows is 7.70 . the title record of the row with 1st minimum us viewers ( million ) record is the dreamscape .'} | and { eq { nth_min { all_rows ; us viewers ( million ) ; 1 } ; 7.70 } ; eq { hop { nth_argmin { all_rows ; us viewers ( million ) ; 1 } ; title } ; the dreamscape } } = true | the 1st minimum us viewers ( million ) record of all rows is 7.70 . the title record of the row with 1st minimum us viewers ( million ) record is the dreamscape . 
| 6 | 6 | {'and_5': 5, 'result_6': 6, 'eq_1': 1, 'nth_min_0': 0, 'all_rows_7': 7, 'us viewers (million)_8': 8, '1_9': 9, '7.70_10': 10, 'str_eq_4': 4, 'str_hop_3': 3, 'nth_argmin_2': 2, 'all_rows_11': 11, 'us viewers (million)_12': 12, '1_13': 13, 'title_14': 14, 'the dreamscape_15': 15} | {'and_5': 'and', 'result_6': 'true', 'eq_1': 'eq', 'nth_min_0': 'nth_min', 'all_rows_7': 'all_rows', 'us viewers (million)_8': 'us viewers ( million )', '1_9': '1', '7.70_10': '7.70', 'str_eq_4': 'str_eq', 'str_hop_3': 'str_hop', 'nth_argmin_2': 'nth_argmin', 'all_rows_11': 'all_rows', 'us viewers (million)_12': 'us viewers ( million )', '1_13': '1', 'title_14': 'title', 'the dreamscape_15': 'the dreamscape'} | {'and_5': [6], 'result_6': [], 'eq_1': [5], 'nth_min_0': [1], 'all_rows_7': [0], 'us viewers (million)_8': [0], '1_9': [0], '7.70_10': [1], 'str_eq_4': [5], 'str_hop_3': [4], 'nth_argmin_2': [3], 'all_rows_11': [2], 'us viewers (million)_12': [2], '1_13': [2], 'title_14': [3], 'the dreamscape_15': [4]} | ['-', 'title', 'directed by', 'written by', 'original air date', 'production code', 'us viewers ( million )'] | [['1', 'pilot', 'alex graves', 'j j abrams & alex kurtzman & roberto orci', 'september 9 , 2008', '276038', '9.13'], ['3', 'the ghost network', 'frederick e o toye', 'david h goodman & j r orci', 'september 23 , 2008', '3t7652', '9.42'], ['4', 'the arrival', 'paul edwards', 'j j abrams & jeff pinkner', 'september 30 , 2008', '3t7653', '9.91'], ['5', 'power hungry', 'christopher misiano', 'jason cahill & julia cho', 'october 14 , 2008', '3t7654', '9.16'], ['6', 'the cure', 'bill eagles', 'felicia d henderson & brad caleb kane', 'october 21 , 2008', '3t7655', '8.91'], ['7', 'in which we meet mr jones', 'brad anderson', 'j j abrams & jeff pinkner', 'november 11 , 2008', '3t7656', '8.61'], ['8', 'the equation', 'gwyneth horder - payton', 'j r orci & david h goodman', 'november 18 , 2008', '3t7657', '9.18'], ['9', 'the dreamscape', 'frederick e o toye', 'zack whedon & julia cho', 'november 25 , 2008', '3t7658', '7.70'], ['10', 'safe', 'michael zinberg', 'david h goodman & jason cahill', 'december 2 , 2008', '3t7659', '8.54'], ['12', 'the no - brainer', 'john polson', 'david h goodman & brad caleb kane', 'january 27 , 2009', '3t7661', '11.62'], ['13', 'the transformation', 'brad anderson', 'zack whedon & j r orci', 'february 3 , 2009', '3t7662', '12.78'], ['15', 'inner child', 'frederick e o toye', 'brad caleb kane & julia cho', 'april 7 , 2009', '3t7664', '9.88'], ['16', 'unleashed', 'brad anderson', 'zack whedon & j r orci', 'april 14 , 2009', '3t7665', '10.15'], ['17', 'bad dreams', 'akiva goldsman', 'akiva goldsman', 'april 21 , 2009', '3t7666', '9.89']] |
2008 wnba draft | https://en.wikipedia.org/wiki/2008_WNBA_draft | https://raw.githubusercontent.com/wenhuchen/Table-Fact-Checking/master/data/all_csv/2-14122892-3.html.csv | superlative | the first person drafted in the 2008 draft was candace parker . | {'scope': 'all', 'col_superlative': '1', 'row_superlative': '1', 'value_mentioned': 'yes', 'max_or_min': 'min', 'other_col': '2', 'subset': None} | {'func': 'and', 'args': [{'func': 'eq', 'args': [{'func': 'min', 'args': ['all_rows', 'pick'], 'result': '1', 'ind': 0, 'tostr': 'min { all_rows ; pick }', 'tointer': 'the minimum pick record of all rows is 1 .'}, '1'], 'result': True, 'ind': 1, 'tostr': 'eq { min { all_rows ; pick } ; 1 }', 'tointer': 'the minimum pick record of all rows is 1 .'}, {'func': 'str_eq', 'args': [{'func': 'str_hop', 'args': [{'func': 'argmin', 'args': ['all_rows', 'pick'], 'result': None, 'ind': 2, 'tostr': 'argmin { all_rows ; pick }'}, 'player'], 'result': 'candace parker', 'ind': 3, 'tostr': 'hop { argmin { all_rows ; pick } ; player }'}, 'candace parker'], 'result': True, 'ind': 4, 'tostr': 'eq { hop { argmin { all_rows ; pick } ; player } ; candace parker }', 'tointer': 'the player record of the row with superlative pick record is candace parker .'}], 'result': True, 'ind': 5, 'tostr': 'and { eq { min { all_rows ; pick } ; 1 } ; eq { hop { argmin { all_rows ; pick } ; player } ; candace parker } } = true', 'tointer': 'the minimum pick record of all rows is 1 . the player record of the row with superlative pick record is candace parker .'} | and { eq { min { all_rows ; pick } ; 1 } ; eq { hop { argmin { all_rows ; pick } ; player } ; candace parker } } = true | the minimum pick record of all rows is 1 . the player record of the row with superlative pick record is candace parker . 
| 6 | 6 | {'and_5': 5, 'result_6': 6, 'eq_1': 1, 'min_0': 0, 'all_rows_7': 7, 'pick_8': 8, '1_9': 9, 'str_eq_4': 4, 'str_hop_3': 3, 'argmin_2': 2, 'all_rows_10': 10, 'pick_11': 11, 'player_12': 12, 'candace parker_13': 13} | {'and_5': 'and', 'result_6': 'true', 'eq_1': 'eq', 'min_0': 'min', 'all_rows_7': 'all_rows', 'pick_8': 'pick', '1_9': '1', 'str_eq_4': 'str_eq', 'str_hop_3': 'str_hop', 'argmin_2': 'argmin', 'all_rows_10': 'all_rows', 'pick_11': 'pick', 'player_12': 'player', 'candace parker_13': 'candace parker'} | {'and_5': [6], 'result_6': [], 'eq_1': [5], 'min_0': [1], 'all_rows_7': [0], 'pick_8': [0], '1_9': [1], 'str_eq_4': [5], 'str_hop_3': [4], 'argmin_2': [3], 'all_rows_10': [2], 'pick_11': [2], 'player_12': [3], 'candace parker_13': [4]} | ['pick', 'player', 'nationality', 'wnba team', 'school / club team'] | [['1', 'candace parker', 'united states', 'los angeles sparks', 'tennessee'], ['2', 'sylvia fowles', 'united states', 'chicago sky', 'lsu'], ['3', 'candice wiggins', 'united states', 'minnesota lynx', 'stanford'], ['4', 'alexis hornbuckle', 'united states', 'detroit shock ( from atl , via sea )', 'tennessee'], ['5', 'matee ajavon', 'united states', 'houston comets', 'rutgers'], ['6', 'crystal langhorne', 'united states', 'washington mystics', 'maryland'], ['7', 'essence carson', 'united states', 'new york liberty', 'rutgers'], ['8', 'tamera young', 'united states', 'atlanta dream ( from sea )', 'james madison'], ['9', 'amber holt', 'united states', 'connecticut sun', 'middle tennessee'], ['10', 'laura harper', 'united states', 'sacramento monarchs', 'maryland'], ['11', 'tasha humphrey', 'united states', 'detroit shock ( from sa )', 'georgia'], ['12', 'ketia swanier', 'united states', 'connecticut sun ( from ind )', 'connecticut'], ['13', 'latoya pringle', 'united states', 'phoenix mercury', 'north carolina'], ['14', 'erlana larkins', 'united states', 'new york liberty ( from det )', 'north carolina']] |
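Note: the logic_str field above is built from a small set of table operators (filter_eq, count, min, argmin, hop, eq, and, and so on) whose intended reading is spelled out in the interpret field. The Python below is a minimal illustrative sketch only, not the dataset's own evaluation code: it checks the superlative logic form of the 2008 wnba draft record against a copy of the first three rows of its table. The helper names col, argmin and hop are hypothetical stand-ins chosen to mirror the operator names, and the fuzzy string matching used by operators such as filter_str_eq is ignored here.

```python
# Illustrative sketch only (not the dataset's official evaluator): checks the
# 2008 wnba draft record's logic form
#   and { eq { min { all_rows ; pick } ; 1 } ;
#         eq { hop { argmin { all_rows ; pick } ; player } ; candace parker } }
# against a copy of the first rows of its table. Helper names are hypothetical.

header = ["pick", "player", "nationality", "wnba team", "school / club team"]
rows = [
    ["1", "candace parker", "united states", "los angeles sparks", "tennessee"],
    ["2", "sylvia fowles", "united states", "chicago sky", "lsu"],
    ["3", "candice wiggins", "united states", "minnesota lynx", "stanford"],
]

def col(table, name):
    """Return one column, converting numeric-looking strings to floats."""
    i = header.index(name)
    return [float(r[i]) if r[i].replace(".", "", 1).isdigit() else r[i] for r in table]

def argmin(table, name):
    """argmin { table ; name } -> the row holding the minimum value of that column."""
    values = col(table, name)
    return table[values.index(min(values))]

def hop(row, name):
    """hop { row ; name } -> the named field of a single row."""
    return row[header.index(name)]

# and { eq { min { all_rows ; pick } ; 1 } ;
#       eq { hop { argmin { all_rows ; pick } ; player } ; candace parker } }
result = min(col(rows, "pick")) == 1 and hop(argmin(rows, "pick"), "player") == "candace parker"
print(result)  # True
```

Running the sketch prints True, which agrees with the record's stated result.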