{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:31:23.177081Z"
},
"title": "Automatic Learning Assistant in Telugu",
"authors": [
{
"first": "Meghana",
"middle": [],
"last": "Bommadi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Technologies Research Centre International Institute of Information Technology Hyderabad",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Shreya",
"middle": [],
"last": "Terupally",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Technologies Research Centre International Institute of Information Technology Hyderabad",
"location": {
"country": "India"
}
},
"email": ""
},
{
"first": "Radhika",
"middle": [],
"last": "Mamidi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Technologies Research Centre International Institute of Information Technology Hyderabad",
"location": {
"country": "India"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a learning assistant that tests one's knowledge and gives feedback that helps a person learn at a faster pace. A learning assistant (based on automated question generation) has extensive uses in education, information websites, self-assessment, FAQs, testing ML agents, research, etc. Multiple researchers and companies have worked on Virtual Assistance, but mostly in English. We built our learning assistant for the Telugu language to help with teaching in the mother tongue, which is the most efficient way of learning 1. Our system is built primarily on Question Generation in Telugu.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a learning assistant that tests one's knowledge and gives feedback that helps a person learn at a faster pace. A learning assistant (based on automated question generation) has extensive uses in education, information websites, self-assessment, FAQs, testing ML agents, research, etc. Multiple researchers and companies have worked on Virtual Assistance, but mostly in English. We built our learning assistant for the Telugu language to help with teaching in the mother tongue, which is the most efficient way of learning 1. Our system is built primarily on Question Generation in Telugu.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Many experiments were conducted on Question Generation in English in multiple ways. We have built the first hybrid machine learning and rule-based solution in Telugu, which proves efficient for short stories or short passages in children's books. Our work covers the fundamental question forms with question types: adjective, yes/no, adverb, verb, when, where, whose, quotative, and quantitative (how many/how much). We constructed rules for question generation using Part of Speech (POS) tags and Universal Dependency (UD) tags along with linguistic information of the surrounding relevant context of the word. Our system is primarily built on question generation in Telugu, and is also capable of evaluating the user's answers to the generated questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Research on Virtual Assistants is prominent since they are widely used in recent times for numerous tasks. These assistants are built using large datasets and high-end Natural Language Understanding (NLU) and Natural Language Generation (NLG) tools. NLU and NLG are used in interactive NLP applications such as AI-based dialogue systems/voice assistants like SIRI, Google Assistant, Alexa, and similar personal assistants (Roshni, 2020) (Nishanthi, 2020). Research is still going on to make these assistants work in major Indian languages as well.",
"cite_spans": [
{
"start": 296,
"end": 313,
"text": "(Nishanthi, 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An automated learning assistant like our system is useful not only for the human learning process but also for machines in the process of testing ML systems 2 . Research has been done on Question-Answer generation systems in English 3 , concentrating on basic Wh-questions with a rule-based approach 4 , question-template-based approaches 5 , etc. For a low-resourced language like Telugu, a complete AI-based solution can be non-viable: there are hardly any datasets available for the system to produce significant accuracy. A completely rule-based system, on the other hand, might leave out principal parts of the text, since all the questions cannot be captured inclusively by handwritten rules. Hence, we introduce a mixed rule-based and AI-based solution to this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our system works on the following three crucial steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Summarization 2. Question Generation 3. Answer Evaluation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We implemented summarization using two techniques, viz. Word Frequency (see 4.1) and TextRank (see 4.2), which are explained further in section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer Evaluation",
"sec_num": "3."
},
{
"text": "We attempted to produce questions, concentrating on the critical points of a text that are generally asked in assessment tests. Questions posed to an individual challenge their knowledge and understanding of specific topics, so we formed questions in each sentence in as many ways as possible. We based this model on children's stories, so the questions we wanted to produce aim to be simpler and more objective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer Evaluation",
"sec_num": "3."
},
{
"text": "Based on the observation of the data chosen and analysis of all the possible causes, we developed a set of rules for each part of speech that can be formed into a question word in Telugu. We maximized the possible number of questions in each sentence with all the keywords. We built rules for question generation based on POS tags, UD tags and information surrounding the word, which is comparable with Vibhaktis (case markers) in Telugu grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer Evaluation",
"sec_num": "3."
},
{
"text": "The Question Generation is manually evaluated and the detailed error analysis is given in section 8.1. Our Learning Assistant evaluates using string matching, keyword matching for Telugu answers, and a pre-trained sentence transformer model using XLM-R. (Nils Reimers, 2019) 2 Related Work Previously, Holy Lovenia, Felix Limanta et al.[2018] (Holy Lovenia, 2018) experimented on Q&A pair generation in English, where they succeeded in forming What, Who, and Where questions. Rami Reddy et al.[2006] (Rami Reddy Nandi Reddy, 2006) worked on a dialogue-based Question Answering System in Telugu for railway inquiries, which majorly concentrated on answer generation for a given query. Similar work was done by (Hoojung Chung) on a practical question answering system in a restricted domain. Shudipta Sharma et al. [2018] (Shudipta Sharma) worked on automatic Q&A pair generation for English and Bengali texts using NLP tasks like verb decomposition and subject-auxiliary inversion for question tags.",
"cite_spans": [
{
"start": 254,
"end": 274,
"text": "(Nils Reimers, 2019)",
"ref_id": null
},
{
"start": 307,
"end": 342,
"text": "Lovenia, Felix Limanta et al.[2018]",
"ref_id": "BIBREF6"
},
{
"start": 480,
"end": 498,
"text": "Reddy et al.[2006]",
"ref_id": null
},
{
"start": 511,
"end": 529,
"text": "Nandi Reddy, 2006)",
"ref_id": null
},
{
"start": 819,
"end": 825,
"text": "[2018]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Answer Evaluation",
"sec_num": "3."
},
{
"text": "We have used a Telugu stories dataset taken from a website called \"kathalu wordpress\". 6 This dataset was chosen because of the variety of themes in the stories, its wide vocabulary, and sentences of varying lengths. ",
"cite_spans": [
{
"start": 86,
"end": 87,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "Since Telugu is a low-resource language, we used statistical and unsupervised methods for this task. Summarization also ensures the portability of our system to other similar low-resource languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization",
"sec_num": "4"
},
{
"text": "For summarization, we performed basic data preprocessing (spaces, special characters, etc.) in addition to root-word extraction using Siva Reddy's POS tagger 7 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization",
"sec_num": "4"
},
{
"text": "We used two types of existing summarization techniques:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization",
"sec_num": "4"
},
{
"text": "1. Word Frequency-based summarization 2. TextRank-based summarization",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summarization",
"sec_num": "4"
},
{
"text": "WFBS (Word Frequency-based Summarization) is calculated using the word frequency in the passage. 8 This process is based on the idea that the keywords or the main words will frequently appear in the text, and those words with lower frequency have a high probability of being less related to the story.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Frequency-based Summarization",
"sec_num": "4.1"
},
{
"text": "All the sentences that carry crucial information are produced successfully by this method because the keywords are used repeatedly in children's stories, subsequently causing the highest frequency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Frequency-based Summarization",
"sec_num": "4.1"
},
{
"text": "We used a dynamic ratio (a ratio that can be changed or chosen by the user as an input) to control the amount of summary (a shorter or a longer summary; for example, given k%, the system outputs the k% of sentences containing the most frequent words from the dictionary). This dynamically chosen ratio performed better than a fixed ratio of word selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Frequency-based Summarization",
"sec_num": "4.1"
},
{
"text": "Steps followed in WFBS are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Frequency-based Summarization",
"sec_num": "4.1"
},
{
"text": "1. Sentences are extracted from the input file.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Frequency-based Summarization",
"sec_num": "4.1"
},
{
"text": "2. The file is preprocessed and the words are tokenized. 3. Stop words are removed. 4. The frequency of each word is calculated and stored in dictionaries. 5. The sentences with the least frequent words are removed. 6. The ratio of words occurring, in highest-to-lowest frequency order, is calculated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Frequency-based Summarization",
"sec_num": "4.1"
},
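The WFBS steps above can be sketched as follows. This is a minimal illustration under our own assumptions, not the paper's code: the whitespace tokenizer, the placeholder stop-word set, and the sentence-scoring detail (summed word frequencies, top k% kept in original order) are stand-ins for the authors' pipeline.

```python
from collections import Counter

STOP_WORDS = {"mariyu", "oka"}  # placeholder stop-word list, not the paper's

def wfbs_summarize(sentences, ratio=0.5):
    """Word Frequency-based Summarization sketch: keep the top `ratio`
    of sentences, ranked by the summed frequency of their words."""
    # Step 2-4: tokenize and count word frequencies over the whole text
    freqs = Counter(
        w for s in sentences for w in s.split() if w not in STOP_WORDS
    )
    # Score each sentence by the total frequency of its content words
    scored = [
        (sum(freqs[w] for w in s.split() if w not in STOP_WORDS), i, s)
        for i, s in enumerate(sentences)
    ]
    # Steps 5-6: keep the highest-scoring k% of sentences
    k = max(1, int(len(sentences) * ratio))
    top = sorted(scored, reverse=True)[:k]
    # Restore the original sentence order in the summary
    return [s for _, _, s in sorted(top, key=lambda t: t[1])]
```

The dynamic ratio described above corresponds to the `ratio` argument: the caller chooses how much of the text survives into the summary.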
{
"text": "TextRank is a graph-based ranking model 9 that prioritizes each element based on the values in the graph. This process is done in the following steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TextRank based Frequency",
"sec_num": "4.2"
},
{
"text": "1. A graph is constructed using each sentence as a node. 2. The similarity between two nodes is marked as the edge weight between them. 3. Each sentence is ranked based on its similarity with the whole text. 4. The PageRank algorithm is run until convergence. 5. The sentences with the top N ranks are given as the output summarized text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TextRank based Frequency",
"sec_num": "4.2"
},
{
"text": "The TextRank algorithm is a graph-based method that updates the sentence score WS iteratively using equation (1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TextRank based Frequency",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "WS(V_i) = (1 - d) + d \\sum_{V_j \\in In(V_i)} \\frac{w_{ji}}{\\sum_{V_k \\in Out(V_j)} w_{jk}} WS(V_j)",
"eq_num": "(1)"
}
],
"section": "TextRank based Frequency",
"sec_num": "4.2"
},
{
"text": "where d is the damping factor (0.85) and w_ij is the similarity measure between the ith and jth sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TextRank based Frequency",
"sec_num": "4.2"
},
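Equation (1) can be applied as a simple fixed-point iteration over a sentence-similarity matrix. The sketch below is illustrative rather than the paper's implementation; it assumes a symmetric similarity matrix with a zero diagonal, and the iteration count stands in for a proper convergence test.

```python
def textrank(sim, d=0.85, n_iter=50):
    """Iteratively apply equation (1):
    WS(V_i) = (1-d) + d * sum over in-neighbours j of
              w_ji / (sum_k w_jk) * WS(V_j).
    `sim` is a symmetric similarity matrix with zero diagonal."""
    n = len(sim)
    out = [sum(row) for row in sim]   # sum_k w_jk for each node j
    ws = [1.0] * n                    # initial scores
    for _ in range(n_iter):
        ws = [(1 - d) + d * sum(sim[j][i] / out[j] * ws[j]
                                for j in range(n) if out[j] > 0)
              for i in range(n)]
    return ws
```

After convergence, the top-N scoring sentences are emitted as the summary, as in step 5 above.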
{
"text": "This method has the advantage of using the similarity between the two sentences to rank them 9 (Joshi, 2018) (Liang, 2019) instead of high-frequency words.",
"cite_spans": [
{
"start": 109,
"end": 122,
"text": "(Liang, 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TextRank based Frequency",
"sec_num": "4.2"
},
{
"text": "We used two kinds of similarity measures for the TextRank based summarization:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TextRank based Frequency",
"sec_num": "4.2"
},
{
"text": "1. Common words: A measure of similarity based on the number of common words in two sentences after removing stop words. We used root word extraction of the common words for better results since Telugu is a fusional and agglutinative language and has repeated words with a different suffix each time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TextRank based Frequency",
"sec_num": "4.2"
},
{
"text": "2. Best Match 25: A measure of the similarity between two passages, based on term frequencies in the passage. 10",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TextRank based Frequency",
"sec_num": "4.2"
},
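The first similarity measure (common words) can be sketched as below. This is a hedged illustration, not the paper's code: root-word extraction is omitted, the stop-word handling is left to the caller, and the log-length normalization is a conventional choice (from the original TextRank formulation) that the authors may not have used verbatim.

```python
import math

def common_word_similarity(s1, s2, stop_words=frozenset()):
    """Common-words measure: count of shared content words between two
    sentences, normalized by sentence lengths so long sentences are
    not unduly favoured."""
    w1 = {w for w in s1.split() if w not in stop_words}
    w2 = {w for w in s2.split() if w not in stop_words}
    if len(w1) <= 1 or len(w2) <= 1:
        return 0.0  # avoid log(1) = 0 in the denominator
    return len(w1 & w2) / (math.log(len(w1)) + math.log(len(w2)))
```

These pairwise scores fill the edge-weight matrix that TextRank iterates over; BM25 plays the same role but additionally down-weights words that occur in many sentences (IDF).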
{
"text": "The results observed by this method capture crucial information of the story, but lower readability and fluency were observed. Between the similarity measures, BM25 showed slightly better results, since the BM25 algorithm ranks sentences based on the importance of particular words (inverse document frequency - IDF) instead of just using the frequency of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TextRank based Frequency",
"sec_num": "4.2"
},
{
"text": "Candidate answers are words/phrases that depict some vital information in a sentence. Adjectives, adverbs, and the subject of a sentence are some examples of such candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer Phrase Selection",
"sec_num": "5"
},
{
"text": "The answer selection module utilizes two main NLP components -POS Tagging (Part of Speech tagging) and UD parsing (Universal Dependency parsing), along with language-specific rules to determine the answer words in an input sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer Phrase Selection",
"sec_num": "5"
},
{
"text": "We followed the state-of-the-art method of Siva Reddy et al. (2011) (Siva Reddy, 2011), \"Cross-Language POS Taggers\", an implementation of a TnT-based Telugu POS tagger 11 , to parse our data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS Tagging",
"sec_num": "5.1"
},
{
"text": "The tagger learns morphological analysis and POS tags at the same time, and outputs the lemma (root word), POS tag, suffix, gender, number and case marker for each word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS Tagging",
"sec_num": "5.1"
},
{
"text": "The model was pre-trained on a Telugu corpus containing approximately 3.5 million tokens and had an evaluation accuracy of 90.73% for the main POS tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS Tagging",
"sec_num": "5.1"
},
{
"text": "A Bi-LSTM model using Keras is structured and trained on the Telugu UD tags dataset \"UD_Telugu-MTG\". 12 The Bi-LSTM model outputs the UD tags for each word in a sentence. We considered the subject, which is marked \"subj\" by the UD tagger, as the selected answer phrase for a sentence, on the condition that the root and punctuation were marked correctly.",
"cite_spans": [
{
"start": 99,
"end": 101,
"text": "12",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "UD Tagging",
"sec_num": "5.2"
},
{
"text": "This model gave 85% accurate results, including the PAD tags (padding tags), which might not be an adequate result; however, given the conditions, and the fact that the \"subj\" tag is labeled in a sentence only scarcely, the results have been considered acceptable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "UD Tagging",
"sec_num": "5.2"
},
{
"text": "The outputs of the POS Tagging and UD Parsing modules are used as the crucial markers in our language-specific rules. In addition to conditions based on word surroundings, these tags select one or more answer phrases in each sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rules",
"sec_num": "5.3"
},
{
"text": "We classify the rules into different categories, typically based on their usage and interrogative forms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rules",
"sec_num": "5.3"
},
{
"text": "1. Quantifiers, Adjectives, Adverbs: Words with the QC, RB, and JJ POS tag, respectively. For words with JJ tags, the word and the corresponding determiners (if present) are selected as the answer candidate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rules",
"sec_num": "5.3"
},
{
"text": "2. Possession based: Words with PRP and NN tags that have suffixes such as \"\u0c1f\u0c3f\", \"\u0c2f\u0c4a\u0c15\u0c15\u0c4d\", \"\u0c15\u0c3f\" and \"\u0c15\u0c41\" (\"Ti\", \"yokka\", \"ki\" and \"ku\"). The suffix \"\u0c1f\u0c3f\" (\"Ti\") is used for words like \"\u0c05\u0c24\u0c28\u0c3f\", \"\u0c35\u0c3e\u0c33\u0c33\u0c4d\", \"\u0c15\u0c02\u0c1f\u0c3f\", \"\u0c35\u0c3f\u0c26\u0c3e\u0c2f\u0c4d\u0c30\u0c41 \u0c27\u0c4d \u0c32\" (\"atani\"-his,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rules",
"sec_num": "5.3"
},
{
"text": "\"vAlla\"-theirs, \"kanTi\"-eyes', \"vidyArthula\"-students') 3. Time-Place based: Noun words with a \"\u0c32\u0c4a\" (\"lO\") suffix, along with other words present in a custom list of time-related words (\"\u0c2e\u0c3e\u0c30\u0c3f\u0c28\u0c4d\u0c02\u0c17\u0c4d\", \"\u0c07\u0c2f\u0c30\u0c4d\") (\"morning\", \"year\"), come under this category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rules",
"sec_num": "5.3"
},
{
"text": "4. Direct and Reported Speech: The word \"\u0c05\u0c28\u0c3f\" is generally used to denote direct speech in Telugu. Phrases before the word \"\u0c05\u0c28\u0c3f\", along with phrases in quotation marks, are chosen as answer phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rules",
"sec_num": "5.3"
},
{
"text": "5. Verbs: Telugu follows the SOV (Subject Object Verb) structure in general. If the last word in a sentence has a \"V\" POS tag, then we selected the verb and adjacent adverbs as an answer candidate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rules",
"sec_num": "5.3"
},
{
"text": "We use the UD tags to determine the subject of a sentence. As an additional check, we only select the candidate subjects in those sentences whose last word is tagged as the root verb, and the subject is a noun.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subject:",
"sec_num": "6."
},
{
"text": "Questions are formed from the answer phrases chosen previously, and the question words are replaced using further conditions if required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Formation",
"sec_num": "6"
},
{
"text": "1. Quantifiers, Adjectives, Adverbs: The words that are marked with the JJ POS tag are replaced with \"\u0c0e\u0c1f\u0c41\u0c35\u0c02\u0c1f\u0c3f\" (\"eTuvanti\"-what kind of). RB-tagged words that are followed by verbs with the \"\u0c17\u0c3e\" (\"gA\") suffix are replaced by \"\u0c0e\u0c32\u0c3e\" (\"elA\"-how), and the QC-tagged words that are not articles (\"\u0c12\u0c15\" (\"oka\"-one/once)) were chosen and changed based on the following word. If the quantifier is followed by \"\u0c36\u0c3e\u0c24\u0c02\", \"\u0c2e\u0c02\u0c26\u0c3f\", \"\u0c35\u0c30\u0c15\u0c41\" (\"shAtam\", \"maMdi\", \"varaku\"), then the word is replaced with \"\u0c0e\u0c02\u0c24\" (\"eMta\"-how much); if the quantifier has a suffix, it is added to the question word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Formation",
"sec_num": "6"
},
{
"text": "For example: \"1700\u0c15\u0c41\" -\"\u0c0e\u0c02\u0c24\u0c15\u0c41\" (\"eMtaku\"), and the rest of the quantifiers, like \"\u0c10\u0c26\u0c41 \u0c2a\u0c3f\u0c1a\u0c41\u0c1a\u0c4d\u0c15\u0c32\u0c41\" (meaning five sparrows), are replaced with \"\u0c0e\u0c28\u0c3f\u0c28\u0c4d\" (\"enni\"-how many) (\"\u0c0e\u0c28\u0c3f\u0c28\u0c4d \u0c2a\u0c3f\u0c1a\u0c41\u0c1a\u0c4d\u0c15\u0c32\u0c41\" (how many sparrows) in this case).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Formation",
"sec_num": "6"
},
{
"text": "The nouns and pronouns that satisfied the rules are replaced with \"\u0c0e\u0c35\u0c30\u0c3f\" (\"evari\"-whose), and the dative cases are replaced with \"\u0c0e\u0c35\u0c30\u0c3f\u0c15\u0c3f\" (\"evariki\"-to whom). This can be an exception for non-human nouns and pronouns; however, in children's stories most of the nouns are personified, so there were fewer errors than we presumed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Possession based:",
"sec_num": "2."
},
{
"text": "For example: A sentence with a phrase like \"\u0c30\u0c3e\u0c2e\u0c41\u0c21\u0c3f \u0c07\u0c32\u0c41 \u0c32\u0c4d ...\" (ram's house...) would form a question like \"\u0c0e\u0c35\u0c30\u0c3f \u0c07\u0c32\u0c41 \u0c32\u0c4d ..\" (whose house..)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Possession based:",
"sec_num": "2."
},
{
"text": "We made a list of words that are used to convey time. If the lemma of a word matched a word in the dictionary, then we marked it as \"time\" and replaced it with \"\u0c0e\u0c2a\u0c41\u0c2a\u0c4d\u0c21\u0c41\" (\"eppuDu\"-when); otherwise it was marked as a place and replaced with \"\u0c0e\u0c15\u0c15\u0c4d\u0c21\" (\"ekkaDa\"-where).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time-Place based:",
"sec_num": "3."
},
{
"text": "For example: A sentence with the phrase \"\u0c30\u0c47\u0c2a\u0c41 \u0c35\u0c38\u0c3e\u0c24\u0c4d \u0c21\u0c41\" (he will come tomorrow) will form a question \"\u0c0e\u0c2a\u0c41\u0c2a\u0c4d\u0c21\u0c41 \u0c35\u0c38\u0c3e\u0c24\u0c4d \u0c21\u0c41?\"(when will he come).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time-Place based:",
"sec_num": "3."
},
{
"text": "4. Direct and Reported Speech: The whole speech phrase, or the phrase that is quoted, is replaced with \"\u0c0f\u0c2e\u0c28\u0c3f\" (\"Emani\") in the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time-Place based:",
"sec_num": "3."
},
{
"text": "For example: A phrase in quotes in a sentence like \u0c26\u0c41\u0c30\u0c4b\u0c2f\u0c4d\u0c27\u0c28\u0c41\u0c21\u0c41 \"\u0c0f\u0c2e\u0c02\u0c1f\u0c3f\u0c35\u0c3f \u0c0f\u0c2e\u0c02\u0c1f\u0c3f\u0c35\u0c3f..!\" \u0c05\u0c28\u0c3f \u0c05\u0c28\u0c3e\u0c28\u0c4d\u0c21\u0c41. (Duryodhan said,\"what did you say..!\".) would form a question like \u0c26\u0c41\u0c30\u0c4b\u0c2f\u0c4d\u0c27\u0c28\u0c41\u0c21\u0c41 \u0c0f\u0c2e\u0c28\u0c3f \u0c05\u0c28\u0c3e\u0c28\u0c4d\u0c21\u0c41? (what did Duryodhan say?) 5. Verbs : The verb is replaced with \"\u0c0f\u0c2e\u0c3f \u0c1a\u0c47\u0c38\u0c42\u0c24\u0c4d \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time-Place based:",
"sec_num": "3."
},
{
"text": "(\"Emi cEstu\"-doing what) + <suffix>. The appropriate suffix is recovered from the information removed from the lemmatized word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time-Place based:",
"sec_num": "3."
},
{
"text": "Additionally, the verb tags were used to form polar questions. The interrogative form of a sentence in Telugu can be constructed by adding intonation to the verb, so we added \"\u0c06\" (A\") vowel at the end of the verb to make a yes or no question. The answer phrase to this question would be \"\u0c05\u0c35\u0c41\u0c28\u0c41\" (avunu\"-yes), followed by the original phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time-Place based:",
"sec_num": "3."
},
{
"text": "For example: A sentence with a verbal phrase like \"\u0c38\u0c40\u0c24 \u0c35\u0c46\u0c33\u0c41\u0c24\u0c42 \u0c09\u0c02\u0c26\u0c3f\"(Sita is going) will form a question like \"\u0c38\u0c40\u0c24 \u0c0f\u0c2e\u0c3f \u0c1a\u0c47\u0c38\u0c42\u0c24\u0c4d \u0c09\u0c02\u0c26\u0c3f? \"(What is Sita doing?).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time-Place based:",
"sec_num": "3."
},
{
"text": "6. Subject: Based on the suffix of the verb, the subject is replaced with \"\u0c0f\u0c26\u0c3f\", \"\u0c0f\u0c35\u0c3f\" or \"\u0c26\u0c47\u0c28\u0c3f\", \"\u0c35\u0c47\u0c1f\u0c3f\u0c15\u0c3f\" (meaning what or which, respectively), or with \"\u0c0e\u0c35\u0c30\u0c41\" (\"evaru\"-who) if the subject has a gender and is marked human in the POS tags; the root suffix is changed accordingly for \"\u0c0e\u0c35\u0c30\u0c41\" (\"evaru\"-who (honorific)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time-Place based:",
"sec_num": "3."
},
{
"text": "For example: \"\u0c17\u0c02\u0c17 \u0c05\u0c15\u0c15\u0c4d\u0c21\u0c3f \u0c28\u0c41\u0c02\u0c1a\u0c3f \u0c35\u0c46\u0c33\u0c3f \u0c32\u0c4d \u0c2a\u0c4b\u0c2f\u0c3f\u0c02\u0c26\u0c3f.\" (Ganga left from that place) forms a question like \"\u0c0e\u0c35\u0c30\u0c41 \u0c05\u0c15\u0c15\u0c4d\u0c21\u0c3f \u0c28\u0c41\u0c02\u0c1a\u0c3f \u0c35\u0c46\u0c33\u0c3f \u0c32\u0c4d \u0c2a\u0c4b\u0c2f\u0c3e\u0c30\u0c41?\" (Who left from there?).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time-Place based:",
"sec_num": "3."
},
{
"text": "The user's answer to the generated question is evaluated in two ways, depending on the form of the input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer Evaluation",
"sec_num": "7"
},
{
"text": "A string input in Telugu is taken from the user, and string matching is done between the whole sentence and the answer phrase stored from Question and Answer Pair Generation. The answer can be either in sentence form or in a phrasal form containing the keywords on which the question was formed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Telugu Answer Evaluation",
"sec_num": "7.1"
},
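The string-matching step can be sketched as below. This is a minimal illustration of the two checks described (whole-string match, then keyword containment); the real system's handling of Telugu suffixes and phrasal variants is not reproduced, and the function name is ours.

```python
def evaluate_answer(user_answer, gold_phrase):
    """Mark an answer correct if it matches the stored answer phrase
    exactly, or if it contains all of the phrase's keywords."""
    user = user_answer.strip()
    if user == gold_phrase:          # whole-string match
        return True
    keywords = set(gold_phrase.split())
    # keyword match: every gold keyword appears in the user's answer
    return keywords.issubset(set(user.split()))
```

This accepts both a full-sentence answer and a bare phrase carrying the keywords the question was formed on.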
{
"text": "Similar to word embedding, where the learned representations of similar words are close together, sentence embedding (Nikhil, 2017) maps the semantic information of sentences into vectors. Multilingual Sentence Embedding deals with sentences in multiple languages, which are mapped into a closer vector space if they have similar meanings.",
"cite_spans": [
{
"start": 122,
"end": 136,
"text": "(Nikhil, 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Transformers",
"sec_num": "7.2.1"
},
{
"text": "Sentence Transformers are Multilingual Sentence Embeddings (Ivana Kvapil\u00edkov\u00e1, 2020; Mikel Artetxe, 2019) formed using BERT / RoBERTa / XLM-RoBERTa etc. with PyTorch 13 . This framework provides an easy way of computing dense vector representations of sentences in multiple languages. They are called sentence transformers since the models are based on transformer networks like BERT / RoBERTa / XLM-RoBERTa.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Transformers",
"sec_num": "7.2.1"
},
{
"text": "We use a pre-trained sentence transformer (Nils Reimers, 2019) based cross-lingual sentence embedding system, which can take a sentence in a language and create an embedding in a multilingual space. The answer phrases and sentences are stored in a dictionary. Answers in a different language are taken as input, projected into the multilingual space, and checked for similarity against the stored Telugu answer phrase using cosine similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Transformers",
"sec_num": "7.2.1"
},
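Once both answers are embedded in the shared multilingual space, the comparison reduces to cosine similarity between two vectors. The sketch below shows that final step on plain vectors; in the actual system the vectors would come from the pretrained XLM-R sentence transformer, and the acceptance threshold here is illustrative, not a value from the paper.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def is_correct(user_vec, gold_vec, threshold=0.7):
    # Accept the answer if its embedding is close enough to the stored
    # Telugu answer's embedding; threshold is an illustrative choice.
    return cosine_similarity(user_vec, gold_vec) >= threshold
```

Because the embedding space is cross-lingual, an English answer and the stored Telugu answer phrase can score high similarity despite sharing no surface tokens.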
{
"text": "In the final system, we used syntax matching to mark the user's answer if the input is in Telugu, and used sentence transformers if the input is in any other language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Transformers",
"sec_num": "7.2.1"
},
{
"text": "We obtained results that resemble commonly used questions covering nine POS and UD tags. The questions generated by this system closely resemble the academic questions we see in textbooks. We did a manual error analysis of the question and answer pairs generated. In most cases, the system produced legible results that resemble human-made questions, but there were errors in a few complex sentences. Out of the 916 questions formed, only 34 were either completely erroneous or illegible. The rest were both grammatically correct and significant for the context of the story. The system successfully obtained all possible questions for each simple sentence, not requiring further linguistic analysis. Table 1 lists the number of times each question word occurred and the number of times it appeared wrong in the experiment with five stories. Table 2 in section 9 shows sample questions and answers generated by the system for children's stories.",
"cite_spans": [],
"ref_spans": [
{
"start": 713,
"end": 720,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 854,
"end": 862,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "8"
},
{
"text": "The questions generated by the system were manually annotated by two human evaluators with a Computational Linguistics background. The guidelines given to the evaluators were:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation Error Analysis",
"sec_num": "8.1"
},
{
"text": "\u2022 Questions with grammatical mistakes are marked as errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation Error Analysis",
"sec_num": "8.1"
},
{
"text": "\u2022 Semantic errors in question are marked as errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation Error Analysis",
"sec_num": "8.1"
},
{
"text": "\u2022 Questions that are highly irrelevant to the story are marked as errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation Error Analysis",
"sec_num": "8.1"
},
{
"text": "Errors are equally influenced by the word tags, the context of the word, and the word's position in a sentence. We analysed each and every way the errors occurred and could occur.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation Error Analysis",
"sec_num": "8.1"
},
{
"text": "Errors in \"elA\" ('how') questions are often caused by spaces between the words and suffixes in the dataset we chose. \"enni\" (quantifier-based) questions are built from diverse quantifiers (for example: time, age, number of people - these quantifiers are often written as sandhi with the word, which causes the POS tagger to give ambiguous tags) and the numerous ways of writing quantifiers in Telugu. A few quantifier question-word errors occurred due to wrong POS tagging of cross-coded words (words that are actually in English but written in Telugu script). In Telugu, two numbers are used together to represent a non-specific quantity between the two numbers (x y means from x to y); for example, \"reMDu (two) mUDu (three) nimishAlu (minutes)\" means two to three minutes. This kind of representation makes the system assume there are two quantifiers and that the sentence is eligible for two questions based on them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation Error Analysis",
"sec_num": "8.1"
},
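{
"text": "The range-quantifier ambiguity above suggests a simple pre-processing fix. The sketch below is illustrative only, not the system's actual rule set (the function name and numeral list are hypothetical): adjacent numeral tokens are merged into one range token before the question rules fire, so \"reMDu mUDu nimishAlu\" yields a single \"enni\" question instead of two.

```python
# Hypothetical sketch: collapse adjacent Telugu numerals (transliterated here)
# into one range token, so 'reMDu mUDu' ('two three' = 'two to three') is
# treated as a single quantifier rather than two.
NUMERALS = {'okaTi', 'reMDu', 'mUDu', 'nAlugu', 'aidu'}  # illustrative subset

def merge_number_ranges(tokens):
    merged, i = [], 0
    while i < len(tokens):
        # two numerals in a row denote a non-specific range, not two quantities
        if tokens[i] in NUMERALS and i + 1 < len(tokens) and tokens[i + 1] in NUMERALS:
            merged.append(tokens[i] + '-' + tokens[i + 1])  # single range token
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged
```

With such a merge, the quantifier rule would see one candidate answer phrase per range instead of two.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation Error Analysis",
"sec_num": "8.1"
},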
{
"text": "dEni\" (subject-based) questions have errors because of ambiguous suffixes and inaccuracies in UD tagging. The lack of human identification in the system made human subjects also replaceable with dEnini\" instead of evarini\". Another error was due to subjects that were nominal (names) with end syllables similar to common suffixes (which are included as word context in the rule formation). These names were split and formed incorrect question words. For example, the name Shalini\" was converted to interrogative form as dEnini\". The rest of the errors are due to wrong POS tags, cross-codes, and initials/abbreviations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation Error Analysis",
"sec_num": "8.1"
},
{
"text": "Emi\" ('what') question forms also have similar POS tags and cross-codes issues. Few of these errors occurred due to punctuation marks between the same sentence, breaking it up into multiple sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation Error Analysis",
"sec_num": "8.1"
},
{
"text": "eTuvaMTi\" ('what-kind-of') question forms run into issues where there is personification. General questions based on adjectives for humans are based on a person's subtle qualities; however, in a few cases, the adjective that was chosen is inapt to be formed into a question (less similar to human made question). The question that was formed was still grammatically correct in both human and non-human subjects; nevertheless, it is more suitable and precise for a non-human noun. For example (\u0c0e\u0c32\u0c3e\u0c02\u0c1f\u0c3f \u0c36\u0c3e\u0c32\u0c3f\u0c28\u0c3f/what kind of Shalini-\u0c2a\u0c30\u0c3f\u0c1a\u0c2f\u0c2e\u0c56 \u0c46 \u0c28 \u0c36\u0c3e\u0c32\u0c3f\u0c28\u0c3f/ the Shalini, that I know) ekkaDa\" ('where') based question forms show errors when an abstract word is used as a place, for example -In thoughts\", In that age\". Certain quantitative words in Telugu can be appended with -lO to convey meanings like in youth\", in hundreds\". They tend to pass the rules in question generation. Our list of time-related words is not exhaustive, so a few time-related words are also tagged under ekkaDa\" (place) because of the same suffix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation Error Analysis",
"sec_num": "8.1"
},
{
"text": "Most of the tags are error free except for a few ambiguous errors since the rules select answer phrases precisely or do not consider it. Some of the examples of the questions that are produced by the system are listed below in Table-2 in the appendix. The results can be improved to make the question formation more precise by increasing the number of rules by observing further data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation Error Analysis",
"sec_num": "8.1"
},
{
"text": "The anaphora resolution is a limitation in this system; thus, most of the in-appropriation in the answer section was caused due to this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation Error Analysis",
"sec_num": "8.1"
},
{
"text": "For example: Q: \u0c0e\u0c35\u0c30\u0c3f \u0c1a\u0c26\u0c41\u0c35\u0c02\u0c24\u0c3e \u0c38\u0c3f\u0c1f\u0c40\u0c32\u0c4b , \u0c26\u0c30\u0c3e \u0c1c\u0c4d \u0c17\u0c3e ... \u0c38\u0c3e\u0c17\u0c3f\u0c02\u0c26\u0c3f? Q: Whose studies got completed in the city luxuriously? A: \u0c28\u0c40 \u0c1a\u0c26\u0c41\u0c35\u0c02\u0c24\u0c3e \u0c38\u0c3f\u0c1f\u0c40\u0c32\u0c4b , \u0c26\u0c30\u0c3e \u0c1c\u0c4d \u0c17\u0c3e ... \u0c38\u0c3e\u0c17\u0c3f\u0c02\u0c26\u0c3f . A: Your studies got completed in the city luxuriously.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation Error Analysis",
"sec_num": "8.1"
},
{
"text": "In this case the question is aptly formed but the answer is slightly ill-formed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation Error Analysis",
"sec_num": "8.1"
},
{
"text": "There were few errors due to the POS tagger we used. It marked wrong POS tags for cross coded text. The error in this question and answer pair is the \"\u0c10\" 'I' which is an initial (Neelam Kumavat, I) is marked as a number.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Generation Error Analysis",
"sec_num": "8.1"
},
{
"text": "We have built a mixed rule-based and AIbased question and answer generating system with 96.28% accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "We used two methods for summarization and two similarity measures. We constructed observation-based rules for the dataset in a particular domain. There is a chance of varying results if we test this system for data in a different domain, but it gives accuracies above 95% for any data in the domain chosen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
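{
"text": "As one concrete illustration of how a similarity measure can score a learner's answer against the system's answer, the sketch below compares two sentence-embedding vectors by cosine similarity. This is a minimal sketch, not our exact implementation: the vectors would come from a multilingual sentence encoder such as Sentence-BERT, and the threshold value and function names here are illustrative assumptions.

```python
import math

def cosine_similarity(u, v):
    # cosine of the angle between two embedding vectors; 1.0 = same direction
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def answers_match(user_vec, gold_vec, threshold=0.8):
    # accept the learner's answer when its embedding lies close enough
    # to the gold answer's embedding (threshold is illustrative)
    return cosine_similarity(user_vec, gold_vec) >= threshold
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},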
{
"text": "We tested question generation in the news article domain, which gave grammatically correct questions. The error rate may increase if we use complex words and phrases that need tags beyond the proposed set of rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "We plan to extend our work to be able to include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "1. Anaphora Resolution 2. Extending to other domains 3. Cover more types of questions 4. Improving the UD tagging model For testing the meticulousness of the user, as a future task, we wish to use: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "(Hidenobu Kunichika, 2004) 3 (Maria Chinkina, 2017) 4 (Payal Khullar) 5(Hafedh Hussein, 2014)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://kathalu.wordpress.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://sivareddy.in/downloads 8 (Ani Nenkova) (Mr. Shubham Bhosale)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "(Federico Barrios, 2016) 11 https://bitbucket.org/sivareddyg/ telugu-part-of-speech-tagger/src/master/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/UniversalDependencies/ UD_Telugu-MTG(Bogdani, 2018)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "(Reimers, 2021)(Horev, 2018) (Ferreira, 2020)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "List of words related to time: '\u0c05\u0c2a\u0c41\u0c2a\u0c4d\u0c21\u0c41', '\u0c30\u0c4b\u0c1c\u0c41' , '\u0c15\u0c3e\u0c32\u0c02', '\u0c38\u0c3e\u0c2f\u0c02\u0c15\u0c3e\u0c32\u0c02', '\u0c09\u0c26\u0c2f\u0c02', '\u0c2e\u0c27\u0c3e\u0c2f\u0c4d\u0c39\u0c28\u0c4d\u0c02', '\u0c30\u0c3e\u0c24\u0c3f \u0c30\u0c4d ', '\u0c2a\u0c17\u0c32\u0c41', '\u0c28\u0c46\u0c32', '\u0c35\u0c3e\u0c30\u0c02', '\u0c38\u0c02\u0c35\u0c24\u0c38\u0c4d\u0c30\u0c02', '\u0c38\u0c42\u0c30\u0c3e\u0c2f\u0c4d\u0c38\u0c24\u0c4d \u0c2e\u0c2f\u0c02', '\u0c36\u0c41\u0c2d\u0c4b\u0c26\u0c2f\u0c02', '\u0c26\u0c3f\u0c28\u0c02', '\u0c38\u0c2e\u0c2f\u0c02', '\u0c35\u0c30\u0c24\u0c4d \u0c2e\u0c3e\u0c28\u0c02' , '\u0c2a\u0c42\u0c30\u0c35\u0c4d\u0c02', '\u0c2d\u0c35\u0c3f\u0c37\u0c2f\u0c4d\u0c24\u0c41\u0c24\u0c4d ', '\u0c38\u0c4b\u0c2e\u0c35\u0c3e\u0c30\u0c02', '\u0c2e\u0c02\u0c17\u0c33\u0c35\u0c3e\u0c30\u0c02', '\u0c2c\u0c41\u0c27\u0c35\u0c3e\u0c30\u0c02', '\u0c17\u0c41\u0c30\u0c41\u0c35\u0c3e\u0c30\u0c02', '\u0c36\u0c41\u0c15 \u0c30\u0c4d \u0c35\u0c3e\u0c30\u0c02', '\u0c36\u0c28\u0c3f\u0c35\u0c3e\u0c30\u0c02', '\u0c06\u0c26\u0c3f\u0c35\u0c3e\u0c30\u0c02', '\u0c2e\u0c3e\u0c38\u0c02' Translations Then, day, time period, evening, morning, afternoon, night, morning(synonym), month, week, year, sunset, sunrise, day(synonym), time, present, past, future, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday, month(synonym). Table 3 : This set comprises of the time-related words that have a high chance of being used in a storybook. Q:What kind of sack was hard to carry? A:That much of a heavy sack was hard to carry. Q:In the market how was he buying sandals, clothes, bangles, fruits, utensils -and sold them in the village? A:In the market how was buying sandals, clothes, bangles, fruits, utensils for cheap rates and sold them in the village. Q:Packing all the things, putting them on the donkey, from market to village, from village to whose house was he taking them? 
A:Packing all the things, putting them on the donkey, from market to village, from village to his own house he was taking them. Q:How did the innocent sparrow believe the crows without even asking why and where? A:The innocent sparrow believed the crows blindly without even asking why and where. Q:Instead of believing the sparrow, looking at it with disgust, how many times did they beat it? A:Instead of believing the sparrow, looking at it with disgust, they beat it 2 times. Q:Did the sparrow make friends with the crows? A:Yes, the sparrow made friends with the crows. Q:Once upon a time, where was the innocent sparrow living? A:Once upon a time, the innocent sparrow was living in a village. Q:What did the sparrow say pleadingly? A:The sparrow said pleadingly, \"No! no! I didn't do any mistake, I'm innocent, I did nothing, please leave me.\" Table 4 : Translations of the results in Table 2 in section 9",
"cite_spans": [
{
"start": 42,
"end": 311,
"text": "'\u0c30\u0c4b\u0c1c\u0c41' , '\u0c15\u0c3e\u0c32\u0c02', '\u0c38\u0c3e\u0c2f\u0c02\u0c15\u0c3e\u0c32\u0c02', '\u0c09\u0c26\u0c2f\u0c02', '\u0c2e\u0c27\u0c3e\u0c2f\u0c4d\u0c39\u0c28\u0c4d\u0c02', '\u0c30\u0c3e\u0c24\u0c3f \u0c30\u0c4d ', '\u0c2a\u0c17\u0c32\u0c41', '\u0c28\u0c46\u0c32', '\u0c35\u0c3e\u0c30\u0c02', '\u0c38\u0c02\u0c35\u0c24\u0c38\u0c4d\u0c30\u0c02', '\u0c38\u0c42\u0c30\u0c3e\u0c2f\u0c4d\u0c38\u0c24\u0c4d \u0c2e\u0c2f\u0c02', '\u0c36\u0c41\u0c2d\u0c4b\u0c26\u0c2f\u0c02', '\u0c26\u0c3f\u0c28\u0c02', '\u0c38\u0c2e\u0c2f\u0c02', '\u0c35\u0c30\u0c24\u0c4d \u0c2e\u0c3e\u0c28\u0c02' , '\u0c2a\u0c42\u0c30\u0c35\u0c4d\u0c02', '\u0c2d\u0c35\u0c3f\u0c37\u0c2f\u0c4d\u0c24\u0c41\u0c24\u0c4d ', '\u0c38\u0c4b\u0c2e\u0c35\u0c3e\u0c30\u0c02', '\u0c2e\u0c02\u0c17\u0c33\u0c35\u0c3e\u0c30\u0c02', '\u0c2c\u0c41\u0c27\u0c35\u0c3e\u0c30\u0c02', '\u0c17\u0c41\u0c30\u0c41\u0c35\u0c3e\u0c30\u0c02', '\u0c36\u0c41\u0c15 \u0c30\u0c4d \u0c35\u0c3e\u0c30\u0c02', '\u0c36\u0c28\u0c3f\u0c35\u0c3e\u0c30\u0c02', '\u0c06\u0c26\u0c3f\u0c35\u0c3e\u0c30\u0c02', '\u0c2e\u0c3e\u0c38\u0c02'",
"ref_id": null
}
],
"ref_spans": [
{
"start": 562,
"end": 569,
"text": "Table 3",
"ref_id": null
},
{
"start": 1961,
"end": 1968,
"text": "Table 4",
"ref_id": null
},
{
"start": 2002,
"end": 2009,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appendix",
"sec_num": "10"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The impact of frequency on summarization",
"authors": [
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucy Vanderwende Ani Nenkova. The impact of fre- quency on summarization.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Parts of speech tagging using lstm (long short term memory) neural networks",
"authors": [
{
"first": "",
"middle": [],
"last": "Bogdani",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bogdani. 2018. Parts of speech tagging using lstm (long short term memory) neural networks.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Variations of the similarity function of textrank for automated summarization",
"authors": [
{
"first": "Luis Argerich Rosita Wachenchauzer Federico",
"middle": [],
"last": "Barrios",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Lt",
"suffix": ""
},
{
"first": "'",
"middle": [],
"last": "Opez",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Argerich Rosita Wachenchauzer Federico Barrios, Federico Lt'opez. 2016. Variations of the similarity function of textrank for automated summarization.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "What are sentence embeddings and why are they useful?",
"authors": [
{
"first": "Diogo",
"middle": [],
"last": "Ferreira",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diogo Ferreira. 2020. What are sentence embeddings and why are they useful?",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatic english question generation system based on template driven scheme",
"authors": [
{
"first": "Shawkat",
"middle": [],
"last": "Guirguis",
"suffix": ""
},
{
"first": "Hafedh",
"middle": [],
"last": "Hussein",
"suffix": ""
},
{
"first": "Mohammed",
"middle": [],
"last": "Elmogy",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shawkat Guirguis Hafedh Hussein, Mohammed Elm- ogy. 2014. Automatic english question generation system based on template driven scheme.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automated question generation methodsfor intelligent english learning systems and its evaluation",
"authors": [],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsukasa Hirashima Akira Takeuchi Hidenobu Ku- nichika, Tomoki Katayama*. 2004. Automated question generation methodsfor intelligent english learning systems and its evaluation.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic question-answer pairs generation from text",
"authors": [
{
"first": "Agus Gunawan Holy",
"middle": [],
"last": "Lovenia",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Limanta",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.13140/RG.2.2.33776.92162"
]
},
"num": null,
"urls": [],
"raw_text": "Agus Gunawan Holy Lovenia, Felix Limanta. 2018. Automatic question-answer pairs generation from text.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Song. A practical qa system in restricted domains",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyoung-Soo Han Do-Sang Yoon Joo-Young Lee Hae- Chang Rim Hoojung Chung, Young-In Song. A practical qa system in restricted domains.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bert explained: State of the art language model for nlp",
"authors": [
{
"first": "Rani",
"middle": [],
"last": "Horev",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rani Horev. 2018. Bert explained: State of the art lan- guage model for nlp.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Gorka Labaka-Eneko Agirre Ondej Bojar Ivana Kvapil\u00edkov\u00e1, Mikel Artetxe. 2020. Unsupervised multilingual sentence embeddings for parallel corpus mining",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-srw.34"
]
},
"num": null,
"urls": [],
"raw_text": "Gorka Labaka-Eneko Agirre Ondej Bojar Ivana Kva- pil\u00edkov\u00e1, Mikel Artetxe. 2020. Unsupervised mul- tilingual sentence embeddings for parallel corpus mining.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An introduction to text summarization using the textrank algorithm",
"authors": [
{
"first": "Prateek",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prateek Joshi. 2018. An introduction to text summa- rization using the textrank algorithm (with python implementation).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Understand textrank for keyword extraction by python: A scratch implementation by python and spacy to help you understand pagerank and textrank for keyword extraction",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu Liang. 2019. Understand textrank for keyword ex- traction by python: A scratch implementation by python and spacy to help you understand pagerank and textrank for keyword extraction.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Question generation for language learning: From ensuring texts are read to supporting learning",
"authors": [
{
"first": "Detmar",
"middle": [],
"last": "Meurers",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Chinkina",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Detmar Meurers Maria Chinkina. 2017. Question gen- eration for language learning: From ensuring texts are read to supporting learning.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Massively multilingual sentence embeddings for zero-shot crosslingual transfer and beyond",
"authors": [
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00288"
]
},
"num": null,
"urls": [],
"raw_text": "Holger Schwenk Mikel Artetxe. 2019. Massively mul- tilingual sentence embeddings for zero-shot cross- lingual transfer and beyond.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic text summarization based on frequency",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ms. Vrushali Bhise Rushali A. Deshmukh Mr. Shub- ham Bhosale, Ms. Diksha Joshi. Automatic text summarization based on frequency count for marathi e-newspaper.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Sentence embedding: A literature review -towards data science",
"authors": [
{
"first": "Nishant",
"middle": [],
"last": "Nikhil",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nishant Nikhil. 2017. Sentence embedding: A litera- ture review -towards data science.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Sentence-bert: Sentence embeddings using siamese bert-networks",
"authors": [],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iryna Gurevych Nils Reimers. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Understanding of the importance of mother tongue learning",
"authors": [
{
"first": "Rajathurai",
"middle": [],
"last": "Nishanthi",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajathurai Nishanthi. 2020. Understanding of the im- portance of mother tongue learning.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Automatic question generation using relative pronouns and adverbs",
"authors": [
{
"first": "Konigari",
"middle": [],
"last": "Mukul Hase Manish Shrivastava Payal Khullar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rachna",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P18-3022"
]
},
"num": null,
"urls": [],
"raw_text": "Mukul Hase Manish Shrivastava Payal Khullar, Koni- gari Rachna. Automatic question generation using relative pronouns and adverbs.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dialogue based question answering system in telugu",
"authors": [],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sivaji Bandyopadhyay Rami Reddy Nandi Reddy. 2006. Dialogue based question answering system in telugu.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Sentencetransformers documentation -sentence-bert: Sentence embeddings using siamese bert-networks",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers. 2021. Sentencetransformers documen- tation -sentence-bert: Sentence embeddings using siamese bert-networks.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "reasons why the nep's move to teaching in mother tongue could transform teaching and learning in india",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roshni. 2020. 8 reasons why the nep's move to teach- ing in mother tongue could transform teaching and learning in india.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Automatic question and answer generation from bengali and english texts",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Md. Sajjatul Islam Md. Shahnur Azad Chowdhury-Md. Jiabul Hoque Shudipta Sharma, Muhammad Ka- mal Hossen. Automatic question and answer gen- eration from bengali and english texts.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Cross language pos taggers (and other tools) for indian languages: An experiment with kannada using telugu resources",
"authors": [
{
"first": "Serge Sharoff Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Serge Sharoff Siva Reddy. 2011. Cross language pos taggers (and other tools) for indian languages: An experiment with kannada using telugu resources.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "For example: Q: \u0c28\u0c40\u0c32\u0c02 \u0c15\u0c41\u0c2e\u0c3e\u0c35\u0c24\u0c4d, \u0c0e\u0c28\u0c3f\u0c28\u0c4d? Q: Neelam Kumawath, how many? A: \u0c28\u0c40\u0c32\u0c02 \u0c15\u0c41\u0c2e\u0c3e\u0c35\u0c24\u0c4d , \u0c10 . A: Neelam Kumawath, I."
},
"TABREF2": {
"type_str": "table",
"text": "",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF3": {
"type_str": "table",
"text": "\u0c0e\u0c1f\u0c41\u0c35\u0c02\u0c1f\u0c3f \u0c2e\u0c4b\u0c1f \u0c24\u0c4b \u0c35\u0c02\u0c17\u0c21\u0c02 \u0c15\u0c37 \u0c1f\u0c4d \u0c02\u0c17\u0c3e \u0c35\u0c41\u0c02\u0c26\u0c3f? A: \u0c05\u0c02\u0c24 \u0c2a\u0c46\u0c26 \u0c26\u0c4d \u0c2e\u0c4b\u0c1f \u0c24\u0c4b \u0c35\u0c02\u0c17\u0c21\u0c02 \u0c15\u0c37 \u0c1f\u0c4d \u0c02\u0c17\u0c3e \u0c35\u0c41\u0c02\u0c26\u0c3f Q: \u0c1a\u0c46\u0c2a\u0c41\u0c2a\u0c4d\u0c32\u0c41 , \u0c2c\u0c1f \u0c1f\u0c4d \u0c32\u0c41 , \u0c17\u0c3e\u0c1c\u0c41\u0c32\u0c41 , \u0c2a\u0c33\u0c41\u0c33\u0c4d, \u0c17\u0c3f\u0c28\u0c46\u0c28\u0c4d\u0c32\u0c41 \u0c2c\u0c1c\u0c3e\u0c30\u0c41\u0c32\u0c4b \u0c0e\u0c32\u0c3e \u0c15\u0c4a\u0c28\u0c3f , \u0c0a\u0c33\u0c4b\u0c33\u0c4d \u0c07\u0c02\u0c1f\u0c3f\u0c02\u0c1f\u0c3f\u0c15\u0c3f \u0c35\u0c46\u0c33\u0c3f \u0c32\u0c4d \u0c05\u0c2e\u0c41\u0c2e\u0c4d\u0c15\u0c41\u0c28\u0c47 \u0c35\u0c3e\u0c21\u0c41? A: \u0c1a\u0c46\u0c2a\u0c41\u0c2a\u0c4d\u0c32\u0c41 , \u0c2c\u0c1f \u0c1f\u0c4d \u0c32\u0c41 , \u0c17\u0c3e\u0c1c\u0c41\u0c32\u0c41 , \u0c2a\u0c33\u0c41\u0c33\u0c4d , \u0c17\u0c3f\u0c28\u0c46\u0c28\u0c4d\u0c32\u0c41 \u0c2c\u0c1c\u0c3e\u0c30\u0c41\u0c32\u0c4b \u0c1a\u0c35\u0c15\u0c17\u0c3e \u0c15\u0c4a\u0c28\u0c3f , \u0c0a\u0c33\u0c4b\u0c33\u0c4d \u0c07\u0c02\u0c1f\u0c3f\u0c02\u0c1f\u0c3f\u0c15\u0c3f \u0c35\u0c46\u0c33\u0c3f \u0c32\u0c4d \u0c05\u0c2e\u0c41\u0c2e\u0c4d\u0c15\u0c41\u0c28\u0c47 \u0c35\u0c3e\u0c21\u0c41 Q: \u0c38\u0c3e\u0c2e\u0c3e\u0c28 \u0c32\u0c4d \u0c28\u0c40\u0c28\u0c4d \u0c2e\u0c4b\u0c1f \u0c15\u0c1f\u0c3f \u0c1f\u0c4d , \u0c17\u0c3e\u0c21\u0c3f\u0c26 \u0c2e\u0c40\u0c26 \u0c35\u0c47\u0c38\u0c3f , \u0c2c\u0c1c\u0c3e\u0c30\u0c41 \u0c28\u0c41\u0c02\u0c1a\u0c3f \u0c0a\u0c33\u0c4b\u0c33\u0c4d , \u0c0a\u0c33\u0c4b \u0c32\u0c4d \u0c28\u0c41\u0c02\u0c1a\u0c3f \u0c24\u0c3f\u0c30\u0c3f\u0c17\u0c3f \u0c0e\u0c35\u0c30\u0c3f \u0c07\u0c02\u0c1f\u0c3f\u0c15\u0c3f \u0c24\u0c3f\u0c2a\u0c47\u0c2a\u0c4d\u0c35\u0c3e\u0c21\u0c41? 
A: \u0c38\u0c3e\u0c2e\u0c3e\u0c28 \u0c32\u0c4d \u0c28\u0c40\u0c28\u0c4d \u0c2e\u0c4b\u0c1f \u0c15\u0c1f\u0c3f \u0c1f\u0c4d , \u0c17\u0c3e\u0c21\u0c3f\u0c26 \u0c2e\u0c40\u0c26 \u0c35\u0c47\u0c38\u0c3f , \u0c2c\u0c1c\u0c3e\u0c30\u0c41 \u0c28\u0c41\u0c02\u0c1a\u0c3f \u0c0a\u0c33\u0c4b\u0c33\u0c4d , \u0c0a\u0c33\u0c4b \u0c32\u0c4d \u0c28\u0c41\u0c02\u0c1a\u0c3f \u0c24\u0c3f\u0c30\u0c3f\u0c17\u0c3f \u0c05\u0c24\u0c28\u0c3f \u0c07\u0c02\u0c1f\u0c3f\u0c15\u0c3f \u0c24\u0c3f\u0c2a\u0c47\u0c2a\u0c4d\u0c35\u0c3e\u0c21\u0c41 Q: \u0c05\u0c2e\u0c3e\u0c2f\u0c15 \u0c2a\u0c3f\u0c1a\u0c41\u0c15 \u0c0e\u0c15\u0c15\u0c4d\u0c21\u0c15\u0c3f, \u0c0e\u0c02\u0c26\u0c41\u0c15\u0c41 \u0c05\u0c28\u0c3f \u0c05\u0c21\u0c17\u0c15\u0c41\u0c02\u0c21\u0c3e, \u0c06 \u0c15\u0c3e\u0c15\u0c41\u0c32\u0c28\u0c41 \u0c17\u0c41\u0c21\u0c3f \u0c21\u0c4d \u0c17\u0c3e \u0c28\u0c2e\u0c3f\u0c2e\u0c4d \u0c0f\u0c2e\u0c3f \u0c1a\u0c47\u0c38\u0c3f\u0c02\u0c26\u0c3f? A: \u0c05\u0c2e\u0c3e\u0c2f\u0c15 \u0c2a\u0c3f\u0c1a\u0c41\u0c15 \u0c0e\u0c15\u0c15\u0c4d\u0c21\u0c15\u0c3f, \u0c0e\u0c02\u0c26\u0c41\u0c15\u0c41 \u0c05\u0c28\u0c3f \u0c05\u0c21\u0c17\u0c15\u0c41\u0c02\u0c21\u0c3e, \u0c06 \u0c15\u0c3e\u0c15\u0c41\u0c32\u0c28\u0c41 \u0c17\u0c41\u0c21\u0c3f \u0c21\u0c4d \u0c17\u0c3e \u0c28\u0c2e\u0c3f\u0c2e\u0c4d \u0c35\u0c3e\u0c1f\u0c3f\u0c24\u0c4b \u0c35\u0c46\u0c33\u0c3f\u0c33\u0c4d\u0c02\u0c26\u0c3f. \u0c2a\u0c3f\u0c1a\u0c41\u0c15 \u0c2e\u0c3e\u0c1f \u0c28\u0c2e\u0c2e\u0c4d\u0c32\u0c47\u0c26\u0c41 \u0c15\u0c26\u0c3e , \u0c26\u0c3e\u0c28\u0c3f \u0c35\u0c56 \u0c46 \u0c2a\u0c41 \u0c05\u0c38\u0c39\u0c2f\u0c4d\u0c02\u0c17\u0c3e \u0c1a\u0c42\u0c38\u0c3f \u0c2e\u0c30\u0c4b \u0c0e\u0c28\u0c3f\u0c28\u0c4d \u0c26\u0c46\u0c2c\u0c2c\u0c4d\u0c32\u0c41 \u0c35\u0c47\u0c38\u0c3e\u0c30\u0c41? 
A: \u0c2a\u0c3f\u0c1a\u0c41\u0c15 \u0c2e\u0c3e\u0c1f \u0c28\u0c2e\u0c2e\u0c4d\u0c32\u0c47\u0c26\u0c41 \u0c15\u0c26\u0c3e , \u0c26\u0c3e\u0c28\u0c3f \u0c35\u0c56 \u0c46 \u0c2a\u0c41 \u0c05\u0c38\u0c39\u0c2f\u0c4d\u0c02\u0c17\u0c3e \u0c1a\u0c42\u0c38\u0c3f \u0c2e\u0c30\u0c4b \u0c30\u0c46\u0c02\u0c21\u0c41 \u0c26\u0c46\u0c2c\u0c2c\u0c4d\u0c32\u0c41 \u0c35\u0c47\u0c38\u0c3e\u0c30\u0c41 Q: \u0c06 \u0c15\u0c3e\u0c15\u0c41\u0c32\u0c24\u0c4b \u0c2a\u0c3f\u0c1a\u0c41\u0c15\u0c15\u0c3f \u0c38\u0c47\u0c28\u0c4d\u0c39\u0c02 \u0c05\u0c2f\u0c3f\u0c2f\u0c4d\u0c02\u0c26\u0c3e? A: \u0c05\u0c35\u0c41\u0c28\u0c41, \u0c06 \u0c15\u0c3e\u0c15\u0c41\u0c32\u0c24\u0c4b \u0c2a\u0c3f\u0c1a\u0c41\u0c15\u0c15\u0c3f \u0c38\u0c47\u0c28\u0c4d\u0c39\u0c02 \u0c05\u0c2f\u0c3f\u0c2f\u0c4d\u0c02\u0c26\u0c3f. Q: \u0c12\u0c15\u0c3e\u0c28\u0c4a\u0c15\u0c2a\u0c41\u0c2a\u0c4d\u0c21\u0c41 \u0c0e\u0c15\u0c15\u0c4d\u0c21 \u0c12\u0c15 \u0c05\u0c2e\u0c3e\u0c2f\u0c15\u0c2a\u0c41 \u0c2a\u0c3f\u0c1a\u0c41\u0c15 \u0c35\u0c41\u0c02\u0c21\u0c47\u0c26\u0c3f? A:\u0c12\u0c15\u0c3e\u0c28\u0c4a\u0c15\u0c2a\u0c41\u0c2a\u0c4d\u0c21\u0c41 \u0c12\u0c15 \u0c0a\u0c30\u0c3f\u0c32\u0c4b \u0c12\u0c15 \u0c05\u0c2e\u0c3e\u0c2f\u0c15\u0c2a\u0c41 \u0c2a\u0c3f\u0c1a\u0c41\u0c15 \u0c35\u0c41\u0c02\u0c21\u0c47\u0c26\u0c3f. Q: \u0c0f\u0c2e\u0c28\u0c3f \u0c2a\u0c3f\u0c1a\u0c41\u0c15 \u0c2a\u0c3e \u0c30\u0c4d \u0c27\u0c47\u0c2f \u0c2a\u0c21\u0c3f\u0c02\u0c26\u0c3f? A:\u0c2c\u0c3e\u0c2c\u0c4b\u0c2f\u0c4d! \u0c2c\u0c3e\u0c2c\u0c4b\u0c2f\u0c4d! \u0c28\u0c3e \u0c24\u0c2a\u0c47\u0c2a\u0c4d\u0c2e\u0c40 \u0c32\u0c47\u0c26\u0c41, \u0c28\u0c47\u0c28\u0c41 \u0c05\u0c2e\u0c3e\u0c2f\u0c15\u0c41\u0c30\u0c3e\u0c32\u0c3f\u0c28\u0c3f, \u0c28\u0c47\u0c28\u0c47\u0c2e\u0c40 \u0c1a\u0c47\u0c2f\u0c32\u0c47\u0c26\u0c41, \u0c28\u0c28\u0c41\u0c28\u0c4d \u0c35\u0c26\u0c3f\u0c32\u0c47\u0c2f\u0c02\u0c21\u0c3f! \u0c05\u0c28\u0c3f \u0c2a\u0c3f\u0c1a\u0c41\u0c15 \u0c2a\u0c3e \u0c30\u0c4d \u0c27\u0c47\u0c2f \u0c2a\u0c21\u0c3f\u0c02\u0c26\u0c3f.",
"content": "<table><tr><td>1. Questions on minor details</td></tr><tr><td>2. NE (Named Entities) and CN (Common</td></tr><tr><td>Nouns)</td></tr><tr><td>Q:</td></tr></table>",
"html": null,
"num": null
},
"TABREF4": {
"type_str": "table",
"text": "Sample questions generated by the system",
"content": "<table/>",
"html": null,
"num": null
}
}
}
}