id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses, 4 values) |
---|---|---|---|
train_95800 | We used the parallel corpus developed by Farhath et al. | hence checking the casecount combinations of a word when substituting, helps to preserve language semantics of the generated sentence. | neutral |
train_95801 | That is, each word begins with a 'd' sound and it contains a consonant cluster 'nt'. | some data points in BDPROTO represent root-level language family nodes, such as Indo-European. | neutral |
train_95802 | Even if we followed a linguistic approach there were 16,473 unique lemmas in the training dataset. | we use string-wise micro averaged precision, recall and F-Score to evaluate our model as is the standard with evaluating word segmentation models. | neutral |
train_95803 | Everyday language is built up of prefabricated parts and templates that form a speaker's individual discourse experience (Hopper, 1998;MacWhinney, 2001;Bybee and McClelland, 2005). | the Constructicon lists constructions, such as the Way_manner construction (e.g., She whistled her way down the lane), and lists the roles associated with the construction and the Construction Evoking Element (CEE) (e.g., one's way in this context). | neutral |
train_95804 | Furthermore, a goal of AMR is to provide consistent semantic representation despite language-specific syntactic idiosyncracies. | using keyword searches over the annotations and text (e.g., 'Compared-to'), we discovered an initial set of about 4,600 annotations that potentially needed retrofitting for Have-Degree/Quant-91 (the remaining cases were found to be simple usages of degree or quantity modifiers, such as very, which remain unchanged), and about 30 potential Correlate-91 cases. | neutral |
train_95805 | Nonetheless, the selection of the appropriate root is necessarily somewhat subjective and remains a source of disagreement throughout other annotations as well. | representing the meanings associated with fully syntactic patterns required a novel annotation approach. | neutral |
train_95806 | Moreover, such manual resources need extra effort to be maintained and updated to integrate new senses and words appearing in everyday language. | we chose wikipedia in the language L as raw corpus C L and BabelNet as the underlying semantic graph G because both are available for all the 6 languages of interest. | neutral |
train_95807 | The toolkit comes with scripts to download and preprocess datasets, and an easy interface to evaluate sentence encoders. | over the years, something of a consensus has been established, mostly based on the evaluations in seminal papers such as SkipThought (Kiros et al., 2015), concerning what evaluations to use. | neutral |
train_95808 | We use the average of Pearson correlations for STS'12 to STS'16 which are composed of several subtasks. | it has been argued that the focus should be on downstream tasks where these representations would actually be applied (Ettinger et al., 2016;Nayak et al., 2016). | neutral |
train_95809 | An alternative approach to using word embeddings to extend an existing WordNet has been described by Al tarouti and Kalita (2016), who in fact use word embeddings to extend an automatically-constructed Arabic WordNet built using the machine translation / bilingual dictionary method described by Lam et al. | an extension of this work specifically aimed at lesser-resourced languages was also described, in which a Persian WordNet is constructed by finding the English translations of Persian words in small corpora using a bilingual dictionary. | neutral |
train_95810 | Murder, violence, flashback, and romantic are the most frequent four tags in the corpus that are assigned to 5,732; 4,426; 2,937 and 2,906 movies respectively. | folksonomy (Vander Wal, 2005), also known as collaborative tagging or social tagging, is a popular way to gather community feedback about online items in the form of tags. | neutral |
train_95811 | "), the subject of "没"(not-exist, i.e., "vanish") is omitted, which causes the problem that most parser will treat "这样"(this way, i.e., "this way") as the subject. | the annotation file is of the same format with the brat's default format. | neutral |
train_95812 | The concept is overlapped with the concept of ellipsis, more specifically contained in the concept of ellipsis. | most of the time, we don't know what the exact wording of the omission is, but we are aware that it must be a noun and represent a thing. | neutral |
train_95813 | The performance of ecp improves with decreasing time interval. | to establish a benchmark on detecting changes in online conversation, we collected a dataset of 16 sport events, with reference change points for each event. | neutral |
train_95814 | Event detection detects either emerging events or specific events from raw stream data. | more analysis would be required to check, e.g. | neutral |
train_95815 | As algorithms handle multivariate signal, we can also add counts as a fourth time series, in addition to the sentiment scores. | ecp detects too many changes. | neutral |
train_95816 | Coefficients range between 0.66 and 0.73 depending on the temporal tag. | also, we can infer the location of the author with greater certainty for before rather than after posting the tweet. | neutral |
train_95817 | Note that annotators almost never disagree between (CY, PY) and (CN, PN). | to their work, we (a) present a corpus with few invalid locations (⇡ 6%), and (b) work with finer-grained temporal information (when somebody tweets, within 24 hours before and after he tweeted, and longer than 24 hours before and after he tweeted). | neutral |
train_95818 | Similar observations are true of an average case of literary encoding, to which grammatical information gets added for the purpose of enhancing searches or for basic measurements -it can be added as a separate document, i.e. | apart from @lemma and @lemmaRef, up till now TEI encoders could only resort to using the generic attribute @ana for inline linguistic annotation, or to the quite complex system of feature structures for robust linguistic annotation, the latter requiring relatively complex processing even for the most basic types of linguistic features. | neutral |
train_95819 | 2 1 The changes were merged in a pull request which also references detailed discussion, see https://github.com/TEIC/TEI/pull/1671 and issue #1670. | 13 with the dependency captured at the level of markup while the representation proposed here is able to handle mild deviations from the 1:1 correspondence between word forms and tokens, it is not sufficient for handling complex multi-word units or for syntactic description -these require more powerful descriptive mechanisms. | neutral |
train_95820 | We hope to introduce the head directionality parameter for UD guidelines. | a bunsetsu dependency-based treebank does not include the syntactic relation information. | neutral |
train_95821 | We use Rhetorical Structure Theory (Mann and Thompson, 1988) as the theoretical framework of the corpus. | the relational taxonomy used in our annotation is the one used in the PCC (Stede, 2016), which is based on the relation set proposed in the original RSt paper (Mann and thompson, 1988). | neutral |
train_95822 | The first component analyzes candidate words within a sentence and duplicates the sentence if there are two or more words marked as candidate and not all of them are verbs. | the information contains either new position in the sentence, parent token and dependency relation, or a marker that the token should be deleted. | neutral |
train_95823 | We iteratively adjusted the conversion rules, manually checking output samples, making the rules more strict and precise, and re-running them. | all input data are in the CoNLL-U format. | neutral |
train_95824 | It matches the pattern because 1. its "root" node is a verb; 2. the verb has an "aux" child; 3. the verb is linked with another clause via a "conj" relation; 4. the other clause is headed by an adjective and has a "cop" (copula) dependent; 5. both clauses contain a "nsubj" (subject) and an "advmod" (adverbial modifier). | we currently do not check prepositions in English; instead, we manually fix sentences where prepositions are not compatible. | neutral |
train_95825 | Later, we start the network construction by making an attempt to find a base word for each lexeme. | to the best of our knowledge, it is the biggest language resource of derivational morphology for Polish. | neutral |
train_95826 | Although their approach is completely unsupervised, it requires POS tagging and, particularly for Spanish, it provides a considerably lower accuracy of extracted relations than for the other languages (73% for Spanish compared with 98% for English and German). | we have also visualized some parts of the Spanish word-Formation Network. | neutral |
train_95827 | Polish WordNet contains many relations which store information about derived words such as "feminity" which links masculine nouns with its feminine counterparts, "inhabitant" which connects geographical names with the name of their inhabitants, "aspectuality" which relates verbs of different aspects and many others. | in recent years researchers noticed the potential of derivational morphology to improve the performance in many important areas of NLP, which caused the development of novel language resources which focus on word formation. | neutral |
train_95828 | We store the corresponding wordform, category, morphological features, source lemma, UD lemma 5-tuples for later use. | each language makes use of its own guidelines regarding the inventory of categories and the detailed definition of morphological features and feature values. | neutral |
train_95829 | In Table 2 we use the code "TRsource language" to identify lexicons created in this cross-language way. | apertium does include a morphological lexicon for Czech, a language closely related to Slovak. | neutral |
train_95830 | Table 1 shows a summary of the current resource sizes of selected languages, along with the number of distinct inflections covered, and the number of expanded phrasal glosses generated given multiple translations per lemma. | a full specification of the UniMorph annotation schema is available. | neutral |
train_95831 | To correct these errors, we noted that for each part-ofspeech within a language in Wiktionary, authors use only a handful of distinct table layouts. | determine which entries in an HTML table are inflected forms and which are grammatical descriptors. | neutral |
train_95832 | The verbal morphology of Upper Tanana is typical of a Dene language (Rice, 2000), and often involves a complex interweaving of non-continuous lexical, derivational, and inflectional prefix sequences. | second, and more importantly, this approach allows us to model significant aspects of the derivational system of Upper Tanana. | neutral |
train_95833 | (2016) and detailed in Arppe et al. | a lexical entry for a verb like na#D+kuyh 'vomit' (imperfective), which contains only the stem kuyh and a single lexical disjunct prefix na-, can be specified as na=kuyh, and subsequently expanded by the model automatically into na=_.kuyh. | neutral |
train_95834 | Since the true tags are available neither in automatic nor manual annotation, we select 34 Polish words which are often abbreviated 4.1., look up their inflected forms in the Polimorf morphological dictionary (Wolinski et al., 2012), and gather sentences which contain at least one of these words. | a different branch of text normalization is focused on abbreviation discovery. | neutral |
train_95835 | abstract goes always after title detection, bibliographic entries go always after sections' textual content). | effective tools to extract structured textual content from PDF files represent a key technology to enable scientific text mining (Ronzano and Saggion, 2016). | neutral |
train_95836 | In order to provide experts with a web tool to process big data related to their domain, the TERRE-ISTEX approach was improved. | the MODS-TI data model expands the MODS format to describe spatial, temporal, and thematic entities extracted from documents. | neutral |
train_95837 | The vocal (verbal and prosodic) annotation used in this project follows the principles previously developed for spoken Russian discourse, for more detail see Kibrik, Podlesskaja, 2009 and the website spokencorpora.ru. | in addition, the camera GoPro Hero 4 (50 frames per second and 2700×1500 pixels) was used to record the whole scene (see Fig. | neutral |
train_95838 | Many prosodic phenomena are of a relative, rather than absolute, nature: their specific realizations can be assessed and identified only with respect to neutral characteristics of a given speaker's voice. | during an interactive stage (conversation), the Commentator supplied additional details and corrected the Narrator's story where necessary; the Reteller checked his/her understanding of the plot, asking questions to the Narrator and the Commentator. | neutral |
train_95839 | Discourse structure analysis not only helps to understand the discourse structure and semantics, but also provides strong support for deep applications of natural language processing, such as automatic summarization, statistical machine translation, question and answering, etc. | in addition to the logical semantic structure, we also define the functional pragmatic structure. | neutral |
train_95840 | The edges connect the discourse units, while the arrows pointing to the primary discourse units. | discourse structures vary if the genres are different. | neutral |
train_95841 | Only 4 dialogues out of the 54 were held between subjects with the same mother tongue. | as we have shown in our preliminary analyses, this data offers various possibilities regarding the study of mechanisms that regulate human communication. | neutral |
train_95842 | Despite the recent advances in the field of Spoken Dialogue Systems (SDSs), non task-oriented spontaneous dialogue is still a very challenging problem since its structure is often difficult to represent, unlike task-oriented dialogues which could easily be represented by a flow chart. | this JSON files contains all the dialogue ids and a link to a JSON set file. | neutral |
train_95843 | Whenever the audio for the whole set was available, all the turns in the set, including those where participants are refining their strategy between scenes are included, with information about time boundaries, speaker, turn index, topic and rich transcriptions. | the scene sea ( Figure 11) has 9 objects in the XML description, but the entropy value in 3.99 (median value is 3.57). | neutral |
train_95844 | The insight into these many works is that neural networks are better suited at capturing semantic clues between the two arguments of an implicit relation than traditional methods heavily reliant on feature engineering, as in (Pitler et al., 2009;Xue et al., 2015). | unlike these methods we make no feature engineering. | neutral |
train_95845 | We run each experiment for ten times and take the average. | the hidden vector s t obtained after the last character is called the last feature vector, as it stores the information related to the character language model and the sentiment of the utterance. | neutral |
train_95846 | This gives evidence that when we process the text of a conversation, we can see the context of a current utterance in the preceding utterances. | it was shown that the average vector over all characters in the utterance works better for emotion detection (Lakomkin et al., 2017). | neutral |
train_95847 | (a) Visualization Modes Representing data by means of alternative modes helps in understanding and interpreting the subject matter. | any annotation can be revised during the actual session. | neutral |
train_95848 | This task has been reproduced with variations on the language (Cholakov et al., 2014;Fabre et al., 2014) or the size of the dataset (Kremer et al., 2014). | by beginning to identify the main differences in terms of substitutes proposed by humans and NLP systems, we can complete the initial analysis proposed by (Tanguy et al., 2016) who found that there are also important differences in the difficulty encountered for specific target sentences. | neutral |
train_95849 | For example, considering again sentence 208, the word écartement, which is a morphological variant of the substitute écart, appears as a new valid substitute. | following (McCarthy and Navigli, 2009) we created a new annotation task mixing man-made and automatically retrieved substitutes. | neutral |
train_95850 | This is another characteristic that explains why especially the more advanced classifier methods work better when trained on Gulli's and applied to Bernard. | yucaipa owned Dominick's before selling the chain to Safeway in 1998 for $2.5 billion. | neutral |
train_95851 | The DT classifier performs particularly bad especially for the cross-application task. | only basic NLP techniques (e.g., checking similarity thresholds over string-and n-gram-shingles), which are independent from an advanced global knowledge or training experience, are applicable for analyzing historical corpora (Büchler et al., 2010). | neutral |
train_95852 | Wikipedia has very valuable translation texts since these translations were manually made by editors. | the article ends with some conclusions and future directions in Section 4.. For the creation of the proto-dictionaries, we applied several lexicon building methods utilizing Wikipedia and Wiktionary. | neutral |
train_95853 | We also want to convert our data into the data format following the conventions of linguistic linked open data and provide them via our web site 3 or via the repositories of dictionary families such as Giellatekno 4 . | these methods need a large amount of (pre-processed) data and a seed lexicon which is then used to acquire additional translations of the context words. | neutral |
train_95854 | In section 4, the evaluation results are presented and discussed according to different aspects. | the system behavior was studied and validated on different corpora and speech styles. | neutral |
train_95855 | Our comparative analysis reveals the differences between crowdsourced data and system usage data. | the score of an utterance x is calculated such that the impact of the utterance length U L is normalized: the results are filtered by an experimentally determined threshold value. | neutral |
train_95856 | Nevertheless, it is difficult to decide whether the scores they report are satisfactory for productive use. | we show that when training NLU services on crowdsourced data the scores achieved are as good as system usage data, even when the test set contains faulty utterances. | neutral |
train_95857 | We now explain the details. | toward the optimal goal of human-level intelligence, CV researchers have actively studied on Visual/Video Question Answering (VQA) (Antol et al., 2015;Ye et al., 2017;Zhao et al., 2017), which is to understand textual questions and images and give correct textual answers by machine. | neutral |
train_95858 | In the overall VCOPA dataset, we compare using machinegenerated captions with using human-generated annotations again. | as reported in (Luo et al., 2016), although CausalNet formally achieved 70.2% accuracy on COPa evaluation, its accuracy on the overlapped set between COPa and VCOPa is 67.9%. | neutral |
train_95859 | The graphic relates the percentage of verb and noun pairs that were analyzed with the number of VNICs that were correctly identified by the models, essentially showing the gain calculations through each of the model's deciles. | because they focus on evaluation accuracy they are evaluating both the model's ability to score an instance appropriately and the threshold the model uses. | neutral |
train_95860 | Most of the work in this field either uses accuracy (used by Fazly et al. | ∼60% of our data is labeled as positive by our model (shaded green), while 40% is labeled as negative (shaded red). | neutral |
train_95861 | An optional gaussian curve can be overlaid. | developed by the LNE, specialized in the evaluation of NLP systems, Matics is free and open-source. | neutral |
train_95862 | MetaMap relies on a powerful tool to deal with variability -the SPECIALIST Lexicon; we do not address variability but for a closed list of abbreviations. | we also report a first approximation for assessing the performance of the prototype. | neutral |
train_95863 | As it happens, MetaMap's knowledge base contains "congenital asplenia" but not "congenital anesplenia", and so it does not annotate it. | a qualitative error and disagreement analysis has been carried out in an attempt to elucidate these issues. | neutral |
train_95864 | The datasets consists of a set of terms from different corpora and the statistics of the datasets are given in Table 1. | for example, it is common that large organizations will collect many documents generated by their staff into a content management system, with only limited organization such as tags. | neutral |
train_95865 | For each word in a given term, we obtain a vector of fixed size K (we choose K = 100). | we can then assume that we can minimize the error by finding the point where we derive this using identities proven in the Matrix Cookbook (Petersen et al., 2008), readers are recommended to refer to this text to better understand the derivation here: : we can easily find a matrix V + that satisfies 5 V In particular this can be achieved by using the singular value decomposition of V in order to find Such that Σ is a diagonal matrix and the following hold It is clear that a solution to find A is: To obtain the A matrix, we used all but one of the taxonomies as training data and evaluated on the rest of the taxonomies. | neutral |
train_95866 | In theory, RNN can keep a memory of previous information. | many approaches have been proposed in the post studies. | neutral |
train_95867 | However, it was difficult to train RNNs to capture longterm dependencies because the gradients tend to either vanish or explode. | native speakers may associate a fully expanded term with its abbreviation by some intuition. | neutral |
train_95868 | Nonetheless, we barely see studies that consider NFFs. | we extract long phrases and terms in popular Chinese natural language processing corpora, which include People's Daily corpora and SIGHAN word segmentation corpora. | neutral |
train_95869 | As an example of annotation work, Figure 2 shows a part of a sample annotated document in the Korean TimeBank. | the relation should be RT2 instead of RT3 or RT4 due to the relation's range. | neutral |
train_95870 | As noted above, annotations registered in PubAnnotation are aligned with the canonical text and all other annotations applied to the same data. | the LAPPS Grid seeks the contribution of tools and resources for biomedical text mining to augment the current facilities. | neutral |
train_95871 | Each of these was run three times (once each with T, TD, TDN queries, each created by concatenating all words in the indicated fields). | as Table 3 shows, assessors a and B show reasonable consistency in system ranking. | neutral |
train_95872 | The FrNewsLink corpus allows addressing several multi-modal linking tasks, with heterogeneous data from various sources and of various length. | then, a manual annotation process (described in Section 4.) | neutral |
train_95873 | The obtained MAP@10 in such configuration is 83.7%. | of the linking annotation process, TVBN topic segments can be separated into two sets. | neutral |
train_95874 | Annotated data are then used for training the K-Nearest Neighbors algorithm (KNN) classifier. | this method usually increases the contrast in the image. | neutral |
train_95875 | Crucially, the weights depend not only on the Euclidean distance of pixels but also on the radiometric differences. | the AM can be adapted using the available recordings to fit the actual acoustics conditions in the interviews. | neutral |
train_95876 | More specifically, Katz and Frost (1992) introduce the notion of orthographic depth. | in fact, the positive correlations of the acoustic weighting factors seem to support such an interpretation. | neutral |
train_95877 | • grab strength: float, the strength of a grab hand pose as a value in the range [0, 1]. | the gestures are with defined meanings that are independent of speech. | neutral |
train_95878 | The information gain from multimodal features was not relevant for classification in our corpus since there is too much noise in certain categories (eg., in Figure 4 in which a printer, a cable and a sensor were grouped together in the same category). | the objective function of this model reflects this ratio in the distances between these word vectors. | neutral |
train_95879 | It uses simple metrics and processing based on object and event semantics. | in the context of shared physical tasks in a common workspace, shared perception creates the context for the conversation between interlocutors (Lascarides and Stone, 2006;Lascarides and Stone, 2009b;Clair et al., 2010;Matuszek et al., 2014), and it is this shared space that gives many gestures, such as pointing, their meaning (Krishnaswamy and Pustejovsky, 2016a). | neutral |
train_95880 | The final entailment label is actually a pair of two labels: • entailment+neutral points to one-way entailment, 19 • entailment+entailment points to equivalence (two-way entailment), • neutral+neutral points to no entailment. | each sentence pair is human-annotated for relatedness in meaning and entailment. | neutral |
train_95881 | ; 4. part-of-speech tags, automatically generated with the Tree-Tagger (Schmid, 1995) and manually corrected; 5. lemmata, automatically generated with the Tree-Tagger and manually corrected; 6. the information which object is moved and where it is moved to (manually annotated); 7. the information whether the left or right hand touches a particular object (manually annotated); 8. the information whether a particular object touches the ground/table (automatically identified by the object tracker and manually corrected); 9. segmentation of the stream of words into chunks using heuristics such as long pauses (min. | the overall goal of the data collection activity was to gather multimodal data of basic actions such as tAKE (Ge: nehmen), PUt (Ge: stellen/legen) and PUSH (Ge: schieben). | neutral |
train_95882 | Table 1 contains the number of different anomaly induction methods used during the creation of the corpus. | speech signals correspond to the same text: "Implorant le pardon de sa fille, il se mit à aiguiser sa hache" (Imploring his daughter's forgiveness, he began to sharpen his axe.). | neutral |
train_95883 | As far as we know, no emotional audiovisual corpus exists containing controlled, acted or natural anomalies to address specifically the anomaly detection question. | they are extracted by using the PRAAt software (Boersma and Weenink, 2016). | neutral |
train_95884 | The reason could be that reactions caused by induced anomalies are more subtle and nuanced, sometimes under control of the speaker, and then more difficult to detect than acted behaviors. | in this paper, we present a new and complementary multimedia corpus called EMOLY which contains human-centered anomalies. | neutral |
train_95885 | Online archives of presentations provide valuable sources of material for study and research. | video segments were aligned into time windows each 90 seconds in length, i.e. | neutral |
train_95886 | The treebank was divided into a training set of 5639 sentences and a test set of 1270 sentences for learning and testing POS tagging and dependency parsing. | other treebanks are built manually for languages such as Norwegian (Solberg et al., 2014). | neutral |
train_95887 | The lowest precision is 62.0% and the highest value is 66.8%. | an automatic classification of complements for each verb in any sentence would solve this sophisticated problem. | neutral |
train_95888 | The project is managed using the maven build automation tool, giving researchers two simple ways to start using the framework. | the German sentence is automatically annotated with multiple layers of linguistic annotation. | neutral |
train_95889 | In recent years, different language processing applications demand state-of-the-art parsers. | in this study, an attempt will be done to create treebanks for Amharic. | neutral |
train_95890 | Treebanks have been developed for well-resourced languages in different frameworks such as Phrase Structure, HPSG, and Dependency. | we have mentioned problems related to clitic segmentation and indicated that Amharic orthographic words may not only bear morphological information but also carry other function elements of syntactic relations. | neutral |
train_95891 | In Arabic and Hebrew, such instance is treated as agreement phenomena within the noun phrase. | functionally, converbs may have three functions: serial, consecutive, and co-extensive (Meyer, 2011). | neutral |
train_95892 | When adopting UD, we need to give language-specific information regarding the POS tag-set relevant to Amharic. | amharic is a less-resourced and morphologically-rich language where problems of OOV and ambiguities are major bottlenecks. | neutral |
train_95893 | Syntactic dependency types for Amharic are defined in order to be as consistent as possible with the principle of UD. | we consider them as non-main verbs and the final verb as a main verb. | neutral |
train_95894 | The nominal will be given the grammatical role of nsubj, obj, etc., while the clitics will be treated as a pronominal copy of the nominal and will get the role of expl. | the distinction they want to capture by the tags of the subcategories will not be used when such forms attach a preposition. | neutral |
train_95895 | Google Cloud Platform (GCP) (Google, 2018d) has various container solutions, two of which are used by our system: 1. | ttS engines and synthesizers are hosted in containers via Docker images. | neutral |
train_95896 | These new words are extracted from many language resources on the Web automatically or semi-automatically, and it is frequently updated (currently twice a week). | it is common to conduct text formatting, sentence segmentation, and character normalization as pre-processing. | neutral |
train_95897 | This paper proposes a visualization system for chemical compounds. | this paper proposes a visualization system for chemical compound information extracted from Japanese texts and chemical compound databases. | neutral |
train_95898 | The processed information is embedded as links in text. | in a similar way, 'Acrylic acid 4 -(1,1dimethylethyl)phenyl ester' is split into 'Acrylic acid', '(1,1-dimethylethyl)' and 'phenyl ester'. | neutral |
train_95899 | We expect that a possibility to access useful information would increase by using these rules. | figure 2 shows an example of an extraction of a paraphrase rule from the above two chemical compound names. | neutral |
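The rows above are pipe-delimited records. A minimal sketch for turning lines of this form into dictionaries — the column names come from the table header; the helper name `parse_rows` and the sample line are illustrative, not part of the dataset's own tooling:

```python
def parse_rows(lines):
    """Split each 'id | sentence1 | sentence2 | label |' line into a dict.

    Lines that do not yield exactly four pipe-separated fields are skipped.
    """
    columns = ["id", "sentence1", "sentence2", "label"]
    records = []
    for line in lines:
        # Drop surrounding whitespace and the trailing pipe, then split on '|'.
        parts = [p.strip() for p in line.strip().strip("|").split("|")]
        if len(parts) == len(columns):
            records.append(dict(zip(columns, parts)))
    return records

sample = [
    "train_95800 | We used the parallel corpus developed by Farhath et al. "
    "| hence checking the casecount combinations of a word when substituting, "
    "helps to preserve language semantics of the generated sentence. | neutral |",
]
records = parse_rows(sample)
print(records[0]["id"], records[0]["label"])  # train_95800 neutral
```

Note that this assumes the sentence fields themselves contain no literal `|` characters, which holds for the rows shown here.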