column      type            range
id          stringlengths   7–12
sentence1   stringlengths   6–1.27k
sentence2   stringlengths   6–926
label       stringclasses   4 values
train_96900
(2006) have shown that BLEU does not always rank competing systems in accord with human judgments, but also because surface realization scores are typically much higher than those in MT (where BLEU's performance has been repeatedly assessed), even when using just one reference.
for both the original CCGbank and the collapsed corpus, we extracted a section 02-21 lexicogrammar and used it to derive LFs for the development and test sections.
neutral
train_96901
Explicit YQQs are less interesting, because the user's temporal intention is clearly specified in the query.
when k < 0, the newest document is adjusted slightly under the old one.
neutral
train_96902
Figure 2 shows the class of out-of-domain documents for the formality factor, using 3 categories of formality: low (conversational, unprofessional), medium (casual but coherent), high (formal).
roughly 16 of the 100 documents are labeled as very informal and another 55 include some informal text or are moderately informal.
neutral
train_96903
Syntactic parsing is typically one of the first steps in a text mining pipeline.
different analyses show other problematic characteristics, including inconsistent use of nouns and partial words (Tateisi & Tsujii, 2004), higher perplexity measures (Elhadad, 2006), and greater lexical density, plus an increased number of relative clauses and prepositional phrases (Gemoets, 2004), all of which correlate with diminished comprehension and higher text difficulty.
neutral
train_96904
Obviously, positive pivot features tend to occur in positive instances, so the correlations built on positive instances are more reliable than the correlations built on negative instances; and vice versa.
to satisfy this requirement, we proposed the following formula, where P_o(w) and P_n(w) indicate the probability of word w in the source domain and the target domain respectively: P_o(w) = N_o(w)/N_o and P_n(w) = N_n(w)/N_n, where N_o(w) and N_n(w) are the numbers of examples with word w in the source domain and the target domain respectively, and N_o and N_n are the numbers of examples in the source domain and the target domain respectively.
neutral
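The relative-frequency estimates reconstructed in train_96904 above are simple enough to verify in code. A minimal sketch, assuming each "example" is a tokenized document and counting a word at most once per example (that counting convention is an assumption, not stated in the source):

```python
from collections import Counter

def domain_probs(source_docs, target_docs):
    # N_o, N_n: numbers of examples in the source / target domain
    N_o, N_n = len(source_docs), len(target_docs)
    # N_o(w), N_n(w): numbers of examples containing word w
    cnt_o = Counter(w for doc in source_docs for w in set(doc))
    cnt_n = Counter(w for doc in target_docs for w in set(doc))
    vocab = set(cnt_o) | set(cnt_n)
    P_o = {w: cnt_o[w] / N_o for w in vocab}  # P_o(w) = N_o(w) / N_o
    P_n = {w: cnt_n[w] / N_n for w in vocab}  # P_n(w) = N_n(w) / N_n
    return P_o, P_n

src = [["good", "movie"], ["bad", "movie"]]
tgt = [["good", "camera"]]
print(domain_probs(src, tgt)[0]["movie"])  # 2/2 = 1.0
```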
train_96905
Crucially, this set does not use the supertags for the words in the history.
unlike context-free grammar rules, which are single-level trees, supertags are multi-level trees which encapsulate both the predicate-argument structure of the anchor lexeme (by including nodes at which its arguments must substitute) and morpho-syntactic constraints such as subject-verb agreement within the supertag associated with the anchor.
neutral
train_96906
The probabilistic model refers only to supertag names, not to words.
mICA can associate dependency parses with rich linguistic information such as voice, the presence of empty subjects (PRO), wh-movement, and whether a verb heads a relative clause.
neutral
train_96907
In this paper, we propose the use of metadata contained in documents to improve coreference resolution.
such metadata often coincides with the discourse structure, and is presumably useful to coreference resolution.
neutral
train_96908
For the experimental results in this paper, the given scores are calculated as the average of the respective BLEU and METEOR scores obtained for each system output and are listed as percent figures.
the higher the translation quality of the pivot translation task is, the more dependent the selection of the optimal pivot language is on the system performance of the PVT-TRG task.
neutral
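A worked example of the score combination described in train_96908; the input scores here are hypothetical and assumed to lie in [0, 1]:

```python
def combined_score(bleu, meteor):
    # average of BLEU and METEOR for one system output, reported as a percent figure
    return 100.0 * (bleu + meteor) / 2.0

print(combined_score(0.312, 0.547))  # 42.95
```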
train_96909
Thus, the classifier which determines the precedence relation is not enough.
the first author has been supported by a KtF grant (09.009.2004).
neutral
train_96910
To train the maximum entropy classifiers we use about 41,000 sentences.
our combined approach is based on the premise that trigram LMs are well-suited for finding the order within NPs, PPs and other phrases where the head is not a finite verb.
neutral
train_96911
(Ringger et al., 2004; Marciniak and Strube, 2004; Elhadad et al., 2001)).
the label is the position of the target adverbial with respect to the non-adverbial siblings.
neutral
train_96912
This modification magnifies the contribution to each sense depending on the rank of the neighbour while still allowing a neighbour to contribute to all senses that it relates to.
the gains over the random baseline are greater at lower entropy levels indicating that the merits of detecting the skew of the distribution cannot all be due to lower polysemy levels.
neutral
train_96913
There is little difference between the Laplacian and normalised Laplacian pseudoinverses; both achieve better performance than the baseline B.
many of these measures are not amenable for use as kernel functions as they rely on properties which cannot be expressed as a vector inner product, such as the lowest common subsumer of two vertices.
neutral
train_96914
Deriving these lists from the WSJ test data gives an error rate of 1.65%.
certainly, the A + S cases are more difficult to identify, but perhaps some better structured approach could reduce the error rate further.
neutral
train_96915
We take a supervised learning approach, extracting features from "L. R".
adding a feature that matches a list of abbreviations can increase the error rate; using the list ("Mr.", "Co.") increases the number of errors by up to 25% in our experiments.
neutral
train_96916
output O is the softmax ψ applied to the quadratic transform q of the input I.
for our logistic regression (I-O) experiments, the architecture is specifically I-ψq O, i.e.
neutral
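A sketch of the I-ψq-O architecture described in train_96916: the output O is the softmax ψ applied to a quadratic transform q of the input I. The weight shapes, dimensions, and random initialisation below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def quadratic_transform(x, W, V, b):
    # q(x): linear term W x + b plus one quadratic term x^T V_k x per output k
    return W @ x + b + np.array([x @ V_k @ x for V_k in V])

rng = np.random.default_rng(0)
d, k = 4, 3                       # input dim and number of classes (assumed)
x = rng.normal(size=d)
W = rng.normal(size=(k, d))
V = rng.normal(size=(k, d, d))
b = np.zeros(k)
O = softmax(quadratic_transform(x, W, V, b))  # O = psi(q(I))
print(O, O.sum())                 # class probabilities summing to 1
```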
train_96917
(Henderson and Brill, 1999) perform parse selection by maximizing the expected precision of the selected parse with respect to the set of parses being combined.
(Sagae and Lavie, 2006) extend this method by tuning t on a development set to maximize fscore.
neutral
train_96918
Long named entities are frequently abbreviated in oral Chinese language for efficiency and simplicity.
the elimination means that one or more words in the full-name are ignored completely, while the reduction requires that at least one character is selected from each word.
neutral
train_96919
To illustrate the trade-offs in speed vs. accuracy that can be achieved by varying the two pruning parameters, we sweep through different values for the parameters and measure decoding accuracy, reported as word error rate (WER), and decoding speed, reported as times faster than real time (xfRT).
the first step in converting speech to a searchable index involves the use of an ASR system that produces word, word-fragment or phonetic transcripts.
neutral
train_96920
In this work, we derive this matrix from broadcast news development data.
this NIST STD 2006 evaluation metric used Actual/Maximum Term Weighted Value (ATWV/MTWV) that allows one to weight FAs and Misses per the needs of the task at hand (NIST, 2006).
neutral
train_96921
As will be shown in our experiments, the oracle word/phrase accuracy using n-best hypotheses is far greater than the 1-best output.
as can be seen, with very small increase in arc density, the number of paths that are encoded in the WCN can be increased exponentially.
neutral
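The exponential growth noted in train_96921 follows directly from the path count of a word confusion network (WCN) being the product of the per-slot arc counts, so a small increase in average arc density multiplies the number of encoded hypotheses. A toy illustration:

```python
from math import prod

def num_paths(wcn):
    # a WCN is a list of slots, each holding alternative arcs;
    # every path through the network picks one arc per slot
    return prod(len(slot) for slot in wcn)

wcn = [["hello", "hallo"], ["world"], ["today", "to", "day"]]
print(num_paths(wcn))  # 2 * 1 * 3 = 6
```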
train_96922
The repository to be searched may be the web, or a portion of the web, or it may be an organisational document repository, including transcribed, structured and indexed recordings of previous meetings.
we extended this approach to look at a much finer grained segmentation: dialogue acts.
neutral
train_96923
Our experiments indicated that it is possible to perform automatic segmentation into dialogue acts with a relatively low error rate.
the core system runs about five times slower than real-time, and the full system is about fourteen times slower than real-time, on current commodity hardware.
neutral
train_96924
3m.com/meetingnetwork/), for most people they are not the most enjoyable aspect of their work.
our recognisers rely strongly on annotated in-domain data.
neutral
train_96925
How to effectively extract the indicative features for a specific language phenomenon is a very task-specific question, as we will show in the context of the VPC extraction task in Section 3.2.
parsing with precision grammars is increasingly achieving broad coverage over open-domain texts for a range of constraint-based frameworks (e.g., TAG, LFG, HPSG and CCG), and is being used in real-world applications including information extraction, question answering, grammar checking and machine translation (Uszkoreit, 2002; Oepen et al., 2004; Frank et al., 2006; Zhang and Kordoni, 2008; MacKinlay et al., 2009).
neutral
train_96926
Moreover, it would be interesting to investigate the applicability of the technique in other parsing strategies, e.g., head-corner or left-corner parsing.
a given triple should be classified as positive if and only if it is associated with at least one noncompositional token instance in the provided tokenlevel data.
neutral
train_96927
Given that some aspects of syntax are domain dependent (typically at the lexical level), single parsing models tend to not perform well across all domains (see Table 1).
the linear regressor is given values from the three features from the previous section (COSINETOP50, UNKWORDS, and ENTROPY) and returns an estimate of the f-score the parsing model would achieve on the target text.
neutral
train_96928
If the sub-tree is constructed from a binary rule rewrite X → Y Z, then the root nonterminal Y of some best sub-tree over some span (i, k) will generate break b_k, because Y is the highest nonterminal that covers word w_k as the right-most terminal.
opinions, findings, and recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the funding agency or the institutions where the work was completed.
neutral
train_96929
In a follow-up study, incorporated sentence information in this ILP framework.
studies using these rich speech recognition results for speech summarization are very limited.
neutral
train_96930
An example of a WCN is shown in Figure 3.
to train a dynamic/static classifier, we experimented with the following three different techniques.
neutral
train_96931
These domain-specific topics have very low frequencies, yet they are very relevant and valuable.
these systems have been well developed for laboratory research, and some have become commercially viable.
neutral
train_96932
The set of phrases selected for one entry may come from several reviews on this single entry, and many of them may include the same noun (e.g., 'good fish', 'not bad fish' and 'aboveaverage fish' for one restaurant).
from the dataset, 857,466 sentences were subjected to parse analysis; and a total of 434,372 phrases (114,369 unique ones) were extracted from the parsable subset (78.6%) of the sentences.
neutral
train_96933
The named-entities provide important features to downstream tasks about what words and phrases are important, as well as information on the intent.
's ads database: a smaller dataset contains 14k ads and a larger dataset of 42k ads.
neutral
train_96934
We do not employ a patterndiscovery algorithm for finding other contexts; the model propagates these labels, as before, using the features of the rest of the model.
to assert the Markovian assumption, each g_k(j, x, s) only computes features based on x, s_j, and y_1^{j−1}.
neutral
train_96935
By observing Figure 1 (a), we can express IS-A statements, such as Internet IS-A Computer Network etc.
, NP_n suggests that NP_0 is a hypernym of NP_i.
neutral
train_96936
The outcome of the first stage is a set of senses, S, where each s_i^w ∈ S denotes the i-th sense of word w ∈ W.
we need to evaluate the internal nodes that group the leaf nodes.
neutral
train_96937
Our model extracts a semantic representation from large document collections and their associated images without any human involvement.
our own work aims to develop a model of semantic representation that takes visual context into account.
neutral
train_96938
More recently, topic models have been gaining ground as a more structured representation of word meaning.
the task varies slightly from word association.
neutral
train_96939
Furthermore, the visual modality helps obtain crisper meaning distinctions.
we assume that the images and their surrounding text have been generated by a shared set of topics.
neutral
train_96940
We apply a range of topic scoring models to the evaluation task, drawing on WordNet, Wikipedia and the Google search engine, and existing research on lexical similarity/relatedness.
we have proposed the novel task of topic coherence evaluation as a form of intrinsic topic evaluation with relevance in document search/discovery and visualisation applications.
neutral
train_96941
This research seeks to fill the gap between topic evaluation in computational linguistics and machine learning, in developing techniques to perform intrinsic qualitative evaluation of learned topics.
lDA is a Bayesian graphical model for text document collections represented by bags-of-words (see Blei et al.
neutral
train_96942
In general, we find a statistically significant negative correlation between these values using χ^2 features, indicating that as the entropy of the pairwise cluster similarities increases (i.e., prototypes become more similar, and similarities become uniform), rater disagreement increases.
figure 2 plots Spearman's ρ on WordSim-353 against the number of clusters (K) for Wikipedia and Gigaword corpora, using pruned tf-idf and χ^2 features.
neutral
train_96943
in the case of singer or need) as well as its ability to include synonyms from less frequent senses (e.g., the experiment sense of research or the verify sense of prove).
the similarity between two words in a multiprototype model can be computed straightforwardly, requiring only simple modifications to standard distributional similarity methods such as those presented by Curran (2004).
neutral
train_96944
It is possible to circumvent the model-selection problem (choosing the best value of K) by simply combining the prototypes from clusterings of different sizes.
there are a number of ways it could be improved: Feature representations: Multiple prototypes improve Spearman correlation on WordSim-353 compared to previous methods using the same underlying representation (Agirre et al., 2009).
neutral
train_96945
Our model correctly identifies that the English the aligns to nothing on the foreign side.
ing to the tune and test set, we extracted approximately 32 million unique rules using our aligner, but only 3 million with GIZA++.
neutral
train_96946
A word c_k can possibly be labeled as B_y by the first classifier and E_y by the second classifier.
in doing so, we can further see how well constituent boundary classification performs.
neutral
train_96947
We randomly selected 100 1-to 4-grams that appeared in both Europarl and MTC sentences (excluding stop words, numbers, and phrases containing periods and commas).
as observed by Liben-Nowell and Kleinberg (2003), hitting time has the drawback of being sensitive to portions of the graph that are far from the start node because it considers paths of length up to ∞.
neutral
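Hitting time, whose sensitivity to far-away graph regions is noted in train_96947, can be estimated with Monte-Carlo random walks; truncating walks at a maximum length is one common way to bound that sensitivity. A sketch under that assumption (not the cited authors' method; the graph and parameters are illustrative):

```python
import random

def estimated_hitting_time(graph, start, target, walks=1000, max_len=50):
    # Monte-Carlo estimate of the expected number of steps for a random
    # walk from start to reach target; truncation at max_len bounds the
    # influence of portions of the graph far from the start node
    total = 0
    for _ in range(walks):
        node, steps = start, 0
        while node != target and steps < max_len:
            node = random.choice(graph[node])
            steps += 1
        total += steps
    return total / walks

graph = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
print(estimated_hitting_time(graph, "a", "c"))
```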
train_96948
One reason for this is that about 90% of articles are used correctly by ESL learners; this is higher than the performance of state-of-the-art classifiers for article selection.
for details about annotation and data selection, please refer to the companion paper (Rozovskaya and Roth, 2010).
neutral
train_96949
Furthermore, TWE systems ArtDistr and ErrorDistr that use specific knowledge about article and error distributions, respectively, work better for Russian and Chinese groups than the General method that adds errors to the data uniformly at random.
to create training data that closely resemble text with naturally occurring errors, we use error frequency information and error distribution statistics obtained from corrected non-native text.
neutral
train_96950
But their system uses sophisticated syntactic features and they observe that the parser does not perform well on non-native data.
it has not been shown whether training on data with artificial errors is beneficial when compared to utilizing clean data.
neutral
train_96951
This definition fulfills all the semiring properties as defined in (Mohri, 2009).
the overformatting rate (OFR) is higher by 1.2% absolute in System B than in System A.
neutral
train_96952
Automatic speech recognition (ASR) of conversational speech is an extremely difficult problem.
we judge the quality of Mechanical Turk data by comparing the performance of one LVCSR system trained on Turker annotation and another trained on professional transcriptions of the same dataset.
neutral
train_96953
Our baseline system is the Maximum Entropy model with features from filler and confidence estimation models proposed by Rastrow et al.
figure 1 depicts a confusion network decoded by the hybrid system for a section of an utterance in our test-set.
neutral
train_96954
In a large-scale set of experiments, we quantify how language model perplexity correlates with ADS performance over multiple data sets and SLM techniques.
statistical Language Models (SLMs) include methods for more accurately estimating co-occurrence probabilities via back-off, smoothing, and clustering techniques (e.g.
neutral
train_96955
We rank the extractions in U_R according to how similar their arguments' contextual distributions, P(c|e_i), are to those of the seed arguments.
the first, Unary, was an extraction task for unary relations (Company, Country, Language, Film) and the second, Binary, was a type-checking task for binary relations (Conquered, Founded, Headquartered, Merged).
neutral
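One way to rank extractions by similarity of contextual distributions, as described in train_96955, is Jensen-Shannon similarity between each argument's P(c|e_i) and the seed arguments' distribution. The choice of JS here is an illustrative assumption; the source does not specify the similarity function:

```python
from math import log

def js_similarity(p, q):
    # 1 minus the Jensen-Shannon divergence between two context
    # distributions (dicts mapping context -> probability)
    def kl(a, b):
        return sum(v * log(v / b[c]) for c, v in a.items() if v > 0)
    m = {c: 0.5 * (p.get(c, 0.0) + q.get(c, 0.0)) for c in set(p) | set(q)}
    return 1.0 - 0.5 * (kl(p, m) + kl(q, m))

def rank_extractions(candidates, seed_dist):
    # candidates maps each extracted argument e_i to P(c|e_i);
    # rank by similarity to the seed arguments' context distribution
    return sorted(candidates, key=lambda e: js_similarity(candidates[e], seed_dist), reverse=True)

seeds = {"ceo of": 0.6, "founded by": 0.4}
cands = {"Google": {"ceo of": 0.5, "founded by": 0.5},
         "Tuesday": {"on": 0.9, "by": 0.1}}
print(rank_extractions(cands, seeds))  # ['Google', 'Tuesday']
```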
train_96956
Additionally, a single encoding can generally be used to render a large number of languages, such that the document encoding at best filters out a subset of languages which are incompatible with the given encoding, rather than disambiguates the source language.
for direct comparability with Cavnar and Trenkle (1994), we additionally carried out a preliminary experiment with hybrid byte n-grams (all of 1- to 5-grams), combined with simple frequency-based feature selection of the top-1000 features for each n-gram order.
neutral
train_96957
It also suggests that performance over shorter documents appears to be the dominating factor in the overall ranking of the different methods.
the micro-averaged scores indicate the average performance per document; as we always make a unique prediction per document, the micro-averaged precision, recall and F-score are always identical (as is the classification accuracy).
neutral
train_96958
This avoids the dynamic program of the blocked sampler but at the expense of considerably slower mixing.
slice sampling is an example of auxiliary variable sampling, in which we make use of the fact that if we can draw samples from a joint distribution, then we can trivially obtain samples from the marginal distributions: P(d) = ∫ P(d, u) du, where d is the variable of interest and u is an auxiliary variable.
neutral
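A minimal one-dimensional slice sampler illustrating the auxiliary-variable idea in train_96958. This is the textbook stepping-out/shrinking scheme, not necessarily the sampler used in the source:

```python
import math
import random

def slice_sample(log_p, x0, iters=2000, w=1.0):
    xs, x = [], x0
    for _ in range(iters):
        # auxiliary variable: u ~ Uniform(0, p(x)), kept in log space
        log_u = log_p(x) + math.log(1.0 - random.random())
        # step out to bracket the slice {x' : p(x') > u}
        lo = x - w * random.random()
        hi = lo + w
        while log_p(lo) > log_u:
            lo -= w
        while log_p(hi) > log_u:
            hi += w
        # shrink the bracket until a point inside the slice is drawn
        while True:
            x_new = random.uniform(lo, hi)
            if log_p(x_new) > log_u:
                x = x_new
                break
            if x_new < x:
                lo = x_new
            else:
                hi = x_new
        xs.append(x)
    return xs

# samples (x, u) target the joint, so x alone targets the marginal;
# for a standard normal target the sample mean should be near 0
xs = slice_sample(lambda v: -0.5 * v * v, x0=0.0)
print(sum(xs) / len(xs))
```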
train_96959
The slice sampler uniformly finds much better solutions than the Gibbs sampler regardless of initialisation.
the slice sampled models are restricted to learning binary branching one-to-one (or null) alignments, while no restriction is placed on the Gibbs sampler (both use the same model, so have comparable LLH).
neutral
train_96960
(2009) proposed an auxiliary variable sampler, possibly complementary to ours, which was also evaluated on synchronous parsing.
the slice sampled model initialised with the naive LB structure achieves a higher likelihood than the M4 initialised model, although this is not reflected in their relative BLEU scores.
neutral
train_96961
For each entry, in addition to the standard phrasal translation probabilities, we define a count feature that represents the number of MWEs in the input language phrase.
the usefulness of explicitly modeling MWEs in the SMT framework has not yet been studied systematically.
neutral
train_96962
For this work, we focus on the constituency trees, word senses, and predicate argument structures.
finally, we compare the baseline (without sense) result with the word sense result on the test data.
neutral
train_96963
With increasing n, the sense becomes more and more specialized.
word senses are important information for recognizing semantic roles.
neutral
train_96964
These descriptions are less straightforward than those for the enclitics.
the second category contains larger grammars based on natural languages that illustrate a wider range of phenomena, and therefore test the interaction of the associated libraries.
neutral
train_96965
During query processing, the frequency indexes are first traversed sequentially to find a document that contains all the required elements in the query.
the indexing methods used in individual systems are usually not reported.
neutral
train_96966
This work suggests that web-scale tree query may soon be feasible.
tree query is a more complex and interesting task, due to several factors which we list below.
neutral
train_96967
We extract three types of text features (described below).
for our experiments we used the elastic net, and specifically the glmnet package, which contains an implementation of an efficient coordinate ascent procedure for training (Friedman et al., 2008).
neutral
train_96968
The question template is a stylized information need that has a fixed structure and free slots whose instantiation varies across different topics.
we see that interactive MMR yields higher weighted recall at all length increments.
neutral
train_96969
Similar algorithms performed well in other complex QA tasks-in TREC 2003, a sentence retrieval variant beat all but one run on definition questions (Voorhees, 2003).
researchers have long known that information seeking is an iterative activity, which suggests that an interactive approach might be worth exploring.
neutral
train_96970
For example, in "I recommend this.
in order to obtain a topic-relevant context, we retrieved the top 10 relevant sentences corresponding to the given topic using the Lemur toolkit.
neutral
train_96971
In terms of granularity, this task has been investigated from building word-level sentiment lexicons (Turney, 2002; Moilanen and Pulman, 2008) to detecting phrase-level (Wilson et al., 2005; Agarwal et al., 2009) and sentence-level (Riloff and Wiebe, 2003; Hu and Liu, 2004) sentiment orientation.
adding the polarized bigram feature and transition feature (PB and T) individually can yield some improvement; however, adding both of them did not result in any further improvement: performance degrades compared to LF+PB.
neutral
train_96972
Union: Ms. Palin supported the bridge project while running for governor, but turned against it when it became a national scandal and a symbol of wasteful spending.
in addition to this high-level analysis, we further analyzed 10% of the cases to identify the types of errors made in fusion as well as the techniques used and the effect of task difficulty on performance.
neutral
train_96973
Using multiple workers provides little benefit unless we are able to harness the collective judgments of their responses.
to these approaches, sentence fusion was introduced to combine fragments of sentences with common information for multi-document summarization (Barzilay and McKeown, 2005).
neutral
train_96974
In comparison, window-5 has the worst results, with performance very close to baseline.
for class-based unigrams, P(Q|S) is computed using only the cluster labels of the query terms as P(Q|S) = ∏_i P(C_{q_i}|S) · P(q_i|C_{q_i}, S), where C_{q_i} is the cluster that contains q_i and P(q_i|C_{q_i}, S) is the emission probability of the i-th query term given its cluster and the sentence.
neutral
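A sketch of the class-based unigram score as reconstructed in train_96974 above. The lookup-table representation of the per-sentence model (p_cluster, p_emit) is a hypothetical convenience, not the source's data structure:

```python
from math import prod

def class_unigram_score(query_terms, cluster_of, p_cluster, p_emit):
    # P(Q|S) = prod_i P(C_qi | S) * P(q_i | C_qi, S)
    return prod(
        p_cluster.get(cluster_of[q], 0.0) * p_emit.get((cluster_of[q], q), 0.0)
        for q in query_terms
    )

cluster_of = {"car": "VEHICLE", "price": "COST"}
p_cluster = {"VEHICLE": 0.2, "COST": 0.1}
p_emit = {("VEHICLE", "car"): 0.5, ("COST", "price"): 0.4}
print(class_unigram_score(["car", "price"], cluster_of, p_cluster, p_emit))  # 0.2*0.5 * 0.1*0.4 = 0.004
```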
train_96975
For our experiments we use MINIPAR (Lin, 1998) to parse the whole corpus due to its robustness and speed.
if w′ appears three times in a document that contains two instances of w, the former method counts it as one co-occurrence, while the latter as six co-occurrences.
neutral
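The type- versus token-level contrast in train_96975 checks out arithmetically: with three instances of one word and two of the other in the same document, token-level counting yields 3 × 2 = 6 co-occurrences. A sketch:

```python
def cooccurrences(doc, w1, w2, token_level=True):
    # type-level: 1 if both words occur in the document;
    # token-level: every pair of instances counts, i.e. count(w1) * count(w2)
    c1, c2 = doc.count(w1), doc.count(w2)
    if token_level:
        return c1 * c2
    return int(c1 > 0 and c2 > 0)

doc = ["w2", "w1", "x", "w1", "w2", "w1"]
print(cooccurrences(doc, "w1", "w2", token_level=False))  # 1
print(cooccurrences(doc, "w1", "w2", token_level=True))   # 3 * 2 = 6
```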
train_96976
The frequency associated with a concept is incremented in WordNet each time that concept is observed, as are the counts of the ancestor concepts in the WordNet hierarchy (for nouns and verbs).
this paper shows that Information Content measures based on modest amounts of unannotated corpora have greater correlation with human similarity judgements than do those based on the largest corpus of sense-tagged text currently available.
neutral
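Information Content measures, as discussed in train_96976, are typically defined Resnik-style as IC(c) = −log P(c) over counts that have been propagated up the hierarchy. A toy sketch with assumed counts:

```python
from math import log

def information_content(counts, total):
    # IC(c) = -log P(c); counts are assumed already propagated, so each
    # observation of a concept also incremented all of its ancestors
    return {c: -log(n / total) for c, n in counts.items() if n > 0}

counts = {"entity": 100, "animal": 40, "dog": 10}  # 'entity' subsumes all
ic = information_content(counts, total=100)
print(ic["dog"] > ic["animal"] > ic["entity"])  # True: specific concepts carry more information
```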
train_96977
When a corpus is sense-tagged, mapping occurrences of a word to a concept is straightforward (since each sense of a word corresponds with a concept or synset in WordNet).
the size (in tokens) of each corpus is shown in the second column of table 2 (size), which is expressed in thousands (k), millions (m), and billions (b).
neutral
train_96978
This is frequently done these days as interest in dependency parsing grows but many languages only have PS treebanks.
dTs and PSTs can be ordered or unordered.
neutral
train_96979
Note that a theory can decide to omit some content; for example, we can have a theory which does not distinguish raising from control (the English PTB does not).
we can express the same content in either type of tree!
neutral
train_96980
In this work, we use crowdsourcing to generate evaluation data to validate simple techniques designed to adapt a widely-used high-performing named entity recognition system to new domains.
retraining an NER system for a particular domain can be expensive if new annotations must be generated from scratch.
neutral
train_96981
(1) positive-greater than zero; "positive numbers" (2) plus, positive-involving advantage or good; "a plus (or positive) factor" (subjective) (3) collaborate, join forces, cooperate-work together on a common enterprise or project; "We joined forces with another research group" (objective) In most cases, if the word positive is used in the sense "greater than zero" (objective) in an English context, the corresponding Chinese translation is " '"; if "involving advantage or good" (subjective) is used, its Chinese translations are "È4', Ð'".
to standard multi-class Word Sense Disambiguation (WSD), it uses a coarse-grained sense inventory that allows it to achieve higher accuracy than WSD and therefore introduces less noise when embedded in another task such as word translation.
neutral
train_96982
For example, the query "people" is ambiguous, and has the overall entropy of 1.73 due to the variety of URLs clicked.
previous research on this topic focused on binary classification of query ambiguity.
neutral
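The ambiguity entropies quoted in train_96982 are entropies of the distribution of clicked URLs for a query; a sketch with hypothetical click counts:

```python
from math import log2

def click_entropy(click_counts):
    # entropy of the click distribution over URLs for one query
    total = sum(click_counts.values())
    return -sum((c / total) * log2(c / total) for c in click_counts.values() if c)

# clicks spread over many URLs -> higher entropy (more ambiguous query)
print(click_entropy({"url1": 50, "url2": 30, "url3": 20}))  # ~1.49
print(click_entropy({"url1": 97, "url2": 3}))               # ~0.19
```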
train_96983
In contrast, an unclear query "lyrics" has the overall entropy of 2.26.
acknowledgments: This work was partially supported by grants from Yahoo!
neutral
train_96984
This involves the desired parameters, which we solve for by estimating the others from data, as described next.
we first wish to thank Ainur Yessenalina for initial investigations and helpful comments.
neutral
train_96985
And so we need to combine the redundant edits in an intelligent manner.
in this work, we explore soliciting those edits from untrained human annotators, via the online service Amazon Mechanical Turk.
neutral
train_96986
Notice that values for ρ range from -1 to 1, with +1 indicating perfect rank correlation, -1 perfect inverse correlation, and 0 no correlation.
note that this does not necessarily indicate a 'cheating' worker, for even if a worker is acting in good faith, they might not be able to perform the task adequately, due to misunderstanding the task, or neglecting to attempt to use a small number of edits.
neutral
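The ρ endpoints described in train_96986 can be checked directly with scipy (assumed available):

```python
from scipy.stats import spearmanr

rho, _ = spearmanr([1, 2, 3, 4], [10, 20, 30, 40])
print(rho)   # 1.0  (perfect rank correlation)
rho, _ = spearmanr([1, 2, 3, 4], [40, 30, 20, 10])
print(rho)   # -1.0 (perfect inverse correlation)
```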
train_96987
We will refer to these similarities as sim cos and sim Jac , respectively.
current systems usually perform SRL in two pipelined steps: argument identification and argument classification.
neutral
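Plausible definitions of sim_cos and sim_Jac from train_96987, over sparse feature dicts. The exact feature weighting in the source is not specified, so this is an assumption (Jaccard here is over feature sets):

```python
import math

def sim_cos(u, v):
    # cosine similarity over sparse feature dicts
    dot = sum(u[f] * v.get(f, 0.0) for f in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def sim_jac(u, v):
    # Jaccard similarity over the sets of active features
    a, b = set(u), set(v)
    return len(a & b) / len(a | b) if a | b else 0.0

u = {"walk": 2.0, "run": 1.0}
v = {"walk": 1.0, "swim": 3.0}
print(sim_cos(u, v), sim_jac(u, v))  # ~0.283, 1/3
```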
train_96988
We choose the method, which considers NPs as the product attribute candidates, as the baseline (shown as NPs based).
they simply choose the NPs in a product review as the product attribute candidates (Hu and Liu, 2004;Popescu and Etzioni, 2005;Yi et al., 2003).
neutral
train_96989
Besides, this method treats the syntactic structure as a whole during exact matching, without considering any structural information.
to overcome the above problems, two generalization strategies are proposed in this paper.
neutral
train_96990
In particular, for a syntactic structure T in the test set, if T exactly matches one of the standard syntactic structures, then its corresponding string can be treated as a product attribute candidate.
the two generalization strategies, SynStru_h and SynStru_kernel, can both significantly improve the performance for each domain, compared to the SynStru based method.
neutral
train_96991
1 The test set is made up of 40 posts (170 sentences) on a thread discussing a player's behaviour in the same match.
we can see that performance suffers when the parser performs its own tokenisation.
neutral
train_96992
the target side parallel data used in the first baseline as described further in the next subsection.
this underlines the need to retrain translation models with timely material.
neutral
train_96993
The training data for Bulgarian consisted of two partially annotated Wikipedia article pairs.
we experimented with two broad domain test sets.
neutral
train_96994
In this paper, we introduced the idea of bridge transliteration systems that were developed employing well-studied orthographic approaches between constituent languages.
for example, let us consider a practical scenario where we have six languages from four different language families as shown in figure 1.
neutral
train_96995
Data is available between a language pair due to one of the following three reasons: Politically related languages: Due to the political dominance of English it is easy to obtain parallel names data between English and most languages.
the same test set used for testing the transitive systems was used for testing the direct system as well.
neutral
train_96996
The following features are then generated using this character-aligned data (here e_i and h_i form the i-th pair of aligned characters in the source word and target word respectively): • h_i and source character bigrams ({e_{i−1}, e_i} or {e_i, e_{i+1}}) • h_i and source character trigrams ({e_{i−2}, e_{i−1}, e_i} or {e_{i−1}, e_i, e_{i+1}} or {e_i, e_{i+1}, e_{i+2}}) • h_i, h_{i−1} and source character trigrams. In this section, we outline our methodology for composing transitive transliteration systems between X and Y, using a bridge language Z, by chaining individual direct transliteration systems.
none of the above approaches address the problem of developing transliteration functionality between a pair of languages when no direct data exists between them but sufficient data is available between each of these languages and an intermediate language.
neutral
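A sketch of the feature templates listed in train_96996, assuming equal-length character alignments with no null characters (real character alignments may include nulls, which this toy version ignores):

```python
def alignment_features(src, tgt):
    # src, tgt: character-aligned source and target words of equal length;
    # emits (h_i, source n-gram) and (h_i, h_{i-1}, source trigram) features
    feats = []
    for i, h in enumerate(tgt):
        if i > 0:
            feats.append((h, src[i-1:i+1]))          # h_i + bigram e_{i-1} e_i
        if i + 1 < len(src):
            feats.append((h, src[i:i+2]))            # h_i + bigram e_i e_{i+1}
        for lo in (i - 2, i - 1, i):                 # the three trigrams containing e_i
            if lo >= 0 and lo + 3 <= len(src):
                feats.append((h, src[lo:lo+3]))                # h_i + trigram
                if i > 0:
                    feats.append((h, tgt[i-1], src[lo:lo+3]))  # h_i, h_{i-1} + trigram
    return feats

print(alignment_features("abcd", "wxyz")[:4])  # toy example
```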
train_96997
But in practice one often wants to predict the MAP derivation for a new string w not contained in the training data.
we input an FST cascade and data and output the same FST cascade with trained weights.
neutral
train_96998
For any µ, the mixed weight vector w will not separate all the points.
what it does say is that, independent of µ, the mixed weight vector produced after convergence will not necessarily separate the entire data, even when T is separable.
neutral
train_96999
To train such a model is computationally expensive and can take on the order of days to train on a single machine.
for many parallel computing frameworks, including both multi-core computing as well as cluster computing with high rates of connectivity, this is less of an issue.
neutral