corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---|
arxiv-674401 | cs/0606096 | Building a resource for studying translation shifts | <|reference_start|>Building a resource for studying translation shifts: This paper describes an interdisciplinary approach which brings together the fields of corpus linguistics and translation studies. It presents ongoing work on the creation of a corpus resource in which translation shifts are explicitly annotated. Translation shifts denote departures from formal correspondence between source and target text, i.e. deviations that have occurred during the translation process. A resource in which such shifts are annotated in a systematic way will make it possible to study those phenomena that need to be addressed if machine translation output is to resemble human translation. The resource described in this paper contains English source texts (parliamentary proceedings) and their German translations. The shift annotation is based on predicate-argument structures and proceeds in two steps: first, predicates and their arguments are annotated monolingually in a straightforward manner. Then, the corresponding English and German predicates and arguments are aligned with each other. Whenever a shift - mainly grammatical or semantic -has occurred, the alignment is tagged accordingly.<|reference_end|> | arxiv | @article{cyrus2006building,
title={Building a resource for studying translation shifts},
author={Lea Cyrus},
journal={Proc. LREC 2006, Genoa, May 24-26, 2006; pp. 1240-1245},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606096},
primaryClass={cs.CL}
} | cyrus2006building |
arxiv-674402 | cs/0606097 | Synonym search in Wikipedia: Synarcher | <|reference_start|>Synonym search in Wikipedia: Synarcher: The program Synarcher for synonym (and related terms) search in the text corpus of special structure (Wikipedia) was developed. The results of the search are presented in the form of graph. It is possible to explore the graph and search for graph elements interactively. Adapted HITS algorithm for synonym search, program architecture, and program work evaluation with test examples are presented in the paper. The proposed algorithm can be applied to a query expansion by synonyms (in a search engine) and a synonym dictionary forming.<|reference_end|> | arxiv | @article{krizhanovsky2006synonym,
title={Synonym search in Wikipedia: Synarcher},
author={A. Krizhanovsky},
journal={arXiv preprint arXiv:cs/0606097},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606097},
primaryClass={cs.IR cs.DM}
} | krizhanovsky2006synonym |
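The adapted HITS algorithm this entry builds on can be illustrated with a plain hub/authority power iteration; the toy graph, iteration count, and normalisation below are illustrative assumptions, not Synarcher's actual adaptation:

```python
import math

def hits(adjacency, iterations=50):
    """Basic HITS power iteration on a directed graph.

    adjacency: dict mapping node -> list of successor nodes.
    Returns (hubs, authorities) dicts, each L2-normalised.
    """
    nodes = set(adjacency) | {v for succs in adjacency.values() for v in succs}
    hubs = {n: 1.0 for n in nodes}
    auths = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        # authority score: sum of hub scores of predecessors
        auths = {n: sum(hubs[u] for u in adjacency if n in adjacency[u])
                 for n in nodes}
        norm = math.sqrt(sum(a * a for a in auths.values())) or 1.0
        auths = {n: a / norm for n, a in auths.items()}
        # hub score: sum of authority scores of successors
        hubs = {n: sum(auths[v] for v in adjacency.get(n, [])) for n in nodes}
        norm = math.sqrt(sum(h * h for h in hubs.values())) or 1.0
        hubs = {n: h / norm for n, h in hubs.items()}
    return hubs, auths
```

For synonym search the idea is to run such an iteration on the local link neighbourhood of a Wikipedia article and read candidate related terms off the top-scoring nodes.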
arxiv-674403 | cs/0606098 | Outlier Robust ICP for Minimizing Fractional RMSD | <|reference_start|>Outlier Robust ICP for Minimizing Fractional RMSD: We describe a variation of the iterative closest point (ICP) algorithm for aligning two point sets under a set of transformations. Our algorithm is superior to previous algorithms because (1) in determining the optimal alignment, it identifies and discards likely outliers in a statistically robust manner, and (2) it is guaranteed to converge to a locally optimal solution. To this end, we formalize a new distance measure, fractional root mean squared distance (frmsd), which incorporates the fraction of inliers into the distance function. We lay out a specific implementation, but our framework can easily incorporate most techniques and heuristics from modern registration algorithms. We experimentally validate our algorithm against previous techniques on 2 and 3 dimensional data exposed to a variety of outlier types.<|reference_end|> | arxiv | @article{phillips2006outlier,
title={Outlier Robust ICP for Minimizing Fractional RMSD},
author={Jeff M. Phillips and Ran Liu and Carlo Tomasi},
journal={arXiv preprint arXiv:cs/0606098},
year={2006},
number={Duke University Technical Report: CS-2006-05},
archivePrefix={arXiv},
eprint={cs/0606098},
primaryClass={cs.GR cs.CG}
} | phillips2006outlier |
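The fractional RMSD measure from the entry above scores the k closest correspondences by their RMSD divided by the inlier fraction raised to a tuning exponent, so discarding likely outliers can lower the score. A minimal sketch for fixed correspondences (the exponent value and brute-force search over fractions are illustrative assumptions; the paper embeds this inside an ICP loop):

```python
import math

def fractional_rmsd(sq_dists, lam=1.3):
    """Return (best_fraction, best_score) over all inlier fractions f = k/n.

    sq_dists: squared distances of matched point pairs.
    Score for fraction f: RMSD of the k closest pairs divided by f**lam.
    """
    d = sorted(sq_dists)
    n = len(d)
    best = (1.0, float('inf'))
    total = 0.0
    for k in range(1, n + 1):
        total += d[k - 1]          # running sum of the k smallest squares
        f = k / n
        score = math.sqrt(total / k) / f ** lam
        if score < best[1]:
            best = (f, score)
    return best
```

With nine well-aligned pairs and one gross outlier, the minimiser keeps the 90% inlier fraction rather than absorbing the outlier into the fit.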
arxiv-674404 | cs/0606099 | Fairness in Multiuser Systems with Polymatroid Capacity Region | <|reference_start|>Fairness in Multiuser Systems with Polymatroid Capacity Region: For a wide class of multi-user systems, a subset of capacity region which includes the corner points and the sum-capacity facet has a special structure known as polymatroid. Multiaccess channels with fixed input distributions and multiple-antenna broadcast channels are examples of such systems. Any interior point of the sum-capacity facet can be achieved by time-sharing among corner points or by an alternative method known as rate-splitting. The main purpose of this paper is to find a point on the sum-capacity facet which satisfies a notion of fairness among active users. This problem is addressed in two cases: (i) where the complexity of achieving interior points is not feasible, and (ii) where the complexity of achieving interior points is feasible. For the first case, the corner point for which the minimum rate of the active users is maximized (max-min corner point) is desired for signaling. A simple greedy algorithm is introduced to find the optimum max-min corner point. For the second case, the polymatroid properties are exploited to locate a rate-vector on the sum-capacity facet which is optimally fair in the sense that the minimum rate among all users is maximized (max-min rate). In the case that the rate of some users can not increase further (attain the max-min value), the algorithm recursively maximizes the minimum rate among the rest of the users. It is shown that the problems of deriving the time-sharing coefficients or rate-splitting scheme can be solved by decomposing the problem to some lower-dimensional subproblems. In addition, a fast algorithm to compute the time-sharing coefficients to attain a general point on the sum-capacity facet is proposed.<|reference_end|> | arxiv | @article{maddah-ali2006fairness,
title={Fairness in Multiuser Systems with Polymatroid Capacity Region},
author={Mohammad A. Maddah-Ali, Amin Mobasher, and Amir Kayvan Khandani},
journal={arXiv preprint arXiv:cs/0606099},
year={2006},
doi={10.1109/TIT.2009.2016058},
archivePrefix={arXiv},
eprint={cs/0606099},
primaryClass={cs.IT math.IT}
} | maddah-ali2006fairness |
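The max-min corner point described in the entry above can be found by brute force on a toy polymatroid: each ordering of the users yields one corner point whose rates are the successive marginal increases of the rank function. The two-user log-sum rank function below is an illustrative stand-in for a multiaccess capacity region, and the enumeration is exactly what the paper's greedy algorithm avoids:

```python
import itertools
import math

def corner_point(rank, order):
    """Corner-point rate vector of a polymatroid for one ordering of users."""
    rates, prev = {}, frozenset()
    for u in order:
        cur = prev | {u}
        rates[u] = rank(cur) - rank(prev)  # marginal increase of the rank
        prev = cur
    return rates

def max_min_corner(rank, users):
    """Enumerate all corner points; keep the one maximising the minimum rate."""
    best = None
    for order in itertools.permutations(users):
        r = corner_point(rank, order)
        if best is None or min(r.values()) > min(best.values()):
            best = r
    return best

# Illustrative two-user rank function: f(S) = log2(1 + sum of powers in S),
# which is normalised, monotone and submodular, hence a polymatroid rank.
powers = {1: 1.0, 2: 3.0}
rank = lambda S: math.log2(1.0 + sum(powers[i] for i in S))
```

Every corner point sums to the same sum-capacity (here log2(5)); only the split between users changes with the ordering.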
arxiv-674405 | cs/0606100 | The generating function of the polytope of transport matrices $U(r,c)$ as a positive semidefinite kernel of the marginals $r$ and $c$ | <|reference_start|>The generating function of the polytope of transport matrices $U(r,c)$ as a positive semidefinite kernel of the marginals $r$ and $c$: This paper has been withdrawn by the author due to a crucial error in the proof of Lemma 5.<|reference_end|> | arxiv | @article{cuturi2006the,
title={The generating function of the polytope of transport matrices $U(r,c)$
as a positive semidefinite kernel of the marginals $r$ and $c$},
author={Marco Cuturi},
journal={arXiv preprint arXiv:cs/0606100},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606100},
primaryClass={cs.LG cs.DM}
} | cuturi2006the |
arxiv-674406 | cs/0606101 | Stochastic Formal Methods: An application to accuracy of numeric software | <|reference_start|>Stochastic Formal Methods: An application to accuracy of numeric software: This paper provides a bound on the number of numeric operations (fixed or floating point) that can safely be performed before accuracy is lost. This work has important implications for control systems with safety-critical software, as these systems are now running fast enough and long enough for their errors to impact on their functionality. Furthermore, worst-case analysis would blindly advise the replacement of existing systems that have been successfully running for years. We present here a set of formal theorems validated by the PVS proof assistant. These theorems will allow code analyzing tools to produce formal certificates of accurate behavior. For example, FAA regulations for aircraft require that the probability of an error be below $10^{-9}$ for a 10 hour flight.<|reference_end|> | arxiv | @article{daumas2006stochastic,
title={Stochastic Formal Methods: An application to accuracy of numeric
software},
author={Marc Daumas (LIRMM, Lp2a), David Lester (LP2A, University of
Manchester)},
journal={arXiv preprint arXiv:cs/0606101},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606101},
primaryClass={cs.MS}
} | daumas2006stochastic |
arxiv-674407 | cs/0606102 | Toward Functionality Oriented Programming | <|reference_start|>Toward Functionality Oriented Programming: The concept of functionality oriented programming is proposed, and some of its aspects are discussed, such as: (1) implementation independent basic types and generic collection types; (2) syntax requirements and recommendations for implementation independence; (3) unified documentation and code; (4) cross-module interface; and (5) cross-language program making scheme. A prototype example is given to demonstrate functionality oriented programming.<|reference_end|> | arxiv | @article{wang2006toward,
title={Toward Functionality Oriented Programming},
author={Chengpu Wang},
journal={arXiv preprint arXiv:cs/0606102},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606102},
primaryClass={cs.PL cs.HC}
} | wang2006toward |
arxiv-674408 | cs/0606103 | Precision Arithmetic: A New Floating-Point Arithmetic | <|reference_start|>Precision Arithmetic: A New Floating-Point Arithmetic: A new deterministic floating-point arithmetic called precision arithmetic is developed to track precision for arithmetic calculations. It uses a novel rounding scheme to avoid excessive rounding error propagation of conventional floating-point arithmetic. Unlike interval arithmetic, its uncertainty tracking is based on statistics and the central limit theorem, with a much tighter bounding range. Its stable rounding error distribution is approximated by a truncated normal distribution. Generic standards and systematic methods for validating uncertainty-bearing arithmetics are discussed. The precision arithmetic is found to be better than interval arithmetic in both uncertainty-tracking and uncertainty-bounding for normal usages. The precision arithmetic is available publicly at http://precisionarithm.sourceforge.net.<|reference_end|> | arxiv | @article{wang2006precision,
title={Precision Arithmetic: A New Floating-Point Arithmetic},
author={Chengpu Wang},
journal={arXiv preprint arXiv:cs/0606103},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606103},
primaryClass={cs.DM cs.DS cs.NA}
} | wang2006precision |
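The entry above contrasts statistical uncertainty tracking with interval arithmetic's worst-case bounds. The weakness of the latter is the classic dependency problem, easy to demonstrate in a few lines (a minimal interval-arithmetic sketch, not the paper's precision arithmetic):

```python
def iv_add(a, b):
    """Interval addition: [a0 + b0, a1 + b1]."""
    return (a[0] + b[0], a[1] + b[1])

def iv_sub(a, b):
    """Interval subtraction: [a0 - b1, a1 - b0] (worst-case bound)."""
    return (a[0] - b[1], a[1] - b[0])

x = (0.9, 1.1)           # a value known to lie in [0.9, 1.1]
zero_ish = iv_sub(x, x)  # the true value of x - x is exactly 0, but the
                         # interval bound widens to [-0.2, 0.2]
```

Because the two operands are treated as independent, the bound on x - x is 0.4 wide even though the exact result is zero; statistical tracking of correlated rounding errors is one way to tighten such ranges.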
arxiv-674409 | cs/0606104 | An information-spectrum approach to large deviation theorems | <|reference_start|>An information-spectrum approach to large deviation theorems: In this paper we take a new look at large deviation theorems from the viewpoint of the information-spectrum (IS) methods, which were first exploited in information theory, and also demonstrate a new basic formula for the large deviation rate function in general, which is a pair of the lower and upper IS rate functions. In particular, we are interested in establishing the general large deviation rate functions that can be derivable as the Fenchel-Legendre transform of the cumulant generating function. The final goal is to show a necessary and sufficient condition for the rate function to be of Cram\'er-G\"artner-Ellis type.<|reference_end|> | arxiv | @article{han2006an,
title={An information-spectrum approach to large deviation theorems},
author={Te Sun Han},
journal={arXiv preprint arXiv:cs/0606104},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606104},
primaryClass={cs.IT math.IT}
} | han2006an |
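The Fenchel-Legendre transform appearing in the entry above is easy to evaluate numerically: I(x) = sup over theta of (theta*x - Lambda(theta)), with Lambda the cumulant generating function. The Bernoulli(1/2) example and the grid search below are illustrative assumptions, not taken from the paper:

```python
import math

def rate_function(x, cgf, thetas):
    """Fenchel-Legendre transform I(x) = sup_theta (theta*x - Lambda(theta)),
    approximated by a grid search over theta values."""
    return max(t * x - cgf(t) for t in thetas)

# Cumulant generating function of a fair Bernoulli variable (illustrative):
# Lambda(theta) = log((e^theta + 1) / 2)
cgf = lambda t: math.log((math.exp(t) + 1.0) / 2.0)
thetas = [i / 100.0 for i in range(-1000, 1001)]
```

At the mean x = 1/2 the rate is zero (no large deviation), and away from the mean it matches the Bernoulli relative entropy ln 2 + x ln x + (1 - x) ln(1 - x).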
arxiv-674410 | cs/0606105 | Iso9000 Based Advanced Quality Approach for Continuous Improvement of Manufacturing Processes | <|reference_start|>Iso9000 Based Advanced Quality Approach for Continuous Improvement of Manufacturing Processes: The continuous improvement in TQM is considered as the core value by which organisation could maintain a competitive edge. Several techniques and tools are known to support this core value but most of the time these techniques are informal and without modelling the interdependence between the core value and tools. Thus, technique formalisation is one of TQM challenges for increasing efficiency of quality process implementation. In that way, the paper proposes and experiments an advanced quality modelling approach based on meta-modelling the "process approach" as advocated by the standard ISO9000:2000. This meta-model allows formalising the interdependence between technique, tools and core value<|reference_end|> | arxiv | @article{deeb2006iso9000,
title={Iso9000 Based Advanced Quality Approach for Continuous Improvement of
Manufacturing Processes},
  author={Salah Deeb (CRAN), Beno\^it Iung (CRAN)},
journal={12th IFAC Symposium on Information Control Problems in
Manufacturing, St-Etienne, France (17/05/2006) CDROM},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606105},
primaryClass={cs.IR}
} | deeb2006iso9000 |
arxiv-674411 | cs/0606106 | Self-orthogonality of $q$-ary Images of $q^m$-ary Codes and Quantum Code Construction | <|reference_start|>Self-orthogonality of $q$-ary Images of $q^m$-ary Codes and Quantum Code Construction: A code over GF$(q^m)$ can be imaged or expanded into a code over GF$(q)$ using a basis for the extension field over the base field. The properties of such an image depend on the original code and the basis chosen for imaging. Problems relating the properties of a code and its image with respect to a basis have been of great interest in the field of coding theory. In this work, a generalized version of the problem of self-orthogonality of the $q$-ary image of a $q^m$-ary code has been considered. Given an inner product (more generally, a biadditive form), necessary and sufficient conditions have been derived for a code over a field extension and an expansion basis so that an image of that code is self-orthogonal. The conditions require that the original code be self-orthogonal with respect to several related biadditive forms whenever certain power sums of the dual basis elements do not vanish. Numerous interesting corollaries have been derived by specializing the general conditions. An interesting result for the canonical or regular inner product in fields of characteristic two is that only self-orthogonal codes result in self-orthogonal images. Another result is that image of a code is self-orthogonal for all bases if and only if trace of the code is self-orthogonal, except for the case of binary images of 4-ary codes. The conditions are particularly simple to state and apply for cyclic codes. To illustrate a possible application, new quantum error-correcting codes have been constructed with larger minimum distance than previously known.<|reference_end|> | arxiv | @article{b2006self-orthogonality,
title={Self-orthogonality of $q$-ary Images of $q^m$-ary Codes and Quantum Code
Construction},
author={Sundeep B and Andrew Thangaraj},
journal={arXiv preprint arXiv:cs/0606106},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606106},
primaryClass={cs.IT math.IT}
} | b2006self-orthogonality |
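Self-orthogonality under the canonical inner product, central to the entry above, is simple to test for a binary code given a generator matrix: every pair of generator rows (each row with itself included) must have even overlap. The extended Hamming [8,4] example is illustrative; it is a standard self-dual, hence self-orthogonal, code:

```python
def is_self_orthogonal(gen_rows):
    """Check that a binary linear code is self-orthogonal: the GF(2) inner
    product of every pair of generator rows (including a row with itself,
    i.e. even row weight) vanishes."""
    def dot(u, v):
        return sum(a & b for a, b in zip(u, v)) % 2
    return all(dot(u, v) == 0 for u in gen_rows for v in gen_rows)

# Generator matrix of the extended Hamming [8,4] code (self-dual).
G = [[1, 0, 0, 0, 0, 1, 1, 1],
     [0, 1, 0, 0, 1, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 0, 1],
     [0, 0, 0, 1, 1, 1, 1, 0]]
```

Checking generator rows suffices because the inner product is bilinear, so it extends to all pairs of codewords.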
arxiv-674412 | cs/0606107 | Human Information Processing with the Personal Memex | <|reference_start|>Human Information Processing with the Personal Memex: In this report, we describe the work done in a project that explored the human information processing aspects of a personal memex (a memex to organize personal information). In the project, we considered the use of the personal memex, focusing on information recall, by three populations: people with Mild Cognitive Impairment, those diagnosed with Macular Degeneration, and a high-functioning population. The outcomes of the project included human information processing-centered design guidelines for the memex interface, a low-fidelity prototype, and an annotated bibliography for human information processing, usability and design literature relating to the memex and the populations we explored.<|reference_end|> | arxiv | @article{burbey2006human,
title={Human Information Processing with the Personal Memex},
author={Ingrid Burbey, Gyuhyun Kwon, Uma Murthy, Nicholas Polys and Prince
Vincent},
journal={arXiv preprint arXiv:cs/0606107},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606107},
primaryClass={cs.HC}
} | burbey2006human |
arxiv-674413 | cs/0606108 | A Product Oriented Modelling Concept: Holons for systems synchronisation and interoperability | <|reference_start|>A Product Oriented Modelling Concept: Holons for systems synchronisation and interoperability: Nowadays, enterprises are confronted to growing needs for traceability, product genealogy and product life cycle management. To meet those needs, the enterprise and applications in the enterprise environment have to manage flows of information that relate to flows of material and that are managed in shop floor level. Nevertheless, throughout product lifecycle coordination needs to be established between reality in the physical world (physical view) and the virtual world handled by manufacturing information systems (informational view). This paper presents the "Holon" modelling concept as a means for the synchronisation of both physical view and informational views. Afterwards, we show how the concept of holon can play a major role in ensuring interoperability in the enterprise context.<|reference_end|> | arxiv | @article{baïna2006a,
title={A Product Oriented Modelling Concept: Holons for systems synchronisation
and interoperability},
  author={Salah Ba\"ina (CRAN), Herv\'e Panetto (CRAN), Khalid Benali (LORIA)},
journal={8th International Conference on Enterprise Information Systems,
  ICEIS'2006, Paphos, Cyprus (23/05/2006)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606108},
primaryClass={cs.SE}
} | baïna2006a |
arxiv-674414 | cs/0606109 | Maximum gradient embeddings and monotone clustering | <|reference_start|>Maximum gradient embeddings and monotone clustering: Let (X,d_X) be an n-point metric space. We show that there exists a distribution D over non-contractive embeddings into trees f:X-->T such that for every x in X, the expectation with respect to D of the maximum over y in X of the ratio d_T(f(x),f(y)) / d_X(x,y) is at most C (log n)^2, where C is a universal constant. Conversely we show that the above quadratic dependence on log n cannot be improved in general. Such embeddings, which we call maximum gradient embeddings, yield a framework for the design of approximation algorithms for a wide range of clustering problems with monotone costs, including fault-tolerant versions of k-median and facility location.<|reference_end|> | arxiv | @article{mendel2006maximum,
title={Maximum gradient embeddings and monotone clustering},
author={Manor Mendel, Assaf Naor},
journal={Combinatorica 30(5) (2010), 581--615},
year={2006},
doi={10.1007/s00493-010-2302-z},
archivePrefix={arXiv},
eprint={cs/0606109},
primaryClass={cs.DS}
} | mendel2006maximum |
arxiv-674415 | cs/0606110 | Optimal Scheduling of Peer-to-Peer File Dissemination | <|reference_start|>Optimal Scheduling of Peer-to-Peer File Dissemination: Peer-to-peer (P2P) overlay networks such as BitTorrent and Avalanche are increasingly used for disseminating potentially large files from a server to many end users via the Internet. The key idea is to divide the file into many equally-sized parts and then let users download each part (or, for network coding based systems such as Avalanche, linear combinations of the parts) either from the server or from another user who has already downloaded it. However, their performance evaluation has typically been limited to comparing one system relative to another and typically been realized by means of simulation and measurements. In contrast, we provide an analytic performance analysis that is based on a new uplink-sharing version of the well-known broadcasting problem. Assuming equal upload capacities, we show that the minimal time to disseminate the file is the same as for the simultaneous send/receive version of the broadcasting problem. For general upload capacities, we provide a mixed integer linear program (MILP) solution and a complementary fluid limit solution. We thus provide a lower bound which can be used as a performance benchmark for any P2P file dissemination system. We also investigate the performance of a decentralized strategy, providing evidence that the performance of necessarily decentralized P2P file dissemination systems should be close to this bound and therefore that it is useful in practice.<|reference_end|> | arxiv | @article{mundinger2006optimal,
title={Optimal Scheduling of Peer-to-Peer File Dissemination},
author={Jochen Mundinger, Richard R. Weber and Gideon Weiss},
journal={arXiv preprint arXiv:cs/0606110},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606110},
primaryClass={cs.NI cs.DS math.OC}
} | mundinger2006optimal |
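A lower-bound intuition for the equal-capacity case in the entry above: with a single file part, the set of nodes holding the part can at most double per round, so reaching N peers from one seed needs ceil(log2(N + 1)) rounds. The sketch below is only this doubling argument, not the paper's MILP schedule for k parts:

```python
def min_rounds_single_part(n_peers):
    """Rounds for one seed to spread a single file part to n_peers peers when
    every current holder can upload one copy per round: holders double each
    round until all n_peers + 1 nodes have the part."""
    rounds, holders = 0, 1
    while holders < n_peers + 1:
        holders = min(2 * holders, n_peers + 1)
        rounds += 1
    return rounds
```

Splitting the file into many parts and pipelining them is what lets the optimal schedules of the paper beat naive sequential broadcast.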
arxiv-674416 | cs/0606111 | Models simulation and interoperability using MDA and HLA | <|reference_start|>Models simulation and interoperability using MDA and HLA: In the manufacturing context, there have been numerous efforts to use modeling and simulation tools and techniques to improve manufacturing efficiency over the last four decades. While an increasing number of manufacturing system decisions are being made based on the use of models, their use is still sporadic in many manufacturing environments. Our paper advocates for an approach combining MDA (model driven architecture) and HLA (High Level Architecture), the IEEE standard for modeling and simulation, in order to overcome the deficiencies of current simulation methods at the level of interoperability and reuse.<|reference_end|> | arxiv | @article{haouzi2006models,
title={Models simulation and interoperability using MDA and HLA},
author={Hind El Haouzi (CRAN)},
journal={Doctoral Symposium, IFAC/IFIP International conference on
Interoperability for Enterprise Applications and Software (I-ESA'2006), March
  22-24, 2006, France (2006)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606111},
primaryClass={cs.OH}
} | haouzi2006models |
arxiv-674417 | cs/0606112 | Product Centric Holons for Synchronisation and Interoperability in Manufacturing Environments | <|reference_start|>Product Centric Holons for Synchronisation and Interoperability in Manufacturing Environments: In the last few years, lot of work has been done in order to ensure enterprise applications interoperability; however, proposed solutions focus mainly on enterprise processes. Indeed, throughout product lifecycle coordination needs to be established between reality in the physical world (physical view) and the virtual world handled by manufacturing information systems (informational view). This paper presents a holonic approach that enables synchronisation of both physical and informational views. A model driven approach for interoperability is proposed to ensure interoperability of holon based models with other applications in the enterprise.<|reference_end|> | arxiv | @article{baina2006product,
title={Product Centric Holons for Synchronisation and Interoperability in
Manufacturing Environments},
  author={Salah Baina (CRAN), G\'erard Morel (CRAN)},
journal={12th IFAC Symposium on Information Control Problems in
Manufacturing, INCOM'2006, St-Etienne, France (17/05/2006) CDROM},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606112},
primaryClass={cs.SE}
} | baina2006product |
arxiv-674418 | cs/0606113 | A common framework for aspect mining based on crosscutting concern sorts | <|reference_start|>A common framework for aspect mining based on crosscutting concern sorts: The increasing number of aspect mining techniques proposed in literature calls for a methodological way of comparing and combining them in order to assess, and improve on, their quality. This paper addresses this situation by proposing a common framework based on crosscutting concern sorts which allows for consistent assessment, comparison and combination of aspect mining techniques. The framework identifies a set of requirements that ensure homogeneity in formulating the mining goals, presenting the results and assessing their quality. We demonstrate feasibility of the approach by retrofitting an existing aspect mining technique to the framework, and by using it to design and implement two new mining techniques. We apply the three techniques to a known aspect mining benchmark and show how they can be consistently assessed and combined to increase the quality of the results. The techniques and combinations are implemented in FINT, our publicly available free aspect mining tool.<|reference_end|> | arxiv | @article{marin2006a,
title={A common framework for aspect mining based on crosscutting concern sorts},
author={Marius Marin, Leon Moonen and Arie van Deursen},
journal={Proceedings Working Conference on Reverse Engineering (WCRE), IEEE
Computer Society, 2006, pages 29-38},
year={2006},
doi={10.1109/WCRE.2006.6},
number={TUD-SERG-2006-009},
archivePrefix={arXiv},
eprint={cs/0606113},
primaryClass={cs.SE cs.PL}
} | marin2006a |
arxiv-674419 | cs/0606114 | Hidden Markov Process: A New Representation, Entropy Rate and Estimation Entropy | <|reference_start|>Hidden Markov Process: A New Representation, Entropy Rate and Estimation Entropy: We consider a pair of correlated processes {Z_n} and {S_n} (two sided), where the former is observable and the later is hidden. The uncertainty in the estimation of Z_n upon its finite past history is H(Z_n|Z_0^{n-1}), and for estimation of S_n upon this observation is H(S_n|Z_0^{n-1}), which are both sequences of n. The limits of these sequences (and their existence) are of practical and theoretical interest. The first limit, if exists, is the entropy rate. We call the second limit the estimation entropy. An example of a process jointly correlated to another one is the hidden Markov process. It is the memoryless observation of the Markov state process where state transitions are independent of past observations. We consider a new representation of hidden Markov process using iterated function system. In this representation the state transitions are deterministically related to the process. This representation provides a unified framework for the analysis of the two limiting entropies for this process, resulting in integral expressions for the limits. This analysis shows that under mild conditions the limits exist and provides a simple method for calculating the elements of the corresponding sequences.<|reference_end|> | arxiv | @article{rezaeian2006hidden,
title={Hidden Markov Process: A New Representation, Entropy Rate and Estimation
Entropy},
author={Mohammad Rezaeian},
journal={arXiv preprint arXiv:cs/0606114},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606114},
primaryClass={cs.IT math.IT}
} | rezaeian2006hidden |
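The conditional entropies H(Z_n | Z_0^{n-1}) in the entry above can be estimated for a concrete hidden Markov process with the standard forward recursion: average -log2 of the predictive probability of each observation along a long sample path. This is the textbook estimate, not the paper's iterated-function-system representation; the chain, emissions, and sample size below are illustrative assumptions:

```python
import math
import random

def hmm_entropy_rate(P, B, n=50000, seed=0):
    """Monte Carlo estimate of the entropy rate of an HMM.

    P: state transition matrix, B: per-state emission distributions.
    Averages -log2 p(z_k | z_0..z_{k-1}) along one sample path; by
    ergodicity this converges to the entropy rate of {Z_n}.
    """
    rng = random.Random(seed)
    S = len(P)
    pi = [1.0 / S] * S                 # belief over the hidden state
    s = rng.randrange(S)
    total = 0.0
    for _ in range(n):
        # sample the next hidden state and its observation
        s = rng.choices(range(S), weights=P[s])[0]
        z = rng.choices(range(len(B[s])), weights=B[s])[0]
        # predictive distribution over states, then over observations
        pred = [sum(pi[i] * P[i][j] for i in range(S)) for j in range(S)]
        pz = sum(pred[j] * B[j][z] for j in range(S))
        total -= math.log2(pz)
        # Bayes update of the belief given the observation
        pi = [pred[j] * B[j][z] / pz for j in range(S)]
    return total / n
```

With identity emissions the process is a fully observed Markov chain, so the estimate should approach the chain's entropy rate, here the binary entropy of the flip probability 0.1.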
arxiv-674420 | cs/0606115 | Evaluating Variable Length Markov Chain Models for Analysis of User Web Navigation Sessions | <|reference_start|>Evaluating Variable Length Markov Chain Models for Analysis of User Web Navigation Sessions: Markov models have been widely used to represent and analyse user web navigation data. In previous work we have proposed a method to dynamically extend the order of a Markov chain model and a complementary method for assessing the predictive power of such a variable length Markov chain. Herein, we review these two methods and propose a novel method for measuring the ability of a variable length Markov model to summarise user web navigation sessions up to a given length. While the summarisation ability of a model is important to enable the identification of user navigation patterns, the ability to make predictions is important in order to foresee the next link choice of a user after following a given trail so as, for example, to personalise a web site. We present an extensive experimental evaluation providing strong evidence that prediction accuracy increases linearly with summarisation ability.<|reference_end|> | arxiv | @article{borges2006evaluating,
title={Evaluating Variable Length Markov Chain Models for Analysis of User Web
Navigation Sessions},
author={Jose Borges and Mark Levene},
journal={arXiv preprint arXiv:cs/0606115},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606115},
primaryClass={cs.AI cs.IR}
} | borges2006evaluating |
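Next-link prediction with a variable-length Markov model, as evaluated in the entry above, can be sketched in a few lines; the fall-back-to-shorter-context rule is a common VLMC heuristic assumed here, not necessarily the paper's exact scheme:

```python
from collections import defaultdict

def train_vlmc(sessions, max_order=2):
    """Count continuations of every navigation context up to max_order pages."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in sessions:
        for i, page in enumerate(s):
            for k in range(0, max_order + 1):
                if i - k >= 0:
                    ctx = tuple(s[i - k:i])   # the k pages preceding `page`
                    counts[ctx][page] += 1
    return counts

def predict(counts, history, max_order=2):
    """Predict the next page from the longest context seen in training,
    backing off to shorter contexts when the long one is unseen."""
    for k in range(min(max_order, len(history)), -1, -1):
        ctx = tuple(history[len(history) - k:])
        if ctx in counts and counts[ctx]:
            return max(counts[ctx], key=counts[ctx].get)
    return None
```

Longer contexts sharpen predictions when they occurred in training, while the back-off keeps the model usable on short or unseen trails.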
arxiv-674421 | cs/0606116 | New Algorithms for Regular Expression Matching | <|reference_start|>New Algorithms for Regular Expression Matching: In this paper we revisit the classical regular expression matching problem, namely, given a regular expression $R$ and a string $Q$, decide if $Q$ matches one of the strings specified by $R$. Let $m$ and $n$ be the length of $R$ and $Q$, respectively. On a standard unit-cost RAM with word length $w \geq \log n$, we show that the problem can be solved in $O(m)$ space with the following running times: \begin{equation*} \begin{cases} O(n\frac{m \log w}{w} + m \log w) & \text{if $m > w$} \\ O(n\log m + m\log m) & \text{if $\sqrt{w} < m \leq w$} \\ O(\min(n+ m^2, n\log m + m\log m)) & \text{if $m \leq \sqrt{w}$.} \end{cases} \end{equation*} This improves the best known time bound among algorithms using $O(m)$ space. Whenever $w \geq \log^2 n$ it improves all known time bounds regardless of how much space is used.<|reference_end|> | arxiv | @article{bille2006new,
title={New Algorithms for Regular Expression Matching},
author={Philip Bille},
journal={arXiv preprint arXiv:cs/0606116},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606116},
primaryClass={cs.DS}
} | bille2006new |
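The word-level parallelism behind the O(nm/w)-type bounds in the entry above is easiest to see in the Shift-And algorithm for plain string matching, where each NFA state occupies one bit of a machine word; this is a simplification of the paper's setting, which handles full regular expressions:

```python
def shift_and(pattern, text):
    """Bit-parallel Shift-And: return end positions of pattern in text.

    State bit i is set iff pattern[0..i] matches a suffix of the text read
    so far, so one shift-and-mask word operation advances all NFA states
    at once.
    """
    m = len(pattern)
    mask = {}
    for i, c in enumerate(pattern):
        mask[c] = mask.get(c, 0) | (1 << i)
    state, accept, hits = 0, 1 << (m - 1), []
    for j, c in enumerate(text):
        state = ((state << 1) | 1) & mask.get(c, 0)
        if state & accept:
            hits.append(j)  # a match ends at position j
    return hits
```

Generalising this idea from a single string to an arbitrary regular expression, while keeping the per-word cost low, is exactly where the tabulation techniques of the paper come in.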
arxiv-674422 | cs/0606117 | Performance comparison of multi-user detectors for the downlink of a broadband MC-CDMA system | <|reference_start|>Performance comparison of multi-user detectors for the downlink of a broadband MC-CDMA system: In this paper multi-user detection techniques, such as Parallel and Serial Interference Cancellations (PIC & SIC), General Minimum Mean Square Error (GMMSE) and polynomial MMSE, for the downlink of a broadband Multi-Carrier Code Division Multiple Access (MCCDMA) system are investigated. The Bit Error Rate (BER) and Frame Error Rate (FER) results are evaluated, and compared with single-user detection (MMSEC, EGC) approaches, as well. The performance evaluation takes into account the system load, channel coding and modulation schemes.<|reference_end|> | arxiv | @article{portier2006performance,
title={Performance comparison of multi-user detectors for the downlink of a
broadband MC-CDMA system},
author={Fabrice Portier (IETR), R. Legouable (FT R&D), L. Maret (LETI), F.
  Bauer (NOKIA Research Center), N. Neda (UNIS), J.-F. H\'elard (IETR), E.
  Hemming (NOKIA Research Center), M. Des Noes (LETI), M. H\'elard (FT R&D),
  the projet europ\'een Matrice Collaboration},
journal={Proceedings IST Mobile & Wireless Communications Summit (IST
Summit 2004) (2004) 1},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606117},
primaryClass={cs.IT math.IT}
} | portier2006performance |
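The single-user MMSE combining (MMSEC) used as a baseline in the entry above reduces, per subcarrier, to weighting the received sample by conj(h_k) / (|h_k|^2 + sigma^2). A minimal sketch (despreading across subcarriers and the multi-user PIC/SIC/GMMSE detectors are omitted):

```python
def mmse_equalize(received, channel, noise_var):
    """Per-subcarrier MMSE equalisation: scale each received sample r_k by
    conj(h_k) / (|h_k|^2 + noise_var). Reduces to zero-forcing as the
    noise variance goes to zero."""
    return [r * h.conjugate() / (abs(h) ** 2 + noise_var)
            for r, h in zip(received, channel)]
```

With noise the weights shrink the estimates toward zero rather than inverting weak subcarriers, which is the bias/noise-enhancement trade-off that distinguishes MMSE from zero-forcing equalisation.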
arxiv-674423 | cs/0606118 | Adapting a general parser to a sublanguage | <|reference_start|>Adapting a general parser to a sublanguage: In this paper, we propose a method to adapt a general parser (Link Parser) to sublanguages, focusing on the parsing of texts in biology. Our main proposal is the use of terminology (identification and analysis of terms) in order to reduce the complexity of the text to be parsed. Several other strategies are explored and finally combined among which text normalization, lexicon and morpho-guessing module extensions and grammar rules adaptation. We compare the parsing results before and after these adaptations.<|reference_end|> | arxiv | @article{aubin2006adapting,
title={Adapting a general parser to a sublanguage},
  author={Sophie Aubin (LIPN), Adeline Nazarenko (LIPN), Claire N\'edellec (MIG)},
journal={Proceedings of the International Conference on Recent Advances in
Natural Language Processing (RANLP'05) (2005) 89-93},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606118},
primaryClass={cs.CL cs.IR}
} | aubin2006adapting |
arxiv-674424 | cs/0606119 | Lexical Adaptation of Link Grammar to the Biomedical Sublanguage: a Comparative Evaluation of Three Approaches | <|reference_start|>Lexical Adaptation of Link Grammar to the Biomedical Sublanguage: a Comparative Evaluation of Three Approaches: We study the adaptation of Link Grammar Parser to the biomedical sublanguage with a focus on domain terms not found in a general parser lexicon. Using two biomedical corpora, we implement and evaluate three approaches to addressing unknown words: automatic lexicon expansion, the use of morphological clues, and disambiguation using a part-of-speech tagger. We evaluate each approach separately for its effect on parsing performance and consider combinations of these approaches. In addition to a 45% increase in parsing efficiency, we find that the best approach, incorporating information from a domain part-of-speech tagger, offers a statistically significant 10% relative decrease in error. The adapted parser is available under an open-source license at http://www.it.utu.fi/biolg.<|reference_end|> | arxiv | @article{pyysalo2006lexical,
title={Lexical Adaptation of Link Grammar to the Biomedical Sublanguage: a
Comparative Evaluation of Three Approaches},
author={Sampo Pyysalo, Tapio Salakoski, Sophie Aubin (LIPN), Adeline Nazarenko
(LIPN)},
journal={Proceedings of the Second International Symposium on Semantic
Mining in Biomedicine (SMBM 2006) (2006) 60-67},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606119},
primaryClass={cs.CL cs.IR}
} | pyysalo2006lexical |
arxiv-674425 | cs/0606120 | On symmetric sandpiles | <|reference_start|>On symmetric sandpiles: A symmetric version of the well-known SPM model for sandpiles is introduced. We prove that the new model has fixed point dynamics. Although there might be several fixed points, a precise description of the fixed points is given. Moreover, we provide a simple closed formula for counting the number of fixed points arising from initial conditions consisting of a single column of grains.<|reference_end|> | arxiv | @article{formenti2006on,
title={On symmetric sandpiles},
author={Enrico Formenti (I3S), Benoît Masson (I3S), Theophilos Pisokas (I3S)},
journal={arXiv preprint arXiv:cs/0606120},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606120},
primaryClass={cs.CC cs.PF}
} | formenti2006on |
arxiv-674426 | cs/0606121 | Performance of Orthogonal Beamforming for SDMA with Limited Feedback | <|reference_start|>Performance of Orthogonal Beamforming for SDMA with Limited Feedback: On the multi-antenna broadcast channel, the spatial degrees of freedom support simultaneous transmission to multiple users. The optimal multiuser transmission, known as dirty paper coding, is not directly realizable. Moreover, close-to-optimal solutions such as Tomlinson-Harashima precoding are sensitive to CSI inaccuracy. This paper considers a more practical design called per user unitary and rate control (PU2RC), which has been proposed for emerging cellular standards. PU2RC supports multiuser simultaneous transmission, enables limited feedback, and is capable of exploiting multiuser diversity. Its key feature is an orthogonal beamforming (or precoding) constraint, where each user selects a beamformer (or precoder) from a codebook of multiple orthonormal bases. In this paper, the asymptotic throughput scaling laws for PU2RC with a large user pool are derived for different regimes of the signal-to-noise ratio (SNR). In the multiuser-interference-limited regime, the throughput of PU2RC is shown to scale logarithmically with the number of users. In the normal SNR and noise-limited regimes, the throughput is found to scale double logarithmically with the number of users and also linearly with the number of antennas at the base station. In addition, numerical results show that PU2RC achieves higher throughput and is more robust against CSI quantization errors than the popular alternative of zero-forcing beamforming if the number of users is sufficiently large.<|reference_end|> | arxiv | @article{huang2006performance,
title={Performance of Orthogonal Beamforming for SDMA with Limited Feedback},
author={Kaibin Huang, Jeffrey G. Andrews, and Robert W. Heath Jr},
journal={arXiv preprint arXiv:cs/0606121},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606121},
primaryClass={cs.IT math.IT}
} | huang2006performance |
arxiv-674427 | cs/0606122 | Comparison of Image Similarity Queries in P2P Systems | <|reference_start|>Comparison of Image Similarity Queries in P2P Systems: Given some of the recent advances in Distributed Hash Table (DHT) based Peer-To-Peer (P2P) systems we ask the following questions: Are there applications where unstructured queries are still necessary (i.e., the underlying queries do not efficiently map onto any structured framework), and are there unstructured P2P systems that can deliver the high bandwidth and computing performance necessary to support such applications. Toward this end, we consider an image search application which supports queries based on image similarity metrics, such as color histogram intersection, and discuss why in this setting, standard DHT approaches are not directly applicable. We then study the feasibility of implementing such an image search system on two different unstructured P2P systems: power-law topology with percolation search, and an optimized super-node topology using structured broadcasts. We examine the average and maximum values for node bandwidth, storage and processing requirements in the percolation and super-node models, and show that current high-end computers and high-speed links have sufficient resources to enable deployments of large-scale complex image search systems.<|reference_end|> | arxiv | @article{mueller2006comparison,
title={Comparison of Image Similarity Queries in P2P Systems},
author={Wolfgang Mueller, P. Oscar Boykin, Nima Sarshar, Vwani P. Roychowdhury},
journal={arXiv preprint arXiv:cs/0606122},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606122},
primaryClass={cs.DC cs.NI}
} | mueller2006comparison |
arxiv-674428 | cs/0606123 | Use of MPLS in LANs | <|reference_start|>Use of MPLS in LANs: We demonstrate laboratory research results that exhibit the real impact of using MPLS technology in a LAN. This research shows that, from a cost/benefit point of view, the investment in this technology is very attractive, although further measures are needed to reach a satisfactory level of quality and security when sending packets over a VPN. The latency requirement of the network is met very well: the tests show that MPLS consumes, on average, one third of the time spent on the same function by IP routing.<|reference_end|> | arxiv | @article{alves2006use,
title={Use of MPLS in LANs},
author={Atos Ramos Alves},
journal={arXiv preprint arXiv:cs/0606123},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606123},
primaryClass={cs.NI cs.CR}
} | alves2006use |
arxiv-674429 | cs/0606124 | Weighted hierarchical alignment of directed acyclic graph | <|reference_start|>Weighted hierarchical alignment of directed acyclic graph: In some applications of matching, the structural or hierarchical properties of the two graphs being aligned must be maintained. The hierarchical properties are induced by the direction of the edges in the two directed graphs. These structural relationships defined by the hierarchy in the graphs act as a constraint on the alignment. In this paper, we formalize the above problem as the weighted alignment between two directed acyclic graphs. We prove that this problem is NP-complete, show several upper bounds for approximating the solution, and finally introduce polynomial time algorithms for sub-classes of directed acyclic graphs.<|reference_end|> | arxiv | @article{falconer2006weighted,
title={Weighted hierarchical alignment of directed acyclic graph},
author={Sean M. Falconer and Dmitri Maslov},
journal={arXiv preprint arXiv:cs/0606124},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606124},
primaryClass={cs.DS}
} | falconer2006weighted |
arxiv-674430 | cs/0606125 | Formalizing typical crosscutting concerns | <|reference_start|>Formalizing typical crosscutting concerns: We present a consistent system for referring to crosscutting functionality, relating crosscutting concerns to specific implementation idioms, and formalizing their underlying relations through queries. The system is based on generic crosscutting concerns that we organize and describe in a catalog. We have designed and implemented tool support for querying source code for instances of the proposed generic concerns and organizing them in composite concern models. The composite concern model adds a new dimension to the dominant decomposition of the system for describing and making explicit source code relations specific to crosscutting concerns implementations. We use the proposed approach to describe crosscutting concerns in design patterns and apply the tool to an open-source system (JHotDraw).<|reference_end|> | arxiv | @article{marin2006formalizing,
title={Formalizing typical crosscutting concerns},
author={Marius Marin},
journal={arXiv preprint arXiv:cs/0606125},
year={2006},
number={TUD-SERG-2006-010},
archivePrefix={arXiv},
eprint={cs/0606125},
primaryClass={cs.SE cs.PL}
} | marin2006formalizing |
arxiv-674431 | cs/0606126 | May We Have Your Attention: Analysis of a Selective Attention Task | <|reference_start|>May We Have Your Attention: Analysis of a Selective Attention Task: In this paper we present a deeper analysis than has previously been carried out of a selective attention problem, and the evolution of continuous-time recurrent neural networks to solve it. We show that the task has a rich structure, and agents must solve a variety of subproblems to perform well. We consider the relationship between the complexity of an agent and the ease with which it can evolve behavior that generalizes well across subproblems, and demonstrate a shaping protocol that improves generalization.<|reference_end|> | arxiv | @article{goldenberg2006may,
title={May We Have Your Attention: Analysis of a Selective Attention Task},
author={Eldan Goldenberg, Jacob R. Garcowski, Randall D. Beer},
journal={arXiv preprint arXiv:cs/0606126},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606126},
primaryClass={cs.NE cs.AI}
} | goldenberg2006may |
arxiv-674432 | cs/0606127 | Approximately Efficient Cost-Sharing Mechanisms | <|reference_start|>Approximately Efficient Cost-Sharing Mechanisms: We make three different types of contributions to cost-sharing: First, we identify several new classes of combinatorial cost functions that admit incentive-compatible mechanisms achieving both a constant-factor approximation of budget-balance and a polylogarithmic approximation of the social cost formulation of efficiency. Second, we prove a new, optimal lower bound on the approximate efficiency of every budget-balanced Moulin mechanism for Steiner tree or SSRoB cost functions. This lower bound exposes a latent approximation hierarchy among different cost-sharing problems. Third, we show that weakening the definition of incentive-compatibility to strategyproofness can permit exponentially more efficient approximately budget-balanced mechanisms, in particular for set cover cost-sharing problems.<|reference_end|> | arxiv | @article{roughgarden2006approximately,
title={Approximately Efficient Cost-Sharing Mechanisms},
author={Tim Roughgarden (Stanford University) and Mukund Sundararajan
(Stanford University)},
journal={arXiv preprint arXiv:cs/0606127},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606127},
primaryClass={cs.GT}
} | roughgarden2006approximately |
arxiv-674433 | cs/0606128 | Automatic forming lists of semantically related terms based on texts rating in the corpus with hyperlinks and categories (In Russian) | <|reference_start|>Automatic forming lists of semantically related terms based on texts rating in the corpus with hyperlinks and categories (In Russian): An adapted HITS algorithm for synonym search, the program architecture, and an evaluation of the program on test examples are presented in the paper. The Synarcher program for searching synonyms (and related terms) in a text corpus of special structure (Wikipedia) was developed. The search results are presented in the form of a graph. It is possible to explore the graph and search for graph elements interactively. The proposed algorithm could be applied to query expansion in a search engine and to the forming of a synonym dictionary.<|reference_end|> | arxiv | @article{krizhanovsky2006automatic,
title={Automatic forming lists of semantically related terms based on texts
rating in the corpus with hyperlinks and categories (In Russian)},
author={A. Krizhanovsky},
journal={arXiv preprint arXiv:cs/0606128},
year={2006},
archivePrefix={arXiv},
eprint={cs/0606128},
primaryClass={cs.IR cs.DM}
} | krizhanovsky2006automatic |
arxiv-674434 | cs/0607001 | A Novel Application of Lifting Scheme for Multiresolution Correlation of Complex Radar Signals | <|reference_start|>A Novel Application of Lifting Scheme for Multiresolution Correlation of Complex Radar Signals: The lifting scheme of discrete wavelet transform (DWT) is now quite well established as an efficient technique for image compression, and has been incorporated into the JPEG2000 standards. However, the potential of the lifting scheme has not been exploited in the context of correlation-based processing, such as encountered in radar applications. This paper presents a complete and consistent framework for the application of DWT for correlation of complex signals. In particular, lifting scheme factorization of biorthogonal filterbanks is carried out in dual analysis basis spaces for multiresolution correlation of complex radar signals in the DWT domain only. A causal formulation of lifting for orthogonal filterbank is also developed. The resulting parallel algorithms and consequent saving of computational effort are briefly dealt with.<|reference_end|> | arxiv | @article{bhattacharya2006a,
title={A Novel Application of Lifting Scheme for Multiresolution Correlation of
Complex Radar Signals},
author={Chinmoy Bhattacharya and P.R.Mahapatra},
journal={arXiv preprint arXiv:cs/0607001},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607001},
primaryClass={cs.DC cs.CC}
} | bhattacharya2006a |
arxiv-674435 | cs/0607002 | Coding for Parallel Channels: Gallager Bounds for Binary Linear Codes with Applications to Repeat-Accumulate Codes and Variations | <|reference_start|>Coding for Parallel Channels: Gallager Bounds for Binary Linear Codes with Applications to Repeat-Accumulate Codes and Variations: This paper is focused on the performance analysis of binary linear block codes (or ensembles) whose transmission takes place over independent and memoryless parallel channels. New upper bounds on the maximum-likelihood (ML) decoding error probability are derived. These bounds are applied to various ensembles of turbo-like codes, focusing especially on repeat-accumulate codes and their recent variations which possess low encoding and decoding complexity and exhibit remarkable performance under iterative decoding. The framework of the second version of the Duman and Salehi (DS2) bounds is generalized to the case of parallel channels, along with the derivation of their optimized tilting measures. The connection between the generalized DS2 and the 1961 Gallager bounds, addressed by Divsalar and by Sason and Shamai for a single channel, is explored in the case of an arbitrary number of independent parallel channels. The generalization of the DS2 bound for parallel channels enables to re-derive specific bounds which were originally derived by Liu et al. as special cases of the Gallager bound. In the asymptotic case where we let the block length tend to infinity, the new bounds are used to obtain improved inner bounds on the attainable channel regions under ML decoding. The tightness of the new bounds for independent parallel channels is exemplified for structured ensembles of turbo-like codes. 
The improved bounds with their optimized tilting measures show, irrespective of the block length of the codes, an improvement over the union bound and other previously reported bounds for independent parallel channels; this improvement is especially pronounced for moderate to large block lengths.<|reference_end|> | arxiv | @article{sason2006coding,
title={Coding for Parallel Channels: Gallager Bounds for Binary Linear Codes
with Applications to Repeat-Accumulate Codes and Variations},
author={I. Sason and I. Goldenberg},
journal={arXiv preprint arXiv:cs/0607002},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607002},
primaryClass={cs.IT math.IT}
} | sason2006coding |
arxiv-674436 | cs/0607003 | Tightened Upper Bounds on the ML Decoding Error Probability of Binary Linear Block Codes | <|reference_start|>Tightened Upper Bounds on the ML Decoding Error Probability of Binary Linear Block Codes: The performance of maximum-likelihood (ML) decoded binary linear block codes is addressed via the derivation of tightened upper bounds on their decoding error probability. The upper bounds on the block and bit error probabilities are valid for any memoryless, binary-input and output-symmetric communication channel, and their effectiveness is exemplified for various ensembles of turbo-like codes over the AWGN channel. An expurgation of the distance spectrum of binary linear block codes further tightens the resulting upper bounds.<|reference_end|> | arxiv | @article{twitto2006tightened,
title={Tightened Upper Bounds on the ML Decoding Error Probability of Binary
Linear Block Codes},
author={M. Twitto, I. Sason and S. Shamai},
journal={arXiv preprint arXiv:cs/0607003},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607003},
primaryClass={cs.IT math.IT}
} | twitto2006tightened |
arxiv-674437 | cs/0607004 | On the Error Exponents of Some Improved Tangential-Sphere Bounds | <|reference_start|>On the Error Exponents of Some Improved Tangential-Sphere Bounds: The performance of maximum-likelihood (ML) decoded binary linear block codes over the AWGN channel is addressed via the tangential-sphere bound (TSB) and two of its recent improved versions. The paper is focused on the derivation of the error exponents of these bounds. Although it was exemplified that some recent improvements of the TSB tighten this bound for finite-length codes, it is demonstrated in this paper that their error exponents coincide. For an arbitrary ensemble of binary linear block codes, the common value of these error exponents is explicitly expressed in terms of the asymptotic growth rate of the average distance spectrum.<|reference_end|> | arxiv | @article{twitto2006on,
title={On the Error Exponents of Some Improved Tangential-Sphere Bounds},
author={M. Twitto and I. Sason},
journal={arXiv preprint arXiv:cs/0607004},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607004},
primaryClass={cs.IT math.IT}
} | twitto2006on |
arxiv-674438 | cs/0607005 | Belief Conditioning Rules (BCRs) | <|reference_start|>Belief Conditioning Rules (BCRs): In this paper we propose a new family of Belief Conditioning Rules (BCRs) for belief revision. These rules are not directly related with the fusion of several sources of evidence but with the revision of a belief assignment available at a given time according to the new truth (i.e. conditioning constraint) one has about the space of solutions of the problem.<|reference_end|> | arxiv | @article{smarandache2006belief,
title={Belief Conditioning Rules (BCRs)},
author={Florentin Smarandache, Jean Dezert},
journal={arXiv preprint arXiv:cs/0607005},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607005},
primaryClass={cs.AI}
} | smarandache2006belief |
arxiv-674439 | cs/0607006 | Applying and Combining Three Different Aspect Mining Techniques | <|reference_start|>Applying and Combining Three Different Aspect Mining Techniques: Understanding a software system at source-code level requires understanding the different concerns that it addresses, which in turn requires a way to identify these concerns in the source code. Whereas some concerns are explicitly represented by program entities (like classes, methods and variables) and thus are easy to identify, crosscutting concerns are not captured by a single program entity but are scattered over many program entities and are tangled with the other concerns. Because of their crosscutting nature, such crosscutting concerns are difficult to identify, and reduce the understandability of the system as a whole. In this paper, we report on a combined experiment in which we try to identify crosscutting concerns in the JHotDraw framework automatically. We first apply three independently developed aspect mining techniques to JHotDraw and evaluate and compare their results. Based on this analysis, we present three interesting combinations of these three techniques, and show how these combinations provide a more complete coverage of the detected concerns as compared to the original techniques individually. Our results are a first step towards improving the understandability of a system that contains crosscutting concerns, and can be used as a basis for refactoring the identified crosscutting concerns into aspects.<|reference_end|> | arxiv | @article{ceccato2006applying,
title={Applying and Combining Three Different Aspect Mining Techniques},
author={Mariano Ceccato, Marius Marin, Kim Mens, Leon Moonen, Paolo Tonella,
and Tom Tourwe},
journal={arXiv preprint arXiv:cs/0607006},
year={2006},
number={TUD-SERG-2006-002},
archivePrefix={arXiv},
eprint={cs/0607006},
primaryClass={cs.SE cs.PL}
} | ceccato2006applying |
arxiv-674440 | cs/0607007 | Theory of sexes by Geodakian as it is advanced by Iskrin | <|reference_start|>Theory of sexes by Geodakian as it is advanced by Iskrin: In the 1960s V. Geodakian proposed a theory that explains sexes as a mechanism for evolutionary adaptation of the species to changing environmental conditions. In 2001 V. Iskrin refined and augmented the concepts of Geodakian and gave a new and interesting explanation of several phenomena involving sex and sex ratio, including the war-years phenomenon. He also introduced a new concept of the "catastrophic sex ratio." This note is an attempt to digest technical aspects of the new ideas by Iskrin.<|reference_end|> | arxiv | @article{lubachevsky2006theory,
title={Theory of sexes by Geodakian as it is advanced by Iskrin},
author={Boris D. Lubachevsky},
journal={arXiv preprint arXiv:cs/0607007},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607007},
primaryClass={cs.NE cs.GL}
} | lubachevsky2006theory |
arxiv-674441 | cs/0607008 | 3-facial colouring of plane graphs | <|reference_start|>3-facial colouring of plane graphs: A plane graph is l-facially k-colourable if its vertices can be coloured with k colours such that any two distinct vertices on a facial segment of length at most l are coloured differently. We prove that every plane graph is 3-facially 11-colourable. As a consequence, we derive that every 2-connected plane graph with maximum face-size at most 7 is cyclically 11-colourable. These two bounds are for one off from those that are proposed by the (3l+1)-Conjecture and the Cyclic Conjecture.<|reference_end|> | arxiv | @article{havet20063-facial,
title={3-facial colouring of plane graphs},
author={Frédéric Havet (INRIA Sophia Antipolis), Jean-Sébastien Sereni
(INRIA Sophia Antipolis), Riste Skrekovski},
journal={arXiv preprint arXiv:cs/0607008},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607008},
primaryClass={cs.DM}
} | havet20063-facial |
arxiv-674442 | cs/0607009 | Almost Periodicity, Finite Automata Mappings and Related Effectiveness Issues | <|reference_start|>Almost Periodicity, Finite Automata Mappings and Related Effectiveness Issues: The paper studies different variants of almost periodicity notion. We introduce the class of eventually strongly almost periodic sequences where some suffix is strongly almost periodic (=uniformly recurrent). The class of almost periodic sequences includes the class of eventually strongly almost periodic sequences, and we prove this inclusion to be strict. We prove that the class of eventually strongly almost periodic sequences is closed under finite automata mappings and finite transducers. Moreover, an effective form of this result is presented. Finally we consider some algorithmic questions concerning almost periodicity.<|reference_end|> | arxiv | @article{pritykin2006almost,
title={Almost Periodicity, Finite Automata Mappings and Related Effectiveness
Issues},
author={Yuri Pritykin},
journal={arXiv preprint arXiv:cs/0607009},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607009},
primaryClass={cs.DM}
} | pritykin2006almost |
arxiv-674443 | cs/0607010 | ITs, a structure sensitive information theory | <|reference_start|>ITs, a structure sensitive information theory: Broadly speaking, information theory (IT) assumes no structure of the underlying states. But what about contexts where states do have a clear structure - how should IT cope with such situations? And if such coping is at all possible then - how should structure be expressed so that it can be coped with? A possible answer to these questions is presented here. Noting that IT can cope well with a structure expressed as an accurate clustering (by shifting to the implied reduced alphabet), a generalization is suggested in which structure is expressed as a measure on reduced alphabets. Given such structure an extension of IT is presented where the reduced alphabets are treated simultaneously. This structure-sensitive IT, called ITs, extends traditional IT in the sense that: a) there are structure-sensitive analogs to the notions of traditional IT and b) translating a theorem in IT by replacing its notions with their structure-sensitive counterparts, yields a (provable) theorem of ITs. Seemingly paradoxically, ITs extends IT but it's completely within the framework of IT. The richness of the suggested structures is demonstrated by two disparate families studied in more detail: the family of hierarchical structures and the family of linear structures. The formal findings extend the scope of cases to which a rigorous application of IT can be applied (with implications on quantization, for example). The implications on the foundations of IT are that the assumption regarding no underlying structure of states is not mandatory and that there is a framework for expressing such underlying structure.<|reference_end|> | arxiv | @article{sattath2006its,
title={ITs, a structure sensitive information theory},
author={Samuel Sattath},
journal={arXiv preprint arXiv:cs/0607010},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607010},
primaryClass={cs.IT math.IT}
} | sattath2006its |
arxiv-674444 | cs/0607011 | A simple generalization of El-Gamal cryptosystem to non-abelian groups | <|reference_start|>A simple generalization of El-Gamal cryptosystem to non-abelian groups: In this paper we study the MOR cryptosystem. We use the group of unitriangular matrices over a finite field as the non-abelian group in the MOR cryptosystem. We show that a cryptosystem similar to the El-Gamal cryptosystem over finite fields can be built using the proposed groups and a set of automorphisms of these groups. We also show that the security of this proposed MOR cryptosystem is equivalent to the El-Gamal cryptosystem over finite fields.<|reference_end|> | arxiv | @article{mahalanobis2006a,
title={A simple generalization of El-Gamal cryptosystem to non-abelian groups},
author={Ayan Mahalanobis},
journal={arXiv preprint arXiv:cs/0607011},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607011},
primaryClass={cs.CR math.GR}
} | mahalanobis2006a |
arxiv-674445 | cs/0607012 | A Flexible Structured-based Representation for XML Document Mining | <|reference_start|>A Flexible Structured-based Representation for XML Document Mining: This paper reports on the INRIA group's approach to XML mining while participating in the INEX XML Mining track 2005. We use a flexible representation of XML documents that allows taking into account the structure only or both the structure and content. Our approach consists of representing XML documents by a set of their sub-paths, defined according to some criteria (length, root beginning, leaf ending). By considering those sub-paths as words, we can use standard methods for vocabulary reduction, and simple clustering methods such as K-means that scale well. We actually use an implementation of the clustering algorithm known as "dynamic clouds" that can work with distinct groups of independent variables put in separate variables. This is useful in our model since embedded sub-paths are not independent: we split potentially dependent paths into separate variables, resulting in each of them containing independent paths. Experiments with the INEX collections show good results for the structure-only collections, but our approach could not scale well for large structure-and-content collections.<|reference_end|> | arxiv | @article{vercoustre2006a,
title={A Flexible Structured-based Representation for XML Document Mining},
author={Anne-Marie Vercoustre (INRIA Rocquencourt / INRIA Sophia Antipolis),
Mounir Fegas (INRIA Rocquencourt / INRIA Sophia Antipolis), Saba Gul (INRIA
Rocquencourt / INRIA Sophia Antipolis), Yves Lechevallier (INRIA Rocquencourt
/ INRIA Sophia Antipolis)},
journal={The Fourth International Workshop of the Initiative for the
Evaluation of XML Retrieval (INEX 2005)},
year={2006},
doi={10.1007/11766278\_34},
archivePrefix={arXiv},
eprint={cs/0607012},
primaryClass={cs.IR}
} | vercoustre2006a |
arxiv-674446 | cs/0607013 | Database Querying under Changing Preferences | <|reference_start|>Database Querying under Changing Preferences: We present here a formal foundation for an iterative and incremental approach to constructing and evaluating preference queries. Our main focus is on query modification: a query transformation approach which works by revising the preference relation in the query. We provide a detailed analysis of the cases where the order-theoretic properties of the preference relation are preserved by the revision. We consider a number of different revision operators: union, prioritized and Pareto composition. We also formulate algebraic laws that enable incremental evaluation of preference queries. Finally, we consider two variations of the basic framework: finite restrictions of preference relations and weak-order extensions of strict partial order preference relations.<|reference_end|> | arxiv | @article{chomicki2006database,
title={Database Querying under Changing Preferences},
author={Jan Chomicki},
journal={arXiv preprint arXiv:cs/0607013},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607013},
primaryClass={cs.DB cs.AI}
} | chomicki2006database |
arxiv-674447 | cs/0607014 | Strong Consistency of the Good-Turing Estimator | <|reference_start|>Strong Consistency of the Good-Turing Estimator: We consider the problem of estimating the total probability of all symbols that appear with a given frequency in a string of i.i.d. random variables with unknown distribution. We focus on the regime in which the block length is large yet no symbol appears frequently in the string. This is accomplished by allowing the distribution to change with the block length. Under a natural convergence assumption on the sequence of underlying distributions, we show that the total probabilities converge to a deterministic limit, which we characterize. We then show that the Good-Turing total probability estimator is strongly consistent.<|reference_end|> | arxiv | @article{wagner2006strong,
title={Strong Consistency of the Good-Turing Estimator},
author={Aaron B. Wagner, Pramod Viswanath, and Sanjeev R. Kulkarni},
journal={arXiv preprint arXiv:cs/0607014},
year={2006},
doi={10.1109/ISIT.2006.262066},
archivePrefix={arXiv},
eprint={cs/0607014},
primaryClass={cs.IT math.IT}
} | wagner2006strong |
arxiv-674448 | cs/0607015 | The uncovering of hidden structures by Latent Semantic Analysis | <|reference_start|>The uncovering of hidden structures by Latent Semantic Analysis: Latent Semantic Analysis (LSA) is a well known method for information retrieval. It has also been applied as a model of cognitive processing and word-meaning acquisition. This dual importance of LSA derives from its capacity to modulate the meaning of words by contexts, dealing successfully with polysemy and synonymy. The underlying reasons that make the method work are not clear enough. We propose that the method works because it detects an underlying block structure (the blocks corresponding to topics) in the term by document matrix. In real cases this block structure is hidden because of perturbations. We propose that the correct explanation for LSA must be searched in the structure of singular vectors rather than in the profile of singular values. Using Perron-Frobenius theory we show that the presence of disjoint blocks of documents is marked by sign-homogeneous entries in the vectors corresponding to the documents of one block and zeros elsewhere. In the case of nearly disjoint blocks, perturbation theory shows that if the perturbations are small the zeros in the leading vectors are replaced by small numbers (pseudo-zeros). Since the singular values of each block might be very different in magnitude, their order does not mirror the order of blocks. When the norms of the blocks are similar, LSA works fine, but we propose that when the topics have different sizes, the usual procedure of selecting the first k singular triplets (k being the number of blocks) should be replaced by a method that selects the perturbed Perron vectors for each block.<|reference_end|> | arxiv | @article{valle-lisboa2006the,
title={The uncovering of hidden structures by Latent Semantic Analysis},
author={Juan C. Valle-Lisboa (1), Eduardo Mizraji (1) ((1) Seccion Biofisica,
Facultad de Ciencias, Universidad de la Republica)},
journal={arXiv preprint arXiv:cs/0607015},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607015},
primaryClass={cs.IR}
} | valle-lisboa2006the |
arxiv-674449 | cs/0607016 | An Analysis of Arithmetic Constraints on Integer Intervals | <|reference_start|>An Analysis of Arithmetic Constraints on Integer Intervals: Arithmetic constraints on integer intervals are supported in many constraint programming systems. We study here a number of approaches to implement constraint propagation for these constraints. To describe them we introduce integer interval arithmetic. Each approach is explained using appropriate proof rules that reduce the variable domains. We compare these approaches using a set of benchmarks. For the most promising approach we provide results that characterize the effect of constraint propagation. This is a full version of our earlier paper, cs.PL/0403016.<|reference_end|> | arxiv | @article{apt2006an,
title={An Analysis of Arithmetic Constraints on Integer Intervals},
author={Krzysztof R. Apt and Peter Zoeteweij},
journal={arXiv preprint arXiv:cs/0607016},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607016},
primaryClass={cs.AI cs.PL}
} | apt2006an |
arxiv-674450 | cs/0607017 | Performance of STBC MC-CDMA systems over outdoor realistic MIMO channels | <|reference_start|>Performance of STBC MC-CDMA systems over outdoor realistic MIMO channels: The paper deals with orthogonal space-time block coded MC-CDMA systems in outdoor realistic downlink scenarios with up to two transmit and receive antennas. Assuming no channel state information at the transmitter, we compare several linear single-user detection and spreading schemes, with or without channel coding, achieving a spectral efficiency of 1-2 bits/s/Hz. The different results obtained demonstrate that spatial diversity significantly improves the performance of MC-CDMA systems, and allows different chip-mapping without notably decreasing performance. Moreover, the global system exhibits a good trade-off between complexity at mobile stations and performance. Then, Alamouti's STBC MC-CDMA schemes derive full benefit from the frequency and spatial diversities and can be considered as a very realistic and promising candidate for the air interface downlink of the 4th generation mobile radio systems.<|reference_end|> | arxiv | @article{portier2006performance,
title={Performance of STBC MC-CDMA systems over outdoor realistic MIMO channels},
author={Fabrice Portier (IETR), Jean-Yves Baudais (IETR), Jean-Fran\c{c}ois
H\'elard (IETR), the Projet europeen IST-MATRICE Collaboration},
journal={Proceedings IEEE 60th Vehicular Technology Conference (VTC
2004-Fall) (2004) 2409-2413},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607017},
primaryClass={cs.IT math.IT}
} | portier2006performance |
arxiv-674451 | cs/0607018 | Feynman Checkerboard as a Model of Discrete Space-Time | <|reference_start|>Feynman Checkerboard as a Model of Discrete Space-Time: In 1965, Feynman wrote of using a lattice containing one dimension of space and one dimension of time to derive aspects of quantum mechanics. Instead of summing the behavior of all possible paths as he did, this paper will consider the motion of single particles within this discrete Space-Time lattice, sometimes called Feynman's Checkerboard. This empirical approach yielded several predicted emergent properties for a discrete Space-Time lattice, one of which is novel and testable.<|reference_end|> | arxiv | @article{hanna2006feynman,
title={Feynman Checkerboard as a Model of Discrete Space-Time},
author={Edward Hanna},
journal={arXiv preprint arXiv:cs/0607018},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607018},
primaryClass={cs.CE}
} | hanna2006feynman |
arxiv-674452 | cs/0607019 | Modelling the Probability Density of Markov Sources | <|reference_start|>Modelling the Probability Density of Markov Sources: This paper introduces an objective function that seeks to minimise the average total number of bits required to encode the joint state of all of the layers of a Markov source. This type of encoder may be applied to the problem of optimising the bottom-up (recognition model) and top-down (generative model) connections in a multilayer neural network, and it unifies several previous results on the optimisation of multilayer neural networks.<|reference_end|> | arxiv | @article{luttrell2006modelling,
title={Modelling the Probability Density of Markov Sources},
author={Stephen Luttrell},
journal={arXiv preprint arXiv:cs/0607019},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607019},
primaryClass={cs.NE}
} | luttrell2006modelling |
arxiv-674453 | cs/0607020 | Iterative Decoding Performance Bounds for LDPC Codes on Noisy Channels | <|reference_start|>Iterative Decoding Performance Bounds for LDPC Codes on Noisy Channels: The asymptotic iterative decoding performances of low-density parity-check (LDPC) codes using min-sum (MS) and sum-product (SP) decoding algorithms on memoryless binary-input output-symmetric (MBIOS) channels are analyzed in this paper. For MS decoding, the analysis is done by upper bounding the bit error probability of the root bit of a tree code by the sequence error probability of a subcode of the tree code assuming the transmission of the all-zero codeword. The result is a recursive upper bound on the bit error probability after each iteration. For SP decoding, we derive a recursively determined lower bound on the bit error probability after each iteration. This recursive lower bound recovers the density evolution equation of LDPC codes on the binary erasure channel (BEC) with inequalities satisfied with equalities. A significant implication of this result is that the performance of LDPC codes under SP decoding on the BEC is an upper bound of the performance on all MBIOS channels with the same uncoded bit error probability. All results hold for the more general multi-edge type LDPC codes.<|reference_end|> | arxiv | @article{hsu2006iterative,
title={Iterative Decoding Performance Bounds for LDPC Codes on Noisy Channels},
author={Chun-Hao Hsu and Achilleas Anastasopoulos},
journal={arXiv preprint arXiv:cs/0607020},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607020},
primaryClass={cs.IT math.IT}
} | hsu2006iterative |
arxiv-674454 | cs/0607021 | Slepian-Wolf Code Design via Source-Channel Correspondence | <|reference_start|>Slepian-Wolf Code Design via Source-Channel Correspondence: We consider Slepian-Wolf code design based on LDPC (low-density parity-check) coset codes for memoryless source-side information pairs. A density evolution formula, equipped with a concentration theorem, is derived for Slepian- Wolf coding based on LDPC coset codes. As a consequence, an intimate connection between Slepian-Wolf coding and channel coding is established. Specifically we show that, under density evolution, design of binary LDPC coset codes for Slepian-Wolf coding of an arbitrary memoryless source-side information pair reduces to design of binary LDPC codes for binary-input output-symmetric channels without loss of optimality. With this connection, many classic results in channel coding can be easily translated into the Slepian-Wolf setting.<|reference_end|> | arxiv | @article{chen2006slepian-wolf,
title={Slepian-Wolf Code Design via Source-Channel Correspondence},
author={Jun Chen, Da-ke He, and Ashish Jagmohan},
journal={arXiv preprint arXiv:cs/0607021},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607021},
primaryClass={cs.IT math.IT}
} | chen2006slepian-wolf |
arxiv-674455 | cs/0607022 | Ten Incredibly Dangerous Software Ideas | <|reference_start|>Ten Incredibly Dangerous Software Ideas: This is a rough draft synopsis of a book presently in preparation. This book provides a systematic critique of the software industry. This critique is accomplished using classical methods in practical design science.<|reference_end|> | arxiv | @article{maney2006ten,
title={Ten Incredibly Dangerous Software Ideas},
author={G. A. Maney},
journal={arXiv preprint arXiv:cs/0607022},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607022},
primaryClass={cs.GL}
} | maney2006ten |
arxiv-674456 | cs/0607023 | Sharp threshold for hamiltonicity of random geometric graphs | <|reference_start|>Sharp threshold for hamiltonicity of random geometric graphs: We show for an arbitrary $\ell_p$ norm that the property that a random geometric graph $\mathcal G(n,r)$ contains a Hamiltonian cycle exhibits a sharp threshold at $r=r(n)=\sqrt{\frac{\log n}{\alpha_p n}}$, where $\alpha_p$ is the area of the unit disk in the $\ell_p$ norm. The proof is constructive and yields a linear time algorithm for finding a Hamiltonian cycle of $\mathcal G(n,r)$ a.a.s., provided $r=r(n)\ge\sqrt{\frac{\log n}{(\alpha_p -\epsilon)n}}$ for some fixed $\epsilon > 0$.<|reference_end|> | arxiv | @article{diaz2006sharp,
title={Sharp threshold for hamiltonicity of random geometric graphs},
author={J. Diaz, D. Mitsche, X. Perez},
journal={arXiv preprint arXiv:cs/0607023},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607023},
primaryClass={cs.DM}
} | diaz2006sharp |
arxiv-674457 | cs/0607024 | Results on Parity-Check Matrices with Optimal Stopping and/or Dead-End Set Enumerators | <|reference_start|>Results on Parity-Check Matrices with Optimal Stopping and/or Dead-End Set Enumerators: The performance of iterative decoding techniques for linear block codes correcting erasures depends very much on the sizes of the stopping sets associated with the underlying Tanner graph, or, equivalently, the parity-check matrix representing the code. In this paper, we introduce the notion of dead-end sets to explicitly demonstrate this dependency. The choice of the parity-check matrix entails a trade-off between performance and complexity. We give bounds on the complexity of iterative decoders achieving optimal performance in terms of the sizes of the underlying parity-check matrices. Further, we fully characterize codes for which the optimal stopping set enumerator equals the weight enumerator.<|reference_end|> | arxiv | @article{weber2006results,
title={Results on Parity-Check Matrices with Optimal Stopping and/or Dead-End
Set Enumerators},
author={Jos H. Weber and Khaled A.S. Abdel-Ghaffar},
journal={arXiv preprint arXiv:cs/0607024},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607024},
primaryClass={cs.IT math.IT}
} | weber2006results |
arxiv-674458 | cs/0607025 | The evolution of navigable small-world networks | <|reference_start|>The evolution of navigable small-world networks: Small-world networks, which combine randomized and structured elements, are seen as prevalent in nature. Several random graph models have been given for small-world networks, with one of the most fruitful, introduced by Jon Kleinberg, showing in which type of graphs it is possible to route, or navigate, between vertices with very little knowledge of the graph itself. Kleinberg's model is static, with random edges added to a fixed grid. In this paper we introduce, analyze and test a randomized algorithm which successively rewires a graph with every application. The resulting process gives a model for the evolution of small-world networks with properties similar to those studied by Kleinberg.<|reference_end|> | arxiv | @article{sandberg2006the,
title={The evolution of navigable small-world networks},
author={Oskar Sandberg and Ian Clarke},
journal={arXiv preprint arXiv:cs/0607025},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607025},
primaryClass={cs.DS cs.DC}
} | sandberg2006the |
arxiv-674459 | cs/0607026 | Path-independent load balancing with unreliable machines | <|reference_start|>Path-independent load balancing with unreliable machines: We consider algorithms for load balancing on unreliable machines. The objective is to optimize the two criteria of minimizing the makespan and minimizing job reassignments in response to machine failures. We assume that the set of jobs is known in advance but that the pattern of machine failures is unpredictable. Motivated by the requirements of BGP routing, we consider path-independent algorithms, with the property that the job assignment is completely determined by the subset of available machines and not the previous history of the assignments. We examine first the question of performance measurement of path-independent load-balancing algorithms, giving the measure of makespan and the normalized measure of reassignments cost. We then describe two classes of algorithms for optimizing these measures against an oblivious adversary for identical machines. The first, based on independent random assignments, gives expected reassignment costs within a factor of 2 of optimal and gives a makespan within a factor of O(log m/log log m) of optimal with high probability, for unknown job sizes. The second, in which jobs are first grouped into bins and at most one bin is assigned to each machine, gives constant-factor ratios on both reassignment cost and makespan, for known job sizes. Several open problems are discussed.<|reference_end|> | arxiv | @article{aspnes2006path-independent,
title={Path-independent load balancing with unreliable machines},
author={James Aspnes and Yang Richard Yang and Yitong Yin},
journal={arXiv preprint arXiv:cs/0607026},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607026},
primaryClass={cs.DS cs.NI}
} | aspnes2006path-independent |
arxiv-674460 | cs/0607027 | A general computation rule for lossy summaries/messages with examples from equalization | <|reference_start|>A general computation rule for lossy summaries/messages with examples from equalization: Elaborating on prior work by Minka, we formulate a general computation rule for lossy messages. An important special case (with many applications in communications) is the conversion of "soft-bit" messages to Gaussian messages. By this method, the performance of a Kalman equalizer is improved, both for uncoded and coded transmission.<|reference_end|> | arxiv | @article{hu2006a,
title={A general computation rule for lossy summaries/messages with examples
from equalization},
author={Junli Hu, Hans-Andrea Loeliger, Justin Dauwels, and Frank Kschischang},
journal={arXiv preprint arXiv:cs/0607027},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607027},
primaryClass={cs.IT math.IT}
} | hu2006a |
arxiv-674461 | cs/0607028 | A Quasi-Optimal Leader Election Algorithm in Radio Networks with Log-Logarithmic Awake Time Slots | <|reference_start|>A Quasi-Optimal Leader Election Algorithm in Radio Networks with Log-Logarithmic Awake Time Slots: Radio networks (RN) are distributed systems (\textit{ad hoc networks}) consisting of $n \ge 2$ radio stations. Assuming the number $n$ unknown, two distinct models of RN without collision detection (\textit{no-CD}) are addressed: the model with \textit{weak no-CD} RN and the one with \textit{strong no-CD} RN. We design and analyze two distributed leader election protocols, each one running in each of the above two (no-CD RN) models, respectively. Both randomized protocols are shown to elect a leader within $O(\log{(n)})$ expected time, with no station being awake for more than $O(\log{\log{(n)}})$ time slots (such algorithms are said to be \textit{energy-efficient}). Therefore, a new class of efficient algorithms is set up that match the $\Omega(\log{(n)})$ time lower-bound established by Kushilevitz and Mansour.<|reference_end|> | arxiv | @article{lavault2006a,
title={A Quasi-Optimal Leader Election Algorithm in Radio Networks with
Log-Logarithmic Awake Time Slots},
author={Christian Lavault (LIPN), Jean-Fran\c{c}ois Marckert (LaBRI), Vlady
Ravelomanana (LIPN)},
journal={arXiv preprint arXiv:cs/0607028},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607028},
primaryClass={cs.DC cs.NI}
} | lavault2006a |
arxiv-674462 | cs/0607029 | A Coding Theorem Characterizing Renyi's Entropy through Variable-to-Fixed Length Codes | <|reference_start|>A Coding Theorem Characterizing Renyi's Entropy through Variable-to-Fixed Length Codes: This paper has been withdrawn<|reference_end|> | arxiv | @article{aggarwal2006a,
title={A Coding Theorem Characterizing Renyi's Entropy through
Variable-to-Fixed Length Codes},
author={Vaneet Aggarwal, R.K. Bansal},
journal={arXiv preprint arXiv:cs/0607029},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607029},
primaryClass={cs.IT math.IT}
} | aggarwal2006a |
arxiv-674463 | cs/0607030 | Towards a General Theory of Simultaneous Diophantine Approximation of Formal Power Series: Multidimensional Linear Complexity | <|reference_start|>Towards a General Theory of Simultaneous Diophantine Approximation of Formal Power Series: Multidimensional Linear Complexity: We model the development of the linear complexity of multisequences by a stochastic infinite state machine, the Battery-Discharge-Model, BDM. The states $s \in S$ of the BDM have asymptotic probabilities or mass $\Pr(s)=1/(P(q,M)\,q^{K(s)})$, where $K(s) \in \mathbb{N}_0$ is the class of the state $s$, and $P(q,M)=\sum_{K \in \mathbb{N}_0} P_M(K)\,q^{-K}=\prod_{i=1}^{M} q^i/(q^i-1)$ is the generating function of the number of partitions into at most $M$ parts. We have (for each timestep modulo $M+1$) just $P_M(K)$ states of class $K$. We obtain a closed formula for the asymptotic probability for the linear complexity deviation $d(n) := L(n)-\lceil n\cdot M/(M+1)\rceil$ with $\Pr(d)=O(q^{-|d|(M+1)})$, for $M \in \mathbb{N}$, for $d \in \mathbb{Z}$. The precise formula is given in the text. It has been verified numerically for $M=1,\dots,8$, and is conjectured to hold for all $M \in \mathbb{N}$. From the asymptotic growth (proven for all $M \in \mathbb{N}$), we infer the Law of the Logarithm for the linear complexity deviation, $-\liminf_{n\to\infty} d_a(n)/\log n = 1/((M+1)\log q) = \limsup_{n\to\infty} d_a(n)/\log n$, which immediately yields $L_a(n)/n \to M/(M+1)$ with measure one, for all $M \in \mathbb{N}$, a result recently shown already by Niederreiter and Wang. Keywords: Linear complexity, linear complexity deviation, multisequence, Battery Discharge Model, isometry.<|reference_end|> | arxiv | @article{vielhaber2006towards,
title={Towards a General Theory of Simultaneous Diophantine Approximation of
Formal Power Series: Multidimensional Linear Complexity},
author={Michael Vielhaber, Monica del Pilar Canales},
journal={arXiv preprint arXiv:cs/0607030},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607030},
primaryClass={cs.IT math.IT}
} | vielhaber2006towards |
arxiv-674464 | cs/0607031 | A distributed approximation algorithm for the minimum degree minimum weight spanning trees | <|reference_start|>A distributed approximation algorithm for the minimum degree minimum weight spanning trees: Fischer has shown how to compute a minimum weight spanning tree of degree at most $b \Delta^* + \lceil \log_b n\rceil$ in time $O(n^{4 + 1/\ln b})$ for any constant $b > 1$, where $\Delta^*$ is the value of an optimal solution and $n$ is the number of nodes in the network. In this paper, we propose a distributed version of Fischer's algorithm that requires $O(n^{2 + 1/\ln b})$ messages and time, and $O(n)$ space per node.<|reference_end|> | arxiv | @article{lavault2006a,
title={A distributed approximation algorithm for the minimum degree minimum
weight spanning trees},
author={Christian Lavault (LIPN), Mario Valencia-Pabon (LIPN)},
journal={arXiv preprint arXiv:cs/0607031},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607031},
primaryClass={cs.DC cs.DM}
} | lavault2006a |
arxiv-674465 | cs/0607032 | Asymptotic Analysis of a Leader Election Algorithm | <|reference_start|>Asymptotic Analysis of a Leader Election Algorithm: Itai and Rodeh showed that, on the average, the communication of a leader election algorithm takes no more than $LN$ bits, where $L \simeq 2.441716$ and $N$ denotes the size of the ring. We give a precise asymptotic analysis of the average number of rounds $M(n)$ required by the algorithm, proving for example that $M(\infty) := \lim_{n\to \infty} M(n) = 2.441715879...$, where $n$ is the number of starting candidates in the election. Accurate asymptotic expressions of the second moment $M^{(2)}(n)$ of the discrete random variable at hand, its probability distribution, and the generalization to all moments are given. Corresponding asymptotic expansions $(n\to \infty)$ are provided for sufficiently large $j$, where $j$ counts the number of rounds. Our numerical results show that all computations perfectly fit the observed values. Finally, we investigate the generalization to probability $t/n$, where $t$ is a non-negative real parameter. The real function $M(\infty,t) := \lim_{n\to \infty} M(n,t)$ is shown to admit \textit{one unique minimum} $M(\infty,t^{*})$ on the real segment $(0,2)$. Furthermore, the variations of $M(\infty,t)$ on the whole real line are also studied in detail.<|reference_end|> | arxiv | @article{lavault2006asymptotic,
title={Asymptotic Analysis of a Leader Election Algorithm},
author={Christian Lavault (LIPN), Guy Louchard (ULB)},
journal={arXiv preprint arXiv:cs/0607032},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607032},
primaryClass={cs.DC cs.NA}
} | lavault2006asymptotic |
arxiv-674466 | cs/0607033 | Planar Graphs: Logical Complexity and Parallel Isomorphism Tests | <|reference_start|>Planar Graphs: Logical Complexity and Parallel Isomorphism Tests: We prove that every triconnected planar graph is definable by a first order sentence that uses at most 15 variables and has quantifier depth at most $11\log_2 n+43$. As a consequence, a canonic form of such graphs is computable in $AC^1$ by the 14-dimensional Weisfeiler-Lehman algorithm. This provides another way to show that the planar graph isomorphism is solvable in $AC^1$.<|reference_end|> | arxiv | @article{verbitsky2006planar,
title={Planar Graphs: Logical Complexity and Parallel Isomorphism Tests},
author={Oleg Verbitsky},
journal={arXiv preprint arXiv:cs/0607033},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607033},
primaryClass={cs.CC cs.LO}
} | verbitsky2006planar |
arxiv-674467 | cs/0607034 | Quasi-Optimal Leader Election Algorithms in Radio Networks with Loglogarithmic Awake Time Slots | <|reference_start|>Quasi-Optimal Leader Election Algorithms in Radio Networks with Loglogarithmic Awake Time Slots: A radio network (RN) is a distributed system consisting of $n$ radio stations. We design and analyze two distributed leader election protocols in RN where the number $n$ of radio stations is unknown. The first algorithm runs under the assumption of {\it limited collision detection}, while the second assumes that {\it no collision detection} is available. By ``limited collision detection'', we mean that if exactly one station sends (broadcasts) a message, then all stations (including the transmitter) that are listening at this moment receive the sent message. By contrast, the second no-collision-detection algorithm assumes that a station cannot simultaneously send and listen signals. Moreover, both protocols allow the stations to keep asleep as long as possible, thus minimizing their awake time slots (such algorithms are called {\it energy-efficient}). Both randomized protocols in RN are shown to elect a leader in $O(\log{(n)})$ expected time, with no station being awake for more than $O(\log{\log{(n)}})$ time slots. Therefore, a new class of efficient algorithms is set up that match the $\Omega(\log{(n)})$ time lower-bound established by Kushilevitz and Mansour.<|reference_end|> | arxiv | @article{lavault2006quasi-optimal,
title={Quasi-Optimal Leader Election Algorithms in Radio Networks with
Loglogarithmic Awake Time Slots},
author={Christian Lavault (LIPN), Jean-Fran\c{c}ois Marckert (LaBRI), Vlady
Ravelomanana (LIPN)},
journal={10th IEEE International Conference on Telecommunications (2003)
1113-1119},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607034},
primaryClass={cs.DC cs.NI}
} | lavault2006quasi-optimal |
arxiv-674468 | cs/0607035 | Resettable Zero Knowledge in the Bare Public-Key Model under Standard Assumption | <|reference_start|>Resettable Zero Knowledge in the Bare Public-Key Model under Standard Assumption: In this paper we resolve an open problem regarding resettable zero knowledge in the bare public-key (BPK for short) model: Does there exist constant round resettable zero knowledge argument with concurrent soundness for $\mathcal{NP}$ in BPK model without assuming \emph{sub-exponential hardness}? We give a positive answer to this question by presenting such a protocol for any language in $\mathcal{NP}$ in the bare public-key model assuming only collision-resistant hash functions against \emph{polynomial-time} adversaries.<|reference_end|> | arxiv | @article{deng2006resettable,
title={Resettable Zero Knowledge in the Bare Public-Key Model under Standard
Assumption},
author={Yi Deng, Dongdai Lin},
journal={arXiv preprint arXiv:cs/0607035},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607035},
primaryClass={cs.CR}
} | deng2006resettable |
arxiv-674469 | cs/0607036 | Combinatorial laplacians and positivity under partial transpose | <|reference_start|>Combinatorial laplacians and positivity under partial transpose: Density matrices of graphs are combinatorial laplacians normalized to have trace one (Braunstein \emph{et al.}, \emph{Phys. Rev. A} \textbf{73}:1, 012320 (2006)). If the vertices of a graph are arranged as an array, then its density matrix carries a block structure with respect to which properties such as separability can be considered. We prove that the so-called degree-criterion, which was conjectured to be necessary and sufficient for separability of density matrices of graphs, is equivalent to the PPT-criterion. As such it is not sufficient for testing the separability of density matrices of graphs (we provide an explicit example). Nonetheless, we prove the sufficiency when one of the array dimensions has length two (for an alternative proof see Wu, \emph{Phys. Lett. A} \textbf{351} (2006), no. 1-2, 18--22). Finally we derive a rational upper bound on the concurrence of density matrices of graphs and show that this bound is exact for graphs on four vertices.<|reference_end|> | arxiv | @article{hildebrand2006combinatorial,
title={Combinatorial laplacians and positivity under partial transpose},
author={Roland Hildebrand, Stefano Mancini, and Simone Severini},
journal={Math. Struct. in Comp. Sci. Vol.18, pp.205-219 (2008)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607036},
primaryClass={cs.CC quant-ph}
} | hildebrand2006combinatorial |
arxiv-674470 | cs/0607037 | The Minimal Cost Algorithm for Off-Line Diagnosability of Discrete Event Systems | <|reference_start|>The Minimal Cost Algorithm for Off-Line Diagnosability of Discrete Event Systems: The failure diagnosis for {\it discrete event systems} (DESs) has been given considerable attention in recent years. Both on-line and off-line diagnostics in the framework of DESs were first considered by Lin Feng in 1994, and particularly an algorithm for diagnosability of DESs was presented. Motivated by some existing problems to be overcome in previous work, in this paper, we investigate the minimal cost algorithm for diagnosability of DESs. More specifically: (i) we give a generic method for judging a system's off-line diagnosability, and the complexity of this algorithm is polynomial-time; and (ii) in particular, we present an algorithm for searching for the minimal set among all observable event sets, whereas the previous algorithm may find a {\it non-minimal} one.<|reference_end|> | arxiv | @article{fan2006the,
title={The Minimal Cost Algorithm for Off-Line Diagnosability of Discrete Event
Systems},
author={Zhujun Fan},
journal={arXiv preprint arXiv:cs/0607037},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607037},
primaryClass={cs.AI cs.CC}
} | fan2006the |
arxiv-674471 | cs/0607038 | Retouched Bloom Filters: Allowing Networked Applications to Flexibly Trade Off False Positives Against False Negatives | <|reference_start|>Retouched Bloom Filters: Allowing Networked Applications to Flexibly Trade Off False Positives Against False Negatives: Where distributed agents must share voluminous set membership information, Bloom filters provide a compact, though lossy, way for them to do so. Numerous recent networking papers have examined the trade-offs between the bandwidth consumed by the transmission of Bloom filters, and the error rate, which takes the form of false positives, and which rises the more the filters are compressed. In this paper, we introduce the retouched Bloom filter (RBF), an extension that makes the Bloom filter more flexible by permitting the removal of selected false positives at the expense of generating random false negatives. We analytically show that RBFs created through a random process maintain an overall error rate, expressed as a combination of the false positive rate and the false negative rate, that is equal to the false positive rate of the corresponding Bloom filters. We further provide some simple heuristics and improved algorithms that decrease the false positive rate more than the corresponding increase in the false negative rate, when creating RBFs. Finally, we demonstrate the advantages of an RBF over a Bloom filter in a distributed network topology measurement application, where information about large stop sets must be shared among route tracing monitors.<|reference_end|> | arxiv | @article{donnet2006retouched,
title={Retouched Bloom Filters: Allowing Networked Applications to Flexibly
Trade Off False Positives Against False Negatives},
author={Benoit Donnet, Bruno Baynat, Timur Friedman},
journal={arXiv preprint arXiv:cs/0607038},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607038},
primaryClass={cs.NI}
} | donnet2006retouched |
arxiv-674472 | cs/0607039 | Set-Theoretic Preliminaries for Computer Scientists | <|reference_start|>Set-Theoretic Preliminaries for Computer Scientists: The basics of set theory are usually copied, directly or indirectly, by computer scientists from introductions to mathematical texts. Often mathematicians are content with special cases when the general case is of no mathematical interest. But sometimes what is of no mathematical interest is of great practical interest in computer science. For example, non-binary relations in mathematics tend to have numerical indexes and tend to be unsorted. In the theory and practice of relational databases both these simplifications are unwarranted. In response to this situation we present here an alternative to the ``set-theoretic preliminaries'' usually found in computer science texts. This paper separates binary relations from the kind of relations that are needed in relational databases. Its treatment of functions supports both computer science in general and the kind of relations needed in databases. As a sample application this paper shows how the mathematical theory of relations naturally leads to the relational data model and how the operations on relations are by themselves already a powerful vehicle for queries.<|reference_end|> | arxiv | @article{van emden2006set-theoretic,
title={Set-Theoretic Preliminaries for Computer Scientists},
author={M.H. van Emden},
journal={arXiv preprint arXiv:cs/0607039},
year={2006},
number={DCS-304-IR},
archivePrefix={arXiv},
eprint={cs/0607039},
primaryClass={cs.DM cs.DB}
} | van emden2006set-theoretic |
arxiv-674473 | cs/0607040 | PALS: Efficient Or-Parallelism on Beowulf Clusters | <|reference_start|>PALS: Efficient Or-Parallelism on Beowulf Clusters: This paper describes the development of the PALS system, an implementation of Prolog capable of efficiently exploiting or-parallelism on distributed-memory platforms--specifically Beowulf clusters. PALS makes use of a novel technique, called incremental stack-splitting. The technique proposed builds on the stack-splitting approach, previously described by the authors and experimentally validated on shared-memory systems, which in turn is an evolution of the stack-copying method used in a variety of parallel logic and constraint systems--e.g., MUSE, YAP, and Penny. The PALS system is the first distributed or-parallel implementation of Prolog based on the stack-splitting method ever realized. The results presented confirm the superiority of this method as a simple yet effective technique to transition from shared-memory to distributed-memory systems. PALS extends stack-splitting by combining it with incremental copying; the paper provides a description of the implementation of PALS, including details of how distributed scheduling is handled. We also investigate methodologies to effectively support order-sensitive predicates (e.g., side-effects) in the context of the stack-splitting scheme. Experimental results obtained from running PALS on both Shared Memory and Beowulf systems are presented and analyzed.<|reference_end|> | arxiv | @article{pontelli2006pals:,
title={PALS: Efficient Or-Parallelism on Beowulf Clusters},
author={Enrico Pontelli (1), Karen Villaverde (1), Hai-Feng Guo (2), Gopal
Gupta (3) ((1) New Mexico State University, (2) University of Nebraska at
Omaha, (3) University of Texas at Dallas)},
journal={arXiv preprint arXiv:cs/0607040},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607040},
primaryClass={cs.DC cs.PL}
} | pontelli2006pals: |
arxiv-674474 | cs/0607041 | Methods for Partitioning Data to Improve Parallel Execution Time for Sorting on Heterogeneous Clusters | <|reference_start|>Methods for Partitioning Data to Improve Parallel Execution Time for Sorting on Heterogeneous Clusters: The aim of the paper is to introduce general techniques in order to optimize the parallel execution time of sorting on distributed architectures with processors of various speeds. Such an application requires a partitioning step. For uniformly related processors (processor speeds are related by a constant factor), we develop a constant time technique for mastering processor load and execution time in a heterogeneous environment and also a technique to deal with unknown cost functions. For non-uniformly related processors, we use a technique based on dynamic programming. Most of the time, the solutions are in O(p) (p is the number of processors), independent of the problem size n. Consequently, there is a small overhead regarding the problem we deal with, but it is inherently limited by knowledge of the time complexity of the portion of code following the partitioning.<|reference_end|> | arxiv | @article{cérin2006methods,
title={Methods for Partitioning Data to Improve Parallel Execution Time for
Sorting on Heterogeneous Clusters},
  author={Christophe C\'erin (LIPN), Jean-Christophe Dubacq (LIPN), Jean-Louis
  Roch (INRIA Rh\^one-Alpes / ID-IMAG), the SafeScale Collaboration},
journal={Advances in Grid and Pervasive Computing (2006) 175-186},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607041},
primaryClass={cs.DC cs.PF}
} | cérin2006methods |
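For the uniformly related case, the constant-time partitioning idea can be illustrated by a proportional split: each processor receives a share of the n items proportional to its relative speed, so all processors finish at roughly the same time. This is a simplified sketch assuming a linear cost function; the paper's techniques also cover unknown and non-linear costs:

```python
def proportional_partition(n, speeds):
    """Split n items among processors proportionally to their speeds,
    distributing rounding leftovers by largest fractional remainder."""
    total = sum(speeds)
    shares = [n * s / total for s in speeds]
    sizes = [int(x) for x in shares]          # floor of each ideal share
    # hand out the remaining items to the largest fractional parts
    by_remainder = sorted(range(len(speeds)),
                          key=lambda i: shares[i] - sizes[i], reverse=True)
    for i in by_remainder[: n - sum(sizes)]:
        sizes[i] += 1
    return sizes
```

With speeds (1, 2, 3) and 100 items, the fastest processor gets half the data, so all three ideally complete their local sort at the same time.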
arxiv-674475 | cs/0607042 | Towards a classical proof of exponential lower bound for 2-probe smooth codes | <|reference_start|>Towards a classical proof of exponential lower bound for 2-probe smooth codes: Let C: {0,1}^n -> {0,1}^m be a code encoding an n-bit string into an m-bit string. Such a code is called a (q, c, e) smooth code if there exists a decoding algorithm which, while decoding any bit of the input, makes at most q probes on the code word, and the probability that it looks at any location is at most c/m. The error made by the decoding algorithm is at most e. Smooth codes were introduced by Katz and Trevisan in connection with locally decodable codes. For 2-probe smooth codes, Kerenidis and de Wolf have shown a lower bound on m that is exponential in n, in case c and e are constants. Their lower bound proof went through quantum arguments and, interestingly, there is no completely classical argument as yet for the same (albeit completely classical!) statement. We do not match the bounds shown by Kerenidis and de Wolf, but we show the following. Let C: {0,1}^n -> {0,1}^m be a (2,c,e) smooth code; if e <= c^2/8n^2, then m >= 2^(n/320c^2 - 1). We hope that the arguments and techniques used in this paper extend (or are helpful in making similar arguments) to match the bounds shown using quantum arguments. Moreover, hopefully they extend to show bounds for codes with a greater number of probes, where quantum arguments unfortunately do not yield good bounds (even for 3-probe codes).<|reference_end|> | arxiv | @article{jain2006towards,
title={Towards a classical proof of exponential lower bound for 2-probe smooth
codes},
author={Rahul Jain},
journal={arXiv preprint arXiv:cs/0607042},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607042},
primaryClass={cs.CR cs.IT math.IT}
} | jain2006towards |
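A concrete instance of the regime discussed above is the Hadamard code, the canonical 2-probe locally decodable (and smooth) code with m = 2^n: each input bit x_i can be recovered by XOR-ing two codeword positions. The sketch below is standard background, not material from the paper:

```python
def hadamard_encode(x_bits):
    """Hadamard code: one parity bit <x, a> for every a in {0,1}^n,
    so the codeword length is m = 2^n."""
    n = len(x_bits)
    return [sum(x_bits[i] & ((a >> i) & 1) for i in range(n)) % 2
            for a in range(2 ** n)]

def decode_bit(codeword, i, a):
    """2-probe local decoding of x_i: probe positions a and a XOR e_i.
    Since <x, a> XOR <x, a XOR e_i> = x_i, the two parities differ
    exactly by the bit being decoded."""
    return codeword[a] ^ codeword[a ^ (1 << i)]
```

Choosing the probe position a uniformly at random makes each individual probe uniform over the codeword, which is exactly the smoothness property defined in the abstract.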
arxiv-674476 | cs/0607043 | Analysis of CDMA systems that are characterized by eigenvalue spectrum | <|reference_start|>Analysis of CDMA systems that are characterized by eigenvalue spectrum: An approach by which to analyze the performance of the code division multiple access (CDMA) scheme, which is a core technology used in modern wireless communication systems, is provided. The approach characterizes the objective system by the eigenvalue spectrum of a cross-correlation matrix composed of signature sequences used in CDMA communication, which enables us to handle a wider class of CDMA systems beyond the basic model reported by Tanaka. The utility of the novel scheme is shown by analyzing a system in which the generation of signature sequences is designed for enhancing the orthogonality.<|reference_end|> | arxiv | @article{takeda2006analysis,
title={Analysis of CDMA systems that are characterized by eigenvalue spectrum},
author={Koujin Takeda, Shinsuke Uda, Yoshiyuki Kabashima},
journal={Europhys. Lett. 76 (2006) 1193-1199},
year={2006},
doi={10.1209/epl/i2006-10380-5},
archivePrefix={arXiv},
eprint={cs/0607043},
primaryClass={cs.IT cond-mat.dis-nn math.IT}
} | takeda2006analysis |
arxiv-674477 | cs/0607044 | Use of UML and Model Transformations for Workflow Process Definitions | <|reference_start|>Use of UML and Model Transformations for Workflow Process Definitions: Currently many different modeling languages are used for workflow definitions in BPM systems. The authors of this paper analyze the two most popular graphical languages with the highest potential for wide practical usage - UML Activity diagrams (AD) and Business Process Modeling Notation (BPMN). The workflow aspects necessary in practice are briefly discussed, and on this basis a natural AD profile is proposed, which covers all of them. A functionally equivalent BPMN subset is also selected. The semantics of both languages in the context of process execution (namely, mapping to BPEL) is also analyzed in the paper. By analyzing AD and BPMN metamodels, the authors conclude that an exact transformation from AD to BPMN is not trivial even for the selected subset, though these languages are considered to be similar. The authors show how this transformation could be defined in the MOLA transformation language.<|reference_end|> | arxiv | @article{kalnins2006use,
title={Use of UML and Model Transformations for Workflow Process Definitions},
author={Audris Kalnins, Valdis Vitolins},
journal={Audris Kalnins, Valdis Vitolins, Databases and Information
  Systems, BalticDB\&IS'2006, edited by Olegas Vasilecas, Johann Eder, Albertas
Caplinskas, Vilnius, Technika, 2006, pp. 3.-15},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607044},
primaryClass={cs.SE}
} | kalnins2006use |
arxiv-674478 | cs/0607045 | Improved online hypercube packing | <|reference_start|>Improved online hypercube packing: In this paper, we study the online multidimensional bin packing problem when all items are hypercubes. Based on the techniques in the one-dimensional bin packing algorithm Super Harmonic by Seiden, we give a framework for the online hypercube packing problem and obtain new upper bounds on the asymptotic competitive ratios. For square packing, we get an upper bound of 2.1439, which is better than 2.24437. For cube packing, we also give a new upper bound of 2.6852, which is better than the bound of 2.9421 by Epstein and van Stee.<|reference_end|> | arxiv | @article{han2006improved,
title={Improved online hypercube packing},
author={Xin Han, Deshi Ye, Yong Zhou},
journal={arXiv preprint arXiv:cs/0607045},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607045},
primaryClass={cs.DS}
} | han2006improved |
arxiv-674479 | cs/0607046 | Strip Packing vs Bin Packing | <|reference_start|>Strip Packing vs Bin Packing: In this paper we establish a general algorithmic framework between bin packing and strip packing, with which we achieve the same asymptotic bounds by applying bin packing algorithms to strip packing. More precisely we obtain the following results: (1) Any offline bin packing algorithm can be applied to strip packing maintaining the same asymptotic worst-case ratio. Thus using FFD (MFFD) as a subroutine, we get a practical (simple and fast) algorithm for strip packing with an upper bound 11/9 (71/60). A simple AFPTAS for strip packing immediately follows. (2) A class of Harmonic-based algorithms for bin packing can be applied to online strip packing maintaining the same asymptotic competitive ratio. This implies that online strip packing admits an upper bound of 1.58889 on the asymptotic competitive ratio, which is very close to the lower bound 1.5401, significantly improves the previous best bound of 1.6910, and affirmatively answers an open question posed by Csirik et al.<|reference_end|> | arxiv | @article{han2006strip,
title={Strip Packing vs. Bin Packing},
author={Xin Han, Kazuo Iwama, Deshi Ye, Guochuan Zhang},
journal={arXiv preprint arXiv:cs/0607046},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607046},
primaryClass={cs.DS}
} | han2006strip |
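The FFD subroutine named in result (1) is the classical First Fit Decreasing heuristic for bin packing; the paper's contribution is the framework that lifts such algorithms to strip packing while preserving their asymptotic bounds. A minimal FFD sketch, assuming unit-capacity bins:

```python
def first_fit_decreasing(items, capacity=1.0):
    """First Fit Decreasing: sort items by size descending, then place
    each item into the first bin with enough residual capacity,
    opening a new bin only when none fits."""
    bins = []  # each bin is the list of item sizes placed in it
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity + 1e-12:  # tolerate float error
                b.append(item)
                break
        else:
            bins.append([item])
    return bins
```

In the strip packing setting, each resulting bin corresponds to a horizontal level of the strip, which is how bin packing bounds transfer to strip height.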
arxiv-674480 | cs/0607047 | PAC Classification based on PAC Estimates of Label Class Distributions | <|reference_start|>PAC Classification based on PAC Estimates of Label Class Distributions: A standard approach in pattern classification is to estimate the distributions of the label classes, and then to apply the Bayes classifier to the estimates of the distributions in order to classify unlabeled examples. As one might expect, the better our estimates of the label class distributions, the better the resulting classifier will be. In this paper we make this observation precise by identifying risk bounds of a classifier in terms of the quality of the estimates of the label class distributions. We show how PAC learnability relates to estimates of the distributions that have a PAC guarantee on their $L_1$ distance from the true distribution, and we bound the increase in negative log likelihood risk in terms of PAC bounds on the KL-divergence. We give an inefficient but general-purpose smoothing method for converting an estimated distribution that is good under the $L_1$ metric into a distribution that is good under the KL-divergence.<|reference_end|> | arxiv | @article{palmer2006pac,
title={PAC Classification based on PAC Estimates of Label Class Distributions},
author={Nick Palmer and Paul W. Goldberg},
journal={arXiv preprint arXiv:cs/0607047},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607047},
primaryClass={cs.LG}
} | palmer2006pac |
arxiv-674481 | cs/0607048 | Evaluation de Techniques de Traitement des Refus\'es pour l'Octroi de Cr\'edit | <|reference_start|>Evaluation de Techniques de Traitement des Refus\'es pour l'Octroi de Cr\'edit: We present the problem of "Reject Inference" for credit acceptance. Because of the current legal framework (Basel II), credit institutions need to industrialize their processes for credit acceptance, including Reject Inference. We present here a methodology to compare various techniques of Reject Inference and show that it is necessary, in the absence of real theoretical results, to be able to produce and compare models adapted to available data (selection of the "best" model conditionally on data). We describe some simulations run on a small data set to illustrate the approach and some strategies for choosing the control group, which is the only valid approach to Reject Inference.<|reference_end|> | arxiv | @article{viennet2006evaluation,
title={Evaluation de Techniques de Traitement des Refus\'{e}s pour l'Octroi de
Cr\'{e}dit},
  author={Emmanuel Viennet (LIPN), Fran\c{c}oise Fogelman Souli\'e (KXEN),
Benoit Rognier (KXEN)},
journal={38i\`{e}mes Journ\'{e}es de Statistiques (2006) 105},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607048},
primaryClass={cs.NE math.ST stat.TH}
} | viennet2006evaluation |
arxiv-674482 | cs/0607049 | Secure Component Deployment in the OSGi(tm) Release 4 Platform | <|reference_start|>Secure Component Deployment in the OSGi(tm) Release 4 Platform: Recent years have seen a dramatic increase in the use of component platforms, not only in classical application servers, but also more and more in the domain of Embedded Systems. The OSGi(tm) platform is one of these platforms dedicated to lightweight execution environments, and one of the most prominent. However, new platforms also imply new security flaws, and a lack of both knowledge and tools for protecting the exposed systems. This technical report aims at fostering the understanding of security mechanisms in component deployment. It focuses on securing the deployment of components. It presents the cryptographic mechanisms necessary for signing OSGi(tm) bundles, as well as the detailed process of bundle signature and validation. We also present the SFelix platform, which is a secure extension to the Felix OSGi(tm) framework implementation. It includes our implementation of the bundle signature process, as specified by the OSGi(tm) Release 4 Security Layer. Moreover, a tool for signing and publishing bundles, SFelix JarSigner, has been developed to conveniently integrate bundle signature into the bundle deployment process.<|reference_end|> | arxiv | @article{parrend2006secure,
title={Secure Component Deployment in the OSGi(tm) Release 4 Platform},
  author={Pierre Parrend (INRIA Rh\^one-Alpes), St\'ephane Fr\'enot (INRIA
  Rh\^one-Alpes)},
journal={arXiv preprint arXiv:cs/0607049},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607049},
primaryClass={cs.CR cs.OS}
} | parrend2006secure |
arxiv-674483 | cs/0607050 | Interactive Hatching and Stippling by Example | <|reference_start|>Interactive Hatching and Stippling by Example: We describe a system that lets a designer interactively draw patterns of strokes in the picture plane, then guide the synthesis of similar patterns over new picture regions. Synthesis is based on an initial user-assisted analysis phase in which the system recognizes distinct types of strokes (hatching and stippling) and organizes them according to perceptual grouping criteria. The synthesized strokes are produced by combining properties (e.g. length, orientation, parallelism, proximity) of the stroke groups extracted from the input examples. We illustrate our technique with a drawing application that allows the control of attributes and scale-dependent reproduction of the synthesized patterns.<|reference_end|> | arxiv | @article{barla2006interactive,
title={Interactive Hatching and Stippling by Example},
  author={Pascal Barla (INRIA Rh\^one-Alpes / GRAVIR-IMAG), Simon Breslav
  (EECS), Lee Markosian (EECS), Jo\"elle Thollot (INRIA Rh\^one-Alpes /
  GRAVIR-IMAG)},
journal={arXiv preprint arXiv:cs/0607050},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607050},
primaryClass={cs.GR}
} | barla2006interactive |
arxiv-674484 | cs/0607051 | Raisonner avec des diagrammes : perspectives cognitives et computationnelles | <|reference_start|>Raisonner avec des diagrammes : perspectives cognitives et computationnelles: Diagrammatic, analogical or iconic representations are often contrasted with linguistic or logical representations, in which the shape of the symbols is arbitrary. The aim of this paper is to make a case for the usefulness of diagrams in inferential knowledge representation systems. Although commonly used, diagrams have for a long time suffered from the reputation of being only a heuristic tool or a mere support for intuition. The first part of this paper is an historical background paying tribute to the logicians, psychologists and computer scientists who put an end to this formal prejudice against diagrams. The second part is a discussion of their characteristics as opposed to those of linguistic forms. The last part is aimed at reviving the interest for heterogeneous representation systems including both linguistic and diagrammatic representations.<|reference_end|> | arxiv | @article{recanati2006raisonner,
title={Raisonner avec des diagrammes : perspectives cognitives et
computationnelles},
author={Catherine Recanati (LIPN)},
journal={Intellectica 40 (2005) 9-42},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607051},
primaryClass={cs.CL}
} | recanati2006raisonner |
arxiv-674485 | cs/0607052 | Dealing with Metonymic Readings of Named Entities | <|reference_start|>Dealing with Metonymic Readings of Named Entities: The aim of this paper is to propose a method for tagging named entities (NE), using natural language processing techniques. Beyond their literal meaning, named entities are frequently subject to metonymy. We show the limits of current NE type hierarchies and detail a new proposal aiming at dynamically capturing the semantics of entities in context. This model can analyze complex linguistic phenomena like metonymy, which are known to be difficult for natural language processing but crucial for most applications. We present an implementation and some tests using the French ESTER corpus, and give significant results.<|reference_end|> | arxiv | @article{poibeau2006dealing,
title={Dealing with Metonymic Readings of Named Entities},
author={Thierry Poibeau (LIPN)},
journal={Dans Actes de The 28th Annual Conference of the Cognitive Science
Society (CogSci 2006) - The 28th Annual Conference of the Cognitive Science
Society (CogSci 2006), Vancouver : Canada (2006)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607052},
primaryClass={cs.AI cs.CL}
} | poibeau2006dealing |
arxiv-674486 | cs/0607053 | Linguistically Grounded Models of Language Change | <|reference_start|>Linguistically Grounded Models of Language Change: Questions related to the evolution of language have recently seen an impressive increase in interest (Briscoe, 2002). This short paper aims at questioning the scientific status of these models and their relations to attested data. We show that one cannot directly model non-linguistic factors (exogenous factors) even if they play a crucial role in language evolution. We then examine the relation between linguistic models and attested language data, as well as their contribution to cognitive linguistics.<|reference_end|> | arxiv | @article{poibeau2006linguistically,
title={Linguistically Grounded Models of Language Change},
author={Thierry Poibeau (LIPN)},
journal={The 28th Annual Conference of the Cognitive Science Society
(CogSci 2006), Canada (2006)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607053},
primaryClass={cs.AI cs.CL}
} | poibeau2006linguistically |
arxiv-674487 | cs/0607054 | Elementary Proof of a Theorem of Jean Ville | <|reference_start|>Elementary Proof of a Theorem of Jean Ville: Considerable thought has been devoted to an adequate definition of the class of infinite, random binary sequences (the sort of sequence that almost certainly arises from flipping a fair coin indefinitely). The first mathematical exploration of this problem was due to R. von Mises, and based on his concept of a "selection function." A decisive objection to von Mises' idea was formulated in a theorem offered by Jean Ville in 1939. It shows that some sequences admitted by von Mises as "random" in fact manifest a certain kind of systematicity. Ville's proof is challenging, and an alternative approach has appeared only in condensed form. We attempt to provide an expanded version of the latter, alternative argument.<|reference_end|> | arxiv | @article{lieb2006elementary,
title={Elementary Proof of a Theorem of Jean Ville},
author={Elliott H. Lieb, Daniel Osherson and Scott Weinstein},
journal={arXiv preprint arXiv:cs/0607054},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607054},
primaryClass={cs.CC}
} | lieb2006elementary |
arxiv-674488 | cs/0607055 | Boundary cliques, clique trees and perfect sequences of maximal cliques of a chordal graph | <|reference_start|>Boundary cliques, clique trees and perfect sequences of maximal cliques of a chordal graph: We characterize clique trees of a chordal graph in their relation to simplicial vertices and perfect sequences of maximal cliques. We investigate boundary cliques defined by Shibata and clarify their relation to endpoints of clique trees. Next we define a symmetric binary relation between the set of clique trees and the set of perfect sequences of maximal cliques. We describe the relation as a bipartite graph and prove that the bipartite graph is always connected. Lastly, we consider characterizing chordal graphs in terms of the non-uniqueness of clique trees.<|reference_end|> | arxiv | @article{hara2006boundary,
title={Boundary cliques, clique trees and perfect sequences of maximal cliques
of a chordal graph},
author={Hisayuki Hara and Akimichi Takemura},
journal={arXiv preprint arXiv:cs/0607055},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607055},
primaryClass={cs.DM}
} | hara2006boundary |
arxiv-674489 | cs/0607056 | Reasoning with Intervals on Granules | <|reference_start|>Reasoning with Intervals on Granules: The formalizations of periods of time inside a linear model of Time are usually based on the notion of intervals, which may or may not contain their endpoints. This is not enough when the periods are written in terms of coarse granularities with respect to the event taken into account. For instance, how to express the inter-war period in terms of a {\em years} interval? This paper presents a new type of intervals, neither open, nor closed, nor open-closed, and the extension of operations to intervals of this new type, in order to reduce the gap between the discourse related to temporal relationships and its translation into a discretized model of Time.<|reference_end|> | arxiv | @article{schwer2006reasoning,
title={Reasoning with Intervals on Granules},
author={Sylviane Schwer (LIPN)},
journal={Journal of Universal Computer Science 8 (8) (2002) 793-808},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607056},
primaryClass={cs.AI cs.DM}
} | schwer2006reasoning |
arxiv-674490 | cs/0607057 | The Average Size of Giant Components Between the Double-Jump | <|reference_start|>The Average Size of Giant Components Between the Double-Jump: We study the sizes of connected components according to their excesses during a random graph process built with $n$ vertices. The considered model is the continuous one defined in Janson 2000. An ${\ell}$-component is a connected component with ${\ell}$ edges more than vertices. $\ell$ is also called the \textit{excess} of such component. As our main result, we show that when $\ell$ and ${n \over \ell}$ are both large, the expected number of vertices that ever belong to an $\ell$-component is about ${12}^{1/3} {\ell}^{1/3} n^{2/3}$. We also obtain limit theorems for the number of creations of $\ell$-components.<|reference_end|> | arxiv | @article{ravelomanana2006the,
title={The Average Size of Giant Components Between the Double-Jump},
author={Vlady Ravelomanana (LIPN), the Projet PAI Amadeus Collaboration},
  journal={Algorithmica Issue sp\'{e}ciale "Analysis of Algorithms" (2006) \`{A}
  para\^{i}tre},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607057},
primaryClass={cs.DM math.CO math.PR}
} | ravelomanana2006the |
arxiv-674491 | cs/0607058 | Craig's Interpolation Theorem formalised and mechanised in Isabelle/HOL | <|reference_start|>Craig's Interpolation Theorem formalised and mechanised in Isabelle/HOL: We formalise and mechanise a constructive, proof-theoretic proof of Craig's Interpolation Theorem in Isabelle/HOL. We give all the definitions and lemma statements both formally and informally. We also transcribe informally the formal proofs. We detail the main features of our mechanisation, such as the formalisation of binding for first-order formulae. We also give some applications of Craig's Interpolation Theorem.<|reference_end|> | arxiv | @article{ridge2006craig's,
title={Craig's Interpolation Theorem formalised and mechanised in Isabelle/HOL},
author={Tom Ridge},
journal={arXiv preprint arXiv:cs/0607058},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607058},
primaryClass={cs.LO}
} | ridge2006craig's |
arxiv-674492 | cs/0607059 | Creation and Growth of Components in a Random Hypergraph Process | <|reference_start|>Creation and Growth of Components in a Random Hypergraph Process: Denote by an $\ell$-component a connected $b$-uniform hypergraph with $k$ edges and $k(b-1) - \ell$ vertices. We prove that the expected number of creations of $\ell$-components during a random hypergraph process tends to 1 as $\ell$ and $b$ tend to $\infty$ with the total number of vertices $n$ such that $\ell = o(\sqrt[3]{\frac{n}{b}})$. Under the same conditions, we also show that the expected number of vertices that ever belong to an $\ell$-component is approximately $12^{1/3} (b-1)^{1/3} \ell^{1/3} n^{2/3}$. As an immediate consequence, it follows that with high probability the largest $\ell$-component during the process is of size $O((b-1)^{1/3} \ell^{1/3} n^{2/3})$. Our results give insight into the size of giant components inside the phase transition of random hypergraphs.<|reference_end|> | arxiv | @article{ravelomanana2006creation,
title={Creation and Growth of Components in a Random Hypergraph Process},
author={Vlady Ravelomanana (LIPN), Alphonse Laza Rijamame (D.M.I)},
journal={Proceedings of The Twelfth Annual International Computing and
Combinatorics Conference (COCOON'06) -- Lecture Notes in Computer Science
(2006) \`{a} para\^{i}tre},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607059},
primaryClass={cs.DM math.CO math.PR}
} | ravelomanana2006creation |
arxiv-674493 | cs/0607060 | Circle Formation of Weak Mobile Robots | <|reference_start|>Circle Formation of Weak Mobile Robots: In this paper we prove the conjecture of D\'{e}fago & Konagaya. Furthermore, we describe a deterministic protocol for forming a regular n-gon in finite time.<|reference_end|> | arxiv | @article{dieudonne2006circle,
title={Circle Formation of Weak Mobile Robots},
author={Yoann Dieudonne (LaRIA), Ouiddad Labbani-Igbida (CREA), Franck Petit
(LaRIA)},
journal={arXiv preprint arXiv:cs/0607060},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607060},
primaryClass={cs.RO}
} | dieudonne2006circle |
arxiv-674494 | cs/0607061 | On Some Peculiarities of Dynamic Switch between Component Implementations in an Autonomic Computing System | <|reference_start|>On Some Peculiarities of Dynamic Switch between Component Implementations in an Autonomic Computing System: The behavior of the delta algorithm for autonomic switching between two component implementations is considered on several examples of client-server systems involving, in particular, periodic changes in the intensity of requests for the component. It is shown that for some specific combinations of elementary request costs, the number of clients in the system, the number of requests per unit of time, and the cost of switching between the implementations, the algorithm may exhibit behavior that is rather far from the desired one. A sufficient criterion for the success of the algorithm is proposed, based on the analysis of the accumulated cost difference between the implementations as a function of time. Suggestions for the practical evaluation of the algorithm's functioning are made in light of the observations in this paper.<|reference_end|> | arxiv | @article{mackarov2006on,
title={On Some Peculiarities of Dynamic Switch between Component
Implementations in an Autonomic Computing System},
author={Igor Mackarov (Maharishi University of Management)},
journal={arXiv preprint arXiv:cs/0607061},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607061},
primaryClass={cs.DS cs.DC cs.NA}
} | mackarov2006on |
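The accumulated-cost-difference criterion mentioned in the abstract can be illustrated with a small hysteresis-style sketch: track the running cost advantage of the alternative implementation and switch only when it exceeds the switch cost. This is a generic reconstruction from the abstract alone; the paper's actual delta algorithm, its decision rule, and the reset of the accumulator below are assumptions:

```python
def switch_points(costs_a, costs_b, switch_cost):
    """Return the time steps at which a delta-style algorithm would switch
    between implementations A and B: switch when the accumulated cost
    advantage of the other implementation exceeds the switch cost."""
    current = "A"
    delta = 0.0          # accumulated advantage of the *other* implementation
    switches = []
    for t, (ca, cb) in enumerate(zip(costs_a, costs_b)):
        delta += (ca - cb) if current == "A" else (cb - ca)
        if delta > switch_cost:
            current = "B" if current == "A" else "A"
            switches.append(t)
            delta = 0.0   # assumption: reset the accumulator after a switch
    return switches
```

With periodic request intensities, such a rule can oscillate or lag behind the workload, which is the kind of undesired behavior the paper's examples exhibit.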
arxiv-674495 | cs/0607062 | Get out the vote: Determining support or opposition from Congressional floor-debate transcripts | <|reference_start|>Get out the vote: Determining support or opposition from Congressional floor-debate transcripts: We investigate whether one can determine from the transcripts of U.S. Congressional floor debates whether the speeches represent support of or opposition to proposed legislation. To address this problem, we exploit the fact that these speeches occur as part of a discussion; this allows us to use sources of information regarding relationships between discourse segments, such as whether a given utterance indicates agreement with the opinion expressed by another. We find that the incorporation of such information yields substantial improvements over classifying speeches in isolation.<|reference_end|> | arxiv | @article{thomas2006get,
title={Get out the vote: Determining support or opposition from Congressional
floor-debate transcripts},
author={Matt Thomas, Bo Pang and Lillian Lee},
journal={arXiv preprint arXiv:cs/0607062},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607062},
primaryClass={cs.CL cs.SI physics.soc-ph}
} | thomas2006get |
arxiv-674496 | cs/0607063 | Prioritizing Software Inspection Results using Static Profiling | <|reference_start|>Prioritizing Software Inspection Results using Static Profiling: Static software checking tools are useful as an additional automated software inspection step that can easily be integrated in the development cycle and assist in creating secure, reliable and high quality code. However, an often quoted disadvantage of these tools is that they generate an overly large number of warnings, including many false positives due to the approximate analysis techniques. This information overload effectively limits their usefulness. In this paper we present ELAN, a technique that helps the user prioritize the information generated by a software inspection tool, based on a demand-driven computation of the likelihood that execution reaches the locations for which warnings are reported. This analysis is orthogonal to other prioritization techniques known from literature, such as severity levels and statistical analysis to reduce false positives. We evaluate feasibility of our technique using a number of case studies and assess the quality of our predictions by comparing them to actual values obtained by dynamic profiling.<|reference_end|> | arxiv | @article{boogerd2006prioritizing,
title={Prioritizing Software Inspection Results using Static Profiling},
author={Cathal Boogerd and Leon Moonen},
journal={arXiv preprint arXiv:cs/0607063},
year={2006},
number={TUD-SERG-2006-001},
archivePrefix={arXiv},
eprint={cs/0607063},
primaryClass={cs.SE}
} | boogerd2006prioritizing |
arxiv-674497 | cs/0607064 | How to Find Good Finite-Length Codes: From Art Towards Science | <|reference_start|>How to Find Good Finite-Length Codes: From Art Towards Science: We explain how to optimize finite-length LDPC codes for transmission over the binary erasure channel. Our approach relies on an analytic approximation of the erasure probability. This is in turn based on a finite-length scaling result to model large scale erasures and a union bound involving minimal stopping sets to take into account small error events. We show that the performances of optimized ensembles as observed in simulations are well described by our approximation. Although we only address the case of transmission over the binary erasure channel, our method should be applicable to a more general setting.<|reference_end|> | arxiv | @article{amraoui2006how,
title={How to Find Good Finite-Length Codes: From Art Towards Science},
author={Abdelaziz Amraoui and Andrea Montanari and Ruediger Urbanke},
journal={arXiv preprint arXiv:cs/0607064},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607064},
primaryClass={cs.IT math.IT}
} | amraoui2006how |
arxiv-674498 | cs/0607065 | Decomposable Theories | <|reference_start|>Decomposable Theories: We present in this paper a general algorithm for solving first-order formulas in particular theories called "decomposable theories". First of all, using special quantifiers, we give a formal characterization of decomposable theories and show some of their properties. Then, we present a general algorithm for solving first-order formulas in any decomposable theory "T". The algorithm is given in the form of five rewriting rules. It transforms a first-order formula "P", which can possibly contain free variables, into a conjunction "Q" of solved formulas easily transformable into a Boolean combination of existentially quantified conjunctions of atomic formulas. In particular, if "P" has no free variables then "Q" is either the formula "true" or "false". The correctness of our algorithm proves the completeness of the decomposable theories. Finally, we show that the theory "Tr" of finite or infinite trees is a decomposable theory and give some benchmarks realized by an implementation of our algorithm, solving formulas on two-partner games in "Tr" with more than 160 nested alternated quantifiers.<|reference_end|> | arxiv | @article{djelloul2006decomposable,
title={Decomposable Theories},
author={Khalil Djelloul},
journal={arXiv preprint arXiv:cs/0607065},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607065},
primaryClass={cs.LO cs.AI}
} | djelloul2006decomposable |
arxiv-674499 | cs/0607066 | Generalized h-index for Disclosing Latent Facts in Citation Networks | <|reference_start|>Generalized h-index for Disclosing Latent Facts in Citation Networks: What is the value of a scientist, and what is their impact upon scientific thinking? How can we measure the prestige of a journal or of a conference? The evaluation of the scientific work of a scientist and the estimation of the quality of a journal or conference have long attracted significant interest, due to the benefits of obtaining an unbiased and fair criterion. Although it appears to be simple, defining a quality metric is not an easy task. To overcome the disadvantages of the present metrics used for ranking scientists and journals, J.E. Hirsch proposed a pioneering metric, the now famous h-index. In this article, we demonstrate several inefficiencies of this index and develop a pair of generalizations and effective variants of it to deal with scientist ranking and with publication forum ranking. The new citation indices are able to disclose trendsetters in scientific research, as well as researchers that constantly shape their field with their influential work, no matter how old they are. We exhibit the effectiveness and the benefits of the new indices to unfold the full potential of the h-index, with extensive experimental results obtained from DBLP, a widely known on-line digital library.<|reference_end|> | arxiv | @article{sidiropoulos2006generalized,
title={Generalized h-index for Disclosing Latent Facts in Citation Networks},
author={Antonis Sidiropoulos and Dimitrios Katsaros and Yannis Manolopoulos},
journal={arXiv preprint arXiv:cs/0607066},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607066},
primaryClass={cs.DL}
} | sidiropoulos2006generalized |
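The entry above builds on Hirsch's h-index. As a baseline reference, here is a minimal sketch of the standard h-index computation (the paper's generalized variants are not reproduced here): the h-index is the largest h such that the author has at least h papers with at least h citations each.

```python
# Classic Hirsch h-index: largest h with at least h papers having
# at least h citations each.
def h_index(citations):
    # Sort citation counts in descending order, then find the last
    # 1-based rank r where the r-th paper still has >= r citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(ranked, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3
print(h_index([]))                # 0
```

For example, `[10, 8, 5, 4, 3]` yields 4 because four papers have at least 4 citations each, but not five papers with at least 5.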
arxiv-674500 | cs/0607067 | Competing with stationary prediction strategies | <|reference_start|>Competing with stationary prediction strategies: In this paper we introduce the class of stationary prediction strategies and construct a prediction algorithm that asymptotically performs as well as the best continuous stationary strategy. We make mild compactness assumptions but no stochastic assumptions about the environment. In particular, no assumption of stationarity is made about the environment, and the stationarity of the considered strategies only means that they do not depend explicitly on time; we argue that it is natural to consider only stationary strategies even for highly non-stationary environments.<|reference_end|> | arxiv | @article{vovk2006competing,
title={Competing with stationary prediction strategies},
author={Vladimir Vovk},
journal={arXiv preprint arXiv:cs/0607067},
year={2006},
archivePrefix={arXiv},
eprint={cs/0607067},
primaryClass={cs.LG}
} | vovk2006competing |