corpus_id (string, 7-12 chars) | paper_id (string, 9-16 chars) | title (string, 1-261 chars) | abstract (string, 70-4.02k chars) | source (1 class) | bibtex (string, 208-20.9k chars) | citation_key (string, 6-100 chars)
---|---|---|---|---|---|---|
arxiv-674801 | cs/0609096 | Finite-State Dimension and Lossy Decompressors | <|reference_start|>Finite-State Dimension and Lossy Decompressors: This paper examines information-theoretic questions regarding the difficulty of compressing data versus the difficulty of decompressing data and the role that information loss plays in this interaction. Finite-state compression and decompression are shown to be of equivalent difficulty, even when the decompressors are allowed to be lossy. Inspired by Kolmogorov complexity, this paper defines the optimal *decompression* ratio achievable on an infinite sequence by finite-state decompressors (that is, finite-state transducers outputting the sequence in question). It is shown that the optimal compression ratio achievable on a sequence S by any *information lossless* finite state compressor, known as the finite-state dimension of S, is equal to the optimal decompression ratio achievable on S by any finite-state decompressor. This result implies a new decompression characterization of finite-state dimension in terms of lossy finite-state transducers.<|reference_end|> | arxiv | @article{doty2006finite-state,
title={Finite-State Dimension and Lossy Decompressors},
author={David Doty and Philippe Moser},
journal={arXiv preprint arXiv:cs/0609096},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609096},
primaryClass={cs.CC cs.IT math.IT}
} | doty2006finite-state |
arxiv-674802 | cs/0609097 | Traveling Salesperson Problems for a double integrator | <|reference_start|>Traveling Salesperson Problems for a double integrator: In this paper we propose some novel path planning strategies for a double integrator with bounded velocity and bounded control inputs. First, we study the following version of the Traveling Salesperson Problem (TSP): given a set of points in $\real^d$, find the fastest tour over the point set for a double integrator. We first give asymptotic bounds on the time taken to complete such a tour in the worst-case. Then, we study a stochastic version of the TSP for a double integrator where the points are randomly sampled from a uniform distribution in a compact environment in $\real^2$ and $\real^3$. We propose novel algorithms that perform within a constant factor of the optimal strategy with high probability. Lastly, we study a dynamic TSP: given a stochastic process that generates targets, is there a policy which guarantees that the number of unvisited targets does not diverge over time? If such stable policies exist, what is the minimum wait for a target? We propose novel stabilizing receding-horizon algorithms whose performances are within a constant factor from the optimum with high probability, in $\real^2$ as well as $\real^3$. We also argue that these algorithms give identical performances for a particular nonholonomic vehicle, Dubins vehicle.<|reference_end|> | arxiv | @article{savla2006traveing,
title={Traveling Salesperson Problems for a double integrator},
author={Ketan Savla, Francesco Bullo and Emilio Frazzoli},
journal={arXiv preprint arXiv:cs/0609097},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609097},
primaryClass={cs.RO}
} | savla2006traveing |
arxiv-674803 | cs/0609098 | Reducing the Makespan in Hierarchical Reliable Multicast Tree | <|reference_start|>Reducing the Makespan in Hierarchical Reliable Multicast Tree: In a hierarchical reliable multicast environment, makespan is the time that is required to fully and successfully transmit a packet from the sender to all receivers. Low makespan is vital for achieving high throughput with a TCP-like window based sending scheme. In hierarchical reliable multicast methods, the number of repair servers and their locations influence the makespan. In this paper we propose a new method to decide the locations of repair servers that can reduce the makespan in hierarchical reliable multicast networks. Our method has a formulation based on mixed integer programming to analyze the makespan minimization problem. A notable aspect of the formulation is that heterogeneous links and packet losses are taken into account. Three different heuristics are presented to find the locations of repair servers in reasonable time. Through simulations, the three heuristics are carefully analyzed and compared on networks with different sizes. We also evaluate our proposals on the PGM (Pragmatic General Multicast) reliable multicast protocol using ns-2 simulation. The results show that our best heuristic is close to the lower bound by a factor of 2.3 in terms of makespan and by a factor of 5.5 in terms of the number of repair servers.<|reference_end|> | arxiv | @article{byun2006reducing,
title={Reducing the Makespan in Hierarchical Reliable Multicast Tree},
author={Sang-Seon Byun, Chuck Yoo},
journal={arXiv preprint arXiv:cs/0609098},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609098},
primaryClass={cs.NI}
} | byun2006reducing |
arxiv-674804 | cs/0609099 | Coding for Parallel Channels: Gallager Bounds and Applications to Repeat-Accumulate Codes | <|reference_start|>Coding for Parallel Channels: Gallager Bounds and Applications to Repeat-Accumulate Codes: This paper is focused on the performance analysis of binary linear block codes (or ensembles) whose transmission takes place over independent and memoryless parallel channels. New upper bounds on the maximum-likelihood (ML) decoding error probability are derived. The framework of the second version of the Duman and Salehi (DS2) bounds is generalized to the case of parallel channels, along with the derivation of optimized tilting measures. The connection between the generalized DS2 and the 1961 Gallager bounds, known previously for a single channel, is revisited for the case of parallel channels. The new bounds are used to obtain improved inner bounds on the attainable channel regions under ML decoding. These improved bounds are applied to ensembles of turbo-like codes, focusing on repeat-accumulate codes and their recent variations.<|reference_end|> | arxiv | @article{sason2006coding,
title={Coding for Parallel Channels: Gallager Bounds and Applications to
Repeat-Accumulate Codes},
author={Igal Sason and Idan Goldenberg},
journal={arXiv preprint arXiv:cs/0609099},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609099},
primaryClass={cs.IT math.IT}
} | sason2006coding |
arxiv-674805 | cs/0609100 | Total Variation Minimization and Graph Cuts for Moving Objects Segmentation | <|reference_start|>Total Variation Minimization and Graph Cuts for Moving Objects Segmentation: In this paper, we are interested in the application to video segmentation of the discrete shape optimization problem involving the shape weighted perimeter and an additional term depending on a parameter. Based on recent works and in particular the one of Darbon and Sigelle, we justify the equivalence of the shape optimization problem and a weighted total variation regularization. For solving this problem, we adapt the projection algorithm proposed recently for solving the basic TV regularization problem. Another solution to the shape optimization investigated here is the graph cut technique. Both methods have the advantage of leading to a global minimum. Since we can distinguish moving objects from static elements of a scene by analyzing the norm of the optical flow vectors, we choose the optical flow norm as initial data. In order to have the contour as close as possible to an edge in the image, we use a classical edge detector function as the weight of the weighted total variation. This model has been used in one of our former works. We also apply the same methods to a video segmentation model used by Jehan-Besson, Barlaud and Aubert. In this case, only the standard perimeter is incorporated in the shape functional. We also propose another way for finding moving objects by using an a contrario detection of objects on the image obtained by solving the Rudin-Osher-Fatemi Total Variation regularization problem. We note that the segmentation can be associated with a level set in the former methods.<|reference_end|> | arxiv | @article{ranchin2006total,
title={Total Variation Minimization and Graph Cuts for Moving Objects
Segmentation},
author={Florent Ranchin (CEREMADE), Antonin Chambolle (CMAP), Fran\c{c}oise
Dibos (LAGA)},
journal={arXiv preprint arXiv:cs/0609100},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609100},
primaryClass={cs.CV}
} | ranchin2006total |
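As background for the row above: the weighted total variation model it references (an edge-weighted variant of the Rudin-Osher-Fatemi functional) is commonly written as follows, with $f$ the observed data (here the optical flow norm) and $g$ an edge-detector weight; this is the standard form, stated for orientation rather than as the paper's exact functional.

$$\min_{u} \int_\Omega g(x)\,|\nabla u(x)|\,dx + \frac{\lambda}{2}\int_\Omega \bigl(u(x) - f(x)\bigr)^2\,dx$$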
arxiv-674806 | cs/0609101 | Can rare SAT formulas be easily recognized? On the efficiency of message passing algorithms for K-SAT at large clause-to-variable ratios | <|reference_start|>Can rare SAT formulas be easily recognized? On the efficiency of message passing algorithms for K-SAT at large clause-to-variable ratios: For large clause-to-variable ratio, typical K-SAT instances drawn from the uniform distribution have no solution. We argue, based on statistical mechanics calculations using the replica and cavity methods, that rare satisfiable instances from the uniform distribution are very similar to typical instances drawn from the so-called planted distribution, where instances are chosen uniformly between the ones that admit a given solution. It then follows, from a recent article by Feige, Mossel and Vilenchik, that these rare instances can be easily recognized (in O(log N) time and with probability close to 1) by a simple message-passing algorithm.<|reference_end|> | arxiv | @article{altarelli2006can,
title={Can rare SAT formulas be easily recognized? On the efficiency of message
passing algorithms for K-SAT at large clause-to-variable ratios},
author={Fabrizio Altarelli, Remi Monasson, Francesco Zamponi},
journal={J. Phys. A: Math. Theor. 40, 867-886 (2007)},
year={2006},
doi={10.1088/1751-8113/40/5/001},
archivePrefix={arXiv},
eprint={cs/0609101},
primaryClass={cs.CC cond-mat.stat-mech}
} | altarelli2006can |
arxiv-674807 | cs/0609102 | Using groups for investigating rewrite systems | <|reference_start|>Using groups for investigating rewrite systems: We describe several technical tools that prove to be efficient for investigating the rewrite systems associated with a family of algebraic laws, and might be useful for more general rewrite systems. These tools consist in introducing a monoid of partial operators, listing the monoid relations expressing the possible local confluence of the rewrite system, then introducing the group presented by these relations, and finally replacing the initial rewrite system with an internal process entirely sitting in the latter group. When the approach can be completed, one typically obtains a practical method for constructing algebras satisfying prescribed laws and for solving the associated word problem.<|reference_end|> | arxiv | @article{dehornoy2006using,
title={Using groups for investigating rewrite systems},
author={Patrick Dehornoy (LMNO)},
journal={arXiv preprint arXiv:cs/0609102},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609102},
primaryClass={cs.LO}
} | dehornoy2006using |
arxiv-674808 | cs/0609103 | Minimum-weight Cycle Covers and Their Approximability | <|reference_start|>Minimum-weight Cycle Covers and Their Approximability: A cycle cover of a graph is a set of cycles such that every vertex is part of exactly one cycle. An L-cycle cover is a cycle cover in which the length of every cycle is in the set L. We investigate how well L-cycle covers of minimum weight can be approximated. For undirected graphs, we devise a polynomial-time approximation algorithm that achieves a constant approximation ratio for all sets L. On the other hand, we prove that the problem cannot be approximated within a factor of 2-eps for certain sets L. For directed graphs, we present a polynomial-time approximation algorithm that achieves an approximation ratio of O(n), where $n$ is the number of vertices. This is asymptotically optimal: We show that the problem cannot be approximated within a factor of o(n). To contrast the results for cycle covers of minimum weight, we show that the problem of computing L-cycle covers of maximum weight can, at least in principle, be approximated arbitrarily well.<|reference_end|> | arxiv | @article{manthey2006minimum-weight,
title={Minimum-weight Cycle Covers and Their Approximability},
author={Bodo Manthey},
journal={arXiv preprint arXiv:cs/0609103},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609103},
primaryClass={cs.DS cs.CC cs.DM}
} | manthey2006minimum-weight |
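To make the L-cycle cover definition in the row above concrete, here is a minimal sketch (a hypothetical helper, not code from the paper) that checks whether a proposed set of cycles is an L-cycle cover of a given vertex set:

```python
def is_L_cycle_cover(vertices, cycles, L):
    """Check that `cycles` (each a list of distinct vertices) partition
    `vertices` and that the length of every cycle lies in the set L."""
    seen = set()
    for cycle in cycles:
        if len(cycle) not in L:   # cycle length must be allowed by L
            return False
        for v in cycle:
            if v in seen:         # every vertex must lie in exactly one cycle
                return False
            seen.add(v)
    return seen == set(vertices)  # every vertex must be covered

# Two triangles cover six vertices; with L = {3} this is an L-cycle cover.
print(is_L_cycle_cover(range(6), [[0, 1, 2], [3, 4, 5]], {3}))  # True
```

(Edge membership in the underlying graph is deliberately not checked; the sketch only illustrates the partition and length conditions.)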
arxiv-674809 | cs/0609104 | On Verifying Complex Properties using Symbolic Shape Analysis | <|reference_start|>On Verifying Complex Properties using Symbolic Shape Analysis: One of the main challenges in the verification of software systems is the analysis of unbounded data structures with dynamic memory allocation, such as linked data structures and arrays. We describe Bohne, a new analysis for verifying data structures. Bohne verifies data structure operations and shows that 1) the operations preserve data structure invariants and 2) the operations satisfy their specifications expressed in terms of changes to the set of objects stored in the data structure. During the analysis, Bohne infers loop invariants in the form of disjunctions of universally quantified Boolean combinations of formulas. To synthesize loop invariants of this form, Bohne uses a combination of decision procedures for Monadic Second-Order Logic over trees, SMT-LIB decision procedures (currently CVC Lite), and an automated reasoner within the Isabelle interactive theorem prover. This architecture shows that synthesized loop invariants can serve as a useful communication mechanism between different decision procedures. Using Bohne, we have verified operations on data structures such as linked lists with iterators and back pointers, trees with and without parent pointers, two-level skip lists, array data structures, and sorted lists. We have deployed Bohne in the Hob and Jahob data structure analysis systems, enabling us to combine Bohne with analyses of data structure clients and apply it in the context of larger programs. This report describes the Bohne algorithm as well as techniques that Bohne uses to reduce the amount of annotations and the running time of the analysis.<|reference_end|> | arxiv | @article{wies2006on,
title={On Verifying Complex Properties using Symbolic Shape Analysis},
author={Thomas Wies, Viktor Kuncak, Karen Zee, Andreas Podelski, Martin Rinard},
journal={arXiv preprint arXiv:cs/0609104},
year={2006},
number={MPI-I-2006-2-001},
archivePrefix={arXiv},
eprint={cs/0609104},
primaryClass={cs.PL cs.LO cs.SE}
} | wies2006on |
arxiv-674810 | cs/0609105 | Binomial multichannel algorithm | <|reference_start|>Binomial multichannel algorithm: The binomial multichannel algorithm is proposed. Some of its properties are discussed.<|reference_end|> | arxiv | @article{lavrenov2006binomial,
title={Binomial multichannel algorithm},
author={A. Lavrenov},
journal={arXiv preprint arXiv:cs/0609105},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609105},
primaryClass={cs.CR}
} | lavrenov2006binomial |
arxiv-674811 | cs/0609106 | Throughput Optimal Distributed Control of Stochastic Wireless Networks | <|reference_start|>Throughput Optimal Distributed Control of Stochastic Wireless Networks: The Maximum Differential Backlog (MDB) control policy of Tassiulas and Ephremides has been shown to adaptively maximize the stable throughput of multi-hop wireless networks with random traffic arrivals and queueing. The practical implementation of the MDB policy in wireless networks with mutually interfering links, however, requires the development of distributed optimization algorithms. Within the context of CDMA-based multi-hop wireless networks, we develop a set of node-based scaled gradient projection power control algorithms which solves the MDB optimization problem in a distributed manner using low communication overhead. As these algorithms require time to converge to a neighborhood of the optimum, the optimal rates determined by the MDB policy can only be found iteratively over time. For this, we show that the iterative MDB policy with convergence time remains throughput optimal.<|reference_end|> | arxiv | @article{xi2006throughput,
title={Throughput Optimal Distributed Control of Stochastic Wireless Networks},
author={Yufang Xi and Edmund M. Yeh},
journal={arXiv preprint arXiv:cs/0609106},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609106},
primaryClass={cs.NI}
} | xi2006throughput |
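For reference, the Maximum Differential Backlog policy of Tassiulas and Ephremides that the row above builds on can be stated in its standard form: each link $(i,j)$ is weighted by its best commodity backlog difference, and the scheduler picks the feasible rate vector maximizing the weighted sum,

$$w_{ij} = \max_{c}\,\bigl(Q_i^{c} - Q_j^{c}\bigr)^{+}, \qquad \mu^{*} = \arg\max_{\mu \in \Lambda}\; \sum_{(i,j)} \mu_{ij}\, w_{ij},$$

where $Q_i^{c}$ is the queue backlog of commodity $c$ at node $i$ and $\Lambda$ is the feasible rate region. (Standard background, not the paper's distributed algorithm itself.)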
arxiv-674812 | cs/0609107 | A multipurpose Hopf deformation of the Algebra of Feynman-like Diagrams | <|reference_start|>A multipurpose Hopf deformation of the Algebra of Feynman-like Diagrams: We construct a three parameter deformation of the Hopf algebra $\mathbf{LDIAG}$. This new algebra is a true Hopf deformation which reduces to $\mathbf{LDIAG}$ on one hand and to $\mathbf{MQSym}$ on the other, relating $\mathbf{LDIAG}$ to other Hopf algebras of interest in contemporary physics. Further, its product law reproduces that of the algebra of polyzeta functions.<|reference_end|> | arxiv | @article{duchamp2006a,
title={A multipurpose Hopf deformation of the Algebra of Feynman-like Diagrams},
author={G\'erard Henry Edmond Duchamp (LIPN), Allan I. Solomon (LPTMC), Pawel
Blasiak (LPTMC), Karol A. Penson (LPTMC), Andrzej Horzela (LPTMC)},
journal={arXiv preprint arXiv:cs/0609107},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609107},
primaryClass={cs.OH math-ph math.MP}
} | duchamp2006a |
arxiv-674813 | cs/0609108 | Generalized Majority-Minority Operations are Tractable | <|reference_start|>Generalized Majority-Minority Operations are Tractable: Generalized majority-minority (GMM) operations are introduced as a common generalization of near unanimity operations and Mal'tsev operations on finite sets. We show that every instance of the constraint satisfaction problem (CSP), where all constraint relations are invariant under a (fixed) GMM operation, is solvable in polynomial time. This constitutes one of the largest tractable cases of the CSP.<|reference_end|> | arxiv | @article{dalmau2006generalized,
title={Generalized Majority-Minority Operations are Tractable},
author={Victor Dalmau},
journal={Logical Methods in Computer Science, Volume 2, Issue 4 (September
28, 2006) lmcs:2237},
year={2006},
doi={10.2168/LMCS-2(4:1)2006},
archivePrefix={arXiv},
eprint={cs/0609108},
primaryClass={cs.CC cs.LO}
} | dalmau2006generalized |
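The two special cases that generalized majority-minority operations unify are standard and worth recalling: a $(k+1)$-ary near-unanimity operation $u$ and a Mal'tsev operation $p$ satisfy, for all $x$ and $y$,

$$u(y,x,\dots,x) = u(x,y,x,\dots,x) = \cdots = u(x,\dots,x,y) = x, \qquad p(y,y,x) = p(x,y,y) = x.$$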
arxiv-674814 | cs/0609109 | The recognizability of sets of graphs is a robust property | <|reference_start|>The recognizability of sets of graphs is a robust property: Once the set of finite graphs is equipped with an algebra structure (arising from the definition of operations that generalize the concatenation of words), one can define the notion of a recognizable set of graphs in terms of finite congruences. Applications to the construction of efficient algorithms and to the theory of context-free sets of graphs follow naturally. The class of recognizable sets depends on the signature of graph operations. We consider three signatures related respectively to Hyperedge Replacement (HR) context-free graph grammars, to Vertex Replacement (VR) context-free graph grammars, and to modular decompositions of graphs. We compare the corresponding classes of recognizable sets. We show that they are robust in the sense that many variants of each signature (where in particular operations are defined by quantifier-free formulas, a quite flexible framework) yield the same notions of recognizability. We prove that for graphs without large complete bipartite subgraphs, HR-recognizability and VR-recognizability coincide. The same combinatorial condition equates HR-context-free and VR-context-free sets of graphs. Inasmuch as possible, results are formulated in the more general framework of relational structures.<|reference_end|> | arxiv | @article{courcelle2006the,
title={The recognizability of sets of graphs is a robust property},
author={Bruno Courcelle (LaBRI), Pascal Weil (LaBRI)},
journal={Theoretical Computer Science 342 (2005) 173-228},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609109},
primaryClass={cs.LO math.LO}
} | courcelle2006the |
arxiv-674815 | cs/0609110 | Algebraic recognizability of languages | <|reference_start|>Algebraic recognizability of languages: Recognizable languages of finite words are part of every computer science cursus, and they are routinely described as a cornerstone for applications and for theory. We would like to briefly explore why that is, and how this word-related notion extends to more complex models, such as those developed for modeling distributed or timed behaviors.<|reference_end|> | arxiv | @article{weil2006algebraic,
title={Algebraic recognizability of languages},
author={Pascal Weil (LaBRI)},
journal={Mathematical Foundations of Computer Science 2004, Czech Republic
(2004) 149-175},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609110},
primaryClass={cs.LO}
} | weil2006algebraic |
arxiv-674816 | cs/0609111 | A State-Based Regression Formulation for Domains with Sensing Actions and Incomplete Information | <|reference_start|>A State-Based Regression Formulation for Domains with Sensing Actions and Incomplete Information: We present a state-based regression function for planning domains where an agent does not have complete information and may have sensing actions. We consider binary domains and employ a three-valued characterization of domains with sensing actions to define the regression function. We prove the soundness and completeness of our regression formulation with respect to the definition of progression. More specifically, we show that (i) a plan obtained through regression for a planning problem is indeed a progression solution of that planning problem, and that (ii) for each plan found through progression, using regression one obtains that plan or an equivalent one.<|reference_end|> | arxiv | @article{tuan2006a,
title={A State-Based Regression Formulation for Domains with Sensing
Actions and Incomplete Information},
author={Le-Chi Tuan, Chitta Baral, Tran Cao Son},
journal={Logical Methods in Computer Science, Volume 2, Issue 4 (October 2,
2006) lmcs:2238},
year={2006},
doi={10.2168/LMCS-2(4:2)2006},
archivePrefix={arXiv},
eprint={cs/0609111},
primaryClass={cs.AI}
} | tuan2006a |
arxiv-674817 | cs/0609112 | A Richer Understanding of the Complexity of Election Systems | <|reference_start|>A Richer Understanding of the Complexity of Election Systems: We provide an overview of some recent progress on the complexity of election systems. The issues studied include the complexity of the winner, manipulation, bribery, and control problems.<|reference_end|> | arxiv | @article{faliszewski2006a,
title={A Richer Understanding of the Complexity of Election Systems},
author={Piotr Faliszewski, Edith Hemaspaandra, Lane A. Hemaspaandra, Joerg
Rothe},
journal={arXiv preprint arXiv:cs/0609112},
year={2006},
number={URCS TR-2006-903},
archivePrefix={arXiv},
eprint={cs/0609112},
primaryClass={cs.GT cs.CC cs.MA}
} | faliszewski2006a |
arxiv-674818 | cs/0609113 | Algebraic recognizability of regular tree languages | <|reference_start|>Algebraic recognizability of regular tree languages: We propose a new algebraic framework to discuss and classify recognizable tree languages, and to characterize interesting classes of such languages. Our algebraic tool, called preclones, encompasses the classical notion of syntactic Sigma-algebra or minimal tree automaton, but adds new expressivity to it. The main result in this paper is a variety theorem \`{a} la Eilenberg, but we also discuss important examples of logically defined classes of recognizable tree languages, whose characterization and decidability was established in recent papers (by Benedikt and S\'{e}goufin, and by Bojanczyk and Walukiewicz) and can be naturally formulated in terms of pseudovarieties of preclones. Finally, this paper constitutes the foundation for another paper by the same authors, where first-order definable tree languages receive an algebraic characterization.<|reference_end|> | arxiv | @article{esik2006algebraic,
title={Algebraic recognizability of regular tree languages},
author={Zoltan Esik, Pascal Weil (LaBRI)},
journal={Theoretical Computer Science 340 (2005) 291-321},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609113},
primaryClass={cs.DM}
} | esik2006algebraic |
arxiv-674819 | cs/0609114 | A VFRoe scheme for 1D shallow water flows : wetting and drying simulation | <|reference_start|>A VFRoe scheme for 1D shallow water flows : wetting and drying simulation: A finite-volume method for the one-dimensional shallow-water equations including topographic source terms is presented. Exploiting an original idea by Leroux, the system of partial-differential equations is completed by a trivial equation for the bathymetry. By applying a change of variable, the system is given a celerity-speed formulation, and linearized. As a result, an approximate Riemann solver preserving the positivity of the celerity can be constructed, permitting wetting and drying flow simulations to be performed. Finally, the simulation of numerical test cases is presented.<|reference_end|> | arxiv | @article{bello2006a,
title={A VFRoe scheme for 1D shallow water flows : wetting and drying
simulation},
author={Abdou Wahidi Bello (INRIA Sophia Antipolis / INRIA Rh\^one-Alpes)},
journal={arXiv preprint arXiv:cs/0609114},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609114},
primaryClass={cs.NA}
} | bello2006a |
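For orientation, the one-dimensional shallow-water (Saint-Venant) system with a topographic source term that the row above discretizes is commonly written as

$$\partial_t h + \partial_x (hu) = 0, \qquad \partial_t (hu) + \partial_x\!\left(hu^2 + \tfrac{1}{2}\,g\,h^2\right) = -\,g\,h\,\partial_x Z,$$

where $h$ is the water height, $u$ the velocity, $g$ the gravity constant and $Z$ the bathymetry; the trivial equation mentioned in the abstract is then $\partial_t Z = 0$. (Standard form, stated as background.)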
arxiv-674820 | cs/0609115 | Measuring Fundamental Properties of Real-World Complex Networks | <|reference_start|>Measuring Fundamental Properties of Real-World Complex Networks: Complex networks, modeled as large graphs, received much attention during these last years. However, data on such networks is only available through intricate measurement procedures. Until recently, most studies assumed that these procedures eventually lead to samples large enough to be representative of the whole, at least concerning some key properties. This has crucial impact on network modeling and simulation, which rely on these properties. Recent contributions proved that this approach may be misleading, but no solution has been proposed. We provide here the first practical way to distinguish between cases where it is indeed misleading, and cases where the observed properties may be trusted. It consists in studying how the properties of interest evolve when the sample grows, and in particular whether they reach a steady state or not. In order to illustrate this method and to demonstrate its relevance, we apply it to data-sets on complex network measurements that are representative of the ones commonly used. The obtained results show that the method fulfills its goals very well. We moreover identify some properties which seem easier to evaluate in practice, thus opening interesting perspectives.<|reference_end|> | arxiv | @article{latapy2006measuring,
title={Measuring Fundamental Properties of Real-World Complex Networks},
author={Matthieu Latapy, Clemence Magnien (LIP6 - CNRS and UPMC, France)},
journal={arXiv preprint arXiv:cs/0609115},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609115},
primaryClass={cs.NI cond-mat.stat-mech cs.DS}
} | latapy2006measuring |
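A minimal sketch of the methodology in the row above: evaluate a property on growing prefixes of a measured sample and check whether the curve flattens. The growth schedule, the property and the toy data below are placeholders, not the paper's datasets.

```python
def property_vs_sample_size(edges, prop, steps=10):
    """Evaluate `prop` on growing prefixes of `edges`, mimicking a
    measurement procedure that discovers links over time."""
    return [prop(edges[: k * len(edges) // steps]) for k in range(1, steps + 1)]

def avg_degree(edge_list):
    nodes = {v for e in edge_list for v in e}
    return 2 * len(edge_list) / max(len(nodes), 1)

# If the curve reaches a steady state, the observed property may be trusted.
edges = [(i, (i * i + 1) % 500) for i in range(2000)]  # toy "measured" links
print(property_vs_sample_size(edges, avg_degree))
```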
arxiv-674821 | cs/0609116 | Theory and Practice of Triangle Problems in Very Large (Sparse (Power-Law)) Graphs | <|reference_start|>Theory and Practice of Triangle Problems in Very Large (Sparse (Power-Law)) Graphs: Finding, counting and/or listing triangles (three vertices with three edges) in large graphs are natural fundamental problems, which recently received much attention because of their importance in complex network analysis. We provide here a detailed state of the art on these problems, in a unified way. We note that, until now, authors paid surprisingly little attention to space complexity, despite its fundamental and practical interest. We give the space complexities of known algorithms and discuss their implications. Then we propose improvements of a known algorithm, as well as a new algorithm, which are time optimal for triangle listing and beat previous algorithms concerning space complexity. They have the additional advantage of performing better on power-law graphs, which we also study. We finally show with an experimental study that these two algorithms perform very well in practice, allowing us to handle cases that were previously out of reach.<|reference_end|> | arxiv | @article{latapy2006theory,
title={Theory and Practice of Triangle Problems in Very Large (Sparse
(Power-Law)) Graphs},
author={Matthieu Latapy (LIAFA - CNRS, Universite Paris 7)},
journal={arXiv preprint arXiv:cs/0609116},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609116},
primaryClass={cs.DS cond-mat.stat-mech cs.NI}
} | latapy2006theory |
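The flavor of algorithm surveyed in the row above can be illustrated with the classic "forward"-style triangle listing: rank vertices by degree, orient each edge toward the higher rank, and intersect out-neighborhoods. This sketch follows the standard technique, not necessarily the exact variant the paper proposes.

```python
from collections import defaultdict

def list_triangles(edges):
    """List each triangle exactly once: orient edges from lower to higher
    (degree, id) rank, then intersect out-neighborhoods per oriented edge."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    rank = {v: r for r, v in enumerate(sorted(adj, key=lambda v: (len(adj[v]), v)))}
    out = {v: {w for w in adj[v] if rank[w] > rank[v]} for v in adj}
    return [(u, v, w) for u in adj for v in out[u] for w in out[u] & out[v]]

print(list_triangles([(0, 1), (1, 2), (0, 2), (2, 3)]))  # [(0, 1, 2)]
```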
arxiv-674822 | cs/0609117 | Constructing LDPC Codes by 2-Lifts | <|reference_start|>Constructing LDPC Codes by 2-Lifts: We propose a new low-density parity-check code construction scheme based on 2-lifts. The proposed codes have an advantage of admitting efficient hardware implementations. With the motivation of designing codes with low error floors, we present an analysis of the low-weight stopping set distributions of the proposed codes. Based on this analysis, we propose design criteria for designing codes with low error floors. Numerical results show that the resulting codes have low error probabilities over binary erasure channels.<|reference_end|> | arxiv | @article{ma2006constructing,
title={Constructing LDPC Codes by 2-Lifts},
author={Xudong Ma and En-hui Yang},
journal={Proceeding of IEEE International Symposium on Information Theory
(ISIT) 2007},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609117},
primaryClass={cs.IT math.IT}
} | ma2006constructing |
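The 2-lift construction underlying the row above replaces each 1-entry of a base parity-check matrix by either a 2x2 identity block or a 2x2 swap block (and each 0 by a zero block); the design question studied in the paper is which choices yield good stopping-set distributions. Below is an illustrative sketch with random lift choices, not the paper's optimized design rule.

```python
import random

def two_lift(H, seed=0):
    """Return a 2-lift of the binary parity-check matrix H (list of rows):
    every 1 becomes an identity or swap 2x2 block, every 0 a zero block."""
    rng = random.Random(seed)
    m, n = len(H), len(H[0])
    lifted = [[0] * (2 * n) for _ in range(2 * m)]
    for i in range(m):
        for j in range(n):
            if H[i][j]:
                s = rng.randint(0, 1)        # 0 -> identity, 1 -> swap
                lifted[2 * i + s][2 * j] = 1
                lifted[2 * i + 1 - s][2 * j + 1] = 1
    return lifted

for row in two_lift([[1, 1, 0], [0, 1, 1]]):  # toy base matrix
    print(row)
```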
arxiv-674823 | cs/0609118 | Duality of Fix-Points for Distributive Lattices | <|reference_start|>Duality of Fix-Points for Distributive Lattices: We present a novel algorithm for calculating fix-points. The algorithm calculates fix-points of an endo-function f on a distributive lattice, by performing reachability computation on a graph derived from the dual of f; this is in comparison to traditional algorithms that are based on iterated application of f until a fix-point is reached.<|reference_end|> | arxiv | @article{sampath2006duality,
title={Duality of Fix-Points for Distributive Lattices},
author={Prahladavaradan Sampath},
journal={arXiv preprint arXiv:cs/0609118},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609118},
primaryClass={cs.DS cs.DM}
} | sampath2006duality |
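For contrast with the duality-based algorithm in the row above, the "traditional" approach the abstract mentions is plain Kleene iteration: apply f from the bottom element until the value stabilizes. A minimal sketch on a powerset lattice (a simple distributive lattice):

```python
def least_fixpoint(f, bottom=frozenset()):
    """Iterate a monotone endo-function f from `bottom` until x == f(x)."""
    x = bottom
    while (y := f(x)) != x:
        x = y
    return x

# Monotone f on subsets of {0,1,2,3}: seed with 0, close under successor mod 4.
f = lambda s: s | {0} | {(v + 1) % 4 for v in s}
print(least_fixpoint(f))  # frozenset({0, 1, 2, 3})
```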
arxiv-674824 | cs/0609119 | Verification, Validation and Integrity of Distributed and Interchanged Rule Based Policies and Contracts in the Semantic Web | <|reference_start|>Verification, Validation and Integrity of Distributed and Interchanged Rule Based Policies and Contracts in the Semantic Web: Rule-based policy and contract systems have rarely been studied in terms of their software engineering properties. This is a serious omission, because in rule-based policy or contract representation languages rules are being used as a declarative programming language to formalize real-world decision logic and create IS production systems upon. This paper adopts an SE methodology from extreme programming, namely test driven development, and discusses how it can be adapted to verification, validation and integrity testing (V&V&I) of policy and contract specifications. Since the test-driven approach focuses on the behavioral aspects and the drawn conclusions instead of the structure of the rule base and the causes of faults, it is independent of the complexity of the rule language and the system under test and thus much easier to use and understand for the rule engineer and the user.<|reference_end|> | arxiv | @article{paschke2006verification,
title={Verification, Validation and Integrity of Distributed and Interchanged
Rule Based Policies and Contracts in the Semantic Web},
author={Adrian Paschke},
journal={A.Paschke: Verification, Validation, Integrity of Rule Based
Policies and Contracts in the Semantic Web, 2nd International Semantic Web
Policy Workshop (SWPW'06), Nov. 5-9, 2006, Athens, GA, USA},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609119},
primaryClass={cs.AI cs.SE}
} | paschke2006verification
arxiv-674825 | cs/0609120 | Rule-based Knowledge Representation for Service Level Agreement | <|reference_start|>Rule-based Knowledge Representation for Service Level Agreement: Automated management and monitoring of service contracts like Service Level Agreements (SLAs) or higher-level policies is vital for efficient and reliable distributed service-oriented architectures (SOA) with high quality of service (QoS) levels. IT service providers need to manage, execute and maintain thousands of SLAs for different customers and different types of services, which needs new levels of flexibility and automation not available with the current technology. I propose a novel rule-based knowledge representation (KR) for SLA rules and a respective rule-based service level management (RBSLM) framework. My rule-based approach based on logic programming provides several advantages including automated rule chaining allowing for compact knowledge representation and high levels of automation as well as flexibility to adapt to rapidly changing business requirements. Therewith, I address an urgent need that service-oriented businesses have nowadays, which is to dynamically change their business and contractual logic in order to adapt to rapidly changing business environments and to overcome the restricting nature of slow change cycles.<|reference_end|> | arxiv | @article{paschke2006rule-based,
title={Rule-based Knowledge Representation for Service Level Agreement},
author={Adrian Paschke},
journal={arXiv preprint arXiv:cs/0609120},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609120},
primaryClass={cs.AI cs.DB cs.LO cs.MA cs.SE}
} | paschke2006rule-based |
arxiv-674826 | cs/0609121 | Approximating Rate-Distortion Graphs of Individual Data: Experiments in Lossy Compression and Denoising | <|reference_start|>Approximating Rate-Distortion Graphs of Individual Data: Experiments in Lossy Compression and Denoising: Classical rate-distortion theory requires knowledge of an elusive source distribution. Instead, we analyze rate-distortion properties of individual objects using the recently developed algorithmic rate-distortion theory. The latter is based on the noncomputable notion of Kolmogorov complexity. To apply the theory we approximate the Kolmogorov complexity by standard data compression techniques, and perform a number of experiments with lossy compression and denoising of objects from different domains. We also introduce a natural generalization to lossy compression with side information. To maintain full generality we need to address a difficult searching problem. While our solutions are therefore not time efficient, we do observe good denoising and compression performance.<|reference_end|> | arxiv | @article{de rooij2006approximating,
title={Approximating Rate-Distortion Graphs of Individual Data: Experiments in
Lossy Compression and Denoising},
author={Steven de Rooij and Paul Vitanyi},
journal={arXiv preprint arXiv:cs/0609121},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609121},
primaryClass={cs.IT math.IT}
} | de rooij2006approximating |
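The practical recipe of the row above — replace the uncomputable Kolmogorov complexity by a real compressor and scan candidate representations — can be sketched for scalar quantization of a byte string, with zlib standing in for the ideal compressor. The one-parameter scan below is a toy stand-in for the paper's much harder general search.

```python
import zlib

def rate_distortion_points(data, step_sizes):
    """For each quantizer step q, report the compressed size (an upper
    approximation of complexity) and the mean squared error."""
    points = []
    for q in step_sizes:
        quantized = bytes((b // q) * q for b in data)
        rate = 8 * len(zlib.compress(quantized, 9))            # bits
        mse = sum((a - b) ** 2 for a, b in zip(data, quantized)) / len(data)
        points.append((q, rate, mse))
    return points

data = bytes((i * 37) % 256 for i in range(4096))  # toy "signal"
for q, rate, mse in rate_distortion_points(data, [1, 4, 16, 64]):
    print(f"step={q:3d}  rate={rate:6d} bits  mse={mse:8.2f}")
```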
arxiv-674827 | cs/0609122 | Multi-Antenna Cooperative Wireless Systems: A Diversity-Multiplexing Tradeoff Perspective | <|reference_start|>Multi-Antenna Cooperative Wireless Systems: A Diversity-Multiplexing Tradeoff Perspective: We consider a general multiple antenna network with multiple sources, multiple destinations and multiple relays in terms of the diversity-multiplexing tradeoff (DMT). We examine several subcases of this most general problem taking into account the processing capability of the relays (half-duplex or full-duplex), and the network geometry (clustered or non-clustered). We first study the multiple antenna relay channel with a full-duplex relay to understand the effect of increased degrees of freedom in the direct link. We find DMT upper bounds and investigate the achievable performance of decode-and-forward (DF), and compress-and-forward (CF) protocols. Our results suggest that while DF is DMT optimal when all terminals have one antenna each, it may not maintain its good performance when the degrees of freedom in the direct link is increased, whereas CF continues to perform optimally. We also study the multiple antenna relay channel with a half-duplex relay. We show that the half-duplex DMT behavior can significantly be different from the full-duplex case. We find that CF is DMT optimal for half-duplex relaying as well, and is the first protocol known to achieve the half-duplex relay DMT. We next study the multiple-access relay channel (MARC) DMT. Finally, we investigate a system with a single source-destination pair and multiple relays, each node with a single antenna, and show that even under the idealistic assumption of full-duplex relays and a clustered network, this virtual multi-input multi-output (MIMO) system can never fully mimic a real MIMO DMT. For cooperative systems with multiple sources and multiple destinations the same limitation remains to be in effect.<|reference_end|> | arxiv | @article{yuksel2006multi-antenna,
title={Multi-Antenna Cooperative Wireless Systems: A Diversity-Multiplexing
Tradeoff Perspective},
author={Melda Yuksel, Elza Erkip},
journal={arXiv preprint arXiv:cs/0609122},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609122},
primaryClass={cs.IT math.IT}
} | yuksel2006multi-antenna |
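For reference, the diversity-multiplexing tradeoff organizing the results in the row above is defined in the standard (Zheng-Tse) way: a family of schemes with rate $R(\mathrm{SNR}) = r \log \mathrm{SNR}$ achieves multiplexing gain $r$ and diversity gain

$$d(r) = -\lim_{\mathrm{SNR} \to \infty} \frac{\log P_e(\mathrm{SNR})}{\log \mathrm{SNR}},$$

where $P_e$ denotes the error probability.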
arxiv-674828 | cs/0609123 | Optimal Design of Multiple Description Lattice Vector Quantizers | <|reference_start|>Optimal Design of Multiple Description Lattice Vector Quantizers: In the design of multiple description lattice vector quantizers (MDLVQ), index assignment plays a critical role. In addition, one also needs to choose the Voronoi cell size of the central lattice v, the sublattice index N, and the number of side descriptions K to minimize the expected MDLVQ distortion, given the total entropy rate of all side descriptions Rt and description loss probability p. In this paper we propose a linear-time MDLVQ index assignment algorithm for any K >= 2 balanced descriptions in any dimensions, based on a new construction of so-called K-fraction lattice. The algorithm is greedy in nature but is proven to be asymptotically (N -> infinity) optimal for any K >= 2 balanced descriptions in any dimensions, given Rt and p. The result is stronger when K = 2: the optimality holds for finite N as well, under some mild conditions. For K > 2, a local adjustment algorithm is developed to augment the greedy index assignment, and conjectured to be optimal for finite N. Our algorithmic study also leads to better understanding of v, N and K in optimal MDLVQ design. For K = 2 we derive, for the first time, a non-asymptotical closed form expression of the expected distortion of optimal MDLVQ in p, Rt, N. For K > 2, we tighten the current asymptotic formula of the expected distortion, relating the optimal values of N and K to p and Rt more precisely.<|reference_end|> | arxiv | @article{huang2006optimal,
title={Optimal Design of Multiple Description Lattice Vector Quantizers},
author={Xiang Huang, Xiaolin Wu},
journal={arXiv preprint arXiv:cs/0609123},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609123},
primaryClass={cs.IT math.IT}
} | huang2006optimal |
arxiv-674829 | cs/0609124 | The Three Gap Theorem (Steinhaus Conjecture) | <|reference_start|>The Three Gap Theorem (Steinhaus Conjecture): We deal with the distribution of N points placed consecutively around the circle by a fixed angle of a. Following the proof of Tony van Ravenstein, we propose a detailed proof of the Steinhaus conjecture whose result is the following: the N points partition the circle into gaps of at most three different lengths. We study the mathematical notions required for the proof of this theorem, as revealed during a formal proof carried out in Coq.<|reference_end|> | arxiv | @article{mayero2006the,
title={The Three Gap Theorem (Steinhaus Conjecture)},
author={Micaela Mayero (INRIA Futurs)},
journal={Types for Proofs and Programs: International Workshop, TYPES'99,
L\"{o}keberg, Sweden, June 1999. Selected Papers (2000) 162},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609124},
primaryClass={cs.LO}
} | mayero2006the |
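The statement in the row above is easy to check numerically: place N points at multiples of a fixed angle around the unit circle and count the distinct gap lengths. A small illustrative sketch (rounding merges lengths equal up to floating-point noise):

```python
import math

def gap_lengths(n, alpha):
    """Distinct gaps (up to rounding) between the n points i*alpha mod 1."""
    points = sorted((i * alpha) % 1.0 for i in range(n))
    gaps = [b - a for a, b in zip(points, points[1:])]
    gaps.append(1.0 - points[-1] + points[0])     # wrap-around gap
    return sorted({round(g, 9) for g in gaps})

alpha = (math.sqrt(5) - 1) / 2                    # golden-ratio rotation
for n in (5, 12, 50):
    print(n, gap_lengths(n, alpha))               # never more than 3 lengths
```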
arxiv-674830 | cs/0609125 | Problem Evolution: A new approach to problem solving systems | <|reference_start|>Problem Evolution: A new approach to problem solving systems: In this paper we present a novel tool to evaluate problem solving systems. Instead of using a system to solve a problem, we suggest using the problem to evaluate the system. By finding a numerical representation of a problem's complexity, one can implement a genetic algorithm to search for the most complex problem the given system can solve. This allows a comparison between different systems that solve the same set of problems. In this paper we implement this approach on pattern recognition neural networks to try and find the most complex pattern a given configuration can solve. The complexity of the pattern is calculated using linguistic complexity. The results demonstrate the power of the problem evolution approach in ranking different neural network configurations according to their pattern recognition abilities. Future research and implementations of this technique are also discussed.<|reference_end|> | arxiv | @article{gordon2006problem,
title={Problem Evolution: A new approach to problem solving systems},
author={Goren Gordon and Uri Einziger-Lowicz},
journal={arXiv preprint arXiv:cs/0609125},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609125},
primaryClass={cs.NE}
} | gordon2006problem |
arxiv-674831 | cs/0609126 | E-prints and Journal Articles in Astronomy: a Productive Co-existence | <|reference_start|>E-prints and Journal Articles in Astronomy: a Productive Co-existence: Are the e-prints (electronic preprints) from the arXiv repository being used instead of the journal articles? In this paper we show that the e-prints have not undermined the usage of journal papers in the astrophysics community. As soon as the journal article is published, the astronomical community prefers to read the journal article and the use of e-prints through the NASA Astrophysics Data System drops to zero. This suggests that the majority of astronomers have access to institutional subscriptions and that they choose to read the journal article when given the choice. Within the NASA Astrophysics Data System they are given this choice, because the e-print and the journal article are treated equally, since both are just one click away. In other words, the e-prints have not undermined journal use in the astrophysics community and thus currently do not pose a financial threat to the publishers. We present readership data for the arXiv category "astro-ph" and the 4 core journals in astronomy (Astrophysical Journal, Astronomical Journal, Monthly Notices of the Royal Astronomical Society and Astronomy & Astrophysics). Furthermore, we show that the half-life (the point where the use of an article drops to half the use of a newly published article) for an e-print is shorter than for a journal paper. The ADS is funded by NASA Grant NNG06GG68G. arXiv receives funding from NSF award #0404553<|reference_end|> | arxiv | @article{henneken2006e-prints,
title={E-prints and Journal Articles in Astronomy: a Productive Co-existence},
author={Edwin A. Henneken, Michael J. Kurtz, Simeon Warner, Paul Ginsparg,
Guenther Eichhorn, Alberto Accomazzi, Carolyn S. Grant, Donna Thompson,
Elizabeth Bohlen, Stephen S. Murray},
journal={Learn.Publ.20:16-22,2007},
year={2006},
doi={10.1087/095315107779490661},
archivePrefix={arXiv},
eprint={cs/0609126},
primaryClass={cs.DL astro-ph}
} | henneken2006e-prints |
arxiv-674832 | cs/0609127 | On Bus Graph Realizability | <|reference_start|>On Bus Graph Realizability: In this paper, we consider the following graph embedding problem: Given a bipartite graph $G = (V_1, V_2, E)$, where the maximum degree of vertices in $V_2$ is 4, can $G$ be embedded on a two dimensional grid such that each vertex in $V_1$ is drawn as a line segment along a grid line, each vertex in $V_2$ is drawn as a point at a grid point, and each edge $e = (u, v)$ for some $u \in V_1$ and $v \in V_2$ is drawn as a line segment connecting $u$ and $v$, perpendicular to the line segment for $u$? We show that this problem is NP-complete, and sketch how our proof techniques can be used to show the hardness of several other related problems.<|reference_end|> | arxiv | @article{ada2006on,
title={On Bus Graph Realizability},
author={Anil Ada, Melanie Coggan, Paul Di Marco, Alain Doyon, Liam Flookes,
Samuli Heilala, Ethan Kim, Jonathan Li On Wing, Louis-Francois
Preville-Ratelle, Sue Whitesides, and Nuo Yu},
journal={arXiv preprint arXiv:cs/0609127},
year={2006},
number={SOCS-TR-2006.1},
archivePrefix={arXiv},
eprint={cs/0609127},
primaryClass={cs.CG cs.DM}
} | ada2006on |
arxiv-674833 | cs/0609128 | Max-Cut and Max-Bisection are NP-hard on unit disk graphs | <|reference_start|>Max-Cut and Max-Bisection are NP-hard on unit disk graphs: We prove that the Max-Cut and Max-Bisection problems are NP-hard on unit disk graphs. We also show that $\lambda$-precision graphs are planar for $\lambda > 1/\sqrt{2}$.<|reference_end|> | arxiv | @article{diaz2006max-cut,
title={Max-Cut and Max-Bisection are NP-hard on unit disk graphs},
author={Josep Diaz and Marcin Kaminski},
journal={arXiv preprint arXiv:cs/0609128},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609128},
primaryClass={cs.DS cs.CC}
} | diaz2006max-cut |
arxiv-674834 | cs/0609129 | One approach to the digital visualization of hedgehogs in holomorphic dynamics | <|reference_start|>One approach to the digital visualization of hedgehogs in holomorphic dynamics: In the field of holomorphic dynamics in one complex variable, a hedgehog is the local invariant set arising about a Cremer point, endowed with a very complicated shape and related to very weak numerical conditions. We give a solution to the open problem of its digital visualization, featuring both a time-saving approach and a far-reaching insight.<|reference_end|> | arxiv | @article{rosa2006one,
title={One approach to the digital visualization of hedgehogs in holomorphic
dynamics},
author={Alessandro Rosa},
journal={Electronic Journal of Differential Equations and Control
Processes, n.1, 2007},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609129},
primaryClass={cs.MS math.DS}
} | rosa2006one |
arxiv-674835 | cs/0609130 | A Predicative Harmonization of the Time and Provable Hierarchies | <|reference_start|>A Predicative Harmonization of the Time and Provable Hierarchies: A decidable transfinite hierarchy is defined by assigning ordinals to the programs of an imperative language. It singles out: the classes TIMEF(n^c) and TIMEF(n_c); the finite Grzegorczyk classes at and above the elementary level, and the \Sigma_k-IND fragments of PA. Limited operators, diagonalization, and majorization functions are not used.<|reference_end|> | arxiv | @article{caporaso2006a,
title={A Predicative Harmonization of the Time and Provable Hierarchies},
author={Salvatore Caporaso},
journal={arXiv preprint arXiv:cs/0609130},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609130},
primaryClass={cs.LO cs.CC}
} | caporaso2006a |
arxiv-674836 | cs/0609131 | A Fast Block Matching Algorithm for Video Motion Estimation Based on Particle Swarm Optimization and Motion Prejudgment | <|reference_start|>A Fast Block Matching Algorithm for Video Motion Estimation Based on Particle Swarm Optimization and Motion Prejudgment: In this paper, we propose a fast 2-D block-based motion estimation algorithm called Particle Swarm Optimization - Zero-motion Prejudgment (PSO-ZMP), which consists of three sequential routines: 1) zero-motion prejudgment, which identifies static macroblocks (MBs) that need no further search, reducing the computational cost; 2) predictive image coding; and 3) a PSO matching routine. Simulation results show that the proposed PSO-ZMP algorithm requires over 10 times less computation than Diamond Search (DS) and 5 times less than the recently proposed Adaptive Rood Pattern Search (ARPS). Meanwhile, the PSNR achieved by PSO-ZMP is very close to that of DS and ARPS on some low-motion sequences. On sequences containing dense and complex motion content, the PSNR of PSO-ZMP is several dB lower than that of DS and ARPS, but to an acceptable degree.<|reference_end|> | arxiv | @article{ren2006a,
title={A Fast Block Matching Algorithm for Video Motion Estimation Based on
Particle Swarm Optimization and Motion Prejudgment},
author={Ran Ren, Madan mohan Manokar, Yaogang Shi, Baoyu Zheng},
journal={arXiv preprint arXiv:cs/0609131},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609131},
primaryClass={cs.MM}
} | ren2006a |
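The zero-motion prejudgment routine described in the row above is essentially a SAD test against the co-located block: if the no-motion cost is already small, the macroblock is declared static and the PSO search is skipped. An illustrative sketch; the block size and threshold below are placeholder values.

```python
def is_static_block(cur, ref, x, y, block=16, threshold=512):
    """Sum of absolute differences between the current macroblock and the
    co-located block of the reference frame; a small SAD means zero motion."""
    sad = 0
    for i in range(block):
        for j in range(block):
            sad += abs(cur[y + i][x + j] - ref[y + i][x + j])
            if sad >= threshold:     # early exit: block cannot be static
                return False
    return True

# Blocks passing the test skip motion search; only the rest go to PSO.
cur = [[10] * 32 for _ in range(32)]
ref = [[10] * 32 for _ in range(32)]
print(is_static_block(cur, ref, 0, 0))  # True: identical content
```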
arxiv-674837 | cs/0609132 | Semantic Description of Parameters in Web Service Annotations | <|reference_start|>Semantic Description of Parameters in Web Service Annotations: A modification of OWL-S regarding parameter description is proposed. It is strictly based on Description Logic. In addition to class description of parameters it also allows the modelling of relations between parameters and the precise description of the size of data to be supplied to a service. In particular, it solves two major issues identified within current proposals for a Semantic Web Service annotation standard.<|reference_end|> | arxiv | @article{gruber2006semantic,
title={Semantic Description of Parameters in Web Service Annotations},
author={Jochen Gruber},
journal={arXiv preprint arXiv:cs/0609132},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609132},
primaryClass={cs.AI}
} | gruber2006semantic |
arxiv-674838 | cs/0609133 | An application-oriented terminology evaluation: the case of back-of-the book indexes | <|reference_start|>An application-oriented terminology evaluation: the case of back-of-the book indexes: This paper addresses the problem of computational terminology evaluation not per se but in a specific application context. This paper describes the evaluation procedure that has been used to assess the validity of our overall indexing approach and the quality of the IndDoc indexing tool. Even if user-oriented extended evaluation is irreplaceable, we argue that early evaluations are possible and they are useful for development guidance.<|reference_end|> | arxiv | @article{mekki2006an,
title={An application-oriented terminology evaluation: the case of back-of-the
book indexes},
author={Touria A\"it El Mekki (LERIA), Adeline Nazarenko (LIPN)},
journal={Workshop on Terminology design: quality criteria and evaluation
methods (TermEval), Italy (2006) 18-21},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609133},
primaryClass={cs.AI cs.IR}
} | mekki2006an |
arxiv-674839 | cs/0609134 | Using NLP to build the hypertextuel network of a back-of-the-book index | <|reference_start|>Using NLP to build the hypertextuel network of a back-of-the-book index: Relying on the idea that back-of-the-book indexes are traditional devices for navigation through large documents, we have developed a method to build a hypertextual network that helps the navigation in a document. Building such an hypertextual network requires selecting a list of descriptors, identifying the relevant text segments to associate with each descriptor and finally ranking the descriptors and reference segments by relevance order. We propose a specific document segmentation method and a relevance measure for information ranking. The algorithms are tested on 4 corpora (of different types and domains) without human intervention or any semantic knowledge.<|reference_end|> | arxiv | @article{mekki2006using,
title={Using NLP to build the hypertextuel network of a back-of-the-book index},
author={Touria A\"it El Mekki (LIPN), Adeline Nazarenko (LERIA)},
journal={Proceedings of the International Conference on Recent Advances in
Natural Language Processing (RANLP) (2005) 316-320},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609134},
primaryClass={cs.AI cs.IR}
} | mekki2006using |
arxiv-674840 | cs/0609135 | Event-based Information Extraction for the biomedical domain: the Caderige project | <|reference_start|>Event-based Information Extraction for the biomedical domain: the Caderige project: This paper gives an overview of the Caderige project. This project involves teams from different areas (biology, machine learning, natural language processing) in order to develop high-level analysis tools for extracting structured information from biological bibliographical databases, especially Medline. The paper gives an overview of the approach and compares it to the state of the art.<|reference_end|> | arxiv | @article{alphonse2006event-based,
title={Event-based Information Extraction for the biomedical domain: the
Caderige project},
author={Erick Alphonse (MIG), Sophie Aubin (LIPN), Philippe Bessi\`eres (MIG),
Gilles Bisson (Leibniz - IMAG), Thierry Hamon (LIPN), Sandrine Lagarrigue
(INRA-ENSAR), Adeline Nazarenko (LIPN), Alain-Pierre Manine (MIG), Claire
N\'edellec (MIG), Mohamed Ould Abdel Vetah (MIG), Thierry Poibeau (LIPN),
Davy Weissenbacher (LIPN)},
journal={Proceedings of the International Joint Workshop on Natural
Language Processing in Biomedicine and Its Applications (COLING'04), Switzerland
(2004) 43-39},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609135},
primaryClass={cs.AI cs.IR}
} | alphonse2006event-based |
arxiv-674841 | cs/0609136 | The ALVIS Format for Linguistically Annotated Documents | <|reference_start|>The ALVIS Format for Linguistically Annotated Documents: The paper describes the ALVIS annotation format designed for the indexing of large collections of documents in topic-specific search engines. The approach is exemplified on the biological domain and on MedLine abstracts, as developing a specialized search engine for biologists is one of the ALVIS case studies. The ALVIS principle for linguistic annotations is based on existing works and standard propositions. We made the choice of stand-off annotations rather than inserted mark-up. Annotations are encoded as XML elements which form the linguistic subsection of the document record.<|reference_end|> | arxiv | @article{nazarenko2006the,
title={The ALVIS Format for Linguistically Annotated Documents},
author={Adeline Nazarenko (LIPN), Erick Alphonse (LIPN), Julien Derivi\`ere
(LIPN), Thierry Hamon (LIPN), Guillaume Vauvert (LIPN), Davy Weissenbacher
(LIPN)},
journal={Proceedings of the fifth international conference on Language
Resources and Evaluation, LREC 2006 (2006) 1782-1786},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609136},
primaryClass={cs.AI}
} | nazarenko2006the |
arxiv-674842 | cs/0609137 | Ontologies and Information Extraction | <|reference_start|>Ontologies and Information Extraction: This report argues that, even in the simplest cases, IE is an ontology-driven process. It is not a mere text filtering method based on simple pattern matching and keywords, because the extracted pieces of texts are interpreted with respect to a predefined partial domain model. This report shows that depending on the nature and the depth of the interpretation to be done for extracting the information, more or less knowledge must be involved. This report is mainly illustrated in biology, a domain in which there are critical needs for content-based exploration of the scientific literature and which becomes a major application domain for IE.<|reference_end|> | arxiv | @article{nédellec2006ontologies,
title={Ontologies and Information Extraction},
author={Claire N\'edellec (MIG), Adeline Nazarenko (LIPN)},
journal={LIPN Internal Report (2005)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609137},
primaryClass={cs.AI cs.IR}
} | nédellec2006ontologies |
arxiv-674843 | cs/0609138 | MDL Denoising Revisited | <|reference_start|>MDL Denoising Revisited: We refine and extend an earlier MDL denoising criterion for wavelet-based denoising. We start by showing that the denoising problem can be reformulated as a clustering problem, where the goal is to obtain separate clusters for informative and non-informative wavelet coefficients, respectively. This suggests two refinements, adding a code-length for the model index, and extending the model in order to account for subband-dependent coefficient distributions. A third refinement is derivation of soft thresholding inspired by predictive universal coding with weighted mixtures. We propose a practical method incorporating all three refinements, which is shown to achieve good performance and robustness in denoising both artificial and natural signals.<|reference_end|> | arxiv | @article{roos2006mdl,
title={MDL Denoising Revisited},
author={Teemu Roos, Petri Myllymäki, Jorma Rissanen},
journal={arXiv preprint arXiv:cs/0609138},
year={2006},
doi={10.1109/TSP.2009.2021633},
archivePrefix={arXiv},
eprint={cs/0609138},
primaryClass={cs.IT math.IT}
} | roos2006mdl |
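As a point of reference for the soft-thresholding refinement mentioned in the abstract above, here is the classical soft-thresholding rule that such MDL-based schemes refine; a generic sketch only, not the paper's mixture-derived shrinkage, and `lam` is a free illustrative parameter.

```python
def soft_threshold(coeffs, lam):
    """Standard wavelet soft thresholding: shrink every coefficient toward
    zero by lam and zero out the small ones.  In the paper the shrinkage
    comes out of predictive universal coding with weighted mixtures rather
    than a single fixed threshold like this."""
    return [(abs(c) - lam) * (1 if c > 0 else -1) if abs(c) > lam else 0.0
            for c in coeffs]

print(soft_threshold([3.0, -0.4, 1.2], lam=0.5))  # [2.5, 0.0, 0.7]
```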
arxiv-674844 | cs/0609139 | The Capacity of Channels with Feedback | <|reference_start|>The Capacity of Channels with Feedback: We introduce a general framework for treating channels with memory and feedback. First, we generalize Massey's concept of directed information and use it to characterize the feedback capacity of general channels. Second, we present coding results for Markov channels. This requires determining appropriate sufficient statistics at the encoder and decoder. Third, a dynamic programming framework for computing the capacity of Markov channels is presented. Fourth, it is shown that the average cost optimality equation (ACOE) can be viewed as an implicit single-letter characterization of the capacity. Fifth, scenarios with simple sufficient statistics are described.<|reference_end|> | arxiv | @article{tatikonda2006the,
title={The Capacity of Channels with Feedback},
author={Sekhar Tatikonda and Sanjoy Mitter},
journal={arXiv preprint arXiv:cs/0609139},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609139},
primaryClass={cs.IT math.IT}
} | tatikonda2006the |
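For reference, the directed information that this line of work builds on is Massey's quantity; the standard definition (the paper's generalization is not reproduced here) is

```latex
I(X^n \to Y^n) \;=\; \sum_{i=1}^{n} I(X^i ; Y_i \mid Y^{i-1})
```

i.e. the information the input prefix gives about the current output conditioned on past outputs, which replaces ordinary mutual information once feedback makes the inputs depend on past outputs.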
arxiv-674845 | cs/0609140 | Motion Primitives for Robotic Flight Control | <|reference_start|>Motion Primitives for Robotic Flight Control: We introduce a simple framework for learning aggressive maneuvers in flight control of UAVs. Taking inspiration from biological systems, dynamic movement primitives are analyzed and extended using nonlinear contraction theory. Accordingly, primitives of an observed movement are stably combined and concatenated. We demonstrate our results experimentally on the Quanser helicopter: we first imitate aggressive maneuvers and then use them as primitives to achieve new maneuvers that fly over an obstacle.<|reference_end|> | arxiv | @article{perk2006motion,
title={Motion Primitives for Robotic Flight Control},
author={Baris E. Perk, J. J. E. Slotine},
journal={arXiv preprint arXiv:cs/0609140},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609140},
primaryClass={cs.RO cs.LG}
} | perk2006motion |
arxiv-674846 | cs/0609141 | Polygon Convexity: A Minimal O(n) Test | <|reference_start|>Polygon Convexity: A Minimal O(n) Test: An O(n) test for polygon convexity is stated and proved. It is also proved that the test is minimal in a certain exact sense.<|reference_end|> | arxiv | @article{pinelis2006polygon,
title={Polygon Convexity: A Minimal O(n) Test},
author={Iosif Pinelis},
journal={arXiv preprint arXiv:cs/0609141},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609141},
primaryClass={cs.CG cs.CC math.CO math.MG}
} | pinelis2006polygon |
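To make the flavor of such a linear-time test concrete, here is a rough Python sketch of one standard O(n) convexity check: a single pass verifying that all turns share one orientation and that the edge directions wind around exactly once. This illustrates the kind of test the paper analyzes, not its exact minimal test, and it glosses over the degenerate cases the paper treats precisely.

```python
import math

def is_convex(points):
    """One-pass convexity check for a polygon given as a list of (x, y)
    vertices: every turn between consecutive edges must have the same sign,
    and the turns must sum to exactly one full revolution (+/- 2*pi)."""
    n = len(points)
    if n < 3:
        return False
    dirs = []                                   # direction angle of each edge
    for i in range(n):
        (ax, ay), (bx, by) = points[i], points[(i + 1) % n]
        if (ax, ay) == (bx, by):
            return False                        # repeated vertex
        dirs.append(math.atan2(by - ay, bx - ax))
    sign, total = 0, 0.0
    for i in range(n):
        turn = dirs[(i + 1) % n] - dirs[i]
        if turn <= -math.pi:                    # normalize into (-pi, pi]
            turn += 2 * math.pi
        elif turn > math.pi:
            turn -= 2 * math.pi
        if turn != 0.0:
            s = 1 if turn > 0 else -1
            if sign == 0:
                sign = s
            elif s != sign:
                return False                    # turns both ways: reflex vertex
        total += turn
    return abs(abs(total) - 2 * math.pi) < 1e-9  # winds around exactly once

print(is_convex([(0, 0), (2, 0), (2, 2), (0, 2)]))             # True
print(is_convex([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))     # False
```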
arxiv-674847 | cs/0609142 | Modular self-organization | <|reference_start|>Modular self-organization: The aim of this paper is to provide a sound framework for addressing a difficult problem: the automatic construction of an autonomous agent's modular architecture. We combine results from two apparently unrelated domains: autonomous planning through Markov Decision Processes and a general data clustering approach using a kernel-like method. Our fundamental idea is that the former is a good framework for addressing autonomy whereas the latter allows us to tackle self-organizing problems.<|reference_end|> | arxiv | @article{scherrer2006modular,
title={Modular self-organization},
author={Bruno Scherrer (INRIA Lorraine - LORIA)},
journal={arXiv preprint arXiv:cs/0609142},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609142},
primaryClass={cs.AI}
} | scherrer2006modular |
arxiv-674848 | cs/0609143 | ECA-LP / ECA-RuleML: A Homogeneous Event-Condition-Action Logic Programming Language | <|reference_start|>ECA-LP / ECA-RuleML: A Homogeneous Event-Condition-Action Logic Programming Language: Event-driven reactive functionalities are an urgent need in today's distributed service-oriented applications and (Semantic) Web-based environments. An important problem to be addressed is how to correctly and efficiently capture and process the event-based behavioral, reactive logic represented as ECA rules in combination with other conditional decision logic which is represented as derivation rules. In this paper we elaborate on a homogeneous integration approach which combines derivation rules, reaction rules (ECA rules) and other rule types such as integrity constraints into the general framework of logic programming. The developed ECA-LP language provides expressive features such as ID-based updates with support for external and self-updates of the intensional and extensional knowledge, transactions including integrity testing and an event algebra to define and process complex events and actions based on a novel interval-based Event Calculus variant.<|reference_end|> | arxiv | @article{paschke2006eca-lp,
title={ECA-LP / ECA-RuleML: A Homogeneous Event-Condition-Action Logic
Programming Language},
author={Adrian Paschke},
journal={Paschke, A.: ECA-LP / ECA-RuleML: A Homogeneous
Event-Condition-Action Logic Programming Language, Int. Conf. on Rules and
Rule Markup Languages for the Semantic Web (RuleML06), Athens, Georgia, USA,
Nov. 2006},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609143},
primaryClass={cs.AI cs.LO cs.SE}
} | paschke2006eca-lp |
arxiv-674849 | cs/0609144 | The Management and Integration of Biomedical Knowledge: Application in the Health-e-Child Project (Position Paper) | <|reference_start|>The Management and Integration of Biomedical Knowledge: Application in the Health-e-Child Project (Position Paper): The Health-e-Child project aims to develop an integrated healthcare platform for European paediatrics. In order to achieve a comprehensive view of children's health, a complex integration of biomedical data, information, and knowledge is necessary. Ontologies will be used to formally define this domain knowledge and will form the basis for the medical knowledge management system. This paper introduces an innovative methodology for the vertical integration of biomedical knowledge. This approach will be largely clinician-centered and will enable the definition of ontology fragments, connections between them (semantic bridges) and enriched ontology fragments (views). The strategy for the specification and capture of fragments, bridges and views is outlined with preliminary examples demonstrated in the collection of biomedical information from hospital databases, biomedical ontologies, and biomedical public databases.<|reference_end|> | arxiv | @article{jimenez-ruiz2006the,
title={The Management and Integration of Biomedical Knowledge: Application in
the Health-e-Child Project (Position Paper)},
author={E. Jimenez-Ruiz, R. Berlanga, I. Sanz, R. McClatchey, R. Danger, D.
Manset, J. Paraire, A. Rios},
journal={arXiv preprint arXiv:cs/0609144},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609144},
primaryClass={cs.DB}
} | jimenez-ruiz2006the |
arxiv-674850 | cs/0609145 | A Semidefinite Relaxation for Air Traffic Flow Scheduling | <|reference_start|>A Semidefinite Relaxation for Air Traffic Flow Scheduling: We first formulate the problem of optimally scheduling air traffic flow with sector capacity constraints as a mixed integer linear program. We then use semidefinite relaxation techniques to form a convex relaxation of that problem. Finally, we present a randomization algorithm to further improve the quality of the solution. Because of the specific structure of the air traffic flow problem, the relaxation has a single semidefinite constraint of size dn where d is the maximum delay and n the number of flights.<|reference_end|> | arxiv | @article{d'aspremont2006a,
title={A Semidefinite Relaxation for Air Traffic Flow Scheduling},
author={Alexandre d'Aspremont, Laurent El Ghaoui},
journal={arXiv preprint arXiv:cs/0609145},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609145},
primaryClass={cs.CE}
} | d'aspremont2006a |
arxiv-674851 | cs/0609146 | A Combinatorial Family of Near Regular LDPC Codes | <|reference_start|>A Combinatorial Family of Near Regular LDPC Codes: An elementary combinatorial Tanner graph construction for a family of near-regular low density parity check codes achieving high girth is presented. The construction allows flexibility in the choice of design parameters like rate, average degree, girth and block length of the code and yields an asymptotic family. The complexity of constructing codes in the family grows only quadratically with the block length.<|reference_end|> | arxiv | @article{krishnan2006a,
title={A Combinatorial Family of Near Regular LDPC Codes},
author={K. Murali Krishnan, Rajdeep Singh, L. Sunil Chandran and Priti Shankar},
journal={ISIT 2007},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609146},
primaryClass={cs.IT math.IT}
} | krishnan2006a |
arxiv-674852 | cs/0609147 | Identifying Crosscutting Concerns Using Fan-in Analysis | <|reference_start|>Identifying Crosscutting Concerns Using Fan-in Analysis: Aspect mining is a reverse engineering process that aims at finding crosscutting concerns in existing systems. This paper proposes an aspect mining approach based on determining methods that are called from many different places, and hence have a high fan-in, which can be seen as a symptom of crosscutting functionality. The approach is semi-automatic, and consists of three steps: metric calculation, method filtering, and call site analysis. Carrying out these steps is an interactive process supported by an Eclipse plug-in called FINT. Fan-in analysis has been applied to three open source Java systems, totaling around 200,000 lines of code. The most interesting concerns identified are discussed in detail, which includes several concerns not previously discussed in the aspect-oriented literature. The results show that a significant number of crosscutting concerns can be recognized using fan-in analysis, and each of the three steps can be supported by tools.<|reference_end|> | arxiv | @article{marin2006identifying,
title={Identifying Crosscutting Concerns Using Fan-in Analysis},
author={Marius Marin, Arie van Deursen, Leon Moonen},
journal={ACM Transactions on Software Engineering and Methodology, 2007},
year={2006},
number={TUD-SERG-2006-013},
archivePrefix={arXiv},
eprint={cs/0609147},
primaryClass={cs.SE}
} | marin2006identifying |
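The first two steps of the approach (metric calculation and method filtering) are easy to picture in code; a minimal sketch over an abstract call graph, with an illustrative threshold value (the paper's FINT tool computes the metric from real Java sources, and step 3, call-site analysis, stays interactive):

```python
from collections import defaultdict

def fan_in_candidates(calls, threshold=3):
    """From (caller, callee) pairs, a method's fan-in is its number of
    *distinct* callers; methods at or above the threshold become
    crosscutting-concern candidates for manual call-site analysis."""
    callers = defaultdict(set)
    for caller, callee in calls:
        if caller != callee:                      # ignore self/recursive calls
            callers[callee].add(caller)
    scored = sorted(((len(cs), m) for m, cs in callers.items()), reverse=True)
    return [(m, n) for n, m in scored if n >= threshold]

calls = [("A.run", "Log.trace"), ("B.save", "Log.trace"), ("C.load", "Log.trace")]
print(fan_in_candidates(calls))                   # [('Log.trace', 3)]
```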
arxiv-674853 | cs/0609148 | Pseudo-Codeword Performance Analysis for LDPC Convolutional Codes | <|reference_start|>Pseudo-Codeword Performance Analysis for LDPC Convolutional Codes: Message-passing iterative decoders for low-density parity-check (LDPC) block codes are known to be subject to decoding failures due to so-called pseudo-codewords. These failures can cause the large signal-to-noise ratio performance of message-passing iterative decoding to be worse than that predicted by the maximum-likelihood decoding union bound. In this paper we address the pseudo-codeword problem from the convolutional-code perspective. In particular, we compare the performance of LDPC convolutional codes with that of their ``wrapped'' quasi-cyclic block versions and we show that the minimum pseudo-weight of an LDPC convolutional code is at least as large as the minimum pseudo-weight of an underlying quasi-cyclic code. This result, which parallels a well-known relationship between the minimum Hamming weight of convolutional codes and the minimum Hamming weight of their quasi-cyclic counterparts, is due to the fact that every pseudo-codeword in the convolutional code induces a pseudo-codeword in the block code with pseudo-weight no larger than that of the convolutional code's pseudo-codeword. This difference in the weight spectra leads to improved performance at low-to-moderate signal-to-noise ratios for the convolutional code, a conclusion supported by simulation results.<|reference_end|> | arxiv | @article{smarandache2006pseudo-codeword,
title={Pseudo-Codeword Performance Analysis for LDPC Convolutional Codes},
author={Roxana Smarandache, Ali E. Pusane, Pascal O. Vontobel, Daniel J.
Costello Jr},
journal={arXiv preprint arXiv:cs/0609148},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609148},
primaryClass={cs.IT math.IT}
} | smarandache2006pseudo-codeword |
arxiv-674854 | cs/0609149 | Dynamic Spectrum Access: Signal Processing, Networking, and Regulatory Policy | <|reference_start|>Dynamic Spectrum Access: Signal Processing, Networking, and Regulatory Policy: In this article, we first provide a taxonomy of dynamic spectrum access. We then focus on opportunistic spectrum access, the overlay approach under the hierarchical access model of dynamic spectrum access. We aim to provide an overview of challenges and recent developments in both technological and regulatory aspects of opportunistic spectrum access.<|reference_end|> | arxiv | @article{zhao2006dynamic,
title={Dynamic Spectrum Access: Signal Processing, Networking, and Regulatory
Policy},
author={Qing Zhao and Brian M. Sadler},
journal={arXiv preprint arXiv:cs/0609149},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609149},
primaryClass={cs.NI}
} | zhao2006dynamic |
arxiv-674855 | cs/0609150 | Modelling and Simulation of Scheduling Policies Implemented in Ethernet Switch by Using Coloured Petri Nets | <|reference_start|>Modelling and Simulation of Scheduling Policies Implemented in Ethernet Switch by Using Coloured Petri Nets: The objective of this paper is to propose models for studying the behaviour of an Ethernet switch in Networked Control Systems. Two scheduling policies are analyzed: static priority and WRR (Weighted Round Robin). The modelling work is based on Coloured Petri Nets. A temporal validation step, based on simulation of these models, shows that the obtained results are close to the expected behaviour of these scheduling policies.<|reference_end|> | arxiv | @article{brahimi2006modelling,
title={Modelling and Simulation of Scheduling Policies Implemented in Ethernet
Switch by Using Coloured Petri Nets},
author={Belynda Brahimi (CRAN), Christophe Aubrun (CRAN), Eric Rondeau (CRAN)},
journal={11th IEEE International Conference on Emerging Technologies and
Factory Automation, Czech Republic (2006) 667-674},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609150},
primaryClass={cs.NI}
} | brahimi2006modelling |
arxiv-674856 | cs/0609151 | Control compensation based on upper bound delay in networked control systems | <|reference_start|>Control compensation based on upper bound delay in networked control systems: Recent interest in networked control systems (NCS) has instigated research in both communication networks and control. Analysis of NCSs has usually been performed from either the network or the control point of view, but not many papers exist where the analysis of both is done in the same context. In this paper an overall analysis of the networked control system is presented. First, the procedure of obtaining the upper bound delay value for packet transmission in the switched Ethernet network is presented. Next, the obtained delay estimate is utilised in delay compensation for improving the Quality of Performance (QoP) of the control systems. The presented upper bound delay algorithm applies ideas from network calculus theory. For the improvement of QoP, two delay compensation strategies, one based on the Smith predictor and one based on robust control, are presented and compared.<|reference_end|> | arxiv | @article{vatanski2006control,
title={Control compensation based on upper bound delay in networked control
systems},
author={Nikolai Vatanski, Jean-Philippe Georges (CRAN), Christophe Aubrun
(CRAN), Eric Rondeau (CRAN), Sirkka-Liisa Jämsä Jounela},
journal={17th International Symposium on Mathematical Theory of Networks
and Systems (MTNS), Japan (2006)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609151},
primaryClass={cs.NI}
} | vatanski2006control |
arxiv-674857 | cs/0609152 | Use of upper bound delay estimate in stability analysis and robust control compensation in networked control systems | <|reference_start|>Use of upper bound delay estimate in stability analysis and robust control compensation in networked control systems: Recent interest in networked control systems (NCS) has instigated research in various areas of both communication networks and control. The analysis of NCS has often been performed either from the network or the control point of view, and not many papers exist where the analysis of both is done in the same context. Here a simple overall analysis is presented. In the paper the procedure of obtaining the upper bound delay value in the switched Ethernet network is proposed and the obtained delay estimate is used in stability analysis of the feedback loop and in the control compensation. The upper bound delay algorithm is based on network calculus theory, the stability analysis uses the small gain theorem, and the control compensation strategy is based on the Smith predictor, with the upper bound delay utilised in obtaining the delay estimate.<|reference_end|> | arxiv | @article{georges2006use,
title={Use of upper bound delay estimate in stability analysis and robust
control compensation in networked control systems},
author={Jean-Philippe Georges (CRAN), Nikolai Vatanski, Eric Rondeau (CRAN),
Sirkka-Liisa Jämsä Jounela},
journal={12th IFAC Symposium on Information Control Problems in
Manufacturing, INCOM 2006, St-Etienne, France (16/05/2006) CDROM},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609152},
primaryClass={cs.NI}
} | georges2006use |
arxiv-674858 | cs/0609153 | Mining Generalized Graph Patterns based on User Examples | <|reference_start|>Mining Generalized Graph Patterns based on User Examples: There has been a lot of recent interest in mining patterns from graphs. Often, the exact structure of the patterns of interest is not known. This happens, for example, when molecular structures are mined to discover fragments useful as features in the chemical compound classification task, or when web sites are mined to discover sets of web pages representing logical documents. Such patterns are often generated from a few small subgraphs (cores), according to certain generalization rules (GRs). We call such patterns "generalized patterns" (GPs). While being structurally different, GPs often perform the same function in the network. Previously proposed approaches to mining GPs either assumed that the cores and the GRs are given, or that all interesting GPs are frequent. These are strong assumptions, which often do not hold in practical applications. In this paper, we propose an approach to mining GPs that is free from the above assumptions. Given a small number of GPs selected by the user, our algorithm discovers all GPs similar to the user examples. First, a machine learning-style approach is used to find the cores. Second, generalizations of the cores in the graph are computed to identify GPs. Evaluation on synthetic data, generated using real cores and GRs from biological and web domains, demonstrates the effectiveness of our approach.<|reference_end|> | arxiv | @article{dmitriev2006mining,
title={Mining Generalized Graph Patterns based on User Examples},
author={Pavel Dmitriev, Carl Lagoze},
journal={arXiv preprint arXiv:cs/0609153},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609153},
primaryClass={cs.DS cs.LG}
} | dmitriev2006mining |
arxiv-674859 | cs/0609154 | Loop Calculus Helps to Improve Belief Propagation and Linear Programming Decodings of Low-Density-Parity-Check Codes | <|reference_start|>Loop Calculus Helps to Improve Belief Propagation and Linear Programming Decodings of Low-Density-Parity-Check Codes: We illustrate the utility of the recently developed loop calculus for improving the Belief Propagation (BP) algorithm. If the algorithm that minimizes the Bethe free energy fails we modify the free energy by accounting for a critical loop in a graphical representation of the code. The log-likelihood specific critical loop is found by means of the loop calculus. The general method is tested using an example of the Linear Programming (LP) decoding, that can be viewed as a special limit of the BP decoding. Considering the (155,64,20) code that performs over Additive-White-Gaussian-Noise channel we show that the loop calculus improves the LP decoding and corrects all previously found dangerous configurations of log-likelihoods related to pseudo-codewords with low effective distance, thus reducing the code's error-floor.<|reference_end|> | arxiv | @article{chertkov2006loop,
title={Loop Calculus Helps to Improve Belief Propagation and Linear Programming
Decodings of Low-Density-Parity-Check Codes},
author={Michael Chertkov and Vladimir Y. Chernyak},
journal={arXiv preprint arXiv:cs/0609154},
year={2006},
number={LAUR-06-6751},
archivePrefix={arXiv},
eprint={cs/0609154},
primaryClass={cs.IT cond-mat.dis-nn cond-mat.stat-mech math.IT}
} | chertkov2006loop |
arxiv-674860 | cs/0609155 | Detection of Markov Random Fields on Two-Dimensional Intersymbol Interference Channels | <|reference_start|>Detection of Markov Random Fields on Two-Dimensional Intersymbol Interference Channels: We present a novel iterative algorithm for detection of binary Markov random fields (MRFs) corrupted by two-dimensional (2D) intersymbol interference (ISI) and additive white Gaussian noise (AWGN). We assume a first-order binary MRF as a simple model for correlated images. We assume a 2D digital storage channel, where the MRF is interleaved before being written and then read by a 2D transducer; such channels occur in recently proposed optical disk storage systems. The detection algorithm is a concatenation of two soft-input/soft-output (SISO) detectors: an iterative row-column soft-decision feedback (IRCSDF) ISI detector, and a MRF detector. The MRF detector is a SISO version of the stochastic relaxation algorithm by Geman and Geman in IEEE Trans. Pattern Anal. and Mach. Intell., Nov. 1984. On the 2 x 2 averaging-mask ISI channel, at a bit error rate (BER) of 10^{-5}, the concatenated algorithm achieves SNR savings of between 0.5 and 2.0 dB over the IRCSDF detector alone; the savings increase as the MRFs become more correlated, or as the SNR decreases. The algorithm is also fairly robust to mismatches between the assumed and actual MRF parameters.<|reference_end|> | arxiv | @article{zhu2006detection,
title={Detection of Markov Random Fields on Two-Dimensional Intersymbol
Interference Channels},
author={Ying Zhu, Taikun Cheng, Krishnamoorthy Sivakumar, and Benjamin J.
Belzer},
journal={arXiv preprint arXiv:cs/0609155},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609155},
primaryClass={cs.IT math.IT}
} | zhu2006detection |
arxiv-674861 | cs/0609156 | Entangled Graphs | <|reference_start|>Entangled Graphs: In this paper we prove a separability criterion for mixed states in $\mathbb C^p\otimes\mathbb C^q$. We also show that the density matrix of a graph with only one entangled edge is entangled.<|reference_end|> | arxiv | @article{rahiminia2006entangled,
title={Entangled Graphs},
author={Hadi Rahiminia, Massoud Amini},
journal={arXiv preprint arXiv:cs/0609156},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609156},
primaryClass={cs.IT cs.DM math.IT}
} | rahiminia2006entangled |
arxiv-674862 | cs/0609157 | Sensor Scheduling for Optimal Observability Using Estimation Entropy | <|reference_start|>Sensor Scheduling for Optimal Observability Using Estimation Entropy: We consider sensor scheduling as the optimal observability problem for partially observable Markov decision processes (POMDP). This model fits the cases where a Markov process is observed by a single sensor which needs to be dynamically adjusted or by a set of sensors which are selected one at a time in a way that maximizes the information acquisition from the process. Similar to conventional POMDP problems, in this model the control action is based on all past measurements; however here this action is not for the control of the state process, which is autonomous, but it is for influencing the measurement of that process. This POMDP is a controlled version of the hidden Markov process, and we show that its optimal observability problem can be formulated as an average cost Markov decision process (MDP) scheduling problem. In this problem, a policy is a rule for selecting sensors or adjusting the measuring device based on the measurement history. Given a policy, we can evaluate the estimation entropy for the joint state-measurement processes which inversely measures the observability of the state process for that policy. Considering estimation entropy as the cost of a policy, we show that the problem of finding an optimal policy is equivalent to an average cost MDP scheduling problem where the cost function is the entropy function over the belief space. This allows the application of the policy iteration algorithm for finding the policy achieving minimum estimation entropy, thus optimum observability.<|reference_end|> | arxiv | @article{rezaeian2006sensor,
title={Sensor Scheduling for Optimal Observability Using Estimation Entropy},
author={Mohammad Rezaeian},
journal={arXiv preprint arXiv:cs/0609157},
year={2006},
doi={10.1109/PERCOMW.2007.105},
archivePrefix={arXiv},
eprint={cs/0609157},
primaryClass={cs.IT cs.AI math.IT}
} | rezaeian2006sensor |
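The per-step cost in this formulation is the entropy of the belief state; a small sketch of one belief update under the currently scheduled sensor, and its entropy, with placeholder matrices (the paper's sufficient statistics and average-cost machinery are not reproduced here):

```python
import math

def belief_update(belief, T, O, obs):
    """One belief update for a hidden Markov state process: predict with
    transition matrix T[i][j] = P(j|i), correct with the chosen sensor's
    observation likelihoods O[obs][state], then renormalize."""
    n = len(belief)
    pred = [sum(belief[i] * T[i][j] for i in range(n)) for j in range(n)]
    post = [O[obs][j] * pred[j] for j in range(n)]
    z = sum(post)
    return [x / z for x in post]

def entropy(belief):
    """Entropy of the belief: the per-step cost a scheduling policy pays."""
    return -sum(p * math.log2(p) for p in belief if p > 0)

T = [[0.9, 0.1], [0.2, 0.8]]       # toy 2-state chain
O = [[0.8, 0.3], [0.2, 0.7]]       # O[obs][state] for the selected sensor
b = belief_update([0.5, 0.5], T, O, obs=0)
print(b, entropy(b))               # a sharper belief has lower entropy cost
```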
arxiv-674863 | cs/0609158 | A Fast Image Encryption Scheme based on Chaotic Standard Map | <|reference_start|>A Fast Image Encryption Scheme based on Chaotic Standard Map: In recent years, a variety of effective chaos-based image encryption schemes have been proposed. The typical structure of these schemes has the permutation and the diffusion stages performed alternately. The confusion and diffusion effect is solely contributed by the permutation and the diffusion stage, respectively. As a result, more overall rounds than necessary are required to achieve a certain level of security. In this paper, we suggest introducing a certain diffusion effect in the confusion stage by simple sequential add-and-shift operations. The purpose is to reduce the workload of the time-consuming diffusion part so that fewer overall rounds, and hence a shorter encryption time, are needed. Simulation results show that at a similar performance level, the proposed cryptosystem needs less than one-third the encryption time of an existing cryptosystem. An effective acceleration of the encryption speed is thus achieved.<|reference_end|> | arxiv | @article{wong2006a,
title={A Fast Image Encryption Scheme based on Chaotic Standard Map},
author={Kwok-Wo Wong, Bernie Sin-Hung Kwok, and Wing-Shing Law},
journal={arXiv preprint arXiv:cs/0609158},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609158},
primaryClass={cs.CR cs.MM}
} | wong2006a |
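A rough sketch of the idea: permute pixel positions with a discretized standard map while mixing a little diffusion into the confusion stage through sequential add-and-shift operations on the pixel values. The discretization, the parameter names and the 8-bit rotation below are illustrative assumptions, not the paper's specification:

```python
import math

def confuse_with_diffusion(img, K, rounds=1):
    """img is an N x N list of lists of 8-bit values.  Positions move under
    a discretized standard map (a bijection on the lattice), while each
    value is sequentially added to the previous output and cyclically
    shifted, so some diffusion happens already during confusion."""
    N = len(img)
    for _ in range(rounds):
        out = [[0] * N for _ in range(N)]
        carry = 0
        for y in range(N):
            for x in range(N):
                nx = (x + y) % N
                ny = (y + int(K * math.sin(2 * math.pi * nx / N))) % N
                v = (img[y][x] + carry) % 256        # sequential add ...
                v = ((v << 1) | (v >> 7)) & 0xFF     # ... and 1-bit cyclic shift
                out[ny][nx] = v
                carry = v                            # value feeds the next pixel
        img = out
    return img
```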
arxiv-674864 | cs/0609159 | Duality for Several Families of Evaluation Codes | <|reference_start|>Duality for Several Families of Evaluation Codes: We consider generalizations of Reed-Muller codes, toric codes, and codes from certain plane curves, such as those defined by norm and trace functions on finite fields. In each case we are interested in codes defined by evaluating arbitrary subsets of monomials, and in identifying when the dual codes are also obtained by evaluating monomials. We then move to the context of order domain theory, in which the subsets of monomials can be chosen to optimize decoding performance using the Berlekamp-Massey-Sakata algorithm with majority voting. We show that for the codes under consideration these subsets are well-behaved and the dual codes are also defined by monomials.<|reference_end|> | arxiv | @article{bras-amorós2006duality,
title={Duality for Several Families of Evaluation Codes},
author={Maria Bras-Amor'os, Michael E. O'Sullivan},
journal={arXiv preprint arXiv:cs/0609159},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609159},
primaryClass={cs.IT cs.DM math.IT}
} | bras-amorós2006duality |
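The objects under study here are codes obtained by evaluating a chosen set of monomials at the points of the plane over a finite field; a toy generator-matrix construction for prime q (which monomial sets yield duals that are again monomial is exactly the paper's question and is not settled by this sketch):

```python
def evaluation_code(q, monomials):
    """Generator matrix of the code obtained by evaluating each bivariate
    monomial (a, b) -> x^a * y^b at all points of F_q x F_q, for prime q
    so that arithmetic is simply mod q."""
    pts = [(x, y) for x in range(q) for y in range(q)]
    return [[pow(x, a, q) * pow(y, b, q) % q for (x, y) in pts]
            for (a, b) in monomials]

G = evaluation_code(3, [(0, 0), (1, 0), (0, 1)])   # a [9, 3] code over F_3
print(len(G), len(G[0]))                            # 3 9
```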
arxiv-674865 | cs/0609160 | Redundancies of Correction-Capability-Optimized Reed-Muller Codes | <|reference_start|>Redundancies of Correction-Capability-Optimized Reed-Muller Codes: This article is focused on some variations of Reed-Muller codes that yield improvements to the rate for a prescribed decoding performance under the Berlekamp-Massey-Sakata algorithm with majority voting. Explicit formulas for the redundancies of the new codes are given.<|reference_end|> | arxiv | @article{bras-amorós2006redundancies,
title={Redundancies of Correction-Capability-Optimized Reed-Muller Codes},
author={Maria Bras-Amor'os, Michael E. O'Sullivan},
journal={arXiv preprint arXiv:cs/0609160},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609160},
primaryClass={cs.IT cs.DM math.IT}
} | bras-amorós2006redundancies |
arxiv-674866 | cs/0609161 | The Order Bound on the Minimum Distance of the One-Point Codes Associated to a Garcia-Stichtenoth Tower of Function Fields | <|reference_start|>The Order Bound on the Minimum Distance of the One-Point Codes Associated to a Garcia-Stichtenoth Tower of Function Fields: Garcia and Stichtenoth discovered two towers of function fields that meet the Drinfeld-Vlăduţ bound on the ratio of the number of points to the genus. For one of these towers, Garcia, Pellikaan and Torres derived a recursive description of the Weierstrass semigroups associated to a tower of points on the associated curves. In this article, a non-recursive description of the semigroups is given and from this the enumeration of each of the semigroups is derived as well as its inverse. This enables us to find an explicit formula for the order (Feng-Rao) bound on the minimum distance of the associated one-point codes.<|reference_end|> | arxiv | @article{bras-amorós2006the,
title={The Order Bound on the Minimum Distance of the One-Point Codes
Associated to a Garcia-Stichtenoth Tower of Function Fields},
author={Maria Bras-Amor'os, Michael E. O'Sullivan},
journal={arXiv preprint arXiv:cs/0609161},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609161},
primaryClass={cs.IT cs.DM math.IT}
} | bras-amorós2006the |
arxiv-674867 | cs/0609162 | On Semigroups Generated by Two Consecutive Integers and Improved Hermitian Codes | <|reference_start|>On Semigroups Generated by Two Consecutive Integers and Improved Hermitian Codes: Analysis of the Berlekamp-Massey-Sakata algorithm for decoding one-point codes leads to two methods for improving code rate. One method, due to Feng and Rao, removes parity checks that may be recovered by their majority voting algorithm. The second method is to design the code to correct only those error vectors of a given weight that are also geometrically generic. In this work, formulae are given for the redundancies of Hermitian codes optimized with respect to these criteria as well as the formula for the order bound on the minimum distance. The results proceed from an analysis of numerical semigroups generated by two consecutive integers. The formula for the redundancy of optimal Hermitian codes correcting a given number of errors answers an open question stated by Pellikaan and Torres in 1999.<|reference_end|> | arxiv | @article{bras-amorós2006on,
title={On Semigroups Generated by Two Consecutive Integers and Improved
Hermitian Codes},
author={Maria Bras-Amor'os, Michael E. O'Sullivan},
journal={arXiv preprint arXiv:cs/0609162},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609162},
primaryClass={cs.IT cs.DM math.IT}
} | bras-amorós2006on |
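The combinatorial backbone of this record is the numerical semigroup generated by two consecutive integers; a small sketch that enumerates it and sanity-checks the classical genus formula a(a-1)/2 for <a, a+1> (a folklore fact used as a consistency check here, not the paper's redundancy formulas):

```python
def semigroup_two_consecutive(a, limit=None):
    """Elements and gaps of the numerical semigroup <a, a+1> up to a bound
    past its Frobenius number a*(a-1) - 1.  s belongs to the semigroup iff
    s - a or s - (a+1) already does (dynamic programming on the generators)."""
    if limit is None:
        limit = a * a
    elems = {0}
    for s in range(1, limit + 1):
        if (s - a) in elems or (s - a - 1) in elems:
            elems.add(s)
    gaps = [g for g in range(limit + 1) if g not in elems]
    assert len(gaps) == a * (a - 1) // 2      # classical genus of <a, a+1>
    return sorted(elems), gaps

elems, gaps = semigroup_two_consecutive(4)
print(gaps)    # [1, 2, 3, 6, 7, 11] -> genus 6 = 4*3/2, Frobenius number 11
```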
arxiv-674868 | cs/0609163 | Labeling Schemes with Queries | <|reference_start|>Labeling Schemes with Queries: We study the question of ``how robust are the known lower bounds of labeling schemes when one increases the number of consulted labels''. Let $f$ be a function on pairs of vertices. An $f$-labeling scheme for a family of graphs $\mathcal{F}$ labels the vertices of all graphs in $\mathcal{F}$ such that for every graph $G\in\mathcal{F}$ and every two vertices $u,v\in G$, the value $f(u,v)$ can be inferred by merely inspecting the labels of $u$ and $v$. This paper introduces a natural generalization: the notion of $f$-labeling schemes with queries, in which the value $f(u,v)$ can be inferred by inspecting not only the labels of $u$ and $v$ but possibly the labels of some additional vertices. We show that inspecting the label of a single additional vertex (one {\em query}) enables us to reduce the label size of many labeling schemes significantly.<|reference_end|> | arxiv | @article{korman2006labeling,
title={Labeling Schemes with Queries},
author={Amos Korman and Shay Kutten},
journal={arXiv preprint arXiv:cs/0609163},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609163},
primaryClass={cs.DC}
} | korman2006labeling |
arxiv-674869 | cs/0609164 | Conditional Expressions for Blind Deconvolution: Multi-point form | <|reference_start|>Conditional Expressions for Blind Deconvolution: Multi-point form: We present a conditional expression (CE) for finding blurs convolved in given images. The CE is given in terms of the zero-values of the blurs evaluated at multiple points. The CE can detect multiple blurs all at once. We illustrate the multiple blur-detection by using a test image.<|reference_end|> | arxiv | @article{aogaki2006conditional,
title={Conditional Expressions for Blind Deconvolution: Multi-point form},
author={S. Aogaki, I. Moritani, T. Sugai, F. Takeutchi, and F.M. Toyama},
journal={arXiv preprint arXiv:cs/0609164},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609164},
primaryClass={cs.CV}
} | aogaki2006conditional |
arxiv-674870 | cs/0609165 | Simple method to eliminate blur based on Lane and Bates algorithm | <|reference_start|>Simple method to eliminate blur based on Lane and Bates algorithm: A simple search method for finding a blur convolved in a given image is presented. The method can be easily extended to a large blur. The method has been experimentally tested with a model blurred image.<|reference_end|> | arxiv | @article{aogaki2006simple,
title={Simple method to eliminate blur based on Lane and Bates algorithm},
author={S. Aogaki, I. Moritani, T. Sugai, F. Takeutchi, and F.M. Toyama},
journal={arXiv preprint arXiv:cs/0609165},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609165},
primaryClass={cs.CV}
} | aogaki2006simple |
arxiv-674871 | cs/0609166 | Private Approximate Heavy Hitters | <|reference_start|>Private Approximate Heavy Hitters: We consider the problem of private computation of approximate Heavy Hitters. Alice and Bob each hold a vector and, in the vector sum, they want to find the B largest values along with their indices. While the exact problem requires linear communication, protocols in the literature solve this problem approximately using polynomial computation time, polylogarithmic communication, and constantly many rounds. We show how to solve the problem privately with comparable cost, in the sense that nothing is learned by Alice and Bob beyond what is implied by their input, the ideal top-B output, and goodness of approximation (equivalently, the Euclidean norm of the vector sum). We give lower bounds showing that the Euclidean norm must be leaked by any efficient algorithm.<|reference_end|> | arxiv | @article{strauss2006private,
title={Private Approximate Heavy Hitters},
author={Martin J. Strauss, Xuan Zheng},
journal={arXiv preprint arXiv:cs/0609166},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609166},
primaryClass={cs.CR}
} | strauss2006private |
arxiv-674872 | cs/0609167 | Updates in Answer Set Programming: An Approach Based on Basic Structural Properties | <|reference_start|>Updates in Answer Set Programming: An Approach Based on Basic Structural Properties: We have studied the update operator defined for update sequences by Eiter et al. without tautologies and we have observed that it satisfies an interesting property. This property, which we call Weak Independence of Syntax (WIS), is similar to one of the postulates proposed by Alchourron, Gardenfors, and Makinson (AGM), except that in this case it applies to nonmonotonic logic. In addition, we consider five additional basic properties about update programs and we show that the operator of Eiter et al. satisfies them. This work continues the analysis of the AGM postulates under a refined view that considers Nelson logic as a monotonic logic, which allows us to expand our understanding of answer sets. Moreover, Nelson logic helped us to derive an alternative definition of the operator defined by Eiter et al., avoiding the use of unnecessary extra atoms.<|reference_end|> | arxiv | @article{osorio2006updates,
title={Updates in Answer Set Programming: An Approach Based on Basic Structural
Properties},
author={Mauricio Osorio and V'ictor Cuevas},
journal={arXiv preprint arXiv:cs/0609167},
year={2006},
archivePrefix={arXiv},
eprint={cs/0609167},
primaryClass={cs.LO}
} | osorio2006updates |
arxiv-674873 | cs/0610001 | Practical Entropy-Compressed Rank/Select Dictionary | <|reference_start|>Practical Entropy-Compressed Rank/Select Dictionary: Rank/Select dictionaries are data structures for an ordered set $S \subset \{0,1,...,n-1\}$ to compute $\mathrm{rank}(x,S)$ (the number of elements in $S$ which are no greater than $x$), and $\mathrm{select}(i,S)$ (the $i$-th smallest element in $S$), which are the fundamental components of \emph{succinct data structures} for strings, trees, graphs, etc. In those data structures, however, only asymptotic behavior has been considered and their performance for real data is not satisfactory. In this paper, we propose four novel Rank/Select dictionaries, esp, recrank, vcode and sdarray, each of which is small if the number of elements in $S$ is small, and indeed close to $nH_0(S)$ ($H_0(S) \leq 1$ is the zero-th order \textit{empirical entropy} of $S$) in practice, with query times superior to previous ones. Experimental results reveal the characteristics of our data structures and also show that these data structures are superior to existing implementations in both size and query time.<|reference_end|> | arxiv | @article{okanohara2006practical,
title={Practical Entropy-Compressed Rank/Select Dictionary},
author={Daisuke Okanohara, Kunihiko Sadakane},
journal={arXiv preprint arXiv:cs/0610001},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610001},
primaryClass={cs.DS}
} | okanohara2006practical |
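The rank/select interface itself is easy to pin down; a toy, uncompressed Python rendition for intuition (the paper's structures esp, recrank, vcode and sdarray realize the same interface in entropy-compressed space with constant-time queries):

```python
class BitVector:
    """Toy rank/select dictionary: S = {positions p : bits[p] == 1}.
    rank(x) counts elements of S no greater than x; select(i) returns the
    i-th smallest element of S (1-indexed)."""

    def __init__(self, bits):
        self.bits = bits
        self.prefix = [0]                  # prefix[p] = ones in bits[0..p-1]
        for b in bits:
            self.prefix.append(self.prefix[-1] + b)

    def rank(self, x):
        if x < 0:
            return 0
        return self.prefix[min(x, len(self.bits) - 1) + 1]

    def select(self, i):
        """Binary search for the smallest position with rank >= i."""
        if i < 1 or i > self.prefix[-1]:
            raise ValueError("fewer than i ones")
        lo, hi = 0, len(self.bits) - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if self.rank(mid) >= i:
                hi = mid
            else:
                lo = mid + 1
        return lo

bv = BitVector([0, 1, 1, 0, 1])
print(bv.rank(3), bv.select(3))            # 2 4
```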
arxiv-674874 | cs/0610002 | Conditional Expressions for Blind Deconvolution: Derivative form | <|reference_start|>Conditional Expressions for Blind Deconvolution: Derivative form: We developed novel conditional expressions (CEs) for Lane and Bates' blind deconvolution. The CEs are given in terms of the derivatives of the zero-values of the z-transform of given images. The CEs make it possible to automatically detect multiple blurs convolved in the given images all at once without performing any analysis of the zero-sheets of the given images. We illustrate the multiple blur-detection by the CEs for a model image.<|reference_end|> | arxiv | @article{aogaki2006conditional,
title={Conditional Expressions for Blind Deconvolution: Derivative form},
author={S. Aogaki, I. Moritani, T. Sugai, F. Takeutchi, and F.M. Toyama},
journal={arXiv preprint arXiv:cs/0610002},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610002},
primaryClass={cs.CV}
} | aogaki2006conditional |
arxiv-674875 | cs/0610003 | Embedding Metrics into Ultrametrics and Graphs into Spanning Trees with Constant Average Distortion | <|reference_start|>Embedding Metrics into Ultrametrics and Graphs into Spanning Trees with Constant Average Distortion: This paper addresses the basic question of how well can a tree approximate distances of a metric space or a graph. Given a graph, the problem of constructing a spanning tree in a graph which strongly preserves distances in the graph is a fundamental problem in network design. We present scaling distortion embeddings where the distortion scales as a function of $\epsilon$, with the guarantee that for each $\epsilon$ the distortion of a fraction $1-\epsilon$ of all pairs is bounded accordingly. Such a bound implies, in particular, that the \emph{average distortion} and $\ell_q$-distortions are small. Specifically, our embeddings have \emph{constant} average distortion and $O(\sqrt{\log n})$ $\ell_2$-distortion. This follows from the following results: we prove that any metric space embeds into an ultrametric with scaling distortion $O(\sqrt{1/\epsilon})$. For the graph setting we prove that any weighted graph contains a spanning tree with scaling distortion $O(\sqrt{1/\epsilon})$. These bounds are tight even for embedding in arbitrary trees. For probabilistic embedding into spanning trees we prove a scaling distortion of $\tilde{O}(\log^2 (1/\epsilon))$, which implies \emph{constant} $\ell_q$-distortion for every fixed $q<\infty$.<|reference_end|> | arxiv | @article{abraham2006embedding,
title={Embedding Metrics into Ultrametrics and Graphs into Spanning Trees with
Constant Average Distortion},
author={Ittai Abraham, Yair Bartal, Ofer Neiman},
journal={arXiv preprint arXiv:cs/0610003},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610003},
primaryClass={cs.DM}
} | abraham2006embedding |
arxiv-674876 | cs/0610004 | Rapport technique du projet OGRE | <|reference_start|>Rapport technique du projet OGRE: This report concerns automatic understanding of (French) iterative sentences, i.e. sentences where one single verb has to be interpreted by a more or less regular plurality of events. A linguistic analysis is proposed following an extension of Reichenbach's theory, several formal representations are considered, and a corpus of 18000 newspaper extracts is described.<|reference_end|> | arxiv | @article{bécher2006rapport,
title={Rapport technique du projet OGRE},
author={Gérard Bécher (GREYC), Patrice Enjalbert (GREYC), Estelle Fiévé
(LIMSI), Laurent Gosselin (DS), François Lévy (LIPN), Gérard Ligozat
(LIMSI)},
journal={arXiv preprint arXiv:cs/0610004},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610004},
primaryClass={cs.CL cs.AI}
} | bécher2006rapport |
arxiv-674877 | cs/0610005 | Domain Wall Displacement Detection Technology Research Report | <|reference_start|>Domain Wall Displacement Detection Technology Research Report: This article introduces a new data storage method called DWDD (Domain Wall Displacement Detection) and explains why it succeeds.<|reference_end|> | arxiv | @article{ren2006domain,
title={Domain Wall Displacement Detection Technology Research Report},
author={Ran Ren},
journal={arXiv preprint arXiv:cs/0610005},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610005},
primaryClass={cs.OH}
} | ren2006domain |
arxiv-674878 | cs/0610006 | A Typed Hybrid Description Logic Programming Language with Polymorphic Order-Sorted DL-Typed Unification for Semantic Web Type Systems | <|reference_start|>A Typed Hybrid Description Logic Programming Language with Polymorphic Order-Sorted DL-Typed Unification for Semantic Web Type Systems: In this paper we elaborate on a specific application in the context of hybrid description logic programs (hybrid DLPs), namely description logic Semantic Web type systems (DL-types), which are used for term typing of LP rules based on a polymorphic, order-sorted, hybrid DL-typed unification as the procedural semantics of hybrid DLPs. Using Semantic Web ontologies as type systems facilitates the interchange of domain-independent rules over domain boundaries via dynamic typing and mapping of explicitly defined type ontologies.<|reference_end|> | arxiv | @article{paschke2006a,
title={A Typed Hybrid Description Logic Programming Language with Polymorphic
Order-Sorted DL-Typed Unification for Semantic Web Type Systems},
author={Adrian Paschke},
journal={In: Proc. of 2nd Int. Workshop on OWL: Experiences and Directions
2006 (OWLED'06) at ISWC'06, Athens, Georgia, USA, 2006},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610006},
primaryClass={cs.AI}
} | paschke2006a |
arxiv-674879 | cs/0610007 | Full Text Searching in the Astrophysics Data System | <|reference_start|>Full Text Searching in the Astrophysics Data System: The Smithsonian/NASA Astrophysics Data System (ADS) provides a search system for the astronomy and physics scholarly literature. All major and many smaller astronomy journals that were published on paper have been scanned back to volume 1 and are available through the ADS free of charge. All scanned pages have been converted to text and can be searched through the ADS Full Text Search System. In addition, searches can be fanned out to several external search systems to include the literature published in electronic form. Results from the different search systems are combined into one results list. The ADS Full Text Search System is available at: http://adsabs.harvard.edu/fulltext_service.html<|reference_end|> | arxiv | @article{eichhorn2006full,
title={Full Text Searching in the Astrophysics Data System},
author={G"unther Eichhorn, Alberto Accomazzi, Carolyn S. Grant, Edwin A.
Henneken, Donna M. Thompson, Michael J. Kurtz, Stephen S. Murray},
journal={arXiv preprint arXiv:cs/0610007},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610007},
primaryClass={cs.DL astro-ph cs.DB}
} | eichhorn2006full |
arxiv-674880 | cs/0610008 | Connectivity in the Astronomy Digital Library | <|reference_start|>Connectivity in the Astronomy Digital Library: The Astrophysics Data System (ADS) provides an extensive system of links between the literature and other on-line information. Recently, the journals of the American Astronomical Society (AAS) and a group of NASA data centers have collaborated to provide more links between on-line data obtained by space missions and the on-line journals. Authors can now specify which data sets they have used in their article. This information is used by the participants to provide the links between the literature and the data. The ADS is available at: http://ads.harvard.edu<|reference_end|> | arxiv | @article{eichhorn2006connectivity,
title={Connectivity in the Astronomy Digital Library},
author={G"unther Eichhorn, Alberto Accomazzi, Carolyn S. Grant, Edwin A.
Henneken, Donna M. Thompson, Michael J. Kurtz, Stephen S. Murray},
journal={arXiv preprint arXiv:cs/0610008},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610008},
primaryClass={cs.DL astro-ph cs.DB}
} | eichhorn2006connectivity |
arxiv-674881 | cs/0610009 | VPSPACE and a Transfer Theorem over the Reals | <|reference_start|>VPSPACE and a Transfer Theorem over the Reals: We introduce a new class VPSPACE of families of polynomials. Roughly speaking, a family of polynomials is in VPSPACE if its coefficients can be computed in polynomial space. Our main theorem is that if (uniform, constant-free) VPSPACE families can be evaluated efficiently then the class PAR of decision problems that can be solved in parallel polynomial time over the real numbers collapses to P. As a result, one must first be able to show that there are VPSPACE families which are hard to evaluate in order to separate over the reals P from NP, or even from PAR.<|reference_end|> | arxiv | @article{koiran2006vpspace,
title={VPSPACE and a Transfer Theorem over the Reals},
author={Pascal Koiran (LIP), Sylvain Perifel (LIP)},
journal={arXiv preprint arXiv:cs/0610009},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610009},
primaryClass={cs.CC}
} | koiran2006vpspace |
arxiv-674882 | cs/0610010 | One-Pass, One-Hash n-Gram Statistics Estimation | <|reference_start|>One-Pass, One-Hash n-Gram Statistics Estimation: In multimedia, text or bioinformatics databases, applications query sequences of n consecutive symbols called n-grams. Estimating the number of distinct n-grams is a view-size estimation problem. While view sizes can be estimated by sampling under statistical assumptions, we desire an unassuming algorithm with universally valid accuracy bounds. Most related work has focused on repeatedly hashing the data, which is prohibitive for large data sources. We prove that a one-pass one-hash algorithm is sufficient for accurate estimates if the hashing is sufficiently independent. To reduce costs further, we investigate recursive random hashing algorithms and show that they are sufficiently independent in practice. We compare our running times with exact counts using suffix arrays and show that, while we use hardly any storage, we are an order of magnitude faster. The approach is further extended to a one-pass/one-hash computation of n-gram entropy and iceberg counts. The experiments use a large collection of English text from the Gutenberg Project as well as synthetic data.<|reference_end|> | arxiv | @article{lemire2006one-pass,,
title={One-Pass, One-Hash n-Gram Statistics Estimation},
author={Daniel Lemire and Owen Kaser},
journal={arXiv preprint arXiv:cs/0610010},
year={2006},
number={TR-06-001},
archivePrefix={arXiv},
eprint={cs/0610010},
primaryClass={cs.DB cs.CL}
} | lemire2006one-pass, |
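To make "one pass, one hash" concrete: a recursive (rolling) hash touches each symbol exactly once, and a sketch over the resulting hash stream estimates the number of distinct n-grams. The constants and the k-minimum-values estimator below are illustrative stand-ins, not the hash families or bounds analyzed in the paper:

```python
import heapq

def distinct_ngrams_estimate(text, n, k=256, b=131, p=(1 << 61) - 1):
    """Estimate the number of distinct character n-grams in one pass with
    one rolling polynomial hash, keeping only the k smallest hash values
    (a KMV cardinality sketch)."""
    if len(text) < n:
        return 0
    bn = pow(b, n - 1, p)              # b^(n-1) mod p, to drop the old symbol
    h = 0
    for c in text[:n]:
        h = (h * b + ord(c)) % p
    heap, members = [], set()          # max-heap (negated) of k smallest hashes

    def offer(v):
        if v in members:
            return
        if len(heap) < k:
            heapq.heappush(heap, -v)
            members.add(v)
        elif v < -heap[0]:
            members.discard(-heapq.heappushpop(heap, -v))
            members.add(v)

    offer(h)
    for i in range(n, len(text)):      # recursive update: O(1) per symbol
        h = ((h - ord(text[i - n]) * bn) * b + ord(text[i])) % p
        offer(h)
    if len(heap) < k:                  # fewer than k distinct hashes: exact
        return len(heap)
    return int(k * p / (-heap[0])) - 1  # classic KMV estimate k*p/v_k - 1

print(distinct_ngrams_estimate("the quick brown fox " * 50, n=5))  # exact: 20
```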
arxiv-674883 | cs/0610011 | Creation and use of Citations in the ADS | <|reference_start|>Creation and use of Citations in the ADS: With over 20 million records, the ADS citation database is regularly used by researchers and librarians to measure the scientific impact of individuals, groups, and institutions. In addition to the traditional sources of citations, the ADS has recently added references extracted from the arXiv e-prints on a nightly basis. We review the procedures used to harvest and identify the reference data used in the creation of citations, the policies and procedures that we follow to avoid double-counting and to eliminate contributions which may not be scholarly in nature. Finally, we describe how users and institutions can easily obtain quantitative citation data from the ADS, both interactively and via web-based programming tools. The ADS is available at http://ads.harvard.edu.<|reference_end|> | arxiv | @article{accomazzi2006creation,
title={Creation and use of Citations in the ADS},
author={Alberto Accomazzi, Gunther Eichhorn, Michael J. Kurtz, Carolyn S.
Grant, Edwin Henneken, Markus Demleitner, Donna Thompson, Elizabeth Bohlen,
Stephen S. Murray},
journal={arXiv preprint arXiv:cs/0610011},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610011},
primaryClass={cs.DL astro-ph cs.DB cs.IR}
} | accomazzi2006creation |
arxiv-674884 | cs/0610012 | On Shift Sequences for Interleaved Construction of Sequence Sets with Low Correlation | <|reference_start|>On Shift Sequences for Interleaved Construction of Sequence Sets with Low Correlation: Construction of signal sets with low correlation properties is of interest to designers of CDMA systems. One of the preferred ways of constructing such sets is the interleaved construction which uses two sequences a and b with 2-level autocorrelation and a shift sequence e. The shift sequence has to satisfy certain conditions for the resulting signal set to have low correlation properties. This article shows that the conditions reported in the literature are too strong and gives a version which admits a larger number of shift sequences. An open problem on the existence of shift sequences for attaining an interleaved set with maximum correlation value bounded by v+2 is also taken up and solved.<|reference_end|> | arxiv | @article{pillai2006on,
title={On Shift Sequences for Interleaved Construction of Sequence Sets with
Low Correlation},
author={N Rajesh Pillai, Yogesh Kumar},
journal={arXiv preprint arXiv:cs/0610012},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610012},
primaryClass={cs.IT math.IT}
} | pillai2006on |
arxiv-674885 | cs/0610013 | Cooperative Processes for Scientific Workflows | <|reference_start|>Cooperative Processes for Scientific Workflows: The work described in this paper is a contribution to the problem of data management in data-intensive scientific applications. First, we discuss scientific workflows and motivate their use in scientific applications. Then, we introduce the concept of cooperative processes and describe their interactions and uses in a flexible cooperative workflow system called \textit{Bonita}. Finally, we propose an approach to integrate and synthesize the data exchanged by mapping data-intensive science into Bonita, using a binary approach, and illustrate the efforts made to enhance computation performance within a dynamic environment.<|reference_end|> | arxiv | @article{gaaloul2006cooperative,
title={Cooperative Processes for Scientific Workflows},
author={Khaled Gaaloul (INRIA Lorraine - LORIA), François Charoy (INRIA
Lorraine - LORIA), Claude Godart (INRIA Lorraine - LORIA)},
journal={In 6th International Conference on Computational Science 3, 3993
(2006) 976-979},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610013},
primaryClass={cs.NI}
} | gaaloul2006cooperative |
arxiv-674886 | cs/0610014 | Implementing a Unification Algorithm for Protocol Analysis with XOR | <|reference_start|>Implementing a Unification Algorithm for Protocol Analysis with XOR: In this paper, we propose a unification algorithm for the theory $E$ which combines unification algorithms for $E_{\mathrm{std}}$ and $E_{\mathrm{ACUN}}$ (ACUN properties, like XOR) but, compared to the more general combination methods, uses specific properties of the equational theories for further optimizations. Our optimizations drastically reduce the number of non-deterministic choices, in particular those for variable identification and linear orderings. This is important for reducing both the runtime of the unification algorithm and the number of unifiers in the complete set of unifiers. We emphasize that obtaining a ``small'' set of unifiers is essential for the efficiency of the constraint solving procedure within which the unification algorithm is used. The method is implemented in the CL-Atse tool for security protocol analysis.<|reference_end|> | arxiv | @article{tuengerthal2006implementing,
title={Implementing a Unification Algorithm for Protocol Analysis with XOR},
author={Max Tuengerthal (Christian-Albrechts-Universität zu Kiel), Ralf
Küsters (Christian-Albrechts-Universität zu Kiel), Mathieu Turuani (INRIA
Lorraine - LORIA / LIFC)},
journal={In UNIF'06 - 20th International Workshop on Unification (2006)
1-5},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610014},
primaryClass={cs.CR}
} | tuengerthal2006implementing |
arxiv-674887 | cs/0610015 | Why did the accident happen? A norm-based reasoning approach | <|reference_start|>Why did the accident happen? A norm-based reasoning approach: In this paper we describe the architecture of a system that answers the question "Why did the accident happen?" from the textual description of an accident. We briefly present the different parts of the architecture and then describe in more detail the semantic part of the system, i.e. the part in which norm-based reasoning is performed on the explicit knowledge extracted from the text.<|reference_end|> | arxiv | @article{nouioua2006why,
title={Why did the accident happen? A norm-based reasoning approach},
author={Farid Nouioua (LIPN)},
journal={Logical Aspects of Computational Linguistics, student
session, Université de Bordeaux (Ed.) (2005) 31-34},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610015},
primaryClass={cs.AI}
} | nouioua2006why |
arxiv-674888 | cs/0610016 | Norm Based Causal Reasoning in Textual Corpus | <|reference_start|>Norm Based Causal Reasoning in Textual Corpus: Truth-based entailments are not sufficient for a good comprehension of NL; in fact, they cannot deduce implicit information necessary to understand a text. On the other hand, norm-based entailments are able to reach this goal. This idea was behind the development of Frames (Minsky 75) and Scripts (Schank 77, Schank 79) in the 70's. But these theories are not formalized enough and their adaptation to new situations is far from being obvious. In this paper, we present a reasoning system which uses norms in a causal reasoning process in order to find the cause of an accident from a text describing it.<|reference_end|> | arxiv | @article{nouioua2006norm,
title={Norm Based Causal Reasoning in Textual Corpus},
author={Farid Nouioua (LIPN)},
journal={Proceedings of the Sixth International Workshop on Computational
Semantics IWCS-6, France (2005) 396-400},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610016},
primaryClass={cs.AI cs.CL}
} | nouioua2006norm |
arxiv-674889 | cs/0610017 | A Quasigroup Based Cryptographic System | <|reference_start|>A Quasigroup Based Cryptographic System: This paper presents a quasigroup encryptor that has very good scrambling properties. We show that the output of the encryptor maximizes the output entropy and the encrypted output for constant and random inputs is very similar. The system architecture of the quasigroup encryptor and the autocorrelation properties of the output sequences are provided.<|reference_end|> | arxiv | @article{satti2006a,
title={A Quasigroup Based Cryptographic System},
author={Maruti Satti},
journal={arXiv preprint arXiv:cs/0610017},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610017},
primaryClass={cs.CR}
} | satti2006a |
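The abstract describes the encryptor only at a high level, so the following is a hedged background sketch rather than the paper's design: it implements the classical quasigroup stream-cipher construction (encrypting with c_i = q(c_{i-1}, m_i) over a Latin-square quasigroup q and decrypting with the left-inverse operation). The Latin-square generator, class name, and leader element are illustrative assumptions, not details taken from the paper.

```python
import secrets

def random_latin_square(n):
    # Row-rotate a shuffled base row: every row and column is then a
    # permutation of 0..n-1, so the table defines a quasigroup operation.
    base = list(range(n))
    secrets.SystemRandom().shuffle(base)
    return [base[i:] + base[:i] for i in range(n)]

class QuasigroupCipher:
    def __init__(self, square, leader):
        self.q = square              # q[a][b] is the quasigroup product a*b
        self.leader = leader         # public initial state
        n = len(square)
        # Left-inverse operation: inv[a][q[a][b]] == b, used to decrypt.
        self.inv = [[0] * n for _ in range(n)]
        for a in range(n):
            for b in range(n):
                self.inv[a][square[a][b]] = b

    def encrypt(self, msg):
        out, prev = [], self.leader
        for m in msg:
            prev = self.q[prev][m]   # c_i = c_{i-1} * m_i
            out.append(prev)
        return out

    def decrypt(self, ct):
        out, prev = [], self.leader
        for c in ct:
            out.append(self.inv[prev][c])  # m_i = c_{i-1} \ c_i
            prev = c
        return out

# Round-trip check over bytes (order-256 quasigroup).
cipher = QuasigroupCipher(random_latin_square(256), leader=0)
assert bytes(cipher.decrypt(cipher.encrypt(list(b"attack at dawn")))) == b"attack at dawn"
```

Because each ciphertext symbol feeds the next table lookup, a one-symbol change diffuses through the rest of the stream, which is the scrambling property the abstract alludes to.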
arxiv-674890 | cs/0610018 | Raisonnement stratifi\'e \`a base de normes pour inf\'erer les causes dans un corpus textuel | <|reference_start|>Raisonnement stratifi\'e \`a base de normes pour inf\'erer les causes dans un corpus textuel: To understand texts written in natural language (NL), we use our knowledge about the norms of the domain. Norms make it possible to infer implicit information from the text. This kind of information is, in general, defeasible, but it remains useful and acceptable as long as the text does not contradict it explicitly. In this paper we describe a non-monotonic reasoning system based on the norms of the car-crash domain. The system infers the cause of an accident from its textual description; the cause of an accident is seen as the most specific norm that has been violated. The predicates and rules of the system are stratified, i.e. organized in layers, in order to obtain efficient reasoning.<|reference_end|> | arxiv | @article{nouioua2006raisonnement,
title={Raisonnement stratifi\'{e} \`{a} base de normes pour inf\'{e}rer les
causes dans un corpus textuel},
author={Farid Nouioua (LIPN)},
journal={The Seventh International Symposium on Programming and Systems,
USTHB d'Alger (Ed.) (2005) 81-92},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610018},
primaryClass={cs.AI cs.CL}
} | nouioua2006raisonnement |
arxiv-674891 | cs/0610019 | NectaRSS, an RSS feed ranking system that implicitly learns user preferences | <|reference_start|>NectaRSS, an RSS feed ranking system that implicitly learns user preferences: In this paper a new RSS feed ranking method called NectaRSS is introduced. The system recommends information to a user based on his/her past choices. User preferences are automatically acquired, avoiding explicit feedback, and ranking is based on those preferences distilled to a user profile. NectaRSS uses the well-known vector space model for user profiles and new documents, and compares them using information-retrieval techniques, but introduces a novel method for user profile creation and adaptation from users' past choices. The efficiency of the proposed method has been tested by embedding it into an intelligent aggregator (RSS feed reader), which has been used by a heterogeneous set of users. In addition, this paper shows that the quality of the ranking of news items yielded by NectaRSS improves with the user's choices, and that it is superior to other algorithms that use a different information representation method.<|reference_end|> | arxiv | @article{samper2006nectarss,
title={NectaRSS, an RSS feed ranking system that implicitly learns user
preferences},
author={Juan J. Samper, Pedro A. Castillo, Lourdes Araujo, J. J. Merelo},
journal={arXiv preprint arXiv:cs/0610019},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610019},
primaryClass={cs.IR cs.HC}
} | samper2006nectarss
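The vector-space comparison mentioned in the abstract is standard IR machinery; the sketch below shows that part only, under the assumption of TF-IDF weighting. The function names and toy data are hypothetical, and NectaRSS's actual contribution (the implicit profile creation and adaptation rule) is deliberately not reproduced here.

```python
import math
from collections import Counter

def tfidf_vector(tokens, df, n_docs):
    # Sparse TF-IDF vector: term -> tf * log(N / (1 + df)), smoothed so
    # terms unseen in the corpus do not divide by zero.
    tf = Counter(tokens)
    return {t: f * math.log(n_docs / (1 + df.get(t, 0))) for t, f in tf.items()}

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_items(profile_tokens, items, df, n_docs):
    # Rank (title, tokens) feed items by similarity to the user profile.
    profile = tfidf_vector(profile_tokens, df, n_docs)
    scored = [(cosine(profile, tfidf_vector(toks, df, n_docs)), title)
              for title, toks in items]
    return sorted(scored, reverse=True)

df = {"python": 1, "gc": 1, "rust": 1}           # toy document frequencies
profile = ["python", "gc", "python"]             # tokens of past choices
items = [("Item A", ["python", "typing"]), ("Item B", ["rust", "borrow"])]
print(rank_items(profile, items, df, n_docs=4))  # Item A ranks first
```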
arxiv-674892 | cs/0610020 | XString: XML as a String | <|reference_start|>XString: XML as a String: Extensible Markup Language (XML) has been much hyped, to the point of becoming an industry buzzword. Behind the hype is a powerful technology for platform-independent data representation. As a text document, however, XML suffers from bloat and requires an XML parser to access and manipulate it. XString is an encoding method for XML: in essence, a markup language's markup language. XString offers the benefit of compressing XML, and allows the XML source to be easily manipulated and processed as a very long string.<|reference_end|> | arxiv | @article{gilreath2006xstring:,
title={XString: XML as a String},
author={William F. Gilreath},
journal={arXiv preprint arXiv:cs/0610020},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610020},
primaryClass={cs.DB}
} | gilreath2006xstring: |
arxiv-674893 | cs/0610021 | On the Fading Paper Achievable Region of the Fading MIMO Broadcast Channel | <|reference_start|>On the Fading Paper Achievable Region of the Fading MIMO Broadcast Channel: We consider transmission over the ergodic fading multi-antenna broadcast (MIMO-BC) channel with partial channel state information at the transmitter and full information at the receiver. Over the equivalent {\it non}-fading channel, capacity has recently been shown to be achievable using transmission schemes that were designed for the ``dirty paper'' channel. We focus on a similar ``fading paper'' model. The fading-paper capacity is difficult to evaluate. We confine ourselves to the {\it linear-assignment} capacity, which we define, and use convex analysis methods to prove that its maximizing distribution is Gaussian. We compare our fading-paper transmission to an application of dirty paper coding that ignores the partial state information and assumes the channel is fixed at the average fade. We show that a gain is easily achieved by appropriately exploiting the information. We also consider a cooperative upper bound on the sum-rate capacity as suggested by Sato. We present a numerical example that indicates that our scheme is capable of realizing much of this upper bound.<|reference_end|> | arxiv | @article{bennatan2006on,
title={On the Fading Paper Achievable Region of the Fading MIMO Broadcast
Channel},
author={Amir Bennatan and David Burshtein},
journal={arXiv preprint arXiv:cs/0610021},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610021},
primaryClass={cs.IT math.IT}
} | bennatan2006on |
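For context on the non-fading baseline the abstract compares against, Costa's classical dirty-paper result can be stated as follows (this is textbook background, not the paper's fading-paper derivation):

```latex
% Dirty paper channel: Y = X + S + Z, interference S known non-causally at
% the transmitter, Z ~ N(0, N) Gaussian noise, power constraint E[X^2] <= P.
% Gelfand-Pinsker / Costa: the known interference costs no capacity,
\[
  C_{\mathrm{DP}}
    \;=\; \max_{p(u,x \mid s)} \bigl[\, I(U;Y) - I(U;S) \,\bigr]
    \;=\; \tfrac{1}{2}\log_2\!\Bigl(1 + \tfrac{P}{N}\Bigr),
\]
% achieved with U = X + \alpha S, \alpha = P/(P+N), and X ~ N(0,P)
% independent of S.
```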
arxiv-674894 | cs/0610022 | Iterative Decoding of Low-Density Parity Check Codes (A Survey) | <|reference_start|>Iterative Decoding of Low-Density Parity Check Codes (A Survey): Much progress has been made on decoding algorithms for error-correcting codes in the last decade. In this article, we give an introduction to some fundamental results on iterative, message-passing algorithms for low-density parity check codes. For certain important stochastic channels, this line of work has made it possible to get very close to Shannon capacity with algorithms that are extremely efficient (both in theory and practice).<|reference_end|> | arxiv | @article{guruswami2006iterative,
title={Iterative Decoding of Low-Density Parity Check Codes (A Survey)},
author={Venkatesan Guruswami},
journal={Bulletin of the EATCS, Issue 90, October 2006},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610022},
primaryClass={cs.IT cs.CC math.IT}
} | guruswami2006iterative |
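As a pointer to the kind of algorithm the survey introduces, the standard sum-product (belief propagation) updates for LDPC decoding, in log-likelihood-ratio form, are:

```latex
% L_v: channel LLR of variable node v; N(v), N(c): neighborhoods in the
% Tanner graph; messages are passed iteratively along edges.
\[
  m_{v \to c} = L_v + \sum_{c' \in N(v) \setminus \{c\}} m_{c' \to v},
  \qquad
  m_{c \to v} = 2 \tanh^{-1}\!\Bigl( \prod_{v' \in N(c) \setminus \{v\}}
      \tanh\bigl( m_{v' \to c} / 2 \bigr) \Bigr).
\]
% After each round, v is decoded by the sign of L_v plus the sum of all
% incoming check messages.
```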
arxiv-674895 | cs/0610023 | Une exp\'erience de s\'emantique inf\'erentielle | <|reference_start|>Une exp\'erience de s\'emantique inf\'erentielle: We develop a system intended to perform the same inferences that a human reader of an accident report makes, and more particularly to determine the apparent causes of the accident. We describe the general framework of this work, the linguistic and semantic levels of the analysis, and the inference rules used by the system.<|reference_end|> | arxiv | @article{nouioua2006une,
title={Une exp\'{e}rience de s\'{e}mantique inf\'{e}rentielle},
author={Farid Nouioua (LIPN), Daniel Kayser (LIPN)},
journal={Actes de TALN'06, UCL Presses Universitaires de Louvain (Ed.)
(2006) 246-255},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610023},
primaryClass={cs.AI}
} | nouioua2006une |
arxiv-674896 | cs/0610024 | Network calculus based FDI approach for switched Ethernet architecture | <|reference_start|>Network calculus based FDI approach for switched Ethernet architecture: Networked Control Systems (NCS) are complex systems which integrate information provided by several domains such as automatic control, computer science and communication networks. The work presented in this paper concerns fault detection, isolation and compensation in the communication network. The proposed method is based on the classical approach of Fault Detection and Isolation and Fault Tolerant Control (FDI/FTC) currently used in diagnosis. The modelling of the network to be supervised is based on both coloured Petri nets and network calculus theory, which are often used to represent and analyse network behaviour. The goal is to implement, inside network devices, algorithms that detect, isolate and compensate communication faults in an autonomous way.<|reference_end|> | arxiv | @article{brahimi2006network,
title={Network calculus based FDI approach for switched Ethernet architecture},
author={Belynda Brahimi (CRAN), Christophe Aubrun (CRAN), Eric Rondeau (CRAN)},
journal={6th IFAC Symposium on Fault Detection, Supervision and Safety of
Technical Processes, China (29/08/2006) 6 pages},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610024},
primaryClass={cs.NI}
} | brahimi2006network |
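The network calculus machinery the abstract invokes rests on two classical bounds, stated here as background (the paper's FDI use of them is its own contribution):

```latex
% For a flow constrained by an arrival curve \alpha traversing a node that
% offers a service curve \beta, backlog B and delay D satisfy
\[
  B \;\le\; \sup_{t \ge 0} \bigl[ \alpha(t) - \beta(t) \bigr],
  \qquad
  D \;\le\; h(\alpha, \beta)
    \;=\; \sup_{t \ge 0} \inf \{\, d \ge 0 : \alpha(t) \le \beta(t + d) \,\},
\]
% i.e. the maximal vertical and horizontal deviations between the curves.
% Measured traffic that leaves these envelopes is a natural fault indicator.
```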
arxiv-674897 | cs/0610025 | Low Correlation Sequences over the QAM Constellation | <|reference_start|>Low Correlation Sequences over the QAM Constellation: This paper presents the first concerted look at low correlation sequence families over QAM constellations of size M^2=4^m and their potential applicability as spreading sequences in a CDMA setting. Five constructions are presented, and it is shown how such sequence families have the ability to transport a larger amount of data as well as enable variable-rate signalling on the reverse link. Canonical family CQ has period N, normalized maximum-correlation parameter theta_max bounded above by A sqrt(N), where 'A' ranges from 1.8 in the 16-QAM case to 3.0 for large M. In a CDMA setting, each user is enabled to transfer 2m bits of data per period of the spreading sequence, which can be increased to 3m bits of data by halving the size of the sequence family. The technique used to construct CQ is easily extended to produce larger sequence families and an example is provided. Selected family SQ has a lower value of theta_max but permits only (m+1)-bit data modulation. The interleaved 16-QAM sequence family IQ has theta_max <= sqrt(2) sqrt(N) and supports 3-bit data modulation. The remaining two families are over a quadrature-PAM (Q-PAM) subset of size 2M of the M^2-QAM constellation. Family P has a lower value of theta_max in comparison with Family SQ, while still permitting (m+1)-bit data modulation. Interleaved family IP, over the 8-ary Q-PAM constellation, permits 3-bit data modulation and, interestingly, achieves the Welch lower bound on theta_max.<|reference_end|> | arxiv | @article{anand2006low,
title={Low Correlation Sequences over the QAM Constellation},
author={M. Anand, P. Vijay Kumar},
journal={arXiv preprint arXiv:cs/0610025},
year={2006},
doi={10.1109/TIT.2007.913512},
archivePrefix={arXiv},
eprint={cs/0610025},
primaryClass={cs.IT math.IT}
} | anand2006low |
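For reference, the Welch lower bound mentioned at the end of the abstract takes the following standard form for sequence families, stated from the classical literature with the usual normalization (the paper's exact normalization may differ):

```latex
% For a family of K sequences of period N, the maximum magnitude of
% nontrivial (auto- and cross-) correlations obeys
\[
  \theta_{\max} \;\ge\; N \sqrt{\frac{K - 1}{NK - 1}}
  \;\xrightarrow[K \to \infty]{}\; \sqrt{N},
\]
% so families with \theta_{\max} = O(\sqrt{N}), like those above, are
% order-optimal, and family IP meets the bound itself.
```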
arxiv-674898 | cs/0610026 | Covering selfish machines | <|reference_start|>Covering selfish machines: We consider the machine covering problem for selfish related machines. For a constant number of machines, m, we show a monotone polynomial time approximation scheme (PTAS) with running time that is linear in the number of jobs. It uses a new technique for reducing the number of jobs while remaining close to the optimal solution. We also present an FPTAS for the classical machine covering problem (the previous best result was a PTAS) and use this to give a monotone FPTAS. Additionally, we give a monotone approximation algorithm with approximation ratio \min(m,(2+\eps)s_1/s_m) where \eps>0 can be chosen arbitrarily small and s_i is the (real) speed of machine i. Finally we give improved results for two machines. Our paper presents the first results for this problem in the context of selfish machines.<|reference_end|> | arxiv | @article{epstein2006covering,
title={Covering selfish machines},
author={Leah Epstein and Rob van Stee},
journal={arXiv preprint arXiv:cs/0610026},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610026},
primaryClass={cs.GT}
} | epstein2006covering |
arxiv-674899 | cs/0610027 | LTL with the Freeze Quantifier and Register Automata | <|reference_start|>LTL with the Freeze Quantifier and Register Automata: A data word is a sequence of pairs of a letter from a finite alphabet and an element from an infinite set, where the latter can only be compared for equality. To reason about data words, linear temporal logic is extended by the freeze quantifier, which stores the element at the current word position into a register, for equality comparisons deeper in the formula. By translations from the logic to alternating automata with registers and then to faulty counter automata whose counters may erroneously increase at any time, and from faulty and error-free counter automata to the logic, we obtain a complete complexity table for logical fragments defined by varying the set of temporal operators and the number of registers. In particular, the logic with future-time operators and 1 register is decidable but not primitive recursive over finite data words. Adding past-time operators or 1 more register, or switching to infinite data words, cause undecidability.<|reference_end|> | arxiv | @article{demri2006ltl,
title={LTL with the Freeze Quantifier and Register Automata},
author={Stephane Demri and Ranko Lazic},
journal={arXiv preprint arXiv:cs/0610027},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610027},
primaryClass={cs.LO cs.CC}
} | demri2006ltl |
arxiv-674900 | cs/0610028 | Memory and compiler optimizations for low-power and -energy | <|reference_start|>Memory and compiler optimizations for low-power and -energy: Embedded systems, especially autonomous ones, are becoming more and more widespread and clearly tend to be ubiquitous. In such systems, low-power and low-energy usage become ever more crucial. Furthermore, these issues also become paramount in (massively) multi-processor systems, either in one machine or more widely in a grid. The various problems faced pertain to autonomy, power supply possibilities, thermal dissipation, or even sheer energy cost. Although it has long been studied in hardware, energy optimization is more recent in software. In this paper, we thus aim at raising awareness of low-power and low-energy issues in the language and compilation community. We broadly but briefly survey techniques and solutions to this energy issue, focusing on a few specific aspects in the context of compiler optimizations and memory management.<|reference_end|> | arxiv | @article{zendra2006memory,
title={Memory and compiler optimizations for low-power and -energy},
author={Olivier Zendra (INRIA Lorraine - LORIA)},
journal={In 1st ECOOP Workshop on Implementation, Compilation,
Optimization of Object-Oriented Languages, Programs and Systems
(ICOOOLPS'2006) (2006)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0610028},
primaryClass={cs.PL cs.PF}
} | zendra2006memory |
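As background for why compilers can influence power at all, the standard first-order model of dynamic power in CMOS circuits (textbook material, not from the paper) is:

```latex
\[
  P_{\mathrm{dyn}} = \alpha\, C\, V_{dd}^{2}\, f,
  \qquad
  E = \int_{0}^{T_{\mathrm{exec}}} P(t)\, dt,
\]
% \alpha: switching activity, C: switched capacitance, V_dd: supply
% voltage, f: clock frequency. Lowering V_dd (and f with it) cuts power
% quadratically but stretches T_exec, which is why energy, not power,
% is the quantity compilers must optimize.
```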