Dataset fields: corpus_id, paper_id, title, abstract, source, bibtex, citation_key
arxiv-674901
cs/0610029
Data in the ADS -- Understanding How to Use it Better
<|reference_start|>Data in the ADS -- Understanding How to Use it Better: The Smithsonian/NASA ADS Abstract Service contains a wealth of data for astronomers and librarians alike, yet the vast majority of usage consists of rudimentary searches. Hints on how to obtain more focused search results by using more of the various capabilities of the ADS are presented, including searching by affiliation. We also discuss the classification of articles by content and by referee status. The ADS is funded by NASA Grant NNG06GG68G-16613687.<|reference_end|>
arxiv
@article{grant2006data, title={Data in the ADS -- Understanding How to Use it Better}, author={Carolyn S. Grant, Alberto Accomazzi, Donna Thompson, Edwin Henneken, Guenther Eichhorn, Michael J. Kurtz, and Stephen S. Murray}, journal={arXiv preprint arXiv:cs/0610029}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610029}, primaryClass={cs.DL cs.DB} }
grant2006data
arxiv-674902
cs/0610030
Paper to Screen: Processing Historical Scans in the ADS
<|reference_start|>Paper to Screen: Processing Historical Scans in the ADS: The NASA Astrophysics Data System in conjunction with the Wolbach Library at the Harvard-Smithsonian Center for Astrophysics is working on a project to microfilm historical observatory publications. The microfilm is then scanned for inclusion in the ADS. The ADS currently contains over 700,000 scanned pages of volumes of historical literature. Many of these volumes lack clear pagination or other bibliographic data that are necessary to take advantage of the searching capabilities of the ADS. This paper will address some of the interesting challenges that needed to be resolved during the processing of the Observatory Reports included in the ADS.<|reference_end|>
arxiv
@article{thompson2006paper, title={Paper to Screen: Processing Historical Scans in the ADS}, author={Donna M. Thompson, Alberto Accomazzi, Guenther Eichhorn, Carolyn Grant, Edwin Henneken, Michael J. Kurtz, Elizabeth Bohlen, Stephen S. Murray}, journal={arXiv preprint arXiv:cs/0610030}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610030}, primaryClass={cs.DL cs.HC} }
thompson2006paper
arxiv-674903
cs/0610031
Pathways: Augmenting interoperability across scholarly repositories
<|reference_start|>Pathways: Augmenting interoperability across scholarly repositories: In the emerging eScience environment, repositories of papers, datasets, software, etc., should be the foundation of a global and natively-digital scholarly communications system. The current infrastructure falls far short of this goal. Cross-repository interoperability must be augmented to support the many workflows and value-chains involved in scholarly communication. This will not be achieved through the promotion of single repository architecture or content representation, but instead requires an interoperability framework to connect the many heterogeneous systems that will exist. We present a simple data model and service architecture that augments repository interoperability to enable scholarly value-chains to be implemented. We describe an experiment that demonstrates how the proposed infrastructure can be deployed to implement the workflow involved in the creation of an overlay journal over several different repository systems (Fedora, aDORe, DSpace and arXiv).<|reference_end|>
arxiv
@article{warner2006pathways:, title={Pathways: Augmenting interoperability across scholarly repositories}, author={Simeon Warner, Jeroen Bekaert, Carl Lagoze, Xiaoming Liu, Sandy Payette, Herbert Van de Sompel}, journal={arXiv preprint arXiv:cs/0610031}, year={2006}, doi={10.1007/s00799-007-0016-7}, archivePrefix={arXiv}, eprint={cs/0610031}, primaryClass={cs.DL} }
warner2006pathways:
arxiv-674904
cs/0610032
Pipelined Feed-Forward Cyclic Redundancy Check (CRC) Calculation
<|reference_start|>Pipelined Feed-Forward Cyclic Redundancy Check (CRC) Calculation: This paper discusses a method for pipelining the calculation of CRCs, such as ITU/CCITT CRC32, into a mostly feed-forward architecture. This method allows several benefits, such as independent scaling of circuit frequency and data throughput. Additionally, it allows calculation over packet tails (packet length not a multiple of the CRC input width). Finally, it offers the ability to update a CRC where a subset of the data in the packet has changed.<|reference_end|>
arxiv
@article{walma2006pipelined, title={Pipelined Feed-Forward Cyclic Redundancy Check (CRC) Calculation}, author={Mathys Walma}, journal={arXiv preprint arXiv:cs/0610032}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610032}, primaryClass={cs.NI} }
walma2006pipelined
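The baseline computation that the paper restructures is the ordinary serial CRC: one bit of the shift-register recurrence per step. The hardware pipelining itself is not attempted here; this is only a minimal software sketch of that serial recurrence, using the common reflected CRC-32 polynomial 0xEDB88320 as an illustrative choice.

```python
def crc32_bitwise(data: bytes) -> int:
    """Plain serial CRC-32 (reflected form, polynomial 0xEDB88320).

    This is the bit-at-a-time recurrence that the paper's feed-forward
    architecture pipelines in hardware; shown here only as a reference
    for what is being computed.
    """
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # shift right; conditionally XOR in the polynomial when the LSB is set
            crc = (crc >> 1) ^ (0xEDB88320 & -(crc & 1))
    return crc ^ 0xFFFFFFFF
```

The standard check value for the input `"123456789"` is 0xCBF43926, which this sketch reproduces.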
arxiv-674905
cs/0610033
A kernel for time series based on global alignments
<|reference_start|>A kernel for time series based on global alignments: We propose in this paper a new family of kernels to handle time series, notably speech data, within the framework of kernel methods which includes popular algorithms such as the Support Vector Machine. These kernels elaborate on the well known Dynamic Time Warping (DTW) family of distances by considering the same set of elementary operations, namely substitutions and repetitions of tokens, to map a sequence onto another. Associating to each of these operations a given score, DTW algorithms use dynamic programming techniques to compute an optimal sequence of operations with high overall score. In this paper we consider instead the score spanned by all possible alignments, take a smoothed version of their maximum and derive a kernel out of this formulation. We prove that this kernel is positive definite under favorable conditions and show how it can be tuned effectively for practical applications as we report encouraging results on a speech recognition task.<|reference_end|>
arxiv
@article{cuturi2006a, title={A kernel for time series based on global alignments}, author={Marco Cuturi, Jean-Philippe Vert, Oystein Birkenes, Tomoko Matsui}, journal={arXiv preprint arXiv:cs/0610033}, year={2006}, doi={10.1109/ICASSP.2007.366260}, archivePrefix={arXiv}, eprint={cs/0610033}, primaryClass={cs.CV cs.LG} }
cuturi2006a
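The idea of summing a local similarity over all DTW-style alignments, rather than keeping only the best one, admits a compact dynamic program. A minimal sketch, with a Gaussian local kernel as an illustrative choice; the paper's tuned variants differ in detail.

```python
import math

def ga_kernel(x, y, sigma=1.0):
    """Global-alignment kernel sketch: accumulate a local similarity over
    every monotone alignment of the two sequences (diagonal, vertical, and
    horizontal moves), instead of the single max-score path taken by DTW.

    The Gaussian local kernel and sigma=1.0 are illustrative assumptions.
    """
    n, m = len(x), len(y)
    M = [[0.0] * (m + 1) for _ in range(n + 1)]
    M[0][0] = 1.0  # empty alignment
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # local similarity between the two aligned samples
            k = math.exp(-((x[i-1] - y[j-1]) ** 2) / (2 * sigma ** 2))
            # sum over the three ways an alignment can reach cell (i, j)
            M[i][j] = k * (M[i-1][j-1] + M[i-1][j] + M[i][j-1])
    return M[n][m]
```

As expected for a kernel, the value is symmetric in its two arguments, since swapping the sequences just transposes the table.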
arxiv-674906
cs/0610034
Positional Determinacy of Games with Infinitely Many Priorities
<|reference_start|>Positional Determinacy of Games with Infinitely Many Priorities: We study two-player games of infinite duration that are played on finite or infinite game graphs. A winning strategy for such a game is positional if it only depends on the current position, and not on the history of the play. A game is positionally determined if, from each position, one of the two players has a positional winning strategy. The theory of such games is well studied for winning conditions that are defined in terms of a mapping that assigns to each position a priority from a finite set. Specifically, in Muller games the winner of a play is determined by the set of those priorities that have been seen infinitely often; an important special case are parity games where the least (or greatest) priority occurring infinitely often determines the winner. It is well-known that parity games are positionally determined whereas Muller games are determined via finite-memory strategies. In this paper, we extend this theory to the case of games with infinitely many priorities. Such games arise in several application areas, for instance in pushdown games with winning conditions depending on stack contents. For parity games there are several generalisations to the case of infinitely many priorities. While max-parity games over omega or min-parity games over larger ordinals than omega require strategies with infinite memory, we can prove that min-parity games with priorities in omega are positionally determined. Indeed, it turns out that the min-parity condition over omega is the only infinitary Muller condition that guarantees positional determinacy on all game graphs.<|reference_end|>
arxiv
@article{graedel2006postinal, title={Positional Determinacy of Games with Infinitely Many Priorities}, author={Erich Graedel and Igor Walukiewicz}, journal={arXiv preprint arXiv:cs/0610034}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610034}, primaryClass={cs.LO cs.GT} }
graedel2006postinal
arxiv-674907
cs/0610035
Positional Determinacy of Games with Infinitely Many Priorities
<|reference_start|>Positional Determinacy of Games with Infinitely Many Priorities: We study two-player games of infinite duration that are played on finite or infinite game graphs. A winning strategy for such a game is positional if it only depends on the current position, and not on the history of the play. A game is positionally determined if, from each position, one of the two players has a positional winning strategy. The theory of such games is well studied for winning conditions that are defined in terms of a mapping that assigns to each position a priority from a finite set. Specifically, in Muller games the winner of a play is determined by the set of those priorities that have been seen infinitely often; an important special case are parity games where the least (or greatest) priority occurring infinitely often determines the winner. It is well-known that parity games are positionally determined whereas Muller games are determined via finite-memory strategies. In this paper, we extend this theory to the case of games with infinitely many priorities. Such games arise in several application areas, for instance in pushdown games with winning conditions depending on stack contents. For parity games there are several generalisations to the case of infinitely many priorities. While max-parity games over omega or min-parity games over larger ordinals than omega require strategies with infinite memory, we can prove that min-parity games with priorities in omega are positionally determined. Indeed, it turns out that the min-parity condition over omega is the only infinitary Muller condition that guarantees positional determinacy on all game graphs.<|reference_end|>
arxiv
@article{graedel2006positional, title={Positional Determinacy of Games with Infinitely Many Priorities}, author={Erich Graedel and Igor Walukiewicz}, journal={Logical Methods in Computer Science, Volume 2, Issue 4 (November 3, 2006) lmcs:2242}, year={2006}, doi={10.2168/LMCS-2(4:6)2006}, archivePrefix={arXiv}, eprint={cs/0610035}, primaryClass={cs.LO cs.GT} }
graedel2006positional
arxiv-674908
cs/0610036
Optimization of Memory Usage in Tardos's Fingerprinting Codes
<|reference_start|>Optimization of Memory Usage in Tardos's Fingerprinting Codes: It is known that Tardos's collusion-secure probabilistic fingerprinting code (Tardos code; STOC'03) has length of theoretically minimal order with respect to the number of colluding users. However, the Tardos code uses a certain continuous probability distribution in codeword generation, which creates some problems for practical use; in particular, it requires a large amount of extra memory. A solution proposed so far is to use some finite probability distributions instead. In this paper, we determine the optimal finite distribution in order to decrease the amount of extra memory. By our result, the extra memory is reduced to 1/32 of the original, or even becomes unnecessary, in some practical settings. Moreover, the code length is also reduced, e.g. to about 20.6% of the Tardos code asymptotically. Finally, we address some other practical issues such as approximation errors, which are inevitable in any real implementation.<|reference_end|>
arxiv
@article{nuida2006optimization, title={Optimization of Memory Usage in Tardos's Fingerprinting Codes}, author={Koji Nuida, Manabu Hagiwara, Hajime Watanabe, and Hideki Imai}, journal={arXiv preprint arXiv:cs/0610036}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610036}, primaryClass={cs.CR cs.NA} }
nuida2006optimization
arxiv-674909
cs/0610037
The Capacity Region of a Class of Discrete Degraded Interference Channels
<|reference_start|>The Capacity Region of a Class of Discrete Degraded Interference Channels: We provide a single-letter characterization for the capacity region of a class of discrete degraded interference channels (DDICs). The class of DDICs considered includes the discrete additive degraded interference channel (DADIC) studied by Benzel. We show that for the class of DDICs studied, encoder cooperation does not increase the capacity region, and therefore, the capacity region of the class of DDICs is the same as the capacity region of the corresponding degraded broadcast channel.<|reference_end|>
arxiv
@article{liu2006the, title={The Capacity Region of a Class of Discrete Degraded Interference Channels}, author={Nan Liu and Sennur Ulukus}, journal={arXiv preprint arXiv:cs/0610037}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610037}, primaryClass={cs.IT math.IT} }
liu2006the
arxiv-674910
cs/0610038
Church's thesis is questioned by new calculation paradigm
<|reference_start|>Church's thesis is questioned by new calculation paradigm: Church's thesis claims that all effectively calculable functions are recursive. A shortcoming of the various definitions of recursive functions lies in the fact that it is not a matter of a syntactical check to find out if an entity gives rise to a function. Eight new ideas for a precise setup of arithmetical logic and its metalanguage give the proper environment for the construction of a special computer, the ARBACUS computer. Computers do not come to a necessary halt; it is requested that calculators are constructed on the basis of computers in such a way that they always come to a halt, so that all calculations are effective. The ARBATOR is defined as a calculator with two-layer computation. It allows for the calculation of all primitive recursive functions, but multi-level arbation also allows for the calculation of other arbative functions that are not primitive recursive. The new paradigm of calculation does not have the above mentioned shortcoming. The defenders of Church's thesis are challenged to show that exotic arbative functions are recursive and to put forward a recursive function that is not arbative. A construction with three-tier multi-level arbation that includes a diagonalisation leads to the extravagant yet calculable Snark-function that is not arbative. As long as it is not shown that all exotic arbative functions, and particularly the Snark-function, are arithmetically representable, Goedel's first incompleteness theorem is in limbo.<|reference_end|>
arxiv
@article{hutzelmeyer2006church's, title={Church's thesis is questioned by new calculation paradigm}, author={Hannes Hutzelmeyer}, journal={arXiv preprint arXiv:cs/0610038}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610038}, primaryClass={cs.LO} }
hutzelmeyer2006church's
arxiv-674911
cs/0610039
The Application of Fuzzy Logic to the Construction of the Ranking Function of Information Retrieval Systems
<|reference_start|>The Application of Fuzzy Logic to the Construction of the Ranking Function of Information Retrieval Systems: The quality of the ranking function is an important factor that determines the quality of the Information Retrieval system. Each document is assigned a score by the ranking function; the score indicates the likelihood of relevance of the document given a query. In the vector space model, the ranking function is defined by a mathematical expression. We propose a fuzzy logic (FL) approach to defining the ranking function. FL provides a convenient way of converting knowledge expressed in a natural language into fuzzy logic rules. The resulting ranking function can be easily viewed, extended, and verified: * if (tf is high) and (idf is high) => (relevance is high); * if (overlap is high) => (relevance is high). By using the above FL rules, we are able to achieve performance approximately equal to that of the state-of-the-art search engine Apache Lucene (deltaP10 +0.92%; deltaMAP -0.1%). The fuzzy logic approach allows combining the logic-based model with the vector model. The resulting model possesses the simplicity and formalism of the logic-based model, and the flexibility and performance of the vector model.<|reference_end|>
arxiv
@article{rubens2006the, title={The Application of Fuzzy Logic to the Construction of the Ranking Function of Information Retrieval Systems}, author={Neil Rubens}, journal={N. Rubens. The application of fuzzy logic to the construction of the ranking function of information retrieval systems. Computer Modelling and New Technologies, 10(1):20-27, 2006}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610039}, primaryClass={cs.IR cs.AI} }
rubens2006the
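The two rules quoted in the abstract can be rendered directly with the standard fuzzy choices of min for AND and max for rule aggregation. A sketch under the assumption that tf, idf, and overlap are already normalized to [0, 1]; the membership function is an illustrative placeholder, not the paper's.

```python
def high(v):
    """Degree to which a [0, 1] feature is 'high'.

    A simple clamp-to-[0,1] ramp; the paper's actual membership
    functions are not reproduced here.
    """
    return max(0.0, min(1.0, v))

def fuzzy_relevance(tf, idf, overlap):
    """Score a document with the two rules from the abstract:
    min implements the fuzzy AND, max aggregates the rules."""
    rule1 = min(high(tf), high(idf))  # if (tf is high) and (idf is high)
    rule2 = high(overlap)             # if (overlap is high)
    return max(rule1, rule2)          # => (relevance is high)
```

With this encoding, a document scores well if either rule fires strongly, e.g. a high-overlap match can outrank a mediocre tf-idf match.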
arxiv-674912
cs/0610040
Crosstalk-free Conjugate Networks for Optical Multicast Switching
<|reference_start|>Crosstalk-free Conjugate Networks for Optical Multicast Switching: High-speed photonic switching networks can switch optical signals at the rate of several terabits per second. However, they suffer from an intrinsic crosstalk problem when two optical signals cross at the same switch element. To avoid crosstalk, active connections must be node-disjoint in the switching network. In this paper, we propose a sequence of decomposition and merge operations, called conjugate transformation, performed on each switch element to tackle this problem. The network resulting from this transformation is called conjugate network. By using the numbering-schemes of networks, we prove that if the route assignments in the original network are link-disjoint, their corresponding ones in the conjugate network would be node-disjoint. Thus, traditional nonblocking switching networks can be transformed into crosstalk-free optical switches in a routine manner. Furthermore, we show that crosstalk-free multicast switches can also be obtained from existing nonblocking multicast switches via the same conjugate transformation.<|reference_end|>
arxiv
@article{deng2006crosstalk-free, title={Crosstalk-free Conjugate Networks for Optical Multicast Switching}, author={Yun Deng, Tony T. Lee}, journal={arXiv preprint arXiv:cs/0610040}, year={2006}, doi={10.1109/JLT.2006.882249}, archivePrefix={arXiv}, eprint={cs/0610040}, primaryClass={cs.NI} }
deng2006crosstalk-free
arxiv-674913
cs/0610041
A Computational Model of Spatial Memory Anticipation during Visual Search
<|reference_start|>A Computational Model of Spatial Memory Anticipation during Visual Search: Some visual search tasks require memorizing the location of stimuli that have been previously scanned. Considerations about eye movements raise the question of how we are able to maintain a coherent memory, despite the frequent and drastic changes in perception. In this article, we present a computational model that is able to anticipate the consequences of eye movements on visual perception in order to update a spatial memory.<|reference_end|>
arxiv
@article{fix2006a, title={A Computational Model of Spatial Memory Anticipation during Visual Search}, author={J\'er\'emy Fix (INRIA Lorraine - LORIA), Julien Vitay (INRIA Lorraine - LORIA), Nicolas Rougier (INRIA Lorraine - LORIA)}, journal={In Anticipatory Behavior in Adaptive Learning Systems 2006 (2006)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610041}, primaryClass={cs.NE} }
fix2006a
arxiv-674914
cs/0610042
A Polynomial Time Algorithm for The Traveling Salesman Problem
<|reference_start|>A Polynomial Time Algorithm for The Traveling Salesman Problem: The ATSP polytope can be expressed by an asymmetric polynomial-size linear program.<|reference_end|>
arxiv
@article{gubin2006a, title={A Polynomial Time Algorithm for The Traveling Salesman Problem}, author={Sergey Gubin}, journal={Complementary to Yannakakis' Theorem, 22nd MCCCC, University of Nevada, Las Vegas, 2008, p.8}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610042}, primaryClass={cs.DM cs.CC cs.DS} }
gubin2006a
arxiv-674915
cs/0610043
Farthest-Point Heuristic based Initialization Methods for K-Modes Clustering
<|reference_start|>Farthest-Point Heuristic based Initialization Methods for K-Modes Clustering: The k-modes algorithm has become a popular technique for solving categorical data clustering problems in different application domains. However, the algorithm requires random selection of initial points for the clusters. Different initial points often lead to considerably distinct clustering results. In this paper we present an experimental study on applying a farthest-point heuristic based initialization method to k-modes clustering to improve its performance. Experiments show that the new initialization method leads to better clustering accuracy than the random-selection initialization method for k-modes clustering.<|reference_end|>
arxiv
@article{he2006farthest-point, title={Farthest-Point Heuristic based Initialization Methods for K-Modes Clustering}, author={Zengyou He}, journal={arXiv preprint arXiv:cs/0610043}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610043}, primaryClass={cs.AI} }
he2006farthest-point
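The farthest-point idea is easy to state: seed with one record, then repeatedly add the record whose nearest chosen center is farthest away under the mismatch-count (Hamming) distance used by k-modes. A sketch only; the choice of the first seed and the tie-breaking are assumptions here, and the paper evaluates specific variants of this scheme.

```python
def hamming(a, b):
    """Mismatch count between two categorical records of equal length."""
    return sum(x != y for x, y in zip(a, b))

def farthest_point_init(data, k):
    """Pick k initial modes by the farthest-point heuristic.

    Seeding with data[0] is an illustrative assumption; a practical
    variant might seed with the most frequent record instead.
    """
    centers = [data[0]]
    while len(centers) < k:
        # record whose minimum distance to the chosen centers is largest
        nxt = max(data, key=lambda r: min(hamming(r, c) for c in centers))
        centers.append(nxt)
    return centers
```

Because each new center maximizes the distance to the current set, the initial modes are spread out over the data rather than possibly clumped together, which is the failure mode of random selection that the paper targets.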
arxiv-674916
cs/0610044
M\'ecanismes de Transmission Multipoint pour R\'eseaux Locaux Sans Fil IEEE 802.11
<|reference_start|>M\'ecanismes de Transmission Multipoint pour R\'eseaux Locaux Sans Fil IEEE 802.11: The IEEE 802.11 standard is inefficient for multimedia multicast transmission. In particular, multicast packets are sent open-loop in the same way as broadcast packets. The absence of acknowledgements makes it impossible to implement congestion control mechanisms, mechanisms for reliable transmission, or algorithms that adapt the physical transmission rate. In this report, we propose new multicast transmission mechanisms based on a leader approach for returning acknowledgements. We focus on practical solutions that can be implemented in current and future wireless network cards and that remain compatible with standard IEEE 802.11 stations. We propose two mechanisms for adapting the physical transmission rate of multicast flows: a simplified mechanism called LB-ARF and a more robust mechanism called RRAM. Our simulations show that in static environments, a mechanism as simple as LB-ARF is sufficient to obtain good performance. The RRAM mechanism, for its part, is as effective in static environments as it is when stations are mobile.<|reference_end|>
arxiv
@article{turletti2006m\'{e}canismes, title={M\'{e}canismes de Transmission Multipoint pour R\'{e}seaux Locaux Sans Fil IEEE 802.11}, author={Thierry Turletti (INRIA Sophia Antipolis / INRIA Rh\^one-Alpes), Yongho Seok (INRIA Sophia Antipolis / INRIA Rh\^one-Alpes)}, journal={arXiv preprint arXiv:cs/0610044}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610044}, primaryClass={cs.NI} }
turletti2006m\'{e}canismes
arxiv-674917
cs/0610045
Spectra of large block matrices
<|reference_start|>Spectra of large block matrices: In a frequency selective slow-fading channel in a MIMO system, the channel matrix is of the form of a block matrix. This paper proposes a method to calculate the limit of the eigenvalue distribution of block matrices if the size of the blocks tends to infinity. While it considers random matrices, it takes an operator-valued free probability approach to achieve this goal. Using this method, one derives a system of equations, which can be solved numerically to compute the desired eigenvalue distribution. The paper initially tackles the problem for square block matrices, then extends the solution to rectangular block matrices. Finally, it deals with Wishart type block matrices. For two special cases, the results of our approach are compared with results from simulations. The first scenario investigates the limit eigenvalue distribution of block Toeplitz matrices. The second scenario deals with the distribution of Wishart type block matrices for a frequency selective slow-fading channel in a MIMO system for two different cases of $n_R=n_T$ and $n_R=2n_T$. Using this method, one may calculate the capacity and the Signal-to-Interference-and-Noise Ratio in large MIMO systems.<|reference_end|>
arxiv
@article{far2006spectra, title={Spectra of large block matrices}, author={Reza Rashidi Far, Tamer Oraby, Wlodzimierz Bryc and Roland Speicher}, journal={arXiv preprint arXiv:cs/0610045}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610045}, primaryClass={cs.IT math.IT math.OA} }
far2006spectra
arxiv-674918
cs/0610046
Streaming Maximum-Minimum Filter Using No More than Three Comparisons per Element
<|reference_start|>Streaming Maximum-Minimum Filter Using No More than Three Comparisons per Element: The running maximum-minimum (max-min) filter computes the maxima and minima over running windows of size w. This filter has numerous applications in signal processing and time series analysis. We present an easy-to-implement online algorithm requiring no more than 3 comparisons per element, in the worst case. Comparatively, no algorithm is known to compute the running maximum (or minimum) filter in 1.5 comparisons per element, in the worst case. Our algorithm has reduced latency and memory usage.<|reference_end|>
arxiv
@article{lemire2006streaming, title={Streaming Maximum-Minimum Filter Using No More than Three Comparisons per Element}, author={Daniel Lemire}, journal={Daniel Lemire, Streaming Maximum-Minimum Filter Using No More than Three Comparisons per Element, Nordic Journal of Computing, Volume 13, Number 4, pages 328-339, 2006}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610046}, primaryClass={cs.DS} }
lemire2006streaming
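For reference, a monotonic-deque version of the running max-min filter is easy to write down. It is not the paper's 3-comparison algorithm, but it computes the same windowed maxima and minima and makes the streaming structure concrete.

```python
from collections import deque

def max_min_filter(x, w):
    """Running (max, min) over every full window of size w, via two
    monotonic deques of indices. A textbook sliding-window sketch, not
    the paper's refined 3-comparison variant.
    """
    U = deque()  # candidate maxima: values strictly decreasing
    L = deque()  # candidate minima: values strictly increasing
    out = []
    for i, v in enumerate(x):
        # drop candidates dominated by the new element
        while U and x[U[-1]] <= v:
            U.pop()
        U.append(i)
        while L and x[L[-1]] >= v:
            L.pop()
        L.append(i)
        # expire candidates that slid out of the window
        if U[0] <= i - w:
            U.popleft()
        if L[0] <= i - w:
            L.popleft()
        if i >= w - 1:
            out.append((x[U[0]], x[L[0]]))
    return out
```

Each index enters and leaves each deque at most once, so the total work is linear in the stream length, which is the property the paper sharpens down to a worst-case comparison count.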
arxiv-674919
cs/0610047
Capacity of the Trapdoor Channel with Feedback
<|reference_start|>Capacity of the Trapdoor Channel with Feedback: We establish that the feedback capacity of the trapdoor channel is the logarithm of the golden ratio and provide a simple communication scheme that achieves capacity. As part of the analysis, we formulate a class of dynamic programs that characterize capacities of unifilar finite-state channels. The trapdoor channel is an instance that admits a simple analytic solution.<|reference_end|>
arxiv
@article{permuter2006capacity, title={Capacity of the Trapdoor Channel with Feedback}, author={Haim Permuter, Paul Cuff, Benjamin Van Roy and Tsachy Weissman}, journal={arXiv preprint arXiv:cs/0610047}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610047}, primaryClass={cs.IT math.IT} }
permuter2006capacity
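The headline result admits a one-line statement: the feedback capacity of the trapdoor channel is the logarithm of the golden ratio,

```latex
C_{\mathrm{FB}} \;=\; \log_2 \varphi \;=\; \log_2\!\left(\frac{1+\sqrt{5}}{2}\right) \;\approx\; 0.6942 \ \text{bits per channel use.}
```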
arxiv-674920
cs/0610048
MV3: A new word based stream cipher using rapid mixing and revolving buffers
<|reference_start|>MV3: A new word based stream cipher using rapid mixing and revolving buffers: MV3 is a new word based stream cipher for encrypting long streams of data. A direct adaptation of a byte based cipher such as RC4 into a 32- or 64-bit word version will obviously need vast amounts of memory. This scaling issue necessitates a look for new components and principles, as well as mathematical analysis to justify their use. Our approach, like RC4's, is based on rapidly mixing random walks on directed graphs (that is, walks which reach a random state quickly, from any starting point). We begin with some well understood walks, and then introduce nonlinearity in their steps in order to improve security and show long term statistical correlations are negligible. To minimize the short term correlations, as well as to deter attacks using equations involving successive outputs, we provide a method for sequencing the outputs derived from the walk using three revolving buffers. The cipher is fast -- it runs at a speed of less than 5 cycles per byte on a Pentium IV processor. A word based cipher needs to output more bits per step, which exposes more correlations for attacks. Moreover we seek simplicity of construction and transparent analysis. To meet these requirements, we use a larger state and claim security corresponding to only a fraction of it. Our design is for an adequately secure word-based cipher; our very preliminary estimate puts the security close to exhaustive search for keys of size < 256 bits.<|reference_end|>
arxiv
@article{keller2006mv3:, title={MV3: A new word based stream cipher using rapid mixing and revolving buffers}, author={Nathan Keller, Stephen D. Miller, Ilya Mironov, and Ramarathnam Venkatesan}, journal={arXiv preprint arXiv:cs/0610048}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610048}, primaryClass={cs.CR cs.DM math.CO} }
keller2006mv3:
arxiv-674921
cs/0610049
Restricted Complexity, General Complexity
<|reference_start|>Restricted Complexity, General Complexity: Why has the problematic of complexity appeared so late? And why would it be justified?<|reference_end|>
arxiv
@article{morin2006restricted, title={Restricted Complexity, General Complexity}, author={Edgar Morin}, journal={arXiv preprint arXiv:cs/0610049}, year={2006}, doi={10.1142/9789812707420_0002}, archivePrefix={arXiv}, eprint={cs/0610049}, primaryClass={cs.CC nlin.AO} }
morin2006restricted
arxiv-674922
cs/0610050
The Mathematical Parallels Between Packet Switching and Information Transmission
<|reference_start|>The Mathematical Parallels Between Packet Switching and Information Transmission: All communication networks comprise transmission systems and switching systems, even though the two are usually treated as separate issues. Communication channels are generally disturbed by noise from various sources. In circuit switched networks, reliable communication requires the error-tolerant transmission of bits over noisy channels. In packet switched networks, however, not only can bits be corrupted with noise, but resources along connection paths are also subject to contention. Thus, quality of service (QoS) is determined by buffer delays and packet losses. The theme of this paper is to show that transmission noise and packet contention actually have similar characteristics and can be tamed by comparable means to achieve reliable communication, and a number of analogies between switching and transmission are identified. The sampling theorem of bandlimited signals provides the cornerstone of digital communication and signal processing. Recently, the Birkhoff-von Neumann decomposition of traffic matrices has been widely applied to packet switches. With respect to the complexity reduction of packet switching, we show that the decomposition of a doubly stochastic traffic matrix plays a similar role to that of the sampling theorem in digital transmission. We conclude that packet switching systems are governed by mathematical laws that are similar to those of digital transmission systems as envisioned by Shannon in his seminal 1948 paper, A Mathematical Theory of Communication.<|reference_end|>
arxiv
@article{lee2006the, title={The Mathematical Parallels Between Packet Switching and Information Transmission}, author={Tony T. Lee}, journal={arXiv preprint arXiv:cs/0610050}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610050}, primaryClass={cs.IT cs.NI math.IT} }
lee2006the
arxiv-674923
cs/0610051
Strong bi-homogeneous B\'ezout theorem and its use in effective real algebraic geometry
<|reference_start|>Strong bi-homogeneous B\'ezout theorem and its use in effective real algebraic geometry: Let f1, ..., fs be a polynomial family in Q[X1,..., Xn] (with s less than n) of degree bounded by D. Suppose that f1, ..., fs generates a radical ideal, and defines a smooth algebraic variety V. Consider a projection P. We prove that the degree of the critical locus of P restricted to V is bounded by D^s(D-1)^(n-s) times binomial of n and n-s. This result is obtained in two steps. First the critical points of P restricted to V are characterized as projections of the solutions of Lagrange's system for which a bi-homogeneous structure is exhibited. Secondly we prove a bi-homogeneous B\'ezout Theorem, which bounds the sum of the degrees of the equidimensional components of the radical of an ideal generated by a bi-homogeneous polynomial family. This result is improved when f1,..., fs is a regular sequence. Moreover, we use Lagrange's system to design an algorithm computing at least one point in each connected component of a smooth real algebraic set. This algorithm generalizes, to the non equidimensional case, the one of Safey El Din and Schost. The evaluation of the output size of this algorithm gives new upper bounds on the first Betti number of a smooth real algebraic set. Finally, we estimate its arithmetic complexity and prove that in the worst cases it is polynomial in n, s, D^s(D-1)^(n-s) and the binomial of n and n-s, and the complexity of evaluation of f1,..., fs.<|reference_end|>
arxiv
@article{din2006strong, title={Strong bi-homogeneous B\'{e}zout theorem and its use in effective real algebraic geometry}, author={Mohab Safey El Din (INRIA Rocquencourt), Philippe Trebuchet (INRIA Rocquencourt)}, journal={arXiv preprint arXiv:cs/0610051}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610051}, primaryClass={cs.SC} }
din2006strong
arxiv-674924
cs/0610052
Finite-Dimensional Bounds on Zm and Binary LDPC Codes with Belief Propagation Decoders
<|reference_start|>Finite-Dimensional Bounds on Zm and Binary LDPC Codes with Belief Propagation Decoders: This paper focuses on finite-dimensional upper and lower bounds on decodable thresholds of Zm and binary low-density parity-check (LDPC) codes, assuming belief propagation decoding on memoryless channels. A concrete framework is presented, admitting systematic searches for new bounds. Two noise measures are considered: the Bhattacharyya noise parameter and the soft bit value for a maximum a posteriori probability (MAP) decoder on the uncoded channel. For Zm LDPC codes, an iterative m-dimensional bound is derived for m-ary-input/symmetric-output channels, which gives a sufficient stability condition for Zm LDPC codes and is complemented by a matched necessary stability condition introduced herein. Applications to coded modulation and to codes with non-equiprobably distributed codewords are also discussed. For binary codes, two new lower bounds are provided for symmetric channels, including a two-dimensional iterative bound and a one-dimensional non-iterative bound, the latter of which is the best known bound that is tight for binary symmetric channels (BSCs), and is a strict improvement over the bound derived by the channel degradation argument. By adopting the reverse channel perspective, upper and lower bounds on the decodable Bhattacharyya noise parameter are derived for non-symmetric channels, which coincide with the existing bound for symmetric channels.<|reference_end|>
arxiv
@article{wang2006finite-dimensional, title={Finite-Dimensional Bounds on Zm and Binary LDPC Codes with Belief Propagation Decoders}, author={Chih-Chun Wang (1), Sanjeev R. Kulkarni (2), H. Vincent Poor (2) ((1) Purdue University, (2) Princeton University)}, journal={arXiv preprint arXiv:cs/0610052}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610052}, primaryClass={cs.IT math.IT} }
wang2006finite-dimensional
arxiv-674925
cs/0610053
Towards a Bayesian framework for option pricing
<|reference_start|>Towards a Bayesian framework for option pricing: In this paper, we describe a general method for constructing the posterior distribution of an option price. Our framework takes as inputs the prior distributions of the parameters of the stochastic process followed by the underlying, as well as the likelihood function implied by the observed price history for the underlying. Our work extends that of Karolyi (1993) and Darsinos and Satchell (2001), but with the crucial difference that the likelihood function we use for inference is that which is directly implied by the underlying, rather than imposed in an ad hoc manner via the introduction of a function representing "measurement error." As such, an important problem still relevant for our method is that of model risk, and we address this issue by describing how to perform a Bayesian averaging of parameter inferences based on the different models considered using our framework.<|reference_end|>
arxiv
@article{gzyl2006towards, title={Towards a Bayesian framework for option pricing}, author={Henryk Gzyl, Enrique ter Horst, Samuel Malone}, journal={arXiv preprint arXiv:cs/0610053}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610053}, primaryClass={cs.CE q-fin.PR} }
gzyl2006towards
arxiv-674926
cs/0610054
Enumeration Problems Related to Ground Horn Theories
<|reference_start|>Enumeration Problems Related to Ground Horn Theories: We investigate the enumeration of varieties of boolean theories related to Horn clauses. We describe a number of combinatorial equivalences among different characterizations and calculate the number of different theories in $n$ variables for slightly different characterizations. The method of counting is via counting models using a satisfiability checker.<|reference_end|>
arxiv
@article{dershowitz2006enumeration, title={Enumeration Problems Related to Ground Horn Theories}, author={Nachum Dershowitz, Mitchell A. Harris, and Guan-Shieng Huang}, journal={arXiv preprint arXiv:cs/0610054}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610054}, primaryClass={cs.LO cs.DM} }
dershowitz2006enumeration
arxiv-674927
cs/0610055
Extending the Calculus of Constructions with Tarski's fix-point theorem
<|reference_start|>Extending the Calculus of Constructions with Tarski's fix-point theorem: We propose to use Tarski's least fixpoint theorem as a basis to define recursive functions in the calculus of inductive constructions. This widens the class of functions that can be modeled in type-theory-based theorem proving tools to include potentially non-terminating functions. This is only possible if we extend the logical framework by adding the axioms that correspond to classical logic. We claim that the extended framework makes it possible to reason about terminating and non-terminating computations, and we show that common facilities of the calculus of inductive constructions, like program extraction, can be extended to also handle the new functions.<|reference_end|>
arxiv
@article{bertot2006extending, title={Extending the Calculus of Constructions with Tarski's fix-point theorem}, author={Yves Bertot (INRIA Sophia Antipolis)}, journal={arXiv preprint arXiv:cs/0610055}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610055}, primaryClass={cs.LO} }
bertot2006extending
arxiv-674928
cs/0610056
Constructing experimental indicators for Open Access documents
<|reference_start|>Constructing experimental indicators for Open Access documents: The ongoing paradigm change in the scholarly publication system ('science is turning to e-science') makes it necessary to construct alternative evaluation criteria/metrics which appropriately take into account the unique characteristics of electronic publications and other research output in digital formats. Today, major parts of scholarly Open Access (OA) publications and the self-archiving area are not well covered in the traditional citation and indexing databases. The growing share and importance of freely accessible research output demands new approaches/metrics for measuring and evaluating these new types of scientific publications. In this paper we propose a simple quantitative method which establishes indicators by measuring the access/download pattern of OA documents and other web entities of a single web server. The experimental indicators (search engine, backlink and direct access indicator) are constructed based on standard local web usage data. This new type of web-based indicator is developed to model the specific demand for better study/evaluation of the accessibility, visibility and interlinking of openly accessible documents. We conclude that e-science will need new stable e-indicators.<|reference_end|>
arxiv
@article{mayr2006constructing, title={Constructing experimental indicators for Open Access documents}, author={Philipp Mayr}, journal={arXiv preprint arXiv:cs/0610056}, year={2006}, doi={10.3152/147154406781775940}, archivePrefix={arXiv}, eprint={cs/0610056}, primaryClass={cs.DL} }
mayr2006constructing
arxiv-674929
cs/0610057
Properties of codes in rank metric
<|reference_start|>Properties of codes in rank metric: We study properties of rank metric and codes in rank metric over finite fields. We show that in rank metric perfect codes do not exist. We derive an existence bound that is the equivalent of the Gilbert--Varshamov bound in Hamming metric. We study the asymptotic behavior of the minimum rank distance of codes satisfying the GV bound. We derive the probability distribution of minimum rank distance for random and random $\F{q}$-linear codes. We give an asymptotic equivalent of their average minimum rank distance and show that random $\F{q}$-linear codes lie on the GV bound for the rank metric. We show that the covering density of optimum codes whose codewords can be seen as square matrices is lower bounded by a function depending only on the error-correcting capability of the codes. We show that there are quasi-perfect codes in rank metric over fields of characteristic 2.<|reference_end|>
arxiv
@article{loidreau2006properties, title={Properties of codes in rank metric}, author={P. Loidreau}, journal={arXiv preprint arXiv:cs/0610057}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610057}, primaryClass={cs.DM cs.IT math.IT} }
loidreau2006properties
arxiv-674930
cs/0610058
Context-sensitive access to e-document corpus
<|reference_start|>Context-sensitive access to e-document corpus: The methodology of context-sensitive access to e-documents considers context as a problem model based on the knowledge extracted from the application domain and presented in the form of an application ontology. Efficient access to information in text form is needed. Wiki resources, as a modern text format, provide a huge number of texts in a semi-formalized structure. At the first stage of the methodology, documents are indexed against the ontology representing the macro-situation. The indexing method uses a topic tree as a middle layer between documents and the application ontology. At the second stage, documents relevant to the current situation (the abstract and operational contexts) are identified and sorted by degree of relevance. The abstract context is a problem-oriented ontology-based model. The operational context is an instantiation of the abstract context with data provided by the information sources. The following parts of the methodology are described: (i) metrics for measuring the similarity of e-documents to the ontology; (ii) a document index storing results of indexing e-documents against the ontology; (iii) a method for identification of relevant e-documents based on semantic similarity measures. Wikipedia (a wiki resource) is used as a corpus of e-documents for approach evaluation in a case study. Text categorization, the presence of metadata, and the existence of many articles related to different topics characterize the corpus.<|reference_end|>
arxiv
@article{smirnov2006context-sensitive, title={Context-sensitive access to e-document corpus}, author={A. V. Smirnov, T. V. Levashova, M. P. Pashkin, N. G. Shilov, A. A. Krizhanovsky, A. M. Kashevnik, and A. S. Komarova}, journal={arXiv preprint arXiv:cs/0610058}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610058}, primaryClass={cs.IR} }
smirnov2006context-sensitive
arxiv-674931
cs/0610059
Camera motion estimation through planar deformation determination
<|reference_start|>Camera motion estimation through planar deformation determination: In this paper, we propose a global method for estimating the motion of a camera which films a static scene. Our approach is direct, fast and robust, and deals with adjacent frames of a sequence. It is based on a quadratic approximation of the deformation between two images, in the case of a scene with constant depth in the camera coordinate system. This condition is very restrictive but we show that provided translation and depth inverse variations are small enough, the error on optical flow involved by the approximation of depths by a constant is small. In this context, we propose a new model of camera motion, that allows to separate the image deformation in a similarity and a ``purely'' projective application, due to change of optical axis direction. This model leads to a quadratic approximation of image deformation that we estimate with an M-estimator; we can immediatly deduce camera motion parameters.<|reference_end|>
arxiv
@article{jonchery2006camera, title={Camera motion estimation through planar deformation determination}, author={Claire Jonchery (MAP5), Fran\c{c}oise Dibos (LAGA, IG), Georges Koepfler (MAP5)}, journal={Journal of Mathematical Imaging and Vision 32, 1 (2008) 73-87}, year={2006}, doi={10.1007/s10851-008-0086-1}, archivePrefix={arXiv}, eprint={cs/0610059}, primaryClass={cs.CV} }
jonchery2006camera
arxiv-674932
cs/0610060
Comparing Typical Opening Move Choices Made by Humans and Chess Engines
<|reference_start|>Comparing Typical Opening Move Choices Made by Humans and Chess Engines: The opening book is an important component of a chess engine, and thus computer chess programmers have been developing automated methods to improve the quality of their books. For chess, which has a very rich opening theory, large databases of high-quality games can be used as the basis of an opening book, from which statistics relating to move choices from given positions can be collected. In order to find out whether the opening books used by modern chess engines in machine versus machine competitions are ``comparable'' to those used by chess players in human versus human competitions, we carried out analysis on 26 test positions using statistics from two opening books, one compiled from humans' games and the other from machines' games. Our analysis, using several nonparametric measures, shows that, overall, there is a strong association between humans' and machines' choices of opening moves when using a book to guide their choices.<|reference_end|>
arxiv
@article{levene2006comparing, title={Comparing Typical Opening Move Choices Made by Humans and Chess Engines}, author={Mark Levene and Judit Bar-Ilan}, journal={arXiv preprint arXiv:cs/0610060}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610060}, primaryClass={cs.AI} }
levene2006comparing
arxiv-674933
cs/0610061
The Delay-Limited Capacity Region of OFDM Broadcast Channels
<|reference_start|>The Delay-Limited Capacity Region of OFDM Broadcast Channels: In this work, the delay-limited capacity (DLC) of orthogonal frequency division multiplexing (OFDM) systems is investigated. The analysis is organized into two parts. In the first part, the impact of system parameters on the OFDM DLC is analyzed in a general setting. The main results are that under weak assumptions the maximum achievable single user DLC is almost independent of the distribution of the path attenuations in the low signal-to-noise ratio (SNR) region but depends strongly on the delay spread. In the high SNR region the roles are exchanged. Here, the impact of delay spread is negligible while the impact of the distribution becomes dominant. The relevant asymptotic quantities are derived without employing simplifying assumptions on the OFDM correlation structure. Moreover, for both cases it is shown that the DLC is maximized if the total channel energy is uniformly spread, i.e. the power delay profile is uniform. It is worth pointing out that since universal bounds are obtained the results can also be used for other classes of parallel channels with block fading characteristic. The second part extends the setting to the broadcast channel and studies the corresponding OFDM BC DLC region. An algorithm for computing the OFDM BC DLC region is presented. To derive simple but smart resource allocation strategies, the principle of rate water-filling employing order statistics is introduced. This yields analytical lower bounds on the OFDM BC DLC region based on orthogonal frequency division multiple access (OFDMA) and ordinal channel state information (CSI). Finally, the schemes are compared to an algorithm using full CSI.<|reference_end|>
arxiv
@article{wunder2006the, title={The Delay-Limited Capacity Region of OFDM Broadcast Channels}, author={Gerhard Wunder, Thomas Michel}, journal={arXiv preprint arXiv:cs/0610061}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610061}, primaryClass={cs.IT math.IT} }
wunder2006the
arxiv-674934
cs/0610062
A type-based termination criterion for dependently-typed higher-order rewrite systems
<|reference_start|>A type-based termination criterion for dependently-typed higher-order rewrite systems: Several authors devised type-based termination criteria for ML-like languages allowing non-structural recursive calls. We extend these works to general rewriting and dependent types, hence providing a powerful termination criterion for the combination of rewriting and beta-reduction in the Calculus of Constructions.<|reference_end|>
arxiv
@article{blanqui2006a, title={A type-based termination criterion for dependently-typed higher-order rewrite systems}, author={Fr\'ed\'eric Blanqui (INRIA Lorraine - LORIA)}, journal={In 15th International Conference on Rewriting Techniques and Applications - RTA'04 (2004) 15 p}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610062}, primaryClass={cs.LO cs.PL} }
blanqui2006a
arxiv-674935
cs/0610063
The Calculus of Algebraic Constructions
<|reference_start|>The Calculus of Algebraic Constructions: This paper is concerned with the foundations of the Calculus of Algebraic Constructions (CAC), an extension of the Calculus of Constructions by inductive data types. CAC generalizes inductive types equipped with higher-order primitive recursion, by providing definitions of functions by pattern-matching which capture recursor definitions for arbitrary non-dependent and non-polymorphic inductive types satisfying a strict positivity condition. CAC also generalizes the first-order framework of abstract data types by providing dependent types and higher-order rewrite rules.<|reference_end|>
arxiv
@article{blanqui2006the, title={The Calculus of Algebraic Constructions}, author={Fr\'ed\'eric Blanqui (LRI), Jean-Pierre Jouannaud (LRI), Mitsuhiro Okada}, journal={In Rewriting Techniques and Applications, 10th International Conference, RTA-99 1631 (1999)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610063}, primaryClass={cs.LO} }
blanqui2006the
arxiv-674936
cs/0610064
Termination and Confluence of Higher-Order Rewrite Systems
<|reference_start|>Termination and Confluence of Higher-Order Rewrite Systems: In the last twenty years, several approaches to higher-order rewriting have been proposed, among which Klop's Combinatory Rewrite Systems (CRSs), Nipkow's Higher-order Rewrite Systems (HRSs) and Jouannaud and Okada's higher-order algebraic specification languages, of which only the last one considers typed terms. The latter approach has been extended by Jouannaud, Okada and the present author into Inductive Data Type Systems (IDTSs). In this paper, we extend IDTSs with the CRS higher-order pattern-matching mechanism, resulting in simply-typed CRSs. Then, we show how the termination criterion developed for IDTSs with first-order pattern-matching, called the General Schema, can be extended so as to prove the strong normalization of IDTSs with higher-order pattern-matching. Next, we compare the unified approach with HRSs. We first prove that the extended General Schema can also be applied to HRSs. Second, we show how Nipkow's higher-order critical pair analysis technique for proving local confluence can be applied to IDTSs.<|reference_end|>
arxiv
@article{blanqui2006termination, title={Termination and Confluence of Higher-Order Rewrite Systems}, author={Fr\'ed\'eric Blanqui (LRI)}, journal={In Rewriting Techniques and Applications, 11th International Conference, RTA 2000 1833 (2000)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610064}, primaryClass={cs.LO} }
blanqui2006termination
arxiv-674937
cs/0610065
Definitions by Rewriting in the Calculus of Constructions
<|reference_start|>Definitions by Rewriting in the Calculus of Constructions: The main novelty of this paper is to consider an extension of the Calculus of Constructions where predicates can be defined with a general form of rewrite rules. We prove the strong normalization of the reduction relation generated by the beta-rule and the user-defined rules under some general syntactic conditions including confluence. As examples, we show that two important systems satisfy these conditions: a sub-system of the Calculus of Inductive Constructions which is the basis of the proof assistant Coq, and the Natural Deduction Modulo a large class of equational theories.<|reference_end|>
arxiv
@article{blanqui2006definitions, title={Definitions by Rewriting in the Calculus of Constructions}, author={Fr\'ed\'eric Blanqui (LRI)}, journal={In 16th Annual IEEE Symposium on Logic in Computer Science (2001)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610065}, primaryClass={cs.LO} }
blanqui2006definitions
arxiv-674938
cs/0610066
Inductive-data-type Systems
<|reference_start|>Inductive-data-type Systems: In a previous work ("Abstract Data Type Systems", TCS 173(2), 1997), the last two authors presented a combined language made of a (strongly normalizing) algebraic rewrite system and a typed lambda-calculus enriched by pattern-matching definitions following a certain format, called the "General Schema", which generalizes the usual recursor definitions for natural numbers and similar "basic inductive types". This combined language was shown to be strongly normalizing. The purpose of this paper is to reformulate and extend the General Schema in order to make it easily extensible, to capture a more general class of inductive types, called "strictly positive", and to ease the strong normalization proof of the resulting system. This result provides a computation model for the combination of an algebraic specification language based on abstract data types and of a strongly typed functional language with strictly positive inductive types.<|reference_end|>
arxiv
@article{blanqui2006inductive-data-type, title={Inductive-data-type Systems}, author={Fr\'ed\'eric Blanqui (LRI), Jean-Pierre Jouannaud (LRI), Mitsuhiro Okada}, journal={arXiv preprint arXiv:cs/0610066}, year={2006}, doi={10.1016/S0304-3975(00)00347-9}, archivePrefix={arXiv}, eprint={cs/0610066}, primaryClass={cs.LO} }
blanqui2006inductive-data-type
arxiv-674939
cs/0610067
Language, logic and ontology: uncovering the structure of commonsense knowledge
<|reference_start|>Language, logic and ontology: uncovering the structure of commonsense knowledge: The purpose of this paper is twofold: (i) we argue that the structure of commonsense knowledge must be discovered, rather than invented; and (ii) we argue that natural language, which is the best known theory of our (shared) commonsense knowledge, should itself be used as a guide to discovering the structure of commonsense knowledge. In addition to suggesting a systematic method to the discovery of the structure of commonsense knowledge, the method we propose seems to also provide an explanation for a number of phenomena in natural language, such as metaphor, intensionality, and the semantics of nominal compounds. Admittedly, our ultimate goal is quite ambitious, and it is no less than the systematic 'discovery' of a well-typed ontology of commonsense knowledge, and the subsequent formulation of the long-awaited goal of a meaning algebra.<|reference_end|>
arxiv
@article{saba2006language, title={Language, logic and ontology: uncovering the structure of commonsense knowledge}, author={Walid S. Saba}, journal={arXiv preprint arXiv:cs/0610067}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610067}, primaryClass={cs.AI math.LO} }
saba2006language
arxiv-674940
cs/0610068
Type theory and rewriting
<|reference_start|>Type theory and rewriting: We study the properties, in particular termination, of dependent type systems for lambda calculus and rewriting.<|reference_end|>
arxiv
@article{blanqui2006type, title={Type theory and rewriting}, author={Fr\'ed\'eric Blanqui (LRI)}, journal={arXiv preprint arXiv:cs/0610068}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610068}, primaryClass={cs.LO} }
blanqui2006type
arxiv-674941
cs/0610069
An Isabelle formalization of protocol-independent secrecy with an application to e-commerce
<|reference_start|>An Isabelle formalization of protocol-independent secrecy with an application to e-commerce: A protocol-independent secrecy theorem is established and applied to several non-trivial protocols. In particular, it is applied to protocols proposed for protecting the computation results of free-roaming mobile agents doing comparison shopping. All the results presented here have been formally proved in Isabelle by building on Larry Paulson's inductive approach. This therefore provides a library of general theorems that can be applied to other protocols.<|reference_end|>
arxiv
@article{blanqui2006an, title={An Isabelle formalization of protocol-independent secrecy with an application to e-commerce}, author={Fr\'ed\'eric Blanqui (INRIA Futurs)}, journal={arXiv preprint arXiv:cs/0610069}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610069}, primaryClass={cs.LO} }
blanqui2006an
arxiv-674942
cs/0610070
Inductive types in the Calculus of Algebraic Constructions
<|reference_start|>Inductive types in the Calculus of Algebraic Constructions: In a previous work, we proved that almost all of the Calculus of Inductive Constructions (CIC), which is the basis of the proof assistant Coq, can be seen as a Calculus of Algebraic Constructions (CAC), an extension of the Calculus of Constructions with functions and predicates defined by higher-order rewrite rules. In this paper, we not only prove that CIC as a whole can be seen as a CAC, but also that it can be extended with non-free constructors, pattern-matching on defined symbols, non-strictly positive types and inductive-recursive types.<|reference_end|>
arxiv
@article{blanqui2006inductive, title={Inductive types in the Calculus of Algebraic Constructions}, author={Fr\'ed\'eric Blanqui (LIX)}, journal={In Typed Lambda Calculi and Applications, 6th International Conference, TLCA 2003 2701 (2003)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610070}, primaryClass={cs.LO} }
blanqui2006inductive
arxiv-674943
cs/0610071
Rewriting modulo in Deduction modulo
<|reference_start|>Rewriting modulo in Deduction modulo: We study the termination of rewriting modulo a set of equations in the Calculus of Algebraic Constructions, an extension of the Calculus of Constructions with functions and predicates defined by higher-order rewrite rules. In a previous work, we defined general syntactic conditions based on the notion of computable closure for ensuring the termination of the combination of rewriting and beta-reduction. Here, we show that this result is preserved when considering rewriting modulo a set of equations if the equivalence classes generated by these equations are finite, the equations are linear and satisfy general syntactic conditions also based on the notion of computable closure. This includes equations like associativity and commutativity, and provides an original treatment of termination modulo equations.<|reference_end|>
arxiv
@article{blanqui2006rewriting, title={Rewriting modulo in Deduction modulo}, author={Fr\'ed\'eric Blanqui (LIX)}, journal={In Rewriting Techniques and Applications, 14th International Conference, RTA 2003 2706 (2003)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610071}, primaryClass={cs.LO} }
blanqui2006rewriting
arxiv-674944
cs/0610072
Definitions by rewriting in the Calculus of Constructions
<|reference_start|>Definitions by rewriting in the Calculus of Constructions: This paper presents general syntactic conditions ensuring the strong normalization and the logical consistency of the Calculus of Algebraic Constructions, an extension of the Calculus of Constructions with functions and predicates defined by higher-order rewrite rules. On the one hand, the Calculus of Constructions is a powerful type system in which one can formalize the propositions and natural deduction proofs of higher-order logic. On the other hand, rewriting is a simple and powerful computation paradigm. The combination of both allows, among other things, to develop formal proofs with a reduced size and more automation compared with more traditional proof assistants. The main novelty is to consider a general form of rewriting at the predicate-level which generalizes the strong elimination of the Calculus of Inductive Constructions.<|reference_end|>
arxiv
@article{blanqui2006definitions, title={Definitions by rewriting in the Calculus of Constructions}, author={Fr\'ed\'eric Blanqui (INRIA Lorraine - LORIA, LIX)}, journal={Mathematical Structures in Computer Science 15, 1 (2005) 37-92}, year={2006}, doi={10.1017/S0960129504004426}, number={Journal version of LICS'01}, archivePrefix={arXiv}, eprint={cs/0610072}, primaryClass={cs.LO} }
blanqui2006definitions
arxiv-674945
cs/0610073
Inductive types in the Calculus of Algebraic Constructions
<|reference_start|>Inductive types in the Calculus of Algebraic Constructions: In a previous work, we proved that an important part of the Calculus of Inductive Constructions (CIC), the basis of the Coq proof assistant, can be seen as a Calculus of Algebraic Constructions (CAC), an extension of the Calculus of Constructions with functions and predicates defined by higher-order rewrite rules. In this paper, we prove that almost all CIC can be seen as a CAC, and that it can be further extended with non-strictly positive types and inductive-recursive types together with non-free constructors and pattern-matching on defined symbols.<|reference_end|>
arxiv
@article{blanqui2006inductive, title={Inductive types in the Calculus of Algebraic Constructions}, author={Fr\'ed\'eric Blanqui (INRIA Lorraine - LORIA)}, journal={Fundamenta Informaticae 65, 1-2 (2005) 61-86}, year={2006}, number={Journal version of TLCA'03}, archivePrefix={arXiv}, eprint={cs/0610073}, primaryClass={cs.LO} }
blanqui2006inductive
arxiv-674946
cs/0610074
Collaborative Decoding of Interleaved Reed-Solomon Codes and Concatenated Code Designs
<|reference_start|>Collaborative Decoding of Interleaved Reed-Solomon Codes and Concatenated Code Designs: Interleaved Reed-Solomon codes are applied in numerous data processing, data transmission, and data storage systems. They are generated by interleaving several codewords of ordinary Reed-Solomon codes. Usually, these codewords are decoded independently by classical algebraic decoding methods. However, by collaborative algebraic decoding approaches, such interleaved schemes allow the correction of error patterns beyond half the minimum distance, provided that the errors in the received signal occur in bursts. In this work, collaborative decoding of interleaved Reed-Solomon codes by multi-sequence shift-register synthesis is considered and analyzed. Based on the framework of interleaved Reed-Solomon codes, concatenated code designs are investigated, which are obtained by interleaving several Reed-Solomon codes, and concatenating them with an inner block code.<|reference_end|>
arxiv
@article{schmidt2006collaborative, title={Collaborative Decoding of Interleaved Reed-Solomon Codes and Concatenated Code Designs}, author={Georg Schmidt, Vladimir R. Sidorenko, and Martin Bossert}, journal={arXiv preprint arXiv:cs/0610074}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610074}, primaryClass={cs.IT math.IT} }
schmidt2006collaborative
arxiv-674947
cs/0610075
On Geometric Algebra representation of Binary Spatter Codes
<|reference_start|>On Geometric Algebra representation of Binary Spatter Codes: Kanerva's Binary Spatter Codes are reformulated in terms of geometric algebra. The key ingredient of the construction is the representation of XOR binding in terms of geometric product.<|reference_end|>
arxiv
@article{aerts2006on, title={On Geometric Algebra representation of Binary Spatter Codes}, author={Diederik Aerts, Marek Czachor, Bart De Moor}, journal={arXiv preprint arXiv:cs/0610075}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610075}, primaryClass={cs.AI quant-ph} }
aerts2006on
arxiv-674948
cs/0610076
Peano Count Trees (P-Trees) and Rule Association Mining for Gene Expression Profiling of Microarray Data
<|reference_start|>Peano Count Trees (P-Trees) and Rule Association Mining for Gene Expression Profiling of Microarray Data: The greatest challenge in maximizing the use of gene expression data is to develop new computational tools capable of interconnecting and interpreting the results from different organisms and experimental settings. We propose an integrative and comprehensive approach including a super-chip containing data from microarray experiments collected on different species subjected to hypoxic and anoxic stress. A data mining technology called the Peano count tree (P-tree) is used to represent genomic data in multiple dimensions. Each microarray spot is represented as a pixel with its corresponding red/green intensity feature bands. Each band is stored separately in a reorganized, eight-file bit-sequential (bSQ) format. Each bSQ file is converted to a quadrant-based tree structure (P-tree), from which a super-chip is represented as expression P-trees (EP-trees) and repression P-trees (RP-trees). The use of association rule mining is proposed to meaningfully organize signal transduction pathways, taking evolutionary considerations into account. We argue that the genetic constitution of an organism (K) can be represented by the total number of genes belonging to two groups. Group X comprises genes (X1,...,Xn) that can be represented as 1 or 0 depending on whether the gene was expressed or not. The second group of Y genes (Y1,...,Yn) is expressed at different levels. These genes may be very highly expressed, highly expressed, very repressed, or highly repressed. However, many genes of group Y are species-specific and modulated by the products and combinations of genes of group X. In this paper, we introduce the bSQ and P-tree technology; the biological implications of association rule mining using the X and Y gene groups; and some advances in the integration of this information using the BRAIN architecture.<|reference_end|>
arxiv
@article{valdivia-granda2006peano, title={Peano Count Trees (P-Trees) and Rule Association Mining for Gene Expression Profiling of Microarray Data}, author={Willy Valdivia-Granda, William Perrizo, Edward Deckard, Francis Larson}, journal={2002 International Conference in Bioinformatics. Bangkok, Thailand}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610076}, primaryClass={cs.DS cs.IR q-bio.MN} }
valdivia-granda2006peano
arxiv-674949
cs/0610077
MIMO Broadcast Channels with Block Diagonalization and Finite Rate Feedback
<|reference_start|>MIMO Broadcast Channels with Block Diagonalization and Finite Rate Feedback: Block diagonalization is a linear precoding technique for the multiple antenna broadcast (downlink) channel that involves transmission of multiple data streams to each receiver such that no multi-user interference is experienced at any of the receivers. This low-complexity scheme operates only a few dB away from capacity but does require very accurate channel knowledge at the transmitter, which can be very difficult to obtain in fading scenarios. We consider a limited feedback system where each receiver knows its channel perfectly, but the transmitter is only provided with a finite number of channel feedback bits from each receiver. Using a random vector quantization argument, we quantify the throughput loss due to imperfect channel knowledge as a function of the feedback level. The quality of channel knowledge must improve proportional to the SNR in order to prevent interference-limitations, and we show that scaling the number of feedback bits linearly with the system SNR is sufficient to maintain a bounded rate loss. Finally, we investigate a simple scalar quantization scheme that is seen to achieve the same scaling behavior as vector quantization.<|reference_end|>
arxiv
@article{ravindran2006mimo, title={MIMO Broadcast Channels with Block Diagonalization and Finite Rate Feedback}, author={Niranjay Ravindran and Nihar Jindal}, journal={arXiv preprint arXiv:cs/0610077}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610077}, primaryClass={cs.IT math.IT} }
ravindran2006mimo
arxiv-674950
cs/0610078
Rapid Prototyping over IEEE 802.11
<|reference_start|>Rapid Prototyping over IEEE 80211: This paper introduces Prawn, a tool for prototyping communication protocols over IEEE 802.11 networks. Prawn allows researchers to conduct both functional assessment and performance evaluation as an inherent part of the protocol design process. Since Prawn runs on real IEEE 802.11 nodes, prototypes can be evaluated and adjusted under realistic conditions. Once the prototype has been extensively tested and thoroughly validated, and its functional design tuned accordingly, it is then ready for implementation. Prawn facilitates prototype development by providing: (i) a set of building blocks that implement common functions needed by a wide range of wireless protocols (e.g., neighbor discovery, link quality assessment, message transmission and reception), and (ii) an API that allows protocol designers to access Prawn primitives. We show through a number of case studies how Prawn supports prototyping as part of protocol design and, as a result of enabling deployment and testing under real-world scenarios, how Prawn provides useful feedback on protocol operation and performance.<|reference_end|>
arxiv
@article{abdesslem2006rapid, title={Rapid Prototyping over IEEE 802.11}, author={Fehmi Ben Abdesslem, Luigi Iannone, Marcelo Dias de Amorim, Katia Obraczka, Ignacio Solis, and Serge Fdida}, journal={arXiv preprint arXiv:cs/0610078}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610078}, primaryClass={cs.NI} }
abdesslem2006rapid
arxiv-674951
cs/0610079
An Enhanced Covering Lemma for Multiterminal Source Coding
<|reference_start|>An Enhanced Covering Lemma for Multiterminal Source Coding: An enhanced covering lemma for a Markov chain is proved in this paper, and then the distributed source coding problem of correlated general sources with one average distortion criterion under fixed-length coding is investigated. Based on the enhanced lemma, a sufficient and necessary condition for determining the achievability of rate-distortion triples is given.<|reference_end|>
arxiv
@article{yang2006an, title={An Enhanced Covering Lemma for Multiterminal Source Coding}, author={Shengtian Yang, Peiliang Qiu}, journal={arXiv preprint arXiv:cs/0610079}, year={2006}, doi={10.1109/ITW2.2006.323809}, archivePrefix={arXiv}, eprint={cs/0610079}, primaryClass={cs.IT math.IT} }
yang2006an
arxiv-674952
cs/0610080
Computable Closed Euclidean Subsets with and without Computable Points
<|reference_start|>Computable Closed Euclidean Subsets with and without Computable Points: The empty set of course contains no computable point. On the other hand, surprising results due to Zaslavskii, Tseitin, Kreisel, and Lacombe assert the existence of NON-empty co-r.e. closed sets devoid of computable points: sets which are `large' in the sense of positive Lebesgue measure. We observe that a certain size is in fact necessary: every non-empty co-r.e. closed real set without computable points has continuum cardinality. This leads us to investigate for various classes of computable real subsets whether they necessarily contain a (not necessarily effectively findable) computable point.<|reference_end|>
arxiv
@article{roux2006computable, title={Computable Closed Euclidean Subsets with and without Computable Points}, author={St\'ephane Le Roux and Martin Ziegler}, journal={arXiv preprint arXiv:cs/0610080}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610080}, primaryClass={cs.LO math.LO} }
roux2006computable
arxiv-674953
cs/0610081
Semantics of Separation-Logic Typing and Higher-order Frame Rules for Algol-like Languages
<|reference_start|>Semantics of Separation-Logic Typing and Higher-order Frame Rules for Algol-like Languages: We show how to give a coherent semantics to programs that are well-specified in a version of separation logic for a language with higher types: idealized algol extended with heaps (but with immutable stack variables). In particular, we provide simple sound rules for deriving higher-order frame rules, allowing for local reasoning.<|reference_end|>
arxiv
@article{birkedal2006semantics, title={Semantics of Separation-Logic Typing and Higher-order Frame Rules for Algol-like Languages}, author={Lars Birkedal, Noah Torp-Smith, Hongseok Yang}, journal={Logical Methods in Computer Science, Volume 2, Issue 5 (November 3, 2006) lmcs:2232}, year={2006}, doi={10.2168/LMCS-2(5:1)2006}, archivePrefix={arXiv}, eprint={cs/0610081}, primaryClass={cs.LO} }
birkedal2006semantics
arxiv-674954
cs/0610082
Theoretical analysis of network crankback protocol performance
<|reference_start|>Theoretical analysis of network crankback protocol performance: We address the problem of analyzing the performance of network crankback protocols posed by Eyal Felstine, Reuven Cohen and Ofer Hadar, "Crankback Prediction in Hierarchical ATM Networks", Journal of Network and Systems Management, Vol. 10, No. 3, September 2002. We show that the false alarm probability and the probability of successfully crossing a path can be calculated. The main optimization equations for the crankback protocol parameters are developed using analytical expressions for the statistical characteristics of the protocol.<|reference_end|>
arxiv
@article{stepanov2006theoretical, title={Theoretical analysis of network crankback protocol performance}, author={Sander Stepanov, Ofer Hadar}, journal={arXiv preprint arXiv:cs/0610082}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610082}, primaryClass={cs.IT math.IT} }
stepanov2006theoretical
arxiv-674955
cs/0610083
Estimation of the traffic in the binary channel for data networks
<|reference_start|>Estimation of the traffic in the binary channel for data networks: Effective utilization of communication networks is impossible without analysis of the quantitative characteristics of the traffic in real time. Constant supervision of all data channels is impracticable in practice, because it requires the transfer of significant additional information over the network and large resource expenditures for control devices. Thus, estimating the traffic at low cost in real time is an urgent task.<|reference_end|>
arxiv
@article{stepanov2006estimation, title={Estimation of the traffic in the binary channel for data networks}, author={Sander Stepanov}, journal={arXiv preprint arXiv:cs/0610083}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610083}, primaryClass={cs.IT math.IT} }
stepanov2006estimation
arxiv-674956
cs/0610084
Share and Disperse: How to Resist Against Aggregator Compromises in Sensor Networks
<|reference_start|>Share and Disperse: How to Resist Against Aggregator Compromises in Sensor Networks: A common approach to overcome the limited nature of sensor networks is to aggregate data at intermediate nodes. A challenging issue in this context is to guarantee end-to-end security mainly because sensor networks are extremely vulnerable to node compromises. In order to secure data aggregation, in this paper we propose three schemes that rely on multipath routing. The first one guarantees data confidentiality through secret sharing, while the second and third ones provide data availability through information dispersal. Based on qualitative analysis and implementation, we show that, by applying these schemes, a sensor network can achieve data confidentiality, authenticity, and protection against denial of service attacks even in the presence of multiple compromised nodes.<|reference_end|>
arxiv
@article{claveirole2006share, title={Share and Disperse: How to Resist Against Aggregator Compromises in Sensor Networks}, author={Thomas Claveirole, Marcelo Dias de Amorim, Michel Abdalla, and Yannis Viniotis}, journal={arXiv preprint arXiv:cs/0610084}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610084}, primaryClass={cs.NI cs.CR} }
claveirole2006share
arxiv-674957
cs/0610085
Symbolic Simulation-Checking of Dense-Time Systems
<|reference_start|>Symbolic Simulation-Checking of Dense-Time Systems: Intuitively, an (implementation) automaton is simulated by a (specification) automaton if every externally observable transition by the implementation automaton can also be made by the specification automaton. In this work, we present a symbolic algorithm for the simulation-checking of timed automata. We first present a simulation-checking procedure that operates on state spaces of timed automata that are representable with convex polyhedra. We then present techniques to represent the resulting intermediate convex polyhedra with zones, turning the procedure into an algorithm. We then discuss how to handle Zeno states in the implementation automaton. Finally, we have realized the algorithm and report its performance in experiments.<|reference_end|>
arxiv
@article{wang2006symbolic, title={Symbolic Simulation-Checking of Dense-Time Systems}, author={Farn Wang}, journal={arXiv preprint arXiv:cs/0610085}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610085}, primaryClass={cs.LO cs.SE} }
wang2006symbolic
arxiv-674958
cs/0610086
The central nature of the Hidden Subgroup problem
<|reference_start|>The central nature of the Hidden Subgroup problem: We show that several problems that figure prominently in quantum computing, including Hidden Coset, Hidden Shift, and Orbit Coset, are equivalent or reducible to Hidden Subgroup for a large variety of groups. We also show that, over permutation groups, the decision version and search version of Hidden Subgroup are polynomial-time equivalent. For Hidden Subgroup over dihedral groups, such an equivalence can be obtained if the order of the group is smooth. Finally, we give nonadaptive program checkers for Hidden Subgroup and its decision version.<|reference_end|>
arxiv
@article{fenner2006the, title={The central nature of the Hidden Subgroup problem}, author={S. A. Fenner, Y. Zhang}, journal={arXiv preprint arXiv:cs/0610086}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610086}, primaryClass={cs.CC quant-ph} }
fenner2006the
arxiv-674959
cs/0610087
An Application of the Mobile Transient Internet Architecture to IP Mobility and Inter-Operability
<|reference_start|>An Application of the Mobile Transient Internet Architecture to IP Mobility and Inter-Operability: We introduce an application of a mobile transient network architecture on top of the current Internet. This paper is an application extension to a conceptual mobile network architecture. It attempts to specifically reinforce some of the powerful notions exposed by the architecture from an application perspective. Of these notions, we explore the network expansion layer, an overlay of components and services, that enables a persistent identification network and other required services. The overlay abstraction introduces several benefits, of which mobility and communication across heterogeneous network structures are of interest to this paper. We present implementations of several components and protocols including gateways, Agents and the Open Device Access Protocol. Our present identification network implementation exploits the current implementation of the Handle System through the use of distributed, global and persistent identifiers called handles. Handles are used to identify and locate devices and services, abstracting any physical location or network association from the communicating ends. A communication framework is finally demonstrated that would allow mobile devices on the public Internet to have persistent identifiers and thus be persistently accessible either directly or indirectly. This application expands IP inter-operability beyond its current boundaries.<|reference_end|>
arxiv
@article{khoury2006an, title={An Application of the Mobile Transient Internet Architecture to IP Mobility and Inter-Operability}, author={Joud Khoury, Henry N Jerez, Nicolas Nehme-Antoun, Chaouki Abdallah}, journal={arXiv preprint arXiv:cs/0610087}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610087}, primaryClass={cs.NI} }
khoury2006an
arxiv-674960
cs/0610088
Vector field visualization with streamlines
<|reference_start|>Vector field visualization with streamlines: We have recently developed an algorithm for vector field visualization with oriented streamlines, able to depict the flow directions everywhere in a dense vector field and the sense of the local orientations. The algorithm has useful applications in the visualization of the director field in nematic liquid crystals. Here we propose an improvement of the algorithm able to enhance the visualization of the local magnitude of the field. This new approach of the algorithm is compared with the same procedure applied to the Line Integral Convolution (LIC) visualization.<|reference_end|>
arxiv
@article{sparavigna2006vector, title={Vector field visualization with streamlines}, author={A. Sparavigna and B. Montrucchio}, journal={arXiv preprint arXiv:cs/0610088}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610088}, primaryClass={cs.GR} }
sparavigna2006vector
arxiv-674961
cs/0610089
Reversible Logic to Cryptographic Hardware: A New Paradigm
<|reference_start|>Reversible Logic to Cryptographic Hardware: A New Paradigm: Differential Power Analysis (DPA) presents a major challenge to mathematically-secure cryptographic protocols. Attackers can break the encryption by measuring the energy consumed in the working digital circuit. To prevent this type of attack, this paper proposes the use of reversible logic for designing the ALU of a cryptosystem. Ideally, reversible circuits dissipate zero energy. Thus, it would be of great significance to apply reversible logic to designing secure cryptosystems. As far as is known, this is the first attempt to apply reversible logic to developing secure cryptosystems. In a prototype of a reversible ALU for a crypto-processor, reversible designs of adders and Montgomery multipliers are presented. The reversible designs of a carry propagate adder, four-to-two and five-to-two carry save adders are presented using a reversible TSG gate. One of the important properties of the TSG gate is that it can work singly as a reversible full adder. In order to design the reversible Montgomery multiplier, novel reversible sequential circuits are also proposed which are integrated with the proposed adders to design a reversible modulo multiplier. It is intended that this paper will provide a starting point for developing cryptosystems secure against DPA attacks.<|reference_end|>
arxiv
@article{thapliyal2006reversible, title={Reversible Logic to Cryptographic Hardware: A New Paradigm}, author={Himanshu Thapliyal and Mark Zwolinski}, journal={arXiv preprint arXiv:cs/0610089}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610089}, primaryClass={cs.CR} }
thapliyal2006reversible
arxiv-674962
cs/0610090
Combined Integer and Floating Point Multiplication Architecture(CIFM) for FPGAs and Its Reversible Logic Implementation
<|reference_start|>Combined Integer and Floating Point Multiplication Architecture(CIFM) for FPGAs and Its Reversible Logic Implementation: In this paper, the authors propose the idea of a combined integer and floating point multiplier (CIFM) for FPGAs. The authors propose the replacement of the existing 18x18 dedicated multipliers in FPGAs with dedicated 24x24 multipliers designed with small 4x4 bit multipliers. It is also proposed that for every dedicated 24x24 bit multiplier block designed with 4x4 bit multipliers, four redundant 4x4 multipliers should be provided to enforce the feature of self-repairability (to recover from faults). In the proposed CIFM, reconfigurability at run time is also provided, resulting in low power. The major source of motivation for providing the dedicated 24x24 bit multiplier stems from the fact that a single precision floating point multiplier requires a 24x24 bit integer multiplier for mantissa multiplication. A reconfigurable, self-repairable 24x24 bit multiplier (implemented with 4x4 bit multiply modules) will ideally suit this purpose, making FPGAs more suitable for integer as well as floating point operations. A dedicated 4x4 bit multiplier is also proposed in this paper. Moreover, in recent years, reversible logic has emerged as a promising technology having its applications in low power CMOS, quantum computing, nanotechnology, and optical computing. It is not possible to realize quantum computing without reversible logic. Thus, this paper also provides the reversible logic implementation of the proposed CIFM. The reversible CIFM designed and proposed here will form the basis of completely reversible FPGAs.<|reference_end|>
arxiv
@article{thapliyal2006combined, title={Combined Integer and Floating Point Multiplication Architecture(CIFM) for FPGAs and Its Reversible Logic Implementation}, author={Himanshu Thapliyal, Hamid R. Arabnia and A.P Vinod}, journal={arXiv preprint arXiv:cs/0610090}, year={2006}, doi={10.1109/MWSCAS.2006.382306}, archivePrefix={arXiv}, eprint={cs/0610090}, primaryClass={cs.AR} }
thapliyal2006combined
arxiv-674963
cs/0610091
On the Behavior of Journal Impact Factor Rank-Order Distribution
<|reference_start|>On the Behavior of Journal Impact Factor Rank-Order Distribution: An empirical law for the rank-order behavior of journal impact factors is found. Using an extensive data base on impact factors, including journals on Education, Agrosciences, Geosciences, Biosciences and Environmental, Chemical, Computer, Engineering, Material, Mathematical, Medical and Physical Sciences, we have found extremely good fits outperforming other rank-order models. Some extensions to other areas of knowledge are discussed.<|reference_end|>
arxiv
@article{mansilla2006on, title={On the Behavior of Journal Impact Factor Rank-Order Distribution}, author={R. Mansilla, E. K\"oppen, G. Cocho and P. Miramontes}, journal={arXiv preprint arXiv:cs/0610091}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610091}, primaryClass={cs.IR physics.soc-ph} }
mansilla2006on
arxiv-674964
cs/0610092
Happy endings for flip graphs
<|reference_start|>Happy endings for flip graphs: We show that the triangulations of a finite point set form a flip graph that can be embedded isometrically into a hypercube, if and only if the point set has no empty convex pentagon. Point sets of this type include convex subsets of lattices, points on two lines, and several other infinite families. As a consequence, flip distance in such point sets can be computed efficiently.<|reference_end|>
arxiv
@article{eppstein2006happy, title={Happy endings for flip graphs}, author={David Eppstein}, journal={Journal of Computational Geometry 1(1):3-28, 2010}, year={2006}, doi={10.20382/jocg.v1i1a2}, archivePrefix={arXiv}, eprint={cs/0610092}, primaryClass={cs.CG math.CO math.MG} }
eppstein2006happy
arxiv-674965
cs/0610093
Semantic results for ontic and epistemic change
<|reference_start|>Semantic results for ontic and epistemic change: We give some semantic results for an epistemic logic incorporating dynamic operators to describe information changing events. Such events include epistemic changes, where agents become more informed about the non-changing state of the world, and ontic changes, wherein the world changes. The events are executed in information states that are modeled as pointed Kripke models. Our contribution consists of three semantic results. (i) Given two information states, there is an event transforming one into the other. The linguistic correspondent to this is that every consistent formula can be made true in every information state by the execution of an event. (ii) A more technical result is that every event corresponds to an event in which the postconditions formalizing ontic change are assignments to `true' and `false' only (instead of assignments to arbitrary formulas in the logical language). `Corresponds' means that execution of either event in a given information state results in bisimilar information states. (iii) The third, also technical, result is that every event corresponds to a sequence of events wherein all postconditions are assignments of a single atom only (instead of simultaneous assignments of more than one atom).<|reference_end|>
arxiv
@article{van ditmarsch2006semantic, title={Semantic results for ontic and epistemic change}, author={H.P. van Ditmarsch and B.P. Kooi}, journal={G. Bonanno, W. van der Hoek, and M. Wooldridge (editors), Logic and the Foundations of Game and Decision Theory (LOFT 7), pages 87-117. Texts in Logic and Games, Amsterdam University Press, 2008}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610093}, primaryClass={cs.LO cs.AI cs.MA} }
van ditmarsch2006semantic
arxiv-674966
cs/0610094
Migrating Multi-page Web Applications to Single-page AJAX Interfaces
<|reference_start|>Migrating Multi-page Web Applications to Single-page AJAX Interfaces: Recently, a new web development technique for creating interactive web applications, dubbed AJAX, has emerged. In this new model, the single-page web interface is composed of individual components which can be updated/replaced independently. With the rise of AJAX web applications, classical multi-page web applications are becoming legacy systems. Whereas until a year ago the concern revolved around migrating legacy systems to web-based settings, today we face the new challenge of migrating web applications to single-page AJAX applications. Gaining an understanding of the navigational model and user interface structure of the source application is the first step in the migration process. In this paper, we explore how reverse engineering techniques can help analyze classic web applications for this purpose. Our approach, using a schema-based clustering technique, extracts a navigational model of web applications, and identifies candidate user interface components to be migrated to a single-page AJAX interface. Additionally, results of a case study, conducted to evaluate our tool, are presented.<|reference_end|>
arxiv
@article{mesbah2006migrating, title={Migrating Multi-page Web Applications to Single-page AJAX Interfaces}, author={Ali Mesbah and Arie van Deursen}, journal={Proceedings of the 11th European Conference on Software Maintenance and Reengineering (CSMR'07), IEEE Computer Society, 2007}, year={2006}, number={TUD-SERG-2006-018}, archivePrefix={arXiv}, eprint={cs/0610094}, primaryClass={cs.SE} }
mesbah2006migrating
arxiv-674967
cs/0610095
Solving planning domains with polytree causal graphs is NP-complete
<|reference_start|>Solving planning domains with polytree causal graphs is NP-complete: We show that solving planning domains on binary variables with a polytree causal graph is NP-complete. This is in contrast to a polynomial-time algorithm of Domshlak and Brafman that solves these planning domains for polytree causal graphs of bounded indegree.<|reference_end|>
arxiv
@article{giménez2006solving, title={Solving planning domains with polytree causal graphs is NP-complete}, author={Omer Gim\'enez}, journal={arXiv preprint arXiv:cs/0610095}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610095}, primaryClass={cs.AI cs.CC} }
giménez2006solving
arxiv-674968
cs/0610096
Partial Evaluation for Program Comprehension
<|reference_start|>Partial Evaluation for Program Comprehension: Program comprehension is the most tedious and time consuming task of software maintenance, an important phase of the software life cycle. This is particularly true while maintaining scientific application programs that have been written in Fortran for decades and that are still vital in various domains even though more modern languages are used to implement their user interfaces. Very often, programs have evolved as their application domains increase continually and have become very complex due to extensive modifications. This generality in programs is implemented by input variables whose value does not vary in the context of a given application. Thus, it is very interesting for the maintainer to propagate such information, that is to obtain a simplified program, which behaves like the initial one when used according to the restriction. We have adapted partial evaluation for program comprehension. Our partial evaluator performs mainly two tasks: constant propagation and statements simplification. It includes an interprocedural alias analysis. As our aim is program comprehension rather than optimization, there are two main differences with classical partial evaluation. We do not change the original<|reference_end|>
arxiv
@article{blazy2006partial, title={Partial Evaluation for Program Comprehension}, author={Sandrine Blazy (CEDRIC)}, journal={In ACM Computing Surveys, Symposium on partial evaluation 30, 3 es (1998)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610096}, primaryClass={cs.SE} }
blazy2006partial
arxiv-674969
cs/0610097
Reuse of Specification Patterns with the B Method
<|reference_start|>Reuse of Specification Patterns with the B Method: This paper describes an approach for reusing specification patterns. Specification patterns are design patterns that are expressed in a formal specification language. Reusing a specification pattern means instantiating it or composing it with other specification patterns. Three levels of composition are defined: juxtaposition, composition with inter-patterns links and unification. This paper shows through examples how to define specification patterns in B, how to reuse them directly in B, and also how to reuse the proofs associated with specification patterns.<|reference_end|>
arxiv
@article{blazy2006reuse, title={Reuse of Specification Patterns with the B Method}, author={Sandrine Blazy (CEDRIC), Fr\'ed\'eric Gervais (CEDRIC), R\'egine Laleau (CEDRIC)}, journal={In ZB 2003: Formal Specification and Development in Z and B, 2651 (2003) 40-57}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610097}, primaryClass={cs.SE} }
blazy2006reuse
arxiv-674970
cs/0610098
Characterizing Solution Concepts in Games Using Knowledge-Based Programs
<|reference_start|>Characterizing Solution Concepts in Games Using Knowledge-Based Programs: We show how solution concepts in games such as Nash equilibrium, correlated equilibrium, rationalizability, and sequential equilibrium can be given a uniform definition in terms of \emph{knowledge-based programs}. Intuitively, all solution concepts are implementations of two knowledge-based programs, one appropriate for games represented in normal form, the other for games represented in extensive form. These knowledge-based programs can be viewed as embodying rationality. The representation works even if (a) information sets do not capture an agent's knowledge, (b) uncertainty is not represented by probability, or (c) the underlying game is not common knowledge.<|reference_end|>
arxiv
@article{halpern2006characterizing, title={Characterizing Solution Concepts in Games Using Knowledge-Based Programs}, author={Joseph Y. Halpern and Yoram Moses}, journal={arXiv preprint arXiv:cs/0610098}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610098}, primaryClass={cs.GT cs.DC cs.MA} }
halpern2006characterizing
arxiv-674971
cs/0610099
Properties of Codes with the Rank Metric
<|reference_start|>Properties of Codes with the Rank Metric: In this paper, we study properties of rank metric codes in general and maximum rank distance (MRD) codes in particular. For codes with the rank metric, we first establish Gilbert and sphere-packing bounds, and then obtain the asymptotic forms of these two bounds and the Singleton bound. Based on the asymptotic bounds, we observe that asymptotically the Gilbert-Varshamov bound is exceeded by MRD codes and the sphere-packing bound cannot be attained. We also establish bounds on the rank covering radius of maximal codes, and show that all MRD codes are maximal codes and all the MRD codes known so far achieve the maximum rank covering radius.<|reference_end|>
arxiv
@article{gadouleau2006properties, title={Properties of Codes with the Rank Metric}, author={Maximilien Gadouleau and Zhiyuan Yan}, journal={arXiv preprint arXiv:cs/0610099}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610099}, primaryClass={cs.IT math.IT} }
gadouleau2006properties
arxiv-674972
cs/0610100
A Mobile Transient Internet Architecture
<|reference_start|>A Mobile Transient Internet Architecture: This paper describes a new architecture for transient mobile networks destined to merge existing and future network architectures, communication implementations and protocol operations by introducing a new paradigm to data delivery and identification. The main goal of our research is to enable seamless end-to-end communication between mobile and stationary devices across multiple networks and through multiple communication environments. The architecture establishes a set of infrastructure components and protocols that set the ground for a Persistent Identification Network (PIN). The basis for the operation of PIN is an identification space consisting of unique location independent identifiers similar to the ones implemented in the Handle system. Persistent Identifiers are used to identify and locate Digital Entities which can include devices, services, users and even traffic. The architecture establishes a primary connection independent logical structure that can operate over conventional networks or more advanced peer-to-peer aggregation networks. Communication is based on routing pools and novel protocols for routing data across several abstraction levels of the network, regardless of the end-points' current association and state...<|reference_end|>
arxiv
@article{jerez2006a, title={A Mobile Transient Internet Architecture}, author={Henry N Jerez, Joud Khoury, Chaouki Abdallah}, journal={arXiv preprint arXiv:cs/0610100}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610100}, primaryClass={cs.NI cs.IT math.IT} }
jerez2006a
arxiv-674973
cs/0610101
Entropy generation in a model of reversible computation
<|reference_start|>Entropy generation in a model of reversible computation: We present a model in which, due to the quantum nature of the signals controlling the implementation time of successive unitary computational steps, \emph{physical} irreversibility appears in the execution of a \emph{logically} reversible computation.<|reference_end|>
arxiv
@article{de falco2006entropy, title={Entropy generation in a model of reversible computation}, author={Diego de Falco and Dario Tamascelli}, journal={RAIRO-Inf.Theor.Appl. 40, (2006) 93-105}, year={2006}, doi={10.1051/ita:2006013}, archivePrefix={arXiv}, eprint={cs/0610101}, primaryClass={cs.CC quant-ph} }
de falco2006entropy
arxiv-674974
cs/0610102
Quantum communication is possible with pure state
<|reference_start|>Quantum communication is possible with pure state: It is believed that quantum communication is not possible with a pure ensemble of states because the quantum entropy of a pure state is zero. This is indeed possible due to a geometric consequence of entanglement.<|reference_end|>
arxiv
@article{mitra2006quantum, title={Quantum communication is possible with pure state}, author={Arindam Mitra}, journal={arXiv preprint arXiv:cs/0610102}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610102}, primaryClass={cs.IT math.IT} }
mitra2006quantum
arxiv-674975
cs/0610103
On the Secrecy Capacity of Fading Channels
<|reference_start|>On the Secrecy Capacity of Fading Channels: We consider the secure transmission of information over an ergodic fading channel in the presence of an eavesdropper. Our eavesdropper can be viewed as the wireless counterpart of Wyner's wiretapper. The secrecy capacity of such a system is characterized under the assumption of asymptotically long coherence intervals. We first consider the full Channel State Information (CSI) case, where the transmitter has access to the channel gains of the legitimate receiver and the eavesdropper. The secrecy capacity under this full CSI assumption serves as an upper bound for the secrecy capacity when only the CSI of the legitimate receiver is known at the transmitter, which is characterized next. In each scenario, the perfect secrecy capacity is obtained along with the optimal power and rate allocation strategies. We then propose a low-complexity on/off power allocation strategy that achieves near-optimal performance with only the main channel CSI. More specifically, this scheme is shown to be asymptotically optimal as the average SNR goes to infinity, and interestingly, is shown to attain the secrecy capacity under the full CSI assumption. Remarkably, our results reveal the positive impact of fading on the secrecy capacity and establish the critical role of rate adaptation, based on the main channel CSI, in facilitating secure communications over slow fading channels.<|reference_end|>
arxiv
@article{gopala2006on, title={On the Secrecy Capacity of Fading Channels}, author={Praveen Kumar Gopala, Lifeng Lai and Hesham El Gamal}, journal={arXiv preprint arXiv:cs/0610103}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610103}, primaryClass={cs.IT math.IT} }
gopala2006on
arxiv-674976
cs/0610104
ARQ Diversity in Fading Random Access Channels
<|reference_start|>ARQ Diversity in Fading Random Access Channels: A cross-layer optimization approach is adopted for the design of symmetric random access wireless systems. Instead of the traditional collision model, a more realistic physical layer model is considered. Based on this model, an Incremental Redundancy Automatic Repeat reQuest (IR-ARQ) scheme, tailored to jointly combat the effects of collisions, multi-path fading, and additive noise, is developed. The Diversity-Multiplexing-Delay tradeoff (DMDT) of the proposed scheme is analyzed for fully-loaded queues, and compared with that of the Gallager tree algorithm for collision resolution and the network-assisted diversity multiple access (NDMA) protocol of Tsatsanis et al. The fully-loaded queue model is then replaced by one with random arrivals, under which these protocols are compared in terms of the stability region, average delay and diversity gain. Overall, our analytical and numerical results establish the superiority of the proposed IR-ARQ scheme and reveal some important insights. For example, it turns out that the performance is optimized, for a given total throughput, by maximizing the probability that a certain user sends a new packet and minimizing the transmission rate employed by each user.<|reference_end|>
arxiv
@article{nam2006arq, title={ARQ Diversity in Fading Random Access Channels}, author={Young-Han Nam, Praveen Kumar Gopala and Hesham El Gamal}, journal={arXiv preprint arXiv:cs/0610104}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610104}, primaryClass={cs.IT math.IT} }
nam2006arq
arxiv-674977
cs/0610105
How To Break Anonymity of the Netflix Prize Dataset
<|reference_start|>How To Break Anonymity of the Netflix Prize Dataset: We present a new class of statistical de-anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.<|reference_end|>
arxiv
@article{narayanan2006how, title={How To Break Anonymity of the Netflix Prize Dataset}, author={Arvind Narayanan and Vitaly Shmatikov}, journal={arXiv preprint arXiv:cs/0610105}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610105}, primaryClass={cs.CR cs.DB} }
narayanan2006how
arxiv-674978
cs/0610106
On the Error Exponents of ARQ Channels with Deadlines
<|reference_start|>On the Error Exponents of ARQ Channels with Deadlines: We consider communication over Automatic Repeat reQuest (ARQ) memoryless channels with deadlines. In particular, an upper bound L is imposed on the maximum number of ARQ transmission rounds. In this setup, it is shown that incremental redundancy ARQ outperforms Forney's memoryless decoding in terms of the achievable error exponents.<|reference_end|>
arxiv
@article{gopala2006on, title={On the Error Exponents of ARQ Channels with Deadlines}, author={Praveen Kumar Gopala, Young-Han Nam and Hesham El Gamal}, journal={arXiv preprint arXiv:cs/0610106}, year={2006}, doi={10.1109/TIT.2007.907431}, archivePrefix={arXiv}, eprint={cs/0610106}, primaryClass={cs.IT math.IT} }
gopala2006on
arxiv-674979
cs/0610107
Interference Channels with Common Information
<|reference_start|>Interference Channels with Common Information: In this paper, we consider the discrete memoryless interference channel with common information, in which two senders need to deliver not only private messages but also certain common messages to their corresponding receivers. We derive an achievable rate region for such a channel by exploiting a random coding strategy, namely cascaded superposition coding. We reveal that the derived achievable rate region generalizes some important existing results for the interference channels with or without common information. Furthermore, we specialize to a class of deterministic interference channels with common information, and show that the derived achievable rate region is indeed the capacity region for this class of channels.<|reference_end|>
arxiv
@article{jiang2006interference, title={Interference Channels with Common Information}, author={Jinhua Jiang, Yan Xin, and Hari Krishna Garg}, journal={arXiv preprint arXiv:cs/0610107}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610107}, primaryClass={cs.IT math.IT} }
jiang2006interference
arxiv-674980
cs/0610108
Doppler Spectrum Estimation by Ramanujan Fourier Transforms
<|reference_start|>Doppler Spectrum Estimation by Ramanujan Fourier Transforms: The Doppler spectrum of a weather radar signal can classically be estimated by two methods: a temporal one based on the autocorrelation of the successive signals, and another that estimates the power spectral density (PSD) using Fourier transforms. We introduce a new signal processing tool based on Ramanujan sums cq(n), adapted to the analysis of arithmetical sequences with several resonances p/q. These sums are almost periodic with respect to the time n of resonances and aperiodic with respect to the order q of resonances. New results are supplied by the use of the Ramanujan Fourier Transform (RFT) for the estimation of the Doppler spectrum of the weather radar signal.<|reference_end|>
arxiv
@article{lagha2006doppler, title={Doppler Spectrum Estimation by Ramanujan Fourier Transforms}, author={Mohand Lagha (AERONAUTIC Department of Blida University, Femto-ST), Messaoud Bensebti (AERONAUTIC Department of Blida University)}, journal={arXiv preprint arXiv:cs/0610108}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610108}, primaryClass={cs.NA cs.CE} }
lagha2006doppler
arxiv-674981
cs/0610109
Intrusion detection mechanisms for VoIP applications
<|reference_start|>Intrusion detection mechanisms for VoIP applications: VoIP applications are emerging today as an important component in the business and communication industry. In this paper, we address intrusion detection and prevention in VoIP networks and describe how a conceptual solution based on the Bayes inference approach can be used to reinforce the existing security mechanisms. Our approach is based on network monitoring and analysis of VoIP-specific traffic. We give a detailed example of attack detection using the SIP signaling protocol.<|reference_end|>
arxiv
@article{nassar2006intrusion, title={Intrusion detection mechanisms for VoIP applications}, author={Mohamed El Baker Nassar (INRIA Lorraine - LORIA), Radu State (INRIA Lorraine - LORIA), Olivier Festor (INRIA Lorraine - LORIA)}, journal={Dans Third annual VoIP security workshop (VSW'06) (2006)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610109}, primaryClass={cs.NI} }
nassar2006intrusion
arxiv-674982
cs/0610110
Stochastic Formal Methods for Hybrid Systems
<|reference_start|>Stochastic Formal Methods for Hybrid Systems: We provide a framework to bound the probability that accumulated errors were never above a given threshold on hybrid systems. Such systems are used for example to model an aircraft or a nuclear power plant on one side and its software on the other side. This report contains simple formulas based on L\'evy's and Markov's inequalities and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of hybrid systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications with a probability of failure around one in a billion, where worst case analysis considers that no significant bit remains. We use PVS, as such formal tools force explicit statement of all hypotheses and prevent incorrect uses of theorems.<|reference_end|>
arxiv
@article{daumas2006stochastic, title={Stochastic Formal Methods for Hybrid Systems}, author={Marc Daumas (ELIAUS), David Lester (University of Manchester), Erik Martin-Dorel (ELIAUS, Lamps), Annick Truffert (LAMPS)}, journal={arXiv preprint arXiv:cs/0610110}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610110}, primaryClass={cs.MS} }
daumas2006stochastic
arxiv-674983
cs/0610111
Local approximate inference algorithms
<|reference_start|>Local approximate inference algorithms: We present a new local approximation algorithm for computing Maximum a Posteriori (MAP) and log-partition function for arbitrary exponential family distribution represented by a finite-valued pair-wise Markov random field (MRF), say $G$. Our algorithm is based on decomposition of $G$ into {\em appropriately} chosen small components; then computing estimates locally in each of these components and then producing a {\em good} global solution. We show that if the underlying graph $G$ either excludes some finite-sized graph as its minor (e.g. Planar graph) or has low doubling dimension (e.g. any graph with {\em geometry}), then our algorithm will produce solution for both questions within {\em arbitrary accuracy}. We present a message-passing implementation of our algorithm for MAP computation using self-avoiding walk of graph. In order to evaluate the computational cost of this implementation, we derive novel tight bounds on the size of self-avoiding walk tree for arbitrary graph. As a consequence of our algorithmic result, we show that the normalized log-partition function (also known as free-energy) for a class of {\em regular} MRFs will converge to a limit, that is computable to an arbitrary accuracy.<|reference_end|>
arxiv
@article{jung2006local, title={Local approximate inference algorithms}, author={Kyomin Jung and Devavrat Shah}, journal={arXiv preprint arXiv:cs/0610111}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610111}, primaryClass={cs.AI} }
jung2006local
arxiv-674984
cs/0610112
On the Performance of Lossless Joint Source-Channel Coding Based on Linear Codes
<|reference_start|>On the Performance of Lossless Joint Source-Channel Coding Based on Linear Codes: A general lossless joint source-channel coding scheme based on linear codes is proposed and then analyzed in this paper. It is shown that a linear code with good joint spectrum can be used to establish limit-approaching joint source-channel coding schemes for arbitrary sources and channels, where the joint spectrum of the code is a generalization of the input-output weight distribution.<|reference_end|>
arxiv
@article{yang2006on, title={On the Performance of Lossless Joint Source-Channel Coding Based on Linear Codes}, author={Shengtian Yang, Peiliang Qiu}, journal={arXiv preprint arXiv:cs/0610112}, year={2006}, doi={10.1109/ITW2.2006.323779}, archivePrefix={arXiv}, eprint={cs/0610112}, primaryClass={cs.IT math.IT} }
yang2006on
arxiv-674985
cs/0610113
CHAC. A MOACO Algorithm for Computation of Bi-Criteria Military Unit Path in the Battlefield
<|reference_start|>CHAC. A MOACO Algorithm for Computation of Bi-Criteria Military Unit Path in the Battlefield: In this paper we propose a Multi-Objective Ant Colony Optimization (MOACO) algorithm called CHAC, which has been designed to solve the problem of finding the path on a map (corresponding to a simulated battlefield) that minimizes resources while maximizing safety. CHAC has been tested with two different state transition rules: an aggregative function that combines the heuristic and pheromone information of both objectives, and a second one based on the dominance concept of multiobjective optimization problems. These rules have been evaluated in several different situations (maps with different degrees of difficulty), and we have found that they yield better results than a greedy algorithm (taken as baseline), as well as military behaviour that is better in the tactical sense. The aggregative function, in general, yields better results than the one based on dominance.<|reference_end|>
arxiv
@article{mora2006chac., title={CHAC. A MOACO Algorithm for Computation of Bi-Criteria Military Unit Path in the Battlefield}, author={A.M. Mora, J.J. Merelo, C. Millan, J. Torrecillas, J.L.J. Laredo}, journal={Published in Proceedings of the Workshop on Nature Inspired Cooperative Strategies for Optimization. NICSO'2006, Pelta & Krasnogor, (eds) pp 85-98, Jun. 2006}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610113}, primaryClass={cs.MA cs.CC} }
mora2006chac.
arxiv-674986
cs/0610114
Instant Computing - A New Computation Paradigm
<|reference_start|>Instant Computing - A New Computation Paradigm: Voltage peaks on a conventional computer's power lines allow for the well-known dangerous DPA attacks. We show that measurement of a quantum computer's transient state during a computational step reveals information about a complete computation of arbitrary length, which can be extracted by repeated probing, if the computer is suitably programmed. Instant computing, as we name this mode of operation, recognizes for any total or partial recursive function arguments lying in the domain of definition and yields their function value with arbitrary small error probability in probabilistic linear time. This implies recognition of (not necessarily recursively enumerable) complements of recursively enumerable sets and the solution of the halting problem. Future quantum computers are shown to be likely to allow for instant computing, and some consequences are pointed out.<|reference_end|>
arxiv
@article{thomann2006instant, title={Instant Computing - A New Computation Paradigm}, author={Hans-Rudolf Thomann}, journal={arXiv preprint arXiv:cs/0610114}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610114}, primaryClass={cs.CC cs.CR quant-ph} }
thomann2006instant
arxiv-674987
cs/0610115
An Achievable Rate Region for the Gaussian Interference Channel
<|reference_start|>An Achievable Rate Region for the Gaussian Interference Channel: An achievable rate region for the Gaussian interference channel is derived using Sato's modified frequency division multiplexing idea and a special case of Han and Kobayashi's rate region (denoted by $\Gmat^\prime$). We show that the new inner bound includes $\Gmat^\prime$, Sason's rate region $\Dmat$, as well as the achievable region via TDM/FDM, as its subsets. The advantage of this improved inner bound over $\Gmat^\prime$ arises due to its inherent ability to utilize the whole transmit power range on the real line without violating the power constraint. We also provide analysis to examine the conditions for the new achievable region to strictly extend $\Gmat^\prime$.<|reference_end|>
arxiv
@article{shang2006an, title={An Achievable Rate Region for the Gaussian Interference Channel}, author={Xiaohu Shang, Biao Chen}, journal={arXiv preprint arXiv:cs/0610115}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610115}, primaryClass={cs.IT math.IT} }
shang2006an
arxiv-674988
cs/0610116
DepAnn - An Annotation Tool for Dependency Treebanks
<|reference_start|>DepAnn - An Annotation Tool for Dependency Treebanks: DepAnn is an interactive annotation tool for dependency treebanks, providing both graphical and text-based annotation interfaces. The tool is aimed at semi-automatic creation of treebanks. It aids the manual inspection and correction of automatically created parses, making the annotation process faster and less error-prone. A novel feature of the tool is that it enables the user to view outputs from several parsers as the basis for creating the final tree to be saved to the treebank. DepAnn uses TIGER-XML, an XML-based general encoding format, both for representing the parser outputs and for saving the annotated treebank. The tool includes an automatic consistency checker for sentence structures. In addition, the tool enables users to build structures manually, add comments on the annotations, modify the tagsets, and mark sentences for further revision.<|reference_end|>
arxiv
@article{kakkonen2006depann, title={DepAnn - An Annotation Tool for Dependency Treebanks}, author={Tuomo Kakkonen}, journal={Proceedings of the 11th ESSLLI Student Session at the 18th European Summer School in Logic, Language and Information (ESSLLI 2006), pp. 214-225. Malaga, Spain, 2006}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610116}, primaryClass={cs.CL} }
kakkonen2006depann
arxiv-674989
cs/0610117
Quantifier elimination for the reals with a predicate for the powers of two
<|reference_start|>Quantifier elimination for the reals with a predicate for the powers of two: In 1985, van den Dries showed that the theory of the reals with a predicate for the integer powers of two admits quantifier elimination in an expanded language, and is hence decidable. He gave a model-theoretic argument, which provides no apparent bounds on the complexity of a decision procedure. We provide a syntactic argument that yields a procedure that is primitive recursive, although not elementary. In particular, we show that it is possible to eliminate a single block of existential quantifiers in time $2^0_{O(n)}$, where $n$ is the length of the input formula and $2_k^x$ denotes $k$-fold iterated exponentiation.<|reference_end|>
arxiv
@article{avigad2006quantifier, title={Quantifier elimination for the reals with a predicate for the powers of two}, author={Jeremy Avigad and Yimu Yin}, journal={arXiv preprint arXiv:cs/0610117}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610117}, primaryClass={cs.LO} }
avigad2006quantifier
arxiv-674990
cs/0610118
Applying Part-of-Speech Enhanced LSA to Automatic Essay Grading
<|reference_start|>Applying Part-of-Speech Enhanced LSA to Automatic Essay Grading: Latent Semantic Analysis (LSA) is a widely used Information Retrieval method based on the "bag-of-words" assumption. However, according to general conception, syntax plays a role in representing the meaning of sentences. Thus, enhancing LSA with part-of-speech (POS) information to capture the context of word occurrences appears to be a theoretically feasible extension. The approach is tested empirically on an automatic essay grading system using LSA for document similarity comparisons. A comparison of several POS-enhanced LSA models is reported. Our findings show that the addition of contextual information in the form of POS tags can raise the accuracy of the LSA-based scoring models by up to 10.77 per cent.<|reference_end|>
arxiv
@article{kakkonen2006applying, title={Applying Part-of-Speech Enhanced LSA to Automatic Essay Grading}, author={Tuomo Kakkonen, Niko Myller, Erkki Sutinen}, journal={Proceedings of the 4th IEEE International Conference on Information Technology: Research and Education (ITRE 2006). Tel Aviv, Israel, 2006}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610118}, primaryClass={cs.IR cs.CL} }
kakkonen2006applying
arxiv-674991
cs/0610119
Approximate Convex Optimization by Online Game Playing
<|reference_start|>Approximate Convex Optimization by Online Game Playing: Lagrangian relaxation and approximate optimization algorithms have received much attention in the last two decades. Typically, the running time of these methods to obtain an $\epsilon$-approximate solution is proportional to $\frac{1}{\epsilon^2}$. Recently, Bienstock and Iyengar, following Nesterov, gave an algorithm for fractional packing linear programs which runs in $\frac{1}{\epsilon}$ iterations. The latter algorithm requires solving a convex quadratic program every iteration - an optimization subroutine which dominates the theoretical running time. We give an algorithm for convex programs with strictly convex constraints which runs in time proportional to $\frac{1}{\epsilon}$. The algorithm does NOT require solving any quadratic program, but uses gradient steps and elementary operations only. Problems which have strictly convex constraints include maximum entropy frequency estimation, portfolio optimization with loss risk constraints, and various computational problems in signal processing. As a side product, we also obtain a simpler version of Bienstock and Iyengar's result for general linear programming, with similar running time. We derive these algorithms using a new framework for deriving convex optimization algorithms from online game playing algorithms, which may be of independent interest.<|reference_end|>
arxiv
@article{hazan2006approximate, title={Approximate Convex Optimization by Online Game Playing}, author={Elad Hazan}, journal={arXiv preprint arXiv:cs/0610119}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610119}, primaryClass={cs.DS} }
hazan2006approximate
arxiv-674992
cs/0610120
Classdesc and Graphcode: support for scientific programming in C++
<|reference_start|>Classdesc and Graphcode: support for scientific programming in C++: Object-oriented programming languages such as Java and Objective C have become popular for implementing agent-based and other object-based simulations since objects in those languages can {\em reflect} (i.e. make runtime queries of an object's structure). This allows, for example, a fairly trivial {\em serialisation} routine (conversion of an object into a binary representation that can be stored or passed over a network) to be written. However C++ does not offer this ability, as type information is thrown away at compile time. Yet C++ is often a preferred development environment, whether for performance reasons or for its expressive features such as operator overloading. In scientific coding, changes to a model's code take place constantly, as the model is refined and different phenomena are studied. Yet traditionally, facilities such as checkpointing, routines for initialising model parameters and analysis of model output depend on the underlying model remaining static; otherwise, each time a model is modified, a whole slew of supporting routines needs to be changed to reflect the new data structures. Reflection offers the advantage of the simulation framework adapting to the underlying model without programmer intervention, reducing the effort of modifying the model. In this paper, we present the {\em Classdesc} system which brings many of the benefits of object reflection to C++, {\em ClassdescMP} which dramatically simplifies coding of MPI based parallel programs, and {\em Graphcode}, a general purpose data parallel programming environment.<|reference_end|>
arxiv
@article{standish2006classdesc, title={Classdesc and Graphcode: support for scientific programming in C++}, author={Russell K. Standish and Duraid Madina}, journal={arXiv preprint arXiv:cs/0610120}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610120}, primaryClass={cs.MS cs.CE cs.DC} }
standish2006classdesc
arxiv-674993
cs/0610121
Construction algorithm for network error-correcting codes attaining the Singleton bound
<|reference_start|>Construction algorithm for network error-correcting codes attaining the Singleton bound: We give a centralized deterministic algorithm for constructing linear network error-correcting codes that attain the Singleton bound of network error-correcting codes. The proposed algorithm is based on the algorithm by Jaggi et al. We give estimates on the time complexity and the required symbol size of the proposed algorithm. We also estimate the probability that a random choice of local encoding vectors by all intermediate nodes gives a network error-correcting code attaining the Singleton bound. We also clarify the relationship between robust network coding and network error-correcting codes with known locations of errors.<|reference_end|>
arxiv
@article{matsumoto2006construction, title={Construction algorithm for network error-correcting codes attaining the Singleton bound}, author={Ryutaroh Matsumoto}, journal={IEICE Trans. Fundamentals, vol. E90-A, no. 9, pp. 1729-1735, September 2007}, year={2006}, doi={10.1093/ietfec/e90-a.9.1729}, archivePrefix={arXiv}, eprint={cs/0610121}, primaryClass={cs.IT cs.DM cs.NI math.IT} }
matsumoto2006construction
arxiv-674994
cs/0610122
Faithful Polynomial Evaluation with Compensated Horner Algorithm
<|reference_start|>Faithful Polynomial Evaluation with Compensated Horner Algorithm: This paper presents two sufficient conditions to ensure a faithful evaluation of polynomials in IEEE-754 floating point arithmetic. Faithfulness means that the computed value is one of the two floating point neighbours of the exact result; it can be satisfied using a more accurate algorithm than the classic Horner scheme. One condition provided here is an a priori bound on the polynomial condition number derived from the error analysis of the compensated Horner algorithm. The second condition is both dynamic and validated, checking at run time the faithfulness of a given evaluation. Numerical experiments illustrate the behavior of these two conditions and show that the associated run-time overhead is reasonable.<|reference_end|>
arxiv
@article{langlois2006faithful, title={Faithful Polynomial Evaluation with Compensated Horner Algorithm}, author={Philippe Langlois (LP2A-DALI), Nicolas Louvet (LP2A-DALI)}, journal={arXiv preprint arXiv:cs/0610122}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610122}, primaryClass={cs.NA cs.MS} }
langlois2006faithful
arxiv-674995
cs/0610123
Proof Nets and the Identity of Proofs
<|reference_start|>Proof Nets and the Identity of Proofs: These are the notes for a 5-lecture-course given at ESSLLI 2006 in Malaga, Spain. The URL of the school is http://esslli2006.lcc.uma.es/ . This version slightly differs from the one which has been distributed at the school because typos have been removed and comments and suggestions by students have been worked in. The course is intended to be introductory. That means no prior knowledge of proof nets is required. However, the student should be familiar with the basics of propositional logic, and should have seen formal proofs in some formal deductive system (e.g., sequent calculus, natural deduction, resolution, tableaux, calculus of structures, Frege-Hilbert-systems, ...). It is probably helpful if the student knows already what cut elimination is, but this is not strictly necessary. In these notes, I will introduce the concept of ``proof nets'' from the viewpoint of the problem of the identity of proofs. I will proceed in a rather informal way. The focus will be more on presenting ideas than on presenting technical details. The goal of the course is to give the student an overview of the theory of proof nets and make the vast amount of literature on the topic more easily accessible to the beginner. For introducing the basic concepts of the theory, I will in the first part of the course stick to the unit-free multiplicative fragment of linear logic because of its rather simple notion of proof nets. In the second part of the course we will see proof nets for more sophisticated logics. This is a basic introduction into proof nets from the perspective of the identity of proofs. We discuss how deductive proofs can be translated into proof nets and what a correctness criterion is.<|reference_end|>
arxiv
@article{strassburger2006proof, title={Proof Nets and the Identity of Proofs}, author={Lutz Strassburger (INRIA Futurs)}, journal={arXiv preprint arXiv:cs/0610123}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610123}, primaryClass={cs.LO} }
strassburger2006proof
arxiv-674996
cs/0610124
Dependency Treebanks: Methods, Annotation Schemes and Tools
<|reference_start|>Dependency Treebanks: Methods, Annotation Schemes and Tools: In this paper, current dependency-based treebanks are introduced and analyzed. The methods used for building the resources, the annotation schemes applied, and the tools used (such as POS taggers, parsers and annotation software) are discussed.<|reference_end|>
arxiv
@article{kakkonen2006dependency, title={Dependency Treebanks: Methods, Annotation Schemes and Tools}, author={Tuomo Kakkonen}, journal={Proceedings of the 15th Nordic Conference of Computational Linguistics (NODALIDA 2005), pp. 94-104. Joensuu, Finland, 2005}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610124}, primaryClass={cs.CL} }
kakkonen2006dependency
arxiv-674997
cs/0610125
Report on article: P=NP Linear programming formulation of the Traveling Salesman Problem
<|reference_start|>Report on article: P=NP Linear programming formulation of the Traveling Salesman Problem: This article presents counterexamples for three articles claiming that P=NP. The articles to which it applies are: Moustapha Diaby "P = NP: Linear programming formulation of the traveling salesman problem" and "Equality of complexity classes P and NP: Linear programming formulation of the quadratic assignment problem", and also Sergey Gubin "A Polynomial Time Algorithm for The Traveling Salesman Problem"<|reference_end|>
arxiv
@article{hofman2006report, title={Report on article: P=NP Linear programming formulation of the Traveling Salesman Problem}, author={Radoslaw Hofman}, journal={arXiv preprint arXiv:cs/0610125}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610125}, primaryClass={cs.CC cs.DM cs.DS} }
hofman2006report
arxiv-674998
cs/0610126
Fitness Uniform Optimization
<|reference_start|>Fitness Uniform Optimization: In evolutionary algorithms, the fitness of a population increases with time by mutating and recombining individuals and by a biased selection of more fit individuals. The right selection pressure is critical in ensuring sufficient optimization progress on the one hand and in preserving genetic diversity to be able to escape from local optima on the other hand. Motivated by a universal similarity relation on the individuals, we propose a new selection scheme, which is uniform in the fitness values. It generates selection pressure toward sparsely populated fitness regions, not necessarily toward higher fitness, as is the case for all other selection schemes. We show analytically on a simple example that the new selection scheme can be much more effective than standard selection schemes. We also propose a new deletion scheme which achieves a similar result via deletion and show how such a scheme preserves genetic diversity more effectively than standard approaches. We compare the performance of the new schemes to tournament selection and random deletion on an artificial deceptive problem and a range of NP-hard problems: traveling salesman, set covering and satisfiability.<|reference_end|>
arxiv
@article{hutter2006fitness, title={Fitness Uniform Optimization}, author={Marcus Hutter and Shane Legg}, journal={IEEE Transactions on Evolutionary Computation, 10:5 (2006) 568-589}, year={2006}, doi={10.1109/TEVC.2005.863127}, number={IDSIA-16-06}, archivePrefix={arXiv}, eprint={cs/0610126}, primaryClass={cs.NE cs.LG} }
hutter2006fitness
arxiv-674999
cs/0610127
The intersection and the union of the asynchronous systems
<|reference_start|>The intersection and the union of the asynchronous systems: The asynchronous systems $f$ are the models of the asynchronous circuits from digital electrical engineering. They are multi-valued functions that associate to each input $u:\mathbf{R}\to \{0,1\}^{m}$ a set of states $x\in f(u),$ where $x:\mathbf{R}\to \{0,1\}^{n}.$ The intersection of the systems allows adding supplementary conditions in modeling, and the union of the systems allows considering the validity of one of two systems in modeling, for example when testing the asynchronous circuits and the circuit is supposed to be 'good' or 'bad'. The purpose of the paper is to analyze the intersection and the union with respect to the initial/final states, initial/final time, initial/final state functions, subsystems, dual systems, inverse systems, the Cartesian product of systems, and the parallel and serial connection of systems.<|reference_end|>
arxiv
@article{vlad2006the, title={The intersection and the union of the asynchronous systems}, author={Serban E. Vlad}, journal={arXiv preprint arXiv:cs/0610127}, year={2006}, archivePrefix={arXiv}, eprint={cs/0610127}, primaryClass={cs.GL} }
vlad2006the
arxiv-675000
cs/0610128
Hierarchical Bin Buffering: Online Local Moments for Dynamic External Memory Arrays
<|reference_start|>Hierarchical Bin Buffering: Online Local Moments for Dynamic External Memory Arrays: Local moments are used for local regression, to compute statistical measures such as sums, averages, and standard deviations, and to approximate probability distributions. We consider the case where the data source is a very large I/O array of size n and we want to compute the first N local moments, for some constant N. Without precomputation, this requires O(n) time. We develop a sequence of algorithms of increasing sophistication that use precomputation and additional buffer space to speed up queries. The simpler algorithms partition the I/O array into consecutive ranges called bins, and they are applicable not only to local-moment queries, but also to algebraic queries (MAX, AVERAGE, SUM, etc.). With N buffers of size sqrt{n}, time complexity drops to O(sqrt{n}). A more sophisticated approach uses hierarchical buffering and has a logarithmic time complexity (O(b log_b n)), when using N hierarchical buffers of size n/b. Using Overlapped Bin Buffering, we show that only a single buffer is needed, as with wavelet-based algorithms, but using much less storage. Applications exist in multidimensional and statistical databases over massive data sets, interactive image processing, and visualization.<|reference_end|>
arxiv
@article{lemire2006hierarchical, title={Hierarchical Bin Buffering: Online Local Moments for Dynamic External Memory Arrays}, author={Daniel Lemire and Owen Kaser}, journal={ACM Transactions on Algorithms 4(1): 14 (2008)}, year={2006}, doi={10.1145/1328911.1328925}, archivePrefix={arXiv}, eprint={cs/0610128}, primaryClass={cs.DS cs.DB} }
lemire2006hierarchical