Columns:
corpus_id: string (lengths 7 to 12)
paper_id: string (lengths 9 to 16)
title: string (lengths 1 to 261)
abstract: string (lengths 70 to 4.02k)
source: string (1 distinct value)
bibtex: string (lengths 208 to 20.9k)
citation_key: string (lengths 6 to 100)
arxiv-674301
cs/0605143
High-level synthesis under I/O Timing and Memory constraints
<|reference_start|>High-level synthesis under I/O Timing and Memory constraints: The design of complex Systems-on-Chips requires taking communication and memory access constraints into account when integrating dedicated hardware accelerators. In this paper, we present a methodology and a tool that allow the High-Level Synthesis of DSP algorithms under both I/O timing and memory constraints. Based on formal models and a generic architecture, this tool helps the designer to find a reasonable trade-off between the required I/O timing behavior and the internal memory access parallelism of the circuit. The interest of our approach is demonstrated on the case study of an FFT algorithm.<|reference_end|>
arxiv
@article{coussy2006high-level, title={High-level synthesis under I/O Timing and Memory constraints}, author={Philippe Coussy (LESTER), Gwenol\'e Corre (LESTER), Pierre Bomel (LESTER), Eric Senn (LESTER), Eric Martin (LESTER)}, journal={International Symposium on Circuits And Systems (2005) 680-683}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605143}, primaryClass={cs.AR} }
coussy2006high-level
arxiv-674302
cs/0605144
A Memory Aware High Level Synthesis Tool
<|reference_start|>A Memory Aware High Level Synthesis Tool: We introduce a new approach to take into account the memory architecture and the memory mapping in High-Level Synthesis for data-intensive applications. We formalize the memory mapping as a set of constraints for the synthesis, and define a Memory Constraint Graph and an accessibility criterion to be used in the scheduling step. We use a memory mapping file to include those memory constraints in our HLS tool GAUT. It is possible, with the help of GAUT, to explore a wide range of solutions, and to reach a good tradeoff between time, power consumption, and area.<|reference_end|>
arxiv
@article{corre2006a, title={A Memory Aware High Level Synthesis Tool}, author={Gwenol\'e Corre (LESTER), Nathalie Julien (LESTER), Eric Senn (LESTER), Eric Martin (LESTER)}, journal={International Symposium on VLSI (2004) 279-280}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605144}, primaryClass={cs.AR} }
corre2006a
arxiv-674303
cs/0605145
Memory Aware High-Level Synthesis for Embedded Systems
<|reference_start|>Memory Aware High-Level Synthesis for Embedded Systems: We introduce a new approach to take into account the memory architecture and the memory mapping in the High-Level Synthesis of real-time embedded systems. We formalize the memory mapping as a set of constraints used in the scheduling step. We use a memory mapping file to include those memory constraints in our HLS tool GAUT. Our scheduling algorithm exhibits a relatively low complexity that makes it possible to tackle complex designs in a reasonable time. Finally, we show how to explore, with the help of GAUT, a wide range of solutions, and to reach a good tradeoff between time, power consumption, and area.<|reference_end|>
arxiv
@article{corre2006memory, title={Memory Aware High-Level Synthesis for Embedded Systems}, author={Gwenol\'e Corre (LESTER), Eric Senn (LESTER), Nathalie Julien (LESTER), Eric Martin (LESTER)}, journal={IADIS conference on Applied Computing, Portugal (2004) 499-506}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605145}, primaryClass={cs.AR} }
corre2006memory
arxiv-674304
cs/0605146
Synth\`ese Comportementale Sous Contraintes de Communication et de Placement M\'emoire pour les composants du TDSI
<|reference_start|>Synth\`ese Comportementale Sous Contraintes de Communication et de Placement M\'emoire pour les composants du TDSI: The design of complex Digital Signal Processing systems requires minimizing architectural cost and maximizing timing performance while taking into account communication and memory access constraints for the integration of dedicated hardware accelerators. Unfortunately, the traditional Matlab/Simulink design flows gather rather inflexible hardware blocks. In this paper, we present a methodology and a tool that permit the High-Level Synthesis of DSP applications, under both I/O timing and memory constraints. Based on formal models and a generic architecture, our tool GAUT helps the designer in finding a reasonable trade-off between the circuit's performance and its architectural complexity. The efficiency of our approach is demonstrated on the case study of an FFT algorithm.<|reference_end|>
arxiv
@article{corre2006synth\`{e}se, title={Synth\`{e}se Comportementale Sous Contraintes de Communication et de Placement M\'{e}moire pour les composants du TDSI}, author={Gwenol\'e Corre (LESTER), Philippe Coussy (LESTER), Pierre Bomel (LESTER), Eric Senn (LESTER), Eric Martin (LESTER)}, journal={GRETSI'05 (Colloque sur le Traitement du Signal et de l'Image), Belgique (2005) 779-782}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605146}, primaryClass={cs.AR} }
corre2006synth\`{e}se
arxiv-674305
cs/0605147
Utilisation de la linguistique en reconnaissance de la parole : un \'etat de l'art
<|reference_start|>Utilisation de la linguistique en reconnaissance de la parole : un \'etat de l'art: To transcribe speech, automatic speech recognition systems use statistical methods, particularly hidden Markov models and N-gram models. Although these techniques perform well and lead to efficient systems, they are approaching the limits of their possibilities. It thus seems necessary, in order to improve on current results, to use additional information, especially information bound to language. However, introducing such knowledge must be done while taking into account the specificities of spoken language (hesitations, for example) and remaining robust to possibly misrecognized words. This document presents a state of the art of this research, evaluating the impact of the insertion of linguistic information on the quality of the transcription.<|reference_end|>
arxiv
@article{huet2006utilisation, title={Utilisation de la linguistique en reconnaissance de la parole : un \'{e}tat de l'art}, author={St\'ephane Huet (IRISA / INRIA Rennes), Pascale S\'ebillot (IRISA / INRIA Rennes), Guillaume Gravier (IRISA / INRIA Rennes)}, journal={arXiv preprint arXiv:cs/0605147}, year={2006}, archivePrefix={arXiv}, eprint={cs/0605147}, primaryClass={cs.HC cs.CL} }
huet2006utilisation
arxiv-674306
cs/0606001
Tight Bounds for the Min-Max Boundary Decomposition Cost of Weighted Graphs
<|reference_start|>Tight Bounds for the Min-Max Boundary Decomposition Cost of Weighted Graphs: Many load balancing problems that arise in scientific computing applications ask to partition a graph with weights on the vertices and costs on the edges into a given number of almost equally-weighted parts such that the maximum boundary cost over all parts is small. Here, this partitioning problem is considered for bounded-degree graphs G=(V,E) with edge costs c: E->R+ that have a p-separator theorem for some p>1, i.e., any (arbitrarily weighted) subgraph of G can be separated into two parts of roughly the same weight by removing a vertex set S such that the edges incident to S in the subgraph have total cost at most proportional to (SUM_e c^p_e)^(1/p), where the sum is over all edges e in the subgraph. We show for all positive integers k and weights w that the vertices of G can be partitioned into k parts such that the weight of each part differs from the average weight by less than MAX{w_v; v in V}, and the boundary edges of each part have cost at most proportional to (SUM_e c_e^p/k)^(1/p) + MAX_e c_e. The partition can be computed in time nearly proportional to the time for computing a separator S of G. Our upper bound on the boundary costs is shown to be tight up to a constant factor for infinitely many instances with a broad range of parameters. Previous results achieved this bound only if one has c=1, w=1, and one allows parts with weight exceeding the average by a constant fraction.<|reference_end|>
arxiv
@article{steurer2006tight, title={Tight Bounds for the Min-Max Boundary Decomposition Cost of Weighted Graphs}, author={David Steurer}, journal={arXiv preprint arXiv:cs/0606001}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606001}, primaryClass={cs.DS cs.DM} }
steurer2006tight
arxiv-674307
cs/0606002
Mining Behavioral Groups in Large Wireless LANs
<|reference_start|>Mining Behavioral Groups in Large Wireless LANs: One vision of future wireless networks is that they will be deeply integrated and embedded in our lives and will involve the use of personalized mobile devices. User behavior in such networks is bound to affect the network performance. It is imperative to study and characterize the fundamental structure of wireless user behavior in order to model, manage, leverage and design efficient mobile networks. It is also important to make such study as realistic as possible, based on extensive measurements collected from existing deployed wireless networks. In this study, using our systematic TRACE approach, we analyze wireless users' behavioral patterns by extensively mining wireless network logs from two major university campuses. We represent the data using location preference vectors, and utilize unsupervised learning (clustering) to classify trends in user behavior using novel similarity metrics. Matrix decomposition techniques are used to identify (and differentiate between) major patterns. While our findings validate intuitive repetitive behavioral trends and user grouping, it is surprising to find the qualitative commonalities of user behaviors from the two universities. We discover multi-modal user behavior for more than 60% of the users, and there are hundreds of distinct groups with unique behavioral patterns in both campuses. The sizes of the major groups follow a power-law distribution. Our methods and findings provide an essential step towards network management and behavior-aware network protocols and applications, to name a few.<|reference_end|>
arxiv
@article{hsu2006mining, title={Mining Behavioral Groups in Large Wireless LANs}, author={Wei-jen Hsu, Debojyoti Dutta, and Ahmed Helmy}, journal={arXiv preprint arXiv:cs/0606002}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606002}, primaryClass={cs.NI} }
hsu2006mining
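The TRACE-style analysis described in this abstract (location preference vectors plus unsupervised clustering) can be illustrated in a few lines. The toy usage matrix, the row normalization, and the choice of k-means with k=2 below are assumptions for illustration, not the authors' actual pipeline or similarity metrics.

```python
# Illustrative sketch (not the authors' TRACE pipeline): cluster users by
# the fraction of online time they spend at each access-point location.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical input: rows = users, columns = locations, entries = hours online.
usage = np.array([
    [9.0, 1.0, 0.0],   # user mostly at location 0
    [8.5, 0.5, 1.0],
    [0.0, 2.0, 8.0],   # user mostly at location 2
    [1.0, 1.0, 8.0],
])

# Location preference vectors: normalize each row to sum to 1.
pref = usage / usage.sum(axis=1, keepdims=True)

# Unsupervised grouping; k is a free parameter chosen arbitrarily here.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pref)
print(labels)  # users with similar location preferences share a label
```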
arxiv-674308
cs/0606003
Modeling Aspect Mechanisms: A Top-Down Approach
<|reference_start|>Modeling Aspect Mechanisms: A Top-Down Approach: A plethora of diverse aspect mechanisms exist today, all of which integrate concerns into artifacts that exhibit crosscutting structure. What we lack and need is a characterization of the design space that these aspect mechanisms inhabit and a model description of their weaving processes. A good design space representation provides a common framework for understanding and evaluating existing mechanisms. A well-understood model of the weaving process can guide the implementor of new aspect mechanisms. It can guide the designer when mechanisms implementing new kinds of weaving are needed. It can also help teach aspect-oriented programming (AOP). In this paper we present and evaluate such a model of the design space for aspect mechanisms and their weaving processes. We model weaving, at an abstract level, as a concern integration process. We derive a weaving process model (WPM) top-down, differentiating a reactive from a nonreactive process. The model provides an in-depth explanation of the key subprocesses of existing aspect mechanisms.<|reference_end|>
arxiv
@article{kojarski2006modeling, title={Modeling Aspect Mechanisms: A Top-Down Approach}, author={Sergei Kojarski and David H. Lorenz}, journal={In Proceedings of the 28th International Conference on Software Engineering (ICSE'06), pages 212--221, Shanghai, China, May 20-28, 2006}, year={2006}, number={CS-2006-04}, archivePrefix={arXiv}, eprint={cs/0606003}, primaryClass={cs.SE cs.PL} }
kojarski2006modeling
arxiv-674309
cs/0606004
A Framework for the Development of Manufacturing Simulators: Towards New Generation of Simulation Systems
<|reference_start|>A Framework for the Development of Manufacturing Simulators: Towards New Generation of Simulation Systems: In this paper, an attempt is made to systematically discuss the development of simulation systems for manufacturing system design. General requirements on manufacturing simulators are formulated and a framework to address the requirements is suggested. Problems of information representation as an activity underlying simulation are considered. This is to form the necessary mathematical foundation for manufacturing simulations. The theoretical findings are explored through a pilot study. A conclusion about the suitability of the suggested approach to the development of simulation systems for manufacturing system design is made, and implications for future research are described.<|reference_end|>
arxiv
@article{kryssanov2006a, title={A Framework for the Development of Manufacturing Simulators: Towards New Generation of Simulation Systems}, author={V.V. Kryssanov, V.A. Abramov, H. Hibino, Y. Fukuda}, journal={In: H. Fujimoto and R.E. DeVor (eds), Proceedings of the 1998 Japan-U.S.A. Symposium on Flexible Automation. 1998, Vol. III, pp. 1307-1314}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606004}, primaryClass={cs.CE cs.HC} }
kryssanov2006a
arxiv-674310
cs/0606005
The KAA project: a trust policy point of view
<|reference_start|>The KAA project: a trust policy point of view: In the context of ambient networks, where each small device must trust its neighborhood rather than a fixed network, we propose in this paper a trust management framework inspired by known social patterns and based on the following statements: each mobile constructs its local level of trust by itself, which means that it does not accept recommendations from other peers, and the only relevant parameter for evaluating the level of trust, beyond some special cases discussed later, is the number of common trusted mobiles. These trusted mobiles are kept as entries in a local database, called the history, on each device, and we use identity-based cryptography to ensure strong security: the history must be a non-transferable object.<|reference_end|>
arxiv
@article{galice2006the, title={The KAA project: a trust policy point of view}, author={Samuel Galice (INRIA Rh\^one-Alpes), V\'eronique Legrand (INRIA Rh\^one-Alpes), Marine Minier (INRIA Rh\^one-Alpes), John Mullins (INRIA Rh\^one-Alpes), St\'ephane Ub\'eda (INRIA Rh\^one-Alpes)}, journal={arXiv preprint arXiv:cs/0606005}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606005}, primaryClass={cs.NI} }
galice2006the
arxiv-674311
cs/0606006
Foundations of Modern Language Resource Archives
<|reference_start|>Foundations of Modern Language Resource Archives: A number of serious reasons will convince an increasing number of researchers to store their relevant material in centers which we will call "language resource archives". These centers combine the duty of taking care of long-term preservation with the task of giving different user groups access to their material. Access here is meant in the sense that active interaction with the data will be made possible to support the integration of new data, new versions, or commentaries of all sorts. Modern language resource archives will have to adhere to a number of basic principles to fulfill all requirements, and they will have to be involved in federations to create joint language resource domains, making it even simpler for researchers to access the data. This paper attempts to formulate the essential pillars to which language resource archives have to adhere.<|reference_end|>
arxiv
@article{wittenburg2006foundations, title={Foundations of Modern Language Resource Archives}, author={Peter Wittenburg (MPIPS), Daan Broeder (MPIPS), Wolfgang Klein (MPIPS), Stephen Levinson (MPIPS), Laurent Romary (INRIA Lorraine - LORIA)}, journal={arXiv preprint arXiv:cs/0606006}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606006}, primaryClass={cs.CL} }
wittenburg2006foundations
arxiv-674312
cs/0606007
A parent-centered radial layout algorithm for interactive graph visualization and animation
<|reference_start|>A parent-centered radial layout algorithm for interactive graph visualization and animation: We have developed (1) a graph visualization system that allows users to explore graphs by viewing them as a succession of spanning trees selected interactively, (2) a radial graph layout algorithm, and (3) an animation algorithm that generates meaningful visualizations and smooth transitions between graphs while minimizing edge crossings during transitions and in static layouts. Our system is similar to the radial layout system of Yee et al. (2001), but differs primarily in that each node is positioned on a coordinate system centered on its own parent rather than on a single coordinate system for all nodes. Our system is thus easy to define recursively and lends itself to parallelization. It also guarantees that layouts have many nice properties; for example, certain edges never cross during an animation. We compared the layouts and transitions produced by our algorithms to those produced by Yee et al. Results from several experiments indicate that our system produces fewer edge crossings during transitions between graph drawings, and that the transitions more often involve changes in local scaling rather than structure. These findings suggest the system has promise as an interactive graph exploration tool in a variety of settings.<|reference_end|>
arxiv
@article{pavlo2006a, title={A parent-centered radial layout algorithm for interactive graph visualization and animation}, author={Andrew Pavlo (1), Christopher Homan (2), Jonathan Schull (2) ((1) University of Wisconsin-Madison, (2) Rochester Institute of Technology)}, journal={arXiv preprint arXiv:cs/0606007}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606007}, primaryClass={cs.HC cs.CG cs.GR} }
pavlo2006a
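The core idea of this abstract, positioning each node on a circle centered on its own parent, admits a compact recursive sketch. The sector-splitting rule, the depth-halved radius, and the example tree below are illustrative assumptions, not the published algorithm.

```python
# Toy parent-centered radial layout: each child is placed on a circle
# centered at its parent, inside the angular sector allotted to the parent.
import math

def layout(tree, node, x, y, angle_lo, angle_hi, radius, pos):
    """Assign (x, y) positions; children split the parent's angular sector."""
    pos[node] = (x, y)
    children = tree.get(node, [])
    if not children:
        return pos
    span = (angle_hi - angle_lo) / len(children)
    for i, child in enumerate(children):
        theta = angle_lo + (i + 0.5) * span   # center of the child's sub-sector
        cx = x + radius * math.cos(theta)     # circle centered on the parent
        cy = y + radius * math.sin(theta)
        layout(tree, child, cx, cy,
               angle_lo + i * span, angle_lo + (i + 1) * span,
               radius * 0.5, pos)             # shrink the radius with depth
    return pos

tree = {"r": ["a", "b"], "a": ["c", "d"]}     # hypothetical tree
print(layout(tree, "r", 0.0, 0.0, 0.0, 2 * math.pi, 1.0, {}))
```

Because each subtree depends only on its parent's position and sector, the recursion parallelizes naturally, which matches the abstract's remark.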
arxiv-674313
cs/0606008
Repository Replication Using NNTP and SMTP
<|reference_start|>Repository Replication Using NNTP and SMTP: We present the results of a feasibility study using shared, existing, network-accessible infrastructure for repository replication. We investigate how dissemination of repository contents can be "piggybacked" on top of existing email and Usenet traffic. Long-term persistence of the replicated repository may be achieved thanks to current policies and procedures which ensure that mail messages and news posts are retrievable for evidentiary and other legal purposes for many years after the creation date. While the preservation issues of migration and emulation are not addressed with this approach, it does provide a simple method of refreshing content with unknown partners.<|reference_end|>
arxiv
@article{smith2006repository, title={Repository Replication Using NNTP and SMTP}, author={Joan A. Smith, Martin Klein, Michael L. Nelson}, journal={arXiv preprint arXiv:cs/0606008}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606008}, primaryClass={cs.DL} }
smith2006repository
arxiv-674314
cs/0606009
The Consequences of Eliminating NP Solutions
<|reference_start|>The Consequences of Eliminating NP Solutions: Given a function based on the computation of an NP machine, can one in general eliminate some solutions? That is, can one in general decrease the ambiguity? This simple question remains, even after extensive study by many researchers over many years, mostly unanswered. However, complexity-theoretic consequences and enabling conditions are known. In this tutorial-style article we look at some of those, focusing on the most natural framings: reducing the number of solutions of NP functions, refining the solutions of NP functions, and subtracting from or otherwise shrinking #P functions. We will see how small advice strings are important here, but we also will see how increasing advice size to achieve robustness is central to the proof of a key ambiguity-reduction result for NP functions.<|reference_end|>
arxiv
@article{faliszewski2006the, title={The Consequences of Eliminating NP Solutions}, author={Piotr Faliszewski and Lane A. Hemaspaandra}, journal={arXiv preprint arXiv:cs/0606009}, year={2006}, number={URCS TR-2006-898}, archivePrefix={arXiv}, eprint={cs/0606009}, primaryClass={cs.CC} }
faliszewski2006the
arxiv-674315
cs/0606010
A Decision-Making Support System Based on Know-How
<|reference_start|>A Decision-Making Support System Based on Know-How: The research results described are concerned with: (a) developing a domain modeling method and tools to support the design and implementation of decision-making support systems for computer integrated manufacturing; and (b) building a decision-making support system based on know-how, together with its software environment. The research is funded by NEDO, Japan.<|reference_end|>
arxiv
@article{kryssanov2006a, title={A Decision-Making Support System Based on Know-How}, author={V.V. Kryssanov, V.A. Abramov, Y. Fukuda, K. Konishi}, journal={CIRP Journal of Manufacturing Systems. 1998, Vol. 27, No.4, 427-432}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606010}, primaryClass={cs.CE cs.AI} }
kryssanov2006a
arxiv-674316
cs/0606011
Vectorial Resilient $PC(l)$ of Order $k$ Boolean Functions from AG-Codes
<|reference_start|>Vectorial Resilient $PC(l)$ of Order $k$ Boolean Functions from AG-Codes: Propagation criterion of degree $l$ and order $k$ ($PC(l)$ of order $k$) and resiliency of vectorial Boolean functions are important for cryptographic purposes (see [1, 2, 3, 6, 7, 8, 10, 11, 16]). Kurosawa and Satoh [8] and Carlet [1] gave constructions of Boolean functions satisfying $PC(l)$ of order $k$ from binary linear or nonlinear codes. In this paper, algebraic-geometric codes over $GF(2^m)$ are used to modify the Carlet and Kurosawa-Satoh constructions to give vectorial resilient Boolean functions satisfying $PC(l)$ of order $k$. The new construction is compared with previously known results.<|reference_end|>
arxiv
@article{chen2006vectorial, title={Vectorial Resilient $PC(l)$ of Order $k$ Boolean Functions from AG-Codes}, author={Hao Chen, Liang Ma and Jianhua Li}, journal={arXiv preprint arXiv:cs/0606011}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606011}, primaryClass={cs.CR cs.IT math.IT} }
chen2006vectorial
arxiv-674317
cs/0606012
On the communication between cells of a cellular automaton on the penta- and heptagrids of the hyperbolic plane
<|reference_start|>On the communication between cells of a cellular automaton on the penta- and heptagrids of the hyperbolic plane: This contribution belongs to a combinatorial approach to hyperbolic geometry and is aimed at possible applications to computer simulations. It is based on the splitting method, which was introduced by the author and which is recalled in the second section of the paper. We then briefly recall the application to the classical case of the pentagrid, i.e. the tiling of the hyperbolic plane which is generated by reflections of the regular rectangular pentagon in its sides and, recursively, of its images in their sides. From this application, we derived a system of coordinates to locate the tiles, allowing an implementation of cellular automata. At the software level, cells exchange messages thanks to a new representation which improves the speed of contacts between cells. In the new setting, communications are exchanged along actual geodesics, and the contribution of the cellular automaton is also linear in the coordinates of the cells.<|reference_end|>
arxiv
@article{margenstern2006on, title={On the communication between cells of a cellular automaton on the penta- and heptagrids of the hyperbolic plane}, author={Maurice Margenstern}, journal={Journal of Cellular Automata, 1(3), (2006), 213-232}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606012}, primaryClass={cs.CG cs.CC} }
margenstern2006on
arxiv-674318
cs/0606013
Good Illumination of Minimum Range
<|reference_start|>Good Illumination of Minimum Range: A point p is 1-well illuminated by a set F of n point lights if p lies in the interior of the convex hull of F. This concept corresponds to triangle-guarding or well-covering. In this paper we consider the illumination range of the light sources as a parameter to be optimized. First, we solve the problem of minimizing the light sources' illumination range to 1-well illuminate a given point p. We also compute a minimal set of light sources that 1-well illuminates p with minimum illumination range. Second, we solve the problem of minimizing the light sources' illumination range to 1-well illuminate all the points of a line segment with an O(n^2) algorithm. Finally, we give an O(n^2 log n) algorithm for preprocessing the data so that one can obtain the illumination range needed to 1-well illuminate a point of a line segment in O(log n) time. These results can be applied to solve problems of 1-well illuminating a trajectory by approximating it by a polygonal path.<|reference_end|>
arxiv
@article{abellanas2006good, title={Good Illumination of Minimum Range}, author={M. Abellanas, A. Bajuelos, G. Hern\'andez, F. Hurtado, I. Matos, B. Palop}, journal={arXiv preprint arXiv:cs/0606013}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606013}, primaryClass={cs.CG} }
abellanas2006good
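The paper's basic predicate, that p is 1-well illuminated by F exactly when p lies in the interior of the convex hull of F, can be tested directly in the plane. A minimal sketch assuming scipy is available; the tolerance eps and the sample points are arbitrary choices.

```python
# Check 1-well illumination in the plane: p must lie strictly inside
# the convex hull of the light sources F.
import numpy as np
from scipy.spatial import ConvexHull

def is_1_well_illuminated(p, F, eps=1e-12):
    """True iff p is in the interior of conv(F); F is an (n, 2) array."""
    hull = ConvexHull(F)
    # hull.equations rows are (a, b, c) with outward normal (a, b):
    # a*x + b*y + c <= 0 for hull points, strictly < 0 for interior points.
    return bool(np.all(hull.equations[:, :2] @ p + hull.equations[:, 2] < -eps))

F = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])  # light sources
print(is_1_well_illuminated(np.array([1.0, 1.0]), F))  # True: interior point
print(is_1_well_illuminated(np.array([0.0, 2.0]), F))  # False: on the boundary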
arxiv-674319
cs/0606014
On the Capacity of Multiple Access Channels with State Information and Feedback
<|reference_start|>On the Capacity of Multiple Access Channels with State Information and Feedback: In this paper, the multiple access channel (MAC) with channel state is analyzed in a scenario where a) the channel state is known non-causally to the transmitters and b) there is perfect causal feedback from the receiver to the transmitters. An achievable region and an outer bound are found for a discrete memoryless MAC that extend existing results, bringing together ideas from the two separate domains of MAC with state and MAC with feedback. Although this achievable region does not match the outer bound in general, special cases where they meet are identified. In the case of a Gaussian MAC, a specialized achievable region is found by using a combination of dirty paper coding and a generalization of the Schalkwijk-Kailath, Ozarow and Merhav-Weissman schemes, and this region is found to be capacity achieving. Specifically, it is shown that additive Gaussian interference that is known non-causally to the transmitter causes no loss in capacity for the Gaussian MAC with feedback.<|reference_end|>
arxiv
@article{wu2006on, title={On the Capacity of Multiple Access Channels with State Information and Feedback}, author={Wei Wu, Sriram Vishwanath and Ari Arapostathis}, journal={arXiv preprint arXiv:cs/0606014}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606014}, primaryClass={cs.IT math.IT} }
wu2006on
arxiv-674320
cs/0606015
The Size of Optimal Sequence Sets for Synchronous CDMA Systems
<|reference_start|>The Size of Optimal Sequence Sets for Synchronous CDMA Systems: The sum capacity on a symbol-synchronous CDMA system having processing gain $N$ and supporting $K$ power constrained users is achieved by employing at most $2N-1$ sequences. Analogously, the minimum received power (energy-per-chip) on the symbol-synchronous CDMA system supporting $K$ users that demand specified data rates is attained by employing at most $2N-1$ sequences. If there are $L$ oversized users in the system, at most $2N-L-1$ sequences are needed. $2N-1$ is the minimum number of sequences needed to guarantee optimal allocation for single dimensional signaling. $N$ orthogonal sequences are sufficient if a few users (at most $N-1$) are allowed to signal in multiple dimensions. If there are no oversized users, these split users need to signal only in two dimensions each. The above results are shown by proving a converse to a well-known result of Weyl on the interlacing eigenvalues of the sum of two Hermitian matrices, one of which is of rank 1. The converse is analogous to Mirsky's converse to the interlacing eigenvalues theorem for bordering matrices.<|reference_end|>
arxiv
@article{sundaresan2006the, title={The Size of Optimal Sequence Sets for Synchronous CDMA Systems}, author={Rajesh Sundaresan and Arun Padakandla}, journal={arXiv preprint arXiv:cs/0606015}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606015}, primaryClass={cs.IT math.IT} }
sundaresan2006the
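For context, the well-known result of Weyl referenced in this abstract can be stated as follows for a positive semidefinite rank-one update; this standard statement is background, not the paper's converse.

```latex
% Weyl interlacing for a rank-one Hermitian update:
% if A is an n x n Hermitian matrix, z a column vector, and B = A + z z^*,
% then, with eigenvalues sorted in decreasing order,
\lambda_1(B) \ge \lambda_1(A) \ge \lambda_2(B) \ge \lambda_2(A)
  \ge \cdots \ge \lambda_n(B) \ge \lambda_n(A).
```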
arxiv-674321
cs/0606016
Performance Analysis of Iterative Channel Estimation and Multiuser Detection in Multipath DS-CDMA Channels
<|reference_start|>Performance Analysis of Iterative Channel Estimation and Multiuser Detection in Multipath DS-CDMA Channels: This paper examines the performance of decision feedback based iterative channel estimation and multiuser detection in channel coded aperiodic DS-CDMA systems operating over multipath fading channels. First, explicit expressions describing the performance of channel estimation and parallel interference cancellation based multiuser detection are developed. These results are then combined to characterize the evolution of the performance of a system that iterates among channel estimation, multiuser detection and channel decoding. Sufficient conditions for convergence of this system to a unique fixed point are developed.<|reference_end|>
arxiv
@article{li2006performance, title={Performance Analysis of Iterative Channel Estimation and Multiuser Detection in Multipath DS-CDMA Channels}, author={Husheng Li, Sharon M. Betz and H. Vincent Poor}, journal={arXiv preprint arXiv:cs/0606016}, year={2006}, doi={10.1109/TSP.2007.893229}, archivePrefix={arXiv}, eprint={cs/0606016}, primaryClass={cs.IT math.IT} }
li2006performance
arxiv-674322
cs/0606017
From semiotics of hypermedia to physics of semiosis: A view from system theory
<|reference_start|>From semiotics of hypermedia to physics of semiosis: A view from system theory: Given that theoretical analysis and empirical validation is fundamental to any model, whether conceptual or formal, it is surprising that these two tools of scientific discovery are so often ignored in the contemporary studies of communication. In this paper, we pursued the ideas of a) correcting and expanding the modeling approaches of linguistics, which are otherwise inapplicable (more precisely, which should not but are widely applied), to the general case of hypermedia-based communication, and b) developing techniques for empirical validation of semiotic models, which are nowadays routinely used to explore (in fact, to conjecture about) internal mechanisms of complex systems, yet on a purely speculative basis. This study thus offers two experimentally tested substantive contributions: the formal representation of communication as the mutually-orienting behavior of coupled autonomous systems, and the mathematical interpretation of the semiosis of communication, which together offer a concrete and parsimonious understanding of diverse communication phenomena.<|reference_end|>
arxiv
@article{kryssanov2006from, title={From semiotics of hypermedia to physics of semiosis: A view from system theory}, author={V.V. Kryssanov, K. Kakusho}, journal={Semiotica. 2005, Vol.154-1/4, 11-38}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606017}, primaryClass={cs.HC cs.CL cs.IT math.IT} }
kryssanov2006from
arxiv-674323
cs/0606018
Emerging Markets for RFID Traces
<|reference_start|>Emerging Markets for RFID Traces: RFID tags are expected to become ubiquitous in logistics in the near future, and item-level tagging will pave the way for Ubiquitous Computing, for example in application fields like smart homes. Our paper addresses the value and the production cost of information that can be gathered by observing these tags over time and different locations. We argue that RFID technology will induce a thriving market for such information, resulting in easy data access for analysts to infer business intelligence and individual profiles of unusually high detail. Understanding these information markets is important for many reasons: They represent new business opportunities, and market players need to be aware of their roles in these markets. Policy makers need to confirm that the market structure will not negatively affect overall welfare. Finally, though we are not addressing the complex issue of privacy, we are convinced that market forces will have a significant impact on the effectiveness of deployed security enhancements to RFID technology. In this paper we take a few first steps into a relatively new field of economic research and conclude with a list of research problems that promise deeper insights into the matter.<|reference_end|>
arxiv
@article{bauer2006emerging, title={Emerging Markets for RFID Traces}, author={Matthias Bauer and Benjamin Fabian and Matthias Fischmann and Seda G\"urses}, journal={arXiv preprint arXiv:cs/0606018}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606018}, primaryClass={cs.CY cs.CR} }
bauer2006emerging
arxiv-674324
cs/0606019
A synchronous pi-calculus
<|reference_start|>A synchronous pi-calculus: The SL synchronous programming model is a relaxation of the Esterel synchronous model where the reaction to the absence of a signal within an instant can only happen at the next instant. In previous work, we have revisited the SL synchronous programming model. In particular, we have discussed an alternative design of the model including thread spawning and recursive definitions, introduced a CPS translation to a tail recursive form, and proposed a notion of bisimulation equivalence. In the present work, we extend the tail recursive model with first-order data types obtaining a non-deterministic synchronous model whose complexity is comparable to the one of the pi-calculus. We show that our approach to bisimulation equivalence can cope with this extension and in particular that labelled bisimulation can be characterised as a contextual bisimulation.<|reference_end|>
arxiv
@article{amadio2006a, title={A synchronous pi-calculus}, author={Roberto Amadio (PPS)}, journal={Journal of Information and Computation 205, 9 (2007) 1470-1490}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606019}, primaryClass={cs.LO cs.PL} }
amadio2006a
arxiv-674325
cs/0606020
Imagination as Holographic Processor for Text Animation
<|reference_start|>Imagination as Holographic Processor for Text Animation: Imagination is the critical point in the development of realistic artificial intelligence (AI) systems. One way to approach imagination would be to simulate its properties and operations. We developed two models, AI-Brain Network Hierarchy of Languages and Semantical Holographic Calculus, as well as a simulation system, ScriptWriter, that emulates the process of imagination through the automatic animation of English texts. The purpose of this paper is to demonstrate the model and to present the ScriptWriter system http://nvo.sdsc.edu/NVO/JCSG/get_SRB_mime_file2.cgi//home/tamara.sdsc/test/demo.zip?F=/home/tamara.sdsc/test/demo.zip&M=application/x-gtar for the simulation of imagination.<|reference_end|>
arxiv
@article{astakhov2006imagination, title={Imagination as Holographic Processor for Text Animation}, author={Vadim Astakhov, Tamara Astakhova, Brian Sanders}, journal={arXiv preprint arXiv:cs/0606020}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606020}, primaryClass={cs.AI} }
astakhov2006imagination
arxiv-674326
cs/0606021
A simulation engine to support production scheduling using genetics-based machine learning
<|reference_start|>A simulation engine to support production scheduling using genetics-based machine learning: The ever higher complexity of manufacturing systems, the continually shortening life cycles of products and their increasing variety, as well as the unstable market situation of recent years, require introducing greater flexibility and responsiveness into manufacturing processes. From this perspective, one of the critical manufacturing tasks, which traditionally attracts significant attention in both academia and industry but has no satisfactory universal solution, is production scheduling. This paper proposes an approach based on genetics-based machine learning (GBML) to treat the problem of flow shop scheduling. In this approach, a set of scheduling rules is represented as an individual of a genetic algorithm, and the fitness of the individual is estimated based on the makespan of the schedule generated by using the rule set. A concept of an interactive software environment consisting of a simulator and a GBML simulation engine is introduced to support human decision-making during scheduling. A pilot study is underway to evaluate the performance of the GBML technique in comparison with other methods (such as Johnson's algorithm and simulated annealing) while completing test examples.<|reference_end|>
arxiv
@article{tamaki2006a, title={A simulation engine to support production scheduling using genetics-based machine learning}, author={H. Tamaki, V.V. Kryssanov, S. Kitamura}, journal={In: K. Mertins, O. Krause, and B. Schallock (eds), Global Production Management, pp. 482-489. 1999, Kluwer Academic Publishers}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606021}, primaryClass={cs.CE cs.AI} }
tamaki2006a
arxiv-674327
cs/0606022
Limited Feedback Beamforming Over Temporally-Correlated Channels
<|reference_start|>Limited Feedback Beamforming Over Temporally-Correlated Channels: Feedback of quantized channel state information (CSI), called limited feedback, enables transmit beamforming in multiple-input-multiple-output (MIMO) wireless systems with a small amount of overhead. Due to its efficiency, beamforming with limited feedback has been adopted in several wireless communication standards. Prior work on limited feedback commonly adopts the block fading channel model where temporal correlation in wireless channels is neglected. This paper considers temporally-correlated channels and designs single-user transmit beamforming with limited feedback. Analytical results concerning CSI feedback are derived by modeling quantized CSI as a first-order finite-state Markov chain. These results include the source bit rate generated by time-varying quantized CSI, the required bit rate for a CSI feedback channel, and the effect of feedback delay. In particular, based on the theory of Markov chain convergence rate, feedback delay is proved to reduce the throughput gain due to CSI feedback at least exponentially. Furthermore, an algorithm is proposed for CSI feedback compression in time. Combining the results in this work leads to a new method for designing limited feedback beamforming as demonstrated by a design example.<|reference_end|>
arxiv
@article{huang2006limited, title={Limited Feedback Beamforming Over Temporally-Correlated Channels}, author={Kaibin Huang, Robert W. Heath Jr, and Jeffrey G. Andrews}, journal={arXiv preprint arXiv:cs/0606022}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606022}, primaryClass={cs.IT math.IT} }
huang2006limited
arxiv-674328
cs/0606023
Parallel Evaluation of Mathematica Programs in Remote Computers Available in Network
<|reference_start|>Parallel Evaluation of Mathematica Programs in Remote Computers Available in Network: Mathematica is a powerful application package for doing mathematics and is used in almost all branches of science. It has widespread applications ranging from quantum computation, statistical analysis, number theory, zoology, and astronomy to many more. Mathematica gives a rich set of programming extensions to its end-user language, and it permits us to write programs in procedural, functional, or logic (rule-based) style, or a mixture of all three. For tasks requiring interfaces to the external environment, Mathematica provides MathLink, which allows us to communicate Mathematica programs with external programs written in C, C++, F77, F90, F95, Java, or other languages. It also has extensive capabilities for editing graphics, equations, text, etc. In this article, we explore the basic mechanisms of parallelization of a Mathematica program by distributing different parts of the program among all the other computers available in the network. With this parallelization, we can perform large computational operations within a very short period of time and therefore achieve efficiency in numerical work. Parallel computation supports any version of Mathematica, and it works even if different versions of Mathematica are installed on different computers. The whole operation can run under any supported operating system like Unix, Windows, Macintosh, etc. Here we focus our study only on Unix-based operating systems, but the method works equally well in all other cases.<|reference_end|>
arxiv
@article{maiti2006parallel, title={Parallel Evaluation of Mathematica Programs in Remote Computers Available in Network}, author={Santanu K. Maiti}, journal={arXiv preprint arXiv:cs/0606023}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606023}, primaryClass={cs.MS cs.PL} }
maiti2006parallel
arxiv-674329
cs/0606024
Consecutive Support: Better Be Close!
<|reference_start|>Consecutive Support: Better Be Close!: We propose a new measure of support (the number of occurrences of a pattern), in which instances are more important if they occur with a certain frequency and close after each other in the stream of transactions. We will explain this new consecutive support and discuss how patterns can be found faster by pruning the search space, for instance using so-called parent support recalculation. Both consecutiveness and the notion of hypercliques are incorporated into the Eclat algorithm. Synthetic examples show how interesting phenomena can now be discovered in the datasets. The new measure can be applied in many areas, ranging from bio-informatics to trade, supermarkets, and even law enforcement. E.g., in bio-informatics it is important to find patterns contained in many individuals, where patterns close together in one chromosome are more significant.<|reference_end|>
arxiv
@article{de graaf2006consecutive, title={Consecutive Support: Better Be Close!}, author={Edgar de Graaf, Jeannette de Graaf, and Walter A. Kosters}, journal={arXiv preprint arXiv:cs/0606024}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606024}, primaryClass={cs.AI cs.DB} }
de graaf2006consecutive
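The abstract does not spell out the weighting, so the following toy version is only one plausible reading: an occurrence contributes full weight when it closely follows the previous occurrence and exponentially less as the gap grows. The decay function and the scale parameter are invented for illustration, not the measure defined in the paper.

```python
# Toy "consecutive support": occurrences of a pattern count for more when
# they appear close after each other in the transaction stream.
import math

def consecutive_support(occurrence_positions, scale=5.0):
    """occurrence_positions: sorted indices of transactions containing the pattern."""
    support = 0.0
    prev = None
    for pos in occurrence_positions:
        if prev is None:
            support += 1.0                           # first occurrence: full weight
        else:
            gap = pos - prev
            support += math.exp(-(gap - 1) / scale)  # smaller gap => weight near 1
        prev = pos
    return support

print(consecutive_support([3, 4, 5, 6]))      # tight cluster: high support
print(consecutive_support([3, 40, 80, 120]))  # spread out: much lower support
```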
arxiv-674330
cs/0606025
A Chaotic Cipher Mmohocc and Its Security Analysis
<|reference_start|>A Chaotic Cipher Mmohocc and Its Security Analysis: In this paper we introduce a new chaotic stream cipher Mmohocc which utilizes the fundamental chaos characteristics. The designs of the major components of the cipher are given. Its cryptographic properties of period, auto- and cross-correlations, and the mixture of Markov processes and spatiotemporal effects are investigated. The cipher is resistant to the related-key-IV attacks, Time/Memory/Data tradeoff attacks, algebraic attacks, and chosen-text attacks. The keystreams successfully passed two batteries of statistical tests and the encryption speed is comparable with RC4.<|reference_end|>
arxiv
@article{zhang2006a, title={A Chaotic Cipher Mmohocc and Its Security Analysis}, author={Xiaowen Zhang, Li Shu, Ke Tang}, journal={arXiv preprint arXiv:cs/0606025}, year={2006}, doi={10.1117/12.717682}, archivePrefix={arXiv}, eprint={cs/0606025}, primaryClass={cs.CR} }
zhang2006a
arxiv-674331
cs/0606026
Generating parity check equations for bounded-distance iterative erasure decoding
<|reference_start|>Generating parity check equations for bounded-distance iterative erasure decoding: A generic $(r,m)$-erasure correcting set is a collection of vectors in $\mathbb{F}_2^r$ which can be used to generate, for each binary linear code of codimension $r$, a collection of parity check equations that enables iterative decoding of all correctable erasure patterns of size at most $m$. That is to say, the only stopping sets of size at most $m$ for the generated parity check equations are the erasure patterns for which there is more than one manner to fill in the erasures to obtain a codeword. We give an explicit construction of generic $(r,m)$-erasure correcting sets of cardinality $\sum_{i=0}^{m-1} {r-1\choose i}$. Using a random-coding-like argument, we show that for fixed $m$, the minimum size of a generic $(r,m)$-erasure correcting set is linear in $r$. Keywords: iterative decoding, binary erasure channel, stopping set<|reference_end|>
arxiv
@article{hollmann2006generating, title={Generating parity check equations for bounded-distance iterative erasure decoding}, author={Henk D.L. Hollmann and Ludo M.G.M. Tolhuizen}, journal={arXiv preprint arXiv:cs/0606026}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606026}, primaryClass={cs.IT math.IT} }
hollmann2006generating
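The decoding process this abstract refers to is standard iterative (peeling) erasure decoding: any parity check with exactly one erased position determines that erasure, and decoding stalls precisely on stopping sets. A minimal GF(2) sketch; the check matrix and erasure pattern below are hypothetical.

```python
# Iterative (peeling) erasure decoding over GF(2): repeatedly find a parity
# check with exactly one erased position and solve it for that position.
def peel(checks, word):
    """checks: list of index lists (parity-check supports);
    word: list of 0/1 values, with None marking erased positions."""
    progress = True
    while progress:
        progress = False
        for chk in checks:
            erased = [i for i in chk if word[i] is None]
            if len(erased) == 1:
                # the erased bit must make the check sum to 0 mod 2
                s = sum(word[i] for i in chk if word[i] is not None) % 2
                word[erased[0]] = s
                progress = True
    return word

# Hypothetical [4,2] code with checks x0+x1+x2 = 0 and x2+x3 = 0:
print(peel([[0, 1, 2], [2, 3]], [1, None, None, 0]))  # -> [1, 1, 0, 0]
```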
arxiv-674332
cs/0606027
Building a logical model in the machining domain for CAPP expert systems
<|reference_start|>Building a logical model in the machining domain for CAPP expert systems: Recently, extensive efforts have been made in the application of expert system techniques to solving the process planning task in the machining domain. This paper introduces a new formal method for designing CAPP expert systems. The formal method is applied to provide an outline of the CAPP expert system building technology. Theoretical aspects of the formalism are described and illustrated by an example of know-how analysis. Flexible facilities to utilize multiple knowledge types and multiple planning strategies within one system are provided by the technology.<|reference_end|>
arxiv
@article{kryssanov2006building, title={Building a logical model in the machining domain for CAPP expert systems}, author={V.V. Kryssanov, A.S. Kleshchev, Y. Fukuda, and K. Konishi}, journal={International Journal of Production Research, 1998, vol. 36, No. 4, 1075-1089}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606027}, primaryClass={cs.AI cs.CE cs.SE} }
kryssanov2006building
arxiv-674333
cs/0606028
Affine Transformations of Loop Nests for Parallel Execution and Distribution of Data over Processors
<|reference_start|>Affine Transformations of Loop Nests for Parallel Execution and Distribution of Data over Processors: The paper is devoted to the problem of mapping affine loop nests onto distributed memory parallel computers. A method to find affine transformations of loop nests for parallel execution and distribution of data over processors is presented. The method tends to minimize the number of communications between processors and to improve the locality of data within one processor. The problem of determining the data exchange sequence is investigated. Conditions for determining when broadcast communication can be arranged are presented.<|reference_end|>
arxiv
@article{adutskevich2006affine, title={Affine Transformations of Loop Nests for Parallel Execution and Distribution of Data over Processors}, author={E.V. Adutskevich, S.V. Bakhanovich, N.A. Likhoded}, journal={Preprint / The National Academy of Sciences of Belarus. Institute of Mathematics: N 3 (574). Minsk, 2005}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606028}, primaryClass={cs.DC} }
adutskevich2006affine
arxiv-674334
cs/0606029
Belief Calculus
<|reference_start|>Belief Calculus: In Dempster-Shafer belief theory, general beliefs are expressed as belief mass distribution functions over frames of discernment. In Subjective Logic, beliefs are expressed as belief mass distribution functions over binary frames of discernment. Belief representations in Subjective Logic, which are called opinions, also contain a base rate parameter which expresses the a priori belief in the absence of evidence. Philosophically, beliefs are quantitative representations of evidence as perceived by humans or by other intelligent agents. The basic operators of classical probability calculus, such as addition and multiplication, can be applied to opinions, thereby making belief calculus practical. Through the equivalence between opinions and Beta probability density functions, this also provides a calculus for Beta probability density functions. This article explains the basic elements of belief calculus.<|reference_end|>
arxiv
@article{josang2006belief, title={Belief Calculus}, author={Audun Josang}, journal={arXiv preprint arXiv:cs/0606029}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606029}, primaryClass={cs.AI} }
josang2006belief
arxiv-674335
cs/0606030
Explicit Randomness is not Necessary when Modeling Probabilistic Encryption
<|reference_start|>Explicit Randomness is not Necessary when Modeling Probabilistic Encryption: Although good encryption functions are probabilistic, most symbolic models do not capture this aspect explicitly. A typical solution, recently used to prove the soundness of such models with respect to computational ones, is to explicitly represent the dependency of ciphertexts on random coins as labels. In order to make these label-based models useful, it seems natural to try to extend the underlying decision procedures and the implementation of existing tools. In this paper we put forth a more practical alternative based on the following soundness theorem. We prove that for a large class of security properties (that includes rather standard formulations for secrecy and authenticity properties), security of protocols in the simpler model implies security in the label-based model. Combined with the soundness result of (?), our theorem enables the translation of security results in unlabeled symbolic models to computational security.<|reference_end|>
arxiv
@article{cortier2006explicit, title={Explicit Randomness is not Necessary when Modeling Probabilistic Encryption}, author={V\'eronique Cortier (INRIA Lorraine - LORIA / LIFC), Heinrich H\"ordegen (INRIA Lorraine - LORIA / LIFC), Bogdan Warinschi (INRIA Lorraine - LORIA / LIFC)}, journal={arXiv preprint arXiv:cs/0606030}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606030}, primaryClass={cs.CR} }
cortier2006explicit
arxiv-674336
cs/0606031
Complexity of Resolution of Parametric Systems of Polynomial Equations and Inequations
<|reference_start|>Complexity of Resolution of Parametric Systems of Polynomial Equations and Inequations: Consider a system of n polynomial equations and r polynomial inequations in n indeterminates of degree bounded by d with coefficients in a polynomial ring of s parameters with rational coefficients of bit-size at most $\sigma$. From the real viewpoint, solving such a system often means describing some semi-algebraic sets in the parameter space over which the number of real solutions of the considered parametric system is constant. Following the works of Lazard and Rouillier, this can be done by the computation of a discriminant variety. In this report we focus on the case where for a generic specialization of the parameters the system of equations generates a radical zero-dimensional ideal, which is usual in the applications. In this case, we provide a deterministic method computing the minimal discriminant variety reducing the problem to a problem of elimination. Moreover, we prove that the degree of the computed minimal discriminant variety is bounded by $D:=(n+r)d^{(n+1)}$ and that the complexity of our method is $\sigma^{\mathcal{O}(1)} D^{\mathcal{O}(n+s)}$ bit-operations on a deterministic Turing machine.<|reference_end|>
arxiv
@article{moroz2006complexity, title={Complexity of Resolution of Parametric Systems of Polynomial Equations and Inequations}, author={Guillaume Moroz (LIP6, INRIA Rocquencourt)}, journal={arXiv preprint arXiv:cs/0606031}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606031}, primaryClass={cs.SC} }
moroz2006complexity
arxiv-674337
cs/0606032
A secure archive for Voice-over-IP conversations
<|reference_start|>A secure archive for Voice-over-IP conversations: An efficient archive securing the integrity of VoIP-based two-party conversations is presented. The solution is based on chains of hashes and continuously chained electronic signatures. Security is concentrated in a single, efficient component, allowing for a detailed analysis.<|reference_end|>
arxiv
@article{hett2006a, title={A secure archive for Voice-over-IP conversations}, author={Christian Hett, Nicolai Kuntze, and Andreas U. Schmidt}, journal={arXiv preprint arXiv:cs/0606032}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606032}, primaryClass={cs.CR} }
hett2006a
arxiv-674338
cs/0606033
Natural Halting Probabilities, Partial Randomness, and Zeta Functions
<|reference_start|>Natural Halting Probabilities, Partial Randomness, and Zeta Functions: We introduce the zeta number, natural halting probability and natural complexity of a Turing machine and we relate them to Chaitin's Omega number, halting probability, and program-size complexity. A classification of Turing machines according to their zeta numbers is proposed: divergent, convergent and tuatara. We prove the existence of universal convergent and tuatara machines. Various results on (algorithmic) randomness and partial randomness are proved. For example, we show that the zeta number of a universal tuatara machine is c.e. and random. A new type of partial randomness, asymptotic randomness, is introduced. Finally we show that in contrast to classical (algorithmic) randomness--which cannot be naturally characterised in terms of plain complexity--asymptotic randomness admits such a characterisation.<|reference_end|>
arxiv
@article{calude2006natural, title={Natural Halting Probabilities, Partial Randomness, and Zeta Functions}, author={Cristian S. Calude and Michael A. Stay}, journal={arXiv preprint arXiv:cs/0606033}, year={2006}, number={CDMTCS 273}, archivePrefix={arXiv}, eprint={cs/0606033}, primaryClass={cs.CC} }
calude2006natural
arxiv-674339
cs/0606034
A constructive and unifying framework for zero-bit watermarking
<|reference_start|>A constructive and unifying framework for zero-bit watermarking: In the watermark detection scenario, also known as zero-bit watermarking, a watermark, carrying no hidden message, is inserted in content. The watermark detector checks for the presence of this particular weak signal in content. The article looks at this problem from a classical detection theory point of view, but with side information enabled at the embedding side. This means that the watermark signal is a function of the host content. Our study is twofold. The first step is to design the best embedding function for a given detection function, and the best detection function for a given embedding function. This yields two conditions, which are mixed into one `fundamental' partial differential equation. It appears that many famous watermarking schemes are indeed solutions to this `fundamental' equation. This study thus gives birth to a constructive framework unifying solutions, so far perceived as very different.<|reference_end|>
arxiv
@article{furon2006a, title={A constructive and unifying framework for zero-bit watermarking}, author={Teddy Furon (IRISA)}, journal={arXiv preprint arXiv:cs/0606034}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606034}, primaryClass={cs.MM cs.CR} }
furon2006a
arxiv-674340
cs/0606035
Finding roots of polynomials over finite fields
<|reference_start|>Finding roots of polynomials over finite fields: We propose an improved algorithm for finding roots of polynomials over finite fields. This makes possible significant speedup of the decoding process of Bose-Chaudhuri-Hocquenghem, Reed-Solomon, and some other error-correcting codes.<|reference_end|>
arxiv
@article{fedorenko2006finding, title={Finding roots of polynomials over finite fields}, author={Sergei V. Fedorenko, Piter V. Trifonov}, journal={IEEE Transactions on Communications, Volume 50, Issue 11, Nov. 2002, Pages:1709 - 1711}, year={2006}, doi={10.1109/TCOMM.2002.805269}, archivePrefix={arXiv}, eprint={cs/0606035}, primaryClass={cs.IT math.IT} }
fedorenko2006finding
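As background, the baseline such algorithms improve on is exhaustive (Chien-search-style) root finding: evaluate the polynomial at every field element. A sketch over a prime field GF(p) for simplicity; the paper works over the extension fields used by BCH and Reed-Solomon codes, and the modulus and polynomial below are illustrative.

```python
# Baseline root finding over a prime field GF(p): try every field element.
# Improved algorithms (as in the paper) beat this exhaustive search.
def roots_mod_p(coeffs, p):
    """coeffs: polynomial coefficients, highest degree first; returns roots in GF(p)."""
    def horner(x):
        acc = 0
        for c in coeffs:
            acc = (acc * x + c) % p   # Horner evaluation mod p
        return acc
    return [x for x in range(p) if horner(x) == 0]

# x^2 + 1 over GF(5): roots are 2 and 3, since 2^2 = 4 = -1 (mod 5).
print(roots_mod_p([1, 0, 1], 5))  # -> [2, 3]
```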
arxiv-674341
cs/0606036
Computational Euclid
<|reference_start|>Computational Euclid: We analyse the axioms of Euclidean geometry according to standard object-oriented software development methodology. We find a perfect match: the main undefined concepts of the axioms translate to object classes. The result is a suite of C++ classes that efficiently supports the construction of complex geometric configurations. Although all computations are performed in floating-point arithmetic, they correctly implement, as semi-decision algorithms, the tests for equality of points, a point being on a line or in a plane, a line being in a plane, parallelness of lines, of a line and a plane, and of planes. This is in accordance with the fundamental limitations on computability, which require that only negative outcomes be given with certainty, while positive outcomes only imply the possibility of these conditions being true.<|reference_end|>
arxiv
@article{van emden2006computational, title={Computational Euclid}, author={M.H. van Emden and B. Moa}, journal={arXiv preprint arXiv:cs/0606036}, year={2006}, number={DCS-315-IR}, archivePrefix={arXiv}, eprint={cs/0606036}, primaryClass={cs.CG} }
van emden2006computational
arxiv-674342
cs/0606037
Average-Case Complexity
<|reference_start|>Average-Case Complexity: We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy-on-average with respect to the uniform distribution, then all problems in NP are easy-on-average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the P$\neq$NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.<|reference_end|>
arxiv
@article{bogdanov2006average-case, title={Average-Case Complexity}, author={Andrej Bogdanov and Luca Trevisan}, journal={arXiv preprint arXiv:cs/0606037}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606037}, primaryClass={cs.CC} }
bogdanov2006average-case
arxiv-674343
cs/0606038
Tight Bounds on the Complexity of Recognizing Odd-Ranked Elements
<|reference_start|>Tight Bounds on the Complexity of Recognizing Odd-Ranked Elements: Let S = <s_1, s_2, s_3, ..., s_n> be a given vector of n real numbers. The rank of a real z with respect to S is defined as the number of elements s_i in S such that s_i is less than or equal to z. We consider the following decision problem: determine whether the odd-numbered elements s_1, s_3, s_5, ... are precisely the elements of S whose rank with respect to S is odd. We prove a bound of Theta(n log n) on the number of operations required to solve this problem in the algebraic computation tree model.<|reference_end|>
arxiv
@article{thite2006tight, title={Tight Bounds on the Complexity of Recognizing Odd-Ranked Elements}, author={Shripad Thite}, journal={arXiv preprint arXiv:cs/0606038}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606038}, primaryClass={cs.CC cs.DS} }
thite2006tight
arxiv-674344
cs/0606039
Evolutionary Design: Philosophy, Theory, and Application Tactics
<|reference_start|>Evolutionary Design: Philosophy, Theory, and Application Tactics: Although it has contributed to remarkable improvements in some specific areas, attempts to develop a universal design theory are generally characterized by failure. This paper sketches arguments for a new approach to engineering design based on Semiotics, the science of signs. The approach is to combine different design theories over all the product life cycle stages into one coherent and traceable framework. In addition, it aims to bring together the designer's and user's understandings of the notion of 'good product'. Building on the insight from natural sciences that complex systems always exhibit self-organizing, meaning-influential hierarchical dynamics, objective laws controlling product development are found through an examination of design as a semiosis process. These laws are then applied to support evolutionary design of products. An experiment validating some of the theoretical findings is outlined, and concluding remarks are given.<|reference_end|>
arxiv
@article{kryssanov2006evolutionary, title={Evolutionary Design: Philosophy, Theory, and Application Tactics}, author={V.V. Kryssanov, H. Tamaki, S. Kitamura}, journal={CIRP Journal of Manufacturing Systems, 2005, Vol. 34/2}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606039}, primaryClass={cs.CE cs.AI} }
kryssanov2006evolutionary
arxiv-674345
cs/0606040
Approximation Algorithms for Multi-Criteria Traveling Salesman Problems
<|reference_start|>Approximation Algorithms for Multi-Criteria Traveling Salesman Problems: In multi-criteria optimization problems, several objective functions have to be optimized. Since the different objective functions are usually in conflict with each other, one cannot consider only one particular solution as the optimal solution. Instead, the aim is to compute a so-called Pareto curve of solutions. Since Pareto curves cannot be computed efficiently in general, we have to be content with approximations to them. We design a deterministic polynomial-time algorithm for multi-criteria g-metric STSP that computes (min{1 +g, 2g^2/(2g^2 -2g +1)} + eps)-approximate Pareto curves for all 1/2<=g<=1. In particular, we obtain a (2+eps)-approximation for multi-criteria metric STSP. We also present two randomized approximation algorithms for multi-criteria g-metric STSP that achieve approximation ratios of (2g^3 +2g^2)/(3g^2 -2g +1) + eps and (1 +g)/(1 +3g -4g^2) + eps, respectively. Moreover, we present randomized approximation algorithms for multi-criteria g-metric ATSP (ratio 1/2 + g^3/(1 -3g^2) + eps) for g < 1/sqrt(3)), STSP with weights 1 and 2 (ratio 4/3) and ATSP with weights 1 and 2 (ratio 3/2). To do this, we design randomized approximation schemes for multi-criteria cycle cover and graph factor problems.<|reference_end|>
arxiv
@article{manthey2006approximation, title={Approximation Algorithms for Multi-Criteria Traveling Salesman Problems}, author={Bodo Manthey, L. Shankar Ram}, journal={arXiv preprint arXiv:cs/0606040}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606040}, primaryClass={cs.DS cs.CC} }
manthey2006approximation
arxiv-674346
cs/0606041
Characterization of Pentagons Determined by Two X-rays
<|reference_start|>Characterization of Pentagons Determined by Two X-rays: This paper contains some results on pentagons that can be determined by two X-rays. The results reveal that this problem is more complicated than expected.<|reference_end|>
arxiv
@article{chen2006characterization, title={Characterization of Pentagons Determined by Two X-rays}, author={Ming-Zhe Chen}, journal={arXiv preprint arXiv:cs/0606041}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606041}, primaryClass={cs.CG} }
chen2006characterization
arxiv-674347
cs/0606042
Enabling user-driven Checkpointing strategies in Reverse-mode Automatic Differentiation
<|reference_start|>Enabling user-driven Checkpointing strategies in Reverse-mode Automatic Differentiation: This paper presents a new functionality of the Automatic Differentiation (AD) tool Tapenade. Tapenade generates adjoint codes which are widely used for optimization or inverse problems. Unfortunately, for large applications the adjoint code demands a great deal of memory, because it needs to store a large set of intermediate values. To cope with that problem, Tapenade implements a sub-optimal version of a technique called checkpointing, which is a trade-off between storage and recomputation. Our long-term goal is to provide an optimal checkpointing strategy for every code, not yet achieved by any AD tool. Towards that goal, we first introduce modifications in Tapenade in order to give the user the choice to select the checkpointing strategy most suitable for their code. Second, we conduct experiments on real-size scientific codes in order to gather hints that help us to deduce an optimal checkpointing strategy. Some of the experimental results show savings of up to 35% in memory and up to 90% in execution time.<|reference_end|>
arxiv
@article{hascoet2006enabling, title={Enabling user-driven Checkpointing strategies in Reverse-mode Automatic Differentiation}, author={Laurent Hascoet (INRIA Sophia Antipolis), Mauricio Araya-Polo (INRIA Sophia Antipolis)}, journal={arXiv preprint arXiv:cs/0606042}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606042}, primaryClass={cs.DS} }
hascoet2006enabling
arxiv-674348
cs/0606043
Schedule generation schemes for the job-shop problem with sequence-dependent setup times: dominance properties and computational analysis
<|reference_start|>Schedule generation schemes for the job-shop problem with sequence-dependent setup times: dominance properties and computational analysis: We consider the job-shop problem with sequence-dependent setup times. We focus on the formal definition of schedule generation schemes (SGSs) based on the semi-active, active, and non-delay schedule categories. We study dominance properties of the sets of schedules obtainable with each SGS. We show how the proposed SGSs can be used within single-pass and multi-pass priority rule based heuristics. We study several priority rules for the problem and provide a comparative computational analysis of the different SGSs on sets of instances taken from the literature. The proposed SGSs significantly improve previously best-known results on a set of hard benchmark instances.<|reference_end|>
arxiv
@article{artigues2006schedule, title={Schedule generation schemes for the job-shop problem with sequence-dependent setup times: dominance properties and computational analysis}, author={Christian Artigues (LIA), Pierre Lopez (LAAS), Pierre-Dimitri Ayache (LIA)}, journal={Annals of Operations Research 138 (2005) 21-52}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606043}, primaryClass={cs.CC math.CO} }
artigues2006schedule
arxiv-674349
cs/0606044
Frugality ratios and improved truthful mechanisms for vertex cover
<|reference_start|>Frugality ratios and improved truthful mechanisms for vertex cover: In {\em set-system auctions}, there are several overlapping teams of agents, and a task that can be completed by any of these teams. The buyer's goal is to hire a team and pay as little as possible. Recently, Karlin, Kempe and Tamir introduced a new definition of {\em frugality ratio} for this setting. Informally, the frugality ratio is the ratio of the total payment of a mechanism to perceived fair cost. In this paper, we study this together with alternative notions of fair cost, and how the resulting frugality ratios relate to each other for various kinds of set systems. We propose a new truthful polynomial-time auction for the vertex cover problem (where the feasible sets correspond to the vertex covers of a given graph), based on the {\em local ratio} algorithm of Bar-Yehuda and Even. The mechanism guarantees to find a winning set whose cost is at most twice the optimal. In this situation, even though it is NP-hard to find a lowest-cost feasible set, we show that {\em local optimality} of a solution can be used to derive frugality bounds that are within a constant factor of best possible. To prove this result, we use our alternative notions of frugality via a bootstrapping technique, which may be of independent interest.<|reference_end|>
arxiv
@article{elkind2006frugality, title={Frugality ratios and improved truthful mechanisms for vertex cover}, author={Edith Elkind, Leslie Ann Goldberg, Paul W. Goldberg}, journal={arXiv preprint arXiv:cs/0606044}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606044}, primaryClass={cs.GT} }
elkind2006frugality
arxiv-674350
cs/0606045
Trusted Computing in Mobile Action
<|reference_start|>Trusted Computing in Mobile Action: Due to the convergence of various mobile access technologies, such as UMTS, WLAN, and WiMax, the need for a new supporting infrastructure arises. This infrastructure should be able to support more efficient ways to authenticate users and devices, potentially enabling novel services based on the security provided by the infrastructure. In this paper we exhibit some usage scenarios from the mobile domain integrating trusted computing, which show that trusted computing offers new paradigms for implementing trust and thereby enables new technical applications and business scenarios. The scenarios show how the traditional boundaries between technical and authentication domains become permeable while a high security level is maintained.<|reference_end|>
arxiv
@article{kuntze2006trusted, title={Trusted Computing in Mobile Action}, author={Nicolai Kuntze and Andreas U. Schmidt}, journal={arXiv preprint arXiv:cs/0606045}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606045}, primaryClass={cs.CR} }
kuntze2006trusted
arxiv-674351
cs/0606046
Authorised Translations of Electronic Documents
<|reference_start|>Authorised Translations of Electronic Documents: A concept is proposed to extend authorised translations of documents to electronically signed, digital documents. Central element of the solution is an electronic seal, embodied as an XML data structure, which attests to the correctness of the translation and the authorisation of the translator. The seal contains a digital signature binding together original and translated document, thus enabling forensic inspection and therefore legal security in the appropriation of the translation. Organisational aspects of possible implementation variants of electronic authorised translations are discussed and a realisation as a stand-alone web-service is presented.<|reference_end|>
arxiv
@article{piechalski2006authorised, title={Authorised Translations of Electronic Documents}, author={Jan Piechalski and Andreas U. Schmidt}, journal={arXiv preprint arXiv:cs/0606046}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606046}, primaryClass={cs.OH} }
piechalski2006authorised
arxiv-674352
cs/0606047
Asynchronous iterative computations with Web information retrieval structures: The PageRank case
<|reference_start|>Asynchronous iterative computations with Web information retrieval structures: The PageRank case: There are several ideas being used today for Web information retrieval, and specifically in Web search engines. The PageRank algorithm is one of those that introduce a content-neutral ranking function over Web pages. This ranking is applied to the set of pages returned by the Google search engine in response to posting a search query. PageRank is based in part on two simple common sense concepts: (i) A page is important if many important pages include links to it. (ii) A page containing many links has reduced impact on the importance of the pages it links to. In this paper we focus on asynchronous iterative schemes to compute PageRank over large sets of Web pages. The elimination of the synchronizing phases is expected to be advantageous on heterogeneous platforms. The motivation for a possible move to such large scale distributed platforms lies in the size of matrices representing Web structure. In orders of magnitude: $10^{10}$ pages with $10^{11}$ nonzero elements and $10^{12}$ bytes just to store a small percentage of the Web (the already crawled); distributed memory machines are necessary for such computations. The present research is part of our general objective: to explore the potential of asynchronous computational models as an underlying framework for very large scale computations over the Grid. The area of ``internet algorithmics'' appears to offer many occasions for computations of unprecedented dimensionality that would be good candidates for this framework.<|reference_end|>
arxiv
@article{kollias2006asynchronous, title={Asynchronous iterative computations with Web information retrieval structures: The PageRank case}, author={Giorgos Kollias, Efstratios Gallopoulos, Daniel B. Szyld}, journal={arXiv preprint arXiv:cs/0606047}, year={2006}, number={TR HPCLAB-SCG 5/08-05}, archivePrefix={arXiv}, eprint={cs/0606047}, primaryClass={cs.DC} }
kollias2006asynchronous
arxiv-674353
cs/0606048
A New Quartet Tree Heuristic for Hierarchical Clustering
<|reference_start|>A New Quartet Tree Heuristic for Hierarchical Clustering: We consider the problem of constructing an optimal-weight tree from the 3*(n choose 4) weighted quartet topologies on n objects, where optimality means that the summed weight of the embedded quartet topologies is optimal (so it can be the case that the optimal tree embeds all quartets as non-optimal topologies). We present a heuristic for reconstructing the optimal-weight tree, and a canonical manner to derive the quartet-topology weights from a given distance matrix. The method repeatedly transforms a bifurcating tree, with all objects involved as leaves, achieving a monotonic approximation to the exact single globally optimal tree. This contrasts with other heuristic search methods from biological phylogeny, like DNAML or quartet puzzling, which repeatedly and incrementally construct a solution from a random order of objects and subsequently add agreement values.<|reference_end|>
arxiv
@article{cilibrasi2006a, title={A New Quartet Tree Heuristic for Hierarchical Clustering}, author={Rudi Cilibrasi and Paul M.B. Vitanyi}, journal={arXiv preprint arXiv:cs/0606048}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606048}, primaryClass={cs.DS cs.CV cs.DM math.ST physics.data-an q-bio.QM stat.TH} }
cilibrasi2006a
arxiv-674354
cs/0606049
Decentralized Erasure Codes for Distributed Networked Storage
<|reference_start|>Decentralized Erasure Codes for Distributed Networked Storage: We consider the problem of constructing an erasure code for storage over a network when the data sources are distributed. Specifically, we assume that there are n storage nodes with limited memory and k<n sources generating the data. We want a data collector, who can appear anywhere in the network, to query any k storage nodes and be able to retrieve the data. We introduce Decentralized Erasure Codes, which are linear codes with a specific randomized structure inspired by network coding on random bipartite graphs. We show that decentralized erasure codes are optimally sparse, and lead to reduced communication, storage and computation cost over random linear coding.<|reference_end|>
arxiv
@article{dimakis2006decentralized, title={Decentralized Erasure Codes for Distributed Networked Storage}, author={Alexandros G. Dimakis, Vinod Prabhakaran, Kannan Ramchandran}, journal={arXiv preprint arXiv:cs/0606049}, year={2006}, doi={10.1109/TIT.2006.874535}, archivePrefix={arXiv}, eprint={cs/0606049}, primaryClass={cs.IT cs.NI math.IT} }
dimakis2006decentralized
arxiv-674355
cs/0606050
Syntactic Characterisations of Polynomial-Time Optimisation Classes (Syntactic Characterizations of Polynomial-Time Optimization Classes)
<|reference_start|>Syntactic Characterisations of Polynomial-Time Optimisation Classes (Syntactic Characterizations of Polynomial-Time Optimization Classes): In Descriptive Complexity, there is a vast amount of literature on decision problems and their classes such as \textbf{P, NP, L and NL}. However, research on the descriptive complexity of optimisation problems has been limited. Optimisation problems corresponding to the \textbf{NP} class have been characterised in terms of logic expressions by Papadimitriou and Yannakakis, Panconesi and Ranjan, Kolaitis and Thakur, Khanna et al., and by Zimand. Gr\"{a}del characterised the polynomial class \textbf{P} of decision problems. In this paper, we attempt to characterise the optimisation versions of \textbf{P} via expressions in second order logic, many of them using universal Horn formulae with successor relations. The polynomially bound versions of maximisation (maximization) and minimisation (minimization) problems are treated first, and then the maximisation problems in the "not necessarily polynomially bound" class.<|reference_end|>
arxiv
@article{manyem2006syntactic, title={Syntactic Characterisations of Polynomial-Time Optimisation Classes (Syntactic Characterizations of Polynomial-Time Optimization Classes)}, author={Prabhu Manyem}, journal={arXiv preprint arXiv:cs/0606050}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606050}, primaryClass={cs.CC cs.LO} }
manyem2006syntactic
arxiv-674356
cs/0606051
Minimum Pseudo-Weight and Minimum Pseudo-Codewords of LDPC Codes
<|reference_start|>Minimum Pseudo-Weight and Minimum Pseudo-Codewords of LDPC Codes: In this correspondence, we study the minimum pseudo-weight and minimum pseudo-codewords of low-density parity-check (LDPC) codes under linear programming (LP) decoding. First, we show that the lower bound of Kelly, Sridhara, Xu and Rosenthal on the pseudo-weight of a pseudo-codeword of an LDPC code with girth greater than 4 is tight if and only if this pseudo-codeword is a real multiple of a codeword. Then, we show that the lower bound of Kashyap and Vardy on the stopping distance of an LDPC code is also a lower bound on the pseudo-weight of a pseudo-codeword of this LDPC code with girth 4, and this lower bound is tight if and only if this pseudo-codeword is a real multiple of a codeword. Using these results we further show that for some LDPC codes, there are no other minimum pseudo-codewords except the real multiples of minimum codewords. This means that the LP decoding for these LDPC codes is asymptotically optimal in the sense that the ratio of the probabilities of decoding errors of LP decoding and maximum-likelihood decoding approaches 1 as the signal-to-noise ratio tends to infinity. Finally, some LDPC codes are listed to illustrate these results.<|reference_end|>
arxiv
@article{xia2006minimum, title={Minimum Pseudo-Weight and Minimum Pseudo-Codewords of LDPC Codes}, author={Shu-Tao Xia, Fang-Wei Fu}, journal={arXiv preprint arXiv:cs/0606051}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606051}, primaryClass={cs.IT math.IT} }
xia2006minimum
arxiv-674357
cs/0606052
Topology for Distributed Inference on Graphs
<|reference_start|>Topology for Distributed Inference on Graphs: Let $N$ local decision makers in a sensor network communicate with their neighbors to reach a decision \emph{consensus}. Communication is local, among neighboring sensors only, through noiseless or noisy links. We study the design of the network topology that optimizes the rate of convergence of the iterative decision consensus algorithm. We reformulate the topology design problem as a spectral graph design problem, namely, maximizing the eigenratio~$\gamma$ of two eigenvalues of the graph Laplacian~$L$, a matrix that is naturally associated with the interconnectivity pattern of the network. This reformulation avoids costly Monte Carlo simulations and leads to the class of non-bipartite Ramanujan graphs for which we find a lower bound on~$\gamma$. For Ramanujan topologies and noiseless links, the local probability of error converges much faster to the overall global probability of error than for structured graphs, random graphs, or graphs exhibiting small-world characteristics. With noisy links, we determine the optimal number of iterations before calling a decision. Finally, we introduce a new class of random graphs that are easy to construct, can be designed with arbitrary number of sensors, and whose spectral and convergence properties make them practically equivalent to Ramanujan topologies.<|reference_end|>
arxiv
@article{kar2006topology, title={Topology for Distributed Inference on Graphs}, author={Soummya Kar, Saeed Aldosari and Jos'e M. F. Moura}, journal={arXiv preprint arXiv:cs/0606052}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606052}, primaryClass={cs.IT math.IT} }
kar2006topology
arxiv-674358
cs/0606053
Context-Sensitive Languages, Rational Graphs and Determinism
<|reference_start|>Context-Sensitive Languages, Rational Graphs and Determinism: We investigate families of infinite automata for context-sensitive languages. An infinite automaton is an infinite labeled graph with two sets of initial and final vertices. Its language is the set of all words labelling a path from an initial vertex to a final vertex. In 2001, Morvan and Stirling proved that rational graphs accept the context-sensitive languages between rational sets of initial and final vertices. This result was later extended to sub-families of rational graphs defined by more restricted classes of transducers. Our contribution is to provide syntactical and self-contained proofs of the above results, whereas earlier constructions relied on a non-trivial normal form of context-sensitive grammars defined by Penttonen in the 1970's. These new proof techniques enable us to summarize and refine these results by considering several sub-families defined by restrictions on the type of transducers, the degree of the graph or the size of the set of initial vertices.<|reference_end|>
arxiv
@article{carayol2006context-sensitive, title={Context-Sensitive Languages, Rational Graphs and Determinism}, author={Arnaud Carayol and Antoine Meyer}, journal={Logical Methods in Computer Science, Volume 2, Issue 2 (July 19, 2006) lmcs:2254}, year={2006}, doi={10.2168/LMCS-2(2:6)2006}, archivePrefix={arXiv}, eprint={cs/0606053}, primaryClass={cs.LO} }
carayol2006context-sensitive
arxiv-674359
cs/0606054
Threshold-Controlled Global Cascading in Wireless Sensor Networks
<|reference_start|>Threshold-Controlled Global Cascading in Wireless Sensor Networks: We investigate cascade dynamics in threshold-controlled (multiplex) propagation on random geometric networks. We find that such local dynamics can serve as an efficient, robust, and reliable prototypical activation protocol in sensor networks in responding to various alarm scenarios. We also consider the same dynamics on a modified network by adding a few long-range communication links, resulting in a small-world network. We find that such construction can further enhance and optimize the speed of the network's response, while keeping energy consumption at a manageable level.<|reference_end|>
arxiv
@article{lu2006threshold-controlled, title={Threshold-Controlled Global Cascading in Wireless Sensor Networks}, author={Qiming Lu, Gyorgy Korniss, Boleslaw K. Szymanski}, journal={Proceeding of third International Conference on Networked Sensing Systems, 164-171 (TRF, 2006)}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606054}, primaryClass={cs.NI} }
lu2006threshold-controlled
arxiv-674360
cs/0606055
Simple Methods For Drawing Rational Surfaces as Four or Six Bezier Patches
<|reference_start|>Simple Methods For Drawing Rational Surfaces as Four or Six Bezier Patches: In this paper, we give several simple methods for drawing a whole rational surface (without base points) as several Bezier patches. The first two methods apply to surfaces specified by triangular control nets and partition the real projective plane RP2 into four and six triangles respectively. The third method applies to surfaces specified by rectangular control nets and partitions the torus RP1 X RP1 into four rectangular regions. In all cases, the new control nets are obtained by sign flipping and permutation of indices from the original control net. The proofs that these formulae are correct involve very little computations and instead exploit the geometry of the parameter space (RP2 or RP1 X RP1). We illustrate our method on some classical examples. We also propose a new method for resolving base points using a simple ``blowing up'' technique involving the computation of ``resolved'' control nets.<|reference_end|>
arxiv
@article{gallier2006simple, title={Simple Methods For Drawing Rational Surfaces as Four or Six Bezier Patches}, author={Jean Gallier}, journal={arXiv preprint arXiv:cs/0606055}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606055}, primaryClass={cs.CG cs.GR} }
gallier2006simple
arxiv-674361
cs/0606056
Fast and Simple Methods For Computing Control Points
<|reference_start|>Fast and Simple Methods For Computing Control Points: The purpose of this paper is to present simple and fast methods for computing control points for polynomial curves and polynomial surfaces given explicitly in terms of polynomials (written as sums of monomials). We give recurrence formulae w.r.t. arbitrary affine frames. As a corollary, it is amusing that we can also give closed-form expressions in the case of the frame (r, s) for curves, and the frame ((1, 0, 0), (0, 1, 0), (0, 0, 1)) for surfaces. Our methods have the same low polynomial (time and space) complexity as the other best known algorithms, and are very easy to implement.<|reference_end|>
arxiv
@article{gallier2006fast, title={Fast and Simple Methods For Computing Control Points}, author={Jean Gallier and Weqing Gu}, journal={arXiv preprint arXiv:cs/0606056}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606056}, primaryClass={cs.CC cs.GR} }
gallier2006fast
arxiv-674362
cs/0606057
Approximability of Bounded Occurrence Max Ones
<|reference_start|>Approximability of Bounded Occurrence Max Ones: We study the approximability of Max Ones when the number of variable occurrences is bounded by a constant. For conservative constraint languages (i.e., when the unary relations are included) we give a complete classification when the number of occurrences is three or more and a partial classification when the bound is two. For the non-conservative case we prove that it is either trivial or equivalent to the corresponding conservative problem under polynomial-time many-one reductions.<|reference_end|>
arxiv
@article{kuivinen2006approximability, title={Approximability of Bounded Occurrence Max Ones}, author={Fredrik Kuivinen}, journal={arXiv preprint arXiv:cs/0606057}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606057}, primaryClass={cs.CC} }
kuivinen2006approximability
arxiv-674363
cs/0606058
Lower bounds and complete problems in nondeterministic linear time and sublinear space complexity classes
<|reference_start|>Lower bounds and complete problems in nondeterministic linear time and sublinear space complexity classes: Proving lower bounds remains the most difficult of tasks in computational complexity theory. In this paper, we show that whereas most natural NP-complete problems belong to NLIN (linear time on nondeterministic RAMs), some of them, typically the planar versions of many NP-complete problems, are recognized by nondeterministic RAMs in linear time and sublinear space. The main results of this paper are the following: as the second author did for NLIN, we give exact logical characterizations of nondeterministic polynomial time-space complexity classes; we derive from them a class of problems that are complete in these classes, and as a consequence of such a precise result and of some recent separation theorems using diagonalization, prove time-space lower bounds for these problems.<|reference_end|>
arxiv
@article{chapdelaine2006lower, title={Lower bounds and complete problems in nondeterministic linear time and sublinear space complexity classes}, author={Philippe Chapdelaine, Etienne Grandjean}, journal={arXiv preprint arXiv:cs/0606058}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606058}, primaryClass={cs.CC cs.LO} }
chapdelaine2006lower
arxiv-674364
cs/0606059
Tromino tilings of Domino-Deficient Rectangles
<|reference_start|>Tromino tilings of Domino-Deficient Rectangles: We consider tromino tilings of $m\times n$ domino-deficient rectangles, where $3|(mn-2)$ and $m,n\geq0$, and characterize all cases of domino removal that admit such tilings, thereby settling the open problem posed by J. M. Ash and S. Golomb in \cite{marshall}. Based on this characterization, we design a procedure for constructing such a tiling if one exists. We also consider the problem of counting such tilings and derive the exact formula for the number of tilings for $2\times(3t+1)$ rectangles, the exact generating function for $4\times(3t+2)$ rectangles, where $t\geq0$, and an upper bound on the number of tromino tilings for $m\times n$ domino-deficient rectangles. We also consider general 2-deficiency in $n\times4$ rectangles, where $n\geq8$, and characterize all pairs of squares which do not permit a tromino tiling.<|reference_end|>
arxiv
@article{aanjaneya2006tromino, title={Tromino tilings of Domino-Deficient Rectangles}, author={Mridul Aanjaneya}, journal={arXiv preprint arXiv:cs/0606059}, year={2006}, number={Technical Report no. IIT/CSE/TR/2006/MA/1, June 05, 2006, Dept. of Computer Sc. and Engg., IIT Kharagpur 721302, India}, archivePrefix={arXiv}, eprint={cs/0606059}, primaryClass={cs.DM math.CO} }
aanjaneya2006tromino
arxiv-674365
cs/0606060
Complex Networks: New Concepts and Tools for Real-Time Imaging and Vision
<|reference_start|>Complex Networks: New Concepts and Tools for Real-Time Imaging and Vision: This article discusses how concepts and methods of complex networks can be applied to real-time imaging and computer vision. After a brief introduction to the basic concepts of complex networks, their use as a means to represent and characterize images, as well as for modeling visual saliency, is briefly described. The possibility of applying complex networks to model and simulate the performance of parallel and distributed computing systems for visual methods is also proposed.<|reference_end|>
arxiv
@article{costa2006complex, title={Complex Networks: New Concepts and Tools for Real-Time Imaging and Vision}, author={Luciano da Fontoura Costa}, journal={arXiv preprint arXiv:cs/0606060}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606060}, primaryClass={cs.CV cs.DC physics.soc-ph} }
costa2006complex
arxiv-674366
cs/0606061
On the Efficiency of Strategies for Subdividing Polynomial Triangular Surface Patches
<|reference_start|>On the Efficiency of Strategies for Subdividing Polynomial Triangular Surface Patches: In this paper, we investigate the efficiency of various strategies for subdividing polynomial triangular surface patches. We give a simple algorithm performing a regular subdivision in four calls to the standard de Casteljau algorithm (in its subdivision version). A naive version uses twelve calls. We also show that any method for obtaining a regular subdivision using the standard de Casteljau algorithm requires at least 4 calls. Thus, our method is optimal. We give another subdivision algorithm using only three calls to the de Casteljau algorithm. Instead of being regular, the subdivision pattern is diamond-like. Finally, we present a ``spider-like'' subdivision scheme producing six subtriangles in four calls to the de Casteljau algorithm.<|reference_end|>
arxiv
@article{gallier2006on, title={On the Efficiency of Strategies for Subdividing Polynomial Triangular Surface Patches}, author={Jean Gallier}, journal={arXiv preprint arXiv:cs/0606061}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606061}, primaryClass={cs.CG cs.GR} }
gallier2006on
arxiv-674367
cs/0606062
Logics for Unranked Trees: An Overview
<|reference_start|>Logics for Unranked Trees: An Overview: Labeled unranked trees are used as a model of XML documents, and logical languages for them have been studied actively over the past several years. Such logics have different purposes: some are better suited for extracting data, some for expressing navigational properties, and some make it easy to relate complex properties of trees to the existence of tree automata for those properties. Furthermore, logics differ significantly in their model-checking properties, their automata models, and their behavior on ordered and unordered trees. In this paper we present a survey of logics for unranked trees.<|reference_end|>
arxiv
@article{libkin2006logics, title={Logics for Unranked Trees: An Overview}, author={Leonid Libkin}, journal={Logical Methods in Computer Science, Volume 2, Issue 3 (July 26, 2006) lmcs:2244}, year={2006}, doi={10.2168/LMCS-2(3:2)2006}, archivePrefix={arXiv}, eprint={cs/0606062}, primaryClass={cs.LO cs.DB} }
libkin2006logics
arxiv-674368
cs/0606063
FLAIM: A Multi-level Anonymization Framework for Computer and Network Logs
<|reference_start|>FLAIM: A Multi-level Anonymization Framework for Computer and Network Logs: FLAIM (Framework for Log Anonymization and Information Management) addresses two important needs not well addressed by current log anonymizers. First, it is extremely modular and not tied to the specific log being anonymized. Second, it supports multi-level anonymization, allowing system administrators to make fine-grained trade-offs between information loss and privacy/security concerns. In this paper, we examine anonymization solutions to date and note the above limitations in each. We further describe how FLAIM addresses these problems, and we describe FLAIM's architecture and features in detail.<|reference_end|>
arxiv
@article{slagell2006flaim:, title={FLAIM: A Multi-level Anonymization Framework for Computer and Network Logs}, author={Adam Slagell, Kiran Lakkaraju and Katherine Luo}, journal={arXiv preprint arXiv:cs/0606063}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606063}, primaryClass={cs.CR} }
slagell2006flaim:
arxiv-674369
cs/0606064
Improved Exponential Time Lower Bound of Knapsack Problem under BT model
<|reference_start|>Improved Exponential Time Lower Bound of Knapsack Problem under BT model: M. Alekhnovich et al. recently proposed a model of algorithms, called the BT model, which covers Greedy, Backtrack and Simple Dynamic Programming methods and can be further divided into three kinds: fixed, adaptive and fully adaptive. They proved exponential time lower bounds for exact and approximation algorithms under the adaptive BT model for the Knapsack problem, which are $\Omega(2^{n/2}/\sqrt n)=\Omega(2^{0.5n}/\sqrt n)$ and $\Omega((1/\epsilon)^{1/3.17})\approx\Omega((1/\epsilon)^{0.315})$ (for approximation ratio $1-\epsilon$) respectively (M. Alekhnovich, A. Borodin, J. Buresh-Oppenheim, R. Impagliazzo, A. Magen, and T. Pitassi, Toward a Model for Backtracking and Dynamic Programming, \emph{Proceedings of the Twentieth Annual IEEE Conference on Computational Complexity}, pp. 308-322, 2005). In this note, we slightly improve their lower bounds to $\Omega(2^{(2-\epsilon)n/3}/\sqrt{n})\approx \Omega(2^{0.66n}/\sqrt{n})$ and $\Omega((1/\epsilon)^{1/2.38})\approx\Omega((1/\epsilon)^{0.420})$, and propose as an open question what the best achievable lower bounds are for Knapsack under adaptive BT models.<|reference_end|>
arxiv
@article{li2006improved, title={Improved Exponential Time Lower Bound of Knapsack Problem under BT model}, author={Xin Li, Tian Liu, Han Peng, Hongtao Sun, Jiaqi Zhu}, journal={arXiv preprint arXiv:cs/0606064}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606064}, primaryClass={cs.CC} }
li2006improved
arxiv-674370
cs/0606065
On the complexity of XPath containment in the presence of disjunction, DTDs, and variables
<|reference_start|>On the complexity of XPath containment in the presence of disjunction, DTDs, and variables: XPath is a simple language for navigating an XML-tree and returning a set of answer nodes. The focus in this paper is on the complexity of the containment problem for various fragments of XPath. We restrict attention to the most common XPath expressions which navigate along the child and/or descendant axis. In addition to basic expressions using only node tests and simple predicates, we also consider disjunction and variables (ranging over nodes). Further, we investigate the containment problem relative to a given DTD. With respect to variables we study two semantics, (1) the original semantics of XPath, where the values of variables are given by an outer context, and (2) an existential semantics introduced by Deutsch and Tannen, in which the values of variables are existentially quantified. In this framework, we establish an exact classification of the complexity of the containment problem for many XPath fragments.<|reference_end|>
arxiv
@article{neven2006on, title={On the complexity of XPath containment in the presence of disjunction, DTDs, and variables}, author={Frank Neven and Thomas Schwentick}, journal={Logical Methods in Computer Science, Volume 2, Issue 3 (July 26, 2006) lmcs:2243}, year={2006}, doi={10.2168/LMCS-2(3:1)2006}, archivePrefix={arXiv}, eprint={cs/0606065}, primaryClass={cs.DB cs.LO} }
neven2006on
arxiv-674371
cs/0606066
The Cumulative Rule for Belief Fusion
<|reference_start|>The Cumulative Rule for Belief Fusion: The problem of combining beliefs in the Dempster-Shafer belief theory has attracted considerable attention over the last two decades. The classical Dempster's Rule has often been criticised, and many alternative rules for belief combination have been proposed in the literature. The consensus operator for combining beliefs has nice properties and produces more intuitive results than Dempster's rule, but has the limitation that it can only be applied to belief distribution functions on binary state spaces. In this paper we present a generalisation of the consensus operator that can be applied to Dirichlet belief functions on state spaces of arbitrary size. This rule, called the cumulative rule of belief combination, can be derived from classical statistical theory, and corresponds well with human intuition.<|reference_end|>
arxiv
@article{josang2006the, title={The Cumulative Rule for Belief Fusion}, author={Audun Josang}, journal={arXiv preprint arXiv:cs/0606066}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606066}, primaryClass={cs.AI} }
josang2006the
arxiv-674372
cs/0606067
Scheduling Algorithms for Procrastinators
<|reference_start|>Scheduling Algorithms for Procrastinators: This paper presents scheduling algorithms for procrastinators, where the speed that a procrastinator executes a job increases as the due date approaches. We give optimal off-line scheduling policies for linearly increasing speed functions. We then explain the computational/numerical issues involved in implementing this policy. We next explore the online setting, showing that there exist adversaries that force any online scheduling policy to miss due dates. This impossibility result motivates the problem of minimizing the maximum interval stretch of any job; the interval stretch of a job is the job's flow time divided by the job's due date minus release time. We show that several common scheduling strategies, including the "hit-the-highest-nail" strategy beloved by procrastinators, have arbitrarily large maximum interval stretch. Then we give the "thrashing" scheduling policy and show that it is a \Theta(1) approximation algorithm for the maximum interval stretch.<|reference_end|>
arxiv
@article{bender2006scheduling, title={Scheduling Algorithms for Procrastinators}, author={Michael A. Bender, Raphael Clifford and Kostas Tsichlas}, journal={arXiv preprint arXiv:cs/0606067}, year={2006}, doi={10.1007/s10951-007-0038-4}, archivePrefix={arXiv}, eprint={cs/0606067}, primaryClass={cs.DS} }
bender2006scheduling
arxiv-674373
cs/0606068
Security and Non-Repudiation for Voice-Over-IP Conversations
<|reference_start|>Security and Non-Repudiation for Voice-Over-IP Conversations: We present a concept to achieve non-repudiation for natural language conversations by electronically signing packet-based, digital, voice communication. Signing a VoIP-based conversation means protecting the integrity and authenticity of the bidirectional data stream and its temporal sequence, which together establish the security context of the communication. Our approach is conceptually close to the protocols that embody VoIP and provides a high level of inherent security. It enables signatures over voice as true declarations of will, in principle between unacquainted speakers. We point to trusted computing enabled devices as possible trusted signature terminals for voice communication.<|reference_end|>
arxiv
@article{hett2006security, title={Security and Non-Repudiation for Voice-Over-IP Conversations}, author={Christian Hett, Nicolai Kuntze, and Andreas U. Schmidt}, journal={arXiv preprint arXiv:cs/0606068}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606068}, primaryClass={cs.CR} }
hett2006security
arxiv-674374
cs/0606069
Inference and Evaluation of the Multinomial Mixture Model for Text Clustering
<|reference_start|>Inference and Evaluation of the Multinomial Mixture Model for Text Clustering: In this article, we investigate the use of a probabilistic model for unsupervised clustering in text collections. Unsupervised clustering has become a basic module for many intelligent text processing applications, such as information retrieval, text classification or information extraction. The model considered in this contribution consists of a mixture of multinomial distributions over the word counts, each component corresponding to a different theme. We present and contrast various estimation procedures, which apply both in supervised and unsupervised contexts. In supervised learning, this work suggests a criterion for evaluating the posterior odds of new documents which is more statistically sound than the "naive Bayes" approach. In an unsupervised context, we propose measures to set up a systematic evaluation framework and start by examining the Expectation-Maximization (EM) algorithm as the basic tool for inference. We discuss the importance of initialization and the influence of other features such as the smoothing strategy or the size of the vocabulary, thereby illustrating the difficulties incurred by the high dimensionality of the parameter space. We also propose a heuristic algorithm based on iterative EM with vocabulary reduction to solve this problem. Using the fact that the latent variables can be analytically integrated out, we finally show that the Gibbs sampling algorithm is tractable and compares favorably to the basic expectation maximization approach.<|reference_end|>
arxiv
@article{rigouste2006inference, title={Inference and Evaluation of the Multinomial Mixture Model for Text Clustering}, author={Lo"is Rigouste (TSI), Olivier Capp'e (TSI), Franc{c}ois Yvon (TSI)}, journal={Information Processing & Management 43, 5 (01/09/2007) 1260-1280}, year={2006}, doi={10.1016/j.ipm.2006.11.001}, archivePrefix={arXiv}, eprint={cs/0606069}, primaryClass={cs.IR cs.CL} }
rigouste2006inference
arxiv-674375
cs/0606070
Is there an Elegant Universal Theory of Prediction?
<|reference_start|>Is there an Elegant Universal Theory of Prediction?: Solomonoff's inductive learning model is a powerful, universal and highly elegant theory of sequence prediction. Its critical flaw is that it is incomputable and thus cannot be used in practice. It is sometimes suggested that it may still be useful to help guide the development of very general and powerful theories of prediction which are computable. In this paper it is shown that although powerful algorithms exist, they are necessarily highly complex. This alone makes their theoretical analysis problematic, however it is further shown that beyond a moderate level of complexity the analysis runs into the deeper problem of Goedel incompleteness. This limits the power of mathematics to analyse and study prediction algorithms, and indeed intelligent systems in general.<|reference_end|>
arxiv
@article{legg2006is, title={Is there an Elegant Universal Theory of Prediction?}, author={Shane Legg}, journal={arXiv preprint arXiv:cs/0606070}, year={2006}, number={IDSIA - 12 - 06}, archivePrefix={arXiv}, eprint={cs/0606070}, primaryClass={cs.AI cs.CC} }
legg2006is
arxiv-674376
cs/0606071
Scheduling and Codeword Length Optimization in Time Varying Wireless Networks
<|reference_start|>Scheduling and Codeword Length Optimization in Time Varying Wireless Networks: In this paper, a downlink scenario in which a single-antenna base station communicates with K single-antenna users, over a time-correlated fading channel, is considered. It is assumed that channel state information is perfectly known at each receiver, while the statistical characteristics of the fading process and the fading gain at the beginning of each frame are known to the transmitter. By evaluating the random coding error exponent of the time-correlated fading channel, it is shown that there is an optimal codeword length which maximizes the throughput. The throughput of the conventional scheduling that transmits to the user with the maximum signal-to-noise ratio is examined using both fixed length codewords and variable length codewords. Although optimizing the codeword length improves the performance, it is shown that using the conventional scheduling, the gap between the achievable throughput and the maximum possible throughput of the system tends to infinity as K goes to infinity. A simple scheduling that considers both the signal-to-noise ratio and the channel time variation is proposed. It is shown that by using this scheduling, the gap between the achievable throughput and the maximum throughput of the system approaches zero.<|reference_end|>
arxiv
@article{sadrabadi2006scheduling, title={Scheduling and Codeword Length Optimization in Time Varying Wireless Networks}, author={Mehdi Ansari Sadrabadi, Alireza Bayesteh and Amir K. Khandani}, journal={arXiv preprint arXiv:cs/0606071}, year={2006}, number={#2006-01}, archivePrefix={arXiv}, eprint={cs/0606071}, primaryClass={cs.IT math.IT} }
sadrabadi2006scheduling
arxiv-674377
cs/0606072
Relational Parametricity and Control
<|reference_start|>Relational Parametricity and Control: We study the equational theory of Parigot's second-order λμ-calculus in connection with a call-by-name continuation-passing style (CPS) translation into a fragment of the second-order λ-calculus. It is observed that the relational parametricity on the target calculus induces a natural notion of equivalence on the λμ-terms. On the other hand, the unconstrained relational parametricity on the λμ-calculus turns out to be inconsistent with this CPS semantics. Following these facts, we propose to formulate the relational parametricity on the λμ-calculus in a constrained way, which might be called ``focal parametricity''.<|reference_end|>
arxiv
@article{hasegawa2006relational, title={Relational Parametricity and Control}, author={Masahito Hasegawa}, journal={Logical Methods in Computer Science, Volume 2, Issue 3 (July 27, 2006) lmcs:2245}, year={2006}, doi={10.2168/LMCS-2(3:3)2006}, archivePrefix={arXiv}, eprint={cs/0606072}, primaryClass={cs.PL cs.LO} }
hasegawa2006relational
arxiv-674378
cs/0606073
Comparison of the estimation of the degree of polarization from four or two intensity images degraded by speckle noise
<|reference_start|>Comparison of the estimation of the degree of polarization from four or two intensity images degraded by speckle noise: Active polarimetric imagery is a powerful tool for accessing the information present in a scene. Indeed, the polarimetric images obtained can reveal polarizing properties of the objects that are not available using conventional imaging systems. However, when coherent light is used to illuminate the scene, the images are degraded by speckle noise. The polarization properties of a scene are characterized by the degree of polarization. In standard polarimetric imagery systems, four intensity images are needed to estimate this degree. If the measurements are assumed to be uncorrelated, this number can be decreased to two images using the Orthogonal State Contrast Image (OSCI). However, this approach appears too restrictive in some cases. We thus propose in this paper a new statistical parametric method to estimate the degree of polarization assuming correlated measurements with only two intensity images. The estimators obtained from four images, from the OSCI and from the proposed method, are compared using simulated polarimetric data degraded by speckle noise.<|reference_end|>
arxiv
@article{roche2006comparison, title={Comparison of the estimation of the degree of polarization from four or two intensity images degraded by speckle noise}, author={Muriel Roche (IF), Philippe R'efr'egier (IF)}, journal={EUSIPCO 2006 (2006) -}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606073}, primaryClass={cs.IR physics.optics} }
roche2006comparison
arxiv-674379
cs/0606074
Rate Regions for Relay Broadcast Channels
<|reference_start|>Rate Regions for Relay Broadcast Channels: A partially cooperative relay broadcast channel (RBC) is a three-node network with one source node and two destination nodes (destinations 1 and 2) where destination 1 can act as a relay to assist destination 2. Inner and outer bounds on the capacity region of the discrete memoryless partially cooperative RBC are obtained. When the relay function is disabled, the inner and outer bounds reduce to new bounds on the capacity region of broadcast channels. Four classes of RBCs are studied in detail. For the partially cooperative RBC with degraded message sets, inner and outer bounds are obtained. For the semideterministic partially cooperative RBC and the orthogonal partially cooperative RBC, the capacity regions are established. For the parallel partially cooperative RBC with unmatched degraded subchannels, the capacity region is established for the case of degraded message sets. The capacity is also established when the source node has only a private message for destination 2, i.e., the channel reduces to a parallel relay channel with unmatched degraded subchannels.<|reference_end|>
arxiv
@article{liang2006rate, title={Rate Regions for Relay Broadcast Channels}, author={Yingbin Liang and Gerhard Kramer}, journal={arXiv preprint arXiv:cs/0606074}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606074}, primaryClass={cs.IT math.IT} }
liang2006rate
arxiv-674380
cs/0606075
10^(10^6) Worlds and Beyond: Efficient Representation and Processing of Incomplete Information
<|reference_start|>10^(10^6) Worlds and Beyond: Efficient Representation and Processing of Incomplete Information: Current systems and formalisms for representing incomplete information generally suffer from at least one of two weaknesses. Either they are not strong enough for representing results of simple queries, or the handling and processing of the data, e.g. for query evaluation, is intractable. In this paper, we present a decomposition-based approach to addressing this problem. We introduce world-set decompositions (WSDs), a space-efficient formalism for representing any finite set of possible worlds over relational databases. WSDs are therefore a strong representation system for any relational query language. We study the problem of efficiently evaluating relational algebra queries on sets of worlds represented by WSDs. We also evaluate our technique experimentally in a large census data scenario and show that it is both scalable and efficient.<|reference_end|>
arxiv
@article{antova200610^(10^6), title={10^(10^6) Worlds and Beyond: Efficient Representation and Processing of Incomplete Information}, author={Lyublena Antova, Christoph Koch, Dan Olteanu}, journal={arXiv preprint arXiv:cs/0606075}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606075}, primaryClass={cs.DB} }
antova200610^(10^6)
arxiv-674381
cs/0606076
A Flexible Bandwidth Reservation Framework for Bulk Data Transfers in Grid Networks
<|reference_start|>A Flexible Bandwidth Reservation Framework for Bulk Data Transfers in Grid Networks: In grid networks, distributed resources are interconnected by a wide area network to support compute and data-intensive applications, which require reliable and efficient transfer of gigabits (even terabits) of data. Unlike best-effort traffic in the Internet, bulk data transfer in grids requires bandwidth reservation as a fundamental service. Existing reservation schemes such as RSVP are designed for real-time traffic specified by reservation rate and transfer start time but with unknown lifetime. In comparison, bulk data transfer requests are defined in terms of volume and deadline, which provide more information and allow more flexibility in reservation schemes, i.e., transfer start time can be flexibly chosen, and reservation for a single request can be divided into multiple intervals with different reservation rates. We define a flexible reservation framework using time-rate function algebra, and identify a series of practical reservation scheme families with increasing generality and potential performance, namely, FixTime-FixRate, FixTime-FlexRate, FlexTime-FlexRate, and Multi-Interval. Simple heuristics are used to select a representative scheme from each family for performance comparison. Simulation results show that the increasing flexibility can potentially improve system performance, minimizing both blocking probability and mean flow time. We also discuss the distributed implementation of the proposed framework.<|reference_end|>
arxiv
@article{chen2006a, title={A Flexible Bandwidth Reservation Framework for Bulk Data Transfers in Grid Networks}, author={Bin Bin Chen (INRIA Rh^one-Alpes), Pascale Primet (INRIA Rh^one-Alpes)}, journal={arXiv preprint arXiv:cs/0606076}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606076}, primaryClass={cs.NI} }
chen2006a
arxiv-674382
cs/0606077
On Sequence Prediction for Arbitrary Measures
<|reference_start|>On Sequence Prediction for Arbitrary Measures: Suppose we are given two probability measures on the set of one-way infinite finite-alphabet sequences and consider the question when one of the measures predicts the other, that is, when conditional probabilities converge (in a certain sense) when one of the measures is chosen to generate the sequence. This question may be considered a refinement of the problem of sequence prediction in its most general formulation: for a given class of probability measures, does there exist a measure which predicts all of the measures in the class? To address this problem, we find some conditions on local absolute continuity which are sufficient for prediction and which generalize several different notions which are known to be sufficient for prediction. We also formulate some open questions to outline a direction for finding the conditions on classes of measures for which prediction is possible.<|reference_end|>
arxiv
@article{ryabko2006on, title={On Sequence Prediction for Arbitrary Measures}, author={Daniil Ryabko and Marcus Hutter}, journal={Proc. IEEE International Symposium on Information Theory (ISIT 2007) pages 2346-2350}, year={2006}, number={IDSIA-13-06}, archivePrefix={arXiv}, eprint={cs/0606077}, primaryClass={cs.LG} }
ryabko2006on
arxiv-674383
cs/0606078
Dimension Extractors and Optimal Decompression
<|reference_start|>Dimension Extractors and Optimal Decompression: A *dimension extractor* is an algorithm designed to increase the effective dimension -- i.e., the amount of computational randomness -- of an infinite binary sequence, in order to turn a "partially random" sequence into a "more random" sequence. Extractors are exhibited for various effective dimensions, including constructive, computable, space-bounded, time-bounded, and finite-state dimension. Using similar techniques, the Kucera-Gacs theorem is examined from the perspective of decompression, by showing that every infinite sequence S is Turing reducible to a Martin-Loef random sequence R such that the asymptotic number of bits of R needed to compute n bits of S, divided by n, is precisely the constructive dimension of S, which is shown to be the optimal ratio of query bits to computed bits achievable with Turing reductions. The extractors and decompressors that are developed lead directly to new characterizations of some effective dimensions in terms of optimal decompression by Turing reductions.<|reference_end|>
arxiv
@article{doty2006dimension, title={Dimension Extractors and Optimal Decompression}, author={David Doty}, journal={arXiv preprint arXiv:cs/0606078}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606078}, primaryClass={cs.CC cs.IT math.IT} }
doty2006dimension
arxiv-674384
cs/0606079
Ten-Year Cross-Disciplinary Comparison of the Growth of Open Access and How it Increases Research Citation Impact
<|reference_start|>Ten-Year Cross-Disciplinary Comparison of the Growth of Open Access and How it Increases Research Citation Impact: Lawrence (2001) found computer science articles that were openly accessible (OA) on the Web were cited more. We replicated this in physics. We tested 1,307,038 articles published across 12 years (1992-2003) in 10 disciplines (Biology, Psychology, Sociology, Health, Political Science, Economics, Education, Law, Business, Management). A robot trawls the Web for full-texts using reference metadata from ISI citation data (signal detectability d'=2.45; bias = 0.52). The percentage of OA articles (relative to total OA + NOA) varies from 5%-16% (depending on discipline, year and country) and is slowly climbing annually (correlation r=.76, sample size N=12, probability p < 0.005). Comparing OA and NOA articles in the same journal/year, OA articles have consistently more citations, the advantage varying from 36%-172% by discipline and year. Comparing articles within six citation ranges (0, 1, 2-3, 4-7, 8-15, 16+ citations), the annual percentage of OA articles is growing significantly faster than NOA within every citation range (r > .90, N=12, p < .0005) and the effect is greater with the more highly cited articles (r = .98, N=6, p < .005). Causality cannot be determined from these data, but our prior finding of a similar pattern in physics, where percent OA is much higher (and even approaches 100% in some subfields), makes it unlikely that the OA citation advantage is merely or mostly a self-selection bias (for making only one's better articles OA). Further research will analyze the effect's timing, causal components and relation to other variables.<|reference_end|>
arxiv
@article{hajjem2006ten-year, title={Ten-Year Cross-Disciplinary Comparison of the Growth of Open Access and How it Increases Research Citation Impact}, author={C. Hajjem, S. Harnad, Y. Gingras}, journal={IEEE Data Engineering Bulletin 28(4): 39-47; 2005}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606079}, primaryClass={cs.DL} }
hajjem2006ten-year
arxiv-674385
cs/0606080
On the structure of linear-time reducibility
<|reference_start|>On the structure of linear-time reducibility: In 1975, Ladner showed that under the hypothesis that P is not equal to NP, there exists a language which is neither in P nor NP-complete. This result was later generalized by Schoning and several authors to various polynomial-time complexity classes. We show here that such results also apply to linear-time reductions on RAMs (resp. Turing machines), and hence allow for separation results in linear-time classes similar to Ladner's for polynomial time.<|reference_end|>
arxiv
@article{chapdelaine2006on, title={On the structure of linear-time reducibility}, author={Philippe Chapdelaine}, journal={arXiv preprint arXiv:cs/0606080}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606080}, primaryClass={cs.CC} }
chapdelaine2006on
arxiv-674386
cs/0606081
New Millennium AI and the Convergence of History
<|reference_start|>New Millennium AI and the Convergence of History: Artificial Intelligence (AI) has recently become a real formal science: the new millennium brought the first mathematically sound, asymptotically optimal, universal problem solvers, providing a new, rigorous foundation for the previously largely heuristic field of General AI and embedded agents. At the same time there has been rapid progress in practical methods for learning true sequence-processing programs, as opposed to traditional methods limited to stationary pattern association. Here we will briefly review some of the new results, and speculate about future developments, pointing out that the time intervals between the most notable events in over 40,000 years or 2^9 lifetimes of human history have sped up exponentially, apparently converging to zero within the next few decades. Or is this impression just a by-product of the way humans allocate memory space to past events?<|reference_end|>
arxiv
@article{schmidhuber2006new, title={New Millennium AI and the Convergence of History}, author={Juergen Schmidhuber}, journal={arXiv preprint arXiv:cs/0606081}, year={2006}, number={IDSIA-14-06}, archivePrefix={arXiv}, eprint={cs/0606081}, primaryClass={cs.AI} }
schmidhuber2006new
arxiv-674387
cs/0606082
Lack of Finite Characterizations for the Distance-based Revision
<|reference_start|>Lack of Finite Characterizations for the Distance-based Revision: Lehmann, Magidor, and Schlechta developed an approach to belief revision based on distances between any two valuations. Suppose we are given such a distance D. This defines an operator |D, called a distance operator, which transforms any two sets of valuations V and W into the set V |D W of all elements of W that are closest to V. This operator |D defines naturally the revision of K by A as the set of all formulas satisfied in M(K) |D M(A) (i.e. those models of A that are closest to the models of K). This constitutes a distance-based revision operator. Lehmann et al. characterized families of them using a loop condition of arbitrarily big size. An interesting question is whether this loop condition can be replaced by a finite one. Extending the results of Schlechta, we will provide elements of a negative answer. In fact, we will show that for families of distance operators, there is no "normal" characterization. Roughly speaking, a normal characterization contains only finite and universally quantified conditions. These results have an interest of their own for they help to understand the limits of what is possible in this area. Now, we are quite confident that this work can be continued to show similar impossibility results for distance-based revision operators, which suggests that the big loop condition cannot be simplified.<|reference_end|>
arxiv
@article{ben-naim2006lack, title={Lack of Finite Characterizations for the Distance-based Revision}, author={Jonathan Ben-Naim (LIF)}, journal={Tenth International Conference on Principles of Knowledge Representation and Reasoning (KR'06) (2006) 239-248}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606082}, primaryClass={cs.LO} }
ben-naim2006lack
arxiv-674388
cs/0606083
The Diversity Order of the Semidefinite Relaxation Detector
<|reference_start|>The Diversity Order of the Semidefinite Relaxation Detector: We consider the detection of binary (antipodal) signals transmitted in a spatially multiplexed fashion over a fading multiple-input multiple-output (MIMO) channel and where the detection is done by means of semidefinite relaxation (SDR). The SDR detector is an attractive alternative to maximum likelihood (ML) detection since the complexity is polynomial rather than exponential. Assuming that the channel matrix is drawn with i.i.d. real valued Gaussian entries, we study the receiver diversity and prove that the SDR detector achieves the maximum possible diversity. Thus, the error probability of the receiver tends to zero at the same rate as the optimal maximum likelihood (ML) receiver in the high signal to noise ratio (SNR) limit. This significantly strengthens previous performance guarantees available for the semidefinite relaxation detector. Additionally, it proves that full diversity detection is in certain scenarios also possible when using a non-combinatorial receiver structure.<|reference_end|>
arxiv
@article{jalden2006the, title={The Diversity Order of the Semidefinite Relaxation Detector}, author={J. Jalden and B. Ottersten}, journal={arXiv preprint arXiv:cs/0606083}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606083}, primaryClass={cs.IT math.IT} }
jalden2006the
arxiv-674389
cs/0606084
The Completeness of Propositional Resolution: A Simple and Constructive Proof
<|reference_start|>The Completeness of Propositional Resolution: A Simple and Constructive<br> Proof: It is well known that the resolution method (for propositional logic) is complete. However, completeness proofs found in the literature use an argument by contradiction showing that if a set of clauses is unsatisfiable, then it must have a resolution refutation. As a consequence, none of these proofs actually gives an algorithm for producing a resolution refutation from an unsatisfiable set of clauses. In this note, we give a simple and constructive proof of the completeness of propositional resolution which consists of an algorithm together with a proof of its correctness.<|reference_end|>
arxiv
@article{gallier2006the, title={The Completeness of Propositional Resolution: A Simple and Constructive Proof}, author={Jean Gallier}, journal={Logical Methods in Computer Science, Volume 2, Issue 5 (November 7, 2006) lmcs:2234}, year={2006}, doi={10.2168/LMCS-2(5:3)2006}, archivePrefix={arXiv}, eprint={cs/0606084}, primaryClass={cs.LO cs.AI} }
gallier2006the
arxiv-674390
cs/0606085
Provably Secure Universal Steganographic Systems
<|reference_start|>Provably Secure Universal Steganographic Systems: We propose a simple universal (that is, distribution-free) steganographic system in which covertexts with and without hidden texts are statistically indistinguishable. The stegosystem can be applied to any source generating i.i.d. covertexts with unknown distribution, and the hidden text is transmitted exactly, with zero probability of error. Moreover, the proposed steganographic system has two important properties. First, the rate of transmission of hidden information approaches the Shannon entropy of the covertext source as the size of blocks used for hidden text encoding tends to infinity. Second, if the size of the alphabet of the covertext source and its min-entropy tend to infinity then the number of bits of hidden text per letter of covertext tends to $\log(n!)/n$ where $n$ is the (fixed) size of blocks used for hidden text encoding. The proposed stegosystem uses randomization.<|reference_end|>
arxiv
@article{ryabko2006provably, title={Provably Secure Universal Steganographic Systems}, author={Boris Ryabko, Daniil Ryabko}, journal={arXiv preprint arXiv:cs/0606085}, year={2006}, number={Cryptology ePrint Archive, Report 2006/063}, archivePrefix={arXiv}, eprint={cs/0606085}, primaryClass={cs.CR} }
ryabko2006provably
arxiv-674391
cs/0606086
Uniform Random Sampling of Traces in Very Large Models
<|reference_start|>Uniform Random Sampling of Traces in Very Large Models: This paper presents some first results on how to perform uniform random walks (where every trace has the same probability to occur) in very large models. The models considered here are described in a succinct way as a set of communicating reactive modules. The method relies upon techniques for counting and drawing uniformly at random words in regular languages. Each module is considered as an automaton defining such a language. It is shown how it is possible to combine local uniform drawings of traces, and to obtain some global uniform random sampling, without construction of the global model.<|reference_end|>
arxiv
@article{denise2006uniform, title={Uniform Random Sampling of Traces in Very Large Models}, author={Alain Denise (LRI), Marie-Claude Gaudel (LRI), Sandrine-Dominique Gouraud (LRI), Richard Lasseigne (ELM), Sylvain Peyronnet (ELM), the RaST Collaboration}, journal={First International Workshop on Random Testing, United States of America (2006) 10-19}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606086}, primaryClass={cs.LO} }
denise2006uniform
arxiv-674392
cs/0606087
Violator Spaces: Structure and Algorithms
<|reference_start|>Violator Spaces: Structure and Algorithms: Sharir and Welzl introduced an abstract framework for optimization problems, called LP-type problems or also generalized linear programming problems, which proved useful in algorithm design. We define a new, and as we believe, simpler and more natural framework: violator spaces, which constitute a proper generalization of LP-type problems. We show that Clarkson's randomized algorithms for low-dimensional linear programming work in the context of violator spaces. For example, in this way we obtain the fastest known algorithm for the P-matrix generalized linear complementarity problem with a constant number of blocks. We also give two new characterizations of LP-type problems: they are equivalent to acyclic violator spaces, as well as to concrete LP-type problems (informally, the constraints in a concrete LP-type problem are subsets of a linearly ordered ground set, and the value of a set of constraints is the minimum of its intersection).<|reference_end|>
arxiv
@article{gärtner2006violator, title={Violator Spaces: Structure and Algorithms}, author={Bernd Gärtner and Jirka Matousek and Leo Rüst and Petr Skovron}, journal={arXiv preprint arXiv:cs/0606087}, year={2006}, doi={10.1007/11841036_36}, archivePrefix={arXiv}, eprint={cs/0606087}, primaryClass={cs.DM} }
gärtner2006violator
arxiv-674393
cs/0606088
Breaking barriers for people with voice disabilities: Combining virtual keyboards with speech synthesizers, and VoIP applications
<|reference_start|>Breaking barriers for people with voice disabilities: Combining virtual keyboards with speech synthesizers, and VoIP applications: Text-to-speech technology has been broadly used to help people with voice disabilities to overcome their difficulties. With text-to-speech, a person types at a keyboard, the text is synthesized, and the sound comes out through the computer speakers. In recent years, Voice over IP (VoIP) applications have become very popular and have been used by people worldwide. These applications allow people to talk for free over the Internet and also to make traditional calls through the Public-Switched Telephone Network (PSTN) at a small fraction of the cost offered by traditional phone companies. We have created a system, called EasyVoice, which integrates speech synthesizers with VoIP applications. The result allows a person with motor impairments and voice disabilities to talk with another person located anywhere in the world. The benefits in this case are much stronger than the ones obtained by non-disabled people using VoIP applications. People with motor impairments sometimes can hardly use a regular or mobile phone. Thus, the advantage is not only the reduction in cost, but more important, the ability to talk at all.<|reference_end|>
arxiv
@article{condado2006breaking, title={Breaking barriers for people with voice disabilities: Combining virtual keyboards with speech synthesizers, and VoIP applications}, author={Paulo A. Condado, Fernando G. Lobo}, journal={arXiv preprint arXiv:cs/0606088}, year={2006}, number={Also UAlg-ILAB Report No. 200604}, archivePrefix={arXiv}, eprint={cs/0606088}, primaryClass={cs.CY} }
condado2006breaking
arxiv-674394
cs/0606089
NVision-PA: A Tool for Visual Analysis of Command Behavior Based on Process Accounting Logs (with a Case Study in HPC Cluster Security)
<|reference_start|>NVision-PA: A Tool for Visual Analysis of Command Behavior Based on Process Accounting Logs (with a Case Study in HPC Cluster Security): In the UNIX/Linux environment the kernel can log every command process created by every user with process accounting. Thus process accounting logs have many potential uses, particularly the monitoring and forensic investigation of security events. Previous work successfully leveraged the use of process accounting logs to identify a difficult-to-detect and damaging class of intrusion against high performance computing (HPC) clusters: masquerade attacks, where intruders masquerade as legitimate users with purloined authentication credentials. While masqueraders on HPC clusters were found to be identifiable with a high accuracy (greater than 90%), this accuracy is still not high enough for HPC production environments where greater than 99% accuracy is needed. This paper incrementally advances the goal of more accurately identifying masqueraders on HPC clusters by seeking to identify features within command sets that distinguish masqueraders. To accomplish this goal, we created NVision-PA, a software tool that produces text and graphic statistical summaries describing input process accounting logs. We report NVision-PA results describing two different process accounting logs; one from Internet usage and one from HPC cluster usage. These results identify the distinguishing features of Internet users (as proxies for masqueraders) posing as cluster users. This research is both a promising next step toward creating a real-time masquerade detection sensor for production HPC clusters and another tool for system administrators to use for statistically monitoring and managing legitimate workloads (as indicated by command usage) in HPC environments.<|reference_end|>
arxiv
@article{ermopoulos2006nvision-pa:, title={NVision-PA: A Tool for Visual Analysis of Command Behavior Based on Process Accounting Logs (with a Case Study in HPC Cluster Security)}, author={Charis Ermopoulos and William Yurcik}, journal={arXiv preprint arXiv:cs/0606089}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606089}, primaryClass={cs.CR cs.DC} }
ermopoulos2006nvision-pa:
arxiv-674395
cs/0606090
Error Rate Analysis for Coded Multicarrier Systems over Quasi-Static Fading Channels
<|reference_start|>Error Rate Analysis for Coded Multicarrier Systems over Quasi-Static Fading Channels: Several recent standards, such as IEEE 802.11a/g, IEEE 802.16, and ECMA Multiband Orthogonal Frequency Division Multiplexing (MB-OFDM) for high data-rate Ultra-Wideband (UWB), employ bit-interleaved convolutionally-coded multicarrier modulation over quasi-static fading channels. Motivated by the lack of appropriate error rate analysis techniques for this popular type of system and channel model, we present two novel analytical methods for bit error rate (BER) estimation of coded multicarrier systems operating over frequency-selective quasi-static channels with non-ideal interleaving. In the first method, the approximate performance of the system is calculated for each realization of the channel, which is suitable for obtaining the outage BER performance (a common performance measure for e.g. MB-OFDM systems). The second method assumes Rayleigh distributed frequency-domain subcarrier channel gains and knowledge of their correlation matrix, and can be used to directly obtain the average BER performance. Both methods are applicable to convolutionally-coded interleaved multicarrier systems employing Quadrature Amplitude Modulation (QAM), and are also able to account for narrowband interference (modeled as a sum of tone interferers). To illustrate the application of the proposed analysis, both methods are used to study the performance of a tone-interference-impaired MB-OFDM system.<|reference_end|>
arxiv
@article{snow2006error, title={Error Rate Analysis for Coded Multicarrier Systems over Quasi-Static Fading Channels}, author={Chris Snow, Lutz Lampe and Robert Schober}, journal={arXiv preprint arXiv:cs/0606090}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606090}, primaryClass={cs.IT math.IT} }
snow2006error
arxiv-674396
cs/0606091
On computing fixpoints in well-structured regular model checking, with applications to lossy channel systems
<|reference_start|>On computing fixpoints in well-structured regular model checking, with applications to lossy channel systems: We prove a general finite convergence theorem for "upward-guarded" fixpoint expressions over a well-quasi-ordered set. This has immediate applications in regular model checking of well-structured systems, where a main issue is the eventual convergence of fixpoint computations. In particular, we are able to directly obtain several new decidability results on lossy channel systems.<|reference_end|>
arxiv
@article{baier2006on, title={On computing fixpoints in well-structured regular model checking, with applications to lossy channel systems}, author={C. Baier, N. Bertrand, Ph. Schnoebelen}, journal={Proc. LPAR 2006, LNCS 4246, pp. 347-361, Springer 2006}, year={2006}, doi={10.1007/11916277_24}, archivePrefix={arXiv}, eprint={cs/0606091}, primaryClass={cs.SC cs.GT} }
baier2006on
arxiv-674397
cs/0606092
Static Analysis using Parameterised Boolean Equation Systems
<|reference_start|>Static Analysis using Parameterised Boolean Equation Systems: The well-known problem of state space explosion in model checking is even more critical when applying this technique to programming languages, mainly due to the presence of complex data structures. One recent and promising approach to dealing with this problem is the construction of an abstract and correct representation of the global program state that allows visited states to be matched during program model exploration. In particular, one powerful method to implement abstract matching is to fill the state vector with a minimal set of relevant variables for each program point. In this paper, we combine the on-the-fly model-checking approach (incremental construction of the program state space) and the static analysis method called influence analysis (extraction of significant variables for each program point) in order to automatically construct an abstract matching function. Firstly, we describe the problem as an alternation-free value-based mu-calculus formula, whose validity can be checked on the program model expressed as a labeled transition system (LTS). Secondly, we translate the analysis into the local resolution of a parameterised boolean equation system (PBES), whose representation enables a more efficient construction of the resulting abstract matching function. Finally, we show how our proposal may be elegantly integrated into CADP, a generic framework for both the design and analysis of distributed systems and the development of verification tools.<|reference_end|>
arxiv
@article{gallardo2006static, title={Static Analysis using Parameterised Boolean Equation Systems}, author={María Del Mar Gallardo (GISUM), Christophe Joubert (GISUM), Pedro Merino (GISUM)}, journal={arXiv preprint arXiv:cs/0606092}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606092}, primaryClass={cs.SE} }
gallardo2006static
arxiv-674398
cs/0606093
Predictions as statements and decisions
<|reference_start|>Predictions as statements and decisions: Prediction is a complex notion, and different predictors (such as people, computer programs, and probabilistic theories) can pursue very different goals. In this paper I will review some popular kinds of prediction and argue that the theory of competitive on-line learning can benefit from the kinds of prediction that are now foreign to it.<|reference_end|>
arxiv
@article{vovk2006predictions, title={Predictions as statements and decisions}, author={Vladimir Vovk}, journal={arXiv preprint arXiv:cs/0606093}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606093}, primaryClass={cs.LG} }
vovk2006predictions
arxiv-674399
cs/0606094
On Typechecking Top-Down XML Transformations: Fixed Input or Output Schemas
<|reference_start|>On Typechecking Top-Down XML Transformations: Fixed Input or Output Schemas: Typechecking consists of statically verifying whether the output of an XML transformation always conforms to an output type for documents satisfying a given input type. In this general setting, both the input and output schema as well as the transformation are part of the input for the problem. However, scenarios where the input or output schema can be considered to be fixed are quite common in practice. In the present work, we investigate the computational complexity of the typechecking problem in the latter setting.<|reference_end|>
arxiv
@article{martens2006on, title={On Typechecking Top-Down XML Transformations: Fixed Input or Output Schemas}, author={Wim Martens, Frank Neven, and Marc Gyssens}, journal={arXiv preprint arXiv:cs/0606094}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606094}, primaryClass={cs.DB cs.PL} }
martens2006on
arxiv-674400
cs/0606095
A verification algorithm for Declarative Concurrent Programming
<|reference_start|>A verification algorithm for Declarative Concurrent Programming: A verification method for distributed systems based on decoupling forward and backward behaviour is proposed. This method uses an event-structure-based algorithm that, given a CCS process, constructs its causal compression relative to a choice of observable actions. Verifying the original process, equipped with distributed backtracking on non-observable actions, is equivalent to verifying its relative compression, which in general is much smaller. We call this method Declarative Concurrent Programming (DCP). The DCP technique compares well with direct bisimulation-based methods. Benchmarks for the classic dining philosophers problem show that causal compression is rather efficient both time- and space-wise. State-of-the-art verification tools can successfully handle more than 15 agents, whereas they can handle no more than 5 following the traditional direct method; an altogether spectacular improvement, since in this example the specification size is exponential in the number of agents.<|reference_end|>
arxiv
@article{krivine2006a, title={A verification algorithm for Declarative Concurrent Programming}, author={Jean Krivine (INRIA Rocquencourt)}, journal={arXiv preprint arXiv:cs/0606095}, year={2006}, archivePrefix={arXiv}, eprint={cs/0606095}, primaryClass={cs.DC} }
krivine2006a