corpus_id: string (length 7 to 12)
paper_id: string (length 9 to 16)
title: string (length 1 to 261)
abstract: string (length 70 to 4.02k)
source: string (1 class)
bibtex: string (length 208 to 20.9k)
citation_key: string (length 6 to 100)
arxiv-675801
cs/0703073
A New Numerical Abstract Domain Based on Difference-Bound Matrices
<|reference_start|>A New Numerical Abstract Domain Based on Difference-Bound Matrices: This paper presents a new numerical abstract domain for static analysis by abstract interpretation. This domain allows us to represent invariants of the form (x-y<=c) and (+/-x<=c), where x and y are variable values and c is an integer or real constant. Abstract elements are represented by Difference-Bound Matrices, widely used by model-checkers, but we had to design new operators to meet the needs of abstract interpretation. The result is a complete lattice of infinite height featuring widening, narrowing and common transfer functions. We focus on giving an efficient O(n^2) representation and graph-based O(n^3) algorithms - where n is the number of variables - and claim that this domain always performs more precisely than the well-known interval domain. To illustrate the precision/cost tradeoff of this domain, we have implemented simple abstract interpreters for toy imperative and parallel languages which allowed us to prove some non-trivial algorithms correct.<|reference_end|>
arxiv
@article{miné2007a, title={A New Numerical Abstract Domain Based on Difference-Bound Matrices}, author={Antoine Min\'e (LIENS)}, journal={Program As Data Objects II (PADOII) (05/2001) 155-172}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703073}, primaryClass={cs.PL} }
miné2007a
arxiv-675802
cs/0703074
Field-Sensitive Value Analysis of Embedded C Programs with Union Types and Pointer Arithmetics
<|reference_start|>Field-Sensitive Value Analysis of Embedded C Programs with Union Types and Pointer Arithmetics: We propose a memory abstraction able to lift existing numerical static analyses to C programs containing union types, pointer casts, and arbitrary pointer arithmetics. Our framework is that of a combined points-to and data-value analysis. We abstract the contents of compound variables in a field-sensitive way, whether these fields contain numeric or pointer values, and use stock numerical abstract domains to find an overapproximation of all possible memory states--with the ability to discover relationships between variables. A main novelty of our approach is the dynamic mapping scheme we use to associate a flat collection of abstract cells of scalar type to the set of accessed memory locations, while taking care of byte-level aliases - i.e., C variables with incompatible types allocated in overlapping memory locations. We do not rely on static type information, which can be misleading in C programs as it does not account for all the uses a memory zone may be put to. Our work was incorporated within the Astr\'{e}e static analyzer that checks for the absence of run-time errors in embedded, safety-critical, numerical-intensive software. It replaces the former memory domain, which was limited to well-typed, union-free, pointer-cast-free data structures. Early results demonstrate that this abstraction allows analyzing a larger class of C programs, without much cost overhead.<|reference_end|>
arxiv
@article{miné2007field-sensitive, title={Field-Sensitive Value Analysis of Embedded C Programs with Union Types and Pointer Arithmetics}, author={Antoine Min\'e (LIENS)}, journal={Languages, Compilers, and Tools for Embedded Systems (LCTES) (06/2006) 54-63}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703074}, primaryClass={cs.PL} }
miné2007field-sensitive
arxiv-675803
cs/0703075
A Few Graph-Based Relational Numerical Abstract Domains
<|reference_start|>A Few Graph-Based Relational Numerical Abstract Domains: This article presents the systematic design of a class of relational numerical abstract domains from non-relational ones. Constructed domains represent sets of invariants of the form (vj - vi in C), where vj and vi are two variables, and C lives in an abstraction of P(Z), P(Q), or P(R). We will call this family of domains weakly relational domains. The underlying concept allowing this construction is an extension of potential graphs and shortest-path closure algorithms to exotic-like algebras. Example constructions are given in order to retrieve well-known domains as well as new ones. Such domains can then be used in the Abstract Interpretation framework in order to design various static analyses. A major benefit of this construction is its modularity, allowing one to quickly implement new abstract domains from existing ones.<|reference_end|>
arxiv
@article{miné2007a, title={A Few Graph-Based Relational Numerical Abstract Domains}, author={Antoine Min\'e (LIENS)}, journal={Static Analysis Symposium (SAS) (09/2002) 117-132}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703075}, primaryClass={cs.PL} }
miné2007a
arxiv-675804
cs/0703076
Symbolic Methods to Enhance the Precision of Numerical Abstract Domains
<|reference_start|>Symbolic Methods to Enhance the Precision of Numerical Abstract Domains: We present lightweight and generic symbolic methods to improve the precision of numerical static analyses based on Abstract Interpretation. The main idea is to simplify numerical expressions before they are fed to abstract transfer functions. An important novelty is that these simplifications are performed on-the-fly, using information gathered dynamically by the analyzer. A first method, called "linearization," allows abstracting arbitrary expressions into affine forms with interval coefficients while simplifying them. A second method, called "symbolic constant propagation," enhances the simplification feature of the linearization by propagating assigned expressions in a symbolic way. Combined together, these methods increase the relationality level of numerical abstract domains and make them more robust against program transformations. We show how they can be integrated within the classical interval, octagon and polyhedron domains. These methods have been incorporated within the Astr\'{e}e static analyzer that checks for the absence of run-time errors in embedded critical avionics software. We present an experimental proof of their usefulness.<|reference_end|>
arxiv
@article{miné2007symbolic, title={Symbolic Methods to Enhance the Precision of Numerical Abstract Domains}, author={Antoine Min\'e (LIENS)}, journal={Verification, Abstract Interpretation and Model Checking (VMCAI) (01/2006) 348-363}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703076}, primaryClass={cs.PL} }
miné2007symbolic
arxiv-675805
cs/0703077
Relational Abstract Domains for the Detection of Floating-Point Run-Time Errors
<|reference_start|>Relational Abstract Domains for the Detection of Floating-Point Run-Time Errors: We present a new idea to adapt relational abstract domains to the analysis of IEEE 754-compliant floating-point numbers in order to statically detect, through abstract-interpretation-based static analyses, potential floating-point run-time exceptions such as overflows or invalid operations. In order to take the non-linearity of rounding into account, expressions are modeled as linear forms with interval coefficients. We show how to extend already existing numerical abstract domains, such as the octagon abstract domain, to efficiently abstract transfer functions based on interval linear forms. We discuss specific fixpoint stabilization techniques and give some experimental results.<|reference_end|>
arxiv
@article{miné2007relational, title={Relational Abstract Domains for the Detection of Floating-Point Run-Time Errors}, author={Antoine Min\'e (LIENS)}, journal={European Symposium on Programming (ESOP) (03/2004) 3-17}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703077}, primaryClass={cs.PL} }
miné2007relational
arxiv-675806
cs/0703078
Broadcast Capacity Region of Two-Phase Bidirectional Relaying
<|reference_start|>Broadcast Capacity Region of Two-Phase Bidirectional Relaying: In a three-node network, a half-duplex relay node enables bidirectional communication between two nodes with a spectrally efficient two-phase protocol. In the first phase, the two nodes transmit their messages to the relay node, which decodes the messages and broadcasts a re-encoded composition in the second phase. In this work we determine the capacity region of the broadcast phase. In this scenario each receiving node has perfect information about the message that is intended for the other node. The resulting set of achievable rates of the two-phase bidirectional relaying includes the region which can be achieved by applying XOR on the decoded messages at the relay node. We also prove the strong converse for the maximum error probability and show that this implies that the $[\epsilon_1,\epsilon_2]$-capacity region defined with respect to the average error probability is constant for small values of the error parameters $\epsilon_1$, $\epsilon_2$.<|reference_end|>
arxiv
@article{oechtering2007broadcast, title={Broadcast Capacity Region of Two-Phase Bidirectional Relaying}, author={Tobias J. Oechtering and Igor Bjelakovic and Clemens Schnurr and Holger Boche}, journal={arXiv preprint arXiv:cs/0703078}, year={2007}, doi={10.1109/TIT.2007.911158}, archivePrefix={arXiv}, eprint={cs/0703078}, primaryClass={cs.IT math.IT} }
oechtering2007broadcast
arxiv-675807
cs/0703079
Automata with Nested Pebbles Capture First-Order Logic with Transitive Closure
<|reference_start|>Automata with Nested Pebbles Capture First-Order Logic with Transitive Closure: String languages recognizable in (deterministic) log-space are characterized either by two-way (deterministic) multi-head automata, or, following Immerman, by first-order logic with (deterministic) transitive closure. Here we elaborate this result, and match the number of heads to the arity of the transitive closure. More precisely, first-order logic with k-ary deterministic transitive closure has the same power as deterministic automata walking on their input with k heads, additionally using a finite set of nested pebbles. This result is valid for strings, ordered trees, and in general for families of graphs having a fixed automaton that can be used to traverse the nodes of each of the graphs in the family. Other examples of such families are grids, toruses, and rectangular mazes. For nondeterministic automata, the logic is restricted to positive occurrences of transitive closure. The special case of k=1 for trees shows that single-head deterministic tree-walking automata with nested pebbles are characterized by first-order logic with unary deterministic transitive closure. This refines our earlier result that placed these automata between first-order and monadic second-order logic on trees.<|reference_end|>
arxiv
@article{engelfriet2007automata, title={Automata with Nested Pebbles Capture First-Order Logic with Transitive Closure}, author={Joost Engelfriet and Hendrik Jan Hoogeboom}, journal={Logical Methods in Computer Science, Volume 3, Issue 2 (April 26, 2007) lmcs:2220}, year={2007}, doi={10.2168/LMCS-3(2:3)2007}, archivePrefix={arXiv}, eprint={cs/0703079}, primaryClass={cs.LO} }
engelfriet2007automata
arxiv-675808
cs/0703080
A Systematic Approach to Web-Application Development
<|reference_start|>A Systematic Approach to Web-Application Development: Designing a web-application from a specification involves a series of well-planned and well-executed steps leading to the final product. This often involves critical changes in design while testing the application, which itself is slow and cumbersome. Traditional approaches either fully automate the web-application development process, or let developers write everything from scratch. Our approach is based on a middle ground, with precise control over the workflow and the usage of a set of custom-made software tools to automate a significant part of code generation.<|reference_end|>
arxiv
@article{dutta2007a, title={A Systematic Approach to Web-Application Development}, author={Joy Dutta and Paul Fodor}, journal={arXiv preprint arXiv:cs/0703080}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703080}, primaryClass={cs.SE} }
dutta2007a
arxiv-675809
cs/0703081
Randomized Computations on Large Data Sets: Tight Lower Bounds
<|reference_start|>Randomized Computations on Large Data Sets: Tight Lower Bounds: We study the randomized version of a computation model (introduced by Grohe, Koch, and Schweikardt (ICALP'05); Grohe and Schweikardt (PODS'05)) that restricts random access to external memory and internal memory space. Essentially, this model can be viewed as a powerful version of a data stream model that puts no cost on sequential scans of external memory (as other models for data streams) and, in addition, (like other external memory models, but unlike streaming models), admits several large external memory devices that can be read and written to in parallel. We obtain tight lower bounds for the decision problems set equality, multiset equality, and checksort. More precisely, we show that any randomized one-sided-error bounded Monte Carlo algorithm for these problems must perform Omega(log N) random accesses to external memory devices, provided that the internal memory size is at most O(N^(1/4)/log N), where N denotes the size of the input data. From the lower bound on the set equality problem we can infer lower bounds on the worst case data complexity of query evaluation for the languages XQuery, XPath, and relational algebra on streaming data. More precisely, we show that there exist queries in XQuery, XPath, and relational algebra, such that any (randomized) Las Vegas algorithm that evaluates these queries must perform Omega(log N) random accesses to external memory devices, provided that the internal memory size is at most O(N^(1/4)/log N).<|reference_end|>
arxiv
@article{grohe2007randomized, title={Randomized Computations on Large Data Sets: Tight Lower Bounds}, author={Martin Grohe and Andre Hernich and Nicole Schweikardt}, journal={arXiv preprint arXiv:cs/0703081}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703081}, primaryClass={cs.DB cs.CC} }
grohe2007randomized
arxiv-675810
cs/0703082
Remarks on the O(N) Implementation of the Fast Marching Method
<|reference_start|>Remarks on the O(N) Implementation of the Fast Marching Method: The fast marching algorithm computes an approximate solution to the eikonal equation in O(N log N) time, where the factor log N is due to the administration of a priority queue. Recently, Yatziv, Bartesaghi and Sapiro have suggested using an untidy priority queue, reducing the overall complexity to O(N) at the price of a small error in the computed solution. In this paper, we give an explicit estimate of the error introduced, which is based on a discrete comparison principle. This estimate implies in particular that the choice of an accuracy level that is independent of the speed function F results in the complexity bound O(Fmax/Fmin N). A numerical experiment illustrates this robustness problem for large ratios Fmax/Fmin.<|reference_end|>
arxiv
@article{rasch2007remarks, title={Remarks on the O(N) Implementation of the Fast Marching Method}, author={Christian Rasch and Thomas Satzger}, journal={arXiv preprint arXiv:cs/0703082}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703082}, primaryClass={cs.NA} }
rasch2007remarks
arxiv-675811
cs/0703083
Characterization of Search Engine Caches
<|reference_start|>Characterization of Search Engine Caches: Search engines provide cached copies of indexed content so users will have something to "click on" if the remote resource is temporarily or permanently unavailable. Depending on their proprietary caching strategies, search engines will purge their indexes and caches of resources that exceed a threshold of unavailability. Although search engine caches are provided only as an aid to the interactive user, we are interested in building reliable preservation services from the aggregate of these limited caching services. But first, we must understand the contents of search engine caches. In this paper, we have examined the cached contents of Ask, Google, MSN and Yahoo to profile such things as overlap between index and cache, size, MIME type and "staleness" of the cached resources. We also examined the overlap of the various caches with the holdings of the Internet Archive.<|reference_end|>
arxiv
@article{mccown2007characterization, title={Characterization of Search Engine Caches}, author={Frank McCown and Michael L. Nelson}, journal={arXiv preprint arXiv:cs/0703083}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703083}, primaryClass={cs.DL cs.CY} }
mccown2007characterization
arxiv-675812
cs/0703084
The Octagon Abstract Domain
<|reference_start|>The Octagon Abstract Domain: This article presents a new numerical abstract domain for static analysis by abstract interpretation. It extends a former numerical abstract domain based on Difference-Bound Matrices and allows us to represent invariants of the form (+/-x+/-y<=c), where x and y are program variables and c is a real constant. We focus on giving an efficient representation based on Difference-Bound Matrices - O(n^2) memory cost, where n is the number of variables - and graph-based algorithms for all common abstract operators - O(n^3) time cost. This includes a normal form algorithm to test equivalence of representations and a widening operator to compute least fixpoint approximations.<|reference_end|>
arxiv
@article{miné2007the, title={The Octagon Abstract Domain}, author={Antoine Min\'e (LIENS)}, journal={Analysis, Slicing and Transformation (AST) (10/2001) 310-319}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703084}, primaryClass={cs.PL} }
miné2007the
arxiv-675813
cs/0703085
Dimension and Relative Frequencies
<|reference_start|>Dimension and Relative Frequencies: We show how to calculate the finite-state dimension (equivalently, the finite-state compressibility) of saturated sets $X$ consisting of {\em all} infinite sequences $S$ over a finite alphabet $\Sigma_m$ satisfying some given condition $P$ on the asymptotic frequencies with which various symbols from $\Sigma_m$ appear in $S$. When the condition $P$ completely specifies an empirical probability distribution $\pi$ over $\Sigma_m$, i.e., a limiting frequency of occurrence for {\em every} symbol in $\Sigma_m$, it has been known since 1949 that the Hausdorff dimension of $X$ is precisely $\mathcal{H}(\pi)$, the Shannon entropy of $\pi$, and the finite-state dimension was proven to have this same value in 2001. The saturated sets were studied by Volkmann and Cajar decades ago. They got attention again only with the recent developments in multifractal analysis by Barreira, Saussol, Schmeling, and separately Olsen. However, the powerful methods they used -- ergodic theory and multifractal analysis -- do not yield a value for the finite-state (or even computable) dimension in an obvious manner. We give a pointwise characterization of finite-state dimensions of saturated sets. Simultaneously, we also show that their finite-state dimension and strong dimension coincide with their Hausdorff and packing dimension respectively, though the techniques we use are completely elementary. Our results automatically extend to less restrictive effective settings (e.g., constructive, computable, and polynomial-time dimensions).<|reference_end|>
arxiv
@article{gu2007dimension, title={Dimension and Relative Frequencies}, author={Xiaoyang Gu and Jack H. Lutz}, journal={arXiv preprint arXiv:cs/0703085}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703085}, primaryClass={cs.CC} }
gu2007dimension
arxiv-675814
cs/0703086
A Technical Report On Grid Benchmarking using SEE VO
<|reference_start|>A Technical Report On Grid Benchmarking using SEE VO: Grids include heterogeneous resources, which are based on different hardware and software architectures or components. Because of this diversity of the infrastructure, the execution time of any single job, as well as the total grid performance, can be affected substantially, which can be demonstrated by measurements. Running a simple benchmarking suite can expose this heterogeneity and give us results about the differences across the grid sites.<|reference_end|>
arxiv
@article{kouvakis2007a, title={A Technical Report On Grid Benchmarking using SEE V.O}, author={John Kouvakis and Fotis Georgatos}, journal={arXiv preprint arXiv:cs/0703086}, year={2007}, number={TR-MOD07-001}, archivePrefix={arXiv}, eprint={cs/0703086}, primaryClass={cs.PF} }
kouvakis2007a
arxiv-675815
cs/0703087
Social Information Processing in Social News Aggregation
<|reference_start|>Social Information Processing in Social News Aggregation: The rise of social media sites, such as blogs, wikis, Digg and Flickr among others, underscores the transformation of the Web to a participatory medium in which users are collaboratively creating, evaluating and distributing information. The innovations introduced by social media have led to a new paradigm for interacting with information, what we call 'social information processing'. In this paper, we study how the social news aggregator Digg exploits social information processing to solve the problems of document recommendation and rating. First, we show, by tracking stories over time, that social networks play an important role in document recommendation. The second contribution of this paper consists of two mathematical models. The first model describes how collaborative rating and promotion of stories emerges from the independent decisions made by many users. The second model describes how a user's influence, measured by the number of promoted stories, and the user's social network change in time. We find qualitative agreement between predictions of the model and user data gathered from Digg.<|reference_end|>
arxiv
@article{lerman2007social, title={Social Information Processing in Social News Aggregation}, author={Kristina Lerman}, journal={arXiv preprint arXiv:cs/0703087}, year={2007}, doi={10.1109/MIC.2007.136}, archivePrefix={arXiv}, eprint={cs/0703087}, primaryClass={cs.CY cs.AI cs.HC cs.MA} }
lerman2007social
arxiv-675816
cs/0703088
Plot 94 in ambiance X-Window
<|reference_start|>Plot 94 in ambiance X-Window: PLOT is a collection of routines to draw surfaces, contours and so on. In this work we present a version that runs on workstations with the UNIX operating system that provide the X-WINDOW graphic environment with the XLIB and OSF/MOTIF toolkits. This implementation was realized for the DEC 5000-200, DEC IPX, and DEC ALFA workstations of CINVESTAV (Center of Investigation and Advanced Studies), and also on the SILICON GRAPHICS machines of CENAC (National Center of Calculation of the National Polytechnic Institute).<|reference_end|>
arxiv
@article{vega-paez2007plot, title={Plot 94 in ambiance X-Window}, author={Ignacio Vega-Paez and Carlos Alberto Hernandez-Hernandez}, journal={Proceedings in Information Systems Analysis and Synthesis ISAS 1995, 5th, International Symposium on Systems Research, Informatics and Cybernetics, pp. 135-139, August 16-20, 95, Baden-Baden, Germany}, year={2007}, number={IBP-TR1995-01}, archivePrefix={arXiv}, eprint={cs/0703088}, primaryClass={cs.CV cs.GR} }
vega-paez2007plot
arxiv-675817
cs/0703089
Space Program Language (SPL/SQL) for the Relational Approach of the Spatial Databases
<|reference_start|>Space Program Language (SPL/SQL) for the Relational Approach of the Spatial Databases: In this project we present a grammar which unifies the design and development of spatial databases. To this end, we combine nominal and spatial information: the former is represented by the relational model and the latter by a modification of the same model. The modification allows representing spatial data structures (such as Quadtrees, Octrees, etc.) in an integrated way. This grammar is important because with it we can create tools to build systems that combine spatial-nominal characteristics such as Geographical Information Systems (GIS), Hypermedia Systems, Computer-Aided Design (CAD) systems, and so on.<|reference_end|>
arxiv
@article{vega-paez2007space, title={Space Program Language (SPL/SQL) for the Relational Approach of the Spatial Databases}, author={Ignacio Vega-Paez and Feliu D. Sagols T}, journal={Proceedings in Information Systems Analysis and Synthesis ISAS 1995, 5th International Symposium on Systems Research, Informatics and Cybernetics, pp. 89-93 August 16-20, 95, Baden-Baden, Germany}, year={2007}, number={IBP-TR1995-02}, archivePrefix={arXiv}, eprint={cs/0703089}, primaryClass={cs.DB cs.CG} }
vega-paez2007space
arxiv-675818
cs/0703090
Orthogonal Frequency Division Multiplexing: An Overview
<|reference_start|>Orthogonal Frequency Division Multiplexing: An Overview: Orthogonal Frequency Division Multiplexing (OFDM) is a multi-carrier modulation scheme that provides efficient bandwidth utilization and robustness against time-dispersive channels. This paper deals with the basic system model for OFDM-based systems and with self-interference, or the corruption of the desired signal by itself, in OFDM systems. A simple transceiver based on OFDM modulation is presented. Important impairments in OFDM systems are mathematically analyzed.<|reference_end|>
arxiv
@article{inderjeet2007orthogonal, title={Orthogonal Frequency Division Multiplexing: An Overview}, author={Kaur Inderjeet and Sharma Kanchan and Kulkarni M.}, journal={arXiv preprint arXiv:cs/0703090}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703090}, primaryClass={cs.IT math.IT} }
inderjeet2007orthogonal
arxiv-675819
cs/0703091
Multimodal Meaning Representation for Generic Dialogue Systems Architectures
<|reference_start|>Multimodal Meaning Representation for Generic Dialogue Systems Architectures: A unified language for the communicative acts between agents is essential for the design of multi-agent architectures. Whatever the type of interaction (linguistic, multimodal, including particular aspects such as force feedback), whatever the type of application (command dialogue, request dialogue, database querying), the concepts are common and we need a generic meta-model. In order to tend towards task-independent systems, we need to clarify the module parameterization procedures. In this paper, we focus on the characteristics of a meta-model designed to represent meaning in linguistic and multimodal applications. This meta-model is called MMIL for MultiModal Interface Language, and was first specified in the framework of the IST MIAMM European project. What we want to test here is how relevant MMIL is for a completely different context (a different task, a different interaction type, a different linguistic domain). We detail the exploitation of MMIL in the framework of the IST OZONE European project, and we draw conclusions on the role of MMIL in the parameterization of task-independent dialogue managers.<|reference_end|>
arxiv
@article{landragin2007multimodal, title={Multimodal Meaning Representation for Generic Dialogue Systems Architectures}, author={Fr\'ed\'eric Landragin (INRIA Lorraine - LORIA) and Alexandre Denis (INRIA Lorraine - LORIA) and Annalisa Ricci (INRIA Lorraine - LORIA) and Laurent Romary (INRIA Lorraine - LORIA)}, journal={Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC 2004) (2004) 521-524}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703091}, primaryClass={cs.AI cs.MM} }
landragin2007multimodal
arxiv-675820
cs/0703092
Comparing BB84 and Authentication-Aided Kak's Three-Stage Quantum Protocol
<|reference_start|>Comparing BB84 and Authentication-Aided Kak's Three-Stage Quantum Protocol: This paper compares the popular quantum key distribution (QKD) protocol BB84 with the more recent Kak's three-stage protocol and the latter is shown to be more secure. A theoretical representation of an authentication-aided version of Kak's three-stage protocol is provided that makes it possible to deal with the man-in-the-middle attack.<|reference_end|>
arxiv
@article{basuchowdhuri2007comparing, title={Comparing BB84 and Authentication-Aided Kak's Three-Stage Quantum Protocol}, author={Partha Basuchowdhuri}, journal={arXiv preprint arXiv:cs/0703092}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703092}, primaryClass={cs.CR} }
basuchowdhuri2007comparing
arxiv-675821
cs/0703093
Some problems in asymptotic convex geometry and random matrices motivated by numerical algorithms
<|reference_start|>Some problems in asymptotic convex geometry and random matrices motivated by numerical algorithms: The simplex method in Linear Programming motivates several problems of asymptotic convex geometry. We discuss some conjectures and known results in two related directions -- computing the size of projections of high dimensional polytopes and estimating the norms of random matrices and their inverses.<|reference_end|>
arxiv
@article{vershynin2007some, title={Some problems in asymptotic convex geometry and random matrices motivated by numerical algorithms}, author={Roman Vershynin}, journal={Banach spaces and their applications in analysis, 209--218, Walter de Gruyter, Berlin, 2007}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703093}, primaryClass={cs.CG cs.DS cs.NA} }
vershynin2007some
arxiv-675822
cs/0703094
Geographic Routing Around Obstacles in Wireless Sensor Networks
<|reference_start|>Geographic Routing Around Obstacles in Wireless Sensor Networks: Geographic routing is becoming the protocol of choice for many sensor network applications. The current state of the art is unsatisfactory: some algorithms are very efficient, however they require a preliminary planarization of the communication graph. Planarization induces overhead and is not realistic in many scenarios. On the other hand, georouting algorithms which do not rely on planarization have fairly low success rates and either fail to route messages around all but the simplest obstacles or have a high topology control overhead (e.g. contour detection algorithms). To overcome these limitations, we propose GRIC, the first lightweight and efficient on demand (i.e. all-to-all) geographic routing algorithm which does not require planarization and has almost 100% delivery rates (when no obstacles are added). Furthermore, the excellent behavior of our algorithm is maintained even in the presence of large convex obstacles. The case of hard concave obstacles is also studied; such obstacles are hard instances for which performance diminishes.<|reference_end|>
arxiv
@article{powell2007geographic, title={Geographic Routing Around Obstacles in Wireless Sensor Networks}, author={Olivier Powell and Sotiris Nikoletseas}, journal={arXiv preprint arXiv:cs/0703094}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703094}, primaryClass={cs.DC} }
powell2007geographic
arxiv-675823
cs/0703095
Copula Component Analysis
<|reference_start|>Copula Component Analysis: A framework named Copula Component Analysis (CCA) for blind source separation is proposed as a generalization of Independent Component Analysis (ICA). It differs from ICA, which assumes independence of the sources, in that the underlying components may be dependent with a certain structure, which is represented by a copula. By incorporating the dependency structure, much more accurate estimation can be made in principle in cases where the assumption of independence is invalid. A two-phase inference method is introduced for CCA, based on the notion of multidimensional ICA.<|reference_end|>
arxiv
@article{ma2007copula, title={Copula Component Analysis}, author={Jian Ma and Zengqi Sun}, journal={arXiv preprint arXiv:cs/0703095}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703095}, primaryClass={cs.IR cs.AI} }
ma2007copula
arxiv-675824
cs/0703096
Asynchronous Event-Driven Particle Algorithms
<|reference_start|>Asynchronous Event-Driven Particle Algorithms: We present, in a unifying way, the main components of three asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics, is well-known. We also present a recently-developed diffusion kinetic Monte Carlo algorithm, as well as a novel stochastic molecular-dynamics algorithm that builds on the Direct Simulation Monte Carlo. We explain how to effectively combine asynchronous event-driven with classical time-driven or with synchronous event-driven handling. Finally, we discuss some promises and challenges for event-driven simulation of realistic physical systems.<|reference_end|>
arxiv
@article{donev2007asynchronous, title={Asynchronous Event-Driven Particle Algorithms}, author={Aleksandar Donev}, journal={arXiv preprint arXiv:cs/0703096}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703096}, primaryClass={cs.OH} }
donev2007asynchronous
arxiv-675825
cs/0703097
On Approximating Optimal Weighted Lobbying, and Frequency of Correctness versus Average-Case Polynomial Time
<|reference_start|>On Approximating Optimal Weighted Lobbying, and Frequency of Correctness versus Average-Case Polynomial Time: We investigate issues related to two hard problems related to voting, the optimal weighted lobbying problem and the winner problem for Dodgson elections. Regarding the former, Christian et al. [CFRS06] showed that optimal lobbying is intractable in the sense of parameterized complexity. We provide an efficient greedy algorithm that achieves a logarithmic approximation ratio for this problem and even for a more general variant--optimal weighted lobbying. We prove that essentially no better approximation ratio than ours can be proven for this greedy algorithm. The problem of determining Dodgson winners is known to be complete for parallel access to NP [HHR97]. Homan and Hemaspaandra [HH06] proposed an efficient greedy heuristic for finding Dodgson winners with a guaranteed frequency of success, and their heuristic is a ``frequently self-knowingly correct algorithm.'' We prove that every distributional problem solvable in polynomial time on the average with respect to the uniform distribution has a frequently self-knowingly correct polynomial-time algorithm. Furthermore, we study some features of probability weight of correctness with respect to Procaccia and Rosenschein's junta distributions [PR07].<|reference_end|>
arxiv
@article{erdelyi2007on, title={On Approximating Optimal Weighted Lobbying, and Frequency of Correctness versus Average-Case Polynomial Time}, author={Gabor Erdelyi and Lane A. Hemaspaandra and Joerg Rothe and Holger Spakowski}, journal={arXiv preprint arXiv:cs/0703097}, year={2007}, number={URCS-TR-2007-914}, archivePrefix={arXiv}, eprint={cs/0703097}, primaryClass={cs.GT cs.CC cs.MA} }
erdelyi2007on
arxiv-675826
cs/0703098
Polynomial time algorithm for 3-SAT. Examples of use
<|reference_start|>Polynomial time algorithm for 3-SAT Examples of use: The algorithm checks the propositional formulas for patterns of unsatisfiability.<|reference_end|>
arxiv
@article{gubin2007polynomial, title={Polynomial time algorithm for 3-SAT. Examples of use}, author={Sergey Gubin}, journal={arXiv preprint arXiv:cs/0703098}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703098}, primaryClass={cs.CC cs.DM cs.DS cs.LO} }
gubin2007polynomial
arxiv-675827
cs/0703099
Constrained Cost-Coupled Stochastic Games with Independent State Processes
<|reference_start|>Constrained Cost-Coupled Stochastic Games with Independent State Processes: We consider non-cooperative constrained stochastic games with N players with the following special structure. With each player there is an associated controlled Markov chain. The transition probabilities of the i-th Markov chain depend only on the state and actions of controller i. The information structure that we consider is such that each player knows the state of its own MDP and its own actions. It does not know the states of, and the actions taken by, other players. Finally, each player wishes to minimize a time-average cost function, and has constraints over other time-average cost functions. Both the cost that is minimized and those defining the constraints depend on the state and actions of all players. We study in this paper the existence of a Nash equilibrium. Examples in power control in wireless communications are given.<|reference_end|>
arxiv
@article{altman2007constrained, title={Constrained Cost-Coupled Stochastic Games with Independent State Processes}, author={E. Altman and K. Avrachenkov and N. Bonneau and M. Debbah and R. El-Azouzi and D. Sadoc Menasche}, journal={arXiv preprint arXiv:cs/0703099}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703099}, primaryClass={cs.IT cs.GT math.IT} }
altman2007constrained
arxiv-675828
cs/0703100
Approximation Algorithms for Multiprocessor Scheduling under Uncertainty
<|reference_start|>Approximation Algorithms for Multiprocessor Scheduling under Uncertainty: Motivated by applications in grid computing and project management, we study multiprocessor scheduling in scenarios where there is uncertainty in the successful execution of jobs when assigned to processors. We consider the problem of multiprocessor scheduling under uncertainty, in which we are given n unit-time jobs and m machines, a directed acyclic graph C giving the dependencies among the jobs, and for every job j and machine i, the probability p_{ij} of the successful completion of job j when scheduled on machine i in any given particular step. The goal of the problem is to find a schedule that minimizes the expected makespan, that is, the expected completion time of all the jobs. The problem of multiprocessor scheduling under uncertainty was introduced by Malewicz and was shown to be NP-hard even when all the jobs are independent. In this paper, we present polynomial-time approximation algorithms for the problem, for special cases of the dag C. We obtain an O(log(n))-approximation for the case of independent jobs, an O(log(m)log(n)log(n+m)/loglog(n+m))-approximation when C is a collection of disjoint chains, an O(log(m)log^2(n))-approximation when C is a collection of directed out- or in-trees, and an O(log(m)log^2(n)log(n+m)/loglog(n+m))-approximation when C is a directed forest.<|reference_end|>
arxiv
@article{lin2007approximation, title={Approximation Algorithms for Multiprocessor Scheduling under Uncertainty}, author={Guolong Lin and Rajmohan Rajaraman}, journal={arXiv preprint arXiv:cs/0703100}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703100}, primaryClass={cs.DC cs.CC cs.DS} }
lin2007approximation
arxiv-675829
cs/0703101
A Note on Approximate Nearest Neighbor Methods
<|reference_start|>A Note on Approximate Nearest Neighbor Methods: A number of authors have described randomized algorithms for solving the epsilon-approximate nearest neighbor problem. In this note I point out that the epsilon-approximate nearest neighbor property often fails to be a useful approximation property, since epsilon-approximate solutions fail to satisfy the necessary preconditions for using nearest neighbors for classification and related tasks.<|reference_end|>
arxiv
@article{breuel2007a, title={A Note on Approximate Nearest Neighbor Methods}, author={Thomas M. Breuel}, journal={arXiv preprint arXiv:cs/0703101}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703101}, primaryClass={cs.IR cs.CC cs.CV} }
breuel2007a
arxiv-675830
cs/0703102
Generation of Efficient Codes for Realizing Boolean Functions in Nanotechnologies
<|reference_start|>Generation of Efficient Codes for Realizing Boolean Functions in Nanotechnologies: We address the challenge of implementing reliable computation of Boolean functions in future nanocircuit fabrics. Such fabrics are projected to have very high defect rates. We overcome this limitation by using a combination of cheap but unreliable nanodevices and reliable but expensive CMOS devices. In our approach, defect tolerance is achieved through a novel coding of Boolean functions; specifically, we exploit the don't cares of Boolean functions encountered in multi-level Boolean logic networks for constructing better codes. We show that compared to direct application of existing coding techniques, the coding overhead in terms of extra bits can be reduced, on average by 23%, and savings can go up to 34%. We demonstrate that by incorporating efficient coding techniques more than a 40% average yield improvement is possible in case of 1% and 0.1% defect rates. With 0.1% defect density, the savings can be up to 90%.<|reference_end|>
arxiv
@article{singh2007generation, title={Generation of Efficient Codes for Realizing Boolean Functions in Nanotechnologies}, author={Ashish Kumar Singh and Adnan Aziz and Sriram Vishwanath and Michael Orshansky}, journal={arXiv preprint arXiv:cs/0703102}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703102}, primaryClass={cs.IT cs.DM math.IT} }
singh2007generation
arxiv-675831
cs/0703103
Concept of a Value in Multilevel Security Databases
<|reference_start|>Concept of a Value in Multilevel Security Databases: This paper has been withdrawn.<|reference_end|>
arxiv
@article{tao2007concept, title={Concept of a Value in Multilevel Security Databases}, author={Jia Tao and Shashi Gadia and Tsz Shing Cheng}, journal={arXiv preprint arXiv:cs/0703103}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703103}, primaryClass={cs.DB} }
tao2007concept
arxiv-675832
cs/0703104
Encoding via Gr\"obner bases and discrete Fourier transforms for several types of algebraic codes
<|reference_start|>Encoding via Gr\"obner bases and discrete Fourier transforms for several types of algebraic codes: We propose a novel encoding scheme for algebraic codes such as codes on algebraic curves, multidimensional cyclic codes, and hyperbolic cascaded Reed-Solomon codes and present numerical examples. We employ the recurrence from the Gr\"obner basis of the locator ideal for a set of rational points and the two-dimensional inverse discrete Fourier transform. We generalize the functioning of the generator polynomial for Reed-Solomon codes and develop systematic encoding for various algebraic codes.<|reference_end|>
arxiv
@article{matsui2007encoding, title={Encoding via Gr\"obner bases and discrete Fourier transforms for several types of algebraic codes}, author={Hajime Matsui and Seiichi Mita}, journal={arXiv preprint arXiv:cs/0703104}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703104}, primaryClass={cs.IT math.IT} }
matsui2007encoding
arxiv-675833
cs/0703105
New List Decoding Algorithms for Reed-Solomon and BCH Codes
<|reference_start|>New List Decoding Algorithms for Reed-Solomon and BCH Codes: In this paper we devise a rational curve fitting algorithm and apply it to the list decoding of Reed-Solomon and BCH codes. The proposed list decoding algorithms exhibit the following significant properties. 1. The algorithm corrects up to $n(1-\sqrt{1-D})$ errors for a (generalized) $(n, k, d=n-k+1)$ Reed-Solomon code, which matches the Johnson bound, where $D\eqdef \frac{d}{n}$ denotes the normalized minimum distance. In comparison with the Guruswami-Sudan algorithm, which exhibits the same list correction capability, the former requires multiplicity, which dictates the algorithmic complexity, $O(n(1-\sqrt{1-D}))$, whereas the latter requires multiplicity $O(n^2(1-D))$. With the up-to-date most efficient implementation, the former has complexity $O(n^{6}(1-\sqrt{1-D})^{7/2})$, whereas the latter has complexity $O(n^{10}(1-D)^4)$. 2. With the multiplicity set to one, the derivative list correction capability precisely sits in between the conventional hard-decision decoding and the optimal list decoding. Moreover, the number of candidate codewords is upper bounded by a constant for a fixed code rate and thus, the derivative algorithm exhibits quadratic complexity $O(n^2)$. 3. By utilizing the unique properties of the Berlekamp algorithm, the algorithm corrects up to $\frac{n}{2}(1-\sqrt{1-2D})$ errors for a narrow-sense $(n, k, d)$ binary BCH code, which matches the Johnson bound for binary codes. The algorithmic complexity is $O(n^{6}(1-\sqrt{1-2D})^7)$.<|reference_end|>
arxiv
@article{wu2007new, title={New List Decoding Algorithms for Reed-Solomon and BCH Codes}, author={Yingquan Wu}, journal={IEEE Trans. Inform. Theory, Aug. 2008}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703105}, primaryClass={cs.IT cs.CC math.IT} }
wu2007new
arxiv-675834
cs/0703106
Practical Identity-Based Encryption (IBE) in Multiple PKG Environments and Its Applications
<|reference_start|>Practical Identity-Based Encryption (IBE) in Multiple PKG Environments and Its Applications: In this paper, we present a new identity-based encryption (IBE) scheme using bilinear pairings. Our IBE scheme enjoys the same \textsf{Key Extraction} and \textsf{Decryption} algorithms with the famous IBE scheme of Boneh and Franklin (BF-IBE for short), while differs from the latter in that it has modified \textsf{Setup} and \textsf{Encryption} algorithms. Compared with BF-IBE, we show that ours are more practical in a multiple private key generator (PKG) environment, mainly due to that the session secret $g_{ID}$ could be pre-computed \emph{before} any interaction, and the sender could encrypt a message using $g_{ID}$ prior to negotiating with the intended recipient(s). As an application of our IBE scheme, we also derive an escrowed ElGamal scheme which possesses certain good properties in practice.<|reference_end|>
arxiv
@article{wang2007practical, title={Practical Identity-Based Encryption (IBE) in Multiple PKG Environments and Its Applications}, author={Shengbao Wang}, journal={arXiv preprint arXiv:cs/0703106}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703106}, primaryClass={cs.CR} }
wang2007practical
arxiv-675835
cs/0703107
Clustering and Sharing Incentives in BitTorrent Systems
<|reference_start|>Clustering and Sharing Incentives in BitTorrent Systems: Peer-to-peer protocols play an increasingly instrumental role in Internet content distribution. It is therefore important to gain a complete understanding of how these protocols behave in practice and how their operating parameters affect overall system performance. This paper presents the first detailed experimental investigation of the peer selection strategy in the popular BitTorrent protocol. By observing more than 40 nodes in instrumented private torrents, we validate three protocol properties that, though believed to hold, have not been previously demonstrated experimentally: the clustering of similar-bandwidth peers, the effectiveness of BitTorrent's sharing incentives, and the peers' high uplink utilization. In addition, we observe that BitTorrent's modified choking algorithm in seed state provides uniform service to all peers, and that an underprovisioned initial seed leads to absence of peer clustering and less effective sharing incentives. Based on our results, we provide guidelines for seed provisioning by content providers, and discuss a tracker protocol extension that addresses an identified limitation of the protocol.<|reference_end|>
arxiv
@article{legout2007clustering, title={Clustering and Sharing Incentives in BitTorrent Systems}, author={Arnaud Legout (INRIA Sophia Antipolis / INRIA Rh\^one-Alpes) and Nikitas Liogkas (UCLA) and Eddie Kohler (UCLA) and Lixia Zhang (UCLA)}, journal={Dans ACM SIGMETRICS'2007 (2007)}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703107}, primaryClass={cs.NI} }
legout2007clustering
arxiv-675836
cs/0703108
Wireless Lan to Support Multimedia Communication Using Spread Spectrum Technology
<|reference_start|>Wireless Lan to Support Multimedia Communication Using Spread Spectrum Technology: Wireless LAN is currently enjoying rapid deployment in university departments, business offices, hospitals and homes. It has become an inexpensive technology that allows multiple members of a household to simultaneously access the internet while roaming about the house. In the present work, the design and development of a wireless LAN is highlighted which utilizes direct sequence spread spectrum (DSSS) technology at a 900 MHz RF carrier frequency in its physical layer. This provides strong security in the physical layer and hence it is very difficult to hack or jam the network. The installation cost is also low due to the use of the 900 MHz RF carrier frequency.<|reference_end|>
arxiv
@article{dhar2007wireless, title={Wireless Lan to Support Multimedia Communication Using Spread Spectrum Technology}, author={Sourav Dhar and Rabindranath Bera and K. Mal}, journal={arXiv preprint arXiv:cs/0703108}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703108}, primaryClass={cs.NI} }
dhar2007wireless
arxiv-675837
cs/0703109
Tag-Cloud Drawing: Algorithms for Cloud Visualization
<|reference_start|>Tag-Cloud Drawing: Algorithms for Cloud Visualization: Tag clouds provide an aggregate of tag-usage statistics. They are typically sent as in-line HTML to browsers. However, display mechanisms suited for ordinary text are not ideal for tags, because font sizes may vary widely on a line. As well, the typical layout does not account for relationships that may be known between tags. This paper presents models and algorithms to improve the display of tag clouds that consist of in-line HTML, as well as algorithms that use nested tables to achieve a more general 2-dimensional layout in which tag relationships are considered. The first algorithms leverage prior work in typesetting and rectangle packing, whereas the second group of algorithms leverage prior work in Electronic Design Automation. Experiments show our algorithms can be efficiently implemented and perform well.<|reference_end|>
arxiv
@article{kaser2007tag-cloud, title={Tag-Cloud Drawing: Algorithms for Cloud Visualization}, author={Owen Kaser and Daniel Lemire}, journal={arXiv preprint arXiv:cs/0703109}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703109}, primaryClass={cs.DS} }
kaser2007tag-cloud
arxiv-675838
cs/0703110
Geometric Complexity Theory IV: nonstandard quantum group for the Kronecker problem
<|reference_start|>Geometric Complexity Theory IV: nonstandard quantum group for the Kronecker problem: The Kronecker coefficient g_{\lambda \mu \nu} is the multiplicity of the GL(V)\times GL(W)-irreducible V_\lambda \otimes W_\mu in the restriction of the GL(X)-irreducible X_\nu via the natural map GL(V)\times GL(W) \to GL(V \otimes W), where V, W are \mathbb{C}-vector spaces and X = V \otimes W. A fundamental open problem in algebraic combinatorics is to find a positive combinatorial formula for these coefficients. We construct two quantum objects for this problem, which we call the nonstandard quantum group and nonstandard Hecke algebra. We show that the nonstandard quantum group has a compact real form and its representations are completely reducible, that the nonstandard Hecke algebra is semisimple, and that they satisfy an analog of quantum Schur-Weyl duality. Using these nonstandard objects as a guide, we follow the approach of Adsul, Sohoni, and Subrahmanyam to construct, in the case dim(V) = dim(W) =2, a representation \check{X}_\nu of the nonstandard quantum group that specializes to Res_{GL(V) \times GL(W)} X_\nu at q=1. We then define a global crystal basis +HNSTC(\nu) of \check{X}_\nu that solves the two-row Kronecker problem: the number of highest weight elements of +HNSTC(\nu) of weight (\lambda,\mu) is the Kronecker coefficient g_{\lambda \mu \nu}. We go on to develop the beginnings of a graphical calculus for this basis, along the lines of the U_q(\sl_2) graphical calculus, and use this to organize the crystal components of +HNSTC(\nu) into eight families. This yields a fairly simple, explicit and positive formula for two-row Kronecker coefficients, generalizing a formula of Brown, van Willigenburg, and Zabrocki. As a byproduct of the approach, we also obtain a rule for the decomposition of Res_{GL_2 \times GL_2 \rtimes \S_2} X_\nu into irreducibles.<|reference_end|>
arxiv
@article{blasiak2007geometric, title={Geometric Complexity Theory IV: nonstandard quantum group for the Kronecker problem}, author={Jonah Blasiak and Ketan D. Mulmuley and Milind Sohoni}, journal={arXiv preprint arXiv:cs/0703110}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703110}, primaryClass={cs.CC} }
blasiak2007geometric
arxiv-675839
cs/0703111
Maximum Weighted Sum Rate of Multi-Antenna Broadcast Channels
<|reference_start|>Maximum Weighted Sum Rate of Multi-Antenna Broadcast Channels: Recently, researchers showed that dirty paper coding (DPC) is the optimal transmission strategy for multiple-input multiple-output broadcast channels (MIMO-BC). In this paper, we study how to determine the maximum weighted sum of DPC rates through solving the maximum weighted sum rate problem of the dual MIMO multiple access channel (MIMO-MAC) with a sum power constraint. We first simplify the maximum weighted sum rate problem such that enumerating all possible decoding orders in the dual MIMO-MAC is unnecessary. We then design an efficient algorithm based on conjugate gradient projection (CGP) to solve the maximum weighted sum rate problem. Our proposed CGP method utilizes the powerful concept of Hessian conjugacy. We also develop a rigorous algorithm to solve the projection problem. We show that CGP enjoys provable convergence, nice scalability, and great efficiency for large MIMO-BC systems.<|reference_end|>
arxiv
@article{liu2007maximum, title={Maximum Weighted Sum Rate of Multi-Antenna Broadcast Channels}, author={Jia Liu and Y. Thomas Hou}, journal={arXiv preprint arXiv:cs/0703111}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703111}, primaryClass={cs.IT math.IT} }
liu2007maximum
arxiv-675840
cs/0703112
User-level DSM System for Modern High-Performance Interconnection Networks
<|reference_start|>User-level DSM System for Modern High-Performance Interconnection Networks: In this paper, we introduce a new user-level DSM system which has the ability to directly interact with underlying interconnection networks. The DSM system provides the application programmer a flexible API to program parallel applications either using shared memory semantics over physically distributed memory or to use an efficient remote memory demand paging technique. We also introduce a new time slice based memory consistency protocol which is used by the DSM system. We present preliminary results from our implementation on a small Opteron Linux cluster interconnected over Myrinet.<|reference_end|>
arxiv
@article{ramesh2007user-level, title={User-level DSM System for Modern High-Performance Interconnection Networks}, author={Bharath Ramesh and Srinidhi Varadarajan}, journal={arXiv preprint arXiv:cs/0703112}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703112}, primaryClass={cs.DC} }
ramesh2007user-level
arxiv-675841
cs/0703113
Automatic Selection of Bitmap Join Indexes in Data Warehouses
<|reference_start|>Automatic Selection of Bitmap Join Indexes in Data Warehouses: The queries defined on data warehouses are complex and use several join operations that induce an expensive computational cost. This cost becomes even more prohibitive when queries access very large volumes of data. To improve response time, data warehouse administrators generally use indexing techniques such as star join indexes or bitmap join indexes. This task is nevertheless complex and fastidious. Our solution lies in the field of data warehouse auto-administration. In this framework, we propose an automatic index selection strategy. We exploit a data mining technique, more precisely frequent itemset mining, in order to determine a set of candidate indexes from a given workload. Then, we propose several cost models allowing us to create an index configuration composed of the indexes providing the best profit. These models evaluate the cost of accessing data using bitmap join indexes, and the cost of updating and storing these indexes.<|reference_end|>
arxiv
@article{aouiche2007automatic, title={Automatic Selection of Bitmap Join Indexes in Data Warehouses}, author={Kamel Aouiche and Jerome Darmont and Omar Boussaid and Fadila Bentayeb}, journal={arXiv preprint arXiv:cs/0703113}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703113}, primaryClass={cs.DB} }
aouiche2007automatic
arxiv-675842
cs/0703114
Clustering-Based Materialized View Selection in Data Warehouses
<|reference_start|>Clustering-Based Materialized View Selection in Data Warehouses: Materialized view selection is a non-trivial task. Hence, its complexity must be reduced. A judicious choice of views must be cost-driven and influenced by the workload experienced by the system. In this paper, we propose a framework for materialized view selection that exploits a data mining technique (clustering), in order to determine clusters of similar queries. We also propose a view merging algorithm that builds a set of candidate views, as well as a greedy process for selecting a set of views to materialize. This selection is based on cost models that evaluate the cost of accessing data using views and the cost of storing these views. To validate our strategy, we executed a workload of decision-support queries on a test data warehouse, with and without using our strategy. Our experimental results demonstrate its efficiency, even when storage space is limited.<|reference_end|>
arxiv
@article{aouiche2007clustering-based, title={Clustering-Based Materialized View Selection in Data Warehouses}, author={Kamel Aouiche and Pierre-Emmanuel Jouve and Jerome Darmont}, journal={arXiv preprint arXiv:cs/0703114}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703114}, primaryClass={cs.DB} }
aouiche2007clustering-based
arxiv-675843
cs/0703115
We cite as we communicate: A communication model for the citation process
<|reference_start|>We cite as we communicate: A communication model for the citation process: Building on ideas from linguistics, psychology, and social sciences about the possible mechanisms of human decision-making, we propose a novel theoretical framework for the citation analysis. Given the existing trend to investigate citation statistics in the context of various forms of power and Zipfian laws, we show that the popular models of citation have poor predictive ability and can hardly provide for an adequate explanation of the observed behavior of the empirical data. An alternative model is then derived, using the apparatus of statistical mechanics. The model is applied to approximate the citation frequencies of scientific articles from two large collections, and it demonstrates a predictive potential much superior to the one of any of the citation models known to the authors from the literature. Some analytical properties of the developed model are discussed, and conclusions are drawn. Directions for future work are also given at the paper's end.<|reference_end|>
arxiv
@article{kryssanov2007we, title={We cite as we communicate: A communication model for the citation process}, author={Victor V. Kryssanov and Evgeny L. Kuleshov and Frank J. Rinaldo and Hitoshi Ogawa}, journal={arXiv preprint arXiv:cs/0703115}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703115}, primaryClass={cs.DL cs.CY physics.data-an} }
kryssanov2007we
arxiv-675844
cs/0703116
On the Design of Generic Static Analyzers for Modern Imperative Languages
<|reference_start|>On the Design of Generic Static Analyzers for Modern Imperative Languages: The design and implementation of precise static analyzers for significant fragments of modern imperative languages like C, C++, Java and Python is a challenging problem. In this paper, we consider a core imperative language that has several features found in mainstream languages, including recursive functions, run-time system and user-defined exceptions, and a realistic data and memory model. For this language we provide a concrete semantics --characterizing both finite and infinite computations-- and a generic abstract semantics that we prove sound with respect to the concrete one. We say the abstract semantics is generic since it is designed to be completely parametric on the analysis domains: in particular, it provides support for \emph{relational} domains (i.e., abstract domains that can capture the relationships between different data objects). We also sketch how the proposed methodology can be extended to accommodate a larger language that includes pointers, compound data objects and non-structured control flow mechanisms. The approach, which is based on structured, big-step $\mathrm{G}^\infty\mathrm{SOS}$ operational semantics and on abstract interpretation, is modular in that the overall static analyzer is naturally partitioned into components with clearly identified responsibilities and interfaces, something that greatly simplifies both the proof of correctness and the implementation.<|reference_end|>
arxiv
@article{bagnara2007on, title={On the Design of Generic Static Analyzers for Modern Imperative Languages}, author={Roberto Bagnara and Patricia M. Hill and Andrea Pescetti and Enea Zaffanella}, journal={arXiv preprint arXiv:cs/0703116}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703116}, primaryClass={cs.PL cs.LO} }
bagnara2007on
arxiv-675845
cs/0703117
Self-adaptive Gossip Policies for Distributed Population-based Algorithms
<|reference_start|>Self-adaptive Gossip Policies for Distributed Population-based Algorithms: Gossiping has been demonstrated to be an efficient mechanism for spreading information in P2P networks. Within the context of P2P computing, we propose the so-called Evolvable Agent Model for distributed population-based algorithms, which uses gossiping as its communication policy and represents every individual as a self-scheduled single thread. The model avoids obsolete nodes in the population by defining a self-adaptive refresh rate which depends on the latency and bandwidth of the network. Such a mechanism balances the migration rate to the congestion of the links, pursuing global population coherence. We perform an experimental evaluation of this model on a real parallel system and observe how solution quality and algorithm speed scale with the number of processors with this seamless approach.<|reference_end|>
arxiv
@article{laredo2007self-adaptive, title={Self-adaptive Gossip Policies for Distributed Population-based Algorithms}, author={J.L.J. Laredo and E.A. Eiben and M. Schoenauer and P.A. Castillo and A.M. Mora and F. Fernandez and J.J. Merelo}, journal={arXiv preprint arXiv:cs/0703117}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703117}, primaryClass={cs.DC} }
laredo2007self-adaptive
arxiv-675846
cs/0703118
Mathematical model of interest matchmaking in electronic social networks
<|reference_start|>Mathematical model of interest matchmaking in electronic social networks: The problem of matchmaking in electronic social networks is formulated as an optimization problem. In particular, a function measuring the matching degree of fields of interest of a search profile with those of an advertising profile is proposed.<|reference_end|>
arxiv
@article{devries2007mathematical, title={Mathematical model of interest matchmaking in electronic social networks}, author={Andreas de Vries}, journal={Applied Mathematics 5 (16) 2014, 2619-2629}, year={2007}, doi={10.4236/am.2014.516250}, archivePrefix={arXiv}, eprint={cs/0703118}, primaryClass={cs.CY cs.AI} }
devries2007mathematical
arxiv-675847
cs/0703119
Support-Graph Preconditioners for 2-Dimensional Trusses
<|reference_start|>Support-Graph Preconditioners for 2-Dimensional Trusses: We use support theory, in particular the fretsaw extensions of Shklarski and Toledo, to design preconditioners for the stiffness matrices of 2-dimensional truss structures that are stiffly connected. Provided that all the lengths of the trusses are within constant factors of each other, that the angles at the corners of the triangles are bounded away from 0 and $\pi$, and that the elastic moduli and cross-sectional areas of all the truss elements are within constant factors of each other, our preconditioners allow us to solve linear equations in the stiffness matrices to accuracy $\epsilon$ in time $O (n^{5/4} (\log^{2}n \log \log n)^{3/4} \log (1/\epsilon))$.<|reference_end|>
arxiv
@article{daitch2007support-graph, title={Support-Graph Preconditioners for 2-Dimensional Trusses}, author={Samuel I. Daitch and Daniel A. Spielman}, journal={arXiv preprint arXiv:cs/0703119}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703119}, primaryClass={cs.NA} }
daitch2007support-graph
arxiv-675848
cs/0703120
Sequential decoding for lossless streaming source coding with side information
<|reference_start|>Sequential decoding for lossless streaming source coding with side information: The problem of lossless fixed-rate streaming coding of discrete memoryless sources with side information at the decoder is studied. A random time-varying tree-code is used to sequentially bin strings and a Stack Algorithm with a variable bias uses the side information to give a delay-universal coding system for lossless source coding with side information. The scheme is shown to give exponentially decaying probability of error with delay, with exponent equal to Gallager's random coding exponent for sources with side information. The mean of the random variable of computation for the stack decoder is bounded, and conditions on the bias are given to guarantee a finite $\rho^{th}$ moment for $0 \leq \rho \leq 1$. Further, the problem is also studied in the case where there is a discrete memoryless channel between encoder and decoder. The same scheme is slightly modified to give a joint-source channel encoder and Stack Algorithm-based sequential decoder using side information. Again, by a suitable choice of bias, the probability of error decays exponentially with delay and the random variable of computation has a finite mean. Simulation results for several examples are given.<|reference_end|>
arxiv
@article{palaiyanur2007sequential, title={Sequential decoding for lossless streaming source coding with side information}, author={Hari Palaiyanur and Anant Sahai}, journal={arXiv preprint arXiv:cs/0703120}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703120}, primaryClass={cs.IT math.IT} }
palaiyanur2007sequential
arxiv-675849
cs/0703121
Differential Equations for Algebraic Functions
<|reference_start|>Differential Equations for Algebraic Functions: It is classical that univariate algebraic functions satisfy linear differential equations with polynomial coefficients. Linear recurrences follow for the coefficients of their power series expansions. We show that the linear differential equation of minimal order has coefficients whose degree is cubic in the degree of the function. We also show that there exists a linear differential equation of order linear in the degree whose coefficients are only of quadratic degree. Furthermore, we prove the existence of recurrences of order and degree close to optimal. We study the complexity of computing these differential equations and recurrences. We deduce a fast algorithm for the expansion of algebraic series.<|reference_end|>
arxiv
@article{bostan2007differential, title={Differential Equations for Algebraic Functions}, author={Alin Bostan (INRIA Rocquencourt), Fr\'ed\'eric Chyzak (INRIA Rocquencourt), Bruno Salvy (INRIA Rocquencourt), Gr\'egoire Lecerf (LM-Versailles), \'Eric Schost}, journal={ISSAC'07, pages 25--32, ACM Press, 2007.}, year={2007}, doi={10.1145/1277548.1277553}, archivePrefix={arXiv}, eprint={cs/0703121}, primaryClass={cs.SC math.CA} }
bostan2007differential
arxiv-675850
cs/0703122
Rapid Almost-Complete Broadcasting in Faulty Networks
<|reference_start|>Rapid Almost-Complete Broadcasting in Faulty Networks: This paper studies the problem of broadcasting in synchronous point-to-point networks, where one initiator owns a piece of information that has to be transmitted to all other vertices as fast as possible. The model of fractional dynamic faults with threshold is considered: in every step either a fixed number $T$, or a fraction $\alpha$, of sent messages can be lost depending on which quantity is larger. As the main result we show that in complete graphs and hypercubes it is possible to inform all but a constant number of vertices, exhibiting only a logarithmic slowdown, i.e. in time $O(D\log n)$ where $D$ is the diameter of the network and $n$ is the number of vertices. Moreover, for complete graphs under some additional conditions (sense of direction, or $\alpha<0.55$) the remaining constant number of vertices can be informed in the same time, i.e. $O(\log n)$.<|reference_end|>
arxiv
@article{královič2007rapid, title={Rapid Almost-Complete Broadcasting in Faulty Networks}, author={Rastislav Kr\'alovi\v{c}, Richard Kr\'alovi\v{c}}, journal={arXiv preprint arXiv:cs/0703122}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703122}, primaryClass={cs.DC} }
královič2007rapid
arxiv-675851
cs/0703123
Adaptive Methods for Linear Programming Decoding
<|reference_start|>Adaptive Methods for Linear Programming Decoding: Detectability of failures of linear programming (LP) decoding and the potential for improvement by adding new constraints motivate the use of an adaptive approach in selecting the constraints for the underlying LP problem. In this paper, we make a first step in studying this method, and show that it can significantly reduce the complexity of the problem, which was originally exponential in the maximum check-node degree. We further show that adaptively adding new constraints, e.g. by combining parity checks, can provide large gains in the performance.<|reference_end|>
arxiv
@article{taghavi2007adaptive, title={Adaptive Methods for Linear Programming Decoding}, author={Mohammad H. Taghavi and Paul H. Siegel}, journal={arXiv preprint arXiv:cs/0703123}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703123}, primaryClass={cs.IT math.IT} }
taghavi2007adaptive
arxiv-675852
cs/0703124
Modelling Complexity in Musical Rhythm
<|reference_start|>Modelling Complexity in Musical Rhythm: This paper constructs a tree structure for musical rhythm using the L-system. It models the structure as an automaton and derives its complexity. It also solves the complexity for the L-system. This complexity can resolve the similarity between trees and serves as a measure of psychological complexity for rhythms. It resolves the musical complexity of various compositions, including the Mozart effect K488. Keywords: music perception, psychological complexity, rhythm, L-system, automata, temporal associative memory, inverse problem, rewriting rule, bracketed string, tree similarity<|reference_end|>
arxiv
@article{liou2007modelling, title={Modelling Complexity in Musical Rhythm}, author={Cheng-Yuan Liou, Tai-Hei Wu, Chia-Ying Lee}, journal={Complexity 15(4) (2010) 19-30 final form at http://www3.interscience.wiley.com/cgi-bin/fulltext/123191810/PDFSTART}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703124}, primaryClass={cs.AI} }
liou2007modelling
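The bracketed-string rewriting that the abstract above builds its rhythm trees from can be sketched in a few lines; the rule set and symbols below are hypothetical stand-ins for illustration, not the ones used in the paper.

```python
def expand(axiom, rules, depth):
    """Apply L-system rewriting rules `depth` times to `axiom`.
    Bracketed strings encode the subdivision tree of a rhythm:
    each matched [ ] pair is one level of the tree."""
    s = axiom
    for _ in range(depth):
        # Rewrite every symbol in parallel; symbols without a rule
        # (including the brackets) are copied unchanged.
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Hypothetical rule: a beat A splits into a pair whose first
# member subdivides again at the next level.
rules = {"A": "[AB]"}
print(expand("A", rules, 3))  # -> [[[AB]B]B]
```

Comparing such strings (or the trees they encode) is what the paper's tree-similarity measure operates on.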
arxiv-675853
cs/0703125
Intrinsic dimension of a dataset: what properties does one expect?
<|reference_start|>Intrinsic dimension of a dataset: what properties does one expect?: We propose an axiomatic approach to the concept of an intrinsic dimension of a dataset, based on a viewpoint of geometry of high-dimensional structures. Our first axiom postulates that high values of dimension be indicative of the presence of the curse of dimensionality (in a certain precise mathematical sense). The second axiom requires the dimension to depend smoothly on a distance between datasets (so that the dimension of a dataset and that of an approximating principal manifold would be close to each other). The third axiom is a normalization condition: the dimension of the Euclidean $n$-sphere $\mathbb{S}^n$ is $\Theta(n)$. We give an example of a dimension function satisfying our axioms, even though it is in general computationally unfeasible, and discuss a computationally cheap function satisfying most but not all of our axioms (the ``intrinsic dimensionality'' of Ch\'avez et al.)<|reference_end|>
arxiv
@article{pestov2007intrinsic, title={Intrinsic dimension of a dataset: what properties does one expect?}, author={Vladimir Pestov}, journal={Proceedings of the 20th International Joint Conference on Neural Networks (IJCNN'2007), Orlando, Florida (Aug. 12--17, 2007), pp. 1775--1780.}, year={2007}, doi={10.1109/IJCNN.2007.4371431}, archivePrefix={arXiv}, eprint={cs/0703125}, primaryClass={cs.LG} }
pestov2007intrinsic
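The computationally cheap "intrinsic dimensionality" of Chávez et al. mentioned in the abstract above is the ratio mu^2/(2 sigma^2) of the pairwise-distance distribution of the dataset; a minimal sketch, where the sample sizes and the unit-cube test are illustrative choices:

```python
import itertools
import math
import random

def intrinsic_dimensionality(points):
    """Chavez et al.'s estimator: rho = mu^2 / (2 * sigma^2), with
    mu and sigma^2 the mean and variance of the pairwise-distance
    distribution of the dataset."""
    dists = [math.dist(p, q) for p, q in itertools.combinations(points, 2)]
    mu = sum(dists) / len(dists)
    var = sum((d - mu) ** 2 for d in dists) / len(dists)
    return mu * mu / (2.0 * var)

def cube_sample(d, n=200, rng=random.Random(0)):
    """n uniform samples from the d-dimensional unit cube."""
    return [[rng.random() for _ in range(d)] for _ in range(n)]

# The estimate grows roughly linearly with the embedding dimension,
# reflecting the concentration of distances that the curse of
# dimensionality produces.
print(intrinsic_dimensionality(cube_sample(2)))
print(intrinsic_dimensionality(cube_sample(20)))
```

The all-pairs computation is quadratic in the number of points; in practice the distance distribution is usually estimated from a random subsample.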
arxiv-675854
cs/0703126
Evolutionary Socioeconomics: Notes on the Computer Simulation according to the de Finetti - Simon Principia
<|reference_start|>Evolutionary Socioeconomics: Notes on the Computer Simulation according to the de Finetti - Simon Principia: The present note includes explanatory comments about the synergic interaction within the sphere of the socioeconomic analysis between the two following theoretical frameworks. (I). The Darwinian evolutionary model. (II). The computer simulation in accordance with the principia established by Bruno de Finetti and Herbert Simon.<|reference_end|>
arxiv
@article{tucci2007evolutionary, title={Evolutionary Socioeconomics: Notes on the Computer Simulation according to the de Finetti - Simon Principia}, author={Michele Tucci}, journal={arXiv preprint arXiv:cs/0703126}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703126}, primaryClass={cs.CY} }
tucci2007evolutionary
arxiv-675855
cs/0703127
Isochronous Data Transmission With Rates Close to Channel Capacity
<|reference_start|>Isochronous Data Transmission With Rates Close to Channel Capacity: The existing ARQ schemes (including hybrid ARQ) have a throughput that depends on the packet error probability. In this paper we describe a strategy for delay-tolerant applications which provides a constant throughput as long as the algorithm robustness criterion is not violated. The algorithm robustness criterion is applied to find the optimum size of the retransmission block, under the assumption of small changes of coding rate within the rate-compatible code family.<|reference_end|>
arxiv
@article{zhdanov2007isochronous, title={Isochronous Data Transmission With Rates Close to Channel Capacity}, author={Alexander Zhdanov}, journal={arXiv preprint arXiv:cs/0703127}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703127}, primaryClass={cs.IT math.IT} }
zhdanov2007isochronous
arxiv-675856
cs/0703128
Physarum machine: Implementation of Kolmogorov-Uspensky machine in biological substrat
<|reference_start|>Physarum machine: Implementation of Kolmogorov-Uspensky machine in biological substrat: We implement Kolmogorov-Uspensky machine on a plasmodium of true slime mold {\em Physarum polycephalum}. We provide experimental findings on realization of the machine instructions, illustrate basic operations, and elements of programming.<|reference_end|>
arxiv
@article{adamatzky2007physarum, title={Physarum machine: Implementation of Kolmogorov-Uspensky machine in biological substrat}, author={Andrew Adamatzky}, journal={Parallel Processing Letters Vol. 17, No. 4 (December 2007) pp. 455-467}, year={2007}, doi={10.1142/S0129626407003150}, archivePrefix={arXiv}, eprint={cs/0703128}, primaryClass={cs.AR cs.CC} }
adamatzky2007physarum
arxiv-675857
cs/0703129
A theorem on the quantum evaluation of Weight Enumerators for a certain class of Cyclic Codes with a note on Cyclotomic cosets
<|reference_start|>A theorem on the quantum evaluation of Weight Enumerators for a certain class of Cyclic Codes with a note on Cyclotomic cosets: This note is a stripped down version of a published paper on the Potts partition function, where we concentrate solely on the linear coding aspect of our approach. It is meant as a resource for people interested in coding theory but who do not know much of the mathematics involved and how quantum computation may provide a speed up in the computation of a very important quantity in coding theory. We provide a theorem on the quantum computation of the Weight Enumerator polynomial for a restricted family of cyclic codes. The complexity of obtaining an exact evaluation is $O(k^{2s}(\log q)^{2})$, where $s$ is a parameter which determines the class of cyclic codes in question, $q$ is the characteristic of the finite field over which the code is defined, and $k$ is the dimension of the code. We also provide an overview of cyclotomic cosets and discuss applications including how they can be used to speed up the computation of the weight enumerator polynomial (which is related to the Potts partition function). We also give an algorithm which returns the coset leaders and the size of each coset from the list $\{0,1,2,...,N-1\}$, whose time complexity is soft-O(N). This algorithm uses standard techniques but we include it as a resource for students. Note that cyclotomic cosets do not improve the asymptotic complexity of the computation of weight enumerators.<|reference_end|>
arxiv
@article{geraci2007a, title={A theorem on the quantum evaluation of Weight Enumerators for a certain class of Cyclic Codes with a note on Cyclotomic cosets}, author={Joseph Geraci, Frank Van Bussel}, journal={arXiv preprint arXiv:cs/0703129}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703129}, primaryClass={cs.IT math.IT quant-ph} }
geraci2007a
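The abstract above mentions an algorithm that returns the coset leaders and the size of each cyclotomic coset of {0, 1, ..., N-1}; a straightforward sketch using standard techniques (not necessarily the paper's soft-O(N) algorithm) for base-q cyclotomic cosets mod n:

```python
def cyclotomic_cosets(n, q=2):
    """Partition {0, 1, ..., n-1} into q-cyclotomic cosets mod n,
    i.e. orbits {s, s*q, s*q^2, ...} (mod n). Returns a dict mapping
    each coset leader (smallest element) to its coset size.
    Assumes gcd(q, n) == 1."""
    seen = [False] * n
    cosets = {}
    for s in range(n):
        if seen[s]:
            continue
        # Walk the orbit of s under multiplication by q mod n.
        size, x = 0, s
        while not seen[x]:
            seen[x] = True
            size += 1
            x = (x * q) % n
        cosets[s] = size
    return cosets

# 2-cyclotomic cosets mod 15: {0}, {1,2,4,8}, {3,6,9,12}, {5,10}, {7,11,13,14}
print(cyclotomic_cosets(15))  # -> {0: 1, 1: 4, 3: 4, 5: 2, 7: 4}
```

Each element of {0, ..., n-1} is visited exactly once, so the orbit walks take O(n) steps in total.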
arxiv-675858
cs/0703130
Space-contained conflict revision, for geographic information
<|reference_start|>Space-contained conflict revision, for geographic information: Using qualitative reasoning with geographic information, contrary to, for instance, robotics, not only looks fastidious (i.e., encoding knowledge in Propositional Logic, PL), but appears computationally complex and, most of the time, not tractable at all. However, knowledge fusion or revision is a common operation that users perform when they merge several different data sets in a unique decision-making process, without much support. Introducing logics would be a great improvement, and we propose in this paper means for deciding - a priori - whether an application can benefit from a complete revision, under the sole assumption of a conjecture that we name the "containment conjecture", which limits the size of the minimal conflicts to revise. We demonstrate that this conjecture brings us the interesting computational property of performing a global, though not provable, revision, made of many local revisions, at a tractable size. We illustrate this approach on an application.<|reference_end|>
arxiv
@article{doukari2007space-contained, title={Space-contained conflict revision, for geographic information}, author={Omar Doukari (LSIS), Robert Jeansoulin (IGM-LabInfo)}, journal={Proc. of 10th AGILE International Conference on Geographic Information Science, AGILE 2007. (07/05/2007) 1-14}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703130}, primaryClass={cs.AI} }
doukari2007space-contained
arxiv-675859
cs/0703131
Open Access Scientometrics and the UK Research Assessment Exercise
<|reference_start|>Open Access Scientometrics and the UK Research Assessment Exercise: Scientometric predictors of research performance need to be validated by showing that they have a high correlation with the external criterion they are trying to predict. The UK Research Assessment Exercise (RAE), together with the growing movement toward making the full-texts of research articles freely available on the web, offers a unique opportunity to test and validate a wealth of old and new scientometric predictors, through multiple regression analysis: Publications, journal impact factors, citations, co-citations, citation chronometrics (age, growth, latency to peak, decay rate), hub/authority scores, h-index, prior funding, student counts, co-authorship scores, endogamy/exogamy, textual proximity, download/co-downloads and their chronometrics, etc. can all be tested and validated jointly, discipline by discipline, against their RAE panel rankings in the forthcoming parallel panel-based and metric RAE in 2008. The weights of each predictor can be calibrated to maximize the joint correlation with the rankings. Open Access Scientometrics will provide powerful new means of navigating, evaluating, predicting and analyzing the growing Open Access database, as well as powerful incentives for making it grow faster.<|reference_end|>
arxiv
@article{harnad2007open, title={Open Access Scientometrics and the UK Research Assessment Exercise}, author={Stevan Harnad}, journal={arXiv preprint arXiv:cs/0703131}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703131}, primaryClass={cs.IR cs.DL} }
harnad2007open
arxiv-675860
cs/0703132
Structure induction by lossless graph compression
<|reference_start|>Structure induction by lossless graph compression: This work is motivated by the necessity to automate the discovery of structure in vast and ever-growing collections of relational data commonly represented as graphs, for example genomic networks. A novel algorithm, dubbed Graphitour, for structure induction by lossless graph compression is presented and illustrated by a clear and broadly known case of nested structure in a DNA molecule. This work extends to graphs some well established approaches to grammatical inference previously applied only to strings. The bottom-up graph compression problem is related to the (non-bipartite) maximum cardinality matching problem. The algorithm accepts a variety of graph types including directed graphs and graphs with labeled nodes and arcs. The resulting structure could be used for representation and classification of graphs.<|reference_end|>
arxiv
@article{peshkin2007structure, title={Structure induction by lossless graph compression}, author={Leonid Peshkin}, journal={In proceedings of the Data Compression Conference, 2007, pp 53-62, published by the IEEE Computer Society Press}, year={2007}, doi={10.1109/DCC.2007.73}, archivePrefix={arXiv}, eprint={cs/0703132}, primaryClass={cs.DS cs.IT cs.LG math.IT} }
peshkin2007structure
arxiv-675861
cs/0703133
Computing Good Nash Equilibria in Graphical Games
<|reference_start|>Computing Good Nash Equilibria in Graphical Games: This paper addresses the problem of fair equilibrium selection in graphical games. Our approach is based on the data structure called the {\em best response policy}, which was proposed by Kearns et al. \cite{kls} as a way to represent all Nash equilibria of a graphical game. In \cite{egg}, it was shown that the best response policy has polynomial size as long as the underlying graph is a path. In this paper, we show that if the underlying graph is a bounded-degree tree and the best response policy has polynomial size then there is an efficient algorithm which constructs a Nash equilibrium that guarantees certain payoffs to all participants. Another attractive solution concept is a Nash equilibrium that maximizes the social welfare. We show that, while exactly computing the latter is infeasible (we prove that solving this problem may involve algebraic numbers of an arbitrarily high degree), there exists an FPTAS for finding such an equilibrium as long as the best response policy has polynomial size. These two algorithms can be combined to produce Nash equilibria that satisfy various fairness criteria.<|reference_end|>
arxiv
@article{elkind2007computing, title={Computing Good Nash Equilibria in Graphical Games}, author={Edith Elkind, Leslie Ann Goldberg, Paul W. Goldberg}, journal={arXiv preprint arXiv:cs/0703133}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703133}, primaryClass={cs.GT cs.DS cs.MA} }
elkind2007computing
arxiv-675862
cs/0703134
Automatic Generation of Benchmarks for Plagiarism Detection Tools using Grammatical Evolution
<|reference_start|>Automatic Generation of Benchmarks for Plagiarism Detection Tools using Grammatical Evolution: This paper has been withdrawn by the authors due to a major rewriting.<|reference_end|>
arxiv
@article{cebrian2007automatic, title={Automatic Generation of Benchmarks for Plagiarism Detection Tools using Grammatical Evolution}, author={Manuel Cebrian, Manuel Alfonseca and Alfonso Ortega}, journal={arXiv preprint arXiv:cs/0703134}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703134}, primaryClass={cs.NE cs.IT math.IT} }
cebrian2007automatic
arxiv-675863
cs/0703135
Dependency Parsing with Dynamic Bayesian Network
<|reference_start|>Dependency Parsing with Dynamic Bayesian Network: Exact parsing with finite state automata is deemed inappropriate because of the unbounded non-locality languages overwhelmingly exhibit. We propose a way to structure the parsing task in order to make it amenable to local classification methods. This allows us to build a Dynamic Bayesian Network which uncovers the syntactic dependency structure of English sentences. Experiments with the Wall Street Journal demonstrate that the model successfully learns from labeled data.<|reference_end|>
arxiv
@article{savova2007dependency, title={Dependency Parsing with Dynamic Bayesian Network}, author={Virginia Savova and Leonid Peshkin}, journal={In proceedings of American Association for Artificial Intelligence AAAI 2005}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703135}, primaryClass={cs.CL cs.AI} }
savova2007dependency
arxiv-675864
cs/0703136
Uncovering Plagiarism Networks
<|reference_start|>Uncovering Plagiarism Networks: Plagiarism detection in educational programming assignments is still a problematic issue in terms of resource waste, ethical controversy, legal risks, and technical complexity. This paper presents AC, a modular plagiarism detection system. The design is portable across platforms and assignment formats and provides easy extraction into the internal assignment representation. Multiple similarity measures have been incorporated, both existing and newly-developed. Statistical analysis and several graphical visualizations aid in the interpretation of analysis results. The system has been evaluated with a survey that encompasses several academic semesters of use at the authors' institution.<|reference_end|>
arxiv
@article{freire2007uncovering, title={Uncovering Plagiarism Networks}, author={Manuel Freire, Manuel Cebrian and Emilio del Rosal}, journal={arXiv preprint arXiv:cs/0703136}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703136}, primaryClass={cs.IT cs.SI math.IT} }
freire2007uncovering
arxiv-675865
cs/0703137
ReSHAPE: A Framework for Dynamic Resizing and Scheduling of Homogeneous Applications in a Parallel Environment
<|reference_start|>ReSHAPE: A Framework for Dynamic Resizing and Scheduling of Homogeneous Applications in a Parallel Environment: Applications in science and engineering often require huge computational resources for solving problems within a reasonable time frame. Parallel supercomputers provide the computational infrastructure for solving such problems. A traditional application scheduler running on a parallel cluster only supports static scheduling where the number of processors allocated to an application remains fixed throughout the lifetime of execution of the job. Due to the unpredictability in job arrival times and varying resource requirements, static scheduling can result in idle system resources thereby decreasing the overall system throughput. In this paper we present a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executed on distributed memory platforms. The framework includes a scheduler that supports resizing of applications, an API to enable applications to interact with the scheduler, and a library that makes resizing viable. Applications executed using the ReSHAPE scheduler framework can expand to take advantage of additional free processors or can shrink to accommodate a high priority application, without getting suspended. In our research, we have mainly focused on structured applications that have two-dimensional data arrays distributed across a two-dimensional processor grid. The resize library includes algorithms for processor selection and processor mapping. Experimental results show that the ReSHAPE framework can improve individual job turn-around time and overall system throughput.<|reference_end|>
arxiv
@article{sudarsan2007reshape:, title={ReSHAPE: A Framework for Dynamic Resizing and Scheduling of Homogeneous Applications in a Parallel Environment}, author={Rajesh Sudarsan and Calvin J. Ribbens}, journal={arXiv preprint arXiv:cs/0703137}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703137}, primaryClass={cs.DC} }
sudarsan2007reshape:
arxiv-675866
cs/0703138
Reinforcement Learning for Adaptive Routing
<|reference_start|>Reinforcement Learning for Adaptive Routing: Reinforcement learning means learning a policy--a mapping of observations into actions--based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. We present an application of gradient ascent algorithm for reinforcement learning to a complex domain of packet routing in network communication and compare the performance of this algorithm to other routing methods on a benchmark problem.<|reference_end|>
arxiv
@article{peshkin2007reinforcement, title={Reinforcement Learning for Adaptive Routing}, author={Leonid Peshkin and Virginia Savova}, journal={In Proceedings of the Intnl Joint Conf on Neural Networks (IJCNN), 2002}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703138}, primaryClass={cs.LG cs.AI cs.NI} }
peshkin2007reinforcement
arxiv-675867
cs/0703139
TCP throughput guarantee in the DiffServ Assured Forwarding service: what about the results?
<|reference_start|>TCP throughput guarantee in the DiffServ Assured Forwarding service: what about the results?: Since the proposition of Quality of Service architectures by the IETF, the interaction between TCP and the QoS services has been intensively studied. This paper reviews the results obtained in terms of TCP throughput guarantee in the DiffServ Assured Forwarding (DiffServ/AF) service and presents an overview of the different proposals to solve the problem. It has been demonstrated that the standardized IETF DiffServ conditioners, such as the token bucket color marker and the time sliding window color marker, were not good TCP traffic descriptors. Starting from this point, several propositions have been made, and most of them present new marking schemes in order to replace or improve the traditional token bucket color marker. The main problem is that TCP congestion control is not designed to work with the AF service. Indeed, both mechanisms are antagonistic. TCP has the property of sharing the bottleneck bandwidth fairly between flows, while a DiffServ network provides a controllable and predictable level of service. In this paper, we build a classification of all the propositions made in recent years and compare them. As a result, we will see that these conditioning schemes can be separated into three sets according to their action level, and that conditioning at the network edge level is the most accepted one. We conclude that the problem is still unsolved and that TCP, conditioned or not, remains inappropriate to the DiffServ/AF service.<|reference_end|>
arxiv
@article{lochin2007tcp, title={TCP throughput guarantee in the DiffServ Assured Forwarding service: what about the results?}, author={Emmanuel Lochin and Pascal Anelli}, journal={arXiv preprint arXiv:cs/0703139}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703139}, primaryClass={cs.NI} }
lochin2007tcp
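The token bucket color marker that the abstract above identifies as a poor TCP traffic descriptor can be sketched as a color-blind single-rate three-color marker in the spirit of RFC 2697; the parameter values below are illustrative, not taken from the paper.

```python
class SingleRateColorMarker:
    """Color-blind single-rate three-color marker in the spirit of
    RFC 2697: a committed bucket (size CBS) and an excess bucket
    (size EBS), both fed at the committed information rate CIR
    (bytes/s); committed-bucket overflow spills into the excess
    bucket."""

    def __init__(self, cir, cbs, ebs):
        self.cir, self.cbs, self.ebs = cir, cbs, ebs
        self.tc, self.te = float(cbs), float(ebs)  # token counters (bytes)
        self.last = 0.0                            # last update time (s)

    def mark(self, size, now):
        # Refill for the elapsed time; overflow spills to the excess bucket.
        self.tc += self.cir * (now - self.last)
        self.last = now
        if self.tc > self.cbs:
            self.te = min(self.ebs, self.te + self.tc - self.cbs)
            self.tc = float(self.cbs)
        # Mark against the committed bucket first, then the excess bucket.
        if size <= self.tc:
            self.tc -= size
            return "green"
        if size <= self.te:
            self.te -= size
            return "yellow"
        return "red"

# Illustrative parameters: a 1 kB/s committed rate with 1.5 kB bursts;
# three back-to-back 1 kB packets get green / yellow / red.
m = SingleRateColorMarker(cir=1000, cbs=1500, ebs=1500)
print([m.mark(1000, 0.0) for _ in range(3)])  # -> ['green', 'yellow', 'red']
```

A TCP flow's bursty, self-clocked arrivals drain the buckets unevenly, which is one intuition behind the poor match between this marker and TCP throughput guarantees.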
arxiv-675868
cs/0703140
How to Guarantee Secrecy for Cryptographic Protocols
<|reference_start|>How to Guarantee Secrecy for Cryptographic Protocols: In this paper we propose a general definition of secrecy for cryptographic protocols in the Dolev-Yao model. We give a sufficient condition ensuring secrecy for protocols where rules have encryption depth at most two, that is satisfied by almost all practical protocols. The only allowed primitives in the class of protocols we consider are pairing and encryption with atomic keys. Moreover, we describe an algorithm of practical interest which transforms a cryptographic protocol into a secure one from the point of view of secrecy, without changing its original goal with respect to secrecy of nonces and keys, provided the protocol satisfies some conditions. These conditions are not very restrictive and are satisfied for most practical protocols.<|reference_end|>
arxiv
@article{beauquier2007how, title={How to Guarantee Secrecy for Cryptographic Protocols}, author={Dani\`ele Beauquier (LACL), Fr\'ed\'eric Gauche (LACL)}, journal={Rapport interne (03/2007)}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703140}, primaryClass={cs.CR} }
beauquier2007how
arxiv-675869
cs/0703141
Constructive Conjugate Codes for Quantum Error Correction and Cryptography
<|reference_start|>Constructive Conjugate Codes for Quantum Error Correction and Cryptography: A conjugate code pair is defined as a pair of linear codes either of which contains the dual of the other. A conjugate code pair represents the essential structure of the corresponding Calderbank-Shor-Steane (CSS) quantum error-correcting code. It is known that conjugate code pairs are applicable to quantum cryptography. In this work, a polynomial construction of conjugate code pairs is presented. The constructed pairs achieve the highest known achievable rate on additive channels, and are decodable with algorithms of polynomial complexity.<|reference_end|>
arxiv
@article{hamada2007constructive, title={Constructive Conjugate Codes for Quantum Error Correction and Cryptography}, author={Mitsuru Hamada}, journal={arXiv preprint arXiv:cs/0703141}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703141}, primaryClass={cs.IT math.IT} }
hamada2007constructive
arxiv-675870
cs/0703142
Pragmatic Space-Time Trellis Codes for Block Fading Channels
<|reference_start|>Pragmatic Space-Time Trellis Codes for Block Fading Channels: A pragmatic approach for the construction of space-time codes over block fading channels is investigated. The approach consists in using common convolutional encoders and Viterbi decoders with suitable generators and rates, thus greatly simplifying the implementation of space-time codes. For the design of pragmatic space-time codes a methodology is proposed and applied, based on the extension of the concept of generalized transfer function for convolutional codes over block fading channels. Our search algorithm produces the convolutional encoder generators of pragmatic space-time codes for various number of states, number of antennas and fading rate. Finally it is shown that, for the investigated cases, the performance of pragmatic space-time codes is better than that of previously known space-time codes, confirming that they are a valuable choice in terms of both implementation complexity and performance.<|reference_end|>
arxiv
@article{chiani2007pragmatic, title={Pragmatic Space-Time Trellis Codes for Block Fading Channels}, author={Marco Chiani, Andrea Conti, Velio Tralli}, journal={arXiv preprint arXiv:cs/0703142}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703142}, primaryClass={cs.IT math.IT} }
chiani2007pragmatic
arxiv-675871
cs/0703143
How much feedback is required in MIMO Broadcast Channels?
<|reference_start|>How much feedback is required in MIMO Broadcast Channels?: In this paper, a downlink communication system, in which a Base Station (BS) equipped with M antennas communicates with N users each equipped with K receive antennas ($K \leq M$), is considered. It is assumed that the receivers have perfect Channel State Information (CSI), while the BS only knows the partial CSI, provided by the receivers via feedback. The minimum amount of feedback required at the BS, to achieve the maximum sum-rate capacity in the asymptotic case of $N \to \infty$ and different ranges of SNR is studied. In the fixed and low SNR regimes, it is demonstrated that to achieve the maximum sum-rate, an infinite amount of feedback is required. Moreover, in order to reduce the gap to the optimum sum-rate to zero, in the fixed SNR regime, the minimum amount of feedback scales as $\Theta(\ln \ln \ln N)$, which is achievable by the Random Beam-Forming scheme proposed in [14]. In the high SNR regime, two cases are considered; in the case of $K < M$, it is proved that the minimum amount of feedback bits to reduce the gap between the achievable sum-rate and the maximum sum-rate to zero grows logarithmically with SNR, which is achievable by the "Generalized Random Beam-Forming" scheme, proposed in [18]. In the case of $K = M$, it is shown that by using the Random Beam-Forming scheme and the total amount of feedback not growing with SNR, the maximum sum-rate capacity is achieved.<|reference_end|>
arxiv
@article{bayesteh2007how, title={How much feedback is required in MIMO Broadcast Channels?}, author={Alireza Bayesteh and Amir K. Khandani}, journal={arXiv preprint arXiv:cs/0703143}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703143}, primaryClass={cs.IT math.IT} }
bayesteh2007how
arxiv-675872
cs/0703144
On The Capacity Of Time-Varying Channels With Periodic Feedback
<|reference_start|>On The Capacity Of Time-Varying Channels With Periodic Feedback: The capacity of time-varying channels with periodic feedback at the transmitter is evaluated. It is assumed that the channel state information is perfectly known at the receiver and is fed back to the transmitter at regular time intervals. The system capacity is investigated in two cases: i) finite state Markov channel, and ii) additive white Gaussian noise channel with time-correlated fading. In the first case, it is shown that the capacity is achievable by multiplexing multiple codebooks across the channel. In the second case, the channel capacity and the optimal adaptive coding is obtained. It is shown that the optimal adaptation can be achieved by a single Gaussian codebook, while adaptively allocating the total power based on the side information at the transmitter.<|reference_end|>
arxiv
@article{sadrabadi2007on, title={On The Capacity Of Time-Varying Channels With Periodic Feedback}, author={Mehdi Ansari Sadrabadi, Mohammad Ali Maddah-Ali and Amir K. Khandani}, journal={arXiv preprint arXiv:cs/0703144}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703144}, primaryClass={cs.IT math.IT} }
sadrabadi2007on
arxiv-675873
cs/0703145
The Simultaneous Triple Product Property and Group-theoretic Results for the Exponent of Matrix Multiplication
<|reference_start|>The Simultaneous Triple Product Property and Group-theoretic Results for the Exponent of Matrix Multiplication: We describe certain special consequences of certain elementary methods from group theory for studying the algebraic complexity of matrix multiplication, as developed by H. Cohn, C. Umans et al. in 2003 and 2005. The measure of complexity here is the exponent of matrix multiplication, a real parameter between 2 and 3, which has been conjectured to be 2. More specifically, a finite group may simultaneously "realize" several independent matrix multiplications via its regular algebra if it has a family of triples of "index" subsets which satisfy the so-called simultaneous triple product property (STPP), in which case the complexity of these several multiplications does not exceed the rank (complexity) of the algebra. This leads to bounds for the exponent in terms of the size of the group and the sizes of its STPP triples, as well as the dimensions of its distinct irreducible representations. Wreath products of Abelian groups with symmetric groups appear especially important in this regard, and we give an example of such a group which shows that the exponent is less than 2.84, and could possibly be as small as 2.02 depending on the number of simultaneous matrix multiplications it realizes.<|reference_end|>
arxiv
@article{murthy2007the, title={The Simultaneous Triple Product Property and Group-theoretic Results for the Exponent of Matrix Multiplication}, author={Sandeep Murthy}, journal={arXiv preprint arXiv:cs/0703145}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703145}, primaryClass={cs.DS cs.CC math.GR} }
murthy2007the
arxiv-675874
cs/0703146
A Polynomial Time Algorithm for SAT
<|reference_start|>A Polynomial Time Algorithm for SAT: The article presents the compatibility matrix method and illustrates it with an application to the P vs NP problem. The method is a generalization of descriptive geometry: in the method, we draft problems and solve them utilizing the image creation technique. The method reveals: P = NP = PSPACE<|reference_end|>
arxiv
@article{gubin2007a, title={A Polynomial Time Algorithm for SAT}, author={Sergey Gubin}, journal={arXiv preprint arXiv:cs/0703146}, year={2007}, number={MCCCC 23,24,25}, archivePrefix={arXiv}, eprint={cs/0703146}, primaryClass={cs.CC cs.DM cs.DS cs.LO} }
gubin2007a
arxiv-675875
cs/0703147
The finite tiling problem is undecidable in the hyperbolic plane
<|reference_start|>The finite tiling problem is undecidable in the hyperbolic plane: In this paper, we consider the finite tiling problem which was proved undecidable in the Euclidean plane by Jarkko Kari in 1994. Here, we prove that the same problem for the hyperbolic plane is also undecidable.<|reference_end|>
arxiv
@article{margenstern2007the, title={The finite tiling problem is undecidable in the hyperbolic plane}, author={Maurice Margenstern}, journal={The Finite Tiling Problem Is Undecidable in the Hyperbolic Plane, International Journal of Foundations of Computer Science, 19(4), (2008), 971-982}, year={2007}, doi={10.1142/S0129054108006078}, archivePrefix={arXiv}, eprint={cs/0703147}, primaryClass={cs.CG cs.DM} }
margenstern2007the
arxiv-675876
cs/0703148
Computer Science and Game Theory: A Brief Survey
<|reference_start|>Computer Science and Game Theory: A Brief Survey: There has been a remarkable increase in work at the interface of computer science and game theory in the past decade. In this article I survey some of the main themes of work in the area, with a focus on the work in computer science. Given the length constraints, I make no attempt at being comprehensive, especially since other surveys are also available, and a comprehensive survey book will appear shortly.<|reference_end|>
arxiv
@article{halpern2007computer, title={Computer Science and Game Theory: A Brief Survey}, author={Joseph Y. Halpern}, journal={arXiv preprint arXiv:cs/0703148}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703148}, primaryClass={cs.GT econ.TH} }
halpern2007computer
arxiv-675877
cs/0703149
Exploring Logic Artificial Chemistries: An Illogical Attempt?
<|reference_start|>Exploring Logic Artificial Chemistries: An Illogical Attempt?: Robustness to a wide variety of negative factors and the ability to self-repair is an inherent and natural characteristic of all life forms on earth. As opposed to nature, man-made systems are in most cases not inherently robust and a significant effort has to be made in order to make them resistant against failures. This can be done in a wide variety of ways and on various system levels. In the field of digital systems, for example, techniques such as triple modular redundancy (TMR) are frequently used, which results in a considerable hardware overhead. Biologically-inspired computing by means of bio-chemical metaphors offers alternative paradigms, which need to be explored and evaluated. Here, we are interested in evaluating the potential of nature-inspired artificial chemistries and membrane systems as an alternative information representing and processing paradigm in order to obtain robust and spatially extended Boolean computing systems in a distributed environment. We investigate conceptual approaches inspired by artificial chemistries and membrane systems and compare proofs of concept. First, we show that elementary logical functions can be implemented. Second, we illustrate how they can be made more robust and how they can be assembled into larger-scale systems. Finally, we discuss the implications for and paths to possible genuine implementations. Compared to the main body of work in artificial chemistries, we take a very pragmatic and implementation-oriented approach and are interested in realizing Boolean computations only. The results emphasize that artificial chemistries can be used to implement Boolean logic in a spatially extended and distributed environment and can also be made highly robust, but at a significant price.<|reference_end|>
arxiv
@article{teuscher2007exploring, title={Exploring Logic Artificial Chemistries: An Illogical Attempt?}, author={Christof Teuscher}, journal={The First IEEE Symposium on Artificial Life, April 1-5, 2007, Hawaii, USA}, year={2007}, doi={10.1109/ALIFE.2007.367659}, archivePrefix={arXiv}, eprint={cs/0703149}, primaryClass={cs.NE nlin.AO} }
teuscher2007exploring
arxiv-675878
cs/0703150
Type-II/III DCT/DST algorithms with reduced number of arithmetic operations
<|reference_start|>Type-II/III DCT/DST algorithms with reduced number of arithmetic operations: We present algorithms for the discrete cosine transform (DCT) and discrete sine transform (DST), of types II and III, that achieve a lower count of real multiplications and additions than previously published algorithms, without sacrificing numerical accuracy. Asymptotically, the operation count is reduced from ~ 2N log_2 N to ~ (17/9) N log_2 N for a power-of-two transform size N. Furthermore, we show that a further N multiplications may be saved by a certain rescaling of the inputs or outputs, generalizing a well-known technique for N=8 by Arai et al. These results are derived by considering the DCT to be a special case of a DFT of length 4N, with certain symmetries, and then pruning redundant operations from a recent improved fast Fourier transform algorithm (based on a recursive rescaling of the conjugate-pair split radix algorithm). The improved algorithms for DCT-III, DST-II, and DST-III follow immediately from the improved count for the DCT-II.<|reference_end|>
arxiv
@article{shao2007type-ii/iii, title={Type-II/III DCT/DST algorithms with reduced number of arithmetic operations}, author={Xuancheng Shao and Steven G. Johnson}, journal={Signal Processing vol. 88, issue 6, p. 1553-1564 (2008)}, year={2007}, doi={10.1016/j.sigpro.2008.01.004}, archivePrefix={arXiv}, eprint={cs/0703150}, primaryClass={cs.NA cs.DS cs.MS} }
shao2007type-ii/iii
arxiv-675879
cs/0703151
Asymptotic Analysis of Amplify and Forward Relaying in a Parallel MIMO Relay Network
<|reference_start|>Asymptotic Analysis of Amplify and Forward Relaying in a Parallel MIMO Relay Network: This paper considers the setup of a parallel MIMO relay network in which $K$ relays, each equipped with $N$ antennas, assist the transmitter and the receiver, each equipped with $M$ antennas, in the half-duplex mode, under the assumption that $N\geq{M}$. This setup has been studied in the literature like in \cite{nabar}, \cite{nabar2}, and \cite{qr}. In this paper, a simple scheme, the so-called Incremental Cooperative Beamforming, is introduced and shown to achieve the capacity of the network in the asymptotic case of $K\to{\infty}$ with a gap no more than $O(\frac{1}{\log(K)})$. This result is shown to hold, as long as the power of the relays scales as $\omega(\frac{\log^9(K)}{K})$. Finally, the asymptotic SNR behavior is studied and it is proved that the proposed scheme achieves the full multiplexing gain, regardless of the number of relays.<|reference_end|>
arxiv
@article{gharan2007asymptotic, title={Asymptotic Analysis of Amplify and Forward Relaying in a Parallel MIMO Relay Network}, author={Shahab Oveis Gharan, Alireza Bayesteh, and Amir K. Khandani}, journal={arXiv preprint arXiv:cs/0703151}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703151}, primaryClass={cs.IT math.IT} }
gharan2007asymptotic
arxiv-675880
cs/0703152
Quantum Lambda Calculi with Classical Control: Syntax and Expressive Power
<|reference_start|>Quantum Lambda Calculi with Classical Control: Syntax and Expressive Power: We study an untyped lambda calculus with quantum data and classical control. This work stems from previous proposals by Selinger and Valiron and by Van Tonder. We focus on syntax and expressiveness, rather than (denotational) semantics. We prove subject reduction, confluence and a standardization theorem. Moreover, we prove the computational equivalence of the proposed calculus with a suitable class of quantum circuit families.<|reference_end|>
arxiv
@article{lago2007quantum, title={Quantum Lambda Calculi with Classical Control: Syntax and Expressive Power}, author={Ugo Dal Lago, Andrea Masini, Margherita Zorzi}, journal={arXiv preprint arXiv:cs/0703152}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703152}, primaryClass={cs.LO} }
lago2007quantum
arxiv-675881
cs/0703153
The periodic domino problem is undecidable in the hyperbolic plane
<|reference_start|>The periodic domino problem is undecidable in the hyperbolic plane: In this paper, we consider the periodic tiling problem which was proved undecidable in the Euclidean plane by Yu. Gurevich and I. Koriakov in 1972. Here, we prove that the same problem for the hyperbolic plane is also undecidable.<|reference_end|>
arxiv
@article{margenstern2007the, title={The periodic domino problem is undecidable in the hyperbolic plane}, author={Maurice Margenstern}, journal={arXiv preprint arXiv:cs/0703153}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703153}, primaryClass={cs.CG cs.DM} }
margenstern2007the
arxiv-675882
cs/0703154
A Hot Channel
<|reference_start|>A Hot Channel: This paper studies on-chip communication with non-ideal heat sinks. A channel model is proposed where the variance of the additive noise depends on the weighted sum of the past channel input powers. It is shown that, depending on the weights, the capacity can be either bounded or unbounded in the input power. A necessary condition and a sufficient condition for the capacity to be bounded are presented.<|reference_end|>
arxiv
@article{koch2007a, title={A Hot Channel}, author={Tobias Koch, Amos Lapidoth, and Paul P. Sotiriadis}, journal={arXiv preprint arXiv:cs/0703154}, year={2007}, doi={10.1109/ITW.2007.4313127}, archivePrefix={arXiv}, eprint={cs/0703154}, primaryClass={cs.IT math.IT} }
koch2007a
arxiv-675883
cs/0703155
Liveness of Heap Data for Functional Programs
<|reference_start|>Liveness of Heap Data for Functional Programs: Functional programming languages use garbage collection for heap memory management. Ideally, garbage collectors should reclaim all objects that are dead at the time of garbage collection. An object is dead at an execution instant if it is not used in the future. Garbage collectors collect only those dead objects that are not reachable from any program variable. This is because they are not able to distinguish between reachable objects that are dead and reachable objects that are live. In this paper, we describe a static analysis to discover reachable dead objects in programs written in first-order, eager functional programming languages. The results of this technique can be used to make reachable dead objects unreachable, thereby allowing garbage collectors to reclaim more dead objects.<|reference_end|>
arxiv
@article{karkare2007liveness, title={Liveness of Heap Data for Functional Programs}, author={Amey Karkare, Uday Khedker, Amitabha Sanyal}, journal={arXiv preprint arXiv:cs/0703155}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703155}, primaryClass={cs.PL} }
karkare2007liveness
arxiv-675884
cs/0703156
Case Base Mining for Adaptation Knowledge Acquisition
<|reference_start|>Case Base Mining for Adaptation Knowledge Acquisition: In case-based reasoning, the adaptation of a source case in order to solve the target problem is at the same time crucial and difficult to implement. The reason for this difficulty is that, in general, adaptation strongly depends on domain-dependent knowledge. This fact motivates research on adaptation knowledge acquisition (AKA). This paper presents an approach to AKA based on the principles and techniques of knowledge discovery from databases and data-mining. It is implemented in CABAMAKA, a system that explores the variations within the case base to elicit adaptation knowledge. This system has been successfully tested in an application of case-based reasoning to decision support in the domain of breast cancer treatment.<|reference_end|>
arxiv
@article{d'aquin2007case, title={Case Base Mining for Adaptation Knowledge Acquisition}, author={Mathieu D'Aquin (KMI), Fadi Badra (INRIA Lorraine - LORIA), Sandrine Lafrogne (INRIA Lorraine - LORIA), Jean Lieber (INRIA Lorraine - LORIA), Amedeo Napoli (INRIA Lorraine - LORIA), Laszlo Szathmary (INRIA Lorraine - LORIA)}, journal={Dans Twentieth International Joint Conference on Artificial Intelligence - IJCAI'07 (2007) 750--755}, year={2007}, archivePrefix={arXiv}, eprint={cs/0703156}, primaryClass={cs.AI} }
d'aquin2007case
arxiv-675885
cs/9301101
Verifying the Unification Algorithm in LCF
<|reference_start|>Verifying the Unification Algorithm in LCF: Manna and Waldinger's theory of substitutions and unification has been verified using the Cambridge LCF theorem prover. A proof of the monotonicity of substitution is presented in detail, as an example of interaction with LCF. Translating the theory into LCF's domain-theoretic logic is largely straightforward. Well-founded induction on a complex ordering is translated into nested structural inductions. Correctness of unification is expressed using predicates for such properties as idempotence and most-generality. The verification is presented as a series of lemmas. The LCF proofs are compared with the original ones, and with other approaches. It appears difficult to find a logic that is both simple and flexible, especially for proving termination.<|reference_end|>
arxiv
@article{paulson2000verifying, title={Verifying the Unification Algorithm in LCF}, author={Lawrence C. Paulson}, journal={Science of Computer Programming 5 (1985), 143-170}, year={2000}, archivePrefix={arXiv}, eprint={cs/9301101}, primaryClass={cs.LO} }
paulson2000verifying
arxiv-675886
cs/9301102
Constructing Recursion Operators in Intuitionistic Type Theory
<|reference_start|>Constructing Recursion Operators in Intuitionistic Type Theory: Martin-L\"of's Intuitionistic Theory of Types is becoming popular for formal reasoning about computer programs. To handle recursion schemes other than primitive recursion, a theory of well-founded relations is presented. Using primitive recursion over higher types, induction and recursion are formally derived for a large class of well-founded relations. Included are < on natural numbers, and relations formed by inverse images, addition, multiplication, and exponentiation of other relations. The constructions are given in full detail to allow their use in theorem provers for Type Theory, such as Nuprl. The theory is compared with work in the field of ordinal recursion over higher types.<|reference_end|>
arxiv
@article{paulson2000constructing, title={Constructing Recursion Operators in Intuitionistic Type Theory}, author={Lawrence C. Paulson}, journal={Journal of Symbolic Computation 2 (1986), 325-355}, year={2000}, archivePrefix={arXiv}, eprint={cs/9301102}, primaryClass={cs.LO} }
paulson2000constructing
arxiv-675887
cs/9301103
Proving Termination of Normalization Functions for Conditional Expressions
<|reference_start|>Proving Termination of Normalization Functions for Conditional Expressions: Boyer and Moore have discussed a recursive function that puts conditional expressions into normal form [1]. It is difficult to prove that this function terminates on all inputs. Three termination proofs are compared: (1) using a measure function, (2) in domain theory using LCF, (3) showing that its recursion relation, defined by the pattern of recursive calls, is well-founded. The last two proofs are essentially the same though conducted in markedly different logical frameworks. An obviously total variant of the normalize function is presented as the `computational meaning' of those two proofs. A related function makes nested recursive calls. The three termination proofs become more complex: termination and correctness must be proved simultaneously. The recursion relation approach seems flexible enough to handle subtle termination proofs where previously domain theory seemed essential.<|reference_end|>
arxiv
@article{paulson2000proving, title={Proving Termination of Normalization Functions for Conditional Expressions}, author={Lawrence C. Paulson}, journal={Journal of Automated Reasoning 2 (1986), 63-74}, year={2000}, archivePrefix={arXiv}, eprint={cs/9301103}, primaryClass={cs.LO} }
paulson2000proving
arxiv-675888
cs/9301104
Natural Deduction as Higher-Order Resolution
<|reference_start|>Natural Deduction as Higher-Order Resolution: An interactive theorem prover, Isabelle, is under development. In LCF, each inference rule is represented by one function for forwards proof and another (a tactic) for backwards proof. In Isabelle, each inference rule is represented by a Horn clause. Resolution gives both forwards and backwards proof, supporting a large class of logics. Isabelle has been used to prove theorems in Martin-L\"of's Constructive Type Theory. Quantifiers pose several difficulties: substitution, bound variables, Skolemization. Isabelle's representation of logical syntax is the typed lambda-calculus, requiring higher-order unification. It may have potential for logic programming. Depth-first subgoaling along inference rules constitutes a higher-order Prolog.<|reference_end|>
arxiv
@article{paulson2000natural, title={Natural Deduction as Higher-Order Resolution}, author={Lawrence C. Paulson}, journal={Journal of Logic Programming 3 (1986), 237-258}, year={2000}, archivePrefix={arXiv}, eprint={cs/9301104}, primaryClass={cs.LO} }
paulson2000natural
arxiv-675889
cs/9301105
The Foundation of a Generic Theorem Prover
<|reference_start|>The Foundation of a Generic Theorem Prover: Isabelle is an interactive theorem prover that supports a variety of logics. It represents rules as propositions (not as functions) and builds proofs by combining rules. These operations constitute a meta-logic (or `logical framework') in which the object-logics are formalized. Isabelle is now based on higher-order logic -- a precise and well-understood foundation. Examples illustrate use of this meta-logic to formalize logics and proofs. Axioms for first-order logic are shown sound and complete. Backwards proof is formalized by meta-reasoning about object-level entailment. Higher-order logic has several practical advantages over other meta-logics. Many proof techniques are known, such as Huet's higher-order unification procedure.<|reference_end|>
arxiv
@article{paulson2000the, title={The Foundation of a Generic Theorem Prover}, author={Lawrence C. Paulson}, journal={Journal of Automated Reasoning 5 (1989), 363-397}, year={2000}, archivePrefix={arXiv}, eprint={cs/9301105}, primaryClass={cs.LO} }
paulson2000the
arxiv-675890
cs/9301106
Isabelle: The Next 700 Theorem Provers
<|reference_start|>Isabelle: The Next 700 Theorem Provers: Isabelle is a generic theorem prover, designed for interactive reasoning in a variety of formal theories. At present it provides useful proof procedures for Constructive Type Theory, various first-order logics, Zermelo-Fraenkel set theory, and higher-order logic. This survey of Isabelle serves as an introduction to the literature. It explains why generic theorem proving is beneficial. It gives a thorough history of Isabelle, beginning with its origins in the LCF system. It presents an account of how logics are represented, illustrated using classical logic. The approach is compared with the Edinburgh Logical Framework. Several of the Isabelle object-logics are presented.<|reference_end|>
arxiv
@article{paulson2000isabelle:, title={Isabelle: The Next 700 Theorem Provers}, author={Lawrence C. Paulson}, journal={published in P. Odifreddi (editor), Logic and Computer Science (Academic Press, 1990), 361-386}, year={2000}, archivePrefix={arXiv}, eprint={cs/9301106}, primaryClass={cs.LO} }
paulson2000isabelle:
arxiv-675891
cs/9301107
A Formulation of the Simple Theory of Types (for Isabelle)
<|reference_start|>A Formulation of the Simple Theory of Types (for Isabelle): Simple type theory is formulated for use with the generic theorem prover Isabelle. This requires explicit type inference rules. There are function, product, and subset types, which may be empty. Descriptions (the eta-operator) introduce the Axiom of Choice. Higher-order logic is obtained through reflection between formulae and terms of type bool. Recursive types and functions can be formally constructed. Isabelle proof procedures are described. The logic appears suitable for general mathematics as well as computational problems.<|reference_end|>
arxiv
@article{paulson2000a, title={A Formulation of the Simple Theory of Types (for Isabelle)}, author={Lawrence C. Paulson}, journal={published in P. Martin-L\"of & G. Mints (editors), COLOG-88: International Conf. in Computer Logic (Springer LNCS 417, 1990), 246-274}, year={2000}, archivePrefix={arXiv}, eprint={cs/9301107}, primaryClass={cs.LO} }
paulson2000a
arxiv-675892
cs/9301108
A Higher-Order Implementation of Rewriting
<|reference_start|>A Higher-Order Implementation of Rewriting: Many automatic theorem-provers rely on rewriting. Using theorems as rewrite rules helps to simplify the subgoals that arise during a proof. LCF is an interactive theorem-prover intended for reasoning about computation. Its implementation of rewriting is presented in detail. LCF provides a family of rewriting functions, and operators to combine them. A succession of functions is described, from pattern matching primitives to the rewriting tool that performs most inferences in LCF proofs. The design is highly modular. Each function performs a basic, specific task, such as recognizing a certain form of tautology. Each operator implements one method of building a rewriting function from simpler ones. These pieces can be put together in numerous ways, yielding a variety of rewriting strategies. The approach involves programming with higher-order functions. Rewriting functions are data values, produced by computation on other rewriting functions. The code is in daily use at Cambridge, demonstrating the practical use of functional programming.<|reference_end|>
arxiv
@article{paulson2001a, title={A Higher-Order Implementation of Rewriting}, author={Lawrence C. Paulson}, journal={Science of Computer Programming 3 (1983), 119-149}, year={2001}, archivePrefix={arXiv}, eprint={cs/9301108}, primaryClass={cs.LO} }
paulson2001a
arxiv-675893
cs/9301109
Logic Programming, Functional Programming, and Inductive Definitions
<|reference_start|>Logic Programming, Functional Programming, and Inductive Definitions: An attempt at unifying logic and functional programming is reported. As a starting point, we take the view that "logic programs" are not about logic but constitute inductive definitions of sets and relations. A skeletal language design based on these considerations is sketched and a prototype implementation discussed.<|reference_end|>
arxiv
@article{paulson2001logic, title={Logic Programming, Functional Programming, and Inductive Definitions}, author={Lawrence C. Paulson and Andrew W. Smith}, journal={published in P. Schroeder-Heister (editor), Extensions of Logic Programming (Springer, 1991), 283-310}, year={2001}, archivePrefix={arXiv}, eprint={cs/9301109}, primaryClass={cs.LO} }
paulson2001logic
arxiv-675894
cs/9301110
Designing a Theorem Prover
<|reference_start|>Designing a Theorem Prover: A step-by-step presentation of the code for a small theorem prover introduces theorem-proving techniques. The programming language used is Standard ML. The prover operates on a sequent calculus formulation of first-order logic, which is briefly explained. The implementation of unification and logical inference is shown. The prover is demonstrated on several small examples, including one that shows its limitations. The final part of the paper is a survey of contemporary research on interactive theorem proving.<|reference_end|>
arxiv
@article{paulson2001designing, title={Designing a Theorem Prover}, author={Lawrence C. Paulson}, journal={published in S. Abramsky, D. M. Gabbay, T. S. E. Maibaum (editors), Handbook of Logic in Computer Science, Vol II (Oxford, 1992), 415-475}, year={2001}, archivePrefix={arXiv}, eprint={cs/9301110}, primaryClass={cs.LO} }
paulson2001designing
arxiv-675895
cs/9301111
Nested satisfiability
<|reference_start|>Nested satisfiability: A special case of the satisfiability problem, in which the clauses have a hierarchical structure, is shown to be solvable in linear time, assuming that the clauses have been represented in a convenient way.<|reference_end|>
arxiv
@article{knuth1990nested, title={Nested satisfiability}, author={Donald E. Knuth}, journal={Acta Inform. 28 (1990), no. 1, 1--6}, year={1990}, number={Knuth migration 11/2004}, archivePrefix={arXiv}, eprint={cs/9301111}, primaryClass={cs.CC} }
knuth1990nested
arxiv-675896
cs/9301112
A note on digitized angles
<|reference_start|>A note on digitized angles: We study the configurations of pixels that occur when two digitized straight lines meet each other.<|reference_end|>
arxiv
@article{knuth1990a, title={A note on digitized angles}, author={Donald E. Knuth}, journal={Electronic Publishing 3 (1990), no. 2, 99--104}, year={1990}, number={Knuth migration 11/2004}, archivePrefix={arXiv}, eprint={cs/9301112}, primaryClass={cs.GR} }
knuth1990a
arxiv-675897
cs/9301113
Textbook examples of recursion
<|reference_start|>Textbook examples of recursion: We discuss properties of recursive schemas related to McCarthy's ``91 function'' and to Takeuchi's triple recursion. Several theorems are proposed as interesting candidates for machine verification, and some intriguing open questions are raised.<|reference_end|>
arxiv
@article{knuth1991textbook, title={Textbook examples of recursion}, author={Donald E. Knuth}, journal={Artificial intelligence and mathematical theory of computation, 207--229, Academic Press, Boston, MA, 1991}, year={1991}, number={Knuth migration 11/2004}, archivePrefix={arXiv}, eprint={cs/9301113}, primaryClass={cs.CC} }
knuth1991textbook
arxiv-675898
cs/9301114
Theory and practice
<|reference_start|>Theory and practice: The author argues to Silicon Valley that the most important and powerful part of computer science is work that is simultaneously theoretical and practical. He particularly considers the intersection of the theory of algorithms and practical software development. He combines examples from the development of the TeX typesetting system with clever jokes, criticisms, and encouragements.<|reference_end|>
arxiv
@article{knuth1991theory, title={Theory and practice}, author={Donald E. Knuth}, journal={Theoretical Comp. Sci. 90 (1991), 1--15}, year={1991}, number={Knuth migration 11/2004}, archivePrefix={arXiv}, eprint={cs/9301114}, primaryClass={cs.GL} }
knuth1991theory
arxiv-675899
cs/9301115
Context-free multilanguages
<|reference_start|>Context-free multilanguages: This article is a sketch of ideas that were once intended to appear in the author's famous series, "The Art of Computer Programming". He generalizes the notion of a context-free language from a set to a multiset of words over an alphabet. The idea is to keep track of the number of ways to parse a string. For example, "fruit flies like a banana" can famously be parsed in two ways; analogous examples in the setting of programming languages may yet be important in the future. The treatment is informal but essentially rigorous.<|reference_end|>
arxiv
@article{knuth1991context-free, title={Context-free multilanguages}, author={Donald E. Knuth}, journal={Theoretical Studies in Computer Science, Ginsburg Festschrift}, year={1991}, number={Knuth migration 11/2004}, archivePrefix={arXiv}, eprint={cs/9301115}, primaryClass={cs.DS} }
knuth1991context-free
arxiv-675900
cs/9301116
The problem of compatible representatives
<|reference_start|>The problem of compatible representatives: The purpose of this note is to attach a name to a natural class of combinatorial problems and to point out that this class includes many important special cases. We also show that a simple problem of placing nonoverlapping labels on a rectangular map is NP-complete.<|reference_end|>
arxiv
@article{knuth1992the, title={The problem of compatible representatives}, author={Donald E. Knuth and Arvind Raghunathan}, journal={SIAM J. Discrete Math. 5 (1992), no. 3, 422--427}, year={1992}, number={Knuth migration 11/2004}, archivePrefix={arXiv}, eprint={cs/9301116}, primaryClass={cs.DS math.CO} }
knuth1992the