corpus_id
stringlengths
7
12
paper_id
stringlengths
9
16
title
stringlengths
1
261
abstract
stringlengths
70
4.02k
source
stringclasses
1 value
bibtex
stringlengths
208
20.9k
citation_key
stringlengths
6
100
arxiv-5801
0812.3642
MIMO Two-way Relay Channel: Diversity-Multiplexing Tradeoff Analysis
<|reference_start|>MIMO Two-way Relay Channel: Diversity-Multiplexing Tradeoff Analysis: A multi-hop two-way relay channel is considered in which all the terminals are equipped with multiple antennas. Assuming independent quasi-static Rayleigh fading channels and channel state information available at the receivers, we characterize the optimal diversity-multiplexing gain tradeoff (DMT) curve for a full-duplex relay terminal. It is shown that the optimal DMT can be achieved by a compress-and-forward type relaying strategy in which the relay quantizes its received signal and transmits the corresponding channel codeword. It is noteworthy that, with this transmission protocol, the two transmissions in opposite directions can achieve their respective single user optimal DMT performances simultaneously, despite the interference they cause to each other. Motivated by the optimality of this scheme in the case of the two-way relay channel, a novel dynamic compress-and-forward (DCF) protocol is proposed for the one-way multi-hop MIMO relay channel for a half-duplex relay terminal, and this scheme is shown to achieve the optimal DMT performance.<|reference_end|>
arxiv
@article{gunduz2008mimo, title={MIMO Two-way Relay Channel: Diversity-Multiplexing Tradeoff Analysis}, author={Deniz Gunduz, Andrea Goldsmith, H. Vincent Poor}, journal={arXiv preprint arXiv:0812.3642}, year={2008}, archivePrefix={arXiv}, eprint={0812.3642}, primaryClass={cs.IT math.IT} }
gunduz2008mimo
arxiv-5802
0812.3648
A New Method for Knowledge Representation in Expert System's (XMLKR)
<|reference_start|>A New Method for Knowledge Representation in Expert System's (XMLKR): Knowledge representation it is an essential section of a Expert Systems, Because in this section we have a framework to establish an expert system then we can modeling and use by this to design an expert system. Many method it is exist for knowledge representation but each method have problems, in this paper we introduce a new method of object oriented by XML language as XMLKR to knowledge representation, and we want to discuss advantage and disadvantage of this method.<|reference_end|>
arxiv
@article{bahrami2008a, title={A New Method for Knowledge Representation in Expert System's (XMLKR)}, author={Mehdi Bahrami}, journal={Emerging Trends in Engineering and Technology, 2008. ICETET '08. First International Conference, IEEE}, year={2008}, doi={10.1109/ICETET.2008.194}, archivePrefix={arXiv}, eprint={0812.3648}, primaryClass={cs.DC cs.AI} }
bahrami2008a
arxiv-5803
0812.3677
Artificial intelligence for Bidding Hex
<|reference_start|>Artificial intelligence for Bidding Hex: We present a Monte Carlo algorithm for efficiently finding near optimal moves and bids in the game of Bidding Hex. The algorithm is based on the recent solution of Random-Turn Hex by Peres, Schramm, Sheffield, and Wilson together with Richman's work connecting random-turn games to bidding games.<|reference_end|>
arxiv
@article{payne2008artificial, title={Artificial intelligence for Bidding Hex}, author={Sam Payne and Elina Robeva}, journal={Games of No Chance 4, MSRI Publications 63 (2015), 207-214}, year={2008}, archivePrefix={arXiv}, eprint={0812.3677}, primaryClass={math.CO cs.GT math.PR} }
payne2008artificial
arxiv-5804
0812.3702
Algorithmic and Statistical Challenges in Modern Large-Scale Data Analysis are the Focus of MMDS 2008
<|reference_start|>Algorithmic and Statistical Challenges in Modern Large-Scale Data Analysis are the Focus of MMDS 2008: The 2008 Workshop on Algorithms for Modern Massive Data Sets (MMDS 2008), sponsored by the NSF, DARPA, LinkedIn, and Yahoo!, was held at Stanford University, June 25--28. The goals of MMDS 2008 were (1) to explore novel techniques for modeling and analyzing massive, high-dimensional, and nonlinearly-structured scientific and internet data sets; and (2) to bring together computer scientists, statisticians, mathematicians, and data analysis practitioners to promote cross-fertilization of ideas.<|reference_end|>
arxiv
@article{mahoney2008algorithmic, title={Algorithmic and Statistical Challenges in Modern Large-Scale Data Analysis are the Focus of MMDS 2008}, author={Michael W. Mahoney, Lek-Heng Lim, and Gunnar E. Carlsson}, journal={arXiv preprint arXiv:0812.3702}, year={2008}, archivePrefix={arXiv}, eprint={0812.3702}, primaryClass={cs.DS} }
mahoney2008algorithmic
arxiv-5805
0812.3709
Minimum Expected Distortion in Gaussian Source Coding with Fading Side Information
<|reference_start|>Minimum Expected Distortion in Gaussian Source Coding with Fading Side Information: An encoder, subject to a rate constraint, wishes to describe a Gaussian source under squared error distortion. The decoder, besides receiving the encoder's description, also observes side information consisting of uncompressed source symbol subject to slow fading and noise. The decoder knows the fading realization but the encoder knows only its distribution. The rate-distortion function that simultaneously satisfies the distortion constraints for all fading states was derived by Heegard and Berger. A layered encoding strategy is considered in which each codeword layer targets a given fading state. When the side-information channel has two discrete fading states, the expected distortion is minimized by optimally allocating the encoding rate between the two codeword layers. For multiple fading states, the minimum expected distortion is formulated as the solution of a convex optimization problem with linearly many variables and constraints. Through a limiting process on the primal and dual solutions, it is shown that single-layer rate allocation is optimal when the fading probability density function is continuous and quasiconcave (e.g., Rayleigh, Rician, Nakagami, and log-normal). In particular, under Rayleigh fading, the optimal single codeword layer targets the least favorable state as if the side information was absent.<|reference_end|>
arxiv
@article{ng2008minimum, title={Minimum Expected Distortion in Gaussian Source Coding with Fading Side Information}, author={Chris T. K. Ng, Chao Tian, Andrea J. Goldsmith, Shlomo Shamai (Shitz)}, journal={IEEE Trans. Inf. Theory, vol. 58, no. 9, pp. 5725-5739, Sep. 2012}, year={2008}, doi={10.1109/TIT.2012.2204476}, archivePrefix={arXiv}, eprint={0812.3709}, primaryClass={cs.IT math.IT} }
ng2008minimum
arxiv-5806
0812.3715
Business processes integration and performance indicators in a PLM
<|reference_start|>Business processes integration and performance indicators in a PLM: In an economic environment more and more competitive, the effective management of information and knowledge is a strategic issue for industrial enterprises. In the global marketplace, companies must use reactive strategies and reduce their products development cycle. In this context, the PLM (Product Lifecycle Management) is considered as a key component of the information system. The aim of this paper is to present an approach to integrate Business Processes in a PLM system. This approach is implemented in automotive sector with second-tier subcontractor<|reference_end|>
arxiv
@article{bissay2008business, title={Business processes integration and performance indicators in a PLM}, author={Aur'elie Bissay (LIESP), Philippe Pernelle (LIESP), Arnaud Lefebvre (LIESP), Abdelaziz Bouras (LIESP)}, journal={APMS'08, Espoo : Finlande (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0812.3715}, primaryClass={cs.DB} }
bissay2008business
arxiv-5807
0812.3716
Context-aware adaptation for group communication support applications with dynamic architecture
<|reference_start|>Context-aware adaptation for group communication support applications with dynamic architecture: In this paper, we propose a refinement-based adaptation approach for the architecture of distributed group communication support applications. Unlike most of previous works, our approach reaches implementable, context-aware and dynamically adaptable architectures. To model the context, we manage simultaneously four parameters that influence Qos provided by the application. These parameters are: the available bandwidth, the exchanged data communication priority, the energy level and the available memory for processing. These parameters make it possible to refine the choice between the various architectural configurations when passing from a given abstraction level to the lower level which implements it. Our approach allows the importance degree associated with each parameter to be adapted dynamically. To implement adaptation, we switch between the various configurations of the same level, and we modify the state of the entities of a given configuration when necessary. We adopt the direct and mediated Producer- Consumer architectural styles and graphs for architecture modelling. In order to validate our approach we elaborate a simulation model.<|reference_end|>
arxiv
@article{rodriguez2008context-aware, title={Context-aware adaptation for group communication support applications with dynamic architecture}, author={Ismael Bouassida Rodriguez (LAAS), Khalil DRIRA (LAAS), Christophe Chassot (LAAS), Mohamed Jmaiel (ReDCAD)}, journal={System and Information Sciences Notes 2, 1 (2007) 88}, year={2008}, archivePrefix={arXiv}, eprint={0812.3716}, primaryClass={cs.SE} }
rodriguez2008context-aware
arxiv-5808
0812.3719
Architecture Logicielles pour des Applications h\'et\'erog\`enes, distribu\'ees et reconfigurables
<|reference_start|>Architecture Logicielles pour des Applications h\'et\'erog\`enes, distribu\'ees et reconfigurables: The recent apparition of mobile wireless sensor aware to their physical environment and able to process information must allow proposing applications able to take into account their physical context and to react according to the changes of the environment. It suppose to design applications integrating both software and hardware components able to communicate. Applications must use context information from components to measure the quality of the proposed services in order to adapt them in real time. This work is interested in the integration of sensors in distributed applications. It present a service oriented software architecture allowing to manage and to reconfigure applications in heterogeneous environment where entities of different nature collaborate: software components and wireless sensors.<|reference_end|>
arxiv
@article{louberry2008architecture, title={Architecture Logicielles pour des Applications h\'et\'erog\`enes, distribu\'ees et reconfigurables}, author={Christine Louberry (LIUPPA), Marc Dalmau (LIUPPA), Philippe Roose (LIUPPA)}, journal={arXiv preprint arXiv:0812.3719}, year={2008}, archivePrefix={arXiv}, eprint={0812.3719}, primaryClass={cs.SE} }
louberry2008architecture
arxiv-5809
0812.3742
Quickest Change Detection of a Markov Process Across a Sensor Array
<|reference_start|>Quickest Change Detection of a Markov Process Across a Sensor Array: Recent attention in quickest change detection in the multi-sensor setting has been on the case where the densities of the observations change at the same instant at all the sensors due to the disruption. In this work, a more general scenario is considered where the change propagates across the sensors, and its propagation can be modeled as a Markov process. A centralized, Bayesian version of this problem, with a fusion center that has perfect information about the observations and a priori knowledge of the statistics of the change process, is considered. The problem of minimizing the average detection delay subject to false alarm constraints is formulated as a partially observable Markov decision process (POMDP). Insights into the structure of the optimal stopping rule are presented. In the limiting case of rare disruptions, we show that the structure of the optimal test reduces to thresholding the a posteriori probability of the hypothesis that no change has happened. We establish the asymptotic optimality (in the vanishing false alarm probability regime) of this threshold test under a certain condition on the Kullback-Leibler (K-L) divergence between the post- and the pre-change densities. In the special case of near-instantaneous change propagation across the sensors, this condition reduces to the mild condition that the K-L divergence be positive. Numerical studies show that this low complexity threshold test results in a substantial improvement in performance over naive tests such as a single-sensor test or a test that wrongly assumes that the change propagates instantaneously.<|reference_end|>
arxiv
@article{raghavan2008quickest, title={Quickest Change Detection of a Markov Process Across a Sensor Array}, author={Vasanthan Raghavan, Venugopal V. Veeravalli}, journal={arXiv preprint arXiv:0812.3742}, year={2008}, doi={10.1109/TIT.2010.2040869}, archivePrefix={arXiv}, eprint={0812.3742}, primaryClass={cs.IT math.IT math.ST stat.TH} }
raghavan2008quickest
arxiv-5810
0812.3788
Foundations of SPARQL Query Optimization
<|reference_start|>Foundations of SPARQL Query Optimization: The SPARQL query language is a recent W3C standard for processing RDF data, a format that has been developed to encode information in a machine-readable way. We investigate the foundations of SPARQL query optimization and (a) provide novel complexity results for the SPARQL evaluation problem, showing that the main source of complexity is operator OPTIONAL alone; (b) propose a comprehensive set of algebraic query rewriting rules; (c) present a framework for constraint-based SPARQL optimization based upon the well-known chase procedure for Conjunctive Query minimization. In this line, we develop two novel termination conditions for the chase. They subsume the strongest conditions known so far and do not increase the complexity of the recognition problem, thus making a larger class of both Conjunctive and SPARQL queries amenable to constraint-based optimization. Our results are of immediate practical interest and might empower any SPARQL query optimizer.<|reference_end|>
arxiv
@article{schmidt2008foundations, title={Foundations of SPARQL Query Optimization}, author={Michael Schmidt, Michael Meier, Georg Lausen}, journal={arXiv preprint arXiv:0812.3788}, year={2008}, archivePrefix={arXiv}, eprint={0812.3788}, primaryClass={cs.DB} }
schmidt2008foundations
arxiv-5811
0812.3836
Bootstrapping Inductive and Coinductive Types in HasCASL
<|reference_start|>Bootstrapping Inductive and Coinductive Types in HasCASL: We discuss the treatment of initial datatypes and final process types in the wide-spectrum language HasCASL. In particular, we present specifications that illustrate how datatypes and process types arise as bootstrapped concepts using HasCASL's type class mechanism, and we describe constructions of types of finite and infinite trees that establish the conservativity of datatype and process type declarations adhering to certain reasonable formats. The latter amounts to modifying known constructions from HOL to avoid unique choice; in categorical terminology, this means that we establish that quasitoposes with an internal natural numbers object support initial algebras and final coalgebras for a range of polynomial functors, thereby partially generalising corresponding results from topos theory. Moreover, we present similar constructions in categories of internal complete partial orders in quasitoposes.<|reference_end|>
arxiv
@article{schröder2008bootstrapping, title={Bootstrapping Inductive and Coinductive Types in HasCASL}, author={Lutz Schr"oder}, journal={Logical Methods in Computer Science, Volume 4, Issue 4 (December 25, 2008) lmcs:1166}, year={2008}, doi={10.2168/LMCS-4(4:17)2008}, archivePrefix={arXiv}, eprint={0812.3836}, primaryClass={cs.LO cs.SE} }
schröder2008bootstrapping
arxiv-5812
0812.3871
Decting Errors in Reversible Circuits With Invariant Relationships
<|reference_start|>Decting Errors in Reversible Circuits With Invariant Relationships: Reversible logic is experience renewed interest as we are approach the limits of CMOS technologies. While physical implementations of reversible gates have yet to materialize, it is safe to assume that they will rely on faulty individual components. In this work we present a present a method to provide fault tolerance to a reversible circuit based on invariant relationships.<|reference_end|>
arxiv
@article{alves2008decting, title={Decting Errors in Reversible Circuits With Invariant Relationships}, author={Nuno Alves}, journal={arXiv preprint arXiv:0812.3871}, year={2008}, archivePrefix={arXiv}, eprint={0812.3871}, primaryClass={cs.AR} }
alves2008decting
arxiv-5813
0812.3873
The K-Receiver Broadcast Channel with Confidential Messages
<|reference_start|>The K-Receiver Broadcast Channel with Confidential Messages: The secrecy capacity region for the K-receiver degraded broadcast channel (BC) is given for confidential messages sent to the receivers and to be kept secret from an external wiretapper. Superposition coding and Wyner's random code partitioning are used to show the achievable rate tuples. Error probability analysis and equivocation calculation are also provided. In the converse proof, a new definition for the auxiliary random variables is used, which is different from either the case of the 2-receiver BC without common message or the K-receiver BC with common message, both with an external wiretapper; or the K-receiver BC without a wiretapper.<|reference_end|>
arxiv
@article{choo2008the, title={The K-Receiver Broadcast Channel with Confidential Messages}, author={Li-Chia Choo, Kai-Kit Wong}, journal={arXiv preprint arXiv:0812.3873}, year={2008}, archivePrefix={arXiv}, eprint={0812.3873}, primaryClass={cs.IT math.IT} }
choo2008the
arxiv-5814
0812.3890
Optimal Relay-Subset Selection and Time-Allocation in Decode-and-Forward Cooperative Networks
<|reference_start|>Optimal Relay-Subset Selection and Time-Allocation in Decode-and-Forward Cooperative Networks: We present the optimal relay-subset selection and transmission-time for a decode-and-forward, half-duplex cooperative network of arbitrary size. The resource allocation is obtained by maximizing over the rates obtained for each possible subset of active relays, and the unique time allocation for each set can be obtained by solving a linear system of equations. We also present a simple recursive algorithm for the optimization problem which reduces the computational load of finding the required matrix inverses, and reduces the number of required iterations. Our results, in terms of outage rate, confirm the benefit of adding potential relays to a small network and the diminishing marginal returns for a larger network. We also show that optimizing over the channel resources ensures that more relays are active over a larger SNR range, and that linear network constellations significantly outperform grid constellations. Through simulations, the optimization is shown to be robust to node numbering.<|reference_end|>
arxiv
@article{beres2008optimal, title={Optimal Relay-Subset Selection and Time-Allocation in Decode-and-Forward Cooperative Networks}, author={Elzbieta Beres and Raviraj Adve}, journal={arXiv preprint arXiv:0812.3890}, year={2008}, archivePrefix={arXiv}, eprint={0812.3890}, primaryClass={cs.IT math.IT} }
beres2008optimal
arxiv-5815
0812.3893
Succinct Greedy Geometric Routing in the Euclidean Plane
<|reference_start|>Succinct Greedy Geometric Routing in the Euclidean Plane: In greedy geometric routing, messages are passed in a network embedded in a metric space according to the greedy strategy of always forwarding messages to nodes that are closer to the destination. We show that greedy geometric routing schemes exist for the Euclidean metric in R^2, for 3-connected planar graphs, with coordinates that can be represented succinctly, that is, with O(log n) bits, where n is the number of vertices in the graph. Moreover, our embedding strategy introduces a coordinate system for R^2 that supports distance comparisons using our succinct coordinates. Thus, our scheme can be used to significantly reduce bandwidth, space, and header size over other recently discovered greedy geometric routing implementations for R^2.<|reference_end|>
arxiv
@article{goodrich2008succinct, title={Succinct Greedy Geometric Routing in the Euclidean Plane}, author={Michael T. Goodrich and Darren Strash}, journal={arXiv preprint arXiv:0812.3893}, year={2008}, archivePrefix={arXiv}, eprint={0812.3893}, primaryClass={cs.CG} }
goodrich2008succinct
arxiv-5816
0812.3933
Pancake Flipping with Two Spatulas
<|reference_start|>Pancake Flipping with Two Spatulas: In this paper we study several variations of the \emph{pancake flipping problem}, which is also well known as the problem of \emph{sorting by prefix reversals}. We consider the variations in the sorting process by adding with prefix reversals other similar operations such as prefix transpositions and prefix transreversals. These type of sorting problems have applications in interconnection networks and computational biology. We first study the problem of sorting unsigned permutations by prefix reversals and prefix transpositions and present a 3-approximation algorithm for this problem. Then we give a 2-approximation algorithm for sorting by prefix reversals and prefix transreversals. We also provide a 3-approximation algorithm for sorting by prefix reversals and prefix transpositions where the operations are always applied at the unsorted suffix of the permutation. We further analyze the problem in more practical way and show quantitatively how approximation ratios of our algorithms improve with the increase of number of prefix reversals applied by optimal algorithms. Finally, we present experimental results to support our analysis.<|reference_end|>
arxiv
@article{hasan2008pancake, title={Pancake Flipping with Two Spatulas}, author={Masud Hasan, Atif Rahman, M. Sohel Rahman, Mahfuza Sharmin, and Rukhsana Yeasmin}, journal={arXiv preprint arXiv:0812.3933}, year={2008}, archivePrefix={arXiv}, eprint={0812.3933}, primaryClass={cs.DS cs.OH} }
hasan2008pancake
arxiv-5817
0812.3946
Comparing RNA structures using a full set of biologically relevant edit operations is intractable
<|reference_start|>Comparing RNA structures using a full set of biologically relevant edit operations is intractable: Arc-annotated sequences are useful for representing structural information of RNAs and have been extensively used for comparing RNA structures in both terms of sequence and structural similarities. Among the many paradigms referring to arc-annotated sequences and RNA structures comparison (see \cite{IGMA_BliDenDul08} for more details), the most important one is the general edit distance. The problem of computing an edit distance between two non-crossing arc-annotated sequences was introduced in \cite{Evans99}. The introduced model uses edit operations that involve either single letters or pairs of letters (never considered separately) and is solvable in polynomial-time \cite{ZhangShasha:1989}. To account for other possible RNA structural evolutionary events, new edit operations, allowing to consider either silmutaneously or separately letters of a pair were introduced in \cite{jiangli}; unfortunately at the cost of computational tractability. It has been proved that comparing two RNA secondary structures using a full set of biologically relevant edit operations is {\sf\bf NP}-complete. Nevertheless, in \cite{DBLP:conf/spire/GuignonCH05}, the authors have used a strong combinatorial restriction in order to compare two RNA stem-loops with a full set of biologically relevant edit operations; which have allowed them to design a polynomial-time and space algorithm for comparing general secondary RNA structures. In this paper we will prove theoretically that comparing two RNA structures using a full set of biologically relevant edit operations cannot be done without strong combinatorial restrictions.<|reference_end|>
arxiv
@article{blin2008comparing, title={Comparing RNA structures using a full set of biologically relevant edit operations is intractable}, author={Guillaume Blin (IGM), Sylvie Hamel (DIRO), St'ephane Vialette (IGM)}, journal={arXiv preprint arXiv:0812.3946}, year={2008}, archivePrefix={arXiv}, eprint={0812.3946}, primaryClass={cs.DS q-bio.QM} }
blin2008comparing
arxiv-5818
0812.4009
Graph Field Automata
<|reference_start|>Graph Field Automata: The Graph Automata have been the paradigm in the expression of utilizing Graphs as a language. Matrix Graph grammars \cite{Pedro} are an algebratization of graph rewriting systems. Here we present the dual of this formalizm which some extensions which we term Graph Field Automata The advantage to this approach is a framework for expressing machines that can use Matrix Graph Grammars.<|reference_end|>
arxiv
@article{herman2008graph, title={Graph Field Automata}, author={Joshua Herman, Keith David Pedersen}, journal={arXiv preprint arXiv:0812.4009}, year={2008}, archivePrefix={arXiv}, eprint={0812.4009}, primaryClass={cs.CC} }
herman2008graph
arxiv-5819
0812.4012
De Bruijn Graph Homomorphisms and Recursive De Bruijn Sequences
<|reference_start|>De Bruijn Graph Homomorphisms and Recursive De Bruijn Sequences: This paper presents a method to find new De Bruijn cycles based on ones of lesser order. This is done by mapping a De Bruijn cycle to several vertex disjoint cycles in a De Bruijn digraph of higher order and connecting these cycles into one full cycle. We characterize homomorphisms between De Bruijn digraphs of different orders that allow this construction. These maps generalize the well-known D-morphism of Lempel between De Bruijn digraphs of consecutive orders. Also, an efficient recursive algorithm that yields an exponential number of nonbinary De Bruijn cycles is implemented.<|reference_end|>
arxiv
@article{alhakim2008de, title={De Bruijn Graph Homomorphisms and Recursive De Bruijn Sequences}, author={Abbas Alhakim and Mufutau Akinwande}, journal={arXiv preprint arXiv:0812.4012}, year={2008}, archivePrefix={arXiv}, eprint={0812.4012}, primaryClass={math.CO cs.IT math.IT} }
alhakim2008de
arxiv-5820
0812.4044
The Offset Tree for Learning with Partial Labels
<|reference_start|>The Offset Tree for Learning with Partial Labels: We present an algorithm, called the Offset Tree, for learning to make decisions in situations where the payoff of only one choice is observed, rather than all choices. The algorithm reduces this setting to binary classification, allowing one to reuse of any existing, fully supervised binary classification algorithm in this partial information setting. We show that the Offset Tree is an optimal reduction to binary classification. In particular, it has regret at most $(k-1)$ times the regret of the binary classifier it uses (where $k$ is the number of choices), and no reduction to binary classification can do better. This reduction is also computationally optimal, both at training and test time, requiring just $O(\log_2 k)$ work to train on an example or make a prediction. Experiments with the Offset Tree show that it generally performs better than several alternative approaches.<|reference_end|>
arxiv
@article{beygelzimer2008the, title={The Offset Tree for Learning with Partial Labels}, author={Alina Beygelzimer and John Langford}, journal={arXiv preprint arXiv:0812.4044}, year={2008}, archivePrefix={arXiv}, eprint={0812.4044}, primaryClass={cs.LG cs.AI} }
beygelzimer2008the
arxiv-5821
0812.4073
Multi-level algorithms for modularity clustering
<|reference_start|>Multi-level algorithms for modularity clustering: Modularity is one of the most widely used quality measures for graph clusterings. Maximizing modularity is NP-hard, and the runtime of exact algorithms is prohibitive for large graphs. A simple and effective class of heuristics coarsens the graph by iteratively merging clusters (starting from singletons), and optionally refines the resulting clustering by iteratively moving individual vertices between clusters. Several heuristics of this type have been proposed in the literature, but little is known about their relative performance. This paper experimentally compares existing and new coarsening- and refinement-based heuristics with respect to their effectiveness (achieved modularity) and efficiency (runtime). Concerning coarsening, it turns out that the most widely used criterion for merging clusters (modularity increase) is outperformed by other simple criteria, and that a recent algorithm by Schuetz and Caflisch is no improvement over simple greedy coarsening for these criteria. Concerning refinement, a new multi-level algorithm is shown to produce significantly better clusterings than conventional single-level algorithms. A comparison with published benchmark results and algorithm implementations shows that combinations of coarsening and multi-level refinement are competitive with the best algorithms in the literature.<|reference_end|>
arxiv
@article{noack2008multi-level, title={Multi-level algorithms for modularity clustering}, author={Andreas Noack, Randolf Rotta}, journal={Proceedings of the 8th International Symposium on Experimental Algorithms (SEA 2009). Lecture Notes in Computer Science 5526, Springer (2009) 257-268}, year={2008}, archivePrefix={arXiv}, eprint={0812.4073}, primaryClass={cs.DS cond-mat.stat-mech cs.DM physics.soc-ph} }
noack2008multi-level
arxiv-5822
0812.4170
Finding Still Lifes with Memetic/Exact Hybrid Algorithms
<|reference_start|>Finding Still Lifes with Memetic/Exact Hybrid Algorithms: The maximum density still life problem (MDSLP) is a hard constraint optimization problem based on Conway's game of life. It is a prime example of weighted constrained optimization problem that has been recently tackled in the constraint-programming community. Bucket elimination (BE) is a complete technique commonly used to solve this kind of constraint satisfaction problem. When the memory required to apply BE is too high, a heuristic method based on it (denominated mini-buckets) can be used to calculate bounds for the optimal solution. Nevertheless, the curse of dimensionality makes these techniques unpractical for large size problems. In response to this situation, we present a memetic algorithm for the MDSLP in which BE is used as a mechanism for recombining solutions, providing the best possible child from the parental set. Subsequently, a multi-level model in which this exact/metaheuristic hybrid is further hybridized with branch-and-bound techniques and mini-buckets is studied. Extensive experimental results analyze the performance of these models and multi-parent recombination. The resulting algorithm consistently finds optimal patterns for up to date solved instances in less time than current approaches. Moreover, it is shown that this proposal provides new best known solutions for very large instances.<|reference_end|>
arxiv
@article{gallardo2008finding, title={Finding Still Lifes with Memetic/Exact Hybrid Algorithms}, author={Jose E. Gallardo, Carlos Cotta, Antonio J. Fernandez}, journal={arXiv preprint arXiv:0812.4170}, year={2008}, archivePrefix={arXiv}, eprint={0812.4170}, primaryClass={cs.NE cs.AI} }
gallardo2008finding
arxiv-5823
0812.4171
The Complexity of Weighted Boolean #CSP with Mixed Signs
<|reference_start|>The Complexity of Weighted Boolean #CSP with Mixed Signs: We give a complexity dichotomy for the problem of computing the partition function of a weighted Boolean constraint satisfaction problem. Such a problem is parameterized by a set of rational-valued functions, which generalize constraints. Each function assigns a weight to every assignment to a set of Boolean variables. Our dichotomy extends previous work in which the weight functions were restricted to being non-negative. We represent a weight function as a product of the form (-1)^s g, where the polynomial s determines the sign of the weight and the non-negative function g determines its magnitude. We show that the problem of computing the partition function (the sum of the weights of all possible variable assignments) is in polynomial time if either every weight function can be defined by a "pure affine" magnitude with a quadratic sign polynomial or every function can be defined by a magnitude of "product type" with a linear sign polynomial. In all other cases, computing the partition function is FP^#P-complete.<|reference_end|>
arxiv
@article{bulatov2008the, title={The Complexity of Weighted Boolean #CSP with Mixed Signs}, author={Andrei Bulatov, Martin Dyer, Leslie Ann Goldberg, Markus Jalsenius and David Richerby}, journal={arXiv preprint arXiv:0812.4171}, year={2008}, archivePrefix={arXiv}, eprint={0812.4171}, primaryClass={cs.CC cs.DM} }
bulatov2008the
arxiv-5824
0812.4181
XML Rewriting Attacks: Existing Solutions and their Limitations
<|reference_start|>XML Rewriting Attacks: Existing Solutions and their Limitations: Web Services are web-based applications made available for web users or remote Web-based programs. In order to promote interoperability, they publish their interfaces in the so-called WSDL file and allow remote call over the network. Although Web Services can be used in different ways, the industry standard is the Service Oriented Architecture Web Services that doesn't rely on the implementation details. In this architecture, communication is performed through XML-based messages called SOAP messages. However, those messages are prone to attacks that can lead to code injection, unauthorized accesses, identity theft, etc. This type of attacks, called XML Rewriting Attacks, are all based on unauthorized, yet possible, modifications of SOAP messages. We present in this paper an explanation of this kind of attack, review the existing solutions, and show their limitations. We also propose some ideas to secure SOAP messages, as well as implementation ideas.<|reference_end|>
arxiv
@article{benameur2008xml, title={XML Rewriting Attacks: Existing Solutions and their Limitations}, author={Azzedine Benameur, Faisal Abdul Kadir, Serge Fenet}, journal={IADIS Applied Computing 2008}, year={2008}, archivePrefix={arXiv}, eprint={0812.4181}, primaryClass={cs.CR cs.SE} }
benameur2008xml
arxiv-5825
0812.4206
How Many Attackers Can Selfish Defenders Catch?
<|reference_start|>How Many Attackers Can Selfish Defenders Catch?: In a distributed system with {\it attacks} and {\it defenses,} both {\it attackers} and {\it defenders} are self-interested entities. We assume a {\it reward-sharing} scheme among {\it interdependent} defenders; each defender wishes to (locally) maximize her own total {\it fair share} to the attackers extinguished due to her involvement (and possibly due to those of others). What is the {\em maximum} amount of protection achievable by a number of such defenders against a number of attackers while the system is in a {\it Nash equilibrium}? As a measure of system protection, we adopt the {\it Defense-Ratio} \cite{MPPS05a}, which provides the expected (inverse) proportion of attackers caught by the defenders. In a {\it Defense-Optimal} Nash equilibrium, the Defense-Ratio is optimized. We discover that the possibility of optimizing the Defense-Ratio (in a Nash equilibrium) depends in a subtle way on how the number of defenders compares to two natural graph-theoretic thresholds we identify. In this vein, we obtain, through a combinatorial analysis of Nash equilibria, a collection of trade-off results: - When the number of defenders is either sufficiently small or sufficiently large, there are cases where the Defense-Ratio can be optimized. The optimization problem is computationally tractable for a large number of defenders; the problem becomes ${\cal NP}$-complete for a small number of defenders and the intractability is inherited from a previously unconsidered combinatorial problem in {\em Fractional Graph Theory}. - Perhaps paradoxically, there is a middle range of values for the number of defenders where optimizing the Defense-Ratio is never possible.<|reference_end|>
arxiv
@article{mavronicolas2008how, title={How Many Attackers Can Selfish Defenders Catch?}, author={Marios Mavronicolas, Burkhard Monien and Vicky Papadopoulou}, journal={arXiv preprint arXiv:0812.4206}, year={2008}, archivePrefix={arXiv}, eprint={0812.4206}, primaryClass={cs.GT} }
mavronicolas2008how
arxiv-5826
0812.4235
Client-server multi-task learning from distributed datasets
<|reference_start|>Client-server multi-task learning from distributed datasets: A client-server architecture to simultaneously solve multiple learning tasks from distributed datasets is described. In such architecture, each client is associated with an individual learning task and the associated dataset of examples. The goal of the architecture is to perform information fusion from multiple datasets while preserving privacy of individual data. The role of the server is to collect data in real-time from the clients and codify the information in a common database. The information coded in this database can be used by all the clients to solve their individual learning task, so that each client can exploit the informative content of all the datasets without actually having access to private data of others. The proposed algorithmic framework, based on regularization theory and kernel methods, uses a suitable class of mixed effect kernels. The new method is illustrated through a simulated music recommendation system.<|reference_end|>
arxiv
@article{dinuzzo2008client-server, title={Client-server multi-task learning from distributed datasets}, author={Francesco Dinuzzo, Gianluigi Pillonetto, Giuseppe De Nicolao}, journal={arXiv preprint arXiv:0812.4235}, year={2008}, doi={10.1109/TNN.2010.2095882}, archivePrefix={arXiv}, eprint={0812.4235}, primaryClass={cs.LG cs.AI} }
dinuzzo2008client-server
arxiv-5827
0812.4279
Correlated Equilibria in Continuous Games: Characterization and Computation
<|reference_start|>Correlated Equilibria in Continuous Games: Characterization and Computation: We present several new characterizations of correlated equilibria in games with continuous utility functions. These have the advantage of being more computationally and analytically tractable than the standard definition in terms of departure functions. We use these characterizations to construct effective algorithms for approximating a single correlated equilibrium or the entire set of correlated equilibria of a game with polynomial utility functions.<|reference_end|>
arxiv
@article{stein2008correlated, title={Correlated Equilibria in Continuous Games: Characterization and Computation}, author={Noah D. Stein, Pablo A. Parrilo, Asuman Ozdaglar}, journal={Games and Economic Behavior, Vol. 71, No. 2, March 2011, Pages 436-455}, year={2008}, doi={10.1016/j.geb.2010.04.004}, number={LIDS Technical Report 2805}, archivePrefix={arXiv}, eprint={0812.4279}, primaryClass={cs.GT} }
stein2008correlated
arxiv-5828
0812.4293
An Improved Approximation Algorithm for the Column Subset Selection Problem
<|reference_start|>An Improved Approximation Algorithm for the Column Subset Selection Problem: We consider the problem of selecting the best subset of exactly $k$ columns from an $m \times n$ matrix $A$. We present and analyze a novel two-stage algorithm that runs in $O(\min\{mn^2,m^2n\})$ time and returns as output an $m \times k$ matrix $C$ consisting of exactly $k$ columns of $A$. In the first (randomized) stage, the algorithm randomly selects $\Theta(k \log k)$ columns according to a judiciously-chosen probability distribution that depends on information in the top-$k$ right singular subspace of $A$. In the second (deterministic) stage, the algorithm applies a deterministic column-selection procedure to select and return exactly $k$ columns from the set of columns selected in the first stage. Let $C$ be the $m \times k$ matrix containing those $k$ columns, let $P_C$ denote the projection matrix onto the span of those columns, and let $A_k$ denote the best rank-$k$ approximation to the matrix $A$. Then, we prove that, with probability at least 0.8, $$ \FNorm{A - P_CA} \leq \Theta(k \log^{1/2} k) \FNorm{A-A_k}. $$ This Frobenius norm bound is only a factor of $\sqrt{k \log k}$ worse than the best previously existing existential result and is roughly $O(\sqrt{k!})$ better than the best previous algorithmic result for the Frobenius norm version of this Column Subset Selection Problem (CSSP). We also prove that, with probability at least 0.8, $$ \TNorm{A - P_CA} \leq \Theta(k \log^{1/2} k)\TNorm{A-A_k} + \Theta(k^{3/4}\log^{1/4}k)\FNorm{A-A_k}. $$ This spectral norm bound is not directly comparable to the best previously existing bounds for the spectral norm version of this CSSP. Our bound depends on $\FNorm{A-A_k}$, whereas previous results depend on $\sqrt{n-k}\TNorm{A-A_k}$; if these two quantities are comparable, then our bound is asymptotically worse by a $(k \log k)^{1/4}$ factor.<|reference_end|>
arxiv
@article{boutsidis2008an, title={An Improved Approximation Algorithm for the Column Subset Selection Problem}, author={Christos Boutsidis, Michael W. Mahoney, and Petros Drineas}, journal={arXiv preprint arXiv:0812.4293}, year={2008}, archivePrefix={arXiv}, eprint={0812.4293}, primaryClass={cs.DS} }
boutsidis2008an
arxiv-5829
0812.4296
Tsallis $q$-exponential describes the distribution of scientific citations - A new characterization of the impact
<|reference_start|>Tsallis $q$-exponential describes the distribution of scientific citations - A new characterization of the impact: In this work we have studied the research activity for countries of Europe, Latin America and Africa for all sciences between 1945 and November 2008. All the data are captured from the Web of Science database during this period. The analysis of the experimental data shows that, within a nonextensive thermostatistical formalism, the Tsallis \emph{q}-exponential distribution $N(c)$ satisfactorily describes Institute of Scientific Information citations. The data which are examined in the present survey can be fitted successfully as a first approach by applying a {\it single} curve (namely, $N(c) \propto 1/[1+(q-1) c/T]^{\frac{1}{q-1}}$ with $q\simeq 4/3$ for {\it all} the available citations $c$, $T$ being an "effective temperature". The present analysis ultimately suggests that the phenomenon might essentially be {\it one and the same} along the {\it entire} range of the citation number. Finally, this manuscript provides a new ranking index, via the "effective temperature" $T$, for the impact level of the research activity in these countries, taking into account the number of the publications and their citations.<|reference_end|>
arxiv
@article{anastasiadis2008tsallis, title={Tsallis $q$-exponential describes the distribution of scientific citations - A new characterization of the impact}, author={A.D. Anastasiadis, Marcelo P. de Albuquerque, Marcio P. de Albuquerque and Diogo B. Mussi}, journal={arXiv preprint arXiv:0812.4296}, year={2008}, archivePrefix={arXiv}, eprint={0812.4296}, primaryClass={cs.DL physics.data-an} }
anastasiadis2008tsallis
arxiv-5830
0812.4322
Solution of Peter Winkler's Pizza Problem
<|reference_start|>Solution of Peter Winkler's Pizza Problem: Bob cuts a pizza into slices of not necessarily equal size and shares it with Alice by alternately taking turns. One slice is taken in each turn. The first turn is Alice's. She may choose any of the slices. In all other turns only those slices can be chosen that have a neighbor slice already eaten. We prove a conjecture of Peter Winkler by showing that Alice has a strategy for obtaining 4/9 of the pizza. This is best possible, that is, there is a cutting and a strategy for Bob to get 5/9 of the pizza. We also give a characterization of Alice's best possible gain depending on the number of slices. For a given cutting of the pizza, we describe a linear time algorithm that computes Alice's strategy gaining at least 4/9 of the pizza and another algorithm that computes the optimal strategy for both players in any possible position of the game in quadratic time. We distinguish two types of turns, shifts and jumps. We prove that Alice can gain 4/9, 7/16 and 1/3 of the pizza if she is allowed to make at most two jumps, at most one jump and no jump, respectively, and the three constants are the best possible.<|reference_end|>
arxiv
@article{cibulka2008solution, title={Solution of Peter Winkler's Pizza Problem}, author={Josef Cibulka, Jan Kynv{c}l, Viola M'esz'aros, Rudolf Stolav{r} and Pavel Valtr}, journal={In: Fete of Combinatorics and Computer Science, Bolyai Society Mathematical Studies, vol. 20, pp. 63-93, Springer, 2010}, year={2008}, doi={10.1007/978-3-642-13580-4_4}, archivePrefix={arXiv}, eprint={0812.4322}, primaryClass={cs.DM} }
cibulka2008solution
arxiv-5831
0812.4329
Some sufficient conditions on Hamiltonian digraph
<|reference_start|>Some sufficient conditions on Hamiltonian digraph: Z-mapping graph is a balanced bipartite graph $G$ of a digraph $D$ by split each vertex of $D$ into a pair of vertices of $G$. Based on the property of the $G$, it is proved that if $D$ is strong connected and $G$ is Hamiltonian, then $D$ is Hamiltonian. It is also proved if $D$ is Hamiltonian, then $G$ contains at least a perfect matching. Thus some existence sufficient conditions for Hamiltonian digraph and Hamiltonian graph are proved to be equivalent, and two sufficient conditions of disjoint Hamiltonian digraph are given in this paper.<|reference_end|>
arxiv
@article{zhu2008some, title={Some sufficient conditions on Hamiltonian digraph}, author={Guohun Zhu}, journal={arXiv preprint arXiv:0812.4329}, year={2008}, archivePrefix={arXiv}, eprint={0812.4329}, primaryClass={cs.DM} }
zhu2008some
arxiv-5832
0812.4332
Content-based and Algorithmic Classifications of Journals: Perspectives on the Dynamics of Scientific Communication and Indexer Effects
<|reference_start|>Content-based and Algorithmic Classifications of Journals: Perspectives on the Dynamics of Scientific Communication and Indexer Effects: The aggregated journal-journal citation matrix -based on the Journal Citation Reports (JCR) of the Science Citation Index- can be decomposed by indexers and/or algorithmically. In this study, we test the results of two recently available algorithms for the decomposition of large matrices against two content-based classifications of journals: the ISI Subject Categories and the field/subfield classification of Glaenzel & Schubert (2003). The content-based schemes allow for the attribution of more than a single category to a journal, whereas the algorithms maximize the ratio of within-category citations over between-category citations in the aggregated category-category citation matrix. By adding categories, indexers generate between-category citations, which may enrich the database, for example, in the case of inter-disciplinary developments. The consequent indexer effects are significant in sparse areas of the matrix more than in denser ones. Algorithmic decompositions, on the other hand, are more heavily skewed towards a relatively small number of categories, while this is deliberately counter-acted upon in the case of content-based classifications. Because of the indexer effects, science policy studies and the sociology of science should be careful when using content-based classifications, which are made for bibliographic disclosure, and not for the purpose of analyzing latent structures in scientific communications. Despite the large differences among them, the four classification schemes enable us to generate surprisingly similar maps of science at the global level. Erroneous classifications are cancelled as noise at the aggregate level, but may disturb the evaluation locally.<|reference_end|>
arxiv
@article{rafols2008content-based, title={Content-based and Algorithmic Classifications of Journals: Perspectives on the Dynamics of Scientific Communication and Indexer Effects}, author={Ismael Rafols and Loet Leydesdorff}, journal={arXiv preprint arXiv:0812.4332}, year={2008}, archivePrefix={arXiv}, eprint={0812.4332}, primaryClass={physics.data-an cs.DL cs.IR physics.soc-ph} }
rafols2008content-based
arxiv-5833
0812.4334
Multi-User SISO Precoding based on Generalized Multi-Unitary Decomposition for Single-carrier Transmission in Frequency Selective Channel
<|reference_start|>Multi-User SISO Precoding based on Generalized Multi-Unitary Decomposition for Single-carrier Transmission in Frequency Selective Channel: In this paper, we propose to exploit the richly scattered multi-path nature of a frequency selective channel to provide additional degrees of freedom for desigining effective precoding schemes for multi-user communications. We design the precoding matrix for multi-user communications based on the Generalized Multi-Unitary Decomposition (GMUD), where the channel matrix H is transformed into P_i*R_r*Q_i^H. An advantage of GMUD is that multiple pairs of unitary matrices P_i and Q_i can be obtained with one single R_r. Since the column of Q_i can be used as the transmission beam of a particular user, multiple solutions of Q_i provide a large selection of transmission beams, which can be exploited to achieve high degrees of orthogonality between the multipaths, as well as between the interfering users. Hence the proposed precoding technique based on GMUD achieves better performance than precoding based on singular value decomposition.<|reference_end|>
arxiv
@article{chua2008multi-user, title={Multi-User SISO Precoding based on Generalized Multi-Unitary Decomposition for Single-carrier Transmission in Frequency Selective Channel}, author={Wee Seng Chua, Chau Yuen, Yong Liang Guan, Francois Chin}, journal={arXiv preprint arXiv:0812.4334}, year={2008}, archivePrefix={arXiv}, eprint={0812.4334}, primaryClass={cs.IT math.IT} }
chua2008multi-user
arxiv-5834
0812.4346
The Plane-Width of Graphs
<|reference_start|>The Plane-Width of Graphs: Map vertices of a graph to (not necessarily distinct) points of the plane so that two adjacent vertices are mapped at least a unit distance apart. The plane-width of a graph is the minimum diameter of the image of the vertex set over all such mappings. We establish a relation between the plane-width of a graph and its chromatic number, and connect it to other well-known areas, including the circular chromatic number and the problem of packing unit discs in the plane. We also investigate how plane-width behaves under various operations, such as homomorphism, disjoint union, complement, and the Cartesian product.<|reference_end|>
arxiv
@article{kaminski2008the, title={The Plane-Width of Graphs}, author={Marcin Kaminski, Paul Medvedev, Martin Milanic}, journal={Journal of Graph Theory 68 (2011) 229-245}, year={2008}, doi={10.1002/jgt.20554}, archivePrefix={arXiv}, eprint={0812.4346}, primaryClass={cs.DM} }
kaminski2008the
arxiv-5835
0812.4360
Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes
<|reference_start|>Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes: I argue that data becomes temporarily interesting by itself to some self-improving, but computationally limited, subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively simpler and more beautiful. Curiosity is the desire to create or discover more non-random, non-arbitrary, regular data that is novel and surprising not in the traditional sense of Boltzmann and Shannon but in the sense that it allows for compression progress because its regularity was not yet known. This drive maximizes interestingness, the first derivative of subjective beauty or compressibility, that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and (since 1990) artificial systems.<|reference_end|>
arxiv
@article{schmidhuber2008driven, title={Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes}, author={Juergen Schmidhuber}, journal={Short version: J. Schmidhuber. Simple Algorithmic Theory of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes. Journal of SICE 48(1), 21-32, 2009}, year={2008}, archivePrefix={arXiv}, eprint={0812.4360}, primaryClass={cs.AI cs.NE} }
schmidhuber2008driven
arxiv-5836
0812.4367
On Some Classes of Functions and Hypercubes
<|reference_start|>On Some Classes of Functions and Hypercubes: In this paper, some classes of discrete functions of $k$-valued logic are considered, that depend on sets of their variables in a particular way. Obtained results allow to "construct" these functions and to present them in their tabular, analytical or matrix form, that is, as hypercubes, and in particular Latin hypercubes. Results connected with identifying of variables of some classes of functions are obtained.<|reference_end|>
arxiv
@article{kovachev2008on, title={On Some Classes of Functions and Hypercubes}, author={Dimiter Stoichkov Kovachev}, journal={arXiv preprint arXiv:0812.4367}, year={2008}, archivePrefix={arXiv}, eprint={0812.4367}, primaryClass={cs.DM cs.CC} }
kovachev2008on
arxiv-5837
0812.4442
An $O(k^3 log n)$-Approximation Algorithm for Vertex-Connectivity Survivable Network Design
<|reference_start|>An $O(k^3 log n)$-Approximation Algorithm for Vertex-Connectivity Survivable Network Design: In the Survivable Network Design problem (SNDP), we are given an undirected graph $G(V,E)$ with costs on edges, along with a connectivity requirement $r(u,v)$ for each pair $u,v$ of vertices. The goal is to find a minimum-cost subset $E^*$ of edges, that satisfies the given set of pairwise connectivity requirements. In the edge-connectivity version we need to ensure that there are $r(u,v)$ edge-disjoint paths for every pair $u, v$ of vertices, while in the vertex-connectivity version the paths are required to be vertex-disjoint. The edge-connectivity version of SNDP is known to have a 2-approximation. However, no non-trivial approximation algorithm has been known so far for the vertex version of SNDP, except for special cases of the problem. We present an extremely simple algorithm to achieve an $O(k^3 \log n)$-approximation for this problem, where $k$ denotes the maximum connectivity requirement, and $n$ denotes the number of vertices. We also give a simple proof of the recently discovered $O(k^2 \log n)$-approximation result for the single-source version of vertex-connectivity SNDP. We note that in both cases, our analysis in fact yields slightly better guarantees in that the $\log n$ term in the approximation guarantee can be replaced with a $\log \tau$ term where $\tau$ denotes the number of distinct vertices that participate in one or more pairs with a positive connectivity requirement.<|reference_end|>
arxiv
@article{chuzhoy2008an, title={An $O(k^{3} log n)$-Approximation Algorithm for Vertex-Connectivity Survivable Network Design}, author={Julia Chuzhoy and Sanjeev Khanna}, journal={arXiv preprint arXiv:0812.4442}, year={2008}, archivePrefix={arXiv}, eprint={0812.4442}, primaryClass={cs.DS} }
chuzhoy2008an
arxiv-5838
0812.4446
The Latent Relation Mapping Engine: Algorithm and Experiments
<|reference_start|>The Latent Relation Mapping Engine: Algorithm and Experiments: Many AI researchers and cognitive scientists have argued that analogy is the core of cognition. The most influential work on computational modeling of analogy-making is Structure Mapping Theory (SMT) and its implementation in the Structure Mapping Engine (SME). A limitation of SME is the requirement for complex hand-coded representations. We introduce the Latent Relation Mapping Engine (LRME), which combines ideas from SME and Latent Relational Analysis (LRA) in order to remove the requirement for hand-coded representations. LRME builds analogical mappings between lists of words, using a large corpus of raw text to automatically discover the semantic relations among the words. We evaluate LRME on a set of twenty analogical mapping problems, ten based on scientific analogies and ten based on common metaphors. LRME achieves human-level performance on the twenty problems. We compare LRME with a variety of alternative approaches and find that they are not able to reach the same level of performance.<|reference_end|>
arxiv
@article{turney2008the, title={The Latent Relation Mapping Engine: Algorithm and Experiments}, author={Peter D. Turney (National Research Council of Canada)}, journal={Journal of Artificial Intelligence Research, (2008), 33, 615-655}, year={2008}, doi={10.1613/jair.2693}, number={NRC-50738}, archivePrefix={arXiv}, eprint={0812.4446}, primaryClass={cs.CL cs.AI cs.LG} }
turney2008the
arxiv-5839
0812.4460
Emergence of Spontaneous Order Through Neighborhood Formation in Peer-to-Peer Recommender Systems
<|reference_start|>Emergence of Spontaneous Order Through Neighborhood Formation in Peer-to-Peer Recommender Systems: The advent of the Semantic Web necessitates paradigm shifts away from centralized client/server architectures towards decentralization and peer-to-peer computation, making the existence of central authorities superfluous and even impossible. At the same time, recommender systems are gaining considerable impact in e-commerce, providing people with recommendations that are personalized and tailored to their very needs. These recommender systems have traditionally been deployed with stark centralized scenarios in mind, operating in closed communities detached from their host network's outer perimeter. We aim at marrying these two worlds, i.e., decentralized peer-to-peer computing and recommender systems, in one agent-based framework. Our architecture features an epidemic-style protocol maintaining neighborhoods of like-minded peers in a robust, selforganizing fashion. In order to demonstrate our architecture's ability to retain scalability, robustness and to allow for convergence towards high-quality recommendations, we conduct offline experiments on top of the popular MovieLens dataset.<|reference_end|>
arxiv
@article{diaz-aviles2008emergence, title={Emergence of Spontaneous Order Through Neighborhood Formation in Peer-to-Peer Recommender Systems}, author={Ernesto Diaz-Aviles, Lars Schmidt-Thieme and Cai-Nicolas Ziegler}, journal={WWW '05 International Workshop on Innovations in Web Infrastructure (IWI '05) May 10, 2005, Chiba, Japan}, year={2008}, archivePrefix={arXiv}, eprint={0812.4460}, primaryClass={cs.AI cs.IR cs.MA} }
diaz-aviles2008emergence
arxiv-5840
0812.4461
Mining User Profiles to Support Structure and Explanation in Open Social Networking
<|reference_start|>Mining User Profiles to Support Structure and Explanation in Open Social Networking: The proliferation of media sharing and social networking websites has brought with it vast collections of site-specific user generated content. The result is a Social Networking Divide in which the concepts and structure common across different sites are hidden. The knowledge and structures from one social site are not adequately exploited to provide new information and resources to the same or different users in comparable social sites. For music bloggers, this latent structure, forces bloggers to select sub-optimal blogrolls. However, by integrating the social activities of music bloggers and listeners, we are able to overcome this limitation: improving the quality of the blogroll neighborhoods, in terms of similarity, by 85 percent when using tracks and by 120 percent when integrating tags from another site.<|reference_end|>
arxiv
@article{stewart2008mining, title={Mining User Profiles to Support Structure and Explanation in Open Social Networking}, author={Avare Stewart, Ernesto Diaz-Aviles, and Wolfgang Nejdl}, journal={In Proceedings of the International Workshop on Interacting with Multimedia Content in the Social Semantic Web (IMC-SSW'08), pages 21-30. Koblenz, Germany, Dec. 3, 2008}, year={2008}, archivePrefix={arXiv}, eprint={0812.4461}, primaryClass={cs.IR} }
stewart2008mining
arxiv-5841
0812.4470
There are k-uniform cubefree binary morphisms for all k >= 0
<|reference_start|>There are k-uniform cubefree binary morphisms for all k >= 0: A word is cubefree if it contains no non-empty subword of the form xxx. A morphism h : Sigma^* -> Sigma^* is k-uniform if h(a) has length k for all a in Sigma. A morphism is cubefree if it maps cubefree words to cubefree words. We show that for all k >= 0 there exists a k-uniform cubefree binary morphism.<|reference_end|>
arxiv
@article{currie2008there, title={There are k-uniform cubefree binary morphisms for all k >= 0}, author={James Currie and Narad Rampersad}, journal={arXiv preprint arXiv:0812.4470}, year={2008}, archivePrefix={arXiv}, eprint={0812.4470}, primaryClass={math.CO cs.FL} }
currie2008there
arxiv-5842
0812.4471
Diversity-Multiplexing Tradeoff of Network Coding with Bidirectional Random Relaying
<|reference_start|>Diversity-Multiplexing Tradeoff of Network Coding with Bidirectional Random Relaying: This paper develops a diversity-multiplexing tradeoff (DMT) over a bidirectional random relay set in a wireless network where the distribution of all nodes is a stationary Poisson point process. This is a nontrivial extension of the DMT because it requires consideration of the cooperation (or lack thereof) of relay nodes, the traffic pattern and the time allocation between the forward and reverse traffic directions. We then use this tradeoff to compare the DMTs of traditional time-division multihop (TDMH) and network coding (NC). Our main results are the derivations of the DMT for both TDMH and NC. This shows, surprisingly, that if relay nodes collaborate NC does not always have a better DMT than TDMH since it is difficult to simultaneously achieve bidirectional transmit diversity for both source nodes. In fact, for certain traffic patterns NC can have a worse DMT due to suboptimal time allocation between the forward and reverse transmission directions.<|reference_end|>
arxiv
@article{liu2008diversity-multiplexing, title={Diversity-Multiplexing Tradeoff of Network Coding with Bidirectional Random Relaying}, author={Chun-Hung Liu and Jeffery G. Andrews}, journal={arXiv preprint arXiv:0812.4471}, year={2008}, archivePrefix={arXiv}, eprint={0812.4471}, primaryClass={cs.IT math.IT} }
liu2008diversity-multiplexing
arxiv-5843
0812.4485
A computationally-efficient construction for the matrix-based key distribution in sensor network
<|reference_start|>A computationally-efficient construction for the matrix-based key distribution in sensor network: This paper introduces a variant for the symmetric matrix-based key distribution in sensor network introduced by Du et al. Our slight modification shows that the usage of specific structures for the public matrix instead of fully random matrix with elements in $\mathbb{Z}_q$ can reduce the computation overhead for generating the public key information and the key itself. An intensive analysis followed by modified scheme demonstrates the value of our contribution in relation with the current work and show the equivalence of the security<|reference_end|>
arxiv
@article{mohaisen2008a, title={A computationally-efficient construction for the matrix-based key distribution in sensor network}, author={Abedelaziz Mohaisen}, journal={arXiv preprint arXiv:0812.4485}, year={2008}, archivePrefix={arXiv}, eprint={0812.4485}, primaryClass={cs.CR} }
mohaisen2008a
arxiv-5844
0812.4487
New Sequences Design from Weil Representation with Low Two-Dimensional Correlation in Both Time and Phase Shifts
<|reference_start|>New Sequences Design from Weil Representation with Low Two-Dimensional Correlation in Both Time and Phase Shifts: For a given prime $p$, a new construction of families of the complex valued sequences of period $p$ with efficient implementation is given by applying both multiplicative characters and additive characters of finite field $\mathbb{F}_p$. Such a signal set consists of $p^2(p-2)$ time-shift distinct sequences, the magnitude of the two-dimensional autocorrelation function (i.e., the ambiguity function) in both time and phase of each sequence is upper bounded by $2\sqrt{p}$ at any shift not equal to $(0, 0)$, and the magnitude of the ambiguity function of any pair of phase-shift distinct sequences is upper bounded by $4\sqrt{p}$. Furthermore, the magnitude of their Fourier transform spectrum is less than or equal to 2. A proof is given through finding a simple elementary construction for the sequences constructed from the Weil representation by Gurevich, Hadani and Sochen. An open problem for directly establishing these assertions without involving the Weil representation is addressed.<|reference_end|>
arxiv
@article{wang2008new, title={New Sequences Design from Weil Representation with Low Two-Dimensional Correlation in Both Time and Phase Shifts}, author={Zilong Wang and Guang Gong}, journal={arXiv preprint arXiv:0812.4487}, year={2008}, archivePrefix={arXiv}, eprint={0812.4487}, primaryClass={cs.IT cs.DM math.IT math.RT} }
wang2008new
arxiv-5845
0812.4514
Quantum generalized Reed-Solomon codes: Unified framework for quantum MDS codes
<|reference_start|>Quantum generalized Reed-Solomon codes: Unified framework for quantum MDS codes: We construct a new family of quantum MDS codes from classical generalized Reed-Solomon codes and derive the necessary and sufficient condition under which these quantum codes exist. We also give code bounds and show how to construct them analytically. We find that existing quantum MDS codes can be unified under these codes in the sense that when a quantum MDS code exists, then a quantum code of this type with the same parameters also exists. Thus as far as is known at present, they are the most important family of quantum MDS codes.<|reference_end|>
arxiv
@article{li2008quantum, title={Quantum generalized Reed-Solomon codes: Unified framework for quantum MDS codes}, author={Zhuo Li, Li-Juan Xing, and Xin-Mei Wang}, journal={Phys. Rev. A, 2008, 77, 012308}, year={2008}, doi={10.1103/PhysRevA.77.012308}, archivePrefix={arXiv}, eprint={0812.4514}, primaryClass={quant-ph cs.IT math.IT} }
li2008quantum
arxiv-5846
0812.4523
System Theoretic Viewpoint on Modeling of Complex Systems: Design, Synthesis, Simulation, and Control
<|reference_start|>System Theoretic Viewpoint on Modeling of Complex Systems: Design, Synthesis, Simulation, and Control: We consider the basic features of complex dynamic and control systems, including systems having hierarchical structure. Special attention is paid to the problems of design and synthesis of complex systems and control models, and to the development of simulation techniques and systems. A model of complex system is proposed and briefly analyzed.<|reference_end|>
arxiv
@article{bagdasaryan2008system, title={System Theoretic Viewpoint on Modeling of Complex Systems: Design, Synthesis, Simulation, and Control}, author={Armen Bagdasaryan}, journal={arXiv preprint arXiv:0812.4523}, year={2008}, archivePrefix={arXiv}, eprint={0812.4523}, primaryClass={cs.CE} }
bagdasaryan2008system
arxiv-5847
0812.4542
Assessing scientific research performance and impact with single indices
<|reference_start|>Assessing scientific research performance and impact with single indices: We provide a comprehensive and critical review of the h-index and its most important modifications proposed in the literature, as well as of other similar indicators measuring research output and impact. Extensions of some of these indices are presented and illustrated.<|reference_end|>
arxiv
@article{panaretos2008assessing, title={Assessing scientific research performance and impact with single indices}, author={John Panaretos, Chrisovaladis Malesios}, journal={arXiv preprint arXiv:0812.4542}, year={2008}, archivePrefix={arXiv}, eprint={0812.4542}, primaryClass={cs.IR physics.soc-ph} }
panaretos2008assessing
arxiv-5848
0812.4547
Random Projections for the Nonnegative Least-Squares Problem
<|reference_start|>Random Projections for the Nonnegative Least-Squares Problem: Constrained least-squares regression problems, such as the Nonnegative Least Squares (NNLS) problem, where the variables are restricted to take only nonnegative values, often arise in applications. Motivated by the recent development of the fast Johnson-Lindestrauss transform, we present a fast random projection type approximation algorithm for the NNLS problem. Our algorithm employs a randomized Hadamard transform to construct a much smaller NNLS problem and solves this smaller problem using a standard NNLS solver. We prove that our approach finds a nonnegative solution vector that, with high probability, is close to the optimum nonnegative solution in a relative error approximation sense. We experimentally evaluate our approach on a large collection of term-document data and verify that it does offer considerable speedups without a significant loss in accuracy. Our analysis is based on a novel random projection type result that might be of independent interest. In particular, given a tall and thin matrix $\Phi \in \mathbb{R}^{n \times d}$ ($n \gg d$) and a vector $y \in \mathbb{R}^d$, we prove that the Euclidean length of $\Phi y$ can be estimated very accurately by the Euclidean length of $\tilde{\Phi}y$, where $\tilde{\Phi}$ consists of a small subset of (appropriately rescaled) rows of $\Phi$.<|reference_end|>
arxiv
@article{boutsidis2008random, title={Random Projections for the Nonnegative Least-Squares Problem}, author={Christos Boutsidis, Petros Drineas}, journal={arXiv preprint arXiv:0812.4547}, year={2008}, archivePrefix={arXiv}, eprint={0812.4547}, primaryClass={cs.DS} }
boutsidis2008random
arxiv-5849
0812.4580
Feature Markov Decision Processes
<|reference_start|>Feature Markov Decision Processes: General purpose intelligent learning agents cycle through (complex,non-MDP) sequences of observations, actions, and rewards. On the other hand, reinforcement learning is well-developed for small finite state Markov Decision Processes (MDPs). So far it is an art performed by human designers to extract the right state representation out of the bare observations, i.e. to reduce the agent setup to the MDP framework. Before we can think of mechanizing this search for suitable MDPs, we need a formal objective criterion. The main contribution of this article is to develop such a criterion. I also integrate the various parts into one learning algorithm. Extensions to more realistic dynamic Bayesian networks are developed in a companion article.<|reference_end|>
arxiv
@article{hutter2008feature, title={Feature Markov Decision Processes}, author={Marcus Hutter}, journal={Proc. 2nd Conf. on Artificial General Intelligence (AGI 2009) pages 61-66}, year={2008}, archivePrefix={arXiv}, eprint={0812.4580}, primaryClass={cs.AI cs.IT cs.LG math.IT} }
hutter2008feature
arxiv-5850
0812.4581
Feature Dynamic Bayesian Networks
<|reference_start|>Feature Dynamic Bayesian Networks: Feature Markov Decision Processes (PhiMDPs) are well-suited for learning agents in general environments. Nevertheless, unstructured (Phi)MDPs are limited to relatively simple environments. Structured MDPs like Dynamic Bayesian Networks (DBNs) are used for large-scale real-world problems. In this article I extend PhiMDP to PhiDBN. The primary contribution is to derive a cost criterion that allows to automatically extract the most relevant features from the environment, leading to the "best" DBN representation. I discuss all building blocks required for a complete general learning algorithm.<|reference_end|>
arxiv
@article{hutter2008feature, title={Feature Dynamic Bayesian Networks}, author={Marcus Hutter}, journal={Proc. 2nd Conf. on Artificial General Intelligence (AGI 2009) pages 67-73}, year={2008}, archivePrefix={arXiv}, eprint={0812.4581}, primaryClass={cs.AI cs.IT cs.LG math.IT} }
hutter2008feature
arxiv-5851
0812.4614
I, Quantum Robot: Quantum Mind control on a Quantum Computer
<|reference_start|>I, Quantum Robot: Quantum Mind control on a Quantum Computer: The logic which describes quantum robots is not orthodox quantum logic, but a deductive calculus which reproduces the quantum tasks (computational processes, and actions) taking into account quantum superposition and quantum entanglement. A way toward the realization of intelligent quantum robots is to adopt a quantum metalanguage to control quantum robots. A physical implementation of a quantum metalanguage might be the use of coherent states in brain signals.<|reference_end|>
arxiv
@article{zizzi2008i,, title={I, Quantum Robot: Quantum Mind control on a Quantum Computer}, author={Paola Zizzi}, journal={arXiv preprint arXiv:0812.4614}, year={2008}, archivePrefix={arXiv}, eprint={0812.4614}, primaryClass={quant-ph cs.AI cs.LO cs.RO} }
zizzi2008i,
arxiv-5852
0812.4627
Bayesian Compressive Sensing via Belief Propagation
<|reference_start|>Bayesian Compressive Sensing via Belief Propagation: Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, sub-Nyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can complement conventional CS methods based on linear programming or greedy algorithms. We perform approximate Bayesian inference using belief propagation (BP) decoding, which represents the CS encoding matrix as a graphical model. Fast computation is obtained by reducing the size of the graphical model with sparse encoding matrices. To decode a length-N signal containing K large coefficients, our CS-BP decoding algorithm uses O(Klog(N)) measurements and O(Nlog^2(N)) computation. Finally, although we focus on a two-state mixture Gaussian model, CS-BP is easily adapted to other signal models.<|reference_end|>
arxiv
@article{baron2008bayesian, title={Bayesian Compressive Sensing via Belief Propagation}, author={Dror Baron (Technion - Israel Institute of Technology), Shriram Sarvotham (Halliburton), and Richard G. Baraniuk (Rice University)}, journal={arXiv preprint arXiv:0812.4627}, year={2008}, archivePrefix={arXiv}, eprint={0812.4627}, primaryClass={cs.IT math.IT} }
baron2008bayesian
arxiv-5853
0812.4642
Error-Trellis State Complexity of LDPC Convolutional Codes Based on Circulant Matrices
<|reference_start|>Error-Trellis State Complexity of LDPC Convolutional Codes Based on Circulant Matrices: Let H(D) be the parity-check matrix of an LDPC convolutional code corresponding to the parity-check matrix H of a QC code obtained using the method of Tanner et al. We see that the entries in H(D) are all monomials and several rows (columns) have monomial factors. Let us cyclically shift the rows of H. Then the parity-check matrix H'(D) corresponding to the modified matrix H' defines another convolutional code. However, its free distance is lower-bounded by the minimum distance of the original QC code. Also, each row (column) of H'(D) has a factor different from the one in H(D). We show that the state-space complexity of the error-trellis associated with H'(D) can be significantly reduced by controlling the row shifts applied to H with the error-correction capability being preserved.<|reference_end|>
arxiv
@article{tajima2008error-trellis, title={Error-Trellis State Complexity of LDPC Convolutional Codes Based on Circulant Matrices}, author={M. Tajima, K. Okino, and T. Miyagoshi}, journal={arXiv preprint arXiv:0812.4642}, year={2008}, archivePrefix={arXiv}, eprint={0812.4642}, primaryClass={cs.IT math.IT} }
tajima2008error-trellis
arxiv-5854
0812.4646
Time series of Internet AS-level topology graphs: four patterns and one model
<|reference_start|>Time series of Internet AS-level topology graphs: four patterns and one model: Researchers have proposed a variety of Internet topology models. However almost all of them focus on generating one graph based on one single static source graph. On the other hand, Internet topology is evolving over time continuously with the addition and deletion of nodes and edges. If a model is based on all the topologies in the past, instead of one of them, it will be more accurate and closer to the real world topology. In this paper, we study the Internet As-level topology time-series from two different sources and find that both of them obey four same dynamic graph patterns. Then we propose a mode that can infer the topology in the future based on all the topologies in the past. Through theoretical and experimental analysis, we prove the topology that our model generates can match both the static and dynamic graph patterns. In addition, the parameters in the model are meaningful. Finally, we theoretically and experimentally prove that these parameters are directly related to some important graph characteristics.<|reference_end|>
arxiv
@article{liu2008time, title={Time series of Internet AS-level topology graphs: four patterns and one model}, author={Lian-dong Liu and Ke Xu}, journal={arXiv preprint arXiv:0812.4646}, year={2008}, archivePrefix={arXiv}, eprint={0812.4646}, primaryClass={cs.NI} }
liu2008time
arxiv-5855
0812.4706
On the total order of reducibility of a pencil of algebraic plane curves
<|reference_start|>On the total order of reducibility of a pencil of algebraic plane curves: In this paper, the problem of bounding the number of reducible curves in a pencil of algebraic plane curves is addressed. Unlike most of the previous related works, each reducible curve of the pencil is here counted with its appropriate multiplicity. It is proved that this number of reducible curves, counted with multiplicity, is bounded by d^2-1 where d is the degree of the pencil. Then, a sharper bound is given by taking into account the Newton's polygon of the pencil.<|reference_end|>
arxiv
@article{busé2008on, title={On the total order of reducibility of a pencil of algebraic plane curves}, author={Laurent Bus'e (INRIA Sophia Antipolis), Guillaume Ch`eze (IMT)}, journal={Journal of Algebra 341, 1 (2011) 256-278}, year={2008}, doi={10.1016/j.jalgebra.2011.06.006}, archivePrefix={arXiv}, eprint={0812.4706}, primaryClass={math.AC cs.SC math.AG} }
busé2008on
arxiv-5856
0812.4710
Indoor Channel Measurements and Communications System Design at 60 GHz
<|reference_start|>Indoor Channel Measurements and Communications System Design at 60 GHz: This paper presents a brief overview of several studies concerning the indoor wireless communications at 60 GHz performed by the IETR. The characterization and the modeling of the radio propagation channel are based on several measurement campaigns realized with the channel sounder developed at IETR. Some typical residential environments were also simulated by ray tracing and Gaussian Beam Tracking. The obtained results show a good agreement with the similar experimental results. Currently, the IETR is developing a high data rate wireless communication system operating at 60 GHz. The single-carrier architecture of this system is also presented.<|reference_end|>
arxiv
@article{rakotondrainibe2008indoor, title={Indoor Channel Measurements and Communications System Design at 60 GHz}, author={Lahatra Rakotondrainibe (IETR), Gheorghe Zaharia (IETR), Gha"is El Zein (IETR), Yves Lostanlen (IETR)}, journal={XXIX URSI General Assembly, Chicago : \'Etats-Unis (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0812.4710}, primaryClass={cs.NI} }
rakotondrainibe2008indoor
arxiv-5857
0812.4727
Induction and Co-induction in Sequent Calculus
<|reference_start|>Induction and Co-induction in Sequent Calculus: Proof search has been used to specify a wide range of computation systems. In order to build a framework for reasoning about such specifications, we make use of a sequent calculus involving induction and co-induction. These proof principles are based on a proof theoretic (rather than set-theoretic) notion of definition. Definitions are akin to (stratified) logic programs, where the left and right rules for defined atoms allow one to view theories as "closed" or defining fixed points. The use of definitions makes it possible to reason intensionally about syntax, in particular enforcing free equality via unification. We add in a consistent way rules for pre and post fixed points, thus allowing the user to reason inductively and co-inductively about properties of computational system making full use of higher-order abstract syntax. Consistency is guaranteed via cut-elimination, where we give the first, to our knowledge, cut-elimination procedure in the presence of general inductive and co-inductive definitions.<|reference_end|>
arxiv
@article{tiu2008induction, title={Induction and Co-induction in Sequent Calculus}, author={Alwen Tiu and Alberto Momigliano}, journal={arXiv preprint arXiv:0812.4727}, year={2008}, archivePrefix={arXiv}, eprint={0812.4727}, primaryClass={cs.LO} }
tiu2008induction
arxiv-5858
0812.4744
On Wireless Link Scheduling and Flow Control
<|reference_start|>On Wireless Link Scheduling and Flow Control: This thesis focuses on link scheduling in wireless mesh networks by taking into account physical layer characteristics. The assumption made throughout is that a packet is received successfully only if the Signal to Interference and Noise Ratio (SINR) at the receiver exceeds the communication threshold. The thesis also discusses the complementary problem of flow control. (1) We consider various problems on centralized link scheduling in Spatial Time Division Multiple Access (STDMA) wireless mesh networks. We motivate the use of spatial reuse as performance metric and provide an explicit characterization of spatial reuse. We propose link scheduling algorithms based on certain graph models (communication graph, SINR graph) of the network. Our algorithms achieve higher spatial reuse than that of existing algorithms, with only a slight increase in computational complexity. (2) We investigate random access algorithms in wireless networks. We assume that the receiver is capable of power-based capture and propose a splitting algorithm that varies transmission powers of users on the basis of quaternary channel feedback. We model the algorithm dynamics by a Discrete Time Markov Chain and consequently show that its maximum stable throughput is 0.5518. Our algorithm achieves higher maximum stable throughput and significantly lower delay than the First Come First Serve (FCFS) splitting algorithm with uniform transmission power. (3) We consider the problem of flow control in packet networks from an information-theoretic perspective. We derive the maximum entropy of a flow which conforms to traffic constraints imposed by a generalized token bucket regulator (GTBR), by taking into account the covert information present in randomness of packet lengths.<|reference_end|>
arxiv
@article{gore2008on, title={On Wireless Link Scheduling and Flow Control}, author={Ashutosh Deepak Gore}, journal={arXiv preprint arXiv:0812.4744}, year={2008}, number={EE-PHD-08-007}, archivePrefix={arXiv}, eprint={0812.4744}, primaryClass={cs.NI} }
gore2008on
arxiv-5859
0812.4792
On Optimal Linear Redistribution of VCG Payments in Assignment of Heterogeneous Objects
<|reference_start|>On Optimal Linear Redistribution of VCG Payments in Assignment of Heterogeneous Objects: There are p heterogeneous objects to be assigned to n competing agents (n > p) each with unit demand. It is required to design a Groves mechanism for this assignment problem satisfying weak budget balance, individual rationality, and minimizing the budget imbalance. This calls for designing an appropriate rebate function. Our main result is an impossibility theorem which rules out linear rebate functions with non-zero efficiency in heterogeneous object assignment. Motivated by this theorem, we explore two approaches to get around this impossibility. In the first approach, we show that linear rebate functions with non-zero are possible when the valuations for the objects are correlated. In the second approach, we show that rebate functions with non-zero efficiency are possible if linearity is relaxed.<|reference_end|>
arxiv
@article{gujar2008on, title={On Optimal Linear Redistribution of VCG Payments in Assignment of Heterogeneous Objects}, author={Sujit Gujar, Yadati Narahari}, journal={arXiv preprint arXiv:0812.4792}, year={2008}, archivePrefix={arXiv}, eprint={0812.4792}, primaryClass={cs.GT} }
gujar2008on
arxiv-5860
0812.4798
The Road Coloring for Mapping on k States(withdrawn)
<|reference_start|>The Road Coloring for Mapping on k States(withdrawn): Let $\Gamma$ be directed strongly connected finite graph of uniform outdegree (constant outdegree of any vertex) and let some coloring of edges of $\Gamma$ turn the graph into deterministic complete automaton. Let the word $s$ be a word in the alphabet of colors (considered also as letters) on the edges of $\Gamma$ and let $\Gamma s$ be a mapping of vertices $\Gamma$.<|reference_end|>
arxiv
@article{trahtman2008the, title={The Road Coloring for Mapping on k States(withdrawn)}, author={A. N. Trahtman}, journal={arXiv preprint arXiv:0812.4798}, year={2008}, archivePrefix={arXiv}, eprint={0812.4798}, primaryClass={cs.DM} }
trahtman2008the
arxiv-5861
0812.4803
Technical Report: Achievable Rates for the MAC with Correlated Channel-State Information
<|reference_start|>Technical Report: Achievable Rates for the MAC with Correlated Channel-State Information: In this paper we provide an achievable rate region for the discrete memoryless multiple access channel with correlated state information known non-causally at the encoders using a random binning technique. This result is a generalization of the random binning technique used by Gel'fand and Pinsker for the problem with non-causal channel state information at the encoder in point to point communication.<|reference_end|>
arxiv
@article{philosof2008technical, title={Technical Report: Achievable Rates for the MAC with Correlated Channel-State Information}, author={Tal Philosof, Ram Zamir and Uri Erez}, journal={arXiv preprint arXiv:0812.4803}, year={2008}, archivePrefix={arXiv}, eprint={0812.4803}, primaryClass={cs.IT math.IT} }
philosof2008technical
arxiv-5862
0812.4814
Nominalistic Logic (Extended Abstract)
<|reference_start|>Nominalistic Logic (Extended Abstract): Nominalistic Logic (NL) is a new presentation of Paul Gilmore's Intensional Type Theory (ITT) as a sequent calculus together with a succinct nominalization axiom (N) that permits names of predicates as individuals in certain cases. The logic has a flexible comprehension axiom, but no extensionality axiom and no infinity axiom, although axiom N is the key to the derivation of Peano's postulates for the natural numbers.<|reference_end|>
arxiv
@article{villadsen2008nominalistic, title={Nominalistic Logic (Extended Abstract)}, author={J{o}rgen Villadsen}, journal={arXiv preprint arXiv:0812.4814}, year={2008}, archivePrefix={arXiv}, eprint={0812.4814}, primaryClass={cs.LO} }
villadsen2008nominalistic
arxiv-5863
0812.4826
Delay-Throughput Tradeoff for Supportive Two-Tier Networks
<|reference_start|>Delay-Throughput Tradeoff for Supportive Two-Tier Networks: Consider a static wireless network that has two tiers with different priorities: a primary tier vs. a secondary tier. The primary tier consists of randomly distributed legacy nodes of density $n$, which have an absolute priority to access the spectrum. The secondary tier consists of randomly distributed cognitive nodes of density $m=n^\beta$ with $\beta\geq 2$, which can only access the spectrum opportunistically to limit the interference to the primary tier. By allowing the secondary tier to route the packets for the primary tier, we show that the primary tier can achieve a throughput scaling of $\lambda_p(n)=\Theta(1/\log n)$ per node and a delay-throughput tradeoff of $D_p(n)=\Theta(\sqrt{n^\beta\log n}\lambda_p(n))$ for $\lambda_p(n)=O(1/\log n)$, while the secondary tier still achieves the same optimal delay-throughput tradeoff as a stand-alone network.<|reference_end|>
arxiv
@article{gao2008delay-throughput, title={Delay-Throughput Tradeoff for Supportive Two-Tier Networks}, author={Long Gao, Rui Zhang, Changchuan Yin, Shuguang Cui}, journal={arXiv preprint arXiv:0812.4826}, year={2008}, archivePrefix={arXiv}, eprint={0812.4826}, primaryClass={cs.IT cs.NI math.IT} }
gao2008delay-throughput
arxiv-5864
0812.4835
Semi-Quantum Key Distribution
<|reference_start|>Semi-Quantum Key Distribution: Secure key distribution among two remote parties is impossible when both are classical, unless some unproven (and arguably unrealistic) computation-complexity assumptions are made, such as the difficulty of factorizing large numbers. On the other hand, a secure key distribution is possible when both parties are quantum. What is possible when only one party (Alice) is quantum, yet the other (Bob) has only classical capabilities? Recently, a semi-quantum key distribution protocol was presented (Boyer, Kenigsberg and Mor, Physical Review Letters, 2007), in which one of the parties (Bob) is classical, and yet, the protocol is proven to be completely robust against an eavesdropping attempt. Here we extend that result much further. We present two protocols with this constraint, and prove their robustness against attacks: we prove that any attempt of an adversary to obtain information (and even a tiny amount of information) necessarily induces some errors that the legitimate parties could notice. One protocol presented here is identical to the one referred to above, however, its robustness is proven here in a much more general scenario. The other protocol is very different as it is based on randomization.<|reference_end|>
arxiv
@article{boyer2008semi-quantum, title={Semi-Quantum Key Distribution}, author={Michel Boyer, Ran Gelles, Dan Kenigsberg, Tal Mor}, journal={arXiv preprint arXiv:0812.4835}, year={2008}, doi={10.1103/PhysRevA.79.032341}, archivePrefix={arXiv}, eprint={0812.4835}, primaryClass={quant-ph cs.CR} }
boyer2008semi-quantum
arxiv-5865
0812.4848
The Complexity of Generalized Satisfiability for Linear Temporal Logic
<|reference_start|>The Complexity of Generalized Satisfiability for Linear Temporal Logic: In a seminal paper from 1985, Sistla and Clarke showed that satisfiability for Linear Temporal Logic (LTL) is either NP-complete or PSPACE-complete, depending on the set of temporal operators used. If, in contrast, the set of propositional operators is restricted, the complexity may decrease. This paper undertakes a systematic study of satisfiability for LTL formulae over restricted sets of propositional and temporal operators. Since every propositional operator corresponds to a Boolean function, there exist infinitely many propositional operators. In order to systematically cover all possible sets of them, we use Post's lattice. With its help, we determine the computational complexity of LTL satisfiability for all combinations of temporal operators and all but two classes of propositional functions. Each of these infinitely many problems is shown to be either PSPACE-complete, NP-complete, or in P.<|reference_end|>
arxiv
@article{bauland2008the, title={The Complexity of Generalized Satisfiability for Linear Temporal Logic}, author={Michael Bauland, Thomas Schneider, Henning Schnoor, Ilka Schnoor, Heribert Vollmer}, journal={Logical Methods in Computer Science, Volume 5, Issue 1 (January 26, 2009) lmcs:1158}, year={2008}, doi={10.2168/LMCS-5(1:1)2009}, archivePrefix={arXiv}, eprint={0812.4848}, primaryClass={cs.LO} }
bauland2008the
arxiv-5866
0812.4852
Formalizing common sense for scalable inconsistency-robust information integration using Direct Logic(TM) reasoning and the Actor Model
<|reference_start|>Formalizing common sense for scalable inconsistency-robust information integration using Direct Logic(TM) reasoning and the Actor Model: Because contemporary large software systems are pervasively inconsistent, it is not safe to reason about them using classical logic. The goal of Direct Logic is to be a minimal fix to classical mathematical logic that meets the requirements of large-scale Internet applications (including sense making for natural language) by addressing the following issues: inconsistency robustness, contrapositive inference bug, and direct argumentation. Direct Logic makes the following contributions over previous work: * Direct Inference (no contrapositive bug for inference) * Direct Argumentation (inference directly expressed) * Inconsistency-robust deduction without artifices such as indices (labels) on propositions or restrictions on reiteration * Intuitive inferences hold including the following: * Boolean Equivalences * Reasoning by splitting for disjunctive cases * Soundness * Inconsistency-robust Proof by Contradiction Since the global state model of computation (first formalized by Turing) is inadequate to the needs of modern large-scale Internet applications the Actor Model was developed to meet this need. Using, the Actor Model, this paper proves that Logic Programming is not computationally universal in that there are computations that cannot be implemented using logical inference. Consequently the Logic Programming paradigm is strictly less general than the Procedural Embedding of Knowledge paradigm.<|reference_end|>
arxiv
@article{hewitt2008formalizing, title={Formalizing common sense for scalable inconsistency-robust information integration using Direct Logic(TM) reasoning and the Actor Model}, author={Carl Hewitt}, journal={arXiv preprint arXiv:0812.4852}, year={2008}, archivePrefix={arXiv}, eprint={0812.4852}, primaryClass={cs.LO cs.PL cs.SE} }
hewitt2008formalizing
arxiv-5867
0812.4889
Statistical Physics of Signal Estimation in Gaussian Noise: Theory and Examples of Phase Transitions
<|reference_start|>Statistical Physics of Signal Estimation in Gaussian Noise: Theory and Examples of Phase Transitions: We consider the problem of signal estimation (denoising) from a statistical mechanical perspective, using a relationship between the minimum mean square error (MMSE), of estimating a signal, and the mutual information between this signal and its noisy version. The paper consists of essentially two parts. In the first, we derive several statistical-mechanical relationships between a few important quantities in this problem area, such as the MMSE, the differential entropy, the Fisher information, the free energy, and a generalized notion of temperature. We also draw analogies and differences between certain relations pertaining to the estimation problem and the parallel relations in thermodynamics and statistical physics. In the second part of the paper, we provide several application examples, where we demonstrate how certain analysis tools that are customary in statistical physics, prove useful in the analysis of the MMSE. In most of these examples, the corresponding statistical-mechanical systems turn out to consist of strong interactions that cause phase transitions, which in turn are reflected as irregularities and discontinuities (similar to threshold effects) in the behavior of the MMSE.<|reference_end|>
arxiv
@article{merhav2008statistical, title={Statistical Physics of Signal Estimation in Gaussian Noise: Theory and Examples of Phase Transitions}, author={Neri Merhav, Dongning Guo, and Shlomo Shamai (Shitz)}, journal={arXiv preprint arXiv:0812.4889}, year={2008}, doi={10.1109/TIT.2009.2039047}, archivePrefix={arXiv}, eprint={0812.4889}, primaryClass={cs.IT math.IT} }
merhav2008statistical
arxiv-5868
0812.4893
Almost stable matchings in constant time
<|reference_start|>Almost stable matchings in constant time: We show that the ratio of matched individuals to blocking pairs grows linearly with the number of propose--accept rounds executed by the Gale--Shapley algorithm for the stable marriage problem. Consequently, the participants can arrive at an almost stable matching even without full information about the problem instance; for each participant, knowing only its local neighbourhood is enough. In distributed-systems parlance, this means that if each person has only a constant number of acceptable partners, an almost stable matching emerges after a constant number of synchronous communication rounds. This holds even if ties are present in the preference lists. We apply our results to give a distributed $(2+\epsilon)$-approximation algorithm for maximum-weight matching in bicoloured graphs and a centralised randomised constant-time approximation scheme for estimating the size of a stable matching.<|reference_end|>
arxiv
@article{floréen2008almost, title={Almost stable matchings in constant time}, author={Patrik Flor'een, Petteri Kaski, Valentin Polishchuk, Jukka Suomela}, journal={Algorithmica 58 (2010) 102-118}, year={2008}, doi={10.1007/s00453-009-9353-9}, archivePrefix={arXiv}, eprint={0812.4893}, primaryClass={cs.DS cs.DC} }
floréen2008almost
arxiv-5869
0812.4905
Kronecker Graphs: An Approach to Modeling Networks
<|reference_start|>Kronecker Graphs: An Approach to Modeling Networks: How can we model networks with a mathematically tractable model that allows for rigorous analysis of network properties? Networks exhibit a long list of surprising properties: heavy tails for the degree distribution; small diameters; and densification and shrinking diameters over time. Most present network models either fail to match several of the above properties, are complicated to analyze mathematically, or both. In this paper we propose a generative model for networks that is both mathematically tractable and can generate networks that have the above mentioned properties. Our main idea is to use the Kronecker product to generate graphs that we refer to as "Kronecker graphs". First, we prove that Kronecker graphs naturally obey common network properties. We also provide empirical evidence showing that Kronecker graphs can effectively model the structure of real networks. We then present KronFit, a fast and scalable algorithm for fitting the Kronecker graph generation model to large real networks. A naive approach to fitting would take super- exponential time. In contrast, KronFit takes linear time, by exploiting the structure of Kronecker matrix multiplication and by using statistical simulation techniques. Experiments on large real and synthetic networks show that KronFit finds accurate parameters that indeed very well mimic the properties of target networks. Once fitted, the model parameters can be used to gain insights about the network structure, and the resulting synthetic graphs can be used for null- models, anonymization, extrapolations, and graph summarization.<|reference_end|>
arxiv
@article{leskovec2008kronecker, title={Kronecker Graphs: An Approach to Modeling Networks}, author={Jure Leskovec, Deepayan Chakrabarti, Jon Kleinberg, Christos Faloutsos and Zoubin Ghahramani}, journal={arXiv preprint arXiv:0812.4905}, year={2008}, archivePrefix={arXiv}, eprint={0812.4905}, primaryClass={stat.ML cs.DS physics.data-an physics.soc-ph} }
leskovec2008kronecker
arxiv-5870
0812.4919
Obtaining a Planar Graph by Vertex Deletion
<|reference_start|>Obtaining a Planar Graph by Vertex Deletion: In the k-Apex problem the task is to find at most k vertices whose deletion makes the given graph planar. The graphs for which there exists a solution form a minor closed class of graphs, hence by the deep results of Robertson and Seymour, there is an O(n^3) time algorithm for every fixed value of k. However, the proof is extremely complicated and the constants hidden by the big-O notation are huge. Here we give a much simpler algorithm for this problem with quadratic running time, by iteratively reducing the input graph and then applying techniques for graphs of bounded treewidth.<|reference_end|>
arxiv
@article{marx2008obtaining, title={Obtaining a Planar Graph by Vertex Deletion}, author={D'aniel Marx, Ildik'o Schlotter}, journal={arXiv preprint arXiv:0812.4919}, year={2008}, archivePrefix={arXiv}, eprint={0812.4919}, primaryClass={cs.DS} }
marx2008obtaining
arxiv-5871
0812.4937
Efficient Interpolation in the Guruswami-Sudan Algorithm
<|reference_start|>Efficient Interpolation in the Guruswami-Sudan Algorithm: A novel algorithm is proposed for the interpolation step of the Guruswami-Sudan list decoding algorithm. The proposed method is based on the binary exponentiation algorithm, and can be considered as an extension of the Lee-O'Sullivan algorithm. The algorithm is shown to achieve both asymptotical and practical performance gain compared to the case of iterative interpolation algorithm. Further complexity reduction is achieved by integrating the proposed method with re-encoding. The key contribution of the paper, which enables the complexity reduction, is a novel randomized ideal multiplication algorithm.<|reference_end|>
arxiv
@article{trifonov2008efficient, title={Efficient Interpolation in the Guruswami-Sudan Algorithm}, author={Peter Trifonov}, journal={arXiv preprint arXiv:0812.4937}, year={2008}, doi={10.1109/TIT.2010.2053901}, archivePrefix={arXiv}, eprint={0812.4937}, primaryClass={cs.IT cs.DM math.AC math.IT} }
trifonov2008efficient
arxiv-5872
0812.4952
Importance Weighted Active Learning
<|reference_start|>Importance Weighted Active Learning: We present a practical and statistically consistent scheme for actively learning binary classifiers under general loss functions. Our algorithm uses importance weighting to correct sampling bias, and by controlling the variance, we are able to give rigorous label complexity bounds for the learning process. Experiments on passively labeled data show that this approach reduces the label complexity required to achieve good predictive performance on many learning problems.<|reference_end|>
arxiv
@article{beygelzimer2008importance, title={Importance Weighted Active Learning}, author={Alina Beygelzimer, Sanjoy Dasgupta, and John Langford}, journal={arXiv preprint arXiv:0812.4952}, year={2008}, archivePrefix={arXiv}, eprint={0812.4952}, primaryClass={cs.LG} }
beygelzimer2008importance
arxiv-5873
0812.4973
A Simple, Linear-Time Algorithm for x86 Jump Encoding
<|reference_start|>A Simple, Linear-Time Algorithm for x86 Jump Encoding: The problem of space-optimal jump encoding in the x86 instruction set, also known as branch displacement optimization, is described, and a linear-time algorithm is given that uses no complicated data structures, no recursion, and no randomization. The only assumption is that there are no array declarations whose size depends on the negative of the size of a section of code (Hyde 2006), which is reasonable for real code.<|reference_end|>
arxiv
@article{dickson2008a, title={A Simple, Linear-Time Algorithm for x86 Jump Encoding}, author={Neil G. Dickson}, journal={arXiv preprint arXiv:0812.4973}, year={2008}, archivePrefix={arXiv}, eprint={0812.4973}, primaryClass={cs.PL} }
dickson2008a
arxiv-5874
0812.4974
Using a computer algebra system to simplify expressions for Titchmarsh-Weyl m-functions associated with the Hydrogen Atom on the half line
<|reference_start|>Using a computer algebra system to simplify expressions for Titchmarsh-Weyl m-functions associated with the Hydrogen Atom on the half line: In this paper we give simplified formulas for certain polynomials which arise in some new Titchmarsh-Weyl m-functions for the radial part of the separated Hydrogen atom on the half line and two independent programs for generating them using the symbolic manipulator Mathematica.<|reference_end|>
arxiv
@article{knoll2008using, title={Using a computer algebra system to simplify expressions for Titchmarsh-Weyl m-functions associated with the Hydrogen Atom on the half line}, author={Cecilia Knoll, Charles Fulton}, journal={arXiv preprint arXiv:0812.4974}, year={2008}, archivePrefix={arXiv}, eprint={0812.4974}, primaryClass={math.SP cs.SC math.CO} }
knoll2008using
arxiv-5875
0812.4983
Bootstrapping Key Pre-Distribution: Secure, Scalable and User-Friendly Initialization of Sensor Nodes
<|reference_start|>Bootstrapping Key Pre-Distribution: Secure, Scalable and User-Friendly Initialization of Sensor Nodes: To establish secure (point-to-point and/or broadcast) communication channels among the nodes of a wireless sensor network is a fundamental task. To this end, a plethora of (socalled) key pre-distribution schemes have been proposed in the past. All these schemes, however, rely on shared secret(s), which are assumed to be somehow pre-loaded onto the sensor nodes. In this paper, we propose a novel method for secure initialization of sensor nodes based on a visual out-of-band channel. Using the proposed method, the administrator of a sensor network can distribute keys onto the sensor nodes, necessary to bootstrap key pre-distribution. Our secure initialization method requires only a little extra cost, is efficient and scalable with respect to the number of sensor nodes. Moreover, based on a usability study that we conducted, the method turns out to be quite user-friendly and easy to use by naive human users.<|reference_end|>
arxiv
@article{saxena2008bootstrapping, title={Bootstrapping Key Pre-Distribution: Secure, Scalable and User-Friendly Initialization of Sensor Nodes}, author={Nitesh Saxena and Md. Borhan Uddin}, journal={arXiv preprint arXiv:0812.4983}, year={2008}, archivePrefix={arXiv}, eprint={0812.4983}, primaryClass={cs.CR} }
saxena2008bootstrapping
arxiv-5876
0812.4985
On the Capacity of Partially Cognitive Radios
<|reference_start|>On the Capacity of Partially Cognitive Radios: This paper considers the problem of cognitive radios with partial-message information. Here, an interference channel setting is considered where one transmitter (the "cognitive" one) knows the message of the other ("legitimate" user) partially. An outer bound on the capacity region of this channel is found for the "weak" interference case (where the interference from the cognitive transmitter to the legitimate receiver is weak). This outer bound is shown for both the discrete-memoryless and the Gaussian channel cases. An achievable region is subsequently determined for a mixed interference Gaussian cognitive radio channel, where the interference from the legitimate transmitter to the cognitive receiver is "strong". It is shown that, for a class of mixed Gaussian cognitive radio channels, portions of the outer bound are achievable thus resulting in a characterization of a part of this channel's capacity region.<|reference_end|>
arxiv
@article{chung2008on, title={On the Capacity of Partially Cognitive Radios}, author={G. Chung, S. Sridharan, S. Vishwanath, C. S. Hwang}, journal={arXiv preprint arXiv:0812.4985}, year={2008}, archivePrefix={arXiv}, eprint={0812.4985}, primaryClass={cs.IT math.IT} }
chung2008on
arxiv-5877
0812.4986
An Array Algebra
<|reference_start|>An Array Algebra: This is a proposal of an algebra which aims at distributed array processing. The focus lies on re-arranging and distributing array data, which may be multi-dimensional. The context of the work is scientific processing; thus, the core science operations are assumed to be taken care of in external libraries or languages. A main design driver is the desire to carry over some of the strategies of the relational algebra into the array domain.<|reference_end|>
arxiv
@article{schmidt2008an, title={An Array Algebra}, author={Albrecht Schmidt}, journal={arXiv preprint arXiv:0812.4986}, year={2008}, archivePrefix={arXiv}, eprint={0812.4986}, primaryClass={cs.DB} }
schmidt2008an
arxiv-5878
0812.5026
Group representation design of digital signals and sequences
<|reference_start|>Group representation design of digital signals and sequences: In this survey a novel system, called the oscillator system, consisting of order of p^3 functions (signals) on the finite field F_{p}, is described and studied. The new functions are proved to satisfy good auto-correlation, cross-correlation and low peak-to-average power ratio properties. Moreover, the oscillator system is closed under the operation of discrete Fourier transform. Applications of the oscillator system for discrete radar and digital communication theory are explained. Finally, an explicit algorithm to construct the oscillator system is presented.<|reference_end|>
arxiv
@article{gurevich2008group, title={Group representation design of digital signals and sequences}, author={Shamgar Gurevich (UC Berkeley), Ronny Hadani (University of Chicago), Nir Sochen (Tel Aviv University)}, journal={arXiv preprint arXiv:0812.5026}, year={2008}, archivePrefix={arXiv}, eprint={0812.5026}, primaryClass={cs.IT cs.DM math.IT math.RT} }
gurevich2008group
arxiv-5879
0812.5030
A Pseudopolynomial Algorithm for Alexandrov's Theorem
<|reference_start|>A Pseudopolynomial Algorithm for Alexandrov's Theorem: Alexandrov's Theorem states that every metric with the global topology and local geometry required of a convex polyhedron is in fact the intrinsic metric of a unique convex polyhedron. Recent work by Bobenko and Izmestiev describes a differential equation whose solution leads to the polyhedron corresponding to a given metric. We describe an algorithm based on this differential equation to compute the polyhedron to arbitrary precision given the metric, and prove a pseudopolynomial bound on its running time. Along the way, we develop pseudopolynomial algorithms for computing shortest paths and weighted Delaunay triangulations on a polyhedral surface, even when the surface edges are not shortest paths.<|reference_end|>
arxiv
@article{kane2008a, title={A Pseudopolynomial Algorithm for Alexandrov's Theorem}, author={Daniel Kane, Gregory N. Price, and Erik D. Demaine}, journal={arXiv preprint arXiv:0812.5030}, year={2008}, archivePrefix={arXiv}, eprint={0812.5030}, primaryClass={cs.CG} }
kane2008a
arxiv-5880
0812.5032
A New Clustering Algorithm Based Upon Flocking On Complex Network
<|reference_start|>A New Clustering Algorithm Based Upon Flocking On Complex Network: We have proposed a model based upon flocking on a complex network, and then developed two clustering algorithms on the basis of it. In the algorithms, firstly a \textit{k}-nearest neighbor (knn) graph as a weighted and directed graph is produced among all data points in a dataset each of which is regarded as an agent who can move in space, and then a time-varying complex network is created by adding long-range links for each data point. Furthermore, each data point is not only acted by its \textit{k} nearest neighbors but also \textit{r} long-range neighbors through fields established in space by them together, so it will take a step along the direction of the vector sum of all fields. It is more important that these long-range links provides some hidden information for each data point when it moves and at the same time accelerate its speed converging to a center. As they move in space according to the proposed model, data points that belong to the same class are located at a same position gradually, whereas those that belong to different classes are away from one another. Consequently, the experimental results have demonstrated that data points in datasets are clustered reasonably and efficiently, and the rates of convergence of clustering algorithms are fast enough. Moreover, the comparison with other algorithms also provides an indication of the effectiveness of the proposed approach.<|reference_end|>
arxiv
@article{li2008a, title={A New Clustering Algorithm Based Upon Flocking On Complex Network}, author={Qiang Li, Yan He, Jing-ping Jiang}, journal={arXiv preprint arXiv:0812.5032}, year={2008}, archivePrefix={arXiv}, eprint={0812.5032}, primaryClass={cs.LG cs.AI cs.CV physics.soc-ph} }
li2008a
arxiv-5881
0812.5039
Lower bounds for weak epsilon-nets and stair-convexity
<|reference_start|>Lower bounds for weak epsilon-nets and stair-convexity: A set N is called a "weak epsilon-net" (with respect to convex sets) for a finite set X in R^d if N intersects every convex set that contains at least epsilon*|X| points of X. For every fixed d>=2 and every r>=1 we construct sets X in R^d for which every weak (1/r)-net has at least Omega(r log^{d-1} r) points; this is the first superlinear lower bound for weak epsilon-nets in a fixed dimension. The construction is a "stretched grid", i.e., the Cartesian product of d suitable fast-growing finite sequences, and convexity in this grid can be analyzed using "stair-convexity", a new variant of the usual notion of convexity. We also consider weak epsilon-nets for the diagonal of our stretched grid in R^d, d>=3, which is an "intrinsically 1-dimensional" point set. In this case we exhibit slightly superlinear lower bounds (involving the inverse Ackermann function), showing that upper bounds by Alon, Kaplan, Nivasch, Sharir, and Smorodinsky (2008) are not far from the truth in the worst case. Using the stretched grid we also improve the known upper bound for the so-called "second selection lemma" in the plane by a logarithmic factor: We obtain a set T of t triangles with vertices in an n-point set in the plane such that no point is contained in more than O(t^2 / (n^3 log (n^3/t))) triangles of T.<|reference_end|>
arxiv
@article{bukh2008lower, title={Lower bounds for weak epsilon-nets and stair-convexity}, author={Boris Bukh, Jiv{r}'i Matouv{s}ek, Gabriel Nivasch}, journal={Israel Journal of Mathematics, 182:199-228, 2011}, year={2008}, doi={10.1007/s11856-011-0029-1}, archivePrefix={arXiv}, eprint={0812.5039}, primaryClass={math.CO cs.CG} }
bukh2008lower
arxiv-5882
0812.5064
A Novel Clustering Algorithm Based Upon Games on Evolving Network
<|reference_start|>A Novel Clustering Algorithm Based Upon Games on Evolving Network: This paper introduces a model based upon games on an evolving network, and develops three clustering algorithms according to it. In the clustering algorithms, data points for clustering are regarded as players who can make decisions in games. On the network describing relationships among data points, an edge-removing-and-rewiring (ERR) function is employed to explore in a neighborhood of a data point, which removes edges connecting to neighbors with small payoffs, and creates new edges to neighbors with larger payoffs. As such, the connections among data points vary over time. During the evolution of network, some strategies are spread in the network. As a consequence, clusters are formed automatically, in which data points with the same evolutionarily stable strategy are collected as a cluster, so the number of evolutionarily stable strategies indicates the number of clusters. Moreover, the experimental results have demonstrated that data points in datasets are clustered reasonably and efficiently, and the comparison with other algorithms also provides an indication of the effectiveness of the proposed algorithms.<|reference_end|>
arxiv
@article{li2008a, title={A Novel Clustering Algorithm Based Upon Games on Evolving Network}, author={Qiang Li, Zhuo Chen, Yan He, Jing-ping Jiang}, journal={Expert Systems with Applications, 2010}, year={2008}, doi={10.1016/j.eswa.2010.02.050}, archivePrefix={arXiv}, eprint={0812.5064}, primaryClass={cs.LG cs.CV cs.GT nlin.AO} }
li2008a
arxiv-5883
0812.5101
A 7/9 - Approximation Algorithm for the Maximum Traveling Salesman Problem
<|reference_start|>A 7/9 - Approximation Algorithm for the Maximum Traveling Salesman Problem: We give a 7/9 - Approximation Algorithm for the Maximum Traveling Salesman Problem.<|reference_end|>
arxiv
@article{paluch2008a, title={A 7/9 - Approximation Algorithm for the Maximum Traveling Salesman Problem}, author={Katarzyna Paluch, Marcin Mucha, Aleksander Madry}, journal={arXiv preprint arXiv:0812.5101}, year={2008}, doi={10.1007/978-3-642-03685-9_23}, archivePrefix={arXiv}, eprint={0812.5101}, primaryClass={cs.GT cs.DM cs.DS} }
paluch2008a
arxiv-5884
0812.5104
On Quantum and Classical Error Control Codes: Constructions and Applications
<|reference_start|>On Quantum and Classical Error Control Codes: Constructions and Applications: It is conjectured that quantum computers are able to solve certain problems more quickly than any deterministic or probabilistic computer. A quantum computer exploits the rules of quantum mechanics to speed up computations. However, it is a formidable task to build a quantum computer, since the quantum mechanical systems storing the information unavoidably interact with their environment. Therefore, one has to mitigate the resulting noise and decoherence effects to avoid computational errors. In this work, I study various aspects of quantum error control codes -- the key component of fault-tolerant quantum information processing. I present the fundamental theory and necessary background of quantum codes and construct many families of quantum block and convolutional codes over finite fields, in addition to families of subsystem codes over symmetric and asymmetric channels. Particularly, many families of quantum BCH, RS, duadic, and convolutional codes are constructed over finite fields. Families of subsystem codes and a class of optimal MDS subsystem codes are derived over asymmetric and symmetric quantum channels. In addition, propagation rules and tables of upper bounds on subsystem code parameters are established. Classes of quantum and classical LDPC codes based on finite geometries and Latin squares are constructed.<|reference_end|>
arxiv
@article{aly2008on, title={On Quantum and Classical Error Control Codes: Constructions and Applications}, author={Salah A. Aly}, journal={arXiv preprint arXiv:0812.5104}, year={2008}, archivePrefix={arXiv}, eprint={0812.5104}, primaryClass={cs.IT math.IT quant-ph} }
aly2008on
arxiv-5885
0901.0015
Maximum Entropy on Compact Groups
<|reference_start|>Maximum Entropy on Compact Groups: On a compact group the Haar probability measure plays the role of uniform distribution. The entropy and rate distortion theory for this uniform distribution is studied. New results and simplified proofs on convergence of convolutions on compact groups are presented and they can be formulated as entropy increases to its maximum. Information theoretic techniques and Markov chains play a crucial role. The convergence results are also formulated via rate distortion functions. The rate of convergence is shown to be exponential.<|reference_end|>
arxiv
@article{harremoes2008maximum, title={Maximum Entropy on Compact Groups}, author={Peter Harremoes}, journal={Entropy 2009, 11(2), 222-237}, year={2008}, doi={10.3390/e11020222}, archivePrefix={arXiv}, eprint={0901.0015}, primaryClass={cs.IT math.IT math.PR} }
harremoes2008maximum
arxiv-5886
0901.0029
Scientific Computing in the Cloud
<|reference_start|>Scientific Computing in the Cloud: We investigate the feasibility of high performance scientific computation using cloud computers as an alternative to traditional computational tools. The availability of these large, virtualized pools of compute resources raises the possibility of a new compute paradigm for scientific research with many advantages. For research groups, cloud computing provides convenient access to reliable, high performance clusters and storage, without the need to purchase and maintain sophisticated hardware. For developers, virtualization allows scientific codes to be optimized and pre-installed on machine images, facilitating control over the computational environment. Preliminary tests are presented for serial and parallelized versions of the widely used x-ray spectroscopy and electronic structure code FEFF on the Amazon Elastic Compute Cloud, including CPU and network performance.<|reference_end|>
arxiv
@article{rehr2008scientific, title={Scientific Computing in the Cloud}, author={J. J. Rehr, J. P. Gardner, M. Prange, L. Svec and F. Vila}, journal={arXiv preprint arXiv:0901.0029}, year={2008}, archivePrefix={arXiv}, eprint={0901.0029}, primaryClass={cond-mat.mtrl-sci cs.DC physics.comp-ph} }
rehr2008scientific
arxiv-5887
0901.0042
A family of asymptotically good quantum codes based on code concatenation
<|reference_start|>A family of asymptotically good quantum codes based on code concatenation: We explicitly construct an infinite family of asymptotically good concatenated quantum stabilizer codes where the outer code uses CSS-type quantum Reed-Solomon code and the inner code uses a set of special quantum codes. In the field of quantum error-correcting codes, this is the first time that a family of asymptotically good quantum codes is derived from bad codes. Its occurrence supplies a gap in quantum coding theory.<|reference_end|>
arxiv
@article{li2008a, title={A family of asymptotically good quantum codes based on code concatenation}, author={Zhuo Li, Li-Juan Xing, and Xin-Mei Wang}, journal={arXiv preprint arXiv:0901.0042}, year={2008}, archivePrefix={arXiv}, eprint={0901.0042}, primaryClass={quant-ph cs.IT math.IT} }
li2008a
arxiv-5888
0901.0043
Symmetric and Asymmetric Asynchronous Interaction
<|reference_start|>Symmetric and Asymmetric Asynchronous Interaction: We investigate classes of systems based on different interaction patterns with the aim of achieving distributability. As our system model we use Petri nets. In Petri nets, an inherent concept of simultaneity is built in, since when a transition has more than one preplace, it can be crucial that tokens are removed instantaneously. When modelling a system which is intended to be implemented in a distributed way by a Petri net, this built-in concept of synchronous interaction may be problematic. To investigate this we consider asynchronous implementations of nets, in which removing tokens from places can no longer be considered as instantaneous. We model this by inserting silent (unobservable) transitions between transitions and some of their preplaces. We investigate three such implementations, differing in the selection of preplaces of a transition from which the removal of a token is considered time consuming, and the possibility of collecting the tokens in a given order. We investigate the effect of these different transformations of instantaneous interaction into asynchronous interaction patterns by comparing the behaviours of nets before and after insertion of the silent transitions. We exhibit for which classes of Petri nets we obtain equivalent behaviour with respect to failures equivalence. It turns out that the resulting hierarchy of Petri net classes can be described by semi-structural properties. For two of the classes we obtain precise characterisations; for the remaining class we obtain lower and upper bounds. We briefly comment on possible applications of our results to Message Sequence Charts.<|reference_end|>
arxiv
@article{van glabbeek2008symmetric, title={Symmetric and Asymmetric Asynchronous Interaction}, author={Rob van Glabbeek, Ursula Goltz, Jens-Wolfhard Schicke}, journal={arXiv preprint arXiv:0901.0043}, year={2008}, number={Technical Report 2008-03, Technical University of Braunschweig}, archivePrefix={arXiv}, eprint={0901.0043}, primaryClass={cs.LO cs.DC} }
van glabbeek2008symmetric
arxiv-5889
0901.0044
Information Inequalities for Joint Distributions, with Interpretations and Applications
<|reference_start|>Information Inequalities for Joint Distributions, with Interpretations and Applications: Upper and lower bounds are obtained for the joint entropy of a collection of random variables in terms of an arbitrary collection of subset joint entropies. These inequalities generalize Shannon's chain rule for entropy as well as inequalities of Han, Fujishige and Shearer. A duality between the upper and lower bounds for joint entropy is developed. All of these results are shown to be special cases of general, new results for submodular functions-- thus, the inequalities presented constitute a richly structured class of Shannon-type inequalities. The new inequalities are applied to obtain new results in combinatorics, such as bounds on the number of independent sets in an arbitrary graph and the number of zero-error source-channel codes, as well as new determinantal inequalities in matrix theory. A new inequality for relative entropies is also developed, along with interpretations in terms of hypothesis testing. Finally, revealing connections of the results to literature in economics, computer science, and physics are explored.<|reference_end|>
arxiv
@article{madiman2008information, title={Information Inequalities for Joint Distributions, with Interpretations and Applications}, author={Mokshay Madiman and Prasad Tetali}, journal={IEEE Transactions on Information Theory, Vol. 56(6), pp. 2699-2713, June 2010}, year={2008}, doi={10.1109/TIT.2010.2046253}, archivePrefix={arXiv}, eprint={0901.0044}, primaryClass={cs.IT math.CO math.IT math.PR} }
madiman2008information
arxiv-5890
0901.0048
On Synchronous and Asynchronous Interaction in Distributed Systems
<|reference_start|>On Synchronous and Asynchronous Interaction in Distributed Systems: When considering distributed systems, it is a central issue how to deal with interactions between components. In this paper, we investigate the paradigms of synchronous and asynchronous interaction in the context of distributed systems. We investigate to what extent or under which conditions synchronous interaction is a valid concept for specification and implementation of such systems. We choose Petri nets as our system model and consider different notions of distribution by associating locations to elements of nets. First, we investigate the concept of simultaneity which is inherent in the semantics of Petri nets when transitions have multiple input places. We assume that tokens may only be taken instantaneously by transitions on the same location. We exhibit a hierarchy of `asynchronous' Petri net classes by different assumptions on possible distributions. Alternatively, we assume that the synchronisations specified in a Petri net are crucial system properties. Hence transitions and their preplaces may no longer placed on separate locations. We then answer the question which systems may be implemented in a distributed way without restricting concurrency, assuming that locations are inherently sequential. It turns out that in both settings we find semi-structural properties of Petri nets describing exactly the problematic situations for interactions in distributed systems.<|reference_end|>
arxiv
@article{van glabbeek2008on, title={On Synchronous and Asynchronous Interaction in Distributed Systems}, author={Rob van Glabbeek, Ursula Goltz and Jens-Wolfhard Schicke}, journal={arXiv preprint arXiv:0901.0048}, year={2008}, number={Technical Report 2008-04, Technical University of Braunschweig}, archivePrefix={arXiv}, eprint={0901.0048}, primaryClass={cs.LO cs.DC} }
van glabbeek2008on
arxiv-5891
0901.0055
Entropy and set cardinality inequalities for partition-determined functions
<|reference_start|>Entropy and set cardinality inequalities for partition-determined functions: A new notion of partition-determined functions is introduced, and several basic inequalities are developed for the entropy of such functions of independent random variables, as well as for cardinalities of compound sets obtained using these functions. Here a compound set means a set obtained by varying each argument of a function of several variables over a set associated with that argument, where all the sets are subsets of an appropriate algebraic structure so that the function is well defined. On the one hand, the entropy inequalities developed for partition-determined functions imply entropic analogues of general inequalities of Pl\"unnecke-Ruzsa type. On the other hand, the cardinality inequalities developed for compound sets imply several inequalities for sumsets, including for instance a generalization of inequalities proved by Gyarmati, Matolcsi and Ruzsa (2010). We also provide partial progress towards a conjecture of Ruzsa (2007) for sumsets in nonabelian groups. All proofs are elementary and rely on properly developing certain information-theoretic inequalities.<|reference_end|>
arxiv
@article{madiman2008entropy, title={Entropy and set cardinality inequalities for partition-determined functions}, author={Mokshay Madiman, Adam Marcus, Prasad Tetali}, journal={Random Structures and Algorithms, Vol. 40, pp. 399-424, 2012}, year={2008}, doi={10.1002/rsa.20385}, archivePrefix={arXiv}, eprint={0901.0055}, primaryClass={cs.IT math.CO math.IT math.NT math.PR} }
madiman2008entropy
arxiv-5892
0901.0062
Cores of Cooperative Games in Information Theory
<|reference_start|>Cores of Cooperative Games in Information Theory: Cores of cooperative games are ubiquitous in information theory, and arise most frequently in the characterization of fundamental limits in various scenarios involving multiple users. Examples include classical settings in network information theory such as Slepian-Wolf source coding and multiple access channels, classical settings in statistics such as robust hypothesis testing, and new settings at the intersection of networking and statistics such as distributed estimation problems for sensor networks. Cooperative game theory allows one to understand aspects of all of these problems from a fresh and unifying perspective that treats users as players in a game, sometimes leading to new insights. At the heart of these analyses are fundamental dualities that have been long studied in the context of cooperative games; for information theoretic purposes, these are dualities between information inequalities on the one hand and properties of rate, capacity or other resource allocation regions on the other.<|reference_end|>
arxiv
@article{madiman2008cores, title={Cores of Cooperative Games in Information Theory}, author={Mokshay Madiman}, journal={EURASIP Journal on Wireless Communications and Networking, Volume 2008, Article ID 318704}, year={2008}, doi={10.1155/2008/318704}, archivePrefix={arXiv}, eprint={0901.0062}, primaryClass={cs.IT cs.GT math.IT} }
madiman2008cores
arxiv-5893
0901.0065
Exact Histogram Specification Optimized for Structural Similarity
<|reference_start|>Exact Histogram Specification Optimized for Structural Similarity: An exact histogram specification (EHS) method modifies its input image to have a specified histogram. Applications of EHS include image (contrast) enhancement (e.g., by histogram equalization) and histogram watermarking. Performing EHS on an image, however, reduces its visual quality. Starting from the output of a generic EHS method, we maximize the structural similarity index (SSIM) between the original image (before EHS) and the result of EHS iteratively. Essential in this process is the computationally simple and accurate formula we derive for SSIM gradient. As it is based on gradient ascent, the proposed EHS always converges. Experimental results confirm that while obtaining the histogram exactly as specified, the proposed method invariably outperforms the existing methods in terms of visual quality of the result. The computational complexity of the proposed method is shown to be of the same order as that of the existing methods. Index terms: histogram modification, histogram equalization, optimization for perceptual visual quality, structural similarity gradient ascent, histogram watermarking, contrast enhancement.<|reference_end|>
arxiv
@article{avanaki2008exact, title={Exact Histogram Specification Optimized for Structural Similarity}, author={Alireza Avanaki}, journal={arXiv preprint arXiv:0901.0065}, year={2008}, doi={10.1007/s10043-009-0119-z}, archivePrefix={arXiv}, eprint={0901.0065}, primaryClass={cs.CV cs.MM} }
avanaki2008exact
arxiv-5894
0901.0118
On the Stability Region of Amplify-and-Forward Cooperative Relay Networks
<|reference_start|>On the Stability Region of Amplify-and-Forward Cooperative Relay Networks: This paper considers an amplify-and-forward relay network with fading states. Amplify-and-forward scheme (along with its variations) is the core mechanism for enabling cooperative communication in wireless networks, and hence understanding the network stability region under amplify-and-forward scheme is very important. However, in a relay network employing amplify-and-forward, the interaction between nodes is described in terms of real-valued ``packets'' (signals) instead of discrete packets (bits). This restrains the relay nodes from re-encoding the packets at desired rates. Hence, the stability analysis for relay networks employing amplify-and-forward scheme is by no means a straightforward extension of that in packet-based networks. In this paper, the stability region of a four-node relay network is characterized, and a simple throughput optimal algorithm with joint scheduling and rate allocation is proposed.<|reference_end|>
arxiv
@article{jose2008on, title={On the Stability Region of Amplify-and-Forward Cooperative Relay Networks}, author={Jubin Jose, Lei Ying, Sriram Vishwanath}, journal={arXiv preprint arXiv:0901.0118}, year={2008}, archivePrefix={arXiv}, eprint={0901.0118}, primaryClass={cs.IT math.IT} }
jose2008on
arxiv-5895
0901.0121
On upper bounds for parameters related to construction of special maximum matchings
<|reference_start|>On upper bounds for parameters related to construction of special maximum matchings: For a graph $G$ let $L(G)$ and $l(G)$ denote the size of the largest and smallest maximum matching of a graph obtained from $G$ by removing a maximum matching of $G$. We show that $L(G)\leq 2l(G),$ and $L(G)\leq (3/2)l(G)$ provided that $G$ contains a perfect matching. We also characterize the class of graphs for which $L(G)=2l(G)$. Our characterization implies the existence of a polynomial algorithm for testing the property $L(G)=2l(G)$. Finally we show that it is $NP$-complete to test whether a graph $G$ containing a perfect matching satisfies $L(G)=(3/2)l(G)$.<|reference_end|>
arxiv
@article{khojabaghyan2008on, title={On upper bounds for parameters related to construction of special maximum matchings}, author={Artur Khojabaghyan, Vahan V. Mkrtchyan}, journal={Discrete Mathematics 312/2 (2012), pp. 213--220}, year={2008}, doi={10.1016/j.disc.2011.08.026}, archivePrefix={arXiv}, eprint={0901.0121}, primaryClass={cs.DM math.CO} }
khojabaghyan2008on
arxiv-5896
0901.0131
Cloud Computing and Grid Computing 360-Degree Compared
<|reference_start|>Cloud Computing and Grid Computing 360-Degree Compared: Cloud Computing has become another buzzword after Web 2.0. However, there are dozens of different definitions for Cloud Computing and there seems to be no consensus on what a Cloud is. On the other hand, Cloud Computing is not a completely new concept; it has intricate connection to the relatively new but thirteen-year established Grid Computing paradigm, and other relevant technologies such as utility computing, cluster computing, and distributed systems in general. This paper strives to compare and contrast Cloud Computing with Grid Computing from various angles and give insights into the essential characteristics of both.<|reference_end|>
arxiv
@article{foster2008cloud, title={Cloud Computing and Grid Computing 360-Degree Compared}, author={Ian Foster, Yong Zhao, Ioan Raicu, Shiyong Lu}, journal={arXiv preprint arXiv:0901.0131}, year={2008}, doi={10.1109/GCE.2008.4738445}, archivePrefix={arXiv}, eprint={0901.0131}, primaryClass={cs.DC} }
foster2008cloud
arxiv-5897
0901.0134
Design and Evaluation of a Collective IO Model for Loosely Coupled Petascale Programming
<|reference_start|>Design and Evaluation of a Collective IO Model for Loosely Coupled Petascale Programming: Loosely coupled programming is a powerful paradigm for rapidly creating higher-level applications from scientific programs on petascale systems, typically using scripting languages. This paradigm is a form of many-task computing (MTC) which focuses on the passing of data between programs as ordinary files rather than messages. While it has the significant benefits of decoupling producer and consumer and allowing existing application programs to be executed in parallel with no recoding, its typical implementation using shared file systems places a high performance burden on the overall system and on the user who will analyze and consume the downstream data. Previous efforts have achieved great speedups with loosely coupled programs, but have done so with careful manual tuning of all shared file system access. In this work, we evaluate a prototype collective IO model for file-based MTC. The model enables efficient and easy distribution of input data files to computing nodes and gathering of output results from them. It eliminates the need for such manual tuning and makes the programming of large-scale clusters using a loosely coupled model easier. Our approach, inspired by in-memory approaches to collective operations for parallel programming, builds on fast local file systems to provide high-speed local file caches for parallel scripts, uses a broadcast approach to handle distribution of common input data, and uses efficient scatter/gather and caching techniques for input and output. We describe the design of the prototype model, its implementation on the Blue Gene/P supercomputer, and present preliminary measurements of its performance on synthetic benchmarks and on a large-scale molecular dynamics application.<|reference_end|>
arxiv
@article{zhang2008design, title={Design and Evaluation of a Collective IO Model for Loosely Coupled Petascale Programming}, author={Zhao Zhang, Allan Espinosa, Kamil Iskra, Ioan Raicu, Ian Foster, Michael Wilde}, journal={arXiv preprint arXiv:0901.0134}, year={2008}, doi={10.1109/MTAGS.2008.4777908}, archivePrefix={arXiv}, eprint={0901.0134}, primaryClass={cs.DC} }
zhang2008design
arxiv-5898
0901.0148
Using constraint programming to resolve the multi-source/multi-site data movement paradigm on the Grid
<|reference_start|>Using constraint programming to resolve the multi-source/multi-site data movement paradigm on the Grid: In order to achieve both fast and coordinated data transfer to collaborative sites as well as to create a distribution of data over multiple sites, efficient data movement is one of the most essential aspects in distributed environment. With such capabilities at hand, truly distributed task scheduling with minimal latencies would be reachable by internationally distributed collaborations (such as ones in HENP) seeking for scavenging or maximizing on geographically spread computational resources. But it is often not all clear (a) how to move data when available from multiple sources or (b) how to move data to multiple compute resources to achieve an optimal usage of available resources. We present a method of creating a Constraint Programming (CP) model consisting of sites, links and their attributes such as bandwidth for grid network data transfer also considering user tasks as part of the objective function for an optimal solution. We will explore and explain trade-off between schedule generation time and divergence from the optimal solution and show how to improve and render viable the solution's finding time by using search tree time limit, approximations, restrictions such as symmetry breaking or grouping similar tasks together, or generating sequence of optimal schedules by splitting the input problem. Results of data transfer simulation for each case will also include a well known Peer-2-Peer model, and time taken to generate a schedule as well as time needed for a schedule execution will be compared to a CP optimal solution. We will additionally present a possible implementation aimed to bring a distributed datasets (multiple sources) to a given site in a minimal time.<|reference_end|>
arxiv
@article{zerola2008using, title={Using constraint programming to resolve the multi-source/multi-site data movement paradigm on the Grid}, author={Michal Zerola, Jerome Lauret, Roman Bartak and Michal Sumbera}, journal={arXiv preprint arXiv:0901.0148}, year={2008}, archivePrefix={arXiv}, eprint={0901.0148}, primaryClass={cs.PF} }
zerola2008using
arxiv-5899
0901.0163
Limited-Rate Channel State Feedback for Multicarrier Block Fading Channels
<|reference_start|>Limited-Rate Channel State Feedback for Multicarrier Block Fading Channels: The capacity of a fading channel can be substantially increased by feeding back channel state information from the receiver to the transmitter. With limited-rate feedback what state information to feed back and how to encode it are important open questions. This paper studies power loading in a multicarrier system using no more than one bit of feedback per sub-channel. The sub-channels can be correlated and full channel state information is assumed at the receiver.<|reference_end|>
arxiv
@article{agarwal2009limited-rate, title={Limited-Rate Channel State Feedback for Multicarrier Block Fading Channels}, author={Manish Agarwal, Dongning Guo, Michael Honig}, journal={arXiv preprint arXiv:0901.0163}, year={2009}, doi={10.1109/TIT.2010.2080970}, archivePrefix={arXiv}, eprint={0901.0163}, primaryClass={cs.IT math.IT} }
agarwal2009limited-rate
arxiv-5900
0901.0168
Coding for Two-User SISO and MIMO Multiple Access Channels
<|reference_start|>Coding for Two-User SISO and MIMO Multiple Access Channels: Constellation Constrained (CC) capacity regions of a two-user SISO Gaussian Multiple Access Channel (GMAC) with finite complex input alphabets and continuous output are computed in this paper. When both the users employ the same code alphabet, it is well known that an appropriate rotation between the alphabets provides unique decodability to the receiver. For such a set-up, a metric is proposed to compute the angle(s) of rotation between the alphabets such that the CC capacity region is maximally enlarged. Subsequently, code pairs based on Trellis Coded Modulation (TCM) are designed for the two-user GMAC with $M$-PSK and $M$-PAM alphabet pairs for arbitrary values of $M$ and it is proved that, for certain angles of rotation, Ungerboeck labelling on the trellis of each user maximizes the guaranteed squared Euclidean distance of the \textit{sum trellis}. Hence, such a labelling scheme can be used systematically to construct trellis code pairs for a two-user GMAC to achieve sum rates close to the sum capacity of the channel. More importantly, it is shown for the first time that ML decoding complexity at the destination is significantly reduced when $M$-PAM alphabet pairs are employed with \textit{almost} no loss in the sum capacity. \indent A two-user Multiple Input Multiple Output (MIMO) fading MAC with $N_{t}$ antennas at both the users and a single antenna at the destination has also been considered with the assumption that the destination has the perfect knowledge of channel state information and the two users have the perfect knowledge of only the phase components of their channels. For such a set-up, two distinct classes of Space Time Block Code (STBC) pairs derived from the well known class of real orthogonal designs are proposed such that the STBC pairs are information lossless and have low ML decoding complexity.<|reference_end|>
arxiv
@article{harshan2009coding, title={Coding for Two-User SISO and MIMO Multiple Access Channels}, author={J. Harshan, B. Sundar Rajan}, journal={arXiv preprint arXiv:0901.0168}, year={2009}, archivePrefix={arXiv}, eprint={0901.0168}, primaryClass={cs.IT math.IT} }
harshan2009coding