Record schema (fields appear in this order in each record below): id, submitter, authors, title, comments, journal-ref, doi, report-no, categories, license, abstract, versions (list), update_date (timestamp[us]), authors_parsed (sequence).
0704.0062
Tomáš Vinař
Rastislav Šrámek, Broňa Brejová, Tomáš Vinař
On-line Viterbi Algorithm and Its Relationship to Random Walks
null
Algorithms in Bioinformatics: 7th International Workshop (WABI), 4645 volume of Lecture Notes in Computer Science, pp. 240-251, Philadelphia, PA, USA, September 2007. Springer
10.1007/978-3-540-74126-8_23
null
cs.DS
null
In this paper, we introduce the on-line Viterbi algorithm for decoding hidden Markov models (HMMs) in much smaller than linear space. Our analysis on two-state HMMs suggests that the expected maximum memory used to decode a sequence of length $n$ with an $m$-state HMM can be as low as $\Theta(m\log n)$, without a significant slow-down compared to the classical Viterbi algorithm. The classical Viterbi algorithm requires $O(mn)$ space, which is impractical for the analysis of long DNA sequences (such as complete human genome chromosomes) and for continuous data streams. We also experimentally demonstrate the performance of the on-line Viterbi algorithm on a simple HMM for gene finding on both simulated and real DNA sequences.
[ { "version": "v1", "created": "Sat, 31 Mar 2007 23:52:33 GMT" } ]
2010-01-25T00:00:00
[ [ "Šrámek", "Rastislav", "" ], [ "Brejová", "Broňa", "" ], [ "Vinař", "Tomáš", "" ] ]
0704.0468
Jinsong Tan
Jinsong Tan
Inapproximability of Maximum Weighted Edge Biclique and Its Applications
null
LNCS 4978, TAMC 2008, pp 282-293
null
null
cs.CC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a bipartite graph $G = (V_1,V_2,E)$ where edges take on {\it both} positive and negative weights from set $\mathcal{S}$, the {\it maximum weighted edge biclique} problem, or $\mathcal{S}$-MWEB for short, asks to find a bipartite subgraph whose sum of edge weights is maximized. This problem has various applications in bioinformatics, machine learning and databases and its (in)approximability remains open. In this paper, we show that for a wide range of choices of $\mathcal{S}$, specifically when $| \frac{\min\mathcal{S}} {\max \mathcal{S}} | \in \Omega(\eta^{\delta-1/2}) \cap O(\eta^{1/2-\delta})$ (where $\eta = \max\{|V_1|, |V_2|\}$, and $\delta \in (0,1/2]$), no polynomial time algorithm can approximate $\mathcal{S}$-MWEB within a factor of $n^{\epsilon}$ for some $\epsilon > 0$ unless $\mathsf{RP = NP}$. This hardness result gives justification of the heuristic approaches adopted for various applied problems in the aforementioned areas, and indicates that good approximation algorithms are unlikely to exist. Specifically, we give two applications by showing that: 1) finding statistically significant biclusters in the SAMBA model, proposed in \cite{Tan02} for the analysis of microarray data, is $n^{\epsilon}$-inapproximable; and 2) no polynomial time algorithm exists for the Minimum Description Length with Holes problem \cite{Bu05} unless $\mathsf{RP=NP}$.
[ { "version": "v1", "created": "Tue, 3 Apr 2007 21:39:11 GMT" }, { "version": "v2", "created": "Mon, 23 Mar 2009 02:50:29 GMT" } ]
2009-03-23T00:00:00
[ [ "Tan", "Jinsong", "" ] ]
0704.0788
Kerry Soileau
Kerry M. Soileau
Optimal Synthesis of Multiple Algorithms
null
null
null
null
cs.DS cs.PF
null
In this paper we give a definition of "algorithm," "finite algorithm," "equivalent algorithms," and what it means for a single algorithm to dominate a set of algorithms. We define a derived algorithm which may have a smaller mean execution time than any of its component algorithms. We give an explicit expression for the mean execution time (when it exists) of the derived algorithm. We give several illustrative examples of derived algorithms with two component algorithms. We include mean execution time solutions for two-algorithm processors whose joint density of execution times are of several general forms. For the case in which the joint density for a two-algorithm processor is a step function, we give a maximum-likelihood estimation scheme with which to analyze empirical processing time data.
[ { "version": "v1", "created": "Thu, 5 Apr 2007 19:47:54 GMT" } ]
2007-05-23T00:00:00
[ [ "Soileau", "Kerry M.", "" ] ]
0704.0834
Anatoly Rodionov
Anatoly Rodionov, Sergey Volkov
P-adic arithmetic coding
29 pages
null
null
null
cs.DS
null
A new incremental algorithm for data compression is presented. For a sequence of input symbols, the algorithm incrementally constructs a p-adic integer number as its output. The decoding process starts with the less significant part of a p-adic integer and incrementally reconstructs the sequence of input symbols. The algorithm is based on certain features of p-adic numbers and the p-adic norm. The p-adic coding algorithm may be considered a generalization of a popular compression technique, arithmetic coding. It is shown that for p = 2 the algorithm works as an integer variant of arithmetic coding; for a special class of models it gives exactly the same codes as Huffman's algorithm, and for another special model and a specific alphabet it gives Golomb-Rice codes.
[ { "version": "v1", "created": "Fri, 6 Apr 2007 02:30:42 GMT" } ]
2007-05-23T00:00:00
[ [ "Rodionov", "Anatoly", "" ], [ "Volkov", "Sergey", "" ] ]
0704.1068
Leo Liberti
Giacomo Nannicini, Philippe Baptiste, Gilles Barbier, Daniel Krob, Leo Liberti
Fast paths in large-scale dynamic road networks
12 pages, 4 figures
null
null
null
cs.NI cs.DS
null
Efficiently computing fast paths in large scale dynamic road networks (where dynamic traffic information is known over a part of the network) is a practical problem faced by several traffic information service providers who wish to offer a realistic fast path computation to GPS terminal enabled vehicles. The heuristic solution method we propose is based on a highway hierarchy-based shortest path algorithm for static large-scale networks; we maintain a static highway hierarchy and perform each query on the dynamically evaluated network.
[ { "version": "v1", "created": "Mon, 9 Apr 2007 07:04:19 GMT" }, { "version": "v2", "created": "Wed, 27 Jun 2007 18:17:35 GMT" } ]
2007-06-27T00:00:00
[ [ "Nannicini", "Giacomo", "" ], [ "Baptiste", "Philippe", "" ], [ "Barbier", "Gilles", "" ], [ "Krob", "Daniel", "" ], [ "Liberti", "Leo", "" ] ]
0704.1748
Frank Schweitzer
Markus M. Geipel
Self-Organization applied to Dynamic Network Layout
Text revision and figures improved in v.2. See http://www.sg.ethz.ch for more info and examples
International Journal of Modern Physics C vol. 18, no. 10 (2007), pp. 1537-1549
10.1142/S0129183107011558
null
physics.comp-ph cs.DS nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As networks and their structure have become a major field of research, a strong demand for network visualization has emerged. We address this challenge by formalizing the well-established spring layout in terms of dynamic equations. We thus open up the design space for new algorithms. Drawing from the knowledge of systems design, we derive a layout algorithm that remedies several drawbacks of the original spring layout. This new algorithm relies on the balancing of two antagonistic forces. We thus call it {\em arf} for "attractive and repulsive forces". It is, as we claim, particularly suited for a dynamic layout of smaller networks ($n < 10^3$). We back this claim with several application examples from ongoing complex systems research.
[ { "version": "v1", "created": "Fri, 13 Apr 2007 16:45:28 GMT" }, { "version": "v2", "created": "Thu, 19 Apr 2007 13:21:55 GMT" }, { "version": "v3", "created": "Sun, 7 Sep 2008 12:46:19 GMT" }, { "version": "v4", "created": "Mon, 24 Nov 2008 18:35:59 GMT" }, { "version": "v5", "created": "Tue, 20 Jan 2009 14:58:35 GMT" } ]
2009-11-13T00:00:00
[ [ "Geipel", "Markus M.", "" ] ]
0704.2092
Jinsong Tan
Jinsong Tan
A Note on the Inapproximability of Correlation Clustering
null
Information Processing Letters, 108: 331-335, 2008
null
null
cs.LG cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider inapproximability of the correlation clustering problem defined as follows: Given a graph $G = (V,E)$ where each edge is labeled either "+" (similar) or "-" (dissimilar), correlation clustering seeks to partition the vertices into clusters so that the number of pairs correctly (resp. incorrectly) classified with respect to the labels is maximized (resp. minimized). The two complementary problems are called MaxAgree and MinDisagree, respectively, and have been studied on complete graphs, where every edge is labeled, and general graphs, where some edges might not be labeled. Natural edge-weighted versions of both problems have been studied as well. Let S-MaxAgree denote the weighted problem where all weights are taken from a set S. We show that S-MaxAgree with weights bounded by $O(|V|^{1/2-\delta})$ essentially belongs to the same hardness class in the following sense: if there is a polynomial time algorithm that approximates S-MaxAgree within a factor of $\lambda = O(\log{|V|})$ with high probability, then for any choice of S', S'-MaxAgree can be approximated in polynomial time within a factor of $(\lambda + \epsilon)$, where $\epsilon > 0$ can be arbitrarily small, with high probability. A similar statement also holds for S-MinDisagree. This result implies that it is hard (assuming $NP \neq RP$) to approximate unweighted MaxAgree within a factor of $80/79-\epsilon$, improving upon the previously known factor of $116/115-\epsilon$ by Charikar et al. \cite{Chari05}.
[ { "version": "v1", "created": "Tue, 17 Apr 2007 03:52:41 GMT" }, { "version": "v2", "created": "Mon, 23 Mar 2009 03:22:02 GMT" } ]
2009-03-23T00:00:00
[ [ "Tan", "Jinsong", "" ] ]
0704.2919
David Eppstein
David Eppstein, Jean-Claude Falmagne, and Hasan Uzun
On Verifying and Engineering the Well-gradedness of a Union-closed Family
15 pages
J. Mathematical Psychology 53(1):34-39, 2009
10.1016/j.jmp.2008.09.002
null
math.CO cs.DM cs.DS
null
Current techniques for generating a knowledge space, such as QUERY, guarantee that the resulting structure is closed under union, but not that it satisfies well-gradedness, which is one of the defining conditions for a learning space. We give necessary and sufficient conditions on the base of a union-closed set family that ensure that the family is well-graded. We consider two cases, depending on whether or not the family contains the empty set. We also provide algorithms for efficiently testing these conditions, and for augmenting a set family in a minimal way to one that satisfies these conditions.
[ { "version": "v1", "created": "Mon, 23 Apr 2007 04:37:08 GMT" }, { "version": "v2", "created": "Fri, 16 Nov 2007 06:56:35 GMT" }, { "version": "v3", "created": "Mon, 14 Apr 2008 00:23:39 GMT" } ]
2009-08-28T00:00:00
[ [ "Eppstein", "David", "" ], [ "Falmagne", "Jean-Claude", "" ], [ "Uzun", "Hasan", "" ] ]
0704.3313
David Eppstein
David Eppstein and Michael T. Goodrich
Straggler Identification in Round-Trip Data Streams via Newton's Identities and Invertible Bloom Filters
Fuller version of paper appearing in 10th Worksh. Algorithms and Data Structures, Halifax, Nova Scotia, 2007
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the straggler identification problem, in which an algorithm must determine the identities of the remaining members of a set after it has had a large number of insertion and deletion operations performed on it, and now has relatively few remaining members. The goal is to do this in o(n) space, where n is the total number of identities. The straggler identification problem has applications, for example, in determining the set of unacknowledged packets in a high-bandwidth multicast data stream. We provide a deterministic solution to the straggler identification problem that uses only O(d log n) bits and is based on a novel application of Newton's identities for symmetric polynomials. This solution can identify any subset of d stragglers from a set of n O(log n)-bit identifiers, assuming that there are no false deletions of identities not already in the set. Indeed, we give a lower bound argument that shows that any small-space deterministic solution to the straggler identification problem cannot be guaranteed to handle false deletions. Nevertheless, we show that there is a simple randomized solution using O(d log n log(1/epsilon)) bits that can maintain a multiset and solve the straggler identification problem, tolerating false deletions, where epsilon>0 is a user-defined parameter bounding the probability of an incorrect response. This randomized solution is based on a new type of Bloom filter, which we call the invertible Bloom filter.
[ { "version": "v1", "created": "Wed, 25 Apr 2007 06:59:43 GMT" }, { "version": "v2", "created": "Fri, 18 May 2007 21:56:45 GMT" }, { "version": "v3", "created": "Thu, 10 Sep 2009 04:13:31 GMT" } ]
2009-09-10T00:00:00
[ [ "Eppstein", "David", "" ], [ "Goodrich", "Michael T.", "" ] ]
0704.3496
Frank Gurski
Frank Gurski
Polynomial algorithms for protein similarity search for restricted mRNA structures
10 Pages
null
null
null
cs.DS cs.CC
null
In this paper we consider the problem of computing an mRNA sequence of maximal similarity for a given mRNA with secondary structure constraints, introduced by Backofen et al. in [BNS02] and denoted the MRSO problem. The problem is known to be NP-complete for planar associated implied structure graphs of vertex degree at most 3. In [BFHV05] a first polynomial dynamic programming algorithm for MRSO on implied structure graphs with maximum vertex degree 3 and bounded cut-width was given. We give a simple but more general polynomial dynamic programming solution for the MRSO problem for associated implied structure graphs of bounded clique-width. Our result implies that MRSO is polynomial for graphs of bounded tree-width, co-graphs, $P_4$-sparse graphs, and distance hereditary graphs. Further, we conclude that the problem of comparing two solutions for MRSO is hard for the class of problems which can be solved in polynomial time with a number of parallel queries to an oracle in NP.
[ { "version": "v1", "created": "Thu, 26 Apr 2007 08:30:14 GMT" } ]
2007-05-23T00:00:00
[ [ "Gurski", "Frank", "" ] ]
0704.3773
Sam Tannous
Sam Tannous
Avoiding Rotated Bitboards with Direct Lookup
7 pages, 1 figure, 4 listings; replaced test positions, fixed typos
ICGA Journal, Vol. 30, No. 2, pp. 85-91. (June 2007).
null
null
cs.DS
null
This paper describes an approach for obtaining direct access to the attacked squares of sliding pieces without resorting to rotated bitboards. The technique involves creating four hash tables using the built in hash arrays from an interpreted, high level language. The rank, file, and diagonal occupancy are first isolated by masking the desired portion of the board. The attacked squares are then directly retrieved from the hash tables. Maintaining incrementally updated rotated bitboards becomes unnecessary as does all the updating, mapping and shifting required to access the attacked squares. Finally, rotated bitboard move generation speed is compared with that of the direct hash table lookup method.
[ { "version": "v1", "created": "Sat, 28 Apr 2007 03:11:59 GMT" }, { "version": "v2", "created": "Mon, 23 Jul 2007 19:23:39 GMT" } ]
2007-10-09T00:00:00
[ [ "Tannous", "Sam", "" ] ]
0704.3835
K. Y. Michael Wong
K. Y. Michael Wong and David Saad
Minimizing Unsatisfaction in Colourful Neighbourhoods
28 pages, 12 figures, substantially revised with additional explanation
J. Phys. A: Math. Theor. 41, 324023 (2008).
10.1088/1751-8113/41/32/324023
null
cs.DS cond-mat.dis-nn cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Colouring sparse graphs under various restrictions is a theoretical problem of significant practical relevance. Here we consider the problem of maximizing the number of different colours available at the nodes and their neighbourhoods, given a predetermined number of colours. In the analytical framework of a tree approximation, carried out at both zero and finite temperatures, solutions obtained by population dynamics give rise to estimates of the threshold connectivity for the incomplete to complete transition, which are consistent with those of existing algorithms. The nature of the transition as well as the validity of the tree approximation are investigated.
[ { "version": "v1", "created": "Sun, 29 Apr 2007 10:03:00 GMT" }, { "version": "v2", "created": "Fri, 21 Dec 2007 04:18:58 GMT" }, { "version": "v3", "created": "Tue, 1 Jul 2008 17:43:32 GMT" } ]
2009-11-13T00:00:00
[ [ "Wong", "K. Y. Michael", "" ], [ "Saad", "David", "" ] ]
0704.3904
Fabien Mathieu
Anh-Tuan Gai (INRIA Rocquencourt), Dmitry Lebedev (FT R&D), Fabien Mathieu (FT R&D), Fabien De Montgolfier (LIAFA), Julien Reynier (LIENS), Laurent Viennot (INRIA Rocquencourt)
Acyclic Preference Systems in P2P Networks
null
null
null
null
cs.DS cs.GT
null
In this work we study preference systems natural for the Peer-to-Peer paradigm. Most of them fall into three categories: global, symmetric and complementary. All these systems share an acyclicity property. As a consequence, they admit a stable (or Pareto efficient) configuration, where no participant can collaborate with better partners than their current ones. We analyze the representation of such preference systems and show that any acyclic system can be represented with a symmetric mark matrix. This gives a method to merge acyclic preference systems and retain the acyclicity. We also consider properties of the corresponding collaboration graph, such as clustering coefficient and diameter. In particular, studying the example of preferences based on real latency measurements, we observe that its stable configuration is a small-world graph.
[ { "version": "v1", "created": "Mon, 30 Apr 2007 09:26:39 GMT" }, { "version": "v2", "created": "Wed, 2 May 2007 13:07:31 GMT" } ]
2007-05-23T00:00:00
[ [ "Gai", "Anh-Tuan", "", "INRIA Rocquencourt" ], [ "Lebedev", "Dmitry", "", "FT R&D" ], [ "Mathieu", "Fabien", "", "FT R&D" ], [ "De Montgolfier", "Fabien", "", "LIAFA" ], [ "Reynier", "Julien", "", "LIENS" ], [ "Viennot", "Laurent", "", "INRIA Rocquencourt" ] ]
0705.0204
Tshilidzi Marwala
Lukasz A. Machowski, and Tshilidzi Marwala
Using Images to create a Hierarchical Grid Spatial Index
In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Taiwan, 2006, pp. 1974-1979
null
10.1109/ICSMC.2006.385020
null
cs.DS
null
This paper presents a hybrid approach to spatial indexing of two dimensional data. It sheds new light on the age-old problem by thinking of the traditional algorithms as working with images. Inspiration is drawn from an analogous situation that is found in machine and human vision. Image processing techniques are used to assist in the spatial indexing of the data. A fixed grid approach is used and bins with too many records are sub-divided hierarchically. Search queries are pre-computed for bins that do not contain any data records. This has the effect of dividing the search space up into non-rectangular regions which are based on the spatial properties of the data. The bucketing quad tree can be considered as an image with a resolution of two by two for each layer. The results show that this method performs better than the quad tree if there are more divisions per layer. This confirms our suspicion that the algorithm works better if it gets to look at the data with higher resolution images. An elegant class structure is developed where the implementation of concrete spatial indexes for a particular data type merely relies on rendering the data onto an image.
[ { "version": "v1", "created": "Wed, 2 May 2007 05:37:32 GMT" } ]
2016-11-17T00:00:00
[ [ "Machowski", "Lukasz A.", "" ], [ "Marwala", "Tshilidzi", "" ] ]
0705.0253
Jian Li
Mordecai Golin and Li Jian
More Efficient Algorithms and Analyses for Unequal Letter Cost Prefix-Free Coding
29 pages;9 figures;
null
null
null
cs.IT cs.DS math.IT
null
There is a large literature devoted to the problem of finding an optimal (min-cost) prefix-free code with an unequal letter-cost encoding alphabet. While there is no known polynomial time algorithm for solving it optimally, there are many good heuristics that all provide additive errors to optimal. The additive error in these algorithms usually depends linearly upon the largest encoding letter size. This paper was motivated by the problem of finding optimal codes when the encoding alphabet is infinite. Because the largest letter cost is infinite, the previous analyses could give infinite error bounds. We provide a new algorithm that works with infinite encoding alphabets. When restricted to the finite alphabet case, our algorithm often provides better error bounds than the best previous ones known.
[ { "version": "v1", "created": "Wed, 2 May 2007 11:23:52 GMT" }, { "version": "v2", "created": "Thu, 3 May 2007 09:00:23 GMT" } ]
2007-07-13T00:00:00
[ [ "Golin", "Mordecai", "" ], [ "Jian", "Li", "" ] ]
0705.0413
David Eppstein
David Eppstein, Marc van Kreveld, Elena Mumford, and Bettina Speckmann
Edges and Switches, Tunnels and Bridges
15 pages, 11 figures. To appear in 10th Worksh. Algorithms and Data Structures, Halifax, Nova Scotia, 2007. This version includes three pages of appendices that will not be included in the conference proceedings version
Computational Geometry Theory & Applications 42(8): 790-802, 2009
10.1016/j.comgeo.2008.05.005
null
cs.DS cs.CG
null
Edge casing is a well-known method to improve the readability of drawings of non-planar graphs. A cased drawing orders the edges of each edge crossing and interrupts the lower edge in an appropriate neighborhood of the crossing. Certain orders will lead to a more readable drawing than others. We formulate several optimization criteria that try to capture the concept of a "good" cased drawing. Further, we address the algorithmic question of how to turn a given drawing into an optimal cased drawing. For many of the resulting optimization problems, we either find polynomial time algorithms or NP-hardness results.
[ { "version": "v1", "created": "Thu, 3 May 2007 06:33:04 GMT" } ]
2009-07-09T00:00:00
[ [ "Eppstein", "David", "" ], [ "van Kreveld", "Marc", "" ], [ "Mumford", "Elena", "" ], [ "Speckmann", "Bettina", "" ] ]
0705.0552
Rajeev Raman
Rajeev Raman, Venkatesh Raman, Srinivasa Rao Satti
Succinct Indexable Dictionaries with Applications to Encoding $k$-ary Trees, Prefix Sums and Multisets
Final version of SODA 2002 paper; supersedes Leicester Tech report 2002/16
ACM Transactions on Algorithms vol 3 (2007), Article 43, 25pp
10.1145/1290672.1290680
null
cs.DS cs.DM cs.IT math.IT
null
We consider the {\it indexable dictionary} problem, which consists of storing a set $S \subseteq \{0,...,m-1\}$ for some integer $m$, while supporting the operations of $\mathrm{rank}(x)$, which returns the number of elements in $S$ that are less than $x$ if $x \in S$, and -1 otherwise; and $\mathrm{select}(i)$, which returns the $i$-th smallest element in $S$. We give a data structure that supports both operations in O(1) time on the RAM model and requires ${\cal B}(n,m) + o(n) + O(\lg \lg m)$ bits to store a set of size $n$, where ${\cal B}(n,m) = \lceil \lg {m \choose n} \rceil$ is the minimum number of bits required to store any $n$-element subset from a universe of size $m$. Previous dictionaries taking this space only supported (yes/no) membership queries in O(1) time. In the cell probe model we can remove the $O(\lg \lg m)$ additive term in the space bound, answering a question raised by Fich and Miltersen, and Pagh. We present extensions and applications of our indexable dictionary data structure, including: an information-theoretically optimal representation of a $k$-ary cardinal tree that supports standard operations in constant time; a representation of a multiset of size $n$ from $\{0,...,m-1\}$ in ${\cal B}(n,m+n) + o(n)$ bits that supports (appropriate generalizations of) $\mathrm{rank}$ and $\mathrm{select}$ operations in constant time; and a representation of a sequence of $n$ non-negative integers summing up to $m$ in ${\cal B}(n,m+n) + o(n)$ bits that supports prefix sum queries in constant time.
[ { "version": "v1", "created": "Fri, 4 May 2007 07:47:05 GMT" } ]
2011-08-10T00:00:00
[ [ "Raman", "Rajeev", "" ], [ "Raman", "Venkatesh", "" ], [ "Satti", "Srinivasa Rao", "" ] ]
0705.0561
Jingchao Chen
Jing-Chao Chen
Iterative Rounding for the Closest String Problem
This paper has been published in abstract Booklet of CiE09
null
null
null
cs.DS cs.CC
http://creativecommons.org/licenses/by-nc-sa/3.0/
The closest string problem is an NP-hard problem whose task is to find a string that minimizes the maximum Hamming distance to a given set of strings. It can be reduced to an integer program (IP); however, to date there exists no known polynomial-time algorithm for IP. In 2004, Meneses et al. introduced a branch-and-bound (B&B) method for solving the IP problem. Their algorithm is not always efficient and has exponential time complexity. In this paper, we attempt to solve the IP problem efficiently with a greedy iterative rounding technique. The proposed algorithm runs in polynomial time and is much faster than the existing B&B approach for the CSP. If the number of strings is limited to 3, the algorithm is provably at most 1 away from the optimum. The empirical results show that in many cases we can find an exact solution. Even when we fail to find an exact solution, the solution found is very close to the exact one.
[ { "version": "v1", "created": "Fri, 4 May 2007 03:01:42 GMT" }, { "version": "v2", "created": "Wed, 11 May 2011 00:18:55 GMT" } ]
2011-05-12T00:00:00
[ [ "Chen", "Jing-Chao", "" ] ]
0705.0588
Edgar Graaf de
Edgar H. de Graaf, Joost N. Kok, Walter A. Kosters
Clustering Co-occurrence of Maximal Frequent Patterns in Streams
null
null
null
null
cs.AI cs.DS
null
One way of getting a better view of data is using frequent patterns. In this paper frequent patterns are subsets that occur a minimal number of times in a stream of itemsets. However, the discovery of frequent patterns in streams has always been problematic. Because streams are potentially endless it is in principle impossible to say if a pattern is often occurring or not. Furthermore the number of patterns can be huge and a good overview of the structure of the stream is lost quickly. The proposed approach will use clustering to facilitate the analysis of the structure of the stream. A clustering on the co-occurrence of patterns will give the user an improved view on the structure of the stream. Some patterns might occur so much together that they should form a combined pattern. In this way the patterns in the clustering will be the largest frequent patterns: maximal frequent patterns. Our approach to decide if patterns occur often together will be based on a method of clustering when only the distance between pairs is known. The number of maximal frequent patterns is much smaller and combined with clustering methods these patterns provide a good view on the structure of the stream.
[ { "version": "v1", "created": "Fri, 4 May 2007 10:36:53 GMT" } ]
2007-05-23T00:00:00
[ [ "de Graaf", "Edgar H.", "" ], [ "Kok", "Joost N.", "" ], [ "Kosters", "Walter A.", "" ] ]
0705.0593
Edgar Graaf de
Edgar H. de Graaf, Joost N. Kok, Walter A. Kosters
Clustering with Lattices in the Analysis of Graph Patterns
null
null
null
null
cs.AI cs.DS
null
Mining frequent subgraphs is an area of research where we have a given set of graphs (each graph can be seen as a transaction), and we search for (connected) subgraphs contained in many of these graphs. In this work we will discuss techniques used in our framework Lattice2SAR for mining and analysing frequent subgraph data and their corresponding lattice information. Lattice information is provided by the graph mining algorithm gSpan; it contains all supergraph-subgraph relations of the frequent subgraph patterns -- and their supports. Lattice2SAR is in particular used in the analysis of frequent graph patterns where the graphs are molecules and the frequent subgraphs are fragments. In the analysis of fragments one is interested in the molecules where patterns occur. This data can be very extensive and in this paper we focus on a technique of making it better available by using the lattice information in our clustering. Now we can reduce the number of times the highly compressed occurrence data needs to be accessed by the user. The user does not have to browse all the occurrence data in search of patterns occurring in the same molecules. Instead one can directly see which frequent subgraphs are of interest.
[ { "version": "v1", "created": "Fri, 4 May 2007 10:52:28 GMT" } ]
2007-05-23T00:00:00
[ [ "de Graaf", "Edgar H.", "" ], [ "Kok", "Joost N.", "" ], [ "Kosters", "Walter A.", "" ] ]
0705.0933
Max Neunhöffer
Max Neunhoeffer, Cheryl E. Praeger
Computing Minimal Polynomials of Matrices
null
null
null
null
math.RA cs.DS
null
We present and analyse a Monte-Carlo algorithm to compute the minimal polynomial of an $n\times n$ matrix over a finite field that requires $O(n^3)$ field operations and O(n) random vectors, and is well suited for successful practical implementation. The algorithm, and its complexity analysis, use standard algorithms for polynomial and matrix operations. We compare features of the algorithm with several other algorithms in the literature. In addition we present a deterministic verification procedure which is similarly efficient in most cases but has a worst-case complexity of $O(n^4)$. Finally, we report the results of practical experiments with an implementation of our algorithms in comparison with the current algorithms in the {\sf GAP} library.
[ { "version": "v1", "created": "Mon, 7 May 2007 15:48:12 GMT" }, { "version": "v2", "created": "Mon, 7 Apr 2008 12:18:34 GMT" } ]
2008-04-07T00:00:00
[ [ "Neunhoeffer", "Max", "" ], [ "Praeger", "Cheryl E.", "" ] ]
0705.1025
David Eppstein
David Eppstein
Recognizing Partial Cubes in Quadratic Time
25 pages, five figures. This version significantly expands previous versions, including a new report on an implementation of the algorithm and experiments with it
Journal of Graph Algorithms and Applications 15(2) 269-293, 2011
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show how to test whether a graph with n vertices and m edges is a partial cube, and if so how to find a distance-preserving embedding of the graph into a hypercube, in the near-optimal time bound O(n^2), improving previous O(nm)-time solutions.
[ { "version": "v1", "created": "Tue, 8 May 2007 17:59:08 GMT" }, { "version": "v2", "created": "Tue, 19 Jul 2011 22:39:16 GMT" } ]
2011-07-21T00:00:00
[ [ "Eppstein", "David", "" ] ]
0705.1033
Kebin Wang
Michael A. Bender, Bradley C. Kuszmaul, Shang-Hua Teng, Kebin Wang
Optimal Cache-Oblivious Mesh Layouts
null
null
null
null
cs.DS cs.CE cs.MS cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A mesh is a graph that divides physical space into regularly-shaped regions. Mesh computations form the basis of many applications, e.g. finite-element methods, image rendering, and collision detection. In one important mesh primitive, called a mesh update, each mesh vertex stores a value and repeatedly updates this value based on the values stored in all neighboring vertices. The performance of a mesh update depends on the layout of the mesh in memory. This paper shows how to find a memory layout that guarantees that the mesh update has asymptotically optimal memory performance for any set of memory parameters. Such a memory layout is called cache-oblivious. Formally, for a $d$-dimensional mesh $G$, block size $B$, and cache size $M$ (where $M=\Omega(B^d)$), the mesh update of $G$ uses $O(1+|G|/B)$ memory transfers. The paper also shows how the mesh-update performance degrades for smaller caches, where $M=o(B^d)$. The paper then gives two algorithms for finding cache-oblivious mesh layouts. The first layout algorithm runs in time $O(|G|\log^2|G|)$ both in expectation and with high probability on a RAM. It uses $O(1+|G|\log^2(|G|/M)/B)$ memory transfers in expectation and $O(1+(|G|/B)(\log^2(|G|/M) + \log|G|))$ memory transfers with high probability in the cache-oblivious and disk-access machine (DAM) models. The layout is obtained by finding a fully balanced decomposition tree of $G$ and then performing an in-order traversal of the leaves of the tree. The second algorithm runs faster by almost a $\log|G|/\log\log|G|$ factor in all three memory models, both in expectation and with high probability. Its layout is obtained by finding a relax-balanced decomposition tree of $G$ and then performing an in-order traversal of the leaves of the tree.
[ { "version": "v1", "created": "Tue, 8 May 2007 05:59:55 GMT" }, { "version": "v2", "created": "Mon, 5 Oct 2009 18:45:25 GMT" } ]
2009-10-05T00:00:00
[ [ "Bender", "Michael A.", "" ], [ "Kuszmaul", "Bradley C.", "" ], [ "Teng", "Shang-Hua", "" ], [ "Wang", "Kebin", "" ] ]
0705.1364
Mustaq Ahmed
Mustaq Ahmed and Anna Lubiw
An Approximation Algorithm for Shortest Descending Paths
14 pages, 3 figures
null
null
null
cs.CG cs.DS
null
A path from s to t on a polyhedral terrain is descending if the height of a point p never increases while we move p along the path from s to t. No efficient algorithm is known to find a shortest descending path (SDP) from s to t in a polyhedral terrain. We give a simple approximation algorithm that solves the SDP problem on general terrains. Our algorithm discretizes the terrain with O(n^2 X / e) Steiner points so that after an O(n^2 X / e * log(n X /e))-time preprocessing phase for a given vertex s, we can determine a (1+e)-approximate SDP from s to any point v in O(n) time if v is either a vertex of the terrain or a Steiner point, and in O(n X /e) time otherwise. Here n is the size of the terrain, and X is a parameter of the geometry of the terrain.
[ { "version": "v1", "created": "Wed, 9 May 2007 22:02:28 GMT" } ]
2007-05-23T00:00:00
[ [ "Ahmed", "Mustaq", "" ], [ "Lubiw", "Anna", "" ] ]
0705.1521
Frank Gurski
Frank Gurski
A note on module-composed graphs
10 pages
null
null
null
cs.DS
null
In this paper we consider module-composed graphs, i.e. graphs which can be defined by a sequence of one-vertex insertions v_1,...,v_n, such that the neighbourhood of vertex v_i, 2<= i<= n, forms a module (a homogeneous set) of the graph defined by vertices v_1,..., v_{i-1}. We show that module-composed graphs are HHDS-free and thus homogeneously orderable, weakly chordal, and perfect. Every bipartite distance hereditary graph, every (co-2C_4,P_4)-free graph and thus every trivially perfect graph is module-composed. We give an O(|V_G|(|V_G|+|E_G|)) time algorithm to decide whether a given graph G is module-composed and construct a corresponding module-sequence. For the case of bipartite graphs, module-composed graphs are exactly distance hereditary graphs, which implies simple linear time algorithms for their recognition and construction of a corresponding module-sequence.
[ { "version": "v1", "created": "Thu, 10 May 2007 18:08:22 GMT" }, { "version": "v2", "created": "Sun, 22 Jul 2007 16:30:53 GMT" } ]
2007-07-23T00:00:00
[ [ "Gurski", "Frank", "" ] ]
0705.1750
Peng Cui
Peng Cui
A Tighter Analysis of Setcover Greedy Algorithm for Test Set
12 pages, 3 figures, Revised version
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The set cover greedy algorithm is a natural approximation algorithm for the test set problem. This paper gives a precise and tighter analysis of the performance guarantee of this algorithm. The author improves the performance guarantee $2\ln n$, which derives from the set cover problem, to $1.1354\ln n$ by applying the potential function technique. In addition, the author gives a nontrivial lower bound $1.0004609\ln n$ on the performance guarantee of this algorithm. This lower bound, together with the matching bound of the information content heuristic, confirms that the information content heuristic is slightly better than the set cover greedy algorithm in the worst case.
[ { "version": "v1", "created": "Sat, 12 May 2007 04:18:36 GMT" }, { "version": "v2", "created": "Thu, 17 May 2007 09:32:21 GMT" }, { "version": "v3", "created": "Sun, 29 Mar 2009 02:58:29 GMT" }, { "version": "v4", "created": "Sat, 4 Apr 2009 02:46:12 GMT" }, { "version": "v5", "created": "Sat, 29 Jan 2011 04:49:11 GMT" }, { "version": "v6", "created": "Sat, 5 Mar 2011 00:17:44 GMT" } ]
2011-03-08T00:00:00
[ [ "Cui", "Peng", "" ] ]
0705.1876
Grzegorz Malewicz
Grzegorz Malewicz
Scheduling Dags under Uncertainty
null
null
null
null
cs.DS cs.DM
null
This paper introduces a parallel scheduling problem where a directed acyclic graph modeling $t$ tasks and their dependencies needs to be executed on $n$ unreliable workers. Worker $i$ executes task $j$ correctly with probability $p_{i,j}$. The goal is to find a regimen $\Sigma$, that dictates how workers get assigned to tasks (possibly in parallel and redundantly) throughout execution, so as to minimize the expected completion time. This fundamental parallel scheduling problem arises in grid computing and project management fields, and has several applications. We show a polynomial time algorithm for the problem restricted to the case when dag width is at most a constant and the number of workers is also at most a constant. These two restrictions may appear to be too severe. However, they are fundamentally required. Specifically, we demonstrate that the problem is NP-hard with constant number of workers when dag width can grow, and is also NP-hard with constant dag width when the number of workers can grow. When both dag width and the number of workers are unconstrained, then the problem is inapproximable within factor less than 5/4, unless P=NP.
[ { "version": "v1", "created": "Mon, 14 May 2007 06:54:42 GMT" } ]
2007-05-23T00:00:00
[ [ "Malewicz", "Grzegorz", "" ] ]
0705.1970
Nikolaos Laoutaris
Nikolaos Laoutaris
A Closed-Form Method for LRU Replacement under Generalized Power-Law Demand
null
null
null
null
cs.DS
null
We consider the well known \emph{Least Recently Used} (LRU) replacement algorithm and analyze it under the independent reference model and generalized power-law demand. For this extensive family of demand distributions we derive a closed-form expression for the per object steady-state hit ratio. To the best of our knowledge, this is the first analytic derivation of the per object hit ratio of LRU that can be obtained in constant time without requiring laborious numeric computations or simulation. Since most applications of replacement algorithms include (at least) some scenarios under i.i.d. requests, our method has substantial practical value, especially when having to analyze multiple caches, where existing numeric methods and simulation become too time consuming.
[ { "version": "v1", "created": "Mon, 14 May 2007 16:04:48 GMT" } ]
2007-05-23T00:00:00
[ [ "Laoutaris", "Nikolaos", "" ] ]
0705.1986
Andrei Paun
Andrei Paun
On the Hopcroft's minimization algorithm
10 pages, 1 figure
null
null
null
cs.DS
null
We show that the absolute worst case time complexity for Hopcroft's minimization algorithm applied to unary languages is reached only for de Bruijn words. A previous paper by Berstel and Carton gave the example of de Bruijn words as a language that requires O(n log n) steps by carefully choosing the splitting sets and processing these sets in a FIFO mode. We refine the previous result by showing that the Berstel/Carton example is actually the absolute worst case time complexity in the case of unary languages. We also show that a LIFO implementation will not achieve the same worst case time complexity for unary languages. Lastly, we show that the same result also holds for cover automata and a modification of Hopcroft's algorithm used in the minimization of cover automata.
[ { "version": "v1", "created": "Mon, 14 May 2007 17:15:53 GMT" } ]
2007-05-23T00:00:00
[ [ "Paun", "Andrei", "" ] ]
0705.2125
Ching-Lueh Chang
Ching-Lueh Chang, Yuh-Dauh Lyuu
Parallelized approximation algorithms for minimum routing cost spanning trees
null
null
null
null
cs.DS cs.CC
null
We parallelize several previously proposed algorithms for the minimum routing cost spanning tree problem and some related problems.
[ { "version": "v1", "created": "Tue, 15 May 2007 17:48:42 GMT" }, { "version": "v2", "created": "Wed, 4 Jul 2007 20:10:22 GMT" } ]
2007-07-04T00:00:00
[ [ "Chang", "Ching-Lueh", "" ], [ "Lyuu", "Yuh-Dauh", "" ] ]
0705.2503
Peng Cui
Peng Cui
Improved Approximability Result for Test Set with Small Redundancy
7 pages
null
null
null
cs.DS cs.CC
null
Test set with redundancy is one of the focuses in recent bioinformatics research. Set cover greedy algorithm (SGA for short) is a commonly used algorithm for test set with redundancy. This paper proves that the approximation ratio of SGA can be $(2-\frac{1}{2r})\ln n+{3/2}\ln r+O(\ln\ln n)$ by using the potential function technique. This result is better than the approximation ratio $2\ln n$ which directly derives from set multicover, when $r=o(\frac{\ln n}{\ln\ln n})$, and is an extension of the approximability results for plain test set.
[ { "version": "v1", "created": "Thu, 17 May 2007 09:53:20 GMT" }, { "version": "v2", "created": "Thu, 7 Jun 2007 09:11:18 GMT" }, { "version": "v3", "created": "Tue, 11 Sep 2007 09:21:21 GMT" }, { "version": "v4", "created": "Thu, 27 Sep 2007 14:58:21 GMT" } ]
2007-09-27T00:00:00
[ [ "Cui", "Peng", "" ] ]
0705.2876
Phillip Bradford
Phillip G. Bradford and Daniel A. Ray
An online algorithm for generating fractal hash chains applied to digital chains of custody
null
null
null
null
cs.CR cs.DS
null
This paper gives an online algorithm for generating Jakobsson's fractal hash chains. Our new algorithm complements Jakobsson's fractal hash chain algorithm for preimage traversal, since his algorithm assumes the entire hash chain is precomputed and a particular list of Ceiling(log n) hash elements or pebbles is saved. Our online algorithm for hash chain traversal incrementally generates a hash chain of n hash elements without knowledge of n before it starts. For any n, our algorithm stores only the Ceiling(log n) pebbles which are precisely the inputs for Jakobsson's amortized hash chain preimage traversal algorithm. This compact representation is useful to generate, traverse, and store a number of large digital hash chains on a small and constrained device. We also give an application using both Jakobsson's and our new algorithm applied to digital chains of custody for validating dynamically changing forensics data.
[ { "version": "v1", "created": "Sun, 20 May 2007 17:14:38 GMT" } ]
2007-05-23T00:00:00
[ [ "Bradford", "Phillip G.", "" ], [ "Ray", "Daniel A.", "" ] ]
0705.4171
Eva Borbely
Eva Borbely
Grover search algorithm
null
null
null
null
cs.DS
null
A quantum algorithm is a set of instructions for a quantum computer; however, unlike algorithms in classical computer science, their results cannot be guaranteed. A quantum system can undergo two types of operation, measurement and quantum state transformation, and the operations themselves must be unitary (reversible). Most quantum algorithms involve a series of quantum state transformations followed by a measurement. Currently very few quantum algorithms are known and no general design methodology exists for their construction.
[ { "version": "v1", "created": "Tue, 29 May 2007 09:42:46 GMT" } ]
2007-05-30T00:00:00
[ [ "Borbely", "Eva", "" ] ]
0705.4320
William Hung
William N. N. Hung, Changjian Gao, Xiaoyu Song, Dan Hammerstrom
Defect-Tolerant CMOL Cell Assignment via Satisfiability
To appear in Nanoelectronic Devices for Defense and Security (NANO-DDS), Crystal City, Virginia, June 2007
null
null
null
cs.DM cs.DS
null
We present a CAD framework for CMOL, a hybrid CMOS/ molecular circuit architecture. Our framework first transforms any logically synthesized circuit based on AND/OR/NOT gates to a NOR gate circuit, and then maps the NOR gates to CMOL. We encode the CMOL cell assignment problem as boolean conditions. The boolean constraint is satisfiable if and only if there is a way to map all the NOR gates to the CMOL cells. We further investigate various types of static defects for the CMOL architecture, and propose a reconfiguration technique that can deal with these defects through our CAD framework. This is the first automated framework for CMOL cell assignment, and the first to model several different CMOL static defects. Empirical results show that our approach is efficient and scalable.
[ { "version": "v1", "created": "Tue, 29 May 2007 23:46:38 GMT" } ]
2007-05-31T00:00:00
[ [ "Hung", "William N. N.", "" ], [ "Gao", "Changjian", "" ], [ "Song", "Xiaoyu", "" ], [ "Hammerstrom", "Dan", "" ] ]
0705.4606
Marco Pellegrini
Filippo Geraci and Marco Pellegrini
Dynamic User-Defined Similarity Searching in Semi-Structured Text Retrieval
Submitted to Spire 2007
null
null
null
cs.IR cs.DS
null
Modern text retrieval systems often provide a similarity search utility, that allows the user to find efficiently a fixed number k of documents in the data set that are most similar to a given query (here a query is either a simple sequence of keywords or the identifier of a full document found in previous searches that is considered of interest). We consider the case of a textual database made of semi-structured documents. Each field, in turn, is modelled with a specific vector space. The problem is more complex when we also allow each such vector space to have an associated user-defined dynamic weight that influences its contribution to the overall dynamic aggregated and weighted similarity. This dynamic problem has been tackled in a recent paper by Singitham et al. in VLDB 2004. Their proposed solution, which we take as baseline, is a variant of the cluster-pruning technique that has the potential for scaling to very large corpora of documents, and is far more efficient than the naive exhaustive search. We devise an alternative way of embedding weights in the data structure, coupled with a non-trivial application of a clustering algorithm based on the furthest-point-first heuristic for the metric k-center problem. The validity of our approach is demonstrated experimentally by showing significant performance improvements over the scheme proposed by Singitham et al. in VLDB 2004. We improve significantly the tradeoffs between query time and output quality with respect to the baseline method of Singitham et al. in VLDB 2004, and also with respect to a novel method by Chierichetti et al. to appear in ACM PODS 2007. We also speed up the pre-processing time by a factor of at least thirty.
[ { "version": "v1", "created": "Thu, 31 May 2007 13:46:39 GMT" } ]
2007-06-01T00:00:00
[ [ "Geraci", "Filippo", "" ], [ "Pellegrini", "Marco", "" ] ]
0705.4618
Roberto Bagnara
Roberto Bagnara, Patricia M. Hill, Enea Zaffanella
An Improved Tight Closure Algorithm for Integer Octagonal Constraints
15 pages, 2 figures
null
null
null
cs.DS cs.CG cs.LO
null
Integer octagonal constraints (a.k.a. ``Unit Two Variables Per Inequality'' or ``UTVPI integer constraints'') constitute an interesting class of constraints for the representation and solution of integer problems in the fields of constraint programming and formal analysis and verification of software and hardware systems, since they couple algorithms having polynomial complexity with a relatively good expressive power. The main algorithms required for the manipulation of such constraints are the satisfiability check and the computation of the inferential closure of a set of constraints. The latter is called `tight' closure to mark the difference with the (incomplete) closure algorithm that does not exploit the integrality of the variables. In this paper we present and fully justify an O(n^3) algorithm to compute the tight closure of a set of UTVPI integer constraints.
[ { "version": "v1", "created": "Thu, 31 May 2007 14:32:46 GMT" }, { "version": "v2", "created": "Fri, 1 Jun 2007 08:17:11 GMT" } ]
2007-06-01T00:00:00
[ [ "Bagnara", "Roberto", "" ], [ "Hill", "Patricia M.", "" ], [ "Zaffanella", "Enea", "" ] ]
0705.4673
Béla Csaba
Béla Csaba (Anal. and Stoch. Res. Group, HAS), András S. Pluhár (Dept. of Comp. Sci., Univ. of Szeged)
A randomized algorithm for the on-line weighted bipartite matching problem
to be published
null
null
null
cs.DS cs.DM
null
We study the on-line minimum weighted bipartite matching problem in arbitrary metric spaces. Here, $n$ not necessary disjoint points of a metric space $M$ are given, and are to be matched on-line with $n$ points of $M$ revealed one by one. The cost of a matching is the sum of the distances of the matched points, and the goal is to find or approximate its minimum. The competitive ratio of the deterministic problem is known to be $\Theta(n)$. It was conjectured that a randomized algorithm may perform better against an oblivious adversary, namely with an expected competitive ratio $\Theta(\log n)$. We prove a slightly weaker result by showing a $o(\log^3 n)$ upper bound on the expected competitive ratio. As an application the same upper bound holds for the notoriously hard fire station problem, where $M$ is the real line.
[ { "version": "v1", "created": "Thu, 31 May 2007 18:35:21 GMT" }, { "version": "v2", "created": "Wed, 6 Jun 2007 20:24:22 GMT" } ]
2007-06-06T00:00:00
[ [ "Csaba", "Béla", "", "Anal. and Stoch. Res. Group, HAS" ], [ "Pluhár", "András S.", "", "Dept. of Comp. Sci., Univ. of Szeged" ] ]
0706.0046
Jingchao Chen
Jing-Chao Chen
Symmetry Partition Sort
null
null
null
null
cs.DS
null
In this paper, we propose a useful replacement for quicksort-style utility functions. The replacement is called Symmetry Partition Sort, which has essentially the same principle as Proportion Extend Sort. The main difference between them is that the new algorithm always places already partially sorted inputs (used as a basis for the proportional extension) on both ends when entering the partition routine. This is advantageous for speeding up the partition routine. The library function based on the new algorithm is more attractive than Psort, a library function introduced in 2004. Its implementation mechanism is simple. The source code is clearer. The speed is faster, with an O(n log n) performance guarantee. Both the robustness and adaptivity are better. As a library function, it is competitive.
[ { "version": "v1", "created": "Fri, 1 Jun 2007 01:47:06 GMT" } ]
2007-06-04T00:00:00
[ [ "Chen", "Jing-Chao", "" ] ]
0706.0489
Markus Jalsenius
Markus Jalsenius
Sampling Colourings of the Triangular Lattice
42 pages. Added appendix that describes implementation. Added ancillary files
null
null
null
math-ph cs.DM cs.DS math.MP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that the Glauber dynamics on proper 9-colourings of the triangular lattice is rapidly mixing, which allows for efficient sampling. Consequently, there is a fully polynomial randomised approximation scheme (FPRAS) for counting proper 9-colourings of the triangular lattice. Proper colourings correspond to configurations in the zero-temperature anti-ferromagnetic Potts model. We show that the spin system consisting of proper 9-colourings of the triangular lattice has strong spatial mixing. This implies that there is a unique infinite-volume Gibbs distribution, which is an important property studied in statistical physics. Our results build on previous work by Goldberg, Martin and Paterson, who showed similar results for 10 colours on the triangular lattice. Their work was preceded by Salas and Sokal's 11-colour result. Both proofs rely on computational assistance, and so does our 9-colour proof. We have used a randomised heuristic to guide us towards rigorous results.
[ { "version": "v1", "created": "Mon, 4 Jun 2007 17:49:25 GMT" }, { "version": "v2", "created": "Mon, 22 Mar 2010 17:49:43 GMT" }, { "version": "v3", "created": "Tue, 26 Oct 2010 02:09:02 GMT" } ]
2010-10-27T00:00:00
[ [ "Jalsenius", "Markus", "" ] ]
0706.1063
Matthias Brust R.
Matthias R. Brust, Steffen Rothkugel
Small Worlds: Strong Clustering in Wireless Networks
To appear in: 1st International Workshop on Localized Algorithms and Protocols for Wireless Sensor Networks (LOCALGOS 2007), 2007, IEEE Computer Society Press
null
null
null
cs.NI cs.DC cs.DS
null
Small-worlds represent efficient communication networks that obey two distinguishing characteristics: a high clustering coefficient together with a small characteristic path length. This paper focuses on an interesting paradox, that removing links in a network can increase the overall clustering coefficient. Reckful Roaming, as introduced in this paper, is a 2-localized algorithm that takes advantage of this paradox in order to selectively remove superfluous links, this way optimizing the clustering coefficient while still retaining a sufficiently small characteristic path length.
[ { "version": "v1", "created": "Thu, 7 Jun 2007 19:42:51 GMT" }, { "version": "v2", "created": "Mon, 11 Jun 2007 05:36:04 GMT" } ]
2007-06-11T00:00:00
[ [ "Brust", "Matthias R.", "" ], [ "Rothkugel", "Steffen", "" ] ]
0706.1084
Adam D. Smith
Sofya Raskhodnikova and Dana Ron and Ronitt Rubinfeld and Adam Smith
Sublinear Algorithms for Approximating String Compressibility
To appear in the proceedings of RANDOM 2007
null
null
null
cs.DS
null
We raise the question of approximating the compressibility of a string with respect to a fixed compression scheme, in sublinear time. We study this question in detail for two popular lossless compression schemes: run-length encoding (RLE) and Lempel-Ziv (LZ), and present sublinear algorithms for approximating compressibility with respect to both schemes. We also give several lower bounds that show that our algorithms for both schemes cannot be improved significantly. Our investigation of LZ yields results whose interest goes beyond the initial questions we set out to study. In particular, we prove combinatorial structural lemmas that relate the compressibility of a string with respect to Lempel-Ziv to the number of distinct short substrings contained in it. In addition, we show that approximating the compressibility with respect to LZ is related to approximating the support size of a distribution.
[ { "version": "v1", "created": "Fri, 8 Jun 2007 02:58:28 GMT" } ]
2007-06-11T00:00:00
[ [ "Raskhodnikova", "Sofya", "" ], [ "Ron", "Dana", "" ], [ "Rubinfeld", "Ronitt", "" ], [ "Smith", "Adam", "" ] ]
0706.1318
John Tomlin
S. Sathiya Keerthi and John A. Tomlin
Constructing a maximum utility slate of on-line advertisements
null
null
null
null
cs.DM cs.DS
null
We present an algorithm for constructing an optimal slate of sponsored search advertisements which respects the ordering that is the outcome of a generalized second price auction, but which must also accommodate complicating factors such as overall budget constraints. The algorithm is easily fast enough to use on the fly for typical problem sizes, or as a subroutine in an overall optimization.
[ { "version": "v1", "created": "Sat, 9 Jun 2007 16:18:45 GMT" } ]
2007-06-12T00:00:00
[ [ "Keerthi", "S. Sathiya", "" ], [ "Tomlin", "John A.", "" ] ]
0706.2155
Greg Sepesi
Greg Sepesi
Dualheap Selection Algorithm: Efficient, Inherently Parallel and Somewhat Mysterious
5 pages, 6 figures
null
null
null
cs.DS cs.CC cs.DC
null
An inherently parallel algorithm is proposed that efficiently performs selection: finding the K-th largest member of a set of N members. Selection is a common component of many more complex algorithms and therefore is a widely studied problem. Not much is new in the proposed dualheap selection algorithm: the heap data structure is from J.W.J. Williams, the bottom-up heap construction is from R.W. Floyd, and the concept of a two heap data structure is from J.W.J. Williams and D.E. Knuth. The algorithm's novelty is limited to a few relatively minor implementation twists: 1) the two heaps are oriented with their roots at the partition values rather than at the minimum and maximum values, 2) the coding of one of the heaps (the heap of smaller values) employs negative indexing, and 3) the exchange phase of the algorithm is similar to a bottom-up heap construction, but navigates the heap with a post-order tree traversal. When run on a single processor, the dualheap selection algorithm's performance is competitive with quickselect with median estimation, a common variant of C.A.R. Hoare's quicksort algorithm. When run on parallel processors, the dualheap selection algorithm is superior due to its subtasks that are easily partitioned and innately balanced.
[ { "version": "v1", "created": "Thu, 14 Jun 2007 16:11:24 GMT" } ]
2007-06-15T00:00:00
[ [ "Sepesi", "Greg", "" ] ]
0706.2725
Guohun Zhu
Guohun Zhu
The Complexity of Determining Existence a Hamiltonian Cycle is $O(n^3)$
6 pages
null
null
null
cs.DS cs.CC cs.DM
null
The Hamiltonian cycle problem in a digraph is mapped into a matching cover bipartite graph. Based on this mapping, it is proved that determining the existence of a Hamiltonian cycle in a graph can be done in $O(n^3)$ time.
[ { "version": "v1", "created": "Tue, 19 Jun 2007 07:57:51 GMT" } ]
2007-06-20T00:00:00
[ [ "Zhu", "Guohun", "" ] ]
0706.2839
Rajeev Raman
Naila Rahman and Rajeev Raman
Cache Analysis of Non-uniform Distribution Sorting Algorithms
The full version of our ESA 2000 paper (LNCS 1879) on this subject
null
null
null
cs.DS cs.PF
null
We analyse the average-case cache performance of distribution sorting algorithms in the case when keys are independently but not necessarily uniformly distributed. The analysis is for both `in-place' and `out-of-place' distribution sorting algorithms and is more accurate than the analysis presented in \cite{RRESA00}. In particular, this new analysis yields tighter upper and lower bounds when the keys are drawn from a uniform distribution. We use this analysis to tune the performance of the integer sorting algorithm MSB radix sort when it is used to sort independent uniform floating-point numbers (floats). Our tuned MSB radix sort algorithm comfortably outperforms cache-tuned implementations of bucketsort \cite{RR99} and Quicksort when sorting uniform floats from $[0, 1)$.
[ { "version": "v1", "created": "Tue, 19 Jun 2007 17:12:47 GMT" }, { "version": "v2", "created": "Mon, 13 Aug 2007 22:57:01 GMT" } ]
2007-08-14T00:00:00
[ [ "Rahman", "Naila", "" ], [ "Raman", "Rajeev", "" ] ]
0706.2893
Greg Sepesi
Greg Sepesi
Dualheap Sort Algorithm: An Inherently Parallel Generalization of Heapsort
4 pages, 4 figures
null
null
null
cs.DS cs.CC cs.DC
null
A generalization of the heapsort algorithm is proposed. At the expense of about 50% more comparison and move operations for typical cases, the dualheap sort algorithm offers several advantages over heapsort: improved cache performance, better performance if the input happens to be already sorted, and easier parallel implementations.
[ { "version": "v1", "created": "Wed, 20 Jun 2007 14:42:45 GMT" } ]
2007-06-21T00:00:00
[ [ "Sepesi", "Greg", "" ] ]
0706.3104
Cristina Toninelli
Marc Mezard, Cristina Toninelli
Group Testing with Random Pools: optimal two-stage algorithms
12 pages
null
null
null
cs.DS cond-mat.dis-nn cond-mat.stat-mech cs.IT math.IT
null
We study Probabilistic Group Testing of a set of N items each of which is defective with probability p. We focus on the double limit of small defect probability, $p\ll 1$, and large number of variables, $N\gg 1$, taking either $p\to 0$ after $N\to\infty$ or $p=1/N^{\beta}$ with $\beta\in(0,1/2)$. In both settings the optimal number of tests which are required to identify with certainty the defectives via a two-stage procedure, $\bar T(N,p)$, is known to scale as $Np|\log p|$. Here we determine the sharp asymptotic value of $\bar T(N,p)/(Np|\log p|)$ and construct a class of two-stage algorithms over which this optimal value is attained. This is done by choosing a proper bipartite regular graph (of tests and variable nodes) for the first stage of the detection. Furthermore we prove that this optimal value is also attained on average over a random bipartite graph where all variables have the same degree, while the tests have Poisson-distributed degrees. Finally, we improve the existing upper and lower bound for the optimal number of tests in the case $p=1/N^{\beta}$ with $\beta\in[1/2,1)$.
[ { "version": "v1", "created": "Thu, 21 Jun 2007 08:57:44 GMT" } ]
2007-11-14T00:00:00
[ [ "Mezard", "Marc", "" ], [ "Toninelli", "Cristina", "" ] ]
0706.3565
Anatoly Plotnikov
Anatoly D. Plotnikov
Experimental Algorithm for the Maximum Independent Set Problem
From the author's book "Discrete Mathematics", 3rd ed., Moscow, New Knowledge, 2007; 18 pages, 8 figures
Cybernetics and Systems Analysis: Volume 48, Issue 5 (2012), Page 673-680
null
null
cs.DS
null
We develop an experimental algorithm for exactly solving the maximum independent set problem. The algorithm consecutively finds maximal independent sets of vertices in an arbitrary undirected graph such that each next set contains more elements than the preceding one. For this purpose, we use a technique developed by Ford and Fulkerson for finite partially ordered sets, in particular, their method for partitioning a poset into the minimum number of chains while finding the maximum antichain. In the process of solving, a special digraph is constructed, and a conjecture is formulated concerning the properties of such a digraph. This allows us to propose the solution algorithm. Its theoretical running-time estimate is $O(n^{8})$, where $n$ is the number of graph vertices. The proposed algorithm was tested by a program on random graphs. The testing confirms the correctness of the algorithm.
[ { "version": "v1", "created": "Mon, 25 Jun 2007 06:45:49 GMT" }, { "version": "v2", "created": "Mon, 2 Jul 2007 02:16:12 GMT" } ]
2016-03-02T00:00:00
[ [ "Plotnikov", "Anatoly D.", "" ] ]
0706.4107
Mihai Patrascu
Gianni Franceschini, S. Muthukrishnan and Mihai Patrascu
Radix Sorting With No Extra Space
Full version of paper accepted to ESA 2007. (17 pages)
null
null
null
cs.DS
null
It is well known that n integers in the range [1,n^c] can be sorted in O(n) time in the RAM model using radix sorting. More generally, integers in any range [1,U] can be sorted in O(n sqrt{loglog n}) time. However, these algorithms use O(n) words of extra memory. Is this necessary? We present a simple, stable, integer sorting algorithm for words of size O(log n), which works in O(n) time and uses only O(1) words of extra memory on a RAM model. This is the integer sorting case most useful in practice. We extend this result with same bounds to the case when the keys are read-only, which is of theoretical interest. Another interesting question is the case of arbitrary c. Here we present a black-box transformation from any RAM sorting algorithm to a sorting algorithm which uses only O(1) extra space and has the same running time. This settles the complexity of in-place sorting in terms of the complexity of sorting.
[ { "version": "v1", "created": "Wed, 27 Jun 2007 22:04:40 GMT" } ]
2007-06-29T00:00:00
[ [ "Franceschini", "Gianni", "" ], [ "Muthukrishnan", "S.", "" ], [ "Patrascu", "Mihai", "" ] ]
0707.0282
Igor Razgon
Igor Razgon and Barry O'Sullivan
Directed Feedback Vertex Set is Fixed-Parameter Tractable
14 pages
null
null
null
cs.DS cs.CC
null
We resolve positively a long standing open question regarding the fixed-parameter tractability of the parameterized Directed Feedback Vertex Set problem. In particular, we propose an algorithm which solves this problem in $O(8^kk!*poly(n))$.
[ { "version": "v1", "created": "Mon, 2 Jul 2007 17:56:53 GMT" } ]
2007-07-03T00:00:00
[ [ "Razgon", "Igor", "" ], [ "O'Sullivan", "Barry", "" ] ]
0707.0421
Riccardo Dondi
Paola Bonizzoni, Gianluca Della Vedova, Riccardo Dondi
The $k$-anonymity Problem is Hard
21 pages. A short version of this paper has been accepted at FCT 2009 - 17th International Symposium on Fundamentals of Computation Theory
null
null
null
cs.DB cs.CC cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of publishing personal data without giving up privacy is becoming increasingly important. An interesting formalization recently proposed is k-anonymity. This approach requires that the rows in a table are clustered in sets of size at least k and that all the rows in a cluster become the same tuple, after the suppression of some records. The natural optimization problem, where the goal is to minimize the number of suppressed entries, is known to be NP-hard when the values are over a ternary alphabet, k = 3 and the row length is unbounded. In this paper we give a lower bound on the approximation factor that any polynomial-time algorithm can achieve on two restrictions of the problem, namely (i) when the record values are over a binary alphabet and k = 3, and (ii) when the records have length at most 8 and k = 4, showing that these restrictions of the problem are APX-hard.
[ { "version": "v1", "created": "Tue, 3 Jul 2007 14:17:49 GMT" }, { "version": "v2", "created": "Tue, 2 Jun 2009 16:40:37 GMT" } ]
2009-06-02T00:00:00
[ [ "Bonizzoni", "Paola", "" ], [ "Della Vedova", "Gianluca", "" ], [ "Dondi", "Riccardo", "" ] ]
0707.0546
Juli\'an Mestre
Juli\'an Mestre
Weighted Popular Matchings
14 pages, 3 figures. A preliminary version appeared in the Proceedings of the 33rd International Colloquium on Automata, Languages and Programming (ICALP)
null
null
null
cs.DS
null
We study the problem of assigning jobs to applicants. Each applicant has a weight and provides a preference list ranking a subset of the jobs. A matching M is popular if there is no other matching M' such that the weight of the applicants who prefer M' over M exceeds the weight of those who prefer M over M'. This paper gives efficient algorithms to find a popular matching if one exists.
[ { "version": "v1", "created": "Wed, 4 Jul 2007 06:55:43 GMT" } ]
2007-07-05T00:00:00
[ [ "Mestre", "Julián", "" ] ]
0707.0644
Ali Akhavi
Ali Akhavi (GREYC), C\'eline Moreira (GREYC)
Another view of the Gaussian algorithm
null
Proceedings of Latin'04 (04/2004) 474--487
null
null
cs.DS cs.DM
null
We introduce here a rewrite system in the group of unimodular matrices, \emph{i.e.}, matrices with integer entries and with determinant equal to $\pm 1$. We use this rewrite system to precisely characterize the mechanism of the Gaussian algorithm, which finds shortest vectors in a two--dimensional lattice given by any basis. Putting together the algorithmics of lattice reduction and rewrite system theory, we propose a new worst--case analysis of the Gaussian algorithm. There is already an optimal worst--case bound for some variant of the Gaussian algorithm due to Vall\'ee \cite{ValGaussRevisit}. She used essentially geometric considerations. Our analysis generalizes her result to the case of the usual Gaussian algorithm. An interesting point in our work is its possible (but not easy) generalization to the same problem in higher dimensions, in order to exhibit a tight upper bound for the number of iterations of LLL--like reduction algorithms in the worst case. Moreover, our method seems to work for analyzing other families of algorithms. As an illustration, the analysis of sorting algorithms is briefly developed in the last section of the paper.
[ { "version": "v1", "created": "Wed, 4 Jul 2007 15:37:15 GMT" } ]
2007-07-05T00:00:00
[ [ "Akhavi", "Ali", "", "GREYC" ], [ "Moreira", "Céline", "", "GREYC" ] ]
0707.0648
Viswanath Nagarajan
Anupam Gupta, MohammadTaghi Hajiaghayi, Viswanath Nagarajan, R. Ravi
Dial a Ride from k-forest
Preliminary version in Proc. European Symposium on Algorithms, 2007
null
null
null
cs.DS
null
The k-forest problem is a common generalization of both the k-MST and the dense-$k$-subgraph problems. Formally, given a metric space on $n$ vertices $V$, with $m$ demand pairs $\subseteq V \times V$ and a ``target'' $k\le m$, the goal is to find a minimum cost subgraph that connects at least $k$ demand pairs. In this paper, we give an $O(\min\{\sqrt{n},\sqrt{k}\})$-approximation algorithm for $k$-forest, improving on the previous best ratio of $O(n^{2/3}\log n)$ by Segev & Segev. We then apply our algorithm for k-forest to obtain approximation algorithms for several Dial-a-Ride problems. The basic Dial-a-Ride problem is the following: given an $n$ point metric space with $m$ objects each with its own source and destination, and a vehicle capable of carrying at most $k$ objects at any time, find the minimum length tour that uses this vehicle to move each object from its source to destination. We prove that an $\alpha$-approximation algorithm for the $k$-forest problem implies an $O(\alpha\cdot\log^2n)$-approximation algorithm for Dial-a-Ride. Using our results for $k$-forest, we get an $O(\min\{\sqrt{n},\sqrt{k}\}\cdot\log^2 n)$- approximation algorithm for Dial-a-Ride. The only previous result known for Dial-a-Ride was an $O(\sqrt{k}\log n)$-approximation by Charikar & Raghavachari; our results give a different proof of a similar approximation guarantee--in fact, when the vehicle capacity $k$ is large, we give a slight improvement on their results.
[ { "version": "v1", "created": "Wed, 4 Jul 2007 16:08:40 GMT" } ]
2007-07-05T00:00:00
[ [ "Gupta", "Anupam", "" ], [ "Hajiaghayi", "MohammadTaghi", "" ], [ "Nagarajan", "Viswanath", "" ], [ "Ravi", "R.", "" ] ]
0707.1051
Mark Braverman
Mark Braverman, Elchanan Mossel
Noisy Sorting Without Resampling
null
null
null
null
cs.DS
null
In this paper we study noisy sorting without re-sampling. In this problem there is an unknown order $a_{\pi(1)} < ... < a_{\pi(n)}$ where $\pi$ is a permutation on $n$ elements. The input is the status of $n \choose 2$ queries of the form $q(a_i,a_j)$, where $q(a_i,a_j) = +$ with probability at least $1/2+\gamma$ if $\pi(i) > \pi(j)$ for all pairs $i \neq j$, where $\gamma > 0$ is a constant and $q(a_i,a_j) = -q(a_j,a_i)$ for all $i$ and $j$. It is assumed that the errors are independent. Given the status of the queries the goal is to find the maximum likelihood order. In other words, the goal is to find a permutation $\sigma$ that minimizes the number of pairs $\sigma(i) > \sigma(j)$ where $q(\sigma(i),\sigma(j)) = -$. The problem so defined is the feedback arc set problem on distributions of inputs, each of which is a tournament obtained as a noisy perturbation of a linear order. Note that when $\gamma < 1/2$ and $n$ is large, it is impossible to recover the original order $\pi$. It is known that the weighted feedback arc set problem on tournaments is NP-hard in general. Here we present an algorithm of running time $n^{O(\gamma^{-4})}$ and sampling complexity $O_{\gamma}(n \log n)$ that with high probability solves the noisy sorting without re-sampling problem. We also show that if $a_{\sigma(1)},a_{\sigma(2)},...,a_{\sigma(n)}$ is an optimal solution of the problem then it is ``close'' to the original order. More formally, with high probability it holds that $\sum_i |\sigma(i) - \pi(i)| = \Theta(n)$ and $\max_i |\sigma(i) - \pi(i)| = \Theta(\log n)$. Our results are of interest in applications to ranking, such as ranking in sports, or ranking of search items based on comparisons by experts.
[ { "version": "v1", "created": "Fri, 6 Jul 2007 21:30:24 GMT" } ]
2007-07-10T00:00:00
[ [ "Braverman", "Mark", "" ], [ "Mossel", "Elchanan", "" ] ]
0707.1095
Gregory Gutin
Noga Alon, Fedor V. Fomin, Gregory Gutin, Michael Krivelevich, Saket Saurabh
Better Algorithms and Bounds for Directed Maximum Leaf Problems
null
null
null
null
cs.DS cs.DM
null
The {\sc Directed Maximum Leaf Out-Branching} problem is to find an out-branching (i.e. a rooted oriented spanning tree) in a given digraph with the maximum number of leaves. In this paper, we improve known parameterized algorithms and combinatorial bounds on the number of leaves in out-branchings. We show that \begin{itemize} \item every strongly connected digraph $D$ of order $n$ with minimum in-degree at least 3 has an out-branching with at least $(n/4)^{1/3}-1$ leaves; \item if a strongly connected digraph $D$ does not contain an out-branching with $k$ leaves, then the pathwidth of its underlying graph is $O(k\log k)$; \item it can be decided in time $2^{O(k\log^2 k)}\cdot n^{O(1)}$ whether a strongly connected digraph on $n$ vertices has an out-branching with at least $k$ leaves. \end{itemize} All improvements use properties of extremal structures obtained after applying local search and of some out-branching decompositions.
[ { "version": "v1", "created": "Sat, 7 Jul 2007 15:52:29 GMT" } ]
2007-07-10T00:00:00
[ [ "Alon", "Noga", "" ], [ "Fomin", "Fedor V.", "" ], [ "Gutin", "Gregory", "" ], [ "Krivelevich", "Michael", "" ], [ "Saurabh", "Saket", "" ] ]
0707.1532
Samantha Riesenfeld
Constantinos Daskalakis (1), Richard M. Karp (1), Elchanan Mossel (1), Samantha Riesenfeld (1), Elad Verbin (2) ((1) U.C. Berkeley, (2) Tel Aviv University)
Sorting and Selection in Posets
24 pages
null
null
null
cs.DS cs.DM
null
Classical problems of sorting and searching assume an underlying linear ordering of the objects being compared. In this paper, we study a more general setting, in which some pairs of objects are incomparable. This generalization is relevant in applications related to rankings in sports, college admissions, or conference submissions. It also has potential applications in biology, such as comparing the evolutionary fitness of different strains of bacteria, or understanding input-output relations among a set of metabolic reactions or the causal influences among a set of interacting genes or proteins. Our results improve and extend results from two decades ago of Faigle and Tur\'{a}n. A measure of complexity of a partially ordered set (poset) is its width. Our algorithms obtain information about a poset by queries that compare two elements. We present an algorithm that sorts, i.e. completely identifies, a width w poset of size n and has query complexity O(wn + nlog(n)), which is within a constant factor of the information-theoretic lower bound. We also show that a variant of Mergesort has query complexity O(wn(log(n/w))) and total complexity O((w^2)nlog(n/w)). Faigle and Tur\'{a}n have shown that the sorting problem has query complexity O(wn(log(n/w))) but did not address its total complexity. For the related problem of determining the minimal elements of a poset, we give efficient deterministic and randomized algorithms with O(wn) query and total complexity, along with matching lower bounds for the query complexity up to a factor of 2. We generalize these results to the k-selection problem of determining the elements of height at most k. We also derive upper bounds on the total complexity of some other problems of a similar flavor.
[ { "version": "v1", "created": "Tue, 10 Jul 2007 21:52:17 GMT" } ]
2007-07-12T00:00:00
[ [ "Daskalakis", "Constantinos", "" ], [ "Karp", "Richard M.", "" ], [ "Mossel", "Elchanan", "" ], [ "Riesenfeld", "Samantha", "" ], [ "Verbin", "Elad", "" ] ]
0707.1714
Michael Mahoney
Anirban Dasgupta, Petros Drineas, Boulos Harb, Ravi Kumar, and Michael W. Mahoney
Sampling Algorithms and Coresets for Lp Regression
19 pages, 1 figure
null
null
null
cs.DS
null
The Lp regression problem takes as input a matrix $A \in \Real^{n \times d}$, a vector $b \in \Real^n$, and a number $p \in [1,\infty)$, and it returns as output a number ${\cal Z}$ and a vector $x_{opt} \in \Real^d$ such that ${\cal Z} = \min_{x \in \Real^d} ||Ax -b||_p = ||Ax_{opt}-b||_p$. In this paper, we construct coresets and obtain an efficient two-stage sampling-based approximation algorithm for the very overconstrained ($n \gg d$) version of this classical problem, for all $p \in [1, \infty)$. The first stage of our algorithm non-uniformly samples $\hat{r}_1 = O(36^p d^{\max\{p/2+1, p\}+1})$ rows of $A$ and the corresponding elements of $b$, and then it solves the Lp regression problem on the sample; we prove this is an 8-approximation. The second stage of our algorithm uses the output of the first stage to resample $\hat{r}_1/\epsilon^2$ constraints, and then it solves the Lp regression problem on the new sample; we prove this is a $(1+\epsilon)$-approximation. Our algorithm unifies, improves upon, and extends the existing algorithms for special cases of Lp regression, namely $p = 1,2$. In course of proving our result, we develop two concepts--well-conditioned bases and subspace-preserving sampling--that are of independent interest.
[ { "version": "v1", "created": "Wed, 11 Jul 2007 22:04:18 GMT" } ]
2007-07-13T00:00:00
[ [ "Dasgupta", "Anirban", "" ], [ "Drineas", "Petros", "" ], [ "Harb", "Boulos", "" ], [ "Kumar", "Ravi", "" ], [ "Mahoney", "Michael W.", "" ] ]
0707.2160
Seth Pettie
Seth Pettie
Splay Trees, Davenport-Schinzel Sequences, and the Deque Conjecture
null
null
null
null
cs.DS
null
We introduce a new technique to bound the asymptotic performance of splay trees. The basic idea is to transcribe, in an indirect fashion, the rotations performed by the splay tree as a Davenport-Schinzel sequence S, none of whose subsequences are isomorphic to a fixed forbidden subsequence. We direct this technique towards Tarjan's deque conjecture and prove that n deque operations require O(n alpha^*(n)) time, where alpha^*(n) is the minimum number of applications of the inverse-Ackermann function mapping n to a constant. We are optimistic that this approach could be directed towards other open conjectures on splay trees such as the traversal and split conjectures.
[ { "version": "v1", "created": "Sat, 14 Jul 2007 16:38:08 GMT" } ]
2007-07-17T00:00:00
[ [ "Pettie", "Seth", "" ] ]
0707.2701
Gernot Schaller
Gernot Schaller
A fixed point iteration for computing the matrix logarithm
4 pages, 3 figures, comments welcome
null
null
null
cs.NA cs.DS
null
In various areas of applied numerics, the problem of calculating the logarithm of a matrix A emerges. Since series expansions of the logarithm usually do not converge well for matrices far away from the identity, the standard numerical method calculates successive square roots. In this article, a new algorithm is presented that relies on the computation of successive matrix exponentials. Convergence of the method is demonstrated for a large class of initial matrices and favorable choices of the initial matrix are discussed.
[ { "version": "v1", "created": "Wed, 18 Jul 2007 11:04:27 GMT" } ]
2007-07-19T00:00:00
[ [ "Schaller", "Gernot", "" ] ]
0707.3407
Alexander Tiskin
Alexander Tiskin
Faster subsequence recognition in compressed strings
null
null
null
null
cs.DS cs.CC cs.DM
null
Computation on compressed strings is one of the key approaches to processing massive data sets. We consider local subsequence recognition problems on strings compressed by straight-line programs (SLP), which is closely related to Lempel--Ziv compression. For an SLP-compressed text of length $\bar m$, and an uncompressed pattern of length $n$, C{\'e}gielski et al. gave an algorithm for local subsequence recognition running in time $O(\bar mn^2 \log n)$. We improve the running time to $O(\bar mn^{1.5})$. Our algorithm can also be used to compute the longest common subsequence between a compressed text and an uncompressed pattern in time $O(\bar mn^{1.5})$; the same problem with a compressed pattern is known to be NP-hard.
[ { "version": "v1", "created": "Mon, 23 Jul 2007 16:26:24 GMT" }, { "version": "v2", "created": "Tue, 6 Nov 2007 14:16:07 GMT" }, { "version": "v3", "created": "Fri, 11 Jan 2008 21:54:54 GMT" }, { "version": "v4", "created": "Fri, 18 Jan 2008 10:20:48 GMT" } ]
2011-11-10T00:00:00
[ [ "Tiskin", "Alexander", "" ] ]
0707.3409
Alexander Tiskin
Alexander Tiskin
Faster exon assembly by sparse spliced alignment
null
null
null
null
cs.DS cs.CC cs.CE q-bio.QM
null
Assembling a gene from candidate exons is an important problem in computational biology. Among the most successful approaches to this problem is \emph{spliced alignment}, proposed by Gelfand et al., which scores different candidate exon chains within a DNA sequence of length $m$ by comparing them to a known related gene sequence of length n, $m = \Theta(n)$. Gelfand et al.\ gave an algorithm for spliced alignment running in time O(n^3). Kent et al.\ considered sparse spliced alignment, where the number of candidate exons is O(n), and proposed an algorithm for this problem running in time O(n^{2.5}). We improve on this result, by proposing an algorithm for sparse spliced alignment running in time O(n^{2.25}). Our approach is based on a new framework of \emph{quasi-local string comparison}.
[ { "version": "v1", "created": "Mon, 23 Jul 2007 16:35:54 GMT" } ]
2007-07-24T00:00:00
[ [ "Tiskin", "Alexander", "" ] ]
0707.3619
Alexander Tiskin
Alexander Tiskin
Semi-local string comparison: algorithmic techniques and applications
null
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A classical measure of string comparison is given by the longest common subsequence (LCS) problem on a pair of strings. We consider its generalisation, called the semi-local LCS problem, which arises naturally in many string-related problems. The semi-local LCS problem asks for the LCS scores for each of the input strings against every substring of the other input string, and for every prefix of each input string against every suffix of the other input string. Such a comparison pattern provides a much more detailed picture of string similarity than a single LCS score; it also arises naturally in many string-related problems. In fact, the semi-local LCS problem turns out to be fundamental for string comparison, providing a powerful and flexible alternative to classical dynamic programming. It is especially useful when the input to a string comparison problem may not be available all at once: for example, comparison of dynamically changing strings; comparison of compressed strings; parallel string comparison. The same approach can also be applied to permutation strings, providing efficient solutions for local versions of the longest increasing subsequence (LIS) problem, and for the problem of computing a maximum clique in a circle graph. Furthermore, the semi-local LCS problem turns out to have surprising connections in a few seemingly unrelated fields, such as computational geometry and algebra of semigroups. This work is devoted to exploring the structure of the semi-local LCS problem, its efficient solutions, and its applications in string comparison and other related areas, including computational molecular biology.
[ { "version": "v1", "created": "Tue, 24 Jul 2007 19:12:23 GMT" }, { "version": "v10", "created": "Wed, 5 Aug 2009 16:07:43 GMT" }, { "version": "v11", "created": "Thu, 20 Aug 2009 22:16:33 GMT" }, { "version": "v12", "created": "Mon, 31 Aug 2009 13:08:55 GMT" }, { "version": "v13", "created": "Mon, 19 Oct 2009 14:48:06 GMT" }, { "version": "v14", "created": "Fri, 11 Dec 2009 20:34:12 GMT" }, { "version": "v15", "created": "Sat, 9 Jan 2010 16:22:30 GMT" }, { "version": "v16", "created": "Tue, 11 May 2010 11:43:57 GMT" }, { "version": "v17", "created": "Tue, 21 Feb 2012 22:50:39 GMT" }, { "version": "v18", "created": "Thu, 20 Sep 2012 12:52:02 GMT" }, { "version": "v19", "created": "Mon, 28 Jan 2013 02:37:21 GMT" }, { "version": "v2", "created": "Mon, 8 Oct 2007 13:29:39 GMT" }, { "version": "v20", "created": "Wed, 20 Nov 2013 20:54:28 GMT" }, { "version": "v21", "created": "Sat, 23 Nov 2013 23:30:05 GMT" }, { "version": "v3", "created": "Tue, 6 Nov 2007 14:09:50 GMT" }, { "version": "v4", "created": "Thu, 21 Aug 2008 15:17:48 GMT" }, { "version": "v5", "created": "Mon, 17 Nov 2008 22:31:37 GMT" }, { "version": "v6", "created": "Wed, 10 Dec 2008 19:46:02 GMT" }, { "version": "v7", "created": "Mon, 12 Jan 2009 17:49:28 GMT" }, { "version": "v8", "created": "Sun, 22 Mar 2009 16:36:29 GMT" }, { "version": "v9", "created": "Mon, 6 Jul 2009 15:59:42 GMT" } ]
2015-03-13T00:00:00
[ [ "Tiskin", "Alexander", "" ] ]
0707.3622
Yaoyun Shi
Igor Markov (University of Michigan) and Yaoyun Shi (University of Michigan)
Constant-degree graph expansions that preserve the treewidth
12 pages, 6 figures, the main result used by quant-ph/0511070
Algorithmica, Volume 59, Number 4, 461-470,2011
10.1007/s00453-009-9312-5
null
cs.DM cs.DS math.CO quant-ph
null
Many hard algorithmic problems dealing with graphs, circuits, formulas and constraints admit polynomial-time upper bounds if the underlying graph has small treewidth. The same problems often encourage reducing the maximal degree of vertices to simplify theoretical arguments or address practical concerns. Such degree reduction can be performed through a sequence of splittings of vertices, resulting in an _expansion_ of the original graph. We observe that the treewidth of a graph may increase dramatically if the splittings are not performed carefully. In this context we address the following natural question: is it possible to reduce the maximum degree to a constant without substantially increasing the treewidth? Our work answers the above question affirmatively. We prove that any simple undirected graph G=(V, E) admits an expansion G'=(V', E') with the maximum degree <= 3 and treewidth(G') <= treewidth(G)+1. Furthermore, such an expansion will have no more than 2|E|+|V| vertices and 3|E| edges; it can be computed efficiently from a tree-decomposition of G. We also construct a family of examples for which the increase by 1 in treewidth cannot be avoided.
[ { "version": "v1", "created": "Tue, 24 Jul 2007 19:56:27 GMT" } ]
2011-11-04T00:00:00
[ [ "Markov", "Igor", "", "University of Michigan" ], [ "Shi", "Yaoyun", "", "University of\n Michigan" ] ]
0707.4448
Mohamed-Ali Belabbas
Mohamed-Ali Belabbas and Patrick J. Wolfe
On sparse representations of linear operators and the approximation of matrix products
6 pages, 3 figures; presented at the 42nd Annual Conference on Information Sciences and Systems (CISS 2008)
null
10.1109/CISS.2008.4558532
null
cs.DS cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Thus far, sparse representations have been exploited largely in the context of robustly estimating functions in a noisy environment from a few measurements. In this context, the existence of a basis in which the signal class under consideration is sparse is used to decrease the number of necessary measurements while controlling the approximation error. In this paper, we instead focus on applications in numerical analysis, by way of sparse representations of linear operators with the objective of minimizing the number of operations needed to perform basic operations (here, multiplication) on these operators. We represent a linear operator by a sum of rank-one operators, and show how a sparse representation that guarantees a low approximation error for the product can be obtained from analyzing an induced quadratic form. This construction in turn yields new algorithms for computing approximate matrix products.
[ { "version": "v1", "created": "Mon, 30 Jul 2007 17:20:23 GMT" }, { "version": "v2", "created": "Fri, 26 Jun 2009 12:04:44 GMT" } ]
2009-06-26T00:00:00
[ [ "Belabbas", "Mohamed-Ali", "" ], [ "Wolfe", "Patrick J.", "" ] ]
0708.0600
Michael Lee
Michael J. Lee
Complementary algorithms for graphs and percolation
5 pages, 3 figures, poster version presented at statphys23 (2007)
null
10.1103/PhysRevE.76.027702
null
cs.DS
null
A pair of complementary algorithms are presented. One of the pair is a fast method for connecting graphs with an edge. The other is a fast method for removing edges from a graph. Both algorithms employ the same tree based graph representation and so, in concert, can arbitrarily modify any graph. Since the clusters of a percolation model may be described as simple connected graphs, an efficient Monte Carlo scheme can be constructed that uses the algorithms to sweep the occupation probability back and forth between two turning points. This approach concentrates computational sampling time within a region of interest. A high precision value of pc = 0.59274603(9) was thus obtained, by Mersenne twister, for the two dimensional square site percolation threshold.
[ { "version": "v1", "created": "Sat, 4 Aug 2007 02:56:13 GMT" } ]
2009-11-13T00:00:00
[ [ "Lee", "Michael J.", "" ] ]
0708.0909
Sebastien Tixeuil
L\'elia Blin (IBISC), Maria Gradinariu Potop-Butucaru (INRIA Rocquencourt, LIP6), S\'ebastien Tixeuil (INRIA Futurs, LRI)
On the Self-stabilization of Mobile Robots in Graphs
null
null
null
null
cs.DS cs.DC
null
Self-stabilization is a versatile technique to withstand any transient fault in a distributed system. Mobile robots (or agents) are one of the emerging trends in distributed computing as they mimic autonomous biologic entities. The contribution of this paper is threefold. First, we present a new model for studying mobile entities in networks subject to transient faults. Our model differs from the classical robot model because robots have constraints about the paths they are allowed to follow, and from the classical agent model because the number of agents remains fixed throughout the execution of the protocol. Second, in this model, we study the possibility of designing self-stabilizing algorithms when those algorithms are run by mobile robots (or agents) evolving on a graph. We concentrate on the core building blocks of robot and agents problems: naming and leader election. Not surprisingly, when no constraints are given on the network graph topology and local execution model, both problems are impossible to solve. Finally, using minimal hypothesis with respect to impossibility results, we provide deterministic and probabilistic solutions to both problems, and show equivalence of these problems by an algorithmic reduction mechanism.
[ { "version": "v1", "created": "Tue, 7 Aug 2007 09:34:14 GMT" }, { "version": "v2", "created": "Wed, 8 Aug 2007 09:11:22 GMT" } ]
2009-09-29T00:00:00
[ [ "Blin", "Lélia", "", "IBISC" ], [ "Potop-Butucaru", "Maria Gradinariu", "", "INRIA\n Rocquencourt, LIP6" ], [ "Tixeuil", "Sébastien", "", "INRIA Futurs, LRI" ] ]
0708.2351
Judit Nagy-Gy\"orgy
Judit Nagy-Gy\"orgy
Randomized algorithm for the k-server problem on decomposable spaces
11 pages
null
null
null
cs.DS cs.DM
null
We study the randomized k-server problem on metric spaces consisting of widely separated subspaces. We give a method which extends existing algorithms to larger spaces with the growth rate of the competitive quotients being at most O(log k). This method yields o(k)-competitive algorithms solving the randomized k-server problem, for some special underlying metric spaces, e.g. HSTs of "small" height (but unbounded degree). HSTs are important tools for probabilistic approximation of metric spaces.
[ { "version": "v1", "created": "Fri, 17 Aug 2007 11:54:44 GMT" } ]
2007-08-20T00:00:00
[ [ "Nagy-György", "Judit", "" ] ]
0708.2544
Gregory Gutin
G. Gutin and E.J. Kim
On the Complexity of the Minimum Cost Homomorphism Problem for Reflexive Multipartite Tournaments
null
null
null
null
cs.DM cs.DS
null
For digraphs $D$ and $H$, a mapping $f: V(D)\to V(H)$ is a homomorphism of $D$ to $H$ if $uv\in A(D)$ implies $f(u)f(v)\in A(H).$ For a fixed digraph $H$, the homomorphism problem is to decide whether an input digraph $D$ admits a homomorphism to $H$ or not, and is denoted as HOMP($H$). Digraphs are allowed to have loops, but not allowed to have parallel arcs. A natural optimization version of the homomorphism problem is defined as follows. If each vertex $u \in V(D)$ is associated with costs $c_i(u), i \in V(H)$, then the cost of the homomorphism $f$ is $\sum_{u\in V(D)}c_{f(u)}(u)$. For each fixed digraph $H$, we have the {\em minimum cost homomorphism problem for} $H$ and denote it as MinHOMP($H$). The problem is to decide, for an input graph $D$ with costs $c_i(u),$ $u \in V(D), i\in V(H)$, whether there exists a homomorphism of $D$ to $H$ and, if one exists, to find one of minimum cost. In a recent paper, we posed the problem of characterizing the polynomial time solvable and NP-hard cases of the minimum cost homomorphism problem for acyclic multipartite tournaments with possible loops (w.p.l.). In this paper, we solve the problem for reflexive multipartite tournaments and demonstrate the considerable difficulty of the problem for the whole class of multipartite tournaments w.p.l., using, as an example, acyclic 3-partite tournaments of order 4 w.p.l.\footnote{This paper was submitted to Discrete Mathematics on April 6, 2007}
[ { "version": "v1", "created": "Sun, 19 Aug 2007 13:00:59 GMT" } ]
2007-08-21T00:00:00
[ [ "Gutin", "G.", "" ], [ "Kim", "E. J.", "" ] ]
0708.2545
Gregory Gutin
E.J. Kim and G. Gutin
Complexity of the Minimum Cost Homomorphism Problem for Semicomplete Digraphs with Possible Loops
null
null
null
null
cs.DM cs.DS
null
For digraphs $D$ and $H$, a mapping $f: V(D)\to V(H)$ is a homomorphism of $D$ to $H$ if $uv\in A(D)$ implies $f(u)f(v)\in A(H).$ For a fixed digraph $H$, the homomorphism problem is to decide whether an input digraph $D$ admits a homomorphism to $H$ or not, and is denoted as HOM($H$). An optimization version of the homomorphism problem was motivated by a real-world problem in defence logistics and was introduced in \cite{gutinDAM154a}. If each vertex $u \in V(D)$ is associated with costs $c_i(u), i \in V(H)$, then the cost of the homomorphism $f$ is $\sum_{u\in V(D)}c_{f(u)}(u)$. For each fixed digraph $H$, we have the {\em minimum cost homomorphism problem for} $H$ and denote it as MinHOM($H$). The problem is to decide, for an input graph $D$ with costs $c_i(u),$ $u \in V(D), i\in V(H)$, whether there exists a homomorphism of $D$ to $H$ and, if one exists, to find one of minimum cost. Although a complete dichotomy classification of the complexity of MinHOM($H$) for a digraph $H$ remains an unsolved problem, complete dichotomy classifications for MinHOM($H$) were proved when $H$ is a semicomplete digraph \cite{gutinDAM154b}, and a semicomplete multipartite digraph \cite{gutinDAM}. In these studies, it is assumed that the digraph $H$ is loopless. In this paper, we present a full dichotomy classification for semicomplete digraphs with possible loops, which solves a problem in \cite{gutinRMS}.\footnote{This paper was submitted to SIAM J. Discrete Math. on October 27, 2006}
[ { "version": "v1", "created": "Sun, 19 Aug 2007 13:47:00 GMT" } ]
2007-08-21T00:00:00
[ [ "Kim", "E. J.", "" ], [ "Gutin", "G.", "" ] ]
0708.2936
David P{\l}aneta S
David S. Planeta
Priority Queue Based on Multilevel Prefix Tree
null
null
null
null
cs.DS
null
Tree structures are among the most frequently used data structures. Among ordered types of trees there are many variants whose basic operations such as insert, delete, search, and delete-min are characterized by logarithmic time complexity. In this article I present a structure whose time complexity for each of the above operations is $O(\frac{M}{K} + K)$, where M is the size of the data type and K is a constant properly matched to the size of the data type. A properly matched K makes the structure function as a very effective Priority Queue. The structure size depends linearly on the number and size of the elements. PTrie is a clever combination of the idea of a prefix tree (Trie), a structure with logarithmic time complexity for insert and remove operations, a doubly linked list, and queues.
[ { "version": "v1", "created": "Tue, 21 Aug 2007 22:59:49 GMT" } ]
2007-08-23T00:00:00
[ [ "Planeta", "David S.", "" ] ]
0708.3259
Rasmus Pagh
Philip Bille, Anna Pagh, Rasmus Pagh
Fast evaluation of union-intersection expressions
null
null
null
null
cs.DS cs.DB cs.IR
null
We show how to represent sets in a linear space data structure such that expressions involving unions and intersections of sets can be computed in a worst-case efficient way. This problem has applications in e.g. information retrieval and database systems. We mainly consider the RAM model of computation, and sets of machine words, but also state our results in the I/O model. On a RAM with word size $w$, a special case of our result is that the intersection of $m$ (preprocessed) sets, containing $n$ elements in total, can be computed in expected time $O(n (\log w)^2 / w + km)$, where $k$ is the number of elements in the intersection. If the first of the two terms dominates, this is a factor $w^{1-o(1)}$ faster than the standard solution of merging sorted lists. We show a cell probe lower bound of time $\Omega(n/(w m \log m)+ (1-\tfrac{\log k}{w}) k)$, meaning that our upper bound is nearly optimal for small $m$. Our algorithm uses a novel combination of approximate set representations and word-level parallelism.
[ { "version": "v1", "created": "Thu, 23 Aug 2007 22:23:04 GMT" } ]
2007-08-27T00:00:00
[ [ "Bille", "Philip", "" ], [ "Pagh", "Anna", "" ], [ "Pagh", "Rasmus", "" ] ]
0708.3408
David P{\l}aneta S
David S. Planeta
Linear Time Algorithms Based on Multilevel Prefix Tree for Finding Shortest Path with Positive Weights and Minimum Spanning Tree in a Networks
null
null
null
null
cs.DS
null
In this paper I present a general outlook on questions relevant to two basic graph algorithms: finding the shortest path with positive weights and the minimum spanning tree. I survey the known solutions to these basic graph problems and present my own. My solutions to these graph problems are characterized by their linear worst-case time complexity. It should be noted that the algorithms which solve the shortest path and minimum spanning tree problems not only analyze the weights of the arcs (which is the main and often the only criterion used by hitherto known algorithms) but also, in case of identical path weights, select the path which passes through as few vertices as possible. The presented algorithms use a priority queue based on a multilevel prefix tree -- PTrie. PTrie is a clever combination of the idea of a prefix tree (Trie), a structure with logarithmic time complexity for insert and remove operations, a doubly linked list, and queues. In C++ I implement a linear worst-case time algorithm computing the single-destination shortest-paths problem and explain its usage.
[ { "version": "v1", "created": "Fri, 24 Aug 2007 21:58:29 GMT" } ]
2007-08-28T00:00:00
[ [ "Planeta", "David S.", "" ] ]
0708.3696
Michael Mahoney
Petros Drineas, Michael W. Mahoney and S. Muthukrishnan
Relative-Error CUR Matrix Decompositions
40 pages, 10 figures
null
null
null
cs.DS
null
Many data analysis applications deal with large matrices and involve approximating the matrix using a small number of ``components.'' Typically, these components are linear combinations of the rows and columns of the matrix, and are thus difficult to interpret in terms of the original features of the input data. In this paper, we propose and study matrix approximations that are explicitly expressed in terms of a small number of columns and/or rows of the data matrix, and thereby more amenable to interpretation in terms of the original data. Our main algorithmic results are two randomized algorithms which take as input an $m \times n$ matrix $A$ and a rank parameter $k$. In our first algorithm, $C$ is chosen, and we let $A'=CC^+A$, where $C^+$ is the Moore-Penrose generalized inverse of $C$. In our second algorithm $C$, $U$, $R$ are chosen, and we let $A'=CUR$. ($C$ and $R$ are matrices that consist of actual columns and rows, respectively, of $A$, and $U$ is a generalized inverse of their intersection.) For each algorithm, we show that with probability at least $1-\delta$: $$ ||A-A'||_F \leq (1+\epsilon) ||A-A_k||_F, $$ where $A_k$ is the ``best'' rank-$k$ approximation provided by truncating the singular value decomposition (SVD) of $A$. The number of columns of $C$ and rows of $R$ is a low-degree polynomial in $k$, $1/\epsilon$, and $\log(1/\delta)$. Our two algorithms are the first polynomial time algorithms for such low-rank matrix approximations that come with relative-error guarantees; previously, in some cases, it was not even known whether such matrix decompositions exist. Both of our algorithms are simple, they take time of the order needed to approximately compute the top $k$ singular vectors of $A$, and they use a novel, intuitive sampling method called ``subspace sampling.''
[ { "version": "v1", "created": "Mon, 27 Aug 2007 23:34:50 GMT" } ]
2007-08-29T00:00:00
[ [ "Drineas", "Petros", "" ], [ "Mahoney", "Michael W.", "" ], [ "Muthukrishnan", "S.", "" ] ]
0708.4284
Mariano Zelke
Mariano Zelke
Optimal Per-Edge Processing Times in the Semi-Streaming Model
8 pages, 1 table
Information Processing Letters, Volume 104, Issue 3, 2007, Pages 106-112
null
null
cs.DM cs.DS
null
We present semi-streaming algorithms for basic graph problems that have optimal per-edge processing times and therefore surpass all previous semi-streaming algorithms for these tasks. The semi-streaming model, which is appropriate when dealing with massive graphs, forbids random access to the input and restricts the memory to O(n*polylog n) bits. Particularly, the formerly best per-edge processing times for finding the connected components and a bipartition are O(alpha(n)), for determining k-vertex and k-edge connectivity O(k^2n) and O(n*log n) respectively for any constant k and for computing a minimum spanning forest O(log n). All these time bounds we reduce to O(1). Every presented algorithm determines a solution asymptotically as fast as the best corresponding algorithm up to date in the classical RAM model, which therefore cannot convert the advantage of unlimited memory and random access into superior computing times for these problems.
[ { "version": "v1", "created": "Fri, 31 Aug 2007 07:19:27 GMT" } ]
2007-09-03T00:00:00
[ [ "Zelke", "Mariano", "" ] ]
0708.4288
Philip Bille
Philip Bille
Pattern Matching in Trees and Strings
PhD dissertation, 140 pages
null
null
null
cs.DS
null
We study the design of efficient algorithms for combinatorial pattern matching. More concretely, we study algorithms for tree matching, string matching, and string matching in compressed texts.
[ { "version": "v1", "created": "Fri, 31 Aug 2007 08:07:32 GMT" } ]
2007-09-03T00:00:00
[ [ "Bille", "Philip", "" ] ]
0708.4399
Steven G. Johnson
Xuancheng Shao and Steven G. Johnson
Type-IV DCT, DST, and MDCT algorithms with reduced numbers of arithmetic operations
11 pages
Signal Processing vol. 88, issue 6, p. 1313-1326 (2008)
10.1016/j.sigpro.2007.11.024
null
cs.DS cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present algorithms for the type-IV discrete cosine transform (DCT-IV) and discrete sine transform (DST-IV), as well as for the modified discrete cosine transform (MDCT) and its inverse, that achieve a lower count of real multiplications and additions than previously published algorithms, without sacrificing numerical accuracy. Asymptotically, the operation count is reduced from ~2NlogN to ~(17/9)NlogN for a power-of-two transform size N, and the exact count is strictly lowered for all N > 4. These results are derived by considering the DCT to be a special case of a DFT of length 8N, with certain symmetries, and then pruning redundant operations from a recent improved fast Fourier transform algorithm (based on a recursive rescaling of the conjugate-pair split radix algorithm). The improved algorithms for DST-IV and MDCT follow immediately from the improved count for the DCT-IV.
[ { "version": "v1", "created": "Fri, 31 Aug 2007 18:00:33 GMT" }, { "version": "v2", "created": "Thu, 29 Jan 2009 19:18:21 GMT" } ]
2009-01-29T00:00:00
[ [ "Shao", "Xuancheng", "" ], [ "Johnson", "Steven G.", "" ] ]
0709.0511
Oskar Sandberg
Oskar Sandberg
Double Clustering and Graph Navigability
null
null
null
null
math.PR cs.DS math.CO
null
Graphs are called navigable if one can find short paths through them using only local knowledge. It has been shown that for a graph to be navigable, its construction needs to meet strict criteria. Since such graphs nevertheless seem to appear in nature, it is of interest to understand why these criteria should be fulfilled. In this paper we present a simple method for constructing graphs based on a model where vertices are ``similar'' in two different ways, and tend to connect to those most similar to them - or cluster - with respect to both. We prove that this leads to navigable networks for several cases, and hypothesize that it also holds in great generality. Enough generality, perhaps, to explain the occurrence of navigable networks in nature.
[ { "version": "v1", "created": "Tue, 4 Sep 2007 19:38:14 GMT" } ]
2007-09-05T00:00:00
[ [ "Sandberg", "Oskar", "" ] ]
0709.0624
Martin Ziegler
Katharina L\"urwer-Br\"uggemeier and Martin Ziegler
On Faster Integer Calculations using Non-Arithmetic Primitives
null
null
null
null
cs.DS
null
The unit cost model is both convenient and largely realistic for describing integer decision algorithms over (+,*). Additional operations like division with remainder or bitwise conjunction, although equally supported by computing hardware, may lead to a considerable drop in complexity. We show a variety of concrete problems to benefit from such NON-arithmetic primitives by presenting and analyzing corresponding fast algorithms.
[ { "version": "v1", "created": "Wed, 5 Sep 2007 11:34:54 GMT" } ]
2007-09-06T00:00:00
[ [ "Lürwer-Brüggemeier", "Katharina", "" ], [ "Ziegler", "Martin", "" ] ]
0709.0670
Daniil Ryabko
Daniil Ryabko, Juergen Schmidhuber
Using Data Compressors to Construct Rank Tests
null
Applied Mathematics Letters, 22:7, 1029-1032, 2009
null
null
cs.DS cs.IT math.IT
null
Nonparametric rank tests for homogeneity and component independence are proposed, which are based on data compressors. For homogeneity testing the idea is to compress the binary string obtained by ordering the two joint samples and writing 0 if the element is from the first sample and 1 if it is from the second sample and breaking ties by randomization (extension to the case of multiple samples is straightforward). $H_0$ should be rejected if the string is compressed (to a certain degree) and accepted otherwise. We show that such a test obtained from an ideal data compressor is valid against all alternatives. Component independence is reduced to homogeneity testing by constructing two samples, one of which is the first half of the original and the other is the second half with one of the components randomly permuted.
[ { "version": "v1", "created": "Wed, 5 Sep 2007 15:06:04 GMT" } ]
2012-02-28T00:00:00
[ [ "Ryabko", "Daniil", "" ], [ "Schmidhuber", "Juergen", "" ] ]
0709.0677
Binhai Zhu
Binhai Zhu
On the Complexity of Protein Local Structure Alignment Under the Discrete Fr\'echet Distance
11 pages, 2 figures
null
null
null
cs.CC cs.DS
null
We show that given $m$ proteins (or protein backbones, which are modeled as 3D polygonal chains each of length O(n)) the problem of protein local structure alignment under the discrete Fr\'{e}chet distance is as hard as Independent Set. So the problem does not admit any approximation of factor $n^{1-\epsilon}$. This is the strongest negative result regarding the protein local structure alignment problem. On the other hand, if $m$ is a constant, then the problem can be solved in polynomial time.
[ { "version": "v1", "created": "Wed, 5 Sep 2007 15:30:54 GMT" } ]
2007-09-06T00:00:00
[ [ "Zhu", "Binhai", "" ] ]
0709.0974
Sergey Gubin
Sergey Gubin
Finding Paths and Cycles in Graphs
11 pages
null
null
null
cs.DM cs.CC cs.DS math.CO
null
We present a polynomial time algorithm which detects all paths and cycles of all lengths in the form of vertex pairs (start, finish).
[ { "version": "v1", "created": "Fri, 7 Sep 2007 00:04:20 GMT" } ]
2007-09-10T00:00:00
[ [ "Gubin", "Sergey", "" ] ]
0709.1227
Yanghua Xiao
Yanghua Xiao, Wentao Wu, Wei Wang and Zhengying He
Efficient Algorithms for Node Disjoint Subgraph Homeomorphism Determination
15 pages, 11 figures, submitted to DASFAA 2008
In Proceeding of 13th International Conference on Database Systems for Advanced Applications, 2008
10.1007/978-3-540-78568-2
null
cs.DS cs.DB
null
Recently, great efforts have been dedicated to research on the management of large scale graph based data such as the WWW, social networks, and biological networks. In the study of graph based data management, the node disjoint subgraph homeomorphism relation between graphs is more suitable than (sub)graph isomorphism in many cases, especially in those cases where node skipping and node mismatching are allowed. However, no efficient node disjoint subgraph homeomorphism determination (ndSHD) algorithms have been available. In this paper, we propose two computationally efficient ndSHD algorithms based on state space searching with backtracking, which employ many heuristics to prune the search space. Experimental results on synthetic data sets show that the proposed algorithms are efficient, require relatively little time in most of the test cases, can scale to large or dense graphs, and can accommodate more complex fuzzy matching cases.
[ { "version": "v1", "created": "Sat, 8 Sep 2007 18:14:47 GMT" } ]
2008-10-09T00:00:00
[ [ "Xiao", "Yanghua", "" ], [ "Wu", "Wentao", "" ], [ "Wang", "Wei", "" ], [ "He", "Zhengying", "" ] ]
0709.2016
Konstantin Avrachenkov
Konstantin Avrachenkov, Nelly Litvak, Kim Son Pham
Distribution of PageRank Mass Among Principle Components of the Web
null
null
null
null
cs.NI cs.DS
null
We study the PageRank mass of principal components in a bow-tie Web Graph, as a function of the damping factor c. Using a singular perturbation approach, we show that the PageRank share of IN and SCC components remains high even for very large values of the damping factor, in spite of the fact that it drops to zero when c goes to one. However, a detailed study of the OUT component reveals the presence of ``dead-ends'' (small groups of pages linking only to each other) that receive an unfairly high ranking when c is close to one. We argue that this problem can be mitigated by choosing c as small as 1/2.
[ { "version": "v1", "created": "Thu, 13 Sep 2007 08:29:53 GMT" } ]
2007-09-14T00:00:00
[ [ "Avrachenkov", "Konstantin", "" ], [ "Litvak", "Nelly", "" ], [ "Pham", "Kim Son", "" ] ]
0709.2562
Paolo Laureti
Marcel Blattner, Alexander Hunziker, Paolo Laureti
When are recommender systems useful?
null
null
null
null
cs.IR cs.CY cs.DL cs.DS physics.data-an physics.soc-ph
null
Recommender systems are crucial tools to overcome the information overload brought about by the Internet. Rigorous tests are needed to establish to what extent sophisticated methods can improve the quality of the predictions. Here we analyse a refined correlation-based collaborative filtering algorithm and compare it with a novel spectral method for recommending. We test them on two databases that bear different statistical properties (MovieLens and Jester) without filtering out the less active users and ordering the opinions in time, whenever possible. We find that, when the distribution of user-user correlations is narrow, simple averages work nearly as well as advanced methods. Recommender systems can, on the other hand, exploit a great deal of additional information in systems where external influence is negligible and peoples' tastes emerge entirely. These findings are validated by simulations with artificially generated data.
[ { "version": "v1", "created": "Mon, 17 Sep 2007 09:27:07 GMT" } ]
2007-09-19T00:00:00
[ [ "Blattner", "Marcel", "" ], [ "Hunziker", "Alexander", "" ], [ "Laureti", "Paolo", "" ] ]
0709.2961
Andreas Schutt
Andreas Schutt and Peter J. Stuckey
Incremental Satisfiability and Implication for UTVPI Constraints
14 pages, 1 figure
null
null
null
cs.DS cs.CG cs.LO
null
Unit two-variable-per-inequality (UTVPI) constraints form one of the largest class of integer constraints which are polynomial time solvable (unless P=NP). There is considerable interest in their use for constraint solving, abstract interpretation, spatial databases, and theorem proving. In this paper we develop a new incremental algorithm for UTVPI constraint satisfaction and implication checking that requires O(m + n log n + p) time and O(n+m+p) space to incrementally check satisfiability of m UTVPI constraints on n variables and check implication of p UTVPI constraints.
[ { "version": "v1", "created": "Wed, 19 Sep 2007 06:58:05 GMT" } ]
2007-09-20T00:00:00
[ [ "Schutt", "Andreas", "" ], [ "Stuckey", "Peter J.", "" ] ]
0709.3034
Anastasia Analyti
Carlo Meghini, Yannis Tzitzikas, Anastasia Analyti
Query Evaluation in P2P Systems of Taxonomy-based Sources: Algorithms, Complexity, and Optimizations
null
null
null
null
cs.DB cs.DC cs.DS cs.LO
null
In this study, we address the problem of answering queries over a peer-to-peer system of taxonomy-based sources. A taxonomy states subsumption relationships between negation-free DNF formulas on terms and negation-free conjunctions of terms. To the end of laying the foundations of our study, we first consider the centralized case, deriving the complexity of the decision problem and of query evaluation. We conclude by presenting an algorithm that is efficient in data complexity and is based on hypergraphs. More expressive forms of taxonomies are also investigated, which however lead to intractability. We then move to the distributed case, and introduce a logical model of a network of taxonomy-based sources. On such network, a distributed version of the centralized algorithm is then presented, based on a message passing paradigm, and its correctness is proved. We finally discuss optimization issues, and relate our work to the literature.
[ { "version": "v1", "created": "Wed, 19 Sep 2007 15:10:05 GMT" } ]
2007-09-20T00:00:00
[ [ "Meghini", "Carlo", "" ], [ "Tzitzikas", "Yannis", "" ], [ "Analyti", "Anastasia", "" ] ]
0709.3384
Mariano Zelke
Mariano Zelke
Weighted Matching in the Semi-Streaming Model
12 pages, 2 figures
Proceedings of the 25th Annual Symposium on the Theoretical Aspects of Computer Science - STACS 2008, Bordeaux : France (2008)
null
null
cs.DM cs.DS
null
We reduce the best known approximation ratio for finding a weighted matching of a graph using a one-pass semi-streaming algorithm from 5.828 to 5.585. The semi-streaming model forbids random access to the input and restricts the memory to O(n*polylog(n)) bits. It was introduced by Muthukrishnan in 2003 and is appropriate when dealing with massive graphs.
[ { "version": "v1", "created": "Fri, 21 Sep 2007 09:34:19 GMT" } ]
2008-02-25T00:00:00
[ [ "Zelke", "Mariano", "" ] ]
0709.4273
Sergey Gubin
Sergey Gubin
Set Matrices and The Path/Cycle Problem
7 pages
null
null
null
cs.DM cs.CC cs.DS math.CO
null
We present set matrices and demonstrate their efficiency as a tool on the path/cycle problem.
[ { "version": "v1", "created": "Wed, 26 Sep 2007 21:44:10 GMT" } ]
2007-09-28T00:00:00
[ [ "Gubin", "Sergey", "" ] ]
0710.0083
Andrew McGregor
Stanislav Angelov, Keshav Kunal, Andrew McGregor
Sorting and Selection with Random Costs
null
null
null
null
cs.DS
null
There is a growing body of work on sorting and selection in models other than the unit-cost comparison model. This work is the first treatment of a natural stochastic variant of the problem where the cost of comparing two elements is a random variable. Each cost is chosen independently and is known to the algorithm. In particular we consider the following three models: each cost is chosen uniformly in the range $[0,1]$, each cost is 0 with some probability $p$ and 1 otherwise, or each cost is 1 with probability $p$ and infinite otherwise. We present lower and upper bounds (optimal in most cases) for these problems. We obtain our upper bounds by carefully designing algorithms to ensure that the costs incurred at various stages are independent and using properties of random partial orders when appropriate.
[ { "version": "v1", "created": "Sat, 29 Sep 2007 18:10:28 GMT" } ]
2007-10-02T00:00:00
[ [ "Angelov", "Stanislav", "" ], [ "Kunal", "Keshav", "" ], [ "McGregor", "Andrew", "" ] ]
0710.0262
Sebastian Roch
Elchanan Mossel and Sebastien Roch
Incomplete Lineage Sorting: Consistent Phylogeny Estimation From Multiple Loci
Added a section on more general distance-based methods
null
null
null
q-bio.PE cs.CE cs.DS math.PR math.ST stat.TH
null
We introduce a simple algorithm for reconstructing phylogenies from multiple gene trees in the presence of incomplete lineage sorting, that is, when the topology of the gene trees may differ from that of the species tree. We show that our technique is statistically consistent under standard stochastic assumptions, that is, it returns the correct tree given sufficiently many unlinked loci. We also show that it can tolerate moderate estimation errors.
[ { "version": "v1", "created": "Mon, 1 Oct 2007 11:11:43 GMT" }, { "version": "v2", "created": "Fri, 2 Nov 2007 03:41:04 GMT" } ]
2011-09-30T00:00:00
[ [ "Mossel", "Elchanan", "" ], [ "Roch", "Sebastien", "" ] ]
0710.0318
Alexander Tiskin
Vladimir Deineko and Alexander Tiskin
Fast minimum-weight double-tree shortcutting for Metric TSP: Is the best one good enough?
null
null
null
null
cs.DS cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Metric Traveling Salesman Problem (TSP) is a classical NP-hard optimization problem. The double-tree shortcutting method for Metric TSP yields an exponentially-sized space of TSP tours, each of which approximates the optimal solution within at most a factor of 2. We consider the problem of finding among these tours the one that gives the closest approximation, i.e.\ the \emph{minimum-weight double-tree shortcutting}. Burkard et al. gave an algorithm for this problem, running in time $O(n^3+2^d n^2)$ and memory $O(2^d n^2)$, where $d$ is the maximum node degree in the rooted minimum spanning tree. We give an improved algorithm for the case of small $d$ (including planar Euclidean TSP, where $d \leq 4$), running in time $O(4^d n^2)$ and memory $O(4^d n)$. This improvement allows one to solve the problem on much larger instances than previously attempted. Our computational experiments suggest that in terms of the time-quality tradeoff, the minimum-weight double-tree shortcutting method provides one of the best known tour-constructing heuristics.
[ { "version": "v1", "created": "Mon, 1 Oct 2007 15:25:18 GMT" }, { "version": "v2", "created": "Sat, 20 Jun 2009 16:17:30 GMT" }, { "version": "v3", "created": "Thu, 16 Jul 2009 16:17:25 GMT" } ]
2009-07-16T00:00:00
[ [ "Deineko", "Vladimir", "" ], [ "Tiskin", "Alexander", "" ] ]
0710.0539
Anthony A. Ruffa
Anthony A. Ruffa
A Novel Solution to the ATT48 Benchmark Problem
null
null
null
null
cs.DS cs.CC
null
A solution to the benchmark ATT48 Traveling Salesman Problem (from the TSPLIB95 library) results from isolating the set of vertices into ten open-ended zones with nine lengthwise boundaries. In each zone, a minimum-length Hamiltonian Path (HP) is found for each combination of boundary vertices, leading to an approximation for the minimum-length Hamiltonian Cycle (HC). Determination of the optimal HPs for subsequent zones has the effect of automatically filtering out non-optimal HPs from earlier zones. Although the optimal HC for ATT48 involves only two crossing edges between all zones (with one exception), adding inter-zone edges can accommodate more complex problems.
[ { "version": "v1", "created": "Tue, 2 Oct 2007 14:26:33 GMT" } ]
2007-10-03T00:00:00
[ [ "Ruffa", "Anthony A.", "" ] ]
0710.1001
Vitaliy Kurlin
V. Kurlin, L. Mihaylova
Connectivity of Random 1-Dimensional Networks
12 pages, 10 figures
null
null
null
cs.IT cs.DS math.IT stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An important problem in wireless sensor networks is to find the minimal number of randomly deployed sensors making a network connected with a given probability. In practice sensors are often deployed one by one along a trajectory of a vehicle, so it is natural to assume that arbitrary probability density functions of distances between successive sensors in a segment are given. The paper computes the probability of connectivity and coverage of 1-dimensional networks and gives estimates for a minimal number of sensors for important distributions.
[ { "version": "v1", "created": "Thu, 4 Oct 2007 12:57:34 GMT" }, { "version": "v2", "created": "Fri, 9 Oct 2009 21:01:10 GMT" } ]
2009-10-10T00:00:00
[ [ "Kurlin", "V.", "" ], [ "Mihaylova", "L.", "" ] ]
0710.1435
Michael Mahoney
Petros Drineas, Michael W. Mahoney, S. Muthukrishnan, and Tamas Sarlos
Faster Least Squares Approximation
25 pages; minor changes from previous version; this version will appear in Numerische Mathematik
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Least squares approximation is a technique to find an approximate solution to a system of linear equations that has no exact solution. In a typical setting, one lets $n$ be the number of constraints and $d$ be the number of variables, with $n \gg d$. Then, existing exact methods find a solution vector in $O(nd^2)$ time. We present two randomized algorithms that provide very accurate relative-error approximations to the optimal value and the solution vector of a least squares approximation problem more rapidly than existing exact algorithms. Both of our algorithms preprocess the data with the Randomized Hadamard Transform. One then uniformly randomly samples constraints and solves the smaller problem on those constraints, and the other performs a sparse random projection and solves the smaller problem on those projected coordinates. In both cases, solving the smaller problem provides relative-error approximations, and, if $n$ is sufficiently larger than $d$, the approximate solution can be computed in $O(nd \log d)$ time.
[ { "version": "v1", "created": "Sun, 7 Oct 2007 17:37:37 GMT" }, { "version": "v2", "created": "Mon, 25 May 2009 23:01:43 GMT" }, { "version": "v3", "created": "Mon, 3 May 2010 06:55:54 GMT" }, { "version": "v4", "created": "Sun, 26 Sep 2010 18:36:00 GMT" } ]
2010-09-28T00:00:00
[ [ "Drineas", "Petros", "" ], [ "Mahoney", "Michael W.", "" ], [ "Muthukrishnan", "S.", "" ], [ "Sarlos", "Tamas", "" ] ]
0710.1525
Sebastiano Vigna
Sebastiano Vigna, Paolo Boldi
Efficient Optimally Lazy Algorithms for Minimal-Interval Semantics
24 pages, 4 figures. A preliminary (now outdated) version was presented at SPIRE 2006
null
null
null
cs.DS cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Minimal-interval semantics associates with each query over a document a set of intervals, called witnesses, that are incomparable with respect to inclusion (i.e., they form an antichain): witnesses define the minimal regions of the document satisfying the query. Minimal-interval semantics makes it easy to define and compute several sophisticated proximity operators, provides snippets for user presentation, and can be used to rank documents. In this paper we provide algorithms for computing conjunction and disjunction that are linear in the number of intervals and logarithmic in the number of operands; for additional operators, such as ordered conjunction and Brouwerian difference, we provide linear algorithms. In all cases, space is linear in the number of operands. More importantly, we define a formal notion of optimal laziness, and either prove it, or prove its impossibility, for each algorithm. We cast our results in a general framework of antichains of intervals on total orders, making our algorithms directly applicable to other domains.
[ { "version": "v1", "created": "Mon, 8 Oct 2007 12:15:48 GMT" }, { "version": "v2", "created": "Thu, 11 Aug 2016 08:57:30 GMT" } ]
2016-08-12T00:00:00
[ [ "Vigna", "Sebastiano", "" ], [ "Boldi", "Paolo", "" ] ]
0710.1842
Frank Ruskey
Frank Ruskey and Aaron Williams
An explicit universal cycle for the (n-1)-permutations of an n-set
null
null
null
null
cs.DM cs.DS
null
We show how to construct an explicit Hamilton cycle in the directed Cayley graph Cay({\sigma_n, \sigma_{n-1}} : \mathbb{S}_n), where \sigma_k = (1 2 ... k). The existence of such cycles was shown by Jackson (Discrete Mathematics, 149 (1996) 123-129) but the proof only shows that a certain directed graph is Eulerian, and Knuth (Volume 4 Fascicle 2, Generating All Tuples and Permutations (2005)) asks for an explicit construction. We show that a simple recursion describes our Hamilton cycle and that the cycle can be generated by an iterative algorithm that uses O(n) space. Moreover, the algorithm produces each successive edge of the cycle in constant time; such algorithms are said to be loopless.
[ { "version": "v1", "created": "Tue, 9 Oct 2007 18:06:05 GMT" } ]
2007-10-10T00:00:00
[ [ "Ruskey", "Frank", "" ], [ "Williams", "Aaron", "" ] ]
0710.2532
Maxwell Young
Valerie King, Cynthia Phillips, Jared Saia and Maxwell Young
Sleeping on the Job: Energy-Efficient Broadcast for Radio Networks
15 pages, 1 figure
null
null
null
cs.DS
null
We address the problem of minimizing power consumption when performing reliable broadcast on a radio network under the following popular model. Each node in the network is located on a point in a two-dimensional grid, and whenever a node sends a message, all awake nodes within distance r receive the message. In the broadcast problem, some node wants to successfully send a message to all other nodes in the network even when up to a 1/2 fraction of the nodes within every neighborhood can be deleted by an adversary. The set of deleted nodes is carefully chosen by the adversary to foil our algorithm and moreover, the set of deleted nodes may change periodically. This models worst-case behavior due to mobile nodes, static nodes losing power or simply some points in the grid being unoccupied. A trivial solution requires each node in the network to be awake roughly 1/2 the time, and a trivial lower bound shows that each node must be awake for at least a 1/n fraction of the time. Our first result is an algorithm that requires each node to be awake for only a 1/sqrt(n) fraction of the time in expectation. Our algorithm achieves this while ensuring correctness with probability 1, and keeping optimal values for other resource costs such as latency and number of messages sent. We give a lower bound that shows that this reduction in power consumption is asymptotically optimal when latency and number of messages sent must be optimal. If we can increase the latency and messages sent by only a log*n factor, we give a Las Vegas algorithm that requires each node to be awake for only a (log*n)/n expected fraction of the time; we give a lower bound showing that this second algorithm is near optimal. Finally, we show how to ensure energy-efficient broadcast in the presence of Byzantine faults.
[ { "version": "v1", "created": "Fri, 12 Oct 2007 19:56:45 GMT" } ]
2007-10-15T00:00:00
[ [ "King", "Valerie", "" ], [ "Phillips", "Cynthia", "" ], [ "Saia", "Jared", "" ], [ "Young", "Maxwell", "" ] ]
0710.3246
John Talbot
David Talbot and John Talbot
Bloom maps
15 pages
null
null
null
cs.DS cs.IT math.IT
null
We consider the problem of succinctly encoding a static map to support approximate queries. We derive upper and lower bounds on the space requirements in terms of the error rate and the entropy of the distribution of values over keys: our bounds differ by a factor log e. For the upper bound we introduce a novel data structure, the Bloom map, generalising the Bloom filter to this problem. The lower bound follows from an information theoretic argument.
[ { "version": "v1", "created": "Wed, 17 Oct 2007 09:35:14 GMT" } ]
2007-10-18T00:00:00
[ [ "Talbot", "David", "" ], [ "Talbot", "John", "" ] ]
0710.3603
Jakub Mare\v{c}ek
Edmund K. Burke, Jakub Marecek, Andrew J. Parkes, and Hana Rudova
On a Clique-Based Integer Programming Formulation of Vertex Colouring with Applications in Course Timetabling
null
Annals of Operations Research (2010) 179(1), 105-130
10.1007/s10479-010-0716-z
null
cs.DM cs.DS math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vertex colouring is a well-known problem in combinatorial optimisation, whose alternative integer programming formulations have recently attracted considerable attention. This paper briefly surveys seven known formulations of vertex colouring and introduces a formulation of vertex colouring using a suitable clique partition of the graph. This formulation is applicable in timetabling applications, where such a clique partition of the conflict graph is given implicitly. In contrast with some alternatives, the presented formulation can also be easily extended to accommodate complex performance indicators ("soft constraints") imposed in a number of real-life course timetabling applications. Its performance depends on the quality of the clique partition, but encouraging empirical results for the Udine Course Timetabling problem are reported.
[ { "version": "v1", "created": "Thu, 18 Oct 2007 21:38:37 GMT" }, { "version": "v2", "created": "Sat, 10 Nov 2007 19:03:39 GMT" }, { "version": "v3", "created": "Tue, 7 Jul 2009 19:55:23 GMT" } ]
2014-04-10T00:00:00
[ [ "Burke", "Edmund K.", "" ], [ "Marecek", "Jakub", "" ], [ "Parkes", "Andrew J.", "" ], [ "Rudova", "Hana", "" ] ]