corpus_id: stringlengths 7-12
paper_id: stringlengths 9-16
title: stringlengths 1-261
abstract: stringlengths 70-4.02k
source: stringclasses (1 value)
bibtex: stringlengths 208-20.9k
citation_key: stringlengths 6-100
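A minimal sketch of how one might work with records carrying the seven fields above, assuming the dump is exported as JSON Lines with one record per line; the file name records.jsonl and the loading layout are hypothetical, not part of this listing:

import json

# Assumed layout: one JSON object per line with the fields listed above
# (corpus_id, paper_id, title, abstract, source, bibtex, citation_key).
def load_records(path="records.jsonl"):
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

# Index the BibTeX entries by citation_key for quick lookup.
def bibtex_by_key(records):
    return {r["citation_key"]: r["bibtex"] for r in records}

records = load_records()
index = bibtex_by_key(records)
print(index.get("ando2002mostly-unsupervised", "key not found"))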
arxiv-670501
cs/0205009
Mostly-Unsupervised Statistical Segmentation of Japanese Kanji Sequences
<|reference_start|>Mostly-Unsupervised Statistical Segmentation of Japanese Kanji Sequences: Given the lack of word delimiters in written Japanese, word segmentation is generally considered a crucial first step in processing Japanese texts. Typical Japanese segmentation algorithms rely either on a lexicon and syntactic analysis or on pre-segmented data; but these are labor-intensive, and the lexico-syntactic techniques are vulnerable to the unknown word problem. In contrast, we introduce a novel, more robust statistical method utilizing unsegmented training data. Despite its simplicity, the algorithm yields performance on long kanji sequences comparable to and sometimes surpassing that of state-of-the-art morphological analyzers over a variety of error metrics. The algorithm also outperforms another mostly-unsupervised statistical algorithm previously proposed for Chinese. Additionally, we present a two-level annotation scheme for Japanese to incorporate multiple segmentation granularities, and introduce two novel evaluation metrics, both based on the notion of a compatible bracket, that can account for multiple granularities simultaneously.<|reference_end|>
arxiv
@article{ando2002mostly-unsupervised, title={Mostly-Unsupervised Statistical Segmentation of Japanese Kanji Sequences}, author={Rie Kubota Ando and Lillian Lee}, journal={Natural Language Engineering 9 (2), pp. 127--149, 2003}, year={2002}, doi={10.1017/S1351324902002954}, archivePrefix={arXiv}, eprint={cs/0205009}, primaryClass={cs.CL} }
ando2002mostly-unsupervised
arxiv-670502
cs/0205010
Approximate Data Structures with Applications
<|reference_start|>Approximate Data Structures with Applications: This paper explores the notion of approximate data structures, which return approximately correct answers to queries, but run faster than their exact counterparts. The paper describes approximate variants of the van Emde Boas data structure, which support the same dynamic operations as the standard van Emde Boas data structure (min, max, successor, predecessor, and existence queries, as well as insertion and deletion), except that answers to queries are approximate. The variants support all operations in constant time provided the performance guarantee is 1+1/polylog(n), and in O(log log n) time provided the performance guarantee is 1+1/polynomial(n), for n elements in the data structure. Applications described include Prim's minimum-spanning-tree algorithm, Dijkstra's single-source shortest paths algorithm, and an on-line variant of Graham's convex hull algorithm. To obtain output which approximates the desired output with the performance guarantee tending to 1, Prim's algorithm requires only linear time, Dijkstra's algorithm requires O(m log log n) time, and the on-line variant of Graham's algorithm requires constant amortized time per operation.<|reference_end|>
arxiv
@article{matias2002approximate, title={Approximate Data Structures with Applications}, author={Yossi Matias, Jeff Vitter, Neal Young}, journal={ACM-SIAM Symposium on Discrete Algorithms, pp. 187-194 (1994)}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205010}, primaryClass={cs.DS cs.CC} }
matias2002approximate
arxiv-670503
cs/0205011
On Strongly Connected Digraphs with Bounded Cycle Length
<|reference_start|>On Strongly Connected Digraphs with Bounded Cycle Length: The MEG (minimum equivalent graph) problem is, given a directed graph, to find a small subset of the edges that maintains all reachability relations between nodes. The problem is NP-hard. This paper gives a proof that, for graphs where each directed cycle has at most three edges, the MEG problem is equivalent to maximum bipartite matching, and therefore solvable in polynomial time. This leads to an improvement in the performance guarantee of the previously best approximation algorithm for the general problem in ``Approximating the Minimum Equivalent Digraph'' (1995).<|reference_end|>
arxiv
@article{khuller2002on, title={On Strongly Connected Digraphs with Bounded Cycle Length}, author={Samir Khuller and Balaji Raghavachari, Neal Young}, journal={Discrete Applied Mathematics 69(3):281-289 (1996)}, year={2002}, doi={10.1016/0166-218X(95)00105-Z}, archivePrefix={arXiv}, eprint={cs/0205011}, primaryClass={cs.DS cs.CC} }
khuller2002on
arxiv-670504
cs/0205012
Polynomial-Time Approximation Scheme for Data Broadcast
<|reference_start|>Polynomial-Time Approximation Scheme for Data Broadcast: The data broadcast problem is to find a schedule for broadcasting a given set of messages over multiple channels. The goal is to minimize the cost of the broadcast plus the expected response time to clients who periodically and probabilistically tune in to wait for particular messages. The problem models disseminating data to clients in asymmetric communication environments, where there is a much larger capacity from the information source to the clients than in the reverse direction. Examples include satellites, cable TV, internet broadcast, and mobile phones. Such environments favor the ``push-based'' model where the server broadcasts (pushes) its information on the communication medium and multiple clients simultaneously retrieve the specific information of individual interest. This paper presents the first polynomial-time approximation scheme (PTAS) for data broadcast with O(1) channels and when each message has arbitrary probability, unit length and bounded cost. The best previous polynomial-time approximation algorithm for this case has a performance ratio of 9/8.<|reference_end|>
arxiv
@article{kenyon2002polynomial-time, title={Polynomial-Time Approximation Scheme for Data Broadcast}, author={Claire Kenyon, Nicolas Schabanel, Neal Young}, journal={arXiv preprint arXiv:cs/0205012}, year={2002}, doi={10.1145/335305.335398}, archivePrefix={arXiv}, eprint={cs/0205012}, primaryClass={cs.DS cs.CC} }
kenyon2002polynomial-time
arxiv-670505
cs/0205013
Computing stable models: worst-case performance estimates
<|reference_start|>Computing stable models: worst-case performance estimates: We study algorithms for computing stable models of propositional logic programs and derive estimates on their worst-case performance that are asymptotically better than the trivial bound of O(m 2^n), where m is the size of an input program and n is the number of its atoms. For instance, for programs whose clauses consist of at most two literals (counting the head), we design an algorithm to compute stable models that works in time O(m × 1.44225^n). We present similar results for several broader classes of programs as well.<|reference_end|>
arxiv
@article{lonc2002computing, title={Computing stable models: worst-case performance estimates}, author={Zbigniew Lonc and Miroslaw Truszczynski}, journal={arXiv preprint arXiv:cs/0205013}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205013}, primaryClass={cs.LO cs.AI} }
lonc2002computing
arxiv-670506
cs/0205014
Ultimate approximations in nonmonotonic knowledge representation systems
<|reference_start|>Ultimate approximations in nonmonotonic knowledge representation systems: We study fixpoints of operators on lattices. To this end we introduce the notion of an approximation of an operator. We order approximations by means of a precision ordering. We show that each lattice operator O has a unique most precise or ultimate approximation. We demonstrate that fixpoints of this ultimate approximation provide useful insights into fixpoints of the operator O. We apply our theory to logic programming and introduce the ultimate Kripke-Kleene, well-founded and stable semantics. We show that the ultimate Kripke-Kleene and well-founded semantics are more precise than their standard counterparts. We argue that ultimate semantics for logic programming have attractive epistemological properties and that, while in general they are computationally more complex than the standard semantics, for many classes of theories, their complexity is no worse.<|reference_end|>
arxiv
@article{denecker2002ultimate, title={Ultimate approximations in nonmonotonic knowledge representation systems}, author={Marc Denecker, Victor W. Marek and Miroslaw Truszczynski}, journal={arXiv preprint arXiv:cs/0205014}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205014}, primaryClass={cs.AI} }
denecker2002ultimate
arxiv-670507
cs/0205015
Instabilities of Robot Motion
<|reference_start|>Instabilities of Robot Motion: Instabilities of robot motion arise for topological reasons. In this paper we find a relation between the topological properties of a configuration space (the structure of its cohomology algebra) and the character of instabilities, which are unavoidable in any motion planning algorithm. More specifically, let $X$ denote the space of all admissible configurations of a mechanical system. A {\it motion planner} is given by a splitting $X\times X = F_1\cup F_2\cup ... \cup F_k$ (where $F_1, ..., F_k$ are pairwise disjoint ENRs, see below) and by continuous maps $s_j: F_j \to PX,$ such that $E\circ s_j =1_{F_j}$. Here $PX$ denotes the space of all continuous paths in $X$ (admissible motions of the system) and $E: PX\to X\times X$ denotes the map which assigns to a path the pair of its initial and end points. Any motion planner determines an algorithm of motion planning for the system. In this paper we apply methods of algebraic topology to study the minimal number of sets $F_j$ in any motion planner in $X$. We also introduce a new notion of {\it order of instability} of a motion planner; it describes the number of essentially distinct motions which may occur as a result of small perturbations of the input data. We find the minimal order of instability that motion planners on a given configuration space $X$ may have. We study a number of specific problems: motion of a rigid body in $\R^3$, a robot arm, motion in $\R^3$ in the presence of obstacles, and others.<|reference_end|>
arxiv
@article{farber2002instabilities, title={Instabilities of Robot Motion}, author={Michael Farber}, journal={arXiv preprint arXiv:cs/0205015}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205015}, primaryClass={cs.RO cs.CG math.AT} }
farber2002instabilities
arxiv-670508
cs/0205016
From Alife Agents to a Kingdom of N Queens
<|reference_start|>From Alife Agents to a Kingdom of N Queens: This paper presents a new approach to solving N-queen problems, which involves a model of distributed autonomous agents with artificial life (ALife) and a method of representing N-queen constraints in an agent environment. The distributed agents locally interact with their living environment, i.e., a chessboard, and execute their reactive behaviors by applying their behavioral rules for randomized motion, least-conflict position searching, cooperation with other agents, etc. The agent-based N-queen problem solving system evolves through selection and contest according to the rule of Survival of the Fittest, in which some agents will die or be eaten if their moving strategies are less efficient than others. The experimental results have shown that this system is capable of solving large-scale N-queen problems. This paper also provides a model of ALife agents for solving general CSPs.<|reference_end|>
arxiv
@article{han2002from, title={From Alife Agents to a Kingdom of N Queens}, author={Jing Han, Jiming Liu and Qingsheng Cai}, journal={in Jiming Liu and Ning Zhong (Eds.), Intelligent Agent Technology: Systems, Methodologies, and Tools, page 110-120, The World Scientific Publishing Co. Pte, Ltd., Nov. 1999}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205016}, primaryClass={cs.AI cs.DS cs.MA} }
han2002from
arxiv-670509
cs/0205017
Ellogon: A New Text Engineering Platform
<|reference_start|>Ellogon: A New Text Engineering Platform: This paper presents Ellogon, a multi-lingual, cross-platform, general-purpose text engineering environment. Ellogon was designed to aid both researchers in natural language processing and companies that produce language engineering systems for the end-user. Ellogon provides a powerful TIPSTER-based infrastructure for managing, storing and exchanging textual data, embedding and managing text processing components as well as visualising textual data and their associated linguistic information. Among its key features are full Unicode support, an extensive multi-lingual graphical user interface, its modular architecture, and its reduced hardware requirements.<|reference_end|>
arxiv
@article{petasis2002ellogon:, title={Ellogon: A New Text Engineering Platform}, author={Georgios Petasis, Vangelis Karkaletsis, Georgios Paliouras, Ion Androutsopoulos, Constantine D. Spyropoulos}, journal={arXiv preprint arXiv:cs/0205017}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205017}, primaryClass={cs.CL} }
petasis2002ellogon:
arxiv-670510
cs/0205018
Typed Generic Traversal With Term Rewriting Strategies
<|reference_start|>Typed Generic Traversal With Term Rewriting Strategies: A typed model of strategic term rewriting is developed. The key innovation is that generic traversal is covered. To this end, we define a typed rewriting calculus S'_{gamma}. The calculus employs a many-sorted type system extended by designated generic strategy types gamma. We consider two generic strategy types, namely the types of type-preserving and type-unifying strategies. S'_{gamma} offers traversal combinators to construct traversals or schemes thereof from many-sorted and generic strategies. The traversal combinators model different forms of one-step traversal, that is, they process the immediate subterms of a given term without anticipating any scheme of recursion into terms. To inhabit generic types, we need to add a fundamental combinator to lift a many-sorted strategy $s$ to a generic type gamma. This step is called strategy extension. The semantics of the corresponding combinator states that s is only applied if the type of the term at hand fits, otherwise the extended strategy fails. This approach dictates that the semantics of strategy application must be type-dependent to a certain extent. Typed strategic term rewriting with coverage of generic term traversal is a simple but expressive model of generic programming. It has applications in program transformation and program analysis.<|reference_end|>
arxiv
@article{laemmel2002typed, title={Typed Generic Traversal With Term Rewriting Strategies}, author={Ralf Laemmel}, journal={arXiv preprint arXiv:cs/0205018}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205018}, primaryClass={cs.PL} }
laemmel2002typed
arxiv-670511
cs/0205019
Distance function wavelets - Part I: Helmholtz and convection-diffusion transforms and series
<|reference_start|>Distance function wavelets - Part I: Helmholtz and convection-diffusion transforms and series: This report aims to present my research updates on distance function wavelets (DFW) based on the fundamental solutions and the general solutions of the Helmholtz, modified Helmholtz, and convection-diffusion equations, which include the isotropic Helmholtz-Fourier (HF) transform and series, the Helmholtz-Laplace (HL) transform, and the anisotropic convection-diffusion wavelets and ridgelets. The latter is set to handle discontinuous and track data problems. The edge effect of the HF series is addressed. Alternative existence conditions for the DFW transforms are proposed and discussed. To simplify and streamline the expression of the HF and HL transforms, a new dimension-dependent function notation is introduced. The HF series is also used to evaluate the analytical solutions of linear diffusion problems of arbitrary dimensionality and geometry. The weakness of this report is its lack of rigorous mathematical analysis, owing to the author's limited mathematical knowledge.<|reference_end|>
arxiv
@article{chen2002distance, title={Distance function wavelets - Part I: Helmholtz and convection-diffusion transforms and series}, author={W. Chen}, journal={arXiv preprint arXiv:cs/0205019}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205019}, primaryClass={cs.CE cs.NA} }
chen2002distance
arxiv-670512
cs/0205020
A quasi-RBF technique for numerical discretization of PDE's
<|reference_start|>A quasi-RBF technique for numerical discretization of PDE's: Atkinson developed a strategy which splits the solution of a PDE system into homogeneous and particular solutions, where the former have to satisfy the boundary conditions and the governing equation, while the latter only need to satisfy the governing equation without regard to geometry. Since the particular solution can be obtained irrespective of boundary shape, we can use a readily available fast Fourier or orthogonal polynomial technique of complexity O(N log N) to evaluate it in a regular box or sphere surrounding the physical domain. The distinction of this study is that we approximate the homogeneous solution with a nonsingular general solution RBF, as in the boundary knot method. The collocation method using general solution RBFs has very high accuracy and spectral convergence speed, and is a simple, truly meshfree approach for any complicated geometry. More importantly, the use of nonsingular general solutions avoids the controversial artificial boundary in the method of fundamental solutions, which arises from the singularity of the fundamental solution.<|reference_end|>
arxiv
@article{chen2002a, title={A quasi-RBF technique for numerical discretization of PDE's}, author={W. Chen}, journal={arXiv preprint arXiv:cs/0205020}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205020}, primaryClass={cs.CE cs.CG} }
chen2002a
arxiv-670513
cs/0205021
An Overview of a Grid Architecture for Scientific Computing
<|reference_start|>An Overview of a Grid Architecture for Scientific Computing: This document gives an overview of a Grid testbed architecture proposal for the NorduGrid project. The aim of the project is to establish an inter-Nordic testbed facility for implementation of wide area computing and data handling. The architecture is intended to define a Grid system suitable for solving data-intensive problems at the Large Hadron Collider at CERN. We present the various architecture components needed for such a system, and then describe the system dynamics by showing the task flow.<|reference_end|>
arxiv
@article{waananen2002an, title={An Overview of a Grid Architecture for Scientific Computing}, author={A.Waananen, M.Ellert, A.Konstantinov, B.Konya, O.Smirnova}, journal={arXiv preprint arXiv:cs/0205021}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205021}, primaryClass={cs.DC} }
waananen2002an
arxiv-670514
cs/0205022
The Traits of the Personable
<|reference_start|>The Traits of the Personable: Information personalization is fertile ground for application of AI techniques. In this article I relate personalization to the ability to capture partial information in an information-seeking interaction. The specific focus is on personalizing interactions at web sites. Using ideas from partial evaluation and explanation-based generalization, I present a modeling methodology for reasoning about personalization. This approach helps identify seven tiers of `personable traits' in web sites.<|reference_end|>
arxiv
@article{ramakrishnan2002the, title={The Traits of the Personable}, author={Naren Ramakrishnan}, journal={arXiv preprint arXiv:cs/0205022}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205022}, primaryClass={cs.AI cs.IR} }
ramakrishnan2002the
arxiv-670515
cs/0205023
Performance evaluation of the GridFTP within the NorduGrid project
<|reference_start|>Performance evaluation of the GridFTP within the NorduGrid project: This report presents results of the tests measuring the performance of multi-threaded file transfers, using the GridFTP implementation of the Globus project over the NorduGrid network resources. Point to point WAN tests, carried out between the sites of Copenhagen, Lund, Oslo and Uppsala, are described. It was found that multiple threaded download via the high performance GridFTP protocol can significantly improve file transfer performance, and can serve as a reliable data<|reference_end|>
arxiv
@article{ellert2002performance, title={Performance evaluation of the GridFTP within the NorduGrid project}, author={M.Ellert, A.Konstantinov, B.Konya, O.Smirnova, A.Waananen}, journal={arXiv preprint arXiv:cs/0205023}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205023}, primaryClass={cs.DC} }
ellert2002performance
arxiv-670516
cs/0205024
A (non)static 0-order statistical model and its implementation for compressing virtually uncompressible data
<|reference_start|>A (non)static 0-order statistical model and its implementation for compressing virtually uncompressible data: We give an implementation of a statistical model, which can be successfully applied for compressing a sequence of binary digits with behavior close to random.<|reference_end|>
arxiv
@article{vitchev2002a, title={A (non)static 0-order statistical model and its implementation for compressing virtually uncompressible data}, author={Evgueniy Vitchev}, journal={arXiv preprint arXiv:cs/0205024}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205024}, primaryClass={cs.DS cs.DM} }
vitchev2002a
arxiv-670517
cs/0205025
Bootstrapping Structure into Language: Alignment-Based Learning
<|reference_start|>Bootstrapping Structure into Language: Alignment-Based Learning: This thesis introduces a new unsupervised learning framework, called Alignment-Based Learning, which is based on the alignment of sentences and Harris's (1951) notion of substitutability. Instances of the framework can be applied to an untagged, unstructured corpus of natural language sentences, resulting in a labelled, bracketed version of that corpus. Firstly, the framework aligns all sentences in the corpus in pairs, resulting in a partition of the sentences consisting of parts of the sentences that are equal in both sentences and parts that are unequal. Unequal parts of sentences can be seen as being substitutable for each other, since substituting one unequal part for the other results in another valid sentence. The unequal parts of the sentences are thus considered to be possible (possibly overlapping) constituents, called hypotheses. Secondly, the selection learning phase considers all hypotheses found by the alignment learning phase and selects the best of these. The hypotheses are selected based on the order in which they were found, or based on a probabilistic function. The framework can be extended with a grammar extraction phase. This extended framework is called parseABL. Instead of returning a structured version of the unstructured input corpus, like the ABL system, this system also returns a stochastic context-free or tree substitution grammar. Different instances of the framework have been tested on the English ATIS corpus, the Dutch OVIS corpus and the Wall Street Journal corpus. One of the interesting results, apart from the encouraging numerical results, is that all instances can (and do) learn recursive structures.<|reference_end|>
arxiv
@article{van zaanen2002bootstrapping, title={Bootstrapping Structure into Language: Alignment-Based Learning}, author={Menno M. van Zaanen}, journal={arXiv preprint arXiv:cs/0205025}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205025}, primaryClass={cs.LG cs.CL} }
van zaanen2002bootstrapping
arxiv-670518
cs/0205026
Monads for natural language semantics
<|reference_start|>Monads for natural language semantics: Accounts of semantic phenomena often involve extending types of meanings and revising composition rules at the same time. The concept of monads allows many such accounts -- for intensionality, variable binding, quantification and focus -- to be stated uniformly and compositionally.<|reference_end|>
arxiv
@article{shan2002monads, title={Monads for natural language semantics}, author={Chung-chieh Shan (Harvard University)}, journal={Proceedings of the 2001 European Summer School in Logic, Language and Information student session, ed. Kristina Striegnitz, 285-298}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205026}, primaryClass={cs.CL cs.PL} }
shan2002monads
arxiv-670519
cs/0205027
A variable-free dynamic semantics
<|reference_start|>A variable-free dynamic semantics: I propose a variable-free treatment of dynamic semantics. By "dynamic semantics" I mean analyses of donkey sentences ("Every farmer who owns a donkey beats it") and other binding and anaphora phenomena in natural language where meanings of constituents are updates to information states, for instance as proposed by Groenendijk and Stokhof. By "variable-free" I mean denotational semantics in which functional combinators replace variable indices and assignment functions, for instance as advocated by Jacobson. The new theory presented here achieves a compositional treatment of dynamic anaphora that does not involve assignment functions, and separates the combinatorics of variable-free semantics from the particular linguistic phenomena it treats. Integrating variable-free semantics and dynamic semantics gives rise to interactions that make new empirical predictions, for example "donkey weak crossover" effects.<|reference_end|>
arxiv
@article{shan2002a, title={A variable-free dynamic semantics}, author={Chung-chieh Shan (Harvard University)}, journal={Proceedings of the 13th Amsterdam Colloquium, ed. Robert van Rooy and Martin Stokhof, 204-209 (2001)}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205027}, primaryClass={cs.CL} }
shan2002a
arxiv-670520
cs/0205028
NLTK: The Natural Language Toolkit
<|reference_start|>NLTK: The Natural Language Toolkit: NLTK, the Natural Language Toolkit, is a suite of open source program modules, tutorials and problem sets, providing ready-to-use computational linguistics courseware. NLTK covers symbolic and statistical natural language processing, and is interfaced to annotated corpora. Students augment and replace existing components, learn structured programming by example, and manipulate sophisticated models from the outset.<|reference_end|>
arxiv
@article{loper2002nltk:, title={NLTK: The Natural Language Toolkit}, author={Edward Loper and Steven Bird}, journal={arXiv preprint arXiv:cs/0205028}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205028}, primaryClass={cs.CL} }
loper2002nltk:
arxiv-670521
cs/0205029
A Codebook Generation Algorithm for Document Image Compression
<|reference_start|>A Codebook Generation Algorithm for Document Image Compression: Pattern-matching-based document-compression systems (e.g. for faxing) rely on finding a small set of patterns that can be used to represent all of the ink in the document. Finding an optimal set of patterns is NP-hard; previous compression schemes have resorted to heuristics. This paper describes an extension of the cross-entropy approach, used previously for measuring pattern similarity, to this problem. This approach reduces the problem to a k-medians problem, for which the paper gives a new algorithm with a provably good performance guarantee. In comparison to previous heuristics (First Fit, with and without generalized Lloyd's/k-means postprocessing steps), the new algorithm generates a better codebook, resulting in an overall improvement in compression performance of almost 17%.<|reference_end|>
arxiv
@article{zhang2002a, title={A Codebook Generation Algorithm for Document Image Compression}, author={Qin Zhang, John Danskin, Neal Young}, journal={arXiv preprint arXiv:cs/0205029}, year={2002}, doi={10.1109/DCC.1997.582053}, archivePrefix={arXiv}, eprint={cs/0205029}, primaryClass={cs.DS} }
zhang2002a
arxiv-670522
cs/0205030
Approximation Algorithms for Covering/Packing Integer Programs
<|reference_start|>Approximation Algorithms for Covering/Packing Integer Programs: Given matrices A and B and vectors a, b, c and d, all with non-negative entries, we consider the problem of computing min {c.x: x in Z^n_+, Ax > a, Bx < b, x < d}. We give a bicriteria-approximation algorithm that, given epsilon in (0, 1], finds a solution of cost O(ln(m)/epsilon^2) times optimal, meeting the covering constraints (Ax > a) and multiplicity constraints (x < d), and satisfying Bx < (1 + epsilon)b + beta, where beta is the vector of row sums beta_i = sum_j B_ij. Here m denotes the number of rows of A. This gives an O(ln m)-approximation algorithm for CIP -- minimum-cost covering integer programs with multiplicity constraints, i.e., the special case when there are no packing constraints Bx < b. The previous best approximation ratio has been O(ln(max_j sum_i A_ij)) since 1982. CIP contains the set cover problem as a special case, so O(ln m)-approximation is the best possible unless P=NP.<|reference_end|>
arxiv
@article{kolliopoulos2002approximation, title={Approximation Algorithms for Covering/Packing Integer Programs}, author={Stavros G. Kolliopoulos, Neal E. Young}, journal={Journal of Computer and System Sciences 71(4):495-505(2005)}, year={2002}, doi={10.1016/j.jcss.2005.05.002}, archivePrefix={arXiv}, eprint={cs/0205030}, primaryClass={cs.DS cs.DM} }
kolliopoulos2002approximation
arxiv-670523
cs/0205031
Lecture Notes on Evasiveness of Graph Properties
<|reference_start|>Lecture Notes on Evasiveness of Graph Properties: This report presents notes from the first eight lectures of the class Many Models of Complexity taught by Laszlo Lovasz at Princeton University in the fall of 1990. The topic is evasiveness of graph properties: given a graph property, how many edges of the graph an algorithm must check in the worst case before it knows whether the property holds.<|reference_end|>
arxiv
@article{lovasz2002lecture, title={Lecture Notes on Evasiveness of Graph Properties}, author={Laszlo Lovasz, Neal E. Young}, journal={arXiv preprint arXiv:cs/0205031}, year={2002}, number={Princeton University CS-TR-317-91}, archivePrefix={arXiv}, eprint={cs/0205031}, primaryClass={cs.CC} }
lovasz2002lecture
arxiv-670524
cs/0205032
On-Line End-to-End Congestion Control
<|reference_start|>On-Line End-to-End Congestion Control: Congestion control in the current Internet is accomplished mainly by TCP/IP. To understand the macroscopic network behavior that results from TCP/IP and similar end-to-end protocols, one main analytic technique is to show that the protocol maximizes some global objective function of the network traffic. Here we analyze a particular end-to-end, MIMD (multiplicative-increase, multiplicative-decrease) protocol. We show that if all users of the network use the protocol, and all connections last for at least logarithmically many rounds, then the total weighted throughput (value of all packets received) is near the maximum possible. Our analysis includes round-trip times, and (in contrast to most previous analyses) gives explicit convergence rates, allows connections to start and stop, and allows capacities to change.<|reference_end|>
arxiv
@article{garg2002on-line, title={On-Line End-to-End Congestion Control}, author={Naveen Garg, Neal E. Young}, journal={The 43rd Annual IEEE Symposium on Foundations of Computer Science, 303-310 (2002)}, year={2002}, doi={10.1109/SFCS.2002.1181953}, archivePrefix={arXiv}, eprint={cs/0205032}, primaryClass={cs.DS cs.CC cs.NI} }
garg2002on-line
arxiv-670525
cs/0205033
On-Line File Caching
<|reference_start|>On-Line File Caching: In the on-line file-caching problem, the input is a sequence of requests for files, given on-line (one at a time). Each file has a non-negative size and a non-negative retrieval cost. The problem is to decide which files to keep in a fixed-size cache so as to minimize the sum of the retrieval costs for files that are not in the cache when requested. The problem arises in web caching by browsers and by proxies. This paper describes a natural generalization of LRU called Landlord and gives an analysis showing that it has an optimal performance guarantee (among deterministic on-line algorithms). The paper also gives an analysis of the algorithm in a so-called ``loosely'' competitive model, showing that on a ``typical'' cache size, either the performance guarantee is O(1) or the total retrieval cost is insignificant.<|reference_end|>
arxiv
@article{young2002on-line, title={On-Line File Caching}, author={Neal E. Young}, journal={Algorithmica 33:371-383 (2002)}, year={2002}, doi={10.1007/s00453-001-0124-5}, archivePrefix={arXiv}, eprint={cs/0205033}, primaryClass={cs.DS cs.CC cs.NI} }
young2002on-line
arxiv-670526
cs/0205034
Data-Collection for the Sloan Digital Sky Survey: a Network-Flow Heuristic
<|reference_start|>Data-Collection for the Sloan Digital Sky Survey: a Network-Flow Heuristic: The goal of the Sloan Digital Sky Survey is ``to map in detail one-quarter of the entire sky, determining the positions and absolute brightnesses of more than 100 million celestial objects''. The survey will be performed by taking ``snapshots'' through a large telescope. Each snapshot can capture up to 600 objects from a small circle of the sky. This paper describes the design and implementation of the algorithm that is being used to determine the snapshots so as to minimize their number. The problem is NP-hard in general; the algorithm described is a heuristic, based on Lagrangian relaxation and min-cost network flow. It gets within 5-15% of a naive lower bound, whereas using a ``uniform'' cover only gets within 25-35%.<|reference_end|>
arxiv
@article{lupton2002data-collection, title={Data-Collection for the Sloan Digital Sky Survey: a Network-Flow Heuristic}, author={Robert Lupton, Miller Maley, and Neal Young}, journal={Journal of Algorithms 27(2):339-356 (1998)}, year={2002}, doi={10.1006/jagm.1997.0922}, archivePrefix={arXiv}, eprint={cs/0205034}, primaryClass={cs.DS cs.CE} }
lupton2002data-collection
arxiv-670527
cs/0205035
Simple Strategies for Large Zero-Sum Games with Applications to Complexity Theory
<|reference_start|>Simple Strategies for Large Zero-Sum Games with Applications to Complexity Theory: Von Neumann's Min-Max Theorem guarantees that each player of a zero-sum matrix game has an optimal mixed strategy. This paper gives an elementary proof that each player has a near-optimal mixed strategy that chooses uniformly at random from a multiset of pure strategies of size logarithmic in the number of pure strategies available to the opponent. For exponentially large games, for which even representing an optimal mixed strategy can require exponential space, it follows that there are near-optimal, linear-size strategies. These strategies are easy to play and serve as small witnesses to the approximate value of the game. As a corollary, it follows that every language has small ``hard'' multisets of inputs certifying that small circuits can't decide the language. For example, if SAT does not have polynomial-size circuits, then, for each n and c, there is a set of n^(O(c)) Boolean formulae of size n such that no circuit of size n^c (or algorithm running in time n^c) classifies more than two-thirds of the formulae successfully.<|reference_end|>
arxiv
@article{lipton2002simple, title={Simple Strategies for Large Zero-Sum Games with Applications to Complexity Theory}, author={Richard Lipton, Neal E. Young}, journal={ACM Symposium on Theory of Computing (1994)}, year={2002}, doi={10.1145/195058.195447}, archivePrefix={arXiv}, eprint={cs/0205035}, primaryClass={cs.CC cs.DM} }
lipton2002simple
arxiv-670528
cs/0205036
Randomized Rounding without Solving the Linear Program
<|reference_start|>Randomized Rounding without Solving the Linear Program: Randomized rounding is a standard method, based on the probabilistic method, for designing combinatorial approximation algorithms. In Raghavan's seminal paper introducing the method (1988), he writes: "The time taken to solve the linear program relaxations of the integer programs dominates the net running time theoretically (and, most likely, in practice as well)." This paper explores how this bottleneck can be avoided for randomized rounding algorithms for packing and covering problems (linear programs, or mixed integer linear programs, having no negative coefficients). The resulting algorithms are greedy algorithms, and are faster and simpler to implement than standard randomized-rounding algorithms. This approach can also be used to understand Lagrangian-relaxation algorithms for packing/covering linear programs: such algorithms can be viewed as (derandomized) randomized-rounding schemes.<|reference_end|>
arxiv
@article{young2002randomized, title={Randomized Rounding without Solving the Linear Program}, author={Neal E. Young}, journal={ACM-SIAM Symposium on Discrete Algorithms (1995)}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205036}, primaryClass={cs.DS cs.DM} }
young2002randomized
arxiv-670529
cs/0205037
A Primal-Dual Parallel Approximation Technique Applied to Weighted Set and Vertex Cover
<|reference_start|>A Primal-Dual Parallel Approximation Technique Applied to Weighted Set and Vertex Cover: The paper describes a simple deterministic parallel/distributed (2+epsilon)-approximation algorithm for the minimum-weight vertex-cover problem and its dual (edge/element packing).<|reference_end|>
arxiv
@article{khuller2002a, title={A Primal-Dual Parallel Approximation Technique Applied to Weighted Set and Vertex Cover}, author={Samir Khuller, Uzi Vishkin, Neal Young}, journal={Journal of Algorithms 17(2):280-289 (1994)}, year={2002}, doi={10.1006/jagm.1994.1036}, archivePrefix={arXiv}, eprint={cs/0205037}, primaryClass={cs.DS cs.DC} }
khuller2002a
arxiv-670530
cs/0205038
Competitive Paging Algorithms
<|reference_start|>Competitive Paging Algorithms: The paging problem is that of deciding which pages to keep in a memory of k pages in order to minimize the number of page faults. This paper introduces the marking algorithm, a simple randomized on-line algorithm for the paging problem, and gives a proof that its performance guarantee (competitive ratio) is O(log k). In contrast, no deterministic on-line algorithm can have a performance guarantee better than k.<|reference_end|>
arxiv
@article{fiat2002competitive, title={Competitive Paging Algorithms}, author={Amos Fiat, Richard Karp, Mike Luby, Lyle McGeoch, Daniel Sleator, Neal E. Young}, journal={Journal of Algorithms 12:685-699 (1991)}, year={2002}, doi={10.1016/0196-6774(91)90041-V}, archivePrefix={arXiv}, eprint={cs/0205038}, primaryClass={cs.DS cs.NI} }
fiat2002competitive
arxiv-670531
cs/0205039
Sequential and Parallel Algorithms for Mixed Packing and Covering
<|reference_start|>Sequential and Parallel Algorithms for Mixed Packing and Covering: Mixed packing and covering problems are problems that can be formulated as linear programs using only non-negative coefficients. Examples include multicommodity network flow, the Held-Karp lower bound on TSP, fractional relaxations of set cover, bin-packing, knapsack, scheduling problems, minimum-weight triangulation, etc. This paper gives approximation algorithms for the general class of problems. The sequential algorithm is a simple greedy algorithm that can be implemented to find an epsilon-approximate solution in O(epsilon^-2 log m) linear-time iterations. The parallel algorithm does comparable work but finishes in polylogarithmic time. The results generalize previous work on pure packing and covering (the special case when the constraints are all "less-than" or all "greater-than") by Michael Luby and Noam Nisan (1993) and Naveen Garg and Jochen Konemann (1998).<|reference_end|>
arxiv
@article{young2002sequential, title={Sequential and Parallel Algorithms for Mixed Packing and Covering}, author={Neal E. Young}, journal={arXiv preprint arXiv:cs/0205039}, year={2002}, doi={10.1109/SFCS.2001.959930}, archivePrefix={arXiv}, eprint={cs/0205039}, primaryClass={cs.DS} }
young2002sequential
arxiv-670532
cs/0205040
Approximating the Minimum Equivalent Digraph
<|reference_start|>Approximating the Minimum Equivalent Digraph: The MEG (minimum equivalent graph) problem is, given a directed graph, to find a small subset of the edges that maintains all reachability relations between nodes. The problem is NP-hard. This paper gives an approximation algorithm with performance guarantee of pi^2/6 ~ 1.64. The algorithm and its analysis are based on the simple idea of contracting long cycles. (This result is strengthened slightly in ``On strongly connected digraphs with bounded cycle length'' (1996).) The analysis applies directly to 2-Exchange, a simple ``local improvement'' algorithm, showing that its performance guarantee is 1.75.<|reference_end|>
arxiv
@article{khuller2002approximating, title={Approximating the Minimum Equivalent Digraph}, author={Samir Khuller, Balaji Raghavachari, Neal E. Young}, journal={SIAM J. Computing 24(4):859-872 (1995)}, year={2002}, doi={10.1137/S0097539793256685}, archivePrefix={arXiv}, eprint={cs/0205040}, primaryClass={cs.DS cs.DM} }
khuller2002approximating
arxiv-670533
cs/0205041
Faster Parametric Shortest Path and Minimum Balance Algorithms
<|reference_start|>Faster Parametric Shortest Path and Minimum Balance Algorithms: The parametric shortest path problem is to find the shortest paths in a graph where the edge costs are of the form w_ij+lambda, where each w_ij is constant and lambda is a parameter that varies. The problem is to find shortest path trees for every possible value of lambda. The minimum-balance problem is to find a ``weighting'' of the vertices so that adjusting the edge costs by the vertex weights yields a graph in which, for every cut, the minimum weight of any edge crossing the cut in one direction equals the minimum weight of any edge crossing the cut in the other direction. The paper presents fast algorithms for both problems. The algorithms run in O(nm+n^2 log n) time. The paper also describes empirical studies of the algorithms on random graphs, suggesting that the expected time for finding a minimum-mean cycle (an important special case of both problems) is O(n log(n) + m).<|reference_end|>
arxiv
@article{young2002faster, title={Faster Parametric Shortest Path and Minimum Balance Algorithms}, author={Neal Young, Robert Tarjan, James Orlin}, journal={Networks 21(2):205-221 (1991)}, year={2002}, doi={10.1002/net.3230210206}, archivePrefix={arXiv}, eprint={cs/0205041}, primaryClass={cs.DS cs.DM} }
young2002faster
arxiv-670534
cs/0205042
Orienting Graphs to Optimize Reachability
<|reference_start|>Orienting Graphs to Optimize Reachability: The paper focuses on two problems: (i) how to orient the edges of an undirected graph in order to maximize the number of ordered vertex pairs (x,y) such that there is a directed path from x to y, and (ii) how to orient the edges so as to minimize the number of such pairs. The paper describes a quadratic-time algorithm for the first problem, and a proof that the second problem is NP-hard to approximate within some constant 1+epsilon > 1. The latter proof also shows that the second problem is equivalent to ``comparability graph completion''; neither problem was previously known to be NP-hard.<|reference_end|>
arxiv
@article{hakimi2002orienting, title={Orienting Graphs to Optimize Reachability}, author={S. L. Hakimi, E. Schmeichel, Neal E. Young}, journal={Information Processing Letters 63:229-235 (1997)}, year={2002}, doi={10.1016/S0020-0190(97)00129-4}, archivePrefix={arXiv}, eprint={cs/0205042}, primaryClass={cs.DS cs.DM} }
hakimi2002orienting
arxiv-670535
cs/0205043
Low-Degree Spanning Trees of Small Weight
<|reference_start|>Low-Degree Spanning Trees of Small Weight: The degree-d spanning tree problem asks for a minimum-weight spanning tree in which the degree of each vertex is at most d. When d=2 the problem is TSP, and in this case, the well-known Christofides algorithm provides a 1.5-approximation algorithm (assuming the edge weights satisfy the triangle inequality). In 1984, Christos Papadimitriou and Umesh Vazirani posed the challenge of finding an algorithm with performance guarantee less than 2 for Euclidean graphs (points in R^n) and d > 2. This paper gives the first answer to that challenge, presenting an algorithm to compute a degree-3 spanning tree of cost at most 5/3 times the MST. For points in the plane, the ratio improves to 3/2 and the algorithm can also find a degree-4 spanning tree of cost at most 5/4 times the MST.<|reference_end|>
arxiv
@article{khuller2002low-degree, title={Low-Degree Spanning Trees of Small Weight}, author={Samir Khuller, Balaji Raghavachari, Neal E. Young}, journal={SIAM J. Computing 25(2):355-368 (1996)}, year={2002}, doi={10.1137/S0097539794264585}, archivePrefix={arXiv}, eprint={cs/0205043}, primaryClass={cs.DS cs.DM} }
khuller2002low-degree
arxiv-670536
cs/0205044
The K-Server Dual and Loose Competitiveness for Paging
<|reference_start|>The K-Server Dual and Loose Competitiveness for Paging: This paper has two results. The first is based on the surprising observation that the well-known ``least-recently-used'' paging algorithm and the ``balance'' algorithm for weighted caching are linear-programming primal-dual algorithms. This observation leads to a strategy (called ``Greedy-Dual'') that generalizes them both and has an optimal performance guarantee for weighted caching. For the second result, the paper presents empirical studies of paging algorithms, documenting that in practice, on ``typical'' cache sizes and sequences, the performance of paging strategies is much better than their worst-case analyses in the standard model suggest. The paper then presents theoretical results that support and explain this. For example: on any input sequence, with almost all cache sizes, either the performance guarantee of least-recently-used is O(log k) or the fault rate (in an absolute sense) is insignificant. Both of these results are strengthened and generalized in ``On-line File Caching'' (1998).<|reference_end|>
arxiv
@article{young2002the, title={The K-Server Dual and Loose Competitiveness for Paging}, author={Neal E. Young}, journal={Algorithmica 11(6):525-541 (1994)}, year={2002}, doi={10.1007/BF01189992}, archivePrefix={arXiv}, eprint={cs/0205044}, primaryClass={cs.DS cs.NI} }
young2002the
arxiv-670537
cs/0205045
Balancing Minimum Spanning and Shortest Path Trees
<|reference_start|>Balancing Minimum Spanning and Shortest Path Trees: This paper gives a simple linear-time algorithm that, given a weighted digraph, finds a spanning tree that simultaneously approximates a shortest-path tree and a minimum spanning tree. The algorithm provides a continuous trade-off: given the two trees and epsilon > 0, the algorithm returns a spanning tree in which the distance between any vertex and the root of the shortest-path tree is at most 1+epsilon times the shortest-path distance, and yet the total weight of the tree is at most 1+2/epsilon times the weight of a minimum spanning tree. This is the best tradeoff possible. The paper also describes a fast parallel implementation.<|reference_end|>
arxiv
@article{khuller2002balancing, title={Balancing Minimum Spanning and Shortest Path Trees}, author={Samir Khuller, Balaji Raghavachari, Neal E. Young}, journal={Algorithmica 14(4):305-322 (1995)}, year={2002}, doi={10.1007/BF01294129}, archivePrefix={arXiv}, eprint={cs/0205045}, primaryClass={cs.DS cs.DM} }
khuller2002balancing
arxiv-670538
cs/0205046
On the Number of Iterations for Dantzig-Wolfe Optimization and Packing-Covering Approximation Algorithms
<|reference_start|>On the Number of Iterations for Dantzig-Wolfe Optimization and Packing-Covering Approximation Algorithms: We give a lower bound on the iteration complexity of a natural class of Lagrangean-relaxation algorithms for approximately solving packing/covering linear programs. We show that, given an input with $m$ random 0/1-constraints on $n$ variables, with high probability, any such algorithm requires $\Omega(\rho \log(m)/\epsilon^2)$ iterations to compute a $(1+\epsilon)$-approximate solution, where $\rho$ is the width of the input. The bound is tight for a range of the parameters $(m,n,\rho,\epsilon)$. The algorithms in the class include Dantzig-Wolfe decomposition, Benders' decomposition, Lagrangean relaxation as developed by Held and Karp [1971] for lower-bounding TSP, and many others (e.g. by Plotkin, Shmoys, and Tardos [1988] and Grigoriadis and Khachiyan [1996]). To prove the bound, we use a discrepancy argument to show an analogous lower bound on the support size of $(1+\epsilon)$-approximate mixed strategies for random two-player zero-sum 0/1-matrix games.<|reference_end|>
arxiv
@article{klein2002on, title={On the Number of Iterations for Dantzig-Wolfe Optimization and Packing-Covering Approximation Algorithms}, author={Phil Klein, Neal E. Young}, journal={LNCS 1610 (IPCO): 320-327 (1999); SIAM Journal on Computing 44(4):1154-1172(2015)}, year={2002}, doi={10.1007/3-540-48777-8_24}, archivePrefix={arXiv}, eprint={cs/0205046}, primaryClass={cs.DS cs.CC} }
klein2002on
arxiv-670539
cs/0205047
K-Medians, Facility Location, and the Chernoff-Wald Bound
<|reference_start|>K-Medians, Facility Location, and the Chernoff-Wald Bound: The paper gives approximation algorithms for the k-medians and facility-location problems (both NP-hard). For k-medians, the algorithm returns a solution using at most ln(n+n/epsilon)k medians and having cost at most (1+epsilon) times the cost of the best solution that uses at most k medians. Here epsilon > 0 is an input to the algorithm. In comparison, the best previous algorithm (Jyh-Han Lin and Jeff Vitter, 1992) had a (1+1/epsilon)ln(n) term instead of the ln(n+n/epsilon) term in the performance guarantee. For facility location, the algorithm returns a solution of cost at most d+ln(n) k, provided there exists a solution of cost d+k where d is the assignment cost and k is the facility cost. In comparison, the best previous algorithm (Dorit Hochbaum, 1982) returned a solution of cost at most ln(n)(d+k). For both problems, the algorithms currently provide the best performance guarantee known for the general (non-metric) problems. The paper also introduces a new probabilistic bound (called "Chernoff-Wald bound") for bounding the expectation of the maximum of a collection of sums of random variables, when each sum contains a random number of terms. The bound is used to analyze the randomized rounding scheme that underlies the algorithms.<|reference_end|>
arxiv
@article{young2002k-medians, title={K-Medians, Facility Location, and the Chernoff-Wald Bound}, author={Neal E. Young}, journal={ACM-SIAM Symposium on Discrete Algorithms (2000)}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205047}, primaryClass={cs.DS cs.DM} }
young2002k-medians
arxiv-670540
cs/0205048
Huffman Coding with Letter Costs: A Linear-Time Approximation Scheme
<|reference_start|>Huffman Coding with Letter Costs: A Linear-Time Approximation Scheme: We give a polynomial-time approximation scheme for the generalization of Huffman Coding in which codeword letters have non-uniform costs (as in Morse code, where the dash is twice as long as the dot). The algorithm computes a (1+epsilon)-approximate solution in time O(n + f(epsilon) log^3 n), where n is the input size.<|reference_end|>
arxiv
@article{golin2002huffman, title={Huffman Coding with Letter Costs: A Linear-Time Approximation Scheme}, author={Mordecai Golin, Claire Mathieu, Neal E. Young}, journal={SIAM Journal on Computing 41(3):684-713(2012)}, year={2002}, doi={10.1137/100794092}, archivePrefix={arXiv}, eprint={cs/0205048}, primaryClass={cs.DS} }
golin2002huffman
arxiv-670541
cs/0205049
Prefix Codes: Equiprobable Words, Unequal Letter Costs
<|reference_start|>Prefix Codes: Equiprobable Words, Unequal Letter Costs: Describes a near-linear-time algorithm for a variant of Huffman coding, in which the letters may have non-uniform lengths (as in Morse code), but with the restriction that each word to be encoded has equal probability. [See also ``Huffman Coding with Unequal Letter Costs'' (2002).]<|reference_end|>
arxiv
@article{golin2002prefix, title={Prefix Codes: Equiprobable Words, Unequal Letter Costs}, author={Mordecai Golin, Neal E. Young}, journal={SIAM J. Computing 25(6):1281-1304 (1996)}, year={2002}, doi={10.1137/S0097539794268388}, archivePrefix={arXiv}, eprint={cs/0205049}, primaryClass={cs.DS} }
golin2002prefix
arxiv-670542
cs/0205050
A Network-Flow Technique for Finding Low-Weight Bounded-Degree Spanning Trees
<|reference_start|>A Network-Flow Technique for Finding Low-Weight Bounded-Degree Spanning Trees: The problem considered is the following. Given a graph with edge weights satisfying the triangle inequality, and a degree bound for each vertex, compute a low-weight spanning tree such that the degree of each vertex is at most its specified bound. The problem is NP-hard (it generalizes Traveling Salesman (TSP)). This paper describes a network-flow heuristic for modifying a given tree T to meet the constraints. Choosing T to be a minimum spanning tree (MST) yields approximation algorithms with performance guarantee less than 2 for the problem on geometric graphs with L_p-norms. The paper also describes a Euclidean graph whose minimum TSP costs twice the MST, disproving a conjecture made in ``Low-Degree Spanning Trees of Small Weight'' (1996).<|reference_end|>
arxiv
@article{fekete2002a, title={A Network-Flow Technique for Finding Low-Weight Bounded-Degree Spanning Trees}, author={S. Fekete, S. Khuller, M. Klemmstein, B. Raghavachari, Neal E. Young}, journal={Journal of Algorithms 24(2):310-324 (1997)}, year={2002}, doi={10.1006/jagm.1997.0862}, archivePrefix={arXiv}, eprint={cs/0205050}, primaryClass={cs.DS cs.DM} }
fekete2002a
arxiv-670543
cs/0205051
Rounding Algorithms for a Geometric Embedding of Minimum Multiway Cut
<|reference_start|>Rounding Algorithms for a Geometric Embedding of Minimum Multiway Cut: The multiway-cut problem is, given a weighted graph and k >= 2 terminal nodes, to find a minimum-weight set of edges whose removal separates all the terminals. The problem is NP-hard, and even NP-hard to approximate within 1+delta for some small delta > 0. Calinescu, Karloff, and Rabani (1998) gave an algorithm with performance guarantee 3/2-1/k, based on a geometric relaxation of the problem. In this paper, we give improved randomized rounding schemes for their relaxation, yielding a 12/11-approximation algorithm for k=3 and a 1.3438-approximation algorithm in general. Our approach hinges on the observation that the problem of designing a randomized rounding scheme for a geometric relaxation is itself a linear programming problem. The paper explores computational solutions to this problem, and gives a proof that for a general class of geometric relaxations, there are always randomized rounding schemes that match the integrality gap.<|reference_end|>
arxiv
@article{karger2002rounding, title={Rounding Algorithms for a Geometric Embedding of Minimum Multiway Cut}, author={David Karger, Phil Klein, Cliff Stein, Mikkel Thorup, Neal E. Young}, journal={Mathematics of Operations Research 29(3):436-461(2004)}, year={2002}, doi={10.1287/moor.1030.0086}, archivePrefix={arXiv}, eprint={cs/0205051}, primaryClass={cs.DS cs.DM} }
karger2002rounding
arxiv-670544
cs/0205052
Three-Tiered Specification of Micro-Architectures
<|reference_start|>Three-Tiered Specification of Micro-Architectures: A three-tiered specification approach is developed to formally specify collections of collaborating objects, say micro-architectures. (i) The structural properties to be maintained in the collaboration are specified in the lowest tier. (ii) The behaviour of the object methods in the classes is specified in the middle tier. (iii) The interaction of the objects in the micro-architecture is specified in the third tier. The specification approach is based on Larch and accompanying notations and tools. The approach enables the unambiguous and complete specification of reusable collections of collaborating objects. The layered, formal approach is compared to other approaches including the mainstream UML approach.<|reference_end|>
arxiv
@article{alagar2002three-tiered, title={Three-Tiered Specification of Micro-Architectures}, author={Vasu Alagar and Ralf Laemmel}, journal={arXiv preprint arXiv:cs/0205052}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205052}, primaryClass={cs.SE cs.PL} }
alagar2002three-tiered
arxiv-670545
cs/0205053
Sotto Voce: Exploring the Interplay of Conversation and Mobile Audio Spaces
<|reference_start|>Sotto Voce: Exploring the Interplay of Conversation and Mobile Audio Spaces: In addition to providing information to individual visitors, electronic guidebooks have the potential to facilitate social interaction between visitors and their companions. However, many systems impede visitor interaction. By contrast, our electronic guidebook, Sotto Voce, has social interaction as a primary design goal. The system enables visitors to share audio information - specifically, they can hear each other's guidebook activity using a technologically mediated audio eavesdropping mechanism. We conducted a study of visitors using Sotto Voce while touring a historic house. The results indicate that visitors are able to use the system effectively, both as a conversational resource and as an information appliance. More surprisingly, our results suggest that the technologically mediated audio often cohered the visitors' conversation and activity to a far greater degree than audio delivered through the open air.<|reference_end|>
arxiv
@article{aoki2002sotto, title={Sotto Voce: Exploring the Interplay of Conversation and Mobile Audio Spaces}, author={Paul M. Aoki, Rebecca E. Grinter, Amy Hurst, Margaret H. Szymanski, James D. Thornton, and Allison Woodruff}, journal={Proc. ACM SIGCHI Conference on Human Factors in Computing Systems, Minneapolis, MN, April 2002, 431-438. ACM Press.}, year={2002}, doi={10.1145/503376.503454}, archivePrefix={arXiv}, eprint={cs/0205053}, primaryClass={cs.HC cs.SD} }
aoki2002sotto
arxiv-670546
cs/0205054
Eavesdropping on Electronic Guidebooks: Observing Learning Resources in Shared Listening Environments
<|reference_start|>Eavesdropping on Electronic Guidebooks: Observing Learning Resources in Shared Listening Environments: We describe an electronic guidebook, Sotto Voce, that enables visitors to share audio information by eavesdropping on each other's guidebook activity. We have conducted three studies of visitors using electronic guidebooks in a historic house: one study with open air audio played through speakers and two studies with eavesdropped audio. An analysis of visitor interaction in these studies suggests that eavesdropped audio provides more social and interactive learning resources than open air audio played through speakers.<|reference_end|>
arxiv
@article{woodruff2002eavesdropping, title={Eavesdropping on Electronic Guidebooks: Observing Learning Resources in Shared Listening Environments}, author={Allison Woodruff, Paul M. Aoki, Rebecca E. Grinter, Amy Hurst, Margaret H. Szymanski, and James D. Thornton}, journal={In David Bearman and Jennifer Trant (eds.), Museums and the Web 2002: Selected Papers. (Proc. 6th International Conference on Museums and the Web, Boston, MA, April 2002.) Pittsburgh, PA: Archives & Museum Informatics, 2002, 21-30}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205054}, primaryClass={cs.HC} }
woodruff2002eavesdropping
arxiv-670547
cs/0205055
Practical Strategies for Integrating a Conversation Analyst in an Iterative Design Process
<|reference_start|>Practical Strategies for Integrating a Conversation Analyst in an Iterative Design Process: We present a case study of an iterative design process that includes a conversation analyst. We discuss potential benefits of conversation analysis for design, and we describe our strategies for integrating the conversation analyst in the design process. Since the analyst on our team had no previous exposure to design or engineering, and none of the other members of our team had any experience with conversation analysis, we needed to build a foundation for our interaction. One of our key strategies was to pair the conversation analyst with a designer in a highly interactive collaboration. Our tactics have been effective on our project, leading to valuable results that we believe we could not have obtained using another method. We hope that this paper can serve as a practical guide to those interested in establishing a productive and efficient working relationship between a conversation analyst and the other members of a design team.<|reference_end|>
arxiv
@article{woodruff2002practical, title={Practical Strategies for Integrating a Conversation Analyst in an Iterative Design Process}, author={Allison Woodruff, Margaret H. Szymanski, Rebecca E. Grinter, and Paul M. Aoki}, journal={Proc. ACM Conf. on Designing Interactive Systems, London, UK, June 2002, 19-28. ACM Press.}, year={2002}, doi={10.1145/778712.778748}, archivePrefix={arXiv}, eprint={cs/0205055}, primaryClass={cs.HC} }
woodruff2002practical
arxiv-670548
cs/0205056
Parameterized Intractability of Motif Search Problems
<|reference_start|>Parameterized Intractability of Motif Search Problems: We show that Closest Substring, one of the most important problems in the field of biological sequence analysis, is W[1]-hard when parameterized by the number k of input strings (and remains so, even over a binary alphabet). This problem is therefore unlikely to be solvable in time O(f(k)\cdot n^{c}) for any function f of k and constant c independent of k. The problem can therefore be expected to be intractable, in any practical sense, for k>=3. Our result supports the intuition that Closest Substring is computationally much harder than the special case of Closest String, although both problems are NP-complete. We also prove W[1]-hardness for other parameterizations in the case of unbounded alphabet size. Our W[1]-hardness result for Closest Substring generalizes to Consensus Patterns, a problem of similar significance in computational biology.<|reference_end|>
arxiv
@article{fellows2002parameterized, title={Parameterized Intractability of Motif Search Problems}, author={Michael R. Fellows, Jens Gramm and Rolf Niedermeier}, journal={arXiv preprint arXiv:cs/0205056}, year={2002}, number={WSI-2002-2}, archivePrefix={arXiv}, eprint={cs/0205056}, primaryClass={cs.CC} }
fellows2002parameterized
arxiv-670549
cs/0205057
Unsupervised Discovery of Morphemes
<|reference_start|>Unsupervised Discovery of Morphemes: We present two methods for unsupervised segmentation of words into morpheme-like units. The model utilized is especially suited for languages with a rich morphology, such as Finnish. The first method is based on the Minimum Description Length (MDL) principle and works online. In the second method, Maximum Likelihood (ML) optimization is used. The quality of the segmentations is measured using an evaluation method that compares the segmentations produced to an existing morphological analysis. Experiments on both Finnish and English corpora show that the presented methods perform well compared to a current state-of-the-art system.<|reference_end|>
arxiv
@article{creutz2002unsupervised, title={Unsupervised Discovery of Morphemes}, author={Mathias Creutz and Krista Lagus}, journal={arXiv preprint arXiv:cs/0205057}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205057}, primaryClass={cs.CL} }
creutz2002unsupervised
arxiv-670550
cs/0205058
Content Distribution in Unicast Replica Meshes
<|reference_start|>Content Distribution in Unicast Replica Meshes: We propose a centralized algorithm for data distribution in unicast peer-to-peer networks. Good examples of such networks are meshes of WWW and FTP mirrors. Simulation of data propagation for different network topologies is performed, and it is shown that the proposed method performs up to 200% better than common approaches.<|reference_end|>
arxiv
@article{novikov2002content, title={Content Distribution in Unicast Replica Meshes}, author={Alexei Novikov}, journal={arXiv preprint arXiv:cs/0205058}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205058}, primaryClass={cs.NI} }
novikov2002content
arxiv-670551
cs/0205059
A Connection-Centric Survey of Recommender Systems Research
<|reference_start|>A Connection-Centric Survey of Recommender Systems Research: Recommender systems attempt to reduce information overload and retain customers by selecting a subset of items from a universal set based on user preferences. While research in recommender systems grew out of information retrieval and filtering, the topic has steadily advanced into a legitimate and challenging research area of its own. Recommender systems have traditionally been studied from a content-based filtering vs. collaborative design perspective. Recommendations, however, are not delivered within a vacuum, but rather cast within an informal community of users and social context. Therefore, ultimately all recommender systems make connections among people and thus should be surveyed from such a perspective. This viewpoint is under-emphasized in the recommender systems literature. We therefore take a connection-oriented viewpoint toward recommender systems research. We posit that recommendation has an inherently social element and is ultimately intended to connect people either directly as a result of explicit user modeling or indirectly through the discovery of relationships implicit in extant data. Thus, recommender systems are characterized by how they model users to bring people together: explicitly or implicitly. Finally, user modeling and the connection-centric viewpoint raise broadening and social issues--such as evaluation, targeting, and privacy and trust--which we also briefly address.<|reference_end|>
arxiv
@article{perugini2002a, title={A Connection-Centric Survey of Recommender Systems Research}, author={Saverio Perugini, Marcos Andre Goncalves and Edward A. Fox}, journal={arXiv preprint arXiv:cs/0205059}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205059}, primaryClass={cs.IR cs.HC} }
perugini2002a
arxiv-670552
cs/0205060
Optimizing Queries Using a Meta-level Database
<|reference_start|>Optimizing Queries Using a Meta-level Database: Graph simulation (using graph schemata or data guides) has been successfully proposed as a technique for adding structure to semistructured data. Design patterns for description (such as meta-classes and homomorphisms between schema layers), which are prominent in the object-oriented programming community, constitute a generalization of this graph simulation approach. In this paper, we show that description is applicable to a wide range of data models that have some notion of object (-identity), and propose to turn it into a data model primitive much like, say, inheritance. We argue that such an extension fills a practical need in contemporary data management. Then, we present algebraic techniques for query optimization (using the notions of described and description queries). Finally, in the semistructured setting, we discuss the pruning of regular path queries (with nested conditions) using description meta-data. In this context, our notion of meta-data extends graph schemata and data guides by meta-level values, allowing query performance to be boosted and data redundancy to be reduced.<|reference_end|>
arxiv
@article{koch2002optimizing, title={Optimizing Queries Using a Meta-level Database}, author={Christoph Koch}, journal={arXiv preprint arXiv:cs/0205060}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205060}, primaryClass={cs.DB} }
koch2002optimizing
arxiv-670553
cs/0205061
Aging, double helix and small world property in genetic algorithms
<|reference_start|>Aging, double helix and small world property in genetic algorithms: Over a quarter of a century after the invention of genetic algorithms, and myriads of modifications and successful implementations later, we are still lacking many essential details of a thorough analysis of their inner workings. One such fundamental question is: how many generations do we need to solve the optimization problem? This paper tries to answer this question, albeit in a fuzzy way, making use of the double helix concept. As a byproduct we gain a better understanding of the ways in which the genetic algorithm may be fine-tuned.<|reference_end|>
arxiv
@article{gutowski2002aging, title={Aging, double helix and small world property in genetic algorithms}, author={Marek W. Gutowski}, journal={arXiv preprint arXiv:cs/0205061}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205061}, primaryClass={cs.NE cs.DS physics.data-an} }
gutowski2002aging
arxiv-670554
cs/0205062
Minimizing Cache Misses in Scientific Computing Using Isoperimetric Bodies
<|reference_start|>Minimizing Cache Misses in Scientific Computing Using Isoperimetric Bodies: A number of known techniques for improving cache performance in scientific computations involve the reordering of the iteration space. Some of these reorderings can be considered coverings of the iteration space with sets having small surface-to-volume ratios. Use of such sets may reduce the number of cache misses in computations of local operators having the iteration space as their domain. First, we derive lower bounds on cache misses that any algorithm must suffer while computing a local operator on a grid. Then, we explore coverings of iteration spaces of structured and unstructured discretization grid operators which allow us to approach these lower bounds. For structured grids we introduce a covering by successive minima tiles based on the interference lattice of the grid. We show that the covering has a small surface-to-volume ratio and present a computer experiment showing actual reduction of the cache misses achieved by using these tiles. For planar unstructured grids we show existence of a covering which reduces the number of cache misses to the level of that of structured grids. Next, we introduce a class of multidimensional grids, called starry grids in this paper. These grids represent an abstraction of unstructured grids used in, for example, molecular simulations and the solution of partial differential equations. We show that starry grids can be covered by sets having a low surface-to-volume ratio and, hence have the same cache efficiency as structured grids. Finally, we present a triangulation of a three-dimensional cube that has the property that any local operator on the corresponding grid must incur a significantly larger number of cache misses than a similar operator on a structured grid of the same size.<|reference_end|>
arxiv
@article{frumkin2002minimizing, title={Minimizing Cache Misses in Scientific Computing Using Isoperimetric Bodies}, author={Michael Frumkin, Rob F. Van der Wijngaart}, journal={arXiv preprint arXiv:cs/0205062}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205062}, primaryClass={cs.PF} }
frumkin2002minimizing
arxiv-670555
cs/0205063
Distance function wavelets - Part II: Extended results and conjectures
<|reference_start|>Distance function wavelets - Part II: Extended results and conjectures: Report II is concerned with extended results on distance function wavelets (DFW). The fractional DFW transforms are first addressed in relation to fractal geometry and the fractional derivative, and then the discrete Helmholtz-Fourier transform is briefly presented. The Green second identity may be an alternative device in developing the theoretical framework of the DFW transform and series. The kernel solutions of the Winkler plate equation and the Burgers equation are used to create the DFW transforms and series. Most interestingly, it is found that the translation-invariant monomial solutions of the high-order Laplace equations can be used to make very simple harmonic polynomial DFW series. In most cases of this study, solid mathematical analysis is missing and the results, obtained intuitively, remain conjectures.<|reference_end|>
arxiv
@article{chen2002distance, title={Distance function wavelets - Part II: Extended results and conjectures}, author={W. Chen}, journal={arXiv preprint arXiv:cs/0205063}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205063}, primaryClass={cs.CE cs.CG} }
chen2002distance
arxiv-670556
cs/0205064
Three complete deterministic polynomial algorithms for 3SAT
<|reference_start|>Three complete deterministic polynomial algorithms for 3SAT: Three algorithms are presented that determine the existence of satisfying assignments for 3SAT Boolean satisfiability expressions. One algorithm is presented for determining an instance of a satisfying assignment, where one exists. The algorithms are each deterministic and of polynomial complexity. The algorithms determining existence are complete, as each produces a certificate of non-satisfiability for instances where no satisfying assignment exists, and a certificate of satisfiability for instances where such an assignment does exist.<|reference_end|>
arxiv
@article{sauerbier2002three, title={Three complete deterministic polynomial algorithms for 3SAT}, author={Charles Sauerbier}, journal={arXiv preprint arXiv:cs/0205064}, year={2002}, number={3SEA-2019-12}, archivePrefix={arXiv}, eprint={cs/0205064}, primaryClass={cs.CC} }
sauerbier2002three
arxiv-670557
cs/0205065
Bootstrapping Lexical Choice via Multiple-Sequence Alignment
<|reference_start|>Bootstrapping Lexical Choice via Multiple-Sequence Alignment: An important component of any generation system is the mapping dictionary, a lexicon of elementary semantic expressions and corresponding natural language realizations. Typically, labor-intensive knowledge-based methods are used to construct the dictionary. We instead propose to acquire it automatically via a novel multiple-pass algorithm employing multiple-sequence alignment, a technique commonly used in bioinformatics. Crucially, our method leverages latent information contained in multi-parallel corpora -- datasets that supply several verbalizations of the corresponding semantics rather than just one. We used our techniques to generate natural language versions of computer-generated mathematical proofs, with good results on both a per-component and overall-output basis. For example, in evaluations involving a dozen human judges, our system produced output whose readability and faithfulness to the semantic input rivaled that of a traditional generation system.<|reference_end|>
arxiv
@article{barzilay2002bootstrapping, title={Bootstrapping Lexical Choice via Multiple-Sequence Alignment}, author={Regina Barzilay and Lillian Lee}, journal={arXiv preprint arXiv:cs/0205065}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205065}, primaryClass={cs.CL} }
barzilay2002bootstrapping
arxiv-670558
cs/0205066
Effectiveness of Preference Elicitation in Combinatorial Auctions
<|reference_start|>Effectiveness of Preference Elicitation in Combinatorial Auctions: Combinatorial auctions where agents can bid on bundles of items are desirable because they allow the agents to express complementarity and substitutability between the items. However, expressing one's preferences can require bidding on all bundles. Selective incremental preference elicitation by the auctioneer was recently proposed to address this problem (Conen & Sandholm 2001), but the idea was not evaluated. In this paper we show, experimentally and theoretically, that automated elicitation provides a drastic benefit. In all of the elicitation schemes under study, as the number of items for sale increases, the amount of information elicited is a vanishing fraction of the information collected in traditional ``direct revelation mechanisms'' where bidders reveal all their valuation information. Most of the elicitation schemes also maintain the benefit as the number of agents increases. We develop more effective elicitation policies for existing query types. We also present a new query type that takes the incremental nature of elicitation to a new level by allowing agents to give approximate answers that are refined only on an as-needed basis. In the process, we present methods for evaluating different types of elicitation policies.<|reference_end|>
arxiv
@article{hudson2002effectiveness, title={Effectiveness of Preference Elicitation in Combinatorial Auctions}, author={Benoit Hudson, Tuomas Sandholm}, journal={arXiv preprint arXiv:cs/0205066}, year={2002}, number={CMU-CS-02-124}, archivePrefix={arXiv}, eprint={cs/0205066}, primaryClass={cs.GT cs.MA} }
hudson2002effectiveness
arxiv-670559
cs/0205067
Evaluating the Effectiveness of Ensembles of Decision Trees in Disambiguating Senseval Lexical Samples
<|reference_start|>Evaluating the Effectiveness of Ensembles of Decision Trees in Disambiguating Senseval Lexical Samples: This paper presents an evaluation of an ensemble--based system that participated in the English and Spanish lexical sample tasks of Senseval-2. The system combines decision trees of unigrams, bigrams, and co--occurrences into a single classifier. The analysis is extended to include the Senseval-1 data.<|reference_end|>
arxiv
@article{pedersen2002evaluating, title={Evaluating the Effectiveness of Ensembles of Decision Trees in Disambiguating Senseval Lexical Samples}, author={Ted Pedersen}, journal={arXiv preprint arXiv:cs/0205067}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205067}, primaryClass={cs.CL} }
pedersen2002evaluating
arxiv-670560
cs/0205068
Assessing System Agreement and Instance Difficulty in the Lexical Sample Tasks of Senseval-2
<|reference_start|>Assessing System Agreement and Instance Difficulty in the Lexical Sample Tasks of Senseval-2: This paper presents a comparative evaluation among the systems that participated in the Spanish and English lexical sample tasks of Senseval-2. The focus is on pairwise comparisons among systems to assess the degree to which they agree, and on measuring the difficulty of the test instances included in these tasks.<|reference_end|>
arxiv
@article{pedersen2002assessing, title={Assessing System Agreement and Instance Difficulty in the Lexical Sample Tasks of Senseval-2}, author={Ted Pedersen}, journal={arXiv preprint arXiv:cs/0205068}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205068}, primaryClass={cs.CL} }
pedersen2002assessing
arxiv-670561
cs/0205069
Machine Learning with Lexical Features: The Duluth Approach to Senseval-2
<|reference_start|>Machine Learning with Lexical Features: The Duluth Approach to Senseval-2: This paper describes the sixteen Duluth entries in the Senseval-2 comparative exercise among word sense disambiguation systems. There were eight pairs of Duluth systems entered in the Spanish and English lexical sample tasks. These are all based on standard machine learning algorithms that induce classifiers from sense-tagged training text, where the contexts in which ambiguous words occur are represented by simple lexical features. These are highly portable, robust methods that can serve as a foundation for more tailored approaches.<|reference_end|>
arxiv
@article{pedersen2002machine, title={Machine Learning with Lexical Features: The Duluth Approach to Senseval-2}, author={Ted Pedersen}, journal={arXiv preprint arXiv:cs/0205069}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205069}, primaryClass={cs.CL} }
pedersen2002machine
arxiv-670562
cs/0205070
Thumbs up? Sentiment Classification using Machine Learning Techniques
<|reference_start|>Thumbs up? Sentiment Classification using Machine Learning Techniques: We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. We conclude by examining factors that make the sentiment classification problem more challenging.<|reference_end|>
arxiv
@article{pang2002thumbs, title={Thumbs up? Sentiment Classification using Machine Learning Techniques}, author={Bo Pang, Lillian Lee and Shivakumar Vaithyanathan}, journal={arXiv preprint arXiv:cs/0205070}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205070}, primaryClass={cs.CL cs.LG} }
pang2002thumbs
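As a rough illustration of the kind of pipeline compared in the sentiment-classification abstract above (not the authors' exact setup or data), the following sketch trains a unigram Naive Bayes classifier on a handful of invented review snippets; the reviews, labels, and the binary-presence feature choice are assumptions for illustration only.

```python
# Hedged sketch: unigram Naive Bayes for positive/negative review polarity.
# The tiny corpus and labels below are invented placeholders, not the movie
# review data used in the paper.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = [
    "a gripping, beautifully acted film",       # hypothetical positive review
    "tedious plot and wooden performances",     # hypothetical negative review
    "funny, sharp, and consistently engaging",  # hypothetical positive review
    "a dull, predictable mess",                 # hypothetical negative review
]
labels = ["pos", "neg", "pos", "neg"]

# binary=True records word presence rather than counts (one common choice).
model = make_pipeline(CountVectorizer(binary=True), MultinomialNB())
model.fit(reviews, labels)
print(model.predict(["sharp dialogue but a predictable plot"]))
```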
arxiv-670563
cs/0205071
A Scalable Architecture for Harvest-Based Digital Libraries - The ODU/Southampton Experiments
<|reference_start|>A Scalable Architecture for Harvest-Based Digital Libraries - The ODU/Southampton Experiments: This paper discusses the requirements of current and emerging applications based on the Open Archives Initiative (OAI) and emphasizes the need for a common infrastructure to support them. Inspired by HTTP proxy, cache, gateway and web service concepts, a design for a scalable and reliable infrastructure that aims at satisfying these requirements is presented. Moreover it is shown how various applications can exploit the services included in the proposed infrastructure. The paper concludes by discussing the current status of several prototype implementations.<|reference_end|>
arxiv
@article{liu2002a, title={A Scalable Architecture for Harvest-Based Digital Libraries - The ODU/Southampton Experiments}, author={Xiaoming Liu, Tim Brody, Stevan Harnad, Les Carr, Kurt Maly, Mohammad Zubair, Michael L. Nelson}, journal={arXiv preprint arXiv:cs/0205071}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205071}, primaryClass={cs.DL cs.IR} }
liu2002a
arxiv-670564
cs/0205072
Unsupervised Learning of Morphology without Morphemes
<|reference_start|>Unsupervised Learning of Morphology without Morphemes: The first morphological learner based upon the theory of Whole Word Morphology (Ford et al., 1997) is outlined, and preliminary evaluation results are presented. The program, Whole Word Morphologizer, takes a POS-tagged lexicon as input, induces morphological relationships without attempting to discover or identify morphemes, and is then able to generate new words beyond the learning sample. The accuracy (precision) of the generated new words is as high as 80% using the pure Whole Word theory, and 92% after a post-hoc adjustment is added to the routine.<|reference_end|>
arxiv
@article{neuvel2002unsupervised, title={Unsupervised Learning of Morphology without Morphemes}, author={Sylvain Neuvel and Sean A. Fulop}, journal={arXiv preprint arXiv:cs/0205072}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205072}, primaryClass={cs.CL cs.LG} }
neuvel2002unsupervised
arxiv-670565
cs/0205073
Vote Elicitation: Complexity and Strategy-Proofness
<|reference_start|>Vote Elicitation: Complexity and Strategy-Proofness: Preference elicitation is a central problem in AI, and has received significant attention in single-agent settings. It is also a key problem in multiagent systems, but has received little attention here so far. In this setting, the agents may have different preferences that often must be aggregated using voting. This leads to interesting issues because what, if any, information should be elicited from an agent depends on what other agents have revealed about their preferences so far. In this paper we study effective elicitation, and its impediments, for the most common voting protocols. It turns out that in the Single Transferable Vote protocol, even knowing when to terminate elicitation is NP-complete, while this is easy for all the other protocols under study. Even for these protocols, determining how to elicit effectively is NP-complete, even with perfect suspicions about how the agents will vote. The exception is the Plurality protocol where such effective elicitation is easy. We also show that elicitation introduces additional opportunities for strategic manipulation by the voters. We demonstrate how to curtail the space of elicitation schemes so that no such additional strategic issues arise.<|reference_end|>
arxiv
@article{conitzer2002vote, title={Vote Elicitation: Complexity and Strategy-Proofness}, author={Vincent Conitzer, Tuomas Sandholm}, journal={Proceedings of the 18th National Conference on Artificial Intelligence (AAAI-02), Edmonton, Canada, 2002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205073}, primaryClass={cs.GT cs.CC cs.MA} }
conitzer2002vote
arxiv-670566
cs/0205074
Complexity Results about Nash Equilibria
<|reference_start|>Complexity Results about Nash Equilibria: Noncooperative game theory provides a normative framework for analyzing strategic interactions. However, for the toolbox to be operational, the solutions it defines will have to be computed. In this paper, we provide a single reduction that 1) demonstrates NP-hardness of determining whether Nash equilibria with certain natural properties exist, and 2) demonstrates the #P-hardness of counting Nash equilibria (or connected sets of Nash equilibria). We also show that 3) determining whether a pure-strategy Bayes-Nash equilibrium exists is NP-hard, and that 4) determining whether a pure-strategy Nash equilibrium exists in a stochastic (Markov) game is PSPACE-hard even if the game is invisible (this remains NP-hard if the game is finite). All of our hardness results hold even if there are only two players and the game is symmetric. Keywords: Nash equilibrium; game theory; computational complexity; noncooperative game theory; normal form game; stochastic game; Markov game; Bayes-Nash equilibrium; multiagent systems.<|reference_end|>
arxiv
@article{conitzer2002complexity, title={Complexity Results about Nash Equilibria}, author={Vincent Conitzer, Tuomas Sandholm}, journal={In Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI-03), Acapulco, Mexico, 2003}, year={2002}, number={CMU-CS-02-135}, archivePrefix={arXiv}, eprint={cs/0205074}, primaryClass={cs.GT cs.CC cs.MA} }
conitzer2002complexity
arxiv-670567
cs/0205075
Complexity of Mechanism Design
<|reference_start|>Complexity of Mechanism Design: The aggregation of conflicting preferences is a central problem in multiagent systems. The key difficulty is that the agents may report their preferences insincerely. Mechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully and a (socially) desirable outcome is chosen. We propose an approach where a mechanism is automatically created for the preference aggregation setting at hand. This has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time. Focusing on settings where side payments are not possible, we show that the mechanism design problem is NP-complete for deterministic mechanisms. This holds both for dominant-strategy implementation and for Bayes-Nash implementation. We then show that if we allow randomized mechanisms, the mechanism design problem becomes tractable. In other words, the coordinator can tackle the computational complexity introduced by its uncertainty about the agents' preferences by making the agents face additional uncertainty. This comes at no loss, and in some cases at a gain, in the (social) objective.<|reference_end|>
arxiv
@article{conitzer2002complexity, title={Complexity of Mechanism Design}, author={Vincent Conitzer, Tuomas Sandholm}, journal={Proceedings of the 18th Annual Conference on Uncertainty in Artificial Intelligence (UAI-02), Edmonton, Canada, 2002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205075}, primaryClass={cs.GT cs.CC cs.MA} }
conitzer2002complexity
arxiv-670568
cs/0205076
Complexity of Manipulating Elections with Few Candidates
<|reference_start|>Complexity of Manipulating Elections with Few Candidates: In multiagent settings where the agents have different preferences, preference aggregation is a central issue. Voting is a general method for preference aggregation, but seminal results have shown that all general voting protocols are manipulable. One could try to avoid manipulation by using voting protocols where determining a beneficial manipulation is hard. Especially among computational agents, it is reasonable to measure this hardness by computational complexity. Some earlier work has been done in this area, but it was assumed that the number of voters and candidates is unbounded. We derive hardness results for practical multiagent settings where the number of candidates is small but the number of voters can be large. We show that with complete information about the others' votes, individual manipulation is easy, and coalitional manipulation is easy with unweighted voters. However, constructive coalitional manipulation with weighted voters is intractable for all of the voting protocols under study, except for the nonrandomized Cup. Destructive manipulation tends to be easier. Randomizing over instantiations of the protocols (such as schedules of the Cup protocol) can be used to make manipulation hard. Finally, we show that under weak assumptions, if weighted coalitional manipulation with complete information about the others' votes is hard in some voting protocol, then individual and unweighted manipulation is hard when there is uncertainty about the others' votes.<|reference_end|>
arxiv
@article{conitzer2002complexity, title={Complexity of Manipulating Elections with Few Candidates}, author={Vincent Conitzer, Tuomas Sandholm}, journal={Proceedings of the 18th National Conference on Artificial Intelligence (AAAI-02), Edmonton, Canada, 2002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205076}, primaryClass={cs.GT} }
conitzer2002complexity
arxiv-670569
cs/0205077
Designing Multi-Commodity Flow Trees
<|reference_start|>Designing Multi-Commodity Flow Trees: The traditional multi-commodity flow problem assumes a given flow network in which multiple commodities are to be maximally routed in response to given demands. This paper considers the multi-commodity flow network-design problem: given a set of multi-commodity flow demands, find a network subject to certain constraints such that the commodities can be maximally routed. This paper focuses on the case when the network is required to be a tree. The main result is an approximation algorithm for the case when the tree is required to be of constant degree. The algorithm reduces the problem to the minimum-weight balanced-separator problem; the performance guarantee of the algorithm is within a factor of 4 of the performance guarantee of the balanced-separator procedure. If Leighton and Rao's balanced-separator procedure is used, the performance guarantee is O(log n). This improves the O(log^2 n) approximation factor that is trivial to obtain by a direct application of the balanced-separator method.<|reference_end|>
arxiv
@article{khuller2002designing, title={Designing Multi-Commodity Flow Trees}, author={Samir Khuller, Balaji Raghavachari, Neal E. Young}, journal={Information Processing Letters 50:49-55 (1994)}, year={2002}, doi={10.1016/0020-0190(94)90044-2}, archivePrefix={arXiv}, eprint={cs/0205077}, primaryClass={cs.DS cs.DM} }
khuller2002designing
arxiv-670570
cs/0205078
A Spectrum of Applications of Automated Reasoning
<|reference_start|>A Spectrum of Applications of Automated Reasoning: The likelihood of an automated reasoning program being of substantial assistance for a wide spectrum of applications rests with the nature of the options and parameters it offers on which to base needed strategies and methodologies. This article focuses on such a spectrum, featuring W. McCune's program OTTER, discussing widely varied successes in answering open questions, and touching on some of the strategies and methodologies that played a key role. The applications include finding a first proof, discovering single axioms, locating improved axiom systems, and simplifying existing proofs. The last application is directly pertinent to Hilbert's twenty-fourth problem, recently found by R. Thiele--a problem concerned with proof simplification that is extremely amenable to attack with the appropriate automated reasoning program. The methodologies include those for seeking shorter proofs and for finding proofs that avoid unwanted lemmas or classes of terms, a specific option for seeking proofs with smaller equational or formula complexity, and a different option to address the variable richness of a proof. The type of proof one obtains with the use of OTTER is Hilbert-style axiomatic, including details that permit one sometimes to gain new insights. We include questions still open and challenges that merit consideration.<|reference_end|>
arxiv
@article{wos2002a, title={A Spectrum of Applications of Automated Reasoning}, author={Larry Wos}, journal={arXiv preprint arXiv:cs/0205078}, year={2002}, number={ANL/MCS-P923-0102}, archivePrefix={arXiv}, eprint={cs/0205078}, primaryClass={cs.AI cs.LO} }
wos2002a
arxiv-670571
cs/0205079
Connectives in Quantum and other Cumulative Logics
<|reference_start|>Connectives in Quantum and other Cumulative Logics: Cumulative logics are studied in an abstract setting, i.e., without connectives, very much in the spirit of Makinson's early work. A powerful representation theorem characterizes those logics by choice functions that satisfy a weakening of Sen's property alpha, in the spirit of the author's "Nonmonotonic Logics and Semantics" (JLC). The representation results obtained are surprisingly smooth: in the completeness part the choice function may be defined on any set of worlds, not only definable sets and no definability-preservation property is required in the soundness part. For abstract cumulative logics, proper conjunction and negation may be defined. Contrary to the situation studied in "Nonmonotonic Logics and Semantics" no proper disjunction seems to be definable in general. The cumulative relations of KLM that satisfy some weakening of the consistency preservation property all define cumulative logics with a proper negation. Quantum Logics, as defined by Engesser and Gabbay are such cumulative logics but the negation defined by orthogonal complement does not provide a proper negation.<|reference_end|>
arxiv
@article{lehmann2002connectives, title={Connectives in Quantum and other Cumulative Logics}, author={Daniel Lehmann}, journal={arXiv preprint arXiv:cs/0205079}, year={2002}, number={TR-2002-28, Leibniz Center for Research in Computer Science, Hebrew University, revised August 2002}, archivePrefix={arXiv}, eprint={cs/0205079}, primaryClass={cs.AI math.LO} }
lehmann2002connectives
arxiv-670572
cs/0205080
Transforming the World Wide Web into a Complexity-Based Semantic Network
<|reference_start|>Transforming the World Wide Web into a Complexity-Based Semantic Network: The aim of this paper is to introduce the idea of the Semantic Web to the Complexity community and to lay the groundwork for a project resulting in the creation of an Internet-based semantic network of Complexity-related information providers. Implementation of Semantic Web technology would be of mutual benefit to both participants and users, and would confirm the self-referential power of the community to apply the products of its own research to itself. We first explain the logic of the transition and discuss important notions associated with Semantic Web technology. We then present a brief outline of the project milestones.<|reference_end|>
arxiv
@article{marko2002transforming, title={Transforming the World Wide Web into a Complexity-Based Semantic Network}, author={M. Marko, M.A. Porter, A. Probst, C. Gershenson, A. Das}, journal={arXiv preprint arXiv:cs/0205080}, year={2002}, archivePrefix={arXiv}, eprint={cs/0205080}, primaryClass={cs.NI cs.IR} }
marko2002transforming
arxiv-670573
cs/0206001
Neural Net Model for Featured Word Extraction
<|reference_start|>Neural Net Model for Featured Word Extraction: Search engines perform the task of retrieving information related to user-supplied query words. This task has two parts: one is finding "featured words" which best describe an article, and the other is matching these words to user-defined search terms. There are two main independent approaches to this task. The first, using the concepts of semantics, has been partially implemented; for more details see the companion paper by Marko et al. (2002). The second approach is reported in this paper. It is a theoretical model based on a neural network (NN). Instead of using keywords or reading the first few lines of papers/articles, the present model emphasizes extracting "featured words" from an article. We propose to exclude prepositions, articles and so on, that is, English words like "of", "the", "are", "so", "therefore", etc., from such a list. A neural model is taken with its nodes pre-assigned energies. Whenever a match is found between a featured word and a user-defined search word, the node fires and jumps to a higher energy. This firing continues until the model attains a steady energy level, and the total energy is then calculated. Clearly, a higher degree of match generates higher energy; so, on the basis of total energy, the article is ranked by its degree of relevance to the user's interest. Another important feature of the proposed model is the incorporation of a semantic module to refine the search words, for example by finding associations among them. In this manner, information retrieval can be improved markedly.<|reference_end|>
arxiv
@article{das2002neural, title={Neural Net Model for Featured Word Extraction}, author={A. Das, M. Marko, A. Probst, M. A. Porter, C. Gershenson}, journal={arXiv preprint arXiv:cs/0206001}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206001}, primaryClass={cs.NE cs.NI} }
das2002neural
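A minimal sketch of the energy-and-firing idea described in the abstract above, under our own assumptions: a stop-word set stands in for the exclusion of prepositions and articles, matched nodes jump from a base energy to a higher "fired" energy, and articles are ranked by total energy. The energy values, stop-word list, and function names are illustrative, not the authors' model.

```python
# Hedged sketch of featured-word scoring by node energies (assumed values).
STOPWORDS = {"of", "the", "are", "so", "therefore", "a", "an", "and", "is", "to"}

def featured_words(text):
    """Crude featured-word extraction: lowercase tokens minus stop words."""
    return {w.strip(".,;:").lower() for w in text.split()} - STOPWORDS

def article_energy(article_text, query_terms, base=1.0, fired=5.0):
    """Total energy once firing settles: matched nodes sit at `fired`, others at `base`."""
    query = {q.lower() for q in query_terms}
    return sum(fired if w in query else base for w in featured_words(article_text))

def rank_articles(articles, query_terms):
    """Higher total energy is taken as higher relevance to the user's query."""
    return sorted(articles, key=lambda a: article_energy(a, query_terms), reverse=True)

docs = ["Neural networks extract featured words from articles",
        "The history of the printing press"]
print(rank_articles(docs, ["neural", "featured"]))
```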
arxiv-670574
cs/0206002
Building Space-Time Meshes over Arbitrary Spatial Domains
<|reference_start|>Building Space-Time Meshes over Arbitrary Spatial Domains: We present an algorithm to construct meshes suitable for space-time discontinuous Galerkin finite-element methods. Our method generalizes and improves the `Tent Pitcher' algorithm of Üngör and Sheffer. Given an arbitrary simplicially meshed domain M of any dimension and a time interval [0,T], our algorithm builds a simplicial mesh of the space-time domain M x [0,T], in constant time per element. Our algorithm avoids the limitations of previous methods by carefully adapting the durations of space-time elements to the local quality and feature size of the underlying space mesh.<|reference_end|>
arxiv
@article{erickson2002building, title={Building Space-Time Meshes over Arbitrary Spatial Domains}, author={Jeff Erickson, Damrong Guoy, John M. Sullivan, and Alper Üngör}, journal={arXiv preprint arXiv:cs/0206002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206002}, primaryClass={cs.CG} }
erickson2002building
arxiv-670575
cs/0206003
Handling Defeasibilities in Action Domains
<|reference_start|>Handling Defeasibilities in Action Domains: Representing defeasibility is an important issue in common sense reasoning. In reasoning about action and change, this issue becomes more difficult because domain and action related defeasible information may conflict with general inertia rules. Furthermore, different types of defeasible information may also interfere with each other during the reasoning. In this paper, we develop a prioritized logic programming approach to handle defeasibilities in reasoning about action. In particular, we propose three action languages {\cal AT}^{0}, {\cal AT}^{1} and {\cal AT}^{2} which handle three types of defeasibilities in action domains named defeasible constraints, defeasible observations and actions with defeasible and abnormal effects respectively. Each language with a higher superscript can be viewed as an extension of the language with a lower superscript. These action languages inherit the simple syntax of {\cal A} language but their semantics is developed in terms of transition systems where transition functions are defined based on prioritized logic programs. By illustrating various examples, we show that our approach eventually provides a powerful mechanism to handle various defeasibilities in temporal prediction and postdiction. We also investigate semantic properties of these three action languages and characterize classes of action domains that present more desirable solutions in reasoning about action within the underlying action languages.<|reference_end|>
arxiv
@article{zhang2002handling, title={Handling Defeasibilities in Action Domains}, author={Yan Zhang}, journal={arXiv preprint arXiv:cs/0206003}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206003}, primaryClass={cs.AI} }
zhang2002handling
arxiv-670576
cs/0206004
Mining All Non-Derivable Frequent Itemsets
<|reference_start|>Mining All Non-Derivable Frequent Itemsets: Recent studies on frequent itemset mining algorithms resulted in significant performance improvements. However, if the minimal support threshold is set too low, or the data is highly correlated, the number of frequent itemsets itself can be prohibitively large. To overcome this problem, recently several proposals have been made to construct a concise representation of the frequent itemsets, instead of mining all frequent itemsets. The main goal of this paper is to identify redundancies in the set of all frequent itemsets and to exploit these redundancies in order to reduce the result of a mining operation. We present deduction rules to derive tight bounds on the support of candidate itemsets. We show how the deduction rules allow for constructing a minimal representation for all frequent itemsets. We also present connections between our proposal and recent proposals for concise representations and we give the results of experiments on real-life datasets that show the effectiveness of the deduction rules. In fact, the experiments even show that in many cases, first mining the concise representation, and then creating the frequent itemsets from this representation outperforms existing frequent set mining algorithms.<|reference_end|>
arxiv
@article{calders2002mining, title={Mining All Non-Derivable Frequent Itemsets}, author={Toon Calders, Bart Goethals}, journal={arXiv preprint arXiv:cs/0206004}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206004}, primaryClass={cs.DB cs.AI} }
calders2002mining
arxiv-670577
cs/0206005
Characterization of Strongly Equivalent Logic Programs in Intermediate Logics
<|reference_start|>Characterization of Strongly Equivalent Logic Programs in Intermediate Logics: The non-classical, nonmonotonic inference relation associated with the answer set semantics for logic programs gives rise to a relationship of 'strong equivalence' between logical programs that can be verified in 3-valued Goedel logic, G3, the strongest non-classical intermediate propositional logic (Lifschitz, Pearce and Valverde, 2001). In this paper we will show that KC (the logic obtained by adding axiom ~A v ~~A to intuitionistic logic), is the weakest intermediate logic for which strongly equivalent logic programs, in a language allowing negations, are logically equivalent.<|reference_end|>
arxiv
@article{de jongh2002characterization, title={Characterization of Strongly Equivalent Logic Programs in Intermediate Logics}, author={Dick de Jongh, Lex Hendriks}, journal={arXiv preprint arXiv:cs/0206005}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206005}, primaryClass={cs.LO} }
de jongh2002characterization
arxiv-670578
cs/0206006
Robust Feature Selection by Mutual Information Distributions
<|reference_start|>Robust Feature Selection by Mutual Information Distributions: Mutual information is widely used in artificial intelligence, in a descriptive way, to measure the stochastic dependence of discrete random variables. In order to address questions such as the reliability of the empirical value, one must consider sample-to-population inferential approaches. This paper deals with the distribution of mutual information, as obtained in a Bayesian framework by a second-order Dirichlet prior distribution. The exact analytical expression for the mean and an analytical approximation of the variance are reported. Asymptotic approximations of the distribution are proposed. The results are applied to the problem of selecting features for incremental learning and classification of the naive Bayes classifier. A fast, newly defined method is shown to outperform the traditional approach based on empirical mutual information on a number of real data sets. Finally, a theoretical development is reported that allows one to efficiently extend the above methods to incomplete samples in an easy and effective way.<|reference_end|>
arxiv
@article{zaffalon2002robust, title={Robust Feature Selection by Mutual Information Distributions}, author={Marco Zaffalon, Marcus Hutter}, journal={Proc. 18th International Conference on Uncertainty in Artificial Intelligence (UAI 2002), pages 577-584}, year={2002}, number={IDSIA-08-02}, archivePrefix={arXiv}, eprint={cs/0206006}, primaryClass={cs.AI cs.LG} }
zaffalon2002robust
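For orientation, the snippet below computes the plug-in (empirical) mutual information between a discrete feature and the class from a contingency table, which is the descriptive quantity that the traditional filter mentioned in the abstract above ranks features by; the paper's contribution, treating this quantity as a random variable under a second-order Dirichlet prior, is not reproduced here. The counts are hypothetical.

```python
# Hedged sketch: plug-in mutual information from a feature-by-class count table.
import numpy as np

def empirical_mi(counts):
    """counts[i, j] = number of samples with feature value i and class j (result in nats)."""
    n = counts.sum()
    pij = counts / n
    pi = pij.sum(axis=1, keepdims=True)   # marginal over feature values
    pj = pij.sum(axis=0, keepdims=True)   # marginal over classes
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(pij > 0, pij * np.log(pij / (pi * pj)), 0.0)
    return float(terms.sum())

table = np.array([[30, 10],
                  [ 5, 55]])  # hypothetical counts
print(empirical_mi(table))
```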
arxiv-670579
cs/0206007
Using the Annotated Bibliography as a Resource for Indicative Summarization
<|reference_start|>Using the Annotated Bibliography as a Resource for Indicative Summarization: We report on a language resource consisting of 2000 annotated bibliography entries, which is being analyzed as part of our research on indicative document summarization. We show how annotated bibliographies cover certain aspects of summarization that have not been well-covered by other summary corpora, and motivate why they constitute an important form to study for information retrieval. We detail our methodology for collecting the corpus, and overview our document feature markup that we introduced to facilitate summary analysis. We present the characteristics of the corpus, methods of collection, and show its use in finding the distribution of types of information included in indicative summaries and their relative ordering within the summaries.<|reference_end|>
arxiv
@article{kan2002using, title={Using the Annotated Bibliography as a Resource for Indicative Summarization}, author={Min-Yen Kan, Judith L. Klavans, Kathleen R. McKeown}, journal={Proceedings of LREC 2002, Las Palmas, Spain. pp. 1746-1752}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206007}, primaryClass={cs.CL cs.DL} }
kan2002using
arxiv-670580
cs/0206008
Computer modeling of feelings and emotions: a quantitative neural network model of the feeling-of-knowing
<|reference_start|>Computer modeling of feelings and emotions: a quantitative neural network model of the feeling-of-knowing: The first quantitative neural network model of feelings and emotions is proposed, on the basis of available data on their neuroscientific and evolutionary-biological nature and of a neural network model of human memory that admits a distinct, time-dependent description of conscious and unconscious mental processes. As an example, the proposed model is applied to a quantitative description of the feeling of knowing.<|reference_end|>
arxiv
@article{gopych2002computer, title={Computer modeling of feelings and emotions: a quantitative neural network model of the feeling-of-knowing}, author={Petro M. Gopych}, journal={Kharkiv University Bulletin, Series Psychology, 2002, no.550(part 1), p.54-58}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206008}, primaryClass={cs.AI cs.NE q-bio.NC q-bio.QM} }
gopych2002computer
arxiv-670581
cs/0206009
Implementation and complexity of the watershed-from-markers algorithm computed as a minimal cost forest
<|reference_start|>Implementation and complexity of the watershed-from-markers algorithm computed as a minimal cost forest: The watershed algorithm is one of the classical algorithms in mathematical morphology. Lotufo et al. published a principle for computing the watershed by means of an Image Foresting Transform (IFT), which computes a shortest-path forest from given markers. The algorithm itself was described for the 2D case (images) without a detailed discussion of its computation and memory demands for real datasets. As the IFT cleverly solves the problem of plateaus and gives precise results when thin objects have to be segmented, it is natural to use this algorithm for 3D datasets, keeping in mind the need to minimize the higher memory consumption of the 3D case without losing the low asymptotic time complexity of O(m+C) (or the real computation speed). The main goal of this paper is an implementation of the IFT algorithm with a priority queue with buckets, and careful tuning of this implementation to reach the lowest memory consumption possible. The paper presents five possible modifications and implementation methods for the IFT algorithm. All presented implementations keep the time complexity of the standard priority queue with buckets, but the best one minimizes the costly memory allocation and needs only 19-45% of the memory for typical 3D medical imaging datasets. The memory saving was achieved by a simplification of the IFT algorithm that stores more elements in temporary structures; these elements are simpler and thus need less memory. The best presented modification allows segmentation of large 3D medical datasets (up to 512x512x680 voxels) with 12 or 16 bits per voxel on currently available PC-based workstations.<|reference_end|>
arxiv
@article{felkel2002implementation, title={Implementation and complexity of the watershed-from-markers algorithm computed as a minimal cost forest}, author={Petr Felkel, Mario Bruckschwaiger and Rainer Wegenkittl}, journal={Computer Graphics Forum, Vol 20, No 3, 2001, Conference Issue, pages C-26 - C-35, ISSN 0167-7075}, year={2002}, doi={10.1111/1467-8659.00495}, archivePrefix={arXiv}, eprint={cs/0206009}, primaryClass={cs.DS cs.CG} }
felkel2002implementation
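A small sketch of the "priority queue with buckets" ingredient mentioned in the watershed abstract above: when priorities are small integers (e.g., gradient values of an 8- or 16-bit image), one FIFO bucket per priority gives constant-time pushes and amortized constant-time pops, which is what keeps the IFT-based watershed linear. This is a generic bucket queue under our own naming, not the paper's tuned implementation.

```python
# Hedged sketch of a bucket priority queue for small integer priorities.
from collections import deque

class BucketQueue:
    def __init__(self, max_priority):
        self.buckets = [deque() for _ in range(max_priority + 1)]
        self.lowest = 0          # index of the lowest possibly non-empty bucket
        self.size = 0

    def push(self, priority, item):
        self.buckets[priority].append(item)
        self.lowest = min(self.lowest, priority)
        self.size += 1

    def pop(self):
        """Return an item of minimal priority (FIFO within a bucket)."""
        while not self.buckets[self.lowest]:
            self.lowest += 1
        self.size -= 1
        return self.buckets[self.lowest].popleft()

q = BucketQueue(255)
q.push(17, "voxel A"); q.push(3, "voxel B"); q.push(17, "voxel C")
print(q.pop(), q.pop(), q.pop())   # voxel B, voxel A, voxel C
```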
arxiv-670582
cs/0206010
Performance Comparison of Function Evaluation Methods
<|reference_start|>Performance Comparison of Function Evaluation Methods: We perform a comparison of the performance and efficiency of four different function evaluation methods: black-box functions, binary trees, $n$-ary trees and string parsing. The test consists in evaluating 8 different functions of two variables $x,y$ over 5000 floating point values of the pair $(x,y)$. The outcome of the test indicates that the $n$-ary tree representation of algebraic expressions is the fastest method, closely followed by black-box function method, then by binary trees and lastly by string parsing.<|reference_end|>
arxiv
@article{liberti2002performance, title={Performance Comparison of Function Evaluation Methods}, author={Leo Liberti}, journal={arXiv preprint arXiv:cs/0206010}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206010}, primaryClass={cs.SC cs.NA} }
liberti2002performance
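To make the comparison in the abstract above concrete, here is a toy n-ary expression tree of the kind the study found fastest: each node carries an operator and an arbitrary number of children, so repeated sums and products stay flat instead of being nested as binary operations. The node layout and operator set are our own illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: evaluating f(x, y) with an n-ary expression tree.
import math

class Node:
    def __init__(self, op, children=(), value=None, var=None):
        self.op, self.children = op, list(children)
        self.value, self.var = value, var

    def eval(self, env):
        if self.op == "const":
            return self.value
        if self.op == "var":
            return env[self.var]
        vals = [c.eval(env) for c in self.children]   # evaluate all children
        if self.op == "+":
            return sum(vals)                          # n-ary sum, not nested binaries
        if self.op == "*":
            return math.prod(vals)                    # n-ary product
        if self.op == "sin":
            return math.sin(vals[0])
        raise ValueError(f"unknown operator {self.op!r}")

# f(x, y) = sin(x) + x*y + 2, built once, then evaluated over many points:
f = Node("+", [Node("sin", [Node("var", var="x")]),
               Node("*", [Node("var", var="x"), Node("var", var="y")]),
               Node("const", value=2.0)])
print(f.eval({"x": 0.5, "y": 3.0}))
```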
arxiv-670583
cs/0206011
A Statistical Physics Perspective on Web Growth
<|reference_start|>A Statistical Physics Perspective on Web Growth: Approaches from statistical physics are applied to investigate the structure of network models whose growth rules mimic aspects of the evolution of the world-wide web. We first determine the degree distribution of a growing network in which nodes are introduced one at a time and attach to an earlier node of degree k with rate A_ksim k^gamma. Very different behaviors arise for gamma<1, gamma=1, and gamma>1. We also analyze the degree distribution of a heterogeneous network, the joint age-degree distribution, the correlation between degrees of neighboring nodes, as well as global network properties. An extension to directed networks is then presented. By tuning model parameters to reasonable values, we obtain distinct power-law forms for the in-degree and out-degree distributions with exponents that are in good agreement with current data for the web. Finally, a general growth process with independent introduction of nodes and links is investigated. This leads to independently growing sub-networks that may coalesce with other sub-networks. General results for both the size distribution of sub-networks and the degree distribution are obtained.<|reference_end|>
arxiv
@article{krapivsky2002a, title={A Statistical Physics Perspective on Web Growth}, author={P. L. Krapivsky and S. Redner}, journal={Computer Networks 39, 261-276 (2002)}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206011}, primaryClass={cs.NI cond-mat.stat-mech} }
krapivsky2002a
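The growth rule described in the abstract above is straightforward to simulate; the sketch below grows a network by attaching each new node to an existing node chosen with probability proportional to k**gamma, then prints the low end of the resulting degree histogram. The network size, random seed, and two-node starting configuration are arbitrary choices of ours.

```python
# Hedged sketch: growing network with attachment rate A_k ~ k**gamma.
import random
from collections import Counter

def grow_network(n_nodes, gamma=1.0, seed=0):
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}                      # start from a single edge 0--1
    for new in range(2, n_nodes):
        nodes = list(degree)
        weights = [degree[v] ** gamma for v in nodes]
        target = rng.choices(nodes, weights=weights, k=1)[0]
        degree[target] += 1                    # attach the new node to `target`
        degree[new] = 1
    return degree

degrees = grow_network(20000, gamma=1.0)
histogram = Counter(degrees.values())
print(sorted(histogram.items())[:8])           # heavy tail expected for gamma = 1
```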
arxiv-670584
cs/0206012
Fast Deterministic Consensus in a Noisy Environment
<|reference_start|>Fast Deterministic Consensus in a Noisy Environment: It is well known that the consensus problem cannot be solved deterministically in an asynchronous environment, but that randomized solutions are possible. We propose a new model, called noisy scheduling, in which an adversarial schedule is perturbed randomly, and show that in this model randomness in the environment can substitute for randomness in the algorithm. In particular, we show that a simplified, deterministic version of Chandra's wait-free shared-memory consensus algorithm (PODC, 1996, pp. 166-175) solves consensus in time at most logarithmic in the number of active processes. The proof of termination is based on showing that a race between independent delayed renewal processes produces a winner quickly. In addition, we show that the protocol finishes in constant time using quantum and priority-based scheduling on a uniprocessor, suggesting that it is robust against the choice of model over a wide range.<|reference_end|>
arxiv
@article{aspnes2002fast, title={Fast Deterministic Consensus in a Noisy Environment}, author={James Aspnes}, journal={arXiv preprint arXiv:cs/0206012}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206012}, primaryClass={cs.DS cs.DC} }
aspnes2002fast
arxiv-670585
cs/0206013
High-order fundamental and general solutions of convection-diffusion equation and their applications with boundary particle method
<|reference_start|>High-order fundamental and general solutions of convection-diffusion equation and their applications with boundary particle method: In this study, we present the high-order fundamental and general solutions of the convection-diffusion equation. To demonstrate their efficacy, we apply the high-order general solutions to the boundary particle method (BPM) for the solution of some inhomogeneous convection-diffusion problems, where the BPM is a new, truly boundary-only meshfree collocation method based on the multiple reciprocity principle. For the sake of completeness, the BPM is also briefly described here.<|reference_end|>
arxiv
@article{chen2002high-order, title={High-order fundamental and general solutions of convection-diffusion equation and their applications with boundary particle method}, author={W. Chen}, journal={arXiv preprint arXiv:cs/0206013}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206013}, primaryClass={cs.CE cs.CG} }
chen2002high-order
arxiv-670586
cs/0206014
A Method for Open-Vocabulary Speech-Driven Text Retrieval
<|reference_start|>A Method for Open-Vocabulary Speech-Driven Text Retrieval: While recent retrieval techniques do not limit the number of index terms, out-of-vocabulary (OOV) words are a crucial problem in speech recognition. Aiming at retrieving information with spoken queries, we fill the gap between speech recognition and text retrieval in terms of the vocabulary size. Given a spoken query, we generate a transcription and detect OOV words through speech recognition. We then map detected OOV words to terms indexed in a target collection to complete the transcription, and search the collection for documents relevant to the completed transcription. We show the effectiveness of our method by way of experiments.<|reference_end|>
arxiv
@article{fujii2002a, title={A Method for Open-Vocabulary Speech-Driven Text Retrieval}, author={Atsushi Fujii, Katunobu Itou and Tetsuya Ishikawa}, journal={Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP-2002), pp.188-195, July. 2002}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206014}, primaryClass={cs.CL} }
fujii2002a
arxiv-670587
cs/0206015
Japanese/English Cross-Language Information Retrieval: Exploration of Query Translation and Transliteration
<|reference_start|>Japanese/English Cross-Language Information Retrieval: Exploration of Query Translation and Transliteration: Cross-language information retrieval (CLIR), where queries and documents are in different languages, has of late become one of the major topics within the information retrieval community. This paper proposes a Japanese/English CLIR system, where we combine query translation and retrieval modules. We currently target the retrieval of technical documents, and therefore the performance of our system is highly dependent on the quality of the translation of technical terms. However, technical term translation is still problematic in that technical terms are often compound words, and thus new terms are progressively created by combining existing base words. In addition, Japanese often represents loanwords based on its special phonogram. Consequently, existing dictionaries find it difficult to achieve sufficient coverage. To counter the first problem, we produce a Japanese/English dictionary for base words, and translate compound words on a word-by-word basis. We also use a probabilistic method to resolve translation ambiguity. For the second problem, we use a transliteration method, which maps words unlisted in the base word dictionary to their phonetic equivalents in the target language. We evaluate our system using a test collection for CLIR, and show that both the compound word translation and transliteration methods improve the system performance.<|reference_end|>
arxiv
@article{fujii2002japanese/english, title={Japanese/English Cross-Language Information Retrieval: Exploration of Query Translation and Transliteration}, author={Atsushi Fujii and Tetsuya Ishikawa}, journal={Computers and the Humanities, Vol.35, No.4, pp.389-420, 2001}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206015}, primaryClass={cs.CL} }
fujii2002japanese/english
arxiv-670588
cs/0206016
Distance function wavelets - Part III: "Exotic" transforms and series
<|reference_start|>Distance function wavelets - Part III: "Exotic" transforms and series: Part III of the reports consists of various unconventional distance function wavelets (DFW). The dimension and the order of partial differential equation (PDE) are first used as a substitute for the scale parameter in the DFW transforms and series, especially with the space and time-space potential problems. It is noted that the recursive multiple reciprocity formulation is the DFW series. The Green second identity is used to avoid the singularity of the zero-order fundamental solution in creating the DFW series. The fundamental solutions of various composite PDEs are found to be very flexible and efficient in handling a broad range of problems. We also discuss the underlying connections between the crucial concepts of dimension, scale and the order of PDE through the analysis of dissipative acoustic wave propagation. The shape parameter of the potential problems is also employed as the "scale parameter" to create the non-orthogonal DFW. This paper also briefly discusses and conjectures the DFW correspondences of a variety of coordinate variable transforms and series. Of practical importance, the anisotropic and inhomogeneous DFWs are developed by using the geodesic distance variable. The DFW and the related basis functions are also used in making the kernel distance sigmoidal functions, which are potentially useful in artificial neural networks and machine learning. Even more than the preceding two reports, this study sacrifices mathematical rigor and in turn unfetters imagination. Most results are intuitively obtained without rigorous analysis. Follow-up research is still under way. The paper is intended to inspire more research into this promising area.<|reference_end|>
arxiv
@article{chen2002distance, title={Distance function wavelets - Part III: "Exotic" transforms and series}, author={W. Chen}, journal={arXiv preprint arXiv:cs/0206016}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206016}, primaryClass={cs.CE cs.CG} }
chen2002distance
arxiv-670589
cs/0206017
The Prioritized Inductive Logic Programs
<|reference_start|>The Prioritized Inductive Logic Programs: The limit behavior of inductive logic programs has not been explored, but when considering incremental or online inductive learning algorithms, which usually run continually, such behavior of the programs should be taken into account. An example is given to show that some inductive learning algorithm may not be correct in the long run if the limit behavior is not considered. An inductive logic program is convergent if, given an increasing sequence of example sets, the program produces a corresponding sequence of Horn logic programs which has a set-theoretic limit, and is limit-correct if the limit of the produced sequence of Horn logic programs is correct with respect to the limit of the sequence of the example sets. It is shown that the GOLEM system is not limit-correct. Finally, a limit-correct inductive logic system, called the prioritized GOLEM system, is proposed as a solution.<|reference_end|>
arxiv
@article{ma2002the, title={The Prioritized Inductive Logic Programs}, author={Shilong Ma, Yuefei Sui, Ke Xu}, journal={arXiv preprint arXiv:cs/0206017}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206017}, primaryClass={cs.AI cs.LG} }
ma2002the
arxiv-670590
cs/0206018
On Simultaneous Graph Embedding
<|reference_start|>On Simultaneous Graph Embedding: We consider the problem of simultaneous embedding of planar graphs. There are two variants of this problem, one in which the mapping between the vertices of the two graphs is given and another where the mapping is not given. In particular, we show that without mapping, any number of outerplanar graphs can be embedded simultaneously on an $O(n)\times O(n)$ grid, and an outerplanar and general planar graph can be embedded simultaneously on an $O(n^2)\times O(n^3)$ grid. If the mapping is given, we show how to embed two paths on an $n \times n$ grid, a caterpillar and a path on an $n \times 2n$ grid, or two caterpillar graphs on an $O(n^2)\times O(n^3)$ grid. We also show that 5 paths, or 3 caterpillars, or two general planar graphs cannot be simultaneously embedded given the mapping.<|reference_end|>
arxiv
@article{duncan2002on, title={On Simultaneous Graph Embedding}, author={C. A. Duncan, A. Efrat, C. Erten, S. Kobourov, J.S.B. Mitchell}, journal={arXiv preprint arXiv:cs/0206018}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206018}, primaryClass={cs.CG} }
duncan2002on
arxiv-670591
cs/0206019
Simultaneous Embedding of a Planar Graph and Its Dual on the Grid
<|reference_start|>Simultaneous Embedding of a Planar Graph and Its Dual on the Grid: Traditional representations of graphs and their duals suggest the requirement that the dual vertices be placed inside their corresponding primal faces, and the edges of the dual graph cross only their corresponding primal edges. We consider the problem of simultaneously embedding a planar graph and its dual into a small integer grid such that the edges are drawn as straight-line segments and the only crossings are between primal-dual pairs of edges. We provide a linear-time algorithm that simultaneously embeds a 3-connected planar graph and its dual on a (2n-2) by (2n-2) integer grid, where n is the total number of vertices in the graph and its dual. Furthermore our embedding algorithm satisfies the two natural requirements mentioned above.<|reference_end|>
arxiv
@article{erten2002simultaneous, title={Simultaneous Embedding of a Planar Graph and Its Dual on the Grid}, author={C. Erten and S. G. Kobourov}, journal={arXiv preprint arXiv:cs/0206019}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206019}, primaryClass={cs.CG} }
erten2002simultaneous
arxiv-670592
cs/0206020
Multidimensional Network Monitoring for Intrusion Detection
<|reference_start|>Multidimensional Network Monitoring for Intrusion Detection: An approach for real-time network monitoring in terms of numerical time-dependent functions of protocol parameters is suggested. Applying complex systems theory for information flow analysis of networks, the information traffic is described as a trajectory in multi-dimensional parameter-time space with about 10-12 dimensions. The network traffic description is synthesized by applying methods of theoretical physics and complex systems theory, to provide a robust approach for network monitoring that detects known intrusions, and supports developing real systems for detection of unknown intrusions. The methods of data analysis and pattern recognition presented are the basis of a technology study for an automatic intrusion detection system that detects attacks in the reconnaissance stage.<|reference_end|>
arxiv
@article{gudkov2002multidimensional, title={Multidimensional Network Monitoring for Intrusion Detection}, author={Vladimir Gudkov and Joseph E. Johnson}, journal={arXiv preprint arXiv:cs/0206020}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206020}, primaryClass={cs.CR} }
gudkov2002multidimensional
arxiv-670593
cs/0206021
The analysis of the IFPUG method sensitivity
<|reference_start|>The analysis of the IFPUG method sensitivity: J. Albrecht's Function Point Analysis (FPA) is a method to determine the functional size of software products. An organization called the International Function Point Users Group (IFPUG) considers FPA a standard in software functional size measurement. Albrecht's method was followed by the IFPUG method, which includes some modifications in order to improve it. A limitation of the method is that FPA is not sensitive enough to differentiate the functional size of small enhancements. This affects productivity analysis, where the functional size of the software product is required. To provide more power to the functional size measurement, A. Abran, M. Maya and H. Nguyeckim have proposed some modifications to improve it. The IFPUG v 4.1 method extended with these modifications is named IFPUG v 4.1 extended. In this work we set the conditions for delimiting granular from non-granular functions, and we calculate the static calibration and sensitivity graphs for the measurements of a set of projects with a high percentage of granular functions, all of them measured with both the IFPUG v 4.1 method and IFPUG v 4.1 extended. Finally, we introduce a statistical analysis in order to determine whether significant differences exist between the two methods.<|reference_end|>
arxiv
@article{monge2002the, title={The analysis of the IFPUG method sensitivity}, author={R. Asensio Monge (U. of Oviedo), F. Sanchis Marco (U.P. of Madrid), F. Torre Cervigon (U. of Oviedo)}, journal={arXiv preprint arXiv:cs/0206021}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206021}, primaryClass={cs.SE} }
monge2002the
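Since the sensitivity question above hinges on how a function-point count responds to small enhancements, a toy unadjusted count may help fix ideas. This is only a sketch under assumed inputs: the average-complexity weights below are the commonly quoted IFPUG values, the element counts are invented, and nothing here reproduces the calibration or statistical analysis the abstract describes.

```python
# Commonly quoted IFPUG average-complexity weights (assumed here for brevity;
# a real count distinguishes low/average/high complexity per element).
AVERAGE_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_function_points(counts, weights=AVERAGE_WEIGHTS):
    """Unadjusted size = sum over element types of (count * weight)."""
    return sum(counts.get(kind, 0) * weight for kind, weight in weights.items())

if __name__ == "__main__":
    baseline = {"EI": 3, "EO": 2, "EQ": 1, "ILF": 2, "EIF": 0}
    enhanced = {"EI": 3, "EO": 2, "EQ": 2, "ILF": 2, "EIF": 0}   # one extra small query
    print(unadjusted_function_points(baseline))   # 46
    print(unadjusted_function_points(enhanced))   # 50: small changes move the count only in coarse steps
```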
arxiv-670594
cs/0206022
The Fastest and Shortest Algorithm for All Well-Defined Problems
<|reference_start|>The Fastest and Shortest Algorithm for All Well-Defined Problems: An algorithm $M$ is described that solves any well-defined problem $p$ as quickly as the fastest algorithm computing a solution to $p$, save for a factor of 5 and low-order additive terms. $M$ optimally distributes resources between the execution of provably correct $p$-solving programs and an enumeration of all proofs, including relevant proofs of program correctness and of time bounds on program runtimes. $M$ avoids Blum's speed-up theorem by ignoring programs without correctness proof. $M$ has broader applicability and can be faster than Levin's universal search, the fastest method for inverting functions save for a large multiplicative constant. An extension of Kolmogorov complexity and two novel natural measures of function complexity are used to show that the most efficient program computing some function $f$ is also among the shortest programs provably computing $f$.<|reference_end|>
arxiv
@article{hutter2002the, title={The Fastest and Shortest Algorithm for All Well-Defined Problems}, author={Marcus Hutter}, journal={International Journal of Foundations of Computer Science, Vol.13, No.3, June 2002, 431-443}, year={2002}, number={IDSIA-16-00}, archivePrefix={arXiv}, eprint={cs/0206022}, primaryClass={cs.CC cs.LO} }
hutter2002the
arxiv-670595
cs/0206023
Relational Association Rules: getting WARMeR
<|reference_start|>Relational Association Rules: getting WARMeR: In recent years, the problem of association rule mining in transactional data has been well studied. We propose to extend the discovery of classical association rules to the discovery of association rules of conjunctive queries in arbitrary relational data, inspired by the WARMR algorithm, developed by Dehaspe and Toivonen, that discovers association rules over a limited set of conjunctive queries. Conjunctive query evaluation in relational databases is well understood, but still poses some great challenges when approached from a discovery viewpoint in which patterns are generated and evaluated with respect to some well defined search space and pruning operators.<|reference_end|>
arxiv
@article{goethals2002relational, title={Relational Association Rules: getting WARMeR}, author={Bart Goethals and Jan Van den Bussche}, journal={arXiv preprint arXiv:cs/0206023}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206023}, primaryClass={cs.DB cs.AI} }
goethals2002relational
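To make the generate-and-test idea concrete, here is a small level-wise (Apriori-style) frequent-itemset miner over propositional transactions. It is only an illustrative sketch: WARMR and the extension discussed above operate on conjunctive queries over relational data rather than flat itemsets, and the function names and example transactions here are invented for the illustration.

```python
def frequent_itemsets(transactions, min_support):
    """Toy level-wise (Apriori-style) miner over flat transactions.  WARMR
    lifts this generate-and-test loop to conjunctive queries over relational
    data; this sketch covers only the propositional special case."""
    transactions = [frozenset(t) for t in transactions]

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)

    items = sorted({item for t in transactions for item in t})
    level = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]
    frequent, size = list(level), 2
    while level:
        # generate candidates of the next size by joining frequent sets, then test support
        candidates = {a | b for a in level for b in level if len(a | b) == size}
        level = [c for c in candidates if support(c) >= min_support]
        frequent.extend(level)
        size += 1
    return frequent

if __name__ == "__main__":
    db = [{"bread", "milk"}, {"bread", "beer"}, {"bread", "milk", "beer"}, {"milk", "beer"}]
    for itemset in frequent_itemsets(db, min_support=2):
        print(sorted(itemset))
```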
arxiv-670596
cs/0206024
Sierpinski Gaskets for Logic Functions Representation
<|reference_start|>Sierpinski Gaskets for Logic Functions Representation: This paper introduces a new approach to represent logic functions in the form of Sierpinski gaskets. The structure of the gasket allows one to manipulate the corresponding logic expression using the recursive essence of fractals. Thus, the Sierpinski gasket's pattern has myriad useful properties which can enhance the practical features of other graphic representations like decision diagrams. We cover possible applications of Sierpinski gaskets in logic design and justify our assumptions in logic function minimization (both Boolean and multiple-valued cases). Experimental results on benchmarks illustrating the advantages of the novel structure are considered as well.<|reference_end|>
arxiv
@article{popel2002sierpinski, title={Sierpinski Gaskets for Logic Functions Representation}, author={Denis V. Popel and Anita Dani}, journal={ISMVL 2002 Proceedings}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206024}, primaryClass={cs.LO cs.DM} }
popel2002sierpinski
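As a purely illustrative aside, the recursive pattern the abstract builds on can be generated from Pascal's triangle modulo 2, whose XOR recurrence is the same one that appears in AND/XOR (Reed-Muller) expansions of Boolean functions. The sketch below only draws that pattern; it is an assumption-laden illustration, not the paper's representation or minimization procedure.

```python
def pascal_mod2_rows(n_rows):
    """Rows of Pascal's triangle modulo 2.  Each entry is the XOR of the two
    entries above it, and the nonzero entries trace out a Sierpinski gasket --
    the same recurrence that underlies AND/XOR (Reed-Muller) expansions."""
    row = [1]
    for _ in range(n_rows):
        yield row
        row = [1] + [row[i] ^ row[i + 1] for i in range(len(row) - 1)] + [1]

if __name__ == "__main__":
    for row in pascal_mod2_rows(16):
        print("".join("#" if bit else "." for bit in row).center(32))
```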
arxiv-670597
cs/0206025
The Lattice of Fuzzy Intervals and Sufficient Conditions for its Distributivity
<|reference_start|>The Lattice of Fuzzy Intervals and Sufficient Conditions for its Distributivity: Given a reference lattice, we define fuzzy intervals to be the fuzzy sets such that their p-cuts are crisp closed intervals. We show that: given a complete reference lattice, the collection of its fuzzy intervals is a complete lattice. Furthermore we show that: if the reference lattice is completely distributive then the lattice of its fuzzy intervals is distributive.<|reference_end|>
arxiv
@article{kehagias2002the, title={The Lattice of Fuzzy Intervals and Sufficient Conditions for its Distributivity}, author={Ath. Kehagias}, journal={arXiv preprint arXiv:cs/0206025}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206025}, primaryClass={cs.OH} }
kehagias2002the
arxiv-670598
cs/0206026
Interleaved semantic interpretation in environment-based parsing
<|reference_start|>Interleaved semantic interpretation in environment-based parsing: This paper extends a polynomial-time parsing algorithm that resolves structural ambiguity in input to a speech-based user interface by calculating and comparing the denotations of rival constituents, given some model of the interfaced application environment (Schuler 2001). The algorithm is extended to incorporate a full set of logical operators, including quantifiers and conjunctions, into this calculation without increasing the complexity of the overall algorithm beyond polynomial time, both in terms of the length of the input and the number of entities in the environment model.<|reference_end|>
arxiv
@article{schuler2002interleaved, title={Interleaved semantic interpretation in environment-based parsing}, author={William Schuler}, journal={Proceedings of the 19th International Conference on Computational Linguistics (COLING 2002)}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206026}, primaryClass={cs.CL cs.HC} }
schuler2002interleaved
arxiv-670599
cs/0206027
Behaviour-based Knowledge Systems: An Epigenetic Path from Behaviour to Knowledge
<|reference_start|>Behaviour-based Knowledge Systems: An Epigenetic Path from Behaviour to Knowledge: In this paper we present the theoretical background underlying our current research. This consists in the development of behaviour-based knowledge systems, for closing the gaps between behaviour-based and knowledge-based systems, and also between the understandings of the phenomena they model. We set out the requirements and stages for developing behaviour-based knowledge systems and discuss their limits. We believe that these are necessary conditions for the development of higher order cognitive capacities, in artificial and natural cognitive systems.<|reference_end|>
arxiv
@article{gershenson2002behaviour-based, title={Behaviour-based Knowledge Systems: An Epigenetic Path from Behaviour to Knowledge}, author={Carlos Gershenson}, journal={arXiv preprint arXiv:cs/0206027}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206027}, primaryClass={cs.AI cs.AR cs.NE} }
gershenson2002behaviour-based
arxiv-670600
cs/0206028
Knowledge management for enterprises (Wissensmanagement fuer Unternehmen)
<|reference_start|>Knowledge management for enterprises (Wissensmanagement fuer Unternehmen): Although knowledge is one of the most valuable resources of enterprises and an important production and competition factor, this intellectual potential is often used (or maintained) only inadequately by enterprises. Therefore, in a globalised and growing market, the optimal usage of existing knowledge represents a key factor for the enterprises of the future. Here, knowledge management systems should play a facilitating role. Because geographically widely distributed establishments, however, call for a distributed system, this paper aims to uncover the spectrum of issues connected with it and to present a possible basic approach based on ontologies and modern, platform-independent technologies. Last but not least, this approach, as well as general questions of knowledge management, is discussed.<|reference_end|>
arxiv
@article{eiden2002knowledge, title={Knowledge management for enterprises (Wissensmanagement fuer Unternehmen)}, author={Wolfgang Eiden}, journal={arXiv preprint arXiv:cs/0206028}, year={2002}, archivePrefix={arXiv}, eprint={cs/0206028}, primaryClass={cs.IR cs.AI} }
eiden2002knowledge