Dataset schema:
  corpus_id     string (length 7-12)
  paper_id      string (length 9-16)
  title         string (length 1-261)
  abstract      string (length 70-4.02k)
  source        string (1 distinct value)
  bibtex        string (length 208-20.9k)
  citation_key  string (length 6-100)
arxiv-672001
cs/0407011
Distance distribution of binary codes and the error probability of decoding
<|reference_start|>Distance distribution of binary codes and the error probability of decoding: We address the problem of bounding below the probability of error under maximum likelihood decoding of a binary code with a known distance distribution used on a binary symmetric channel. An improved upper bound is given for the maximum attainable exponent of this probability (the reliability function of the channel). In particular, we prove that the ``random coding exponent'' is the true value of the channel reliability for code rate $R$ in some interval immediately below the critical rate of the channel. An analogous result is obtained for the Gaussian channel.<|reference_end|>
arxiv
@article{barg2004distance, title={Distance distribution of binary codes and the error probability of decoding}, author={Alexander Barg and Andrew McGregor}, journal={IEEE Transactions on Information Theory vol. 51, no. 12, pp. 4237-4246 (2005).}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407011}, primaryClass={cs.IT math.IT} }
barg2004distance
arxiv-672002
cs/0407012
A Taxonomy and Survey of Grid Resource Planning and Reservation Systems for Grid Enabled Analysis Environment
<|reference_start|>A Taxonomy and Survey of Grid Resource Planning and Reservation Systems for Grid Enabled Analysis Environment: The concept of coupling geographically distributed resources for solving large scale problems is becoming increasingly popular, forming what is called grid computing. Management of resources in the Grid environment becomes complex as the resources are geographically distributed, heterogeneous in nature, and owned by different individuals and organizations, each having their own resource management policies and different access and cost models. Many projects have designed and implemented resource management systems with a variety of architectures and services. In this paper we present the general requirements that a resource management system should satisfy. We also define a taxonomy, based on which we survey the resource management systems of existing Grid projects to identify the key areas where these systems lack the desired functionality.<|reference_end|>
arxiv
@article{ali2004a, title={A Taxonomy and Survey of Grid Resource Planning and Reservation Systems for Grid Enabled Analysis Environment}, author={Arshad Ali and Ashiq Anjum and Atif Mehmood and Richard McClatchey and Ian Willers and Julian Bunn and Harvey Newman and Michael Thomas and Conrad Steenberg}, journal={arXiv preprint arXiv:cs/0407012}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407012}, primaryClass={cs.DC} }
ali2004a
arxiv-672003
cs/0407013
Distributed Analysis and Load Balancing System for Grid Enabled Analysis on Hand-held devices using Multi-Agents Systems
<|reference_start|>Distributed Analysis and Load Balancing System for Grid Enabled Analysis on Hand-held devices using Multi-Agents Systems: Handheld devices, while growing rapidly, are inherently constrained and lack the capability of executing resource-hungry applications. This paper presents the design and implementation of a distributed analysis and load-balancing system for hand-held devices using a multi-agent system. This system enables low-resource mobile handheld devices to act as potential clients for Grid enabled applications and analysis environments. We propose a system in which mobile agents transport, schedule, execute, and return results for heavy computational jobs submitted by handheld devices. Moreover, in this way, our system provides a high-throughput computing environment for hand-held devices.<|reference_end|>
arxiv
@article{ahmad2004distributed, title={Distributed Analysis and Load Balancing System for Grid Enabled Analysis on Hand-held devices using Multi-Agents Systems}, author={Naveed Ahmad and Arshad Ali and Ashiq Anjum and Tahir Azim and Julian Bunn and Ali Hassan and Ahsan Ikram and Frank van Lingen and Richard McClatchey and Harvey Newman and Conrad Steenberg and Michael Thomas and Ian Willers}, journal={arXiv preprint arXiv:cs/0407013}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407013}, primaryClass={cs.DC} }
ahmad2004distributed
arxiv-672004
cs/0407014
A Grid-enabled Interface to Condor for Interactive Analysis on Handheld and Resource-limited Devices
<|reference_start|>A Grid-enabled Interface to Condor for Interactive Analysis on Handheld and Resource-limited Devices: This paper was withdrawn by the authors.<|reference_end|>
arxiv
@article{ali2004grid, title={A Grid-enabled Interface to Condor for Interactive Analysis on Handheld and Resource-limited Devices}, author={Arshad Ali and Ashiq Anjum and Tahir Azim and Julian Bunn and Ahsan Ikram and Richard McClatchey and Harvey Newman and Conrad Steenberg and Michael Thomas and Ian Willers}, journal={arXiv preprint arXiv:cs/0407014}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407014}, primaryClass={cs.DC} }
ali2004grid
arxiv-672005
cs/0407015
Resource Bounded Immunity and Simplicity
<|reference_start|>Resource Bounded Immunity and Simplicity: Revisiting the thirty-year-old notions of resource-bounded immunity and simplicity, we investigate the structural characteristics of various immunity notions: strong immunity, almost immunity, and hyperimmunity, as well as their corresponding simplicity notions. We also study limited immunity and simplicity, called k-immunity and feasible k-immunity, and their simplicity notions. Finally, we propose the k-immune hypothesis as a working hypothesis that guarantees the existence of simple sets in NP.<|reference_end|>
arxiv
@article{yamakami2004resource, title={Resource Bounded Immunity and Simplicity}, author={Tomoyuki Yamakami and Toshio Suzuki}, journal={(journal version) Theoretical Computer Science, Vol.347, pp.90-129, 2005}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407015}, primaryClass={cs.CC cs.DM} }
yamakami2004resource
arxiv-672006
cs/0407016
Learning for Adaptive Real-time Search
<|reference_start|>Learning for Adaptive Real-time Search: Real-time heuristic search is a popular model of acting and learning in intelligent autonomous agents. Learning real-time search agents improve their performance over time by acquiring and refining a value function guiding the application of their actions. As computing the perfect value function is typically intractable, a heuristic approximation is acquired instead. Most studies of learning in real-time search (and reinforcement learning) assume that a simple value-function-greedy policy is used to select actions. This is in contrast to practice, where high-performance is usually attained by interleaving planning and acting via a lookahead search of a non-trivial depth. In this paper, we take a step toward bridging this gap and propose a novel algorithm that (i) learns a heuristic function to be used specifically with a lookahead-based policy, (ii) selects the lookahead depth adaptively in each state, (iii) gives the user control over the trade-off between exploration and exploitation. We extensively evaluate the algorithm in the sliding tile puzzle testbed comparing it to the classical LRTA* and the more recent weighted LRTA*, bounded LRTA*, and FALCONS. Improvements of 5 to 30 folds in convergence speed are observed.<|reference_end|>
arxiv
@article{bulitko2004learning, title={Learning for Adaptive Real-time Search}, author={Vadim Bulitko}, journal={arXiv preprint arXiv:cs/0407016}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407016}, primaryClass={cs.AI cs.LG} }
bulitko2004learning
arxiv-672007
cs/0407017
A Low Cost Distributed Computing Approach to Pulsar Searches at a Small College
<|reference_start|>A Low Cost Distributed Computing Approach to Pulsar Searches at a Small College: We describe a distributed processing cluster of inexpensive Linux machines developed jointly by the Astronomy and Computer Science departments at Haverford College which has been successfully used to search a large volume of data from a recent radio pulsar survey. Analysis of radio pulsar surveys requires significant computational resources to handle the demanding data storage and processing needs. One goal of this project was to explore issues encountered when processing a large amount of pulsar survey data with limited computational resources. This cluster, which was developed and activated in only a few weeks by supervised undergraduate summer research students, used existing decommissioned computers, the campus network, and a script-based, client-oriented, self-scheduled data distribution approach to process the data. This setup provided simplicity, efficiency, and "on-the-fly" scalability at low cost. The entire 570 GB data set from the pulsar survey was processed at Haverford over the course of a ten-week summer period using this cluster. We conclude that this cluster can serve as a useful computational model in cases where data processing must be carried out on a limited budget. We have also constructed a DVD archive of the raw survey data in order to investigate the feasibility of using DVD as an inexpensive and easily accessible raw data storage format for pulsar surveys. DVD-based storage has not been widely explored in the pulsar community, but it has several advantages. The DVD archive we have constructed is reliable, portable, inexpensive, and can be easily read by any standard modern machine.<|reference_end|>
arxiv
@article{cantino2004a, title={A Low Cost Distributed Computing Approach to Pulsar Searches at a Small College}, author={Andrew Cantino and Fronefield Crawford and Saurav Dhital and John P. Dougherty and Reid Sherman (Haverford College)}, journal={arXiv preprint arXiv:cs/0407017}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407017}, primaryClass={cs.DC} }
cantino2004a
arxiv-672008
cs/0407018
An algorithm for two-dimensional mesh generation based on the pinwheel tiling
<|reference_start|>An algorithm for two-dimensional mesh generation based on the pinwheel tiling: We propose a new two-dimensional meshing algorithm called PINW, able to generate meshes that accurately approximate the distance between any two domain points by paths composed only of cell edges. This technique is based on an extension of pinwheel tilings proposed by Radin and Conway. We prove that the algorithm produces triangles of bounded aspect ratio. This kind of mesh would be useful in cohesive interface finite element modeling when the crack propagation path is an outcome of a simulation process.<|reference_end|>
arxiv
@article{ganguly2004an, title={An algorithm for two-dimensional mesh generation based on the pinwheel tiling}, author={Pritam Ganguly and Stephen A. Vavasis and Katerina D. Papoulia}, journal={arXiv preprint arXiv:cs/0407018}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407018}, primaryClass={cs.CG cs.NA} }
ganguly2004an
arxiv-672009
cs/0407019
Stochastic fuzzy controller
<|reference_start|>Stochastic fuzzy controller: A standard approach to building a fuzzy controller based on stochastic logic uses binary random signals with an average (expected value of a random variable) in the range [0, 1]. A different approach is presented, founded on a representation of the membership functions with the probability density functions.<|reference_end|>
arxiv
@article{jurkovic2004stochastic, title={Stochastic fuzzy controller}, author={Franc Jurkovic}, journal={arXiv preprint arXiv:cs/0407019}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407019}, primaryClass={cs.AR} }
jurkovic2004stochastic
arxiv-672010
cs/0407020
Minimum Enclosing Polytope in High Dimensions
<|reference_start|>Minimum Enclosing Polytope in High Dimensions: We study the problem of covering a given set of $n$ points in a high, $d$-dimensional space by the minimum enclosing polytope of a given arbitrary shape. We present algorithms that work for a large family of shapes, provided either only translations and no rotations are allowed, or only rotation about a fixed point is allowed; that is, one is allowed to only scale and translate a given shape, or scale and rotate the shape around a fixed point. Our algorithms start with a polytope guessed to be of optimal size and iteratively move it based on a greedy principle: simply move the current polytope directly towards any outside point till it touches the surface. For computing the minimum enclosing ball, this gives a simple greedy algorithm with running time $O(nd/\eps)$ producing a ball of radius $1+\eps$ times the optimal. This simple principle generalizes to arbitrary convex shapes when only translations are allowed, requiring at most $O(1/\eps^2)$ iterations. Our algorithm implies that {\em core-sets} of size $O(1/\eps^2)$ exist not only for the minimum enclosing ball but also for any convex shape with a fixed orientation. A {\em core-set} is a small subset of $poly(1/\eps)$ points whose minimum enclosing polytope is almost as large as that of the original points. Although we are unable to combine our techniques for translations and rotations for general shapes, for the min-cylinder problem, we give an algorithm similar to the one in \cite{HV03}, but with an improved running time of $2^{O(\frac{1}{\eps^2}\log \frac{1}{\eps})} nd$.<|reference_end|>
arxiv
@article{panigrahy2004minimum, title={Minimum Enclosing Polytope in High Dimensions}, author={Rina Panigrahy}, journal={arXiv preprint arXiv:cs/0407020}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407020}, primaryClass={cs.CG} }
panigrahy2004minimum
arxiv-672011
cs/0407021
Multi-agent coordination using nearest neighbor rules: revisiting the Vicsek model
<|reference_start|>Multi-agent coordination using nearest neighbor rules: revisiting the Vicsek model: Recently, Jadbabaie, Lin, and Morse (IEEE TAC, 48(6)2003:988-1001) offered a mathematical analysis of the discrete time model of groups of mobile autonomous agents raised by Vicsek et al. in 1995. In their paper, Jadbabaie et al. showed that all agents shall move in the same heading, provided that these agents are periodically linked together. This paper sharpens this result by showing that coordination will be reached under a very weak condition requiring only that all agents are eventually linked together. This condition is also strictly weaker than the one Jadbabaie et al. desired.<|reference_end|>
arxiv
@article{li2004multi-agent, title={Multi-agent coordination using nearest neighbor rules: revisiting the Vicsek model}, author={Sanjiang Li and Huaiqing Wang}, journal={arXiv preprint arXiv:cs/0407021}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407021}, primaryClass={cs.MA cs.AI} }
li2004multi-agent
arxiv-672012
cs/0407022
Solving Elliptic Finite Element Systems in Near-Linear Time with Support Preconditioners
<|reference_start|>Solving Elliptic Finite Element Systems in Near-Linear Time with Support Preconditioners: We consider linear systems arising from the use of the finite element method for solving scalar linear elliptic problems. Our main result is that these linear systems, which are symmetric and positive semidefinite, are well approximated by symmetric diagonally dominant matrices. Our framework for defining matrix approximation is support theory. Significant graph theoretic work has already been developed in the support framework for preconditioners in the diagonally dominant case, and in particular it is known that such systems can be solved with iterative methods in nearly linear time. Thus, our approximation result implies that these graph theoretic techniques can also solve a class of finite element problems in nearly linear time. We show that the support number bounds, which control the number of iterations in the preconditioned iterative solver, depend on mesh quality measures but not on the problem size or shape of the domain.<|reference_end|>
arxiv
@article{boman2004solving, title={Solving Elliptic Finite Element Systems in Near-Linear Time with Support Preconditioners}, author={Erik Boman and Bruce Hendrickson and Stephen Vavasis}, journal={arXiv preprint arXiv:cs/0407022}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407022}, primaryClass={cs.NA} }
boman2004solving
arxiv-672013
cs/0407023
Efficient Hashing with Lookups in two Memory Accesses
<|reference_start|>Efficient Hashing with Lookups in two Memory Accesses: The study of hashing is closely related to the analysis of balls and bins. It is well-known that instead of using a single hash function, if we randomly hash a ball into two bins and place it in the smaller of the two, then this dramatically lowers the maximum load on bins. This leads to the concept of two-way hashing, where the largest bucket contains $O(\log\log n)$ balls with high probability. The hash lookup will now search in both the buckets an item hashes to. Since an item may be placed in one of two buckets, we could potentially move an item after it has been initially placed to reduce the maximum load. We show that by performing moves during inserts, a maximum load of 2 can be maintained on-line, with high probability, while supporting hash update operations. In fact, with $n$ buckets, even if the space for two items is pre-allocated per bucket, as may be desirable in hardware implementations, more than $n$ items can be stored, giving a high memory utilization. We also analyze the trade-off between the number of moves performed during inserts and the maximum load on a bucket. By performing at most $h$ moves, we can maintain a maximum load of $O(\frac{\log \log n}{h \log(\log\log n/h)})$. So, even by performing one move, we achieve a better bound than by performing no moves at all.<|reference_end|>
arxiv
@article{panigrahy2004efficient, title={Efficient Hashing with Lookups in two Memory Accesses}, author={Rina Panigrahy}, journal={arXiv preprint arXiv:cs/0407023}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407023}, primaryClass={cs.DS} }
panigrahy2004efficient
arxiv-672014
cs/0407024
An agent-based intelligent environmental monitoring system
<|reference_start|>An agent-based intelligent environmental monitoring system: Fairly rapid environmental changes call for continuous surveillance and on-line decision making. There are two main areas where IT technologies can be valuable. In this paper we present a multi-agent system for monitoring and assessing air-quality attributes, which uses data coming from a meteorological station. A community of software agents is assigned to monitor and validate measurements coming from several sensors, to assess air-quality, and, finally, to fire alarms to appropriate recipients, when needed. Data mining techniques have been used for adding data-driven, customized intelligence into agents. The architecture of the developed system, its domain ontology, and typical agent interactions are presented. Finally, the deployment of a real-world test case is demonstrated.<|reference_end|>
arxiv
@article{athanasiadis2004an, title={An agent-based intelligent environmental monitoring system}, author={Ioannis N Athanasiadis and Pericles A Mitkas}, journal={Management of Environmental Quality, 15(3):238-249, May 2004}, year={2004}, doi={10.1108/14777830410531216}, archivePrefix={arXiv}, eprint={cs/0407024}, primaryClass={cs.MA cs.CE} }
athanasiadis2004an
arxiv-672015
cs/0407025
An agent framework for dynamic agent retraining: Agent academy
<|reference_start|>An agent framework for dynamic agent retraining: Agent academy: Agent Academy (AA) aims to develop a multi-agent society that can train new agents for specific or general tasks, while constantly retraining existing agents in a recursive mode. The system is based on collecting information both from the environment and from the behaviors of the acting agents and their related successes/failures to generate a body of data, stored in the Agent Use Repository, which is mined by the Data Miner module in order to generate useful knowledge about the application domain. Knowledge extracted by the Data Miner is used by the Agent Training Module to train new agents or to enhance the behavior of agents already running. In this paper the Agent Academy framework is introduced, and its overall architecture and functionality are presented. Training issues as well as agent ontologies are discussed. Finally, a scenario, which aims to provide environmental alerts to both individuals and public authorities, is described as an AA-based use case.<|reference_end|>
arxiv
@article{mitkas2004an, title={An agent framework for dynamic agent retraining: Agent academy}, author={P. Mitkas and A. Symeonidis and D. Kechagias and I. N. Athanasiadis and G. Laleci and G. Kurt and Y. Kabak and A. Acar and A. Dogac}, journal={In B. Stanford-Smith, E. Chiozza, and M. Edin, editors, Challenges and Achievements in e-business and e-work, pages 757-764, Prague, Czech Republic, October 2002. IOS Press}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407025}, primaryClass={cs.MA} }
mitkas2004an
arxiv-672016
cs/0407026
Summarizing Encyclopedic Term Descriptions on the Web
<|reference_start|>Summarizing Encyclopedic Term Descriptions on the Web: We are developing an automatic method to compile an encyclopedic corpus from the Web. In our previous work, paragraph-style descriptions for a term are extracted from Web pages and organized based on domains. However, these descriptions are independent and do not comprise a condensed text as in hand-crafted encyclopedias. To resolve this problem, we propose a summarization method, which produces a single text from multiple descriptions. The resultant summary concisely describes a term from different viewpoints. We also show the effectiveness of our method by means of experiments.<|reference_end|>
arxiv
@article{fujii2004summarizing, title={Summarizing Encyclopedic Term Descriptions on the Web}, author={Atsushi Fujii and Tetsuya Ishikawa}, journal={Proceedings of the 20th International Conference on Computational Linguistics (COLING 2004), pp.645-651, Aug. 2004}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407026}, primaryClass={cs.CL} }
fujii2004summarizing
arxiv-672017
cs/0407027
Unsupervised Topic Adaptation for Lecture Speech Retrieval
<|reference_start|>Unsupervised Topic Adaptation for Lecture Speech Retrieval: We are developing a cross-media information retrieval system, in which users can view specific segments of lecture videos by submitting text queries. To produce a text index, the audio track is extracted from a lecture video and a transcription is generated by automatic speech recognition. In this paper, to improve the quality of our retrieval system, we extensively investigate the effects of adapting acoustic and language models on speech recognition. We perform an MLLR-based method to adapt an acoustic model. To obtain a corpus for language model adaptation, we use the textbook for a target lecture to search a Web collection for the pages associated with the lecture topic. We show the effectiveness of our method by means of experiments.<|reference_end|>
arxiv
@article{fujii2004unsupervised, title={Unsupervised Topic Adaptation for Lecture Speech Retrieval}, author={Atsushi Fujii and Katunobu Itou and Tomoyosi Akiba and Tetsuya Ishikawa}, journal={Proceedings of the 8th International Conference on Spoken Language Processing (ICSLP 2004), pp.2957-2960, Oct. 2004}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407027}, primaryClass={cs.CL} }
fujii2004unsupervised
arxiv-672018
cs/0407028
Effects of Language Modeling on Speech-driven Question Answering
<|reference_start|>Effects of Language Modeling on Speech-driven Question Answering: We integrate automatic speech recognition (ASR) and question answering (QA) to realize a speech-driven QA system, and evaluate its performance. We adapt an N-gram language model to natural language questions, so that the input of our system can be recognized with a high accuracy. We target WH-questions, which consist of a topic part and a fixed phrase used to ask about something. We first produce a general N-gram model intended to recognize the topic, and emphasize the counts of the N-grams that correspond to the fixed phrases. Given a transcription by the ASR engine, the QA engine extracts the answer candidates from target documents. We propose a passage retrieval method robust against recognition errors in the transcription. We use the QA test collection produced in NTCIR, which is a TREC-style evaluation workshop, and show the effectiveness of our method by means of experiments.<|reference_end|>
arxiv
@article{akiba2004effects, title={Effects of Language Modeling on Speech-driven Question Answering}, author={Tomoyosi Akiba and Atsushi Fujii and Katunobu Itou}, journal={Proceedings of the 8th International Conference on Spoken Language Processing (ICSLP 2004), pp.1053-1056, Oct. 2004}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407028}, primaryClass={cs.CL} }
akiba2004effects
arxiv-672019
cs/0407029
Static versus Dynamic Arbitrage Bounds on Multivariate Option Prices
<|reference_start|>Static versus Dynamic Arbitrage Bounds on Multivariate Option Prices: We compare static arbitrage price bounds on basket calls, i.e. bounds that only involve buy-and-hold trading strategies, with the price range obtained within a multi-variate generalization of the Black-Scholes model. While there is no gap between these two sets of prices in the univariate case, we observe here that contrary to our intuition about model risk for at-the-money calls, there is a somewhat large gap between model prices and static arbitrage prices, hence a similarly large set of prices on which a multivariate Black-Scholes model cannot be calibrated but where no conclusion can be drawn on the presence or not of a static arbitrage opportunity.<|reference_end|>
arxiv
@article{d'aspremont2004static, title={Static versus Dynamic Arbitrage Bounds on Multivariate Option Prices}, author={Alexandre d'Aspremont}, journal={arXiv preprint arXiv:cs/0407029}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407029}, primaryClass={cs.CE} }
d'aspremont2004static
arxiv-672020
cs/0407030
Scheduling with Fuzzy Methods
<|reference_start|>Scheduling with Fuzzy Methods: Nowadays, manufacturing industries -- driven by fierce competition and rising customer requirements -- are forced to produce a broader range of individual products of rising quality at the same (or preferably lower) cost. Meeting these demands implies an even more complex production process and thus also an appropriately increasing demand on its scheduling. Aggravatingly, vagueness of scheduling parameters -- such as times and conditions -- is often inherent in the production process. In addition, the search for an optimal schedule normally leads to very difficult problems (NP-hard problems in the complexity theoretical sense), which cannot be solved efficiently. With the intent to minimize these problems, the introduced heuristic method combines standard scheduling methods with fuzzy methods to get a nearly optimal schedule within an appropriate time while considering vagueness adequately.<|reference_end|>
arxiv
@article{eiden2004scheduling, title={Scheduling with Fuzzy Methods}, author={Wolfgang Anthony Eiden}, journal={arXiv preprint arXiv:cs/0407030}, year={2004}, number={20040628}, archivePrefix={arXiv}, eprint={cs/0407030}, primaryClass={cs.OH} }
eiden2004scheduling
arxiv-672021
cs/0407031
On Modal Logics of Partial Recursive Functions
<|reference_start|>On Modal Logics of Partial Recursive Functions: The classical propositional logic is known to be sound and complete with respect to the set semantics that interprets connectives as set operations. The paper extends propositional language by a new binary modality that corresponds to partial recursive function type constructor under the above interpretation. The cases of deterministic and non-deterministic functions are considered and for both of them semantically complete modal logics are described and decidability of these logics is established.<|reference_end|>
arxiv
@article{naumov2004on, title={On Modal Logics of Partial Recursive Functions}, author={Pavel Naumov}, journal={arXiv preprint arXiv:cs/0407031}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407031}, primaryClass={cs.LO} }
naumov2004on
arxiv-672022
cs/0407032
Exposing Software Defined Radio Functionality To Native Operating System Applications via Virtual Devices
<|reference_start|>Exposing Software Defined Radio Functionality To Native Operating System Applications via Virtual Devices: Many reconfigurable platforms require that applications be written specifically to take advantage of the reconfigurable hardware. In a PC-based environment, this presents an undesirable constraint in that the many already available applications cannot leverage on such hardware. Greatest benefit can only be derived from reconfigurable devices if even native OS applications can transparently utilize reconfigurable devices as they would normal full-fledged hardware devices. This paper presents how Proteus Virtual Devices are used to expose reconfigurable hardware in a transparent manner for use by typical native OS applications.<|reference_end|>
arxiv
@article{nathan2004exposing, title={Exposing Software Defined Radio Functionality To Native Operating System Applications via Virtual Devices}, author={Darran Nathan}, journal={arXiv preprint arXiv:cs/0407032}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407032}, primaryClass={cs.AR} }
nathan2004exposing
arxiv-672023
cs/0407033
Track Layouts of Graphs
<|reference_start|>Track Layouts of Graphs: A \emph{$(k,t)$-track layout} of a graph $G$ consists of a (proper) vertex $t$-colouring of $G$, a total order of each vertex colour class, and a (non-proper) edge $k$-colouring such that between each pair of colour classes no two monochromatic edges cross. This structure has recently arisen in the study of three-dimensional graph drawings. This paper presents the beginnings of a theory of track layouts. First we determine the maximum number of edges in a $(k,t)$-track layout, and show how to colour the edges given fixed linear orderings of the vertex colour classes. We then describe methods for the manipulation of track layouts. For example, we show how to decrease the number of edge colours in a track layout at the expense of increasing the number of tracks, and vice versa. We then study the relationship between track layouts and other models of graph layout, namely stack and queue layouts, and geometric thickness. One of our principal results is that the queue-number and track-number of a graph are tied, in the sense that one is bounded by a function of the other. As corollaries we prove that acyclic chromatic number is bounded by both queue-number and stack-number. Finally we consider track layouts of planar graphs. While it is an open problem whether planar graphs have bounded track-number, we prove bounds on the track-number of outerplanar graphs, and give the best known lower bound on the track-number of planar graphs.<|reference_end|>
arxiv
@article{dujmovic2004track, title={Track Layouts of Graphs}, author={Vida Dujmovic and Attila Por and David R. Wood}, journal={Discrete Mathematics \& Theoretical Computer Science 6.2:497-522, 2004}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407033}, primaryClass={cs.DM cs.CG} }
dujmovic2004track
arxiv-672024
cs/0407034
On the Complexity of Case-Based Planning
<|reference_start|>On the Complexity of Case-Based Planning: We analyze the computational complexity of problems related to case-based planning: planning when a plan for a similar instance is known, and planning from a library of plans. We prove that planning from a single case has the same complexity as generative planning (i.e., planning "from scratch"); using an extended definition of cases, complexity is reduced if the domain stored in the case is similar to the one to search plans for. Planning from a library of cases is shown to have the same complexity. In both cases, the complexity of planning remains, in the worst case, PSPACE-complete.<|reference_end|>
arxiv
@article{liberatore2004on, title={On the Complexity of Case-Based Planning}, author={Paolo Liberatore}, journal={arXiv preprint arXiv:cs/0407034}, year={2004}, doi={10.1080/09528130500283717}, archivePrefix={arXiv}, eprint={cs/0407034}, primaryClass={cs.AI cs.CC} }
liberatore2004on
arxiv-672025
cs/0407035
A Framework for High-Accuracy Privacy-Preserving Mining
<|reference_start|>A Framework for High-Accuracy Privacy-Preserving Mining: To preserve client privacy in the data mining process, a variety of techniques based on random perturbation of data records have been proposed recently. In this paper, we present a generalized matrix-theoretic model of random perturbation, which facilitates a systematic approach to the design of perturbation mechanisms for privacy-preserving mining. Specifically, we demonstrate that (a) the prior techniques differ only in their settings for the model parameters, and (b) through appropriate choice of parameter settings, we can derive new perturbation techniques that provide highly accurate mining results even under strict privacy guarantees. We also propose a novel perturbation mechanism wherein the model parameters are themselves characterized as random variables, and demonstrate that this feature provides significant improvements in privacy at a very marginal cost in accuracy. While our model is valid for random-perturbation-based privacy-preserving mining in general, we specifically evaluate its utility here with regard to frequent-itemset mining on a variety of real datasets. The experimental results indicate that our mechanisms incur substantially lower identity and support errors as compared to the prior techniques.<|reference_end|>
arxiv
@article{agrawal2004a, title={A Framework for High-Accuracy Privacy-Preserving Mining}, author={Shipra Agrawal, Jayant R. Haritsa}, journal={arXiv preprint arXiv:cs/0407035}, year={2004}, number={TR-2004-02, DSL/SERC, Indian Institute of Science}, archivePrefix={arXiv}, eprint={cs/0407035}, primaryClass={cs.DB cs.IR} }
agrawal2004a
arxiv-672026
cs/0407036
All Maximal Independent Sets and Dynamic Dominance for Sparse Graphs
<|reference_start|>All Maximal Independent Sets and Dynamic Dominance for Sparse Graphs: We describe algorithms, based on Avis and Fukuda's reverse search paradigm, for listing all maximal independent sets in a sparse graph in polynomial time and delay per output. For bounded degree graphs, our algorithms take constant time per set generated; for minor-closed graph families, the time is O(n) per set, and for more general sparse graph families we achieve subquadratic time per set. We also describe new data structures for maintaining a dynamic vertex set S in a sparse or minor-closed graph family, and querying the number of vertices not dominated by S; for minor-closed graph families the time per update is constant, while it is sublinear for any sparse graph family. We can also maintain a dynamic vertex set in an arbitrary m-edge graph and test the independence of the maintained set in time O(sqrt m) per update. We use the domination data structures as part of our enumeration algorithms.<|reference_end|>
arxiv
@article{eppstein2004all, title={All Maximal Independent Sets and Dynamic Dominance for Sparse Graphs}, author={David Eppstein}, journal={ACM Trans. Algorithms 5(4):A38, 2009}, year={2004}, doi={10.1145/1597036.1597042}, archivePrefix={arXiv}, eprint={cs/0407036}, primaryClass={cs.DS} }
eppstein2004all
arxiv-672027
cs/0407037
Generalized Evolutionary Algorithm based on Tsallis Statistics
<|reference_start|>Generalized Evolutionary Algorithm based on Tsallis Statistics: Generalized evolutionary algorithm based on Tsallis canonical distribution is proposed. The algorithm uses Tsallis generalized canonical distribution to weigh the configurations for `selection' instead of Gibbs-Boltzmann distribution. Our simulation results show that for an appropriate choice of non-extensive index that is offered by Tsallis statistics, evolutionary algorithms based on this generalization outperform algorithms based on Gibbs-Boltzmann distribution.<|reference_end|>
arxiv
@article{dukkipati2004generalized, title={Generalized Evolutionary Algorithm based on Tsallis Statistics}, author={Ambedkar Dukkipati, M. Narasimha Murty and Shalabh Bhatnagar}, journal={arXiv preprint arXiv:cs/0407037}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407037}, primaryClass={cs.AI} }
dukkipati2004generalized
arxiv-672028
cs/0407038
Model Checking of Statechart Models: Survey and Research Directions
<|reference_start|>Model Checking of Statechart Models: Survey and Research Directions: We survey existing approaches to the formal verification of statecharts using model checking. Although the semantics and subset of statecharts used in each approach varies considerably, along with the model checkers and their specification languages, most approaches rely on translating the hierarchical structure into the flat representation of the input language of the model checker. This makes model checking difficult to scale to industrial models, as the state space grows exponentially with flattening. We look at current approaches to model checking hierarchical structures and find that their semantics is significantly different from statecharts. We propose to address the problem of state space explosion using a combination of techniques, which are proposed as directions for further research.<|reference_end|>
arxiv
@article{bhaduri2004model, title={Model Checking of Statechart Models: Survey and Research Directions}, author={Purandar Bhaduri (1), S. Ramesh (2) ((1) TRDDC, Pune, India (2) IIT Bombay, India)}, journal={arXiv preprint arXiv:cs/0407038}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407038}, primaryClass={cs.SE} }
bhaduri2004model
arxiv-672029
cs/0407039
On the Convergence Speed of MDL Predictions for Bernoulli Sequences
<|reference_start|>On the Convergence Speed of MDL Predictions for Bernoulli Sequences: We consider the Minimum Description Length principle for online sequence prediction. If the underlying model class is discrete, then the total expected square loss is a particularly interesting performance measure: (a) this quantity is bounded, implying convergence with probability one, and (b) it additionally specifies a `rate of convergence'. Generally, for MDL only exponential loss bounds hold, as opposed to the linear bounds for a Bayes mixture. We show that this is even the case if the model class contains only Bernoulli distributions. We derive a new upper bound on the prediction error for countable Bernoulli classes. This implies a small bound (comparable to the one for Bayes mixtures) for certain important model classes. The results apply to many Machine Learning tasks including classification and hypothesis testing. We provide arguments that our theorems generalize to countable classes of i.i.d. models.<|reference_end|>
arxiv
@article{poland2004on, title={On the Convergence Speed of MDL Predictions for Bernoulli Sequences}, author={Jan Poland and Marcus Hutter}, journal={Proc. 15th International Conf. on Algorithmic Learning Theory (ALT-2004), pages 294-308}, year={2004}, number={IDSIA-13-04}, archivePrefix={arXiv}, eprint={cs/0407039}, primaryClass={cs.LG cs.AI cs.IT math.IT math.PR} }
poland2004on
arxiv-672030
cs/0407040
Decomposition Based Search - A theoretical and experimental evaluation
<|reference_start|>Decomposition Based Search - A theoretical and experimental evaluation: In this paper we present and evaluate a search strategy called Decomposition Based Search (DBS) which is based on two steps: subproblem generation and subproblem solution. The generation of subproblems is done through value ranking and domain splitting. Subdomains are explored so as to generate, according to the heuristic chosen, promising subproblems first. We show that two well known search strategies, Limited Discrepancy Search (LDS) and Iterative Broadening (IB), can be seen as special cases of DBS. First we present a tuning of DBS that visits the same search nodes as IB, but avoids restarts. Then we compare both theoretically and computationally DBS and LDS using the same heuristic. We prove that DBS has a higher probability of being successful than LDS on a comparable number of nodes, under realistic assumptions. Experiments on a constraint satisfaction problem and an optimization problem show that DBS is indeed very effective if compared to LDS.<|reference_end|>
arxiv
@article{van hoeve2004decomposition, title={Decomposition Based Search - A theoretical and experimental evaluation}, author={W.J. van Hoeve and M. Milano}, journal={arXiv preprint arXiv:cs/0407040}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407040}, primaryClass={cs.AI} }
van hoeve2004decomposition
arxiv-672031
cs/0407041
Exploiting Semidefinite Relaxations in Constraint Programming
<|reference_start|>Exploiting Semidefinite Relaxations in Constraint Programming: Constraint programming uses enumeration and search tree pruning to solve combinatorial optimization problems. In order to speed up this solution process, we investigate the use of semidefinite relaxations within constraint programming. In principle, we use the solution of a semidefinite relaxation to guide the traversal of the search tree, using a limited discrepancy search strategy. Furthermore, a semidefinite relaxation produces a bound for the solution value, which is used to prune parts of the search tree. Experimental results on stable set and maximum clique problem instances show that constraint programming can indeed greatly benefit from semidefinite relaxations.<|reference_end|>
arxiv
@article{van hoeve2004exploiting, title={Exploiting Semidefinite Relaxations in Constraint Programming}, author={Willem Jan van Hoeve}, journal={arXiv preprint arXiv:cs/0407041}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407041}, primaryClass={cs.DM cs.PL} }
van hoeve2004exploiting
arxiv-672032
cs/0407042
Postponing Branching Decisions
<|reference_start|>Postponing Branching Decisions: Solution techniques for Constraint Satisfaction and Optimisation Problems often make use of backtrack search methods, exploiting variable and value ordering heuristics. In this paper, we propose and analyse a very simple method to apply in case the value ordering heuristic produces ties: postponing the branching decision. To this end, we group together values in a tie, branch on this sub-domain, and defer the decision among them to lower levels of the search tree. We show theoretically and experimentally that this simple modification can dramatically improve the efficiency of the search strategy. Although in practice similar methods may have been applied already, to our knowledge, no empirical or theoretical study has been proposed in the literature to identify when and to what extent this strategy should be used.<|reference_end|>
arxiv
@article{van hoeve2004postponing, title={Postponing Branching Decisions}, author={Willem Jan van Hoeve and Michela Milano}, journal={arXiv preprint arXiv:cs/0407042}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407042}, primaryClass={cs.AI} }
van hoeve2004postponing
arxiv-672033
cs/0407043
A Hyper-Arc Consistency Algorithm for the Soft Alldifferent Constraint
<|reference_start|>A Hyper-Arc Consistency Algorithm for the Soft Alldifferent Constraint: This paper presents an algorithm that achieves hyper-arc consistency for the soft alldifferent constraint. To this end, we prove and exploit the equivalence with a minimum-cost flow problem. Consistency of the constraint can be checked in O(nm) time, and hyper-arc consistency is achieved in O(m) time, where n is the number of variables involved and m is the sum of the cardinalities of the domains. It improves a previous method that did not ensure hyper-arc consistency.<|reference_end|>
arxiv
@article{van hoeve2004a, title={A Hyper-Arc Consistency Algorithm for the Soft Alldifferent Constraint}, author={Willem Jan van Hoeve}, journal={arXiv preprint arXiv:cs/0407043}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407043}, primaryClass={cs.PL} }
van hoeve2004a
arxiv-672034
cs/0407044
Reduced cost-based ranking for generating promising subproblems
<|reference_start|>Reduced cost-based ranking for generating promising subproblems: In this paper, we propose an effective search procedure that interleaves two steps: subproblem generation and subproblem solution. We mainly focus on the first part. It consists of a variable domain value ranking based on reduced costs. Exploiting the ranking, we generate, in a Limited Discrepancy Search tree, the most promising subproblems first. An interesting result is that reduced costs provide a very precise ranking that almost always allows the optimal solution to be found in the first generated subproblem, even if its dimension is significantly smaller than that of the original problem. Concerning the proof of optimality, we exploit a way to increase the lower bound for subproblems at higher discrepancies. We present experimental results on the TSP and its time-constrained variant to demonstrate the effectiveness of the proposed approach, but the technique could be generalized for other problems.<|reference_end|>
arxiv
@article{milano2004reduced, title={Reduced cost-based ranking for generating promising subproblems}, author={M. Milano and W.J. van Hoeve}, journal={arXiv preprint arXiv:cs/0407044}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407044}, primaryClass={cs.AI} }
milano2004reduced
arxiv-672035
cs/0407045
The First-Order Theory of Sets with Cardinality Constraints is Decidable
<|reference_start|>The First-Order Theory of Sets with Cardinality Constraints is Decidable: We show the decidability of the first-order theory of the language that combines Boolean algebras of sets of uninterpreted elements with Presburger arithmetic operations. We thereby disprove a recent conjecture that this theory is undecidable. Our language allows relating the cardinalities of sets to the values of integer variables, and can distinguish finite and infinite sets. We use quantifier elimination to show the decidability and obtain an elementary upper bound on the complexity. Precise program analyses can use our decidability result to verify representation invariants of data structures that use an integer field to represent the number of stored elements.<|reference_end|>
arxiv
@article{kuncak2004the, title={The First-Order Theory of Sets with Cardinality Constraints is Decidable}, author={Viktor Kuncak and Martin Rinard}, journal={arXiv preprint arXiv:cs/0407045}, year={2004}, number={MIT CSAIL 958}, archivePrefix={arXiv}, eprint={cs/0407045}, primaryClass={cs.LO cs.PL} }
kuncak2004the
arxiv-672036
cs/0407046
A Bimachine Compiler for Ranked Tagging Rules
<|reference_start|>A Bimachine Compiler for Ranked Tagging Rules: This paper describes a novel method of compiling ranked tagging rules into a deterministic finite-state device called a bimachine. The rules are formulated in the framework of regular rewrite operations and allow unrestricted regular expressions in both left and right rule contexts. The compiler is illustrated by an application within a speech synthesis system.<|reference_end|>
arxiv
@article{skut2004a, title={A Bimachine Compiler for Ranked Tagging Rules}, author={Wojciech Skut, Stefan Ulrich and Kathrine Hammervold}, journal={arXiv preprint arXiv:cs/0407046}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407046}, primaryClass={cs.CL} }
skut2004a
arxiv-672037
cs/0407047
Channel-Independent and Sensor-Independent Stimulus Representations
<|reference_start|>Channel-Independent and Sensor-Independent Stimulus Representations: This paper shows how a machine, which observes stimuli through an uncharacterized, uncalibrated channel and sensor, can glean machine-independent information (i.e., channel- and sensor-independent information) about the stimuli. First, we demonstrate that a machine defines a specific coordinate system on the stimulus state space, with the nature of that coordinate system depending on the device's channel and sensor. Thus, machines with different channels and sensors "see" the same stimulus trajectory through state space, but in different machine-specific coordinate systems. For a large variety of physical stimuli, statistical properties of that trajectory endow the stimulus configuration space with differential geometric structure (a metric and parallel transfer procedure), which can then be used to represent relative stimulus configurations in a coordinate-system-independent manner (and, therefore, in a channel- and sensor-independent manner). The resulting description is an "inner" property of the stimulus time series in the sense that it does not depend on extrinsic factors like the observer's choice of a coordinate system in which the stimulus is viewed (i.e., the observer's choice of channel and sensor). This methodology is illustrated with analytic examples and with a numerically simulated experiment. In an intelligent sensory device, this kind of representation "engine" could function as a "front-end" that passes channel/sensor-independent stimulus representations to a pattern recognition module. After a pattern recognizer has been trained in one of these devices, it could be used without change in other devices having different channels and sensors.<|reference_end|>
arxiv
@article{levin2004channel-independent, title={Channel-Independent and Sensor-Independent Stimulus Representations}, author={David N. Levin (U.of Chicago)}, journal={arXiv preprint arXiv:cs/0407047}, year={2004}, doi={10.1063/1.2128687}, archivePrefix={arXiv}, eprint={cs/0407047}, primaryClass={cs.CV cs.AI} }
levin2004channel-independent
arxiv-672038
cs/0407048
Technological networks and the spread of computer viruses
<|reference_start|>Technological networks and the spread of computer viruses: Computer infections such as viruses and worms spread over networks of contacts between computers, with different types of networks being exploited by different types of infections. Here we analyze the structures of several of these networks, exploring their implications for modes of spread and the control of infection. We argue that vaccination strategies that focus on a limited number of network nodes, whether targeted or randomly chosen, are in many cases unlikely to be effective. An alternative dynamic mechanism for the control of contagion, called throttling, is introduced and argued to be effective under a range of conditions.<|reference_end|>
arxiv
@article{balthrop2004technological, title={Technological networks and the spread of computer viruses}, author={Justin Balthrop, Stephanie Forrest, M. E. J. Newman, and Matthew M. Williamson}, journal={Science 304, 527-529 (2004)}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407048}, primaryClass={cs.NI cs.CY} }
balthrop2004technological
arxiv-672039
cs/0407049
Preferred Answer Sets for Ordered Logic Programs
<|reference_start|>Preferred Answer Sets for Ordered Logic Programs: We extend answer set semantics to deal with inconsistent programs (containing classical negation), by finding a ``best'' answer set. Within the context of inconsistent programs, it is natural to have a partial order on rules, representing a preference for satisfying certain rules, possibly at the cost of violating less important ones. We show that such a rule order induces a natural order on extended answer sets, the minimal elements of which we call preferred answer sets. We characterize the expressiveness of the resulting semantics and show that it can simulate negation as failure, disjunction and some other formalisms such as logic programs with ordered disjunction. The approach is shown to be useful in several application areas, e.g. database repair, where minimal repairs correspond to preferred answer sets. To appear in Theory and Practice of Logic Programming (TPLP).<|reference_end|>
arxiv
@article{van nieuwenborgh2004preferred, title={Preferred Answer Sets for Ordered Logic Programs}, author={Davy Van Nieuwenborgh and Dirk Vermeir}, journal={arXiv preprint arXiv:cs/0407049}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407049}, primaryClass={cs.LO cs.AI} }
van nieuwenborgh2004preferred
arxiv-672040
cs/0407050
Modeling and Validating Hybrid Systems Using VDM and Mathematica
<|reference_start|>Modeling and Validating Hybrid Systems Using VDM and Mathematica: Hybrid systems are characterized by the hybrid evolution of their state: A part of the state changes discretely, the other part changes continuously over time. Typically, modern control applications belong to this class of systems, where a digital controller interacts with a physical environment. In this article we illustrate how a combination of the formal method VDM and the computer algebra system Mathematica can be used to model and simulate both aspects: the control logic and the physics involved. A new Mathematica package emulating VDM-SL has been developed that allows the integration of differential equation systems into formal specifications. The SAFER example from Kelly (1997) serves to demonstrate the new simulation capabilities Mathematica adds: After the thruster selection process, the astronaut's actual position and velocity is calculated by numerically solving Euler's and Newton's equations for rotation and translation. Furthermore, interactive validation is supported by a graphical user interface and data animation.<|reference_end|>
arxiv
@article{aichernig2004modeling, title={Modeling and Validating Hybrid Systems Using VDM and Mathematica}, author={Bernhard K. Aichernig and Reinhold Kainhofer}, journal={In C.Michael Holloway, editor, Lfm2000, Fifth NASA Langley Formal Methods Workshop, Williamsburg, Virginia, June 2000, number CP-2000-210100, pages 35-46. NASA, June 2000}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407050}, primaryClass={cs.SE} }
aichernig2004modeling
arxiv-672041
cs/0407051
Bug shallowness in open-source, Macintosh software
<|reference_start|>Bug shallowness in open-source, Macintosh software: Central to the power of open-source software is bug shallowness, the relative ease of finding and fixing bugs. The open-source movement began with Unix software, so many users were also programmers capable of finding and fixing bugs given the source code. But as the open-source movement reaches the Macintosh platform, bugs may not be shallow because few Macintosh users are programmers. Based on reports from open-source developers, I, however, conclude that bugs are as shallow in open-source, Macintosh software as in any other open-source software.<|reference_end|>
arxiv
@article{worley2004bug, title={Bug shallowness in open-source, Macintosh software}, author={G Gordon Worley III}, journal={Worley III, G Gordon. "Bug shallowness in open-source, Macintosh software". Advanced Developers Hands On Conference 19. 2004}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407051}, primaryClass={cs.SE} }
worley2004bug
arxiv-672042
cs/0407052
M@th Desktop and MD Tools - Mathematics and Mathematica Made Easy for Students
<|reference_start|>M@th Desktop and MD Tools - Mathematics and Mathematica Made Easy for Students: We present two add-ons for Mathematica for teaching mathematics to undergraduate and high school students. These two applications, M@th Desktop (MD) and M@th Desktop Tools (MDTools), include several palettes and notebooks covering almost every field. The underlying didactic concept is so-called "blended learning", in which these tools are meant to be used as a complement to the professor or teacher rather than as a replacement, as other e-learning applications do. They enable students to avoid the usual problem of computer-based learning, namely that too large an amount of time is wasted struggling with computer and program errors instead of actually learning the mathematical concepts. M@th Desktop Tools is palette-based and provides easily accessible and user-friendly templates for the most important functions in the fields of Analysis, Algebra, Linear Algebra and Statistics. M@th Desktop, in contrast, is a modern, interactive teaching and learning software package for mathematics classes. It comprises modules for Differentiation, Integration, and Statistics, and each module presents its topic with a combination of interactive notebooks and palettes. Both packages can be obtained from Deltasoft's homepage at http://www.deltasoft.at/ .<|reference_end|>
arxiv
@article{kainhofer2004m@th, title={M@th Desktop and MD Tools - Mathematics and Mathematica Made Easy for Students}, author={Reinhold Kainhofer and Reinhard V. Simonovits}, journal={arXiv preprint arXiv:cs/0407052}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407052}, primaryClass={cs.MS} }
kainhofer2004m@th
arxiv-672043
cs/0407053
Design of a Parallel and Distributed Web Search Engine
<|reference_start|>Design of a Parallel and Distributed Web Search Engine: This paper describes the architecture of MOSE (My Own Search Engine), a scalable parallel and distributed engine for searching the web. MOSE was specifically designed to efficiently exploit affordable parallel architectures, such as clusters of workstations. Its modular and scalable architecture can easily be tuned to fulfill the bandwidth requirements of the application at hand. Both task-parallel and data-parallel approaches are exploited within MOSE in order to increase the throughput and efficiently use communication, storing and computational resources. We used a collection of html documents as a benchmark, and conducted preliminary experiments on a cluster of three SMP Linux PCs.<|reference_end|>
arxiv
@article{orlando2004design, title={Design of a Parallel and Distributed Web Search Engine}, author={Salvatore Orlando (1), Raffaele Perego (2), Fabrizio Silvestri (1 and 3) ((1) Dipartimento di Informatica, Università di Venezia - Mestre, Italy, (2) Istituto di Scienza e Tecnologia per l'Informazione (A. Faedo) - Pisa, Italy, (3) Dipartimento di Informatica, Università di Pisa, Italy)}, journal={arXiv preprint arXiv:cs/0407053}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407053}, primaryClass={cs.IR cs.DC} }
orlando2004design
arxiv-672044
cs/0407054
From truth to computability I
<|reference_start|>From truth to computability I: The recently initiated approach called computability logic is a formal theory of interactive computation. See a comprehensive online source on the subject at http://www.cis.upenn.edu/~giorgi/cl.html . The present paper contains a soundness and completeness proof for the deductive system CL3 which axiomatizes the most basic first-order fragment of computability logic called the finite-depth, elementary-base fragment. Among the potential application areas for this result are the theory of interactive computation, constructive applied theories, knowledgebase systems, systems for resource-bound planning and action. This paper is self-contained as it reintroduces all relevant definitions as well as main motivations.<|reference_end|>
arxiv
@article{japaridze2004from, title={From truth to computability I}, author={Giorgi Japaridze}, journal={Theoretical Computer Science 357 (2006), pp. 100-135}, year={2004}, doi={10.1016/j.tcs.2006.03.014}, archivePrefix={arXiv}, eprint={cs/0407054}, primaryClass={cs.LO cs.AI cs.GT math.LO} }
japaridze2004from
arxiv-672045
cs/0407055
PELCR: Parallel Environment for Optimal Lambda-Calculus Reduction
<|reference_start|>PELCR: Parallel Environment for Optimal Lambda-Calculus Reduction: In this article we present the implementation of an environment supporting L\'evy's \emph{optimal reduction} for the $\lambda$-calculus \cite{Lev78} on parallel (or distributed) computing systems. In an approach similar to Lamping's in \cite{Lamping90}, we base our work on a graph reduction technique known as \emph{directed virtual reduction} \cite{DPR97} which is actually a restriction of Danos-Regnier virtual reduction \cite{DanosRegnier93}. The environment, which we refer to as PELCR (Parallel Environment for optimal Lambda-Calculus Reduction) relies on a strategy for directed virtual reduction, namely {\em half combustion}, which we introduce in this article. While developing PELCR we have adopted both a message aggregation technique, allowing a reduction of the communication overhead, and a fair policy for distributing dynamically originated load among processors. We also present an experimental study demonstrating the ability of PELCR to definitely exploit parallelism intrinsic to $\lambda$-terms while performing the reduction. Our results show that PELCR achieves up to 70/80% of the ideal speedup on last generation multiprocessor computing systems. As a last note, the software modules have been developed with the {\tt C} language and using a standard interface for message passing, i.e. MPI, thus making PELCR itself a highly portable software package.<|reference_end|>
arxiv
@article{pedicini2004pelcr:, title={PELCR: Parallel Environment for Optimal Lambda-Calculus Reduction}, author={M. Pedicini, F. Quaglia}, journal={arXiv preprint arXiv:cs/0407055}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407055}, primaryClass={cs.LO cs.DC} }
pedicini2004pelcr:
arxiv-672046
cs/0407056
On the hardness of distinguishing mixed-state quantum computations
<|reference_start|>On the hardness of distinguishing mixed-state quantum computations: This paper considers the following problem. Two mixed-state quantum circuits Q and R are given, and the goal is to determine which of two possibilities holds: (i) Q and R act nearly identically on all possible quantum state inputs, or (ii) there exists some input state that Q and R transform into almost perfectly distinguishable outputs. This problem may be viewed as an abstraction of the following problem: given two physical processes described by sequences of local interactions, are the processes effectively the same or are they different? We prove that this problem is a complete promise problem for the class QIP of problems having quantum interactive proof systems, and is therefore PSPACE-hard. This is in sharp contrast to the fact that the analogous problem for classical (probabilistic) circuits is in AM, and for unitary quantum circuits is in QMA.<|reference_end|>
arxiv
@article{rosgen2004on, title={On the hardness of distinguishing mixed-state quantum computations}, author={Bill Rosgen and John Watrous}, journal={arXiv preprint arXiv:cs/0407056}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407056}, primaryClass={cs.CC quant-ph} }
rosgen2004on
arxiv-672047
cs/0407057
Universal Convergence of Semimeasures on Individual Random Sequences
<|reference_start|>Universal Convergence of Semimeasures on Individual Random Sequences: Solomonoff's central result on induction is that the posterior of a universal semimeasure M converges rapidly and with probability 1 to the true sequence generating posterior mu, if the latter is computable. Hence, M is eligible as a universal sequence predictor in case of unknown mu. Despite some nearby results and proofs in the literature, the stronger result of convergence for all (Martin-Loef) random sequences remained open. Such a convergence result would be particularly interesting and natural, since randomness can be defined in terms of M itself. We show that there are universal semimeasures M which do not converge for all random sequences, i.e. we give a partial negative answer to the open problem. We also provide a positive answer for some non-universal semimeasures. We define the incomputable measure D as a mixture over all computable measures and the enumerable semimeasure W as a mixture over all enumerable nearly-measures. We show that W converges to D and D to mu on all random sequences. The Hellinger distance measuring closeness of two distributions plays a central role.<|reference_end|>
arxiv
@article{hutter2004universal, title={Universal Convergence of Semimeasures on Individual Random Sequences}, author={Marcus Hutter and Andrej Muchnik}, journal={Proc. 15th International Conf. on Algorithmic Learning Theory (ALT-2004), pages 234-248}, year={2004}, number={IDSIA-14-04}, archivePrefix={arXiv}, eprint={cs/0407057}, primaryClass={cs.LG cs.AI cs.CC cs.IT math.IT math.PR} }
hutter2004universal
arxiv-672048
cs/0407058
Communication-Aware Processor Allocation for Supercomputers
<|reference_start|>Communication-Aware Processor Allocation for Supercomputers: This paper gives processor-allocation algorithms for minimizing the average number of communication hops between the assigned processors for grid architectures, in the presence of occupied cells. The simpler problem of assigning processors on a free grid has been studied by Karp, McKellar, and Wong, who show that the solutions have nontrivial structure; they left open the complexity of the problem. The associated clustering problem is as follows: Given n points in R^d, find k points that minimize their average pairwise L1 distance. We present a natural approximation algorithm and show that it is a 7/4-approximation for 2D grids. For d-dimensional space, the approximation guarantee is 2-(1/2d), which is tight. We also give a polynomial-time approximation scheme (PTAS) for constant dimension d, and report on experimental results.<|reference_end|>
arxiv
@article{bender2004communication-aware, title={Communication-Aware Processor Allocation for Supercomputers}, author={Michael A. Bender, David P. Bunde, Erik D. Demaine, Sandor P. Fekete, Vitus J. Leung, Henk Meijer and Cynthia A. Phillips}, journal={arXiv preprint arXiv:cs/0407058}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407058}, primaryClass={cs.DS cs.DC} }
bender2004communication-aware
arxiv-672049
cs/0407059
On rational definite summation
<|reference_start|>On rational definite summation: We present a partial proof of van Hoeij-Abramov conjecture about the algorithmic possibility of computation of finite sums of rational functions. The theoretical results proved in this paper provide an algorithm for computation of a large class of sums $ S(n) = \sum_{k=0}^{n-1}R(k,n)$.<|reference_end|>
arxiv
@article{tsarev2004on, title={On rational definite summation}, author={Sergey P. Tsarev}, journal={arXiv preprint arXiv:cs/0407059}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407059}, primaryClass={cs.SC cs.DM} }
tsarev2004on
arxiv-672050
cs/0407060
Tight bounds for LDPC and LDGM codes under MAP decoding
<|reference_start|>Tight bounds for LDPC and LDGM codes under MAP decoding: A new method for analyzing low density parity check (LDPC) codes and low density generator matrix (LDGM) codes under bit maximum a posteriori probability (MAP) decoding is introduced. The method is based on a rigorous approach to spin glasses developed by Francesco Guerra. It allows one to construct lower bounds on the entropy of the transmitted message conditioned on the received one. Based on heuristic statistical mechanics calculations, we conjecture such bounds to be tight. The result holds for standard irregular ensembles when used over binary-input output-symmetric channels. The method is first developed for Tanner graph ensembles with Poisson left degree distribution. It is then generalized to `multi-Poisson' graphs, and, by a completion procedure, to arbitrary degree distributions.<|reference_end|>
arxiv
@article{montanari2004tight, title={Tight bounds for LDPC and LDGM codes under MAP decoding}, author={Andrea Montanari}, journal={IEEE Trans. on Inf. Theory, vol.51, pp. 3221-3246 (2005)}, year={2004}, doi={10.1109/TIT.2005.853320}, archivePrefix={arXiv}, eprint={cs/0407060}, primaryClass={cs.IT cond-mat.dis-nn math.IT} }
montanari2004tight
arxiv-672051
cs/0407061
A measure of similarity between graph vertices
<|reference_start|>A measure of similarity between graph vertices: We introduce a concept of similarity between vertices of directed graphs. Let G_A and G_B be two directed graphs. We define a similarity matrix whose (i, j)-th real entry expresses how similar vertex j (in G_A) is to vertex i (in G_B). The similarity matrix can be obtained as the limit of the normalized even iterates of a linear transformation. In the special case where G_A=G_B=G, the matrix is square and the (i, j)-th entry is the similarity score between the vertices i and j of G. We point out that Kleinberg's "hub and authority" method to identify web-pages relevant to a given query can be viewed as a special case of our definition in the case where one of the graphs has two vertices and a unique directed edge between them. In analogy to Kleinberg, we show that our similarity scores are given by the components of a dominant eigenvector of a non-negative matrix. Potential applications of our similarity concept are numerous. We illustrate an application for the automatic extraction of synonyms in a monolingual dictionary.<|reference_end|>
arxiv
@article{blondel2004a, title={A measure of similarity between graph vertices}, author={Vincent Blondel, Anahi Gajardo, Maureen Heymans, Pierre Senellart, Paul Van Dooren}, journal={arXiv preprint arXiv:cs/0407061}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407061}, primaryClass={cs.IR cond-mat.dis-nn cs.DM physics.data-an} }
blondel2004a
arxiv-672052
cs/0407062
Performance Analysis of the Globus Toolkit Monitoring and Discovery Service, MDS2
<|reference_start|>Performance Analysis of the Globus Toolkit Monitoring and Discovery Service, MDS2: Monitoring and information services form a key component of a distributed system, or Grid. A quantitative study of such services can aid in understanding the performance limitations, inform the deployment of the monitoring system, and help evaluate future development work. To this end, we examined the performance of the Globus Toolkit (registered trademark) Monitoring and Discovery Service (MDS2) by instrumenting its main services using NetLogger. Our study shows a strong advantage to caching or prefetching the data, as well as the need to have primary components at well-connected sites.<|reference_end|>
arxiv
@article{zhang2004performance, title={Performance Analysis of the Globus Toolkit Monitoring and Discovery Service, MDS2}, author={Xuehai Zhang and Jennifer M. Schopf}, journal={arXiv preprint arXiv:cs/0407062}, year={2004}, number={Preprint ANL/MCS-P1115-0104}, archivePrefix={arXiv}, eprint={cs/0407062}, primaryClass={cs.DC cs.PF} }
zhang2004performance
arxiv-672053
cs/0407063
Unfolding Smooth Prismatoids
<|reference_start|>Unfolding Smooth Prismatoids: We define a notion for unfolding smooth, ruled surfaces, and prove that every smooth prismatoid (the convex hull of two smooth curves lying in parallel planes), has a nonoverlapping "volcano unfolding." These unfoldings keep the base intact, unfold the sides outward, splayed around the base, and attach the top to the tip of some side rib. Our result answers a question for smooth prismatoids whose analog for polyhedral prismatoids remains unsolved.<|reference_end|>
arxiv
@article{benbernou2004unfolding, title={Unfolding Smooth Prismatoids}, author={Nadia Benbernou, Patricia Cahn, Joseph O'Rourke}, journal={arXiv preprint arXiv:cs/0407063}, year={2004}, number={Smith College Computer Science Technical Report 078}, archivePrefix={arXiv}, eprint={cs/0407063}, primaryClass={cs.CG cs.DM} }
benbernou2004unfolding
arxiv-672054
cs/0407064
A Sequent Calculus and a Theorem Prover for Standard Conditional Logics
<|reference_start|>A Sequent Calculus and a Theorem Prover for Standard Conditional Logics: In this paper we present a cut-free sequent calculus, called SeqS, for some standard conditional logics, namely CK, CK+ID, CK+MP and CK+MP+ID. The calculus uses labels and transition formulas and can be used to prove decidability and space complexity bounds for the respective logics. We also present CondLean, a theorem prover for these logics implementing SeqS calculi written in SICStus Prolog.<|reference_end|>
arxiv
@article{olivetti2004a, title={A Sequent Calculus and a Theorem Prover for Standard Conditional Logics}, author={Nicola Olivetti, Gian Luca Pozzato, Camilla Schwind}, journal={arXiv preprint arXiv:cs/0407064}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407064}, primaryClass={cs.LO cs.AI} }
olivetti2004a
arxiv-672055
cs/0407065
Word Sense Disambiguation by Web Mining for Word Co-occurrence Probabilities
<|reference_start|>Word Sense Disambiguation by Web Mining for Word Co-occurrence Probabilities: This paper describes the National Research Council (NRC) Word Sense Disambiguation (WSD) system, as applied to the English Lexical Sample (ELS) task in Senseval-3. The NRC system approaches WSD as a classical supervised machine learning problem, using familiar tools such as the Weka machine learning software and Brill's rule-based part-of-speech tagger. Head words are represented as feature vectors with several hundred features. Approximately half of the features are syntactic and the other half are semantic. The main novelty in the system is the method for generating the semantic features, based on word co-occurrence probabilities. The probabilities are estimated using the Waterloo MultiText System with a corpus of about one terabyte of unlabeled text, collected by a web crawler.<|reference_end|>
arxiv
@article{turney2004word, title={Word Sense Disambiguation by Web Mining for Word Co-occurrence Probabilities}, author={Peter D. Turney (National Research Council of Canada)}, journal={Proceedings of the Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text (SENSEVAL-3), (2004), Barcelona, Spain, 239-242}, year={2004}, archivePrefix={arXiv}, eprint={cs/0407065}, primaryClass={cs.CL cs.IR cs.LG} }
turney2004word
arxiv-672056
cs/0407066
ParFORM: Parallel Version of the Symbolic Manipulation Program FORM
<|reference_start|>ParFORM: Parallel Version of the Symbolic Manipulation Program FORM: After an introduction to the sequential version of FORM and the mechanisms behind it, we report on the status of our parallelization project. We now have a parallel version of FORM running on cluster and SMP architectures. This version can be used to run arbitrary FORM programs in parallel.<|reference_end|>
arxiv
@article{tentyukov2004parform:, title={ParFORM: Parallel Version of the Symbolic Manipulation Program FORM}, author={M.Tentyukov, D.Fliegner, M.Frank, A.Onischenko, A.Retey, H.M.Staudenmaier and J.A.M.Vermaseren}, journal={arXiv preprint arXiv:cs/0407066}, year={2004}, number={TTP04-15}, archivePrefix={arXiv}, eprint={cs/0407066}, primaryClass={cs.SC cs.DC hep-ph} }
tentyukov2004parform:
arxiv-672057
cs/0408001
Semantic Linking - a Context-Based Approach to Interactivity in Hypermedia
<|reference_start|>Semantic Linking - a Context-Based Approach to Interactivity in Hypermedia: The semantic Web initiates new, high-level access schemes to online content and applications. One area in particular need of redefined content exploration is on-line educational applications and their concepts of interactivity in the framework of open hypermedia systems. In the present paper we discuss aspects and opportunities of gaining interactivity schemes from semantic notions of components. A transition from standard educational annotation to semantic statements of hyperlinks is discussed. Further on, we introduce the concept of semantic link contexts as an approach to manage a coherent rhetoric of linking. A practical implementation is introduced, as well. Our semantic hyperlink implementation is based on the more general Multimedia Information Repository MIR, an open hypermedia system supporting the standards XML, Corba and JNDI.<|reference_end|>
arxiv
@article{engelhardt2004semantic, title={Semantic Linking - a Context-Based Approach to Interactivity in Hypermedia}, author={Michael Engelhardt, Thomas C. Schmidt}, journal={R. Tolksdorf, R. Eckstein: Proc. of Berliner XML Tage. Humboldt-Universitaet zu Berlin; pp. 55-66; ISBN 3-88579-116-1; Berlin; 2003}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408001}, primaryClass={cs.IR cs.LG} }
engelhardt2004semantic
arxiv-672058
cs/0408002
Roaming Real-Time Applications - Mobility Services in IPv6 Networks
<|reference_start|>Roaming Real-Time Applications - Mobility Services in IPv6 Networks: Emerging mobility standards within the next generation Internet Protocol, IPv6, promise to continuously operate devices roaming between IP networks. Associated with the paradigm of ubiquitous computing and communication, network technology is on the spot to deliver voice and videoconferencing as a standard internet solution. However, current roaming procedures are too slow, to remain seamless for real-time applications. Multicast mobility still waits for a convincing design. This paper investigates the temporal behaviour of mobile IPv6 with dedicated focus on topological impacts. Extending the hierarchical mobile IPv6 approach we suggest protocol improvements for a continuous handover, which may serve bidirectional multicast communication, as well. Along this line a multicast mobility concept is introduced as a service for clients and sources, as they are of dedicated importance in multipoint conferencing applications. The mechanisms introduced do not rely on assumptions of any specific multicast routing protocol in use.<|reference_end|>
arxiv
@article{schmidt2004roaming, title={Roaming Real-Time Applications - Mobility Services in IPv6 Networks}, author={Thomas C. Schmidt, Matthias W{\"a}hlisch}, journal={Proceedings TERENA Networking Conference Zagreb, 2003, http://www.terena.nl/conferences/tnc2003/programme/final-programme.html}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408002}, primaryClass={cs.NI cs.PF} }
schmidt2004roaming
arxiv-672059
cs/0408003
Multi-Embedding of Metric Spaces
<|reference_start|>Multi-Embedding of Metric Spaces: Metric embedding has become a common technique in the design of algorithms. Its applicability is often dependent on how high the embedding's distortion is. For example, embedding a finite metric space into a tree may require distortion linear in its size. Using probabilistic metric embeddings, the bound on the distortion reduces to logarithmic in the size. We take a step in the direction of bypassing the lower bound on the distortion in terms of the size of the metric. We define "multi-embeddings" of metric spaces in which a point is mapped onto a set of points, while keeping the target metric of polynomial size and preserving the distortion of paths. The distortion obtained with such multi-embeddings into ultrametrics is at most O(log Delta loglog Delta), where Delta is the aspect ratio of the metric. In particular, for expander graphs, we are able to obtain constant distortion embeddings into trees, in contrast with the Omega(log n) lower bound for all previous notions of embeddings. We demonstrate the algorithmic application of the new embeddings for two optimization problems: group Steiner tree and metrical task systems.<|reference_end|>
arxiv
@article{bartal2004multi-embedding, title={Multi-Embedding of Metric Spaces}, author={Yair Bartal, Manor Mendel}, journal={SIAM J. Comput. 34(1): 248-259, 2004}, year={2004}, doi={10.1137/S0097539703433122}, archivePrefix={arXiv}, eprint={cs/0408003}, primaryClass={cs.DS} }
bartal2004multi-embedding
arxiv-672060
cs/0408004
Hypermedia Learning Objects System - On the Way to a Semantic Educational Web
<|reference_start|>Hypermedia Learning Objects System - On the Way to a Semantic Educational Web: While eLearning systems are becoming more and more popular in daily education, available applications lack opportunities to structure, annotate and manage their contents in a high-level fashion. General efforts to improve these deficits are taken by initiatives to define rich meta data sets and a semantic Web layer. In the present paper we introduce Hylos, an online learning system. Hylos is based on a cellular eLearning Object (ELO) information model encapsulating meta data conforming to the LOM standard. Content management is provisioned on this semantic meta data level and allows for variable, dynamically adaptable access structures. Context-aware multifunctional links permit a systematic navigation depending on the learners' and didactic needs, thereby exploring the capabilities of the semantic web. Hylos is built upon the more general Multimedia Information Repository (MIR) and the MIR adaptive context linking environment (MIRaCLE), its linking extension. MIR is an open system supporting the standards XML, Corba and JNDI. Hylos benefits from manageable information structures, sophisticated access logic and high-level authoring tools like the ELO editor responsible for the semi-manual creation of meta data and WYSIWYG-like content editing.<|reference_end|>
arxiv
@article{engelhardt2004hypermedia, title={Hypermedia Learning Objects System - On the Way to a Semantic Educational Web}, author={Michael Engelhardt, Andreas K{\'a}rp{\'a}ti, Torsten Rack, Ivette Schmidt, Thomas C. Schmidt}, journal={Proceedings of the International Workshop {"}Interactive Computer aided Learning{"} ICL 2003. Learning Objects and Reusability of Content, Kassel University Press 2003, ISBN 3-89958-029-X}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408004}, primaryClass={cs.IR cs.LG} }
engelhardt2004hypermedia
arxiv-672061
cs/0408005
Educational Content Management - A Cellular Approach
<|reference_start|>Educational Content Management - A Cellular Approach: In recent times online educational applications are more and more requested to provide self-consistent learning offers for students at the university level. Consequently they need to cope with the wide range of complexity and interrelations university course teaching brings along. An urgent need to overcome simplistically linked HTML content pages becomes apparent. In the present paper we discuss a schematic concept of educational content construction from information cells and introduce its implementation on the storage and runtime layer. Starting from cells, content is annotated according to didactic needs, structured for dynamic arrangement, dynamically decorated with hyperlinks and, as all works are based on XML, open to any presentation layer. Data can be variably accessed through URIs built on semantic path-names and edited via an adaptive authoring toolbox. Our content management approach is based on the more general Multimedia Information Repository MIR and allows for personalisation, as well. MIR is an open system supporting the standards XML, Corba and JNDI.<|reference_end|>
arxiv
@article{engelhardt2004educational, title={Educational Content Management - A Cellular Approach}, author={Michael Engelhardt, Arne Hildebrand, Andreas K{\'a}rp{\'a}ti, Torsten Rack, Thomas C. Schmidt}, journal={Proceedings of the International Workshop "Interactive Computer aided Learning" ICL 2002. Blended Learning. Kassel University Press 2002, ISBN 3-933146-83-6}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408005}, primaryClass={cs.CY cs.IR} }
engelhardt2004educational
arxiv-672062
cs/0408006
Why Two Sexes?
<|reference_start|>Why Two Sexes?: Evolutionary role of the separation into two sexes from a cyberneticist's point of view. [I translated this 1965 article from Russian "Nauka i Zhizn" (Science and Life) in 1988. In a popular form, the article puts forward several useful ideas not all of which even today are necessarily well known or widely accepted. Boris Lubachevsky, [email protected] ]<|reference_end|>
arxiv
@article{geodakian2004why, title={Why Two Sexes?}, author={Vigen A. Geodakian}, journal={Nauka i zhizn (Science and Life), 1965}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408006}, primaryClass={cs.NE cs.GL q-bio.PE} }
geodakian2004why
arxiv-672063
cs/0408007
Online convex optimization in the bandit setting: gradient descent without a gradient
<|reference_start|>Online convex optimization in the bandit setting: gradient descent without a gradient: We consider the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a single point (from some feasible set) and pay a cost equal to the value of the next function on our chosen point. Zinkevich shows that, if each function is revealed after the choice is made, then one can achieve vanishingly small regret relative to the best single decision chosen in hindsight. We extend this to the bandit setting, where we do not find out the entire functions but rather just their value at our chosen point. We show how to get vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed from evaluating a function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's online gradient descent analysis without access to the gradient (only being able to evaluate the function at a single point).<|reference_end|>
arxiv
@article{flaxman2004online, title={Online convex optimization in the bandit setting: gradient descent without a gradient}, author={Abraham D. Flaxman, Adam Tauman Kalai, and H. Brendan McMahan}, journal={arXiv preprint arXiv:cs/0408007}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408007}, primaryClass={cs.LG cs.CC} }
flaxman2004online
arxiv-672064
cs/0408008
Iterative Quantization Using Codes On Graphs
<|reference_start|>Iterative Quantization Using Codes On Graphs: We study codes on graphs combined with an iterative message passing algorithm for quantization. Specifically, we consider the binary erasure quantization (BEQ) problem which is the dual of the binary erasure channel (BEC) coding problem. We show that duals of capacity achieving codes for the BEC yield codes which approach the minimum possible rate for the BEQ. In contrast, low density parity check codes cannot achieve the minimum rate unless their density grows at least logarithmically with block length. Furthermore, we show that duals of efficient iterative decoding algorithms for the BEC yield efficient encoding algorithms for the BEQ. Hence our results suggest that graphical models may yield near optimal codes in source coding as well as in channel coding and that duality plays a key role in such constructions.<|reference_end|>
arxiv
@article{martinian2004iterative, title={Iterative Quantization Using Codes On Graphs}, author={Emin Martinian and Jonathan S. Yedidia}, journal={Proceedings of the 41st Annual Allerton Conference on Communication, Control, and Computing; Monticello, IL; 2004}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408008}, primaryClass={cs.IT math.IT} }
martinian2004iterative
arxiv-672065
cs/0408009
Performance Analysis of Multicast Mobility in a Hierarchical Mobile IP Proxy Environment
<|reference_start|>Performance Analysis of Multicast Mobility in a Hierarchical Mobile IP Proxy Environment: Mobility support in IPv6 networks is ready for release as an RFC, stimulating major discussions on improvements to meet real-time communication requirements. Sprawling hot spots of IP-only wireless networks at the same time await voice and videoconferencing as standard mobile Internet services, thereby adding the request for multicast support to real-time mobility. This paper briefly introduces current approaches for seamless multicast extensions to Mobile IPv6. Key issues of multicast mobility are discussed. Both analytically and in simulations comparisons are drawn between handover performance characteristics, dedicating special focus on the M-HMIPv6 approach.<|reference_end|>
arxiv
@article{schmidt2004performance, title={Performance Analysis of Multicast Mobility in a Hierarchical Mobile IP Proxy Environment}, author={Thomas C. Schmidt, Matthias W{\"a}hlisch}, journal={Selected Papers from TERENA Networking Conference Rhodes, 2004, http://www.terena.nl/library/tnc2004-proceedings/papers/schmidt.pdf}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408009}, primaryClass={cs.NI cs.PF} }
schmidt2004performance
arxiv-672066
cs/0408010
A Simple Proportional Conflict Redistribution Rule
<|reference_start|>A Simple Proportional Conflict Redistribution Rule: We propose a first alternative rule of combination to WAO (Weighted Average Operator), proposed recently by Josang, Daniel and Vannoorenberghe, called the Proportional Conflict Redistribution rule (denoted PCR1). PCR1 and WAO are particular cases of WO (the Weighted Operator) because the conflicting mass is redistributed with respect to some weighting factors. In this first PCR rule, the proportionalization is done for each non-empty set with respect to the non-zero sum of its corresponding mass matrix - instead of its mass column average as in WAO - but the results are the same, as Ph. Smets has pointed out. Also, we extend WAO (which herein gives no solution) for the degenerate case when all column sums of all non-empty sets are zero, and then the conflicting mass is transferred to the non-empty disjunctive form of all non-empty sets together; but if this disjunctive form happens to be empty, then one considers an open world (i.e. the frame of discernment might contain new hypotheses) and thus all conflicting mass is transferred to the empty set. In addition to WAO, we propose a general formula for PCR1 (WAO for non-degenerate cases).<|reference_end|>
arxiv
@article{smarandache2004a, title={A Simple Proportional Conflict Redistribution Rule}, author={Florentin Smarandache, Jean Dezert}, journal={International Journal of Applied Mathematics and Statistics, Vol. 3, No. J05, 1-36, 2005.}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408010}, primaryClass={cs.AI} }
smarandache2004a
arxiv-672067
cs/0408011
The asymptotic number of binary codes and binary matroids
<|reference_start|>The asymptotic number of binary codes and binary matroids: The asymptotic number of nonequivalent binary n-codes is determined. This is also the asymptotic number of nonisomorphic binary n-matroids. The connection to a result of Lefmann, Roedl, and Phelps is explored. The latter states that almost all binary n-codes have a trivial automorphism group.<|reference_end|>
arxiv
@article{wild2004the, title={The asymptotic number of binary codes and binary matroids}, author={Marcel Wild}, journal={SIAM Journal of Discrete Mathematics 19 (2005) 691-699}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408011}, primaryClass={cs.IT cs.DM math.IT} }
wild2004the
arxiv-672068
cs/0408012
Three-Dimensional Face Orientation and Gaze Detection from a Single Image
<|reference_start|>Three-Dimensional Face Orientation and Gaze Detection from a Single Image: Gaze detection and head orientation are an important part of many advanced human-machine interaction applications. Many systems have been proposed for gaze detection. Typically, they require some form of user cooperation and calibration. Additionally, they may require multiple cameras and/or restricted head positions. We present a new approach for inference of both face orientation and gaze direction from a single image with no restrictions on the head position. Our algorithm is based on a face and eye model, deduced from anthropometric data. This approach allows us to use a single camera and requires no cooperation from the user. Using a single image avoids the complexities associated with a multi-camera system. Evaluation tests show that our system is accurate, fast and can be used in a variety of applications, including ones where the user is unaware of the system.<|reference_end|>
arxiv
@article{kaminski2004three-dimensional, title={Three-Dimensional Face Orientation and Gaze Detection from a Single Image}, author={J.Y. Kaminski, M. Teicher, D. Knaan and A. Shavit}, journal={arXiv preprint arXiv:cs/0408012}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408012}, primaryClass={cs.CV cs.HC} }
kaminski2004three-dimensional
arxiv-672069
cs/0408013
Roles Are Really Great!
<|reference_start|>Roles Are Really Great!: We present a new role system for specifying changing referencing relationships of heap objects. The role of an object depends, in large part, on its aliasing relationships with other objects, with the role of each object changing as its aliasing relationships change. Roles therefore capture important object and data structure properties and provide useful information about how the actions of the program interact with these properties. Our role system enables the programmer to specify the legal aliasing relationships that define the set of roles that objects may play, the roles of procedure parameters and object fields, and the role changes that procedures perform while manipulating objects. We present an interprocedural, compositional, and context-sensitive role analysis algorithm that verifies that a program respects the role constraints.<|reference_end|>
arxiv
@article{kuncak2004roles, title={Roles Are Really Great!}, author={Viktor Kuncak, Patrick Lam, Martin Rinard}, journal={arXiv preprint arXiv:cs/0408013}, year={2004}, number={MIT CSAIL 822}, archivePrefix={arXiv}, eprint={cs/0408013}, primaryClass={cs.PL cs.SE} }
kuncak2004roles
arxiv-672070
cs/0408014
Typestate Checking and Regular Graph Constraints
<|reference_start|>Typestate Checking and Regular Graph Constraints: We introduce regular graph constraints and explore their decidability properties. The motivation for regular graph constraints is 1) type checking of changing types of objects in the presence of linked data structures, 2) shape analysis techniques, and 3) generalization of similar constraints over trees and grids. We define a subclass of graphs called heaps as an abstraction of the data structures that a program constructs during its execution. We prove that determining the validity of implication for regular graph constraints over the class of heaps is undecidable. We show undecidability by exhibiting a characterization of certain "corresponder graphs" in terms of presence and absence of homomorphisms to a finite number of fixed graphs. The undecidability of implication of regular graph constraints implies that there is no algorithm that will verify that procedure preconditions are met or that the invariants are maintained when these properties are expressed in any specification language at least as expressive as regular graph constraints.<|reference_end|>
arxiv
@article{kuncak2004typestate, title={Typestate Checking and Regular Graph Constraints}, author={Viktor Kuncak, Martin Rinard}, journal={arXiv preprint arXiv:cs/0408014}, year={2004}, number={MIT CSAIL 863}, archivePrefix={arXiv}, eprint={cs/0408014}, primaryClass={cs.PL cs.LO} }
kuncak2004typestate
arxiv-672071
cs/0408015
On the Theory of Structural Subtyping
<|reference_start|>On the Theory of Structural Subtyping: We show that the first-order theory of structural subtyping of non-recursive types is decidable. Let $\Sigma$ be a language consisting of function symbols (representing type constructors) and $C$ a decidable structure in the relational language $L$ containing a binary relation $\leq$. $C$ represents primitive types; $\leq$ represents a subtype ordering. We introduce the notion of $\Sigma$-term-power of $C$, which generalizes the structure arising in structural subtyping. The domain of the $\Sigma$-term-power of $C$ is the set of $\Sigma$-terms over the set of elements of $C$. We show that the decidability of the first-order theory of $C$ implies the decidability of the first-order theory of the $\Sigma$-term-power of $C$. Our decision procedure makes use of quantifier elimination for term algebras and Feferman-Vaught theorem. Our result implies the decidability of the first-order theory of structural subtyping of non-recursive types.<|reference_end|>
arxiv
@article{kuncak2004on, title={On the Theory of Structural Subtyping}, author={Viktor Kuncak, Martin Rinard}, journal={arXiv preprint arXiv:cs/0408015}, year={2004}, number={MIT CSAIL 879}, archivePrefix={arXiv}, eprint={cs/0408015}, primaryClass={cs.LO cs.PL cs.SE} }
kuncak2004on
arxiv-672072
cs/0408016
Lock-Free and Practical Deques using Single-Word Compare-And-Swap
<|reference_start|>Lock-Free and Practical Deques using Single-Word Compare-And-Swap: We present an efficient and practical lock-free implementation of a concurrent deque that is disjoint-parallel accessible and uses atomic primitives which are available in modern computer systems. Previously known lock-free algorithms for deques are either based on unavailable atomic synchronization primitives, only implement a subset of the functionality, or are not designed for disjoint accesses. Our algorithm is based on a doubly linked list, and only requires single-word compare-and-swap atomic primitives, even for dynamic memory sizes. We have performed an empirical study using full implementations of the most efficient known lock-free deque algorithms. For systems with low concurrency, the algorithm by Michael shows the best performance. However, as our algorithm is designed for disjoint accesses, it performs significantly better on systems with high concurrency and non-uniform memory architecture.<|reference_end|>
arxiv
@article{sundell2004lock-free, title={Lock-Free and Practical Deques using Single-Word Compare-And-Swap}, author={H{\aa}kan Sundell and Philippas Tsigas}, journal={arXiv preprint arXiv:cs/0408016}, year={2004}, number={2004-02}, archivePrefix={arXiv}, eprint={cs/0408016}, primaryClass={cs.DC cs.DS} }
sundell2004lock-free
arxiv-672073
cs/0408017
Improved Upper Bound for the Redundancy of Fix-Free Codes
<|reference_start|>Improved Upper Bound for the Redundancy of Fix-Free Codes: A variable-length code is a fix-free code if no codeword is a prefix or a suffix of any other codeword. In a fix-free code any finite sequence of codewords can be decoded in both directions, which can improve the robustness to channel noise and speed up the decoding process. In this paper we prove a new sufficient condition of the existence of fix-free codes and improve the upper bound on the redundancy of optimal fix-free codes.<|reference_end|>
arxiv
@article{yekhanin2004improved, title={Improved Upper Bound for the Redundancy of Fix-Free Codes}, author={Sergey Yekhanin}, journal={arXiv preprint arXiv:cs/0408017}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408017}, primaryClass={cs.IT math.IT} }
yekhanin2004improved
arxiv-672074
cs/0408018
On Role Logic
<|reference_start|>On Role Logic: We present role logic, a notation for describing properties of relational structures in shape analysis, databases, and knowledge bases. We construct role logic using the ideas of de Bruijn's notation for lambda calculus, an encoding of first-order logic in lambda calculus, and a simple rule for implicit arguments of unary and binary predicates. The unrestricted version of role logic has the expressive power of first-order logic with transitive closure. Using a syntactic restriction on role logic formulas, we identify a natural fragment RL^2 of role logic. We show that the RL^2 fragment has the same expressive power as two-variable logic with counting C^2 and is therefore decidable. We present a translation of an imperative language into the decidable fragment RL^2, which allows compositional verification of programs that manipulate relational structures. In addition, we show how RL^2 encodes boolean shape analysis constraints and an expressive description logic.<|reference_end|>
arxiv
@article{kuncak2004on, title={On Role Logic}, author={Viktor Kuncak and Martin Rinard}, journal={arXiv preprint arXiv:cs/0408018}, year={2004}, number={MIT CSAIL 925}, archivePrefix={arXiv}, eprint={cs/0408018}, primaryClass={cs.PL cs.LO} }
kuncak2004on
arxiv-672075
cs/0408019
On Generalized Records and Spatial Conjunction in Role Logic
<|reference_start|>On Generalized Records and Spatial Conjunction in Role Logic: We have previously introduced role logic as a notation for describing properties of relational structures in shape analysis, databases and knowledge bases. A natural fragment of role logic corresponds to two-variable logic with counting and is therefore decidable. We show how to use role logic to describe open and closed records, as well as the dual of records, inverse records. We observe that the spatial conjunction operation of separation logic naturally models record concatenation. Moreover, we show how to eliminate the spatial conjunction of formulas of quantifier depth one in first-order logic with counting. As a result, allowing spatial conjunction of formulas of quantifier depth one preserves the decidability of two-variable logic with counting. This result applies to the two-variable role logic fragment as well. The resulting logic smoothly integrates type system and predicate calculus notation and can be viewed as a natural generalization of the notation for constraints arising in role analysis and similar shape analysis approaches.<|reference_end|>
arxiv
@article{kuncak2004on, title={On Generalized Records and Spatial Conjunction in Role Logic}, author={Viktor Kuncak and Martin Rinard}, journal={arXiv preprint arXiv:cs/0408019}, year={2004}, number={MIT CSAIL 942}, archivePrefix={arXiv}, eprint={cs/0408019}, primaryClass={cs.PL cs.LO} }
kuncak2004on
arxiv-672076
cs/0408020
Collaborative Storage Management In Sensor Networks
<|reference_start|>Collaborative Storage Management In Sensor Networks: In this paper, we consider a class of sensor networks where the data is not required in real-time by an observer; for example, a sensor network monitoring a scientific phenomenon for later playback and analysis. In such networks, the data must be stored in the network. Thus, in addition to battery power, storage is a primary resource: the useful lifetime of the network is constrained by its ability to store the generated data samples. We explore the use of a collaborative storage technique to efficiently manage data in storage constrained sensor networks. The proposed collaborative storage technique takes advantage of spatial correlation among the data collected by nearby sensors to significantly reduce the size of the data near the data sources. We show that the proposed approach provides significant savings in the size of the stored data vs. local buffering, allowing the network to run for a longer time without running out of storage space and reducing the amount of data that will eventually be relayed to the observer. In addition, collaborative storage performs load balancing of the available storage space if data generation rates are not uniform across sensors (as would be the case in an event driven sensor network), or if the available storage varies across the network.<|reference_end|>
arxiv
@article{tilak2004collaborative, title={Collaborative Storage Management In Sensor Networks}, author={Sameer Tilak and Nael Abu-Ghazaleh and Wendi Heinzelman}, journal={arXiv preprint arXiv:cs/0408020}, year={2004}, number={CS-TR-04-NA01}, archivePrefix={arXiv}, eprint={cs/0408020}, primaryClass={cs.NI cs.AR} }
tilak2004collaborative
arxiv-672077
cs/0408021
An Algorithm for Quasi-Associative and Quasi-Markovian Rules of Combination in Information Fusion
<|reference_start|>An Algorithm for Quasi-Associative and Quasi-Markovian Rules of Combination in Information Fusion: In this paper we propose a simple algorithm for combining fusion rules, those rules which first use the conjunctive rule and then the transfer of conflicting mass to the non-empty sets, in such a way that they gain the property of associativity and fulfill the Markovian requirement for dynamic fusion. Also, a new rule, SDL-improved, is presented.<|reference_end|>
arxiv
@article{smarandache2004an, title={An Algorithm for Quasi-Associative and Quasi-Markovian Rules of Combination in Information Fusion}, author={Florentin Smarandache and Jean Dezert}, journal={International Journal of Applied Mathematics \& Statistics, Vol. 22, No. S11 (Special Issue on Soft Computing), 33-42, 2011}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408021}, primaryClass={cs.AI} }
smarandache2004an
arxiv-672078
cs/0408022
Diagnosabilities of regular networks
<|reference_start|>Diagnosabilities of regular networks: In this paper, we study diagnosabilities of multiprocessor systems under two diagnosis models: the PMC model and the comparison model. In each model, we further consider two different diagnosis strategies: the precise diagnosis strategy proposed by Preparata et al. and the pessimistic diagnosis strategy proposed by Friedman. The main result of this paper is to determine diagnosabilities of regular networks with certain conditions, which include several widely used multiprocessor systems such as variants of hypercubes and many others.<|reference_end|>
arxiv
@article{chang2004diagnosabilities, title={Diagnosabilities of regular networks}, author={Guey-Yun Chang and Gerard J. Chang and Gen-Huey Chen}, journal={arXiv preprint arXiv:cs/0408022}, year={2004}, number={NCTS/TPE-Math Technical Report 2004-013}, archivePrefix={arXiv}, eprint={cs/0408022}, primaryClass={cs.NI} }
chang2004diagnosabilities
arxiv-672079
cs/0408023
On Global Warming (Softening Global Constraints)
<|reference_start|>On Global Warming (Softening Global Constraints): We describe soft versions of the global cardinality constraint and the regular constraint, with efficient filtering algorithms maintaining domain consistency. For both constraints, the softening is achieved by augmenting the underlying graph. The softened constraints can be used to extend the meta-constraint framework for over-constrained problems proposed by Petit, Regin and Bessiere.<|reference_end|>
arxiv
@article{vanhoeve2004on, title={On Global Warming (Softening Global Constraints)}, author={Willem Jan van Hoeve and Gilles Pesant and Louis-Martin Rousseau}, journal={arXiv preprint arXiv:cs/0408023}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408023}, primaryClass={cs.AI cs.PL} }
vanhoeve2004on
arxiv-672080
cs/0408024
Non-uniform Information Dissemination for Sensor Networks
<|reference_start|>Non-uniform Information Dissemination for Sensor Networks: Future smart environments will be characterized by multiple nodes that sense, collect, and disseminate information about environmental phenomena through a wireless network. In this paper, we define a set of applications that require a new form of distributed knowledge about the environment, referred to as non-uniform information granularity. By non-uniform information granularity we mean that the required accuracy or precision of information is proportional to the distance between a source node (information producer) and current sink node (information consumer). That is, as the distance between the source node and sink node increases, loss in information precision is acceptable. Applications that can benefit from this type of knowledge range from battlefield scenarios to rescue operations. The main objectives of this paper are two-fold: first, we will precisely define non-uniform information granularity, and second, we will describe different protocols that achieve non-uniform information dissemination and analyze these protocols based on complexity, energy consumption, and accuracy of information.<|reference_end|>
arxiv
@article{tilak2004non-uniform, title={Non-uniform Information Dissemination for Sensor Networks}, author={Sameer Tilak and Amy Murphy and Wendi Heinzelman and Nael B. Abu-Ghazaleh}, journal={arXiv preprint arXiv:cs/0408024}, year={2004}, number={CS-TR-04-NA03}, archivePrefix={arXiv}, eprint={cs/0408024}, primaryClass={cs.NI} }
tilak2004non-uniform
arxiv-672081
cs/0408025
Optimizing compilation of constraint handling rules in HAL
<|reference_start|>Optimizing compilation of constraint handling rules in HAL: In this paper we discuss the optimizing compilation of Constraint Handling Rules (CHRs). CHRs are a multi-headed committed choice constraint language, commonly applied for writing incremental constraint solvers. CHRs are usually implemented as a language extension that compiles to the underlying language. In this paper we show how we can use different kinds of information in the compilation of CHRs in order to obtain access efficiency, and a better translation of the CHR rules into the underlying language, which in this case is HAL. The kinds of information used include the types, modes, determinism, functional dependencies and symmetries of the CHR constraints. We also show how to analyze CHR programs to determine this information about functional dependencies, symmetries and other kinds of information supporting optimizations.<|reference_end|>
arxiv
@article{holzbaur2004optimizing, title={Optimizing compilation of constraint handling rules in HAL}, author={Christian Holzbaur and Maria Garcia de la Banda and Peter J. Stuckey and Gregory J. Duck}, journal={Theory and Practice of Logic Programming: 5(4-5):503-532, 2005}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408025}, primaryClass={cs.PL} }
holzbaur2004optimizing
arxiv-672082
cs/0408026
Incremental Construction of Minimal Acyclic Sequential Transducers from Unsorted Data
<|reference_start|>Incremental Construction of Minimal Acyclic Sequential Transducers from Unsorted Data: This paper presents an efficient algorithm for the incremental construction of a minimal acyclic sequential transducer (ST) for a dictionary consisting of a list of input and output strings. The algorithm generalises a known method of constructing minimal finite-state automata (Daciuk et al. 2000). Unlike the algorithm published by Mihov and Maurel (2001), it does not require the input strings to be sorted. The new method is illustrated by an application to pronunciation dictionaries.<|reference_end|>
arxiv
@article{skut2004incremental, title={Incremental Construction of Minimal Acyclic Sequential Transducers from Unsorted Data}, author={Wojciech Skut}, journal={arXiv preprint arXiv:cs/0408026}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408026}, primaryClass={cs.CL cs.DS} }
skut2004incremental
arxiv-672083
cs/0408027
CHR Grammars
<|reference_start|>CHR Grammars: A grammar formalism based upon CHR is proposed analogously to the way Definite Clause Grammars are defined and implemented on top of Prolog. These grammars execute as robust bottom-up parsers with an inherent treatment of ambiguity and a high flexibility to model various linguistic phenomena. The formalism extends previous logic programming based grammars with a form of context-sensitive rules and the possibility to include extra-grammatical hypotheses in both head and body of grammar rules. Among the applications are straightforward implementations of Assumption Grammars and abduction under integrity constraints for language analysis. CHR grammars appear as a powerful tool for specification and implementation of language processors and may be proposed as a new standard for bottom-up grammars in logic programming. To appear in Theory and Practice of Logic Programming (TPLP), 2005<|reference_end|>
arxiv
@article{christiansen2004chr, title={CHR Grammars}, author={Henning Christiansen}, journal={arXiv preprint arXiv:cs/0408027}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408027}, primaryClass={cs.CL cs.PL} }
christiansen2004chr
arxiv-672084
cs/0408028
Calculus on Graphs
<|reference_start|>Calculus on Graphs: The purpose of this paper is to develop a "calculus" on graphs that allows graph theory to have new connections to analysis. For example, our framework gives rise to many new partial differential equations on graphs, most notably a new (Laplacian based) wave equation; this wave equation gives rise to a partial improvement on the Chung-Faber-Manteuffel diameter/eigenvalue bound in graph theory, and the Chung-Grigoryan-Yau and (in a certain case) Bobkov-Ledoux distance/eigenvalue bounds in analysis. Our framework also allows most techniques for the non-linear p-Laplacian in analysis to be easily carried over to graph theory.<|reference_end|>
arxiv
@article{friedman2004calculus, title={Calculus on Graphs}, author={Joel Friedman and Jean-Pierre Tillich}, journal={arXiv preprint arXiv:cs/0408028}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408028}, primaryClass={cs.DM math.CO} }
friedman2004calculus
arxiv-672085
cs/0408029
Tsnnls: A solver for large sparse least squares problems with non-negative variables
<|reference_start|>Tsnnls: A solver for large sparse least squares problems with non-negative variables: The solution of large, sparse constrained least-squares problems is a staple in scientific and engineering applications. However, currently available codes for such problems are proprietary or based on MATLAB. We announce a freely available C implementation of the fast block pivoting algorithm of Portugal, Judice, and Vicente. Our version is several times faster than Matstoms' MATLAB implementation of the same algorithm. Further, our code matches the accuracy of MATLAB's built-in lsqnonneg function.<|reference_end|>
arxiv
@article{cantarella2004tsnnls:, title={Tsnnls: A solver for large sparse least squares problems with non-negative variables}, author={Jason Cantarella and Michael Piatek}, journal={arXiv preprint arXiv:cs/0408029}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408029}, primaryClass={cs.MS} }
cantarella2004tsnnls:
arxiv-672086
cs/0408030
The Revolution In Database System Architecture
<|reference_start|>The Revolution In Database System Architecture: Database system architectures are undergoing revolutionary changes. Algorithms and data are being unified by integrating programming languages with the database system. This gives an extensible object-relational system where non-procedural relational operators manipulate object sets. Coupled with this, each DBMS is now a web service. This has huge implications for how we structure applications. DBMSs are now object containers. Queues are the first objects to be added. These queues are the basis for transaction processing and workflow applications. Future workflow systems are likely to be built on this core. Data cubes and online analytic processing are now baked into most DBMSs. Beyond that, DBMSs have a framework for data mining and machine learning algorithms. Decision trees, Bayes nets, clustering, and time series analysis are built in; new algorithms can be added. Text, temporal, and spatial data access methods, along with their probabilistic reasoning have been added to database systems. Allowing approximate and probabilistic answers is essential for many applications. Many believe that XML and xQuery will be the main data structure and access pattern. Database systems must accommodate that perspective. These changes mandate a much more dynamic query optimization strategy. Intelligence is moving to the periphery of the network. Each disk and each sensor will be a competent database machine. Relational algebra is a convenient way to program these systems. Database systems are now expected to be self-managing, self-healing, and always-up.<|reference_end|>
arxiv
@article{gray2004the, title={The Revolution In Database System Architecture}, author={Jim Gray}, journal={Proc ACM SIGMOD 2004, Paris, pp 1-4}, year={2004}, number={MSR-TR-2004-31}, archivePrefix={arXiv}, eprint={cs/0408030}, primaryClass={cs.DB} }
gray2004the
arxiv-672087
cs/0408031
There Goes the Neighborhood: Relational Algebra for Spatial Data Search
<|reference_start|>There Goes the Neighborhood: Relational Algebra for Spatial Data Search: We explored ways of doing spatial search within a relational database: (1) hierarchical triangular mesh (a tessellation of the sphere), (2) a zoned bucketing system, and (3) representing areas as disjunctive-normal form constraints. Each of these approaches has merits. They all allow efficient point-in-region queries. A relational representation for regions allows Boolean operations among them and allows quick tests for point-in-region, regions-containing-point, and region-overlap. The speed of these algorithms is much improved by a zone and multi-scale zone-pyramid scheme. The approach has the virtue that the zone mechanism works well on B-Trees native to all SQL systems and integrates naturally with current query optimizers - rather than requiring a new spatial access method and concomitant query optimizer extensions. Over the last 5 years, we have used these techniques extensively in our work on SkyServer.sdss.org, and SkyQuery.net.<|reference_end|>
arxiv
@article{gray2004there, title={There Goes the Neighborhood: Relational Algebra for Spatial Data Search}, author={Jim Gray and Alexander S. Szalay and Aniruddha R. Thakar and Gyorgy Fekete and William O'Mullane and Maria A. Nieto-Santisteban and Gerd Heber and Arnold H. Rots}, journal={arXiv preprint arXiv:cs/0408031}, year={2004}, number={MSR-TR-2004-32}, archivePrefix={arXiv}, eprint={cs/0408031}, primaryClass={cs.DB} }
gray2004there
arxiv-672088
cs/0408032
Performance Characterisation of Intra-Cluster Collective Communications
<|reference_start|>Performance Characterisation of Intra-Cluster Collective Communications: Although recent works try to improve collective communication in grid systems by separating intra and inter-cluster communication, the optimisation of communications focuses only on inter-cluster communications. We believe, instead, that the overall performance of the application may be improved if intra-cluster collective communications performance is known in advance. Hence, it is important to have an accurate model of the intra-cluster collective communications, which provides the necessary evidence to tune and to predict their performance correctly. In this paper we present our experience on modelling such communication strategies. We describe and compare different implementation strategies with their communication models, evaluating the models' accuracy and describing the practical challenges that can be found when modelling collective communications.<|reference_end|>
arxiv
@article{barchet-estefanel2004performance, title={Performance Characterisation of Intra-Cluster Collective Communications}, author={Luiz Angelo Barchet-Estefanel (ID - IMAG) and Gregory Mounie (ID - IMAG)}, journal={Proceedings of the SBAC-PAD 2004 16th Symposium on Computer Architecture and High Performance Computing (2004) 254-261}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408032}, primaryClass={cs.DC} }
barchet-estefanel2004performance
arxiv-672089
cs/0408033
Identifying Logical Homogeneous Clusters for Efficient Wide-area Communications
<|reference_start|>Identifying Logical Homogeneous Clusters for Efficient Wide-area Communications: Recently, many works focus on the implementation of collective communication operations adapted to wide area computational systems, like computational Grids or global-computing. Due to the inherent heterogeneity of such environments, most works separate "clusters" in different hierarchy levels to better model the communication. However, in our opinion, such works do not give enough attention to the delimitation of such clusters, as they normally use the locality or the IP subnet from the machines to delimit a cluster without verifying the "homogeneity" of such clusters. In this paper, we describe a strategy to gather network information from different local-area networks and to construct "logical homogeneous clusters", better suited to the performance modelling.<|reference_end|>
arxiv
@article{barchet-estefanel2004identifying, title={Identifying Logical Homogeneous Clusters for Efficient Wide-area Communications}, author={Luiz Angelo Barchet-Estefanel (ID - Imag, Apache Ur-Ra Id Imag) and Gregory Mounie (ID - Imag, Apache Ur-Ra Id Imag)}, journal={Lecture Notes in Computer Sciences Proceedings of the EuroPVM/MPI 2004 11th European PVM/MPI Users' Group Meeting (2004) 319-326}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408033}, primaryClass={cs.DC} }
barchet-estefanel2004identifying
arxiv-672090
cs/0408034
Fast Tuning of Intra-Cluster Collective Communications
<|reference_start|>Fast Tuning of Intra-Cluster Collective Communications: Recent works try to optimise collective communication in grid systems focusing mostly on the optimisation of communications among different clusters. We believe that intra-cluster collective communications should also be optimised, as a way to improve the overall efficiency and to allow the construction of multi-level collective operations. Indeed, inside homogeneous clusters, a simple optimisation approach relies on the comparison of different implementation strategies, through their communication models. In this paper we evaluate this approach, comparing different implementation strategies with their predicted performances. As a result, we are able to choose the communication strategy that better adapts to each network environment.<|reference_end|>
arxiv
@article{barchet-estefanel2004fast, title={Fast Tuning of Intra-Cluster Collective Communications}, author={Luiz Angelo Barchet-Estefanel (ID - Imag, Apache Ur-Ra Id Imag) and Gregory Mounie (ID - Imag, Apache Ur-Ra Id Imag)}, journal={Lecture Notes in Computer Sciences - Proceedings of the EuroPVM/MPI 2004 11th European PVM/MPI Users' Group Meeting (2004) 28-35}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408034}, primaryClass={cs.DC} }
barchet-estefanel2004fast
arxiv-672091
cs/0408035
Monitoring, Analyzing, and Controlling Internet-scale Systems with ACME
<|reference_start|>Monitoring, Analyzing, and Controlling Internet-scale Systems with ACME: Analyzing and controlling large distributed services under a wide range of conditions is difficult. Yet these capabilities are essential to a number of important development and operational tasks such as benchmarking, testing, and system management. To facilitate these tasks, we have built the Application Control and Monitoring Environment (ACME), a scalable, flexible infrastructure for monitoring, analyzing, and controlling Internet-scale systems. ACME consists of two parts. ISING, the Internet Sensor In-Network agGregator, queries sensors and aggregates the results as they are routed through an overlay network. ENTRIE, the ENgine for TRiggering Internet Events, uses the data streams supplied by ISING, in combination with a user's XML configuration file, to trigger actuators such as killing processes during a robustness benchmark or paging a system administrator when predefined anomalous conditions are observed. In this paper we describe the design, implementation, and evaluation of ACME and its constituent parts. We find that for a 512-node system running atop an emulated Internet topology, ISING's use of in-network aggregation can reduce end-to-end query-response latency by more than 50% compared to using either direct network connections or the same overlay network without aggregation. We also find that an untuned implementation of ACME can invoke an actuator on one or all nodes in response to a discrete or aggregate event in less than four seconds, and we illustrate ACME's applicability to concrete benchmarking and monitoring scenarios.<|reference_end|>
arxiv
@article{oppenheimer2004monitoring, title={Monitoring, Analyzing, and Controlling Internet-scale Systems with ACME}, author={David Oppenheimer and Vitaliy Vatkovskiy and Hakim Weatherspoon and Jason Lee and David A. Patterson and John Kubiatowicz}, journal={arXiv preprint arXiv:cs/0408035}, year={2004}, number={UCB//CSD-03-1276}, archivePrefix={arXiv}, eprint={cs/0408035}, primaryClass={cs.DC cs.NI} }
oppenheimer2004monitoring
arxiv-672092
cs/0408036
Consensus on Transaction Commit
<|reference_start|>Consensus on Transaction Commit: The distributed transaction commit problem requires reaching agreement on whether a transaction is committed or aborted. The classic Two-Phase Commit protocol blocks if the coordinator fails. Fault-tolerant consensus algorithms also reach agreement, but do not block whenever any majority of the processes are working. Running a Paxos consensus algorithm on the commit/abort decision of each participant yields a transaction commit protocol that uses 2F+1 coordinators and makes progress if at least F+1 of them are working. In the fault-free case, this algorithm requires one extra message delay but has the same stable-storage write delay as Two-Phase Commit. The classic Two-Phase Commit algorithm is obtained as the special F = 0 case of the general Paxos Commit algorithm.<|reference_end|>
arxiv
@article{gray2004consensus, title={Consensus on Transaction Commit}, author={Jim Gray and Leslie Lamport}, journal={arXiv preprint arXiv:cs/0408036}, year={2004}, number={MSR-TR-2003-96}, archivePrefix={arXiv}, eprint={cs/0408036}, primaryClass={cs.DC cs.DB} }
gray2004consensus
arxiv-672093
cs/0408037
Multi-dimensional Type Theory: Rules, Categories, and Combinators for Syntax and Semantics
<|reference_start|>Multi-dimensional Type Theory: Rules, Categories, and Combinators for Syntax and Semantics: We investigate the possibility of modelling the syntax and semantics of natural language by constraints, or rules, imposed by the multi-dimensional type theory Nabla. The only multiplicity we explicitly consider is two, namely one dimension for the syntax and one dimension for the semantics, but the general perspective is important. For example, issues of pragmatics could be handled as additional dimensions. One of the main problems addressed is the rather complicated repertoire of operations that exists besides the notion of categories in traditional Montague grammar. For the syntax we use a categorial grammar along the lines of Lambek. For the semantics we use so-called lexical and logical combinators inspired by work in natural logic. Nabla provides a concise interpretation and a sequent calculus as the basis for implementations.<|reference_end|>
arxiv
@article{villadsen2004multi-dimensional, title={Multi-dimensional Type Theory: Rules, Categories, and Combinators for Syntax and Semantics}, author={J{\o}rgen Villadsen}, journal={arXiv preprint arXiv:cs/0408037}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408037}, primaryClass={cs.CL cs.AI cs.LO} }
villadsen2004multi-dimensional
arxiv-672094
cs/0408038
The Dynamics of Group Codes: Dual Abelian Group Codes and Systems
<|reference_start|>The Dynamics of Group Codes: Dual Abelian Group Codes and Systems: Fundamental results concerning the dynamics of abelian group codes (behaviors) and their duals are developed. Duals of sequence spaces over locally compact abelian groups may be defined via Pontryagin duality; dual group codes are orthogonal subgroups of dual sequence spaces. The dual of a complete code or system is finite, and the dual of a Laurent code or system is (anti-)Laurent. If C and C^\perp are dual codes, then the state spaces of C act as the character groups of the state spaces of C^\perp. The controllability properties of C are the observability properties of C^\perp. In particular, C is (strongly) controllable if and only if C^\perp is (strongly) observable, and the controller memory of C is the observer memory of C^\perp. The controller granules of C act as the character groups of the observer granules of C^\perp. Examples of minimal observer-form encoder and syndrome-former constructions are given. Finally, every observer granule of C is an "end-around" controller granule of C.<|reference_end|>
arxiv
@article{forney2004the, title={The Dynamics of Group Codes: Dual Abelian Group Codes and Systems}, author={G. David Forney Jr. and Mitchell D. Trott}, journal={IEEE Trans. Inform. Theory, vol. 50, pp. 2935-2965, Dec. 2004.}, year={2004}, doi={10.1109/TIT.2004.838340}, archivePrefix={arXiv}, eprint={cs/0408038}, primaryClass={cs.IT math.IT} }
forney2004the
arxiv-672095
cs/0408039
Medians and Beyond: New Aggregation Techniques for Sensor Networks
<|reference_start|>Medians and Beyond: New Aggregation Techniques for Sensor Networks: Wireless sensor networks offer the potential to span and monitor large geographical areas inexpensively. Sensors, however, have a significant power constraint (battery life), making communication very expensive. Another important issue in the context of sensor-based information systems is that individual sensor readings are inherently unreliable. In order to address these two aspects, sensor database systems like TinyDB and Cougar enable in-network data aggregation to reduce the communication cost and improve reliability. The existing data aggregation techniques, however, are limited to relatively simple types of queries such as SUM, COUNT, AVG, and MIN/MAX. In this paper we propose a data aggregation scheme that significantly extends the class of queries that can be answered using sensor networks. These queries include (approximate) quantiles, such as the median, the most frequent data values, such as the consensus value, a histogram of the data distribution, as well as range queries. In our scheme, each sensor aggregates the data it has received from other sensors into a fixed (user specified) size message. We provide strict theoretical guarantees on the approximation quality of the queries in terms of the message size. We evaluate the performance of our aggregation scheme by simulation and demonstrate its accuracy, scalability and low resource utilization for highly variable input data sets.<|reference_end|>
arxiv
@article{shrivastava2004medians, title={Medians and Beyond: New Aggregation Techniques for Sensor Networks}, author={Nisheeth Shrivastava and Chiranjeeb Buragohain and Divyakant Agrawal and Subhash Suri}, journal={Proceedings of the Second ACM Conference on Embedded Networked Sensor Systems (SenSys 2004)}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408039}, primaryClass={cs.DC cs.DB cs.DS} }
shrivastava2004medians
arxiv-672096
cs/0408040
Hash sort: A linear time complexity multiple-dimensional sort algorithm
<|reference_start|>Hash sort: A linear time complexity multiple-dimensional sort algorithm: Sorting and hashing are two completely different concepts in computer science, and appear mutually exclusive to one another. Hashing is a search method using the data as a key to map to the location within memory, and is used for rapid storage and retrieval. Sorting is a process of organizing data from a random permutation into an ordered arrangement, and is a common activity performed frequently in a variety of applications. Almost all conventional sorting algorithms work by comparison, and in doing so have a linearithmic greatest lower bound on the algorithmic time complexity. Any improvement in the theoretical time complexity of a sorting algorithm can result in overall larger gains in implementation performance. A gain in algorithmic performance leads to much larger gains in speed for the application that uses the sort algorithm. Such a sort algorithm needs to use a method other than comparison for ordering the data, to exceed the linearithmic time complexity boundary on algorithmic performance. The hash sort is a general purpose non-comparison based sorting algorithm by hashing, which has some interesting features not found in conventional sorting algorithms. The hash sort asymptotically outperforms the fastest traditional sorting algorithm, the quick sort. The hash sort algorithm has a linear time complexity factor -- even in the worst case. The hash sort opens an area for further work and investigation into alternative means of sorting.<|reference_end|>
arxiv
@article{gilreath2004hash, title={Hash sort: A linear time complexity multiple-dimensional sort algorithm}, author={William F. Gilreath}, journal={Proceedings of the First Southern Symposium on Computing, December 1998}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408040}, primaryClass={cs.DS} }
gilreath2004hash
arxiv-672097
cs/0408041
Fractal geometry of literature: first attempt to Shakespeare's works
<|reference_start|>Fractal geometry of literature: first attempt to Shakespeare's works: It was demonstrated that there is a geometrical order in the structure of literature. Fractal geometry as a modern mathematical approach and a new geometrical viewpoint on natural objects including both processes and structures was employed for analysis of literature. As the first study, the works of William Shakespeare were chosen as the most important items in western literature. By counting the number of letters used in a manuscript, it is possible to study the whole manuscript statistically. A novel method based on the basic assumption of fractal geometry was proposed for the calculation of fractal dimensions of the literature. The results were compared with Zipf's law. Zipf's law was successfully used for letters instead of words. Two new concepts, namely Zipf's dimension and Zipf's order, were also introduced. It was found that changes of both fractal dimension and Zipf's dimension are similar and dependent on the manuscript length. Interestingly, directly plotting the data obtained in semi-logarithmic and logarithmic forms also led to a power-law.<|reference_end|>
arxiv
@article{eftekhari2004fractal, title={Fractal geometry of literature: first attempt to Shakespeare's works}, author={Ali Eftekhari}, journal={arXiv preprint arXiv:cs/0408041}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408041}, primaryClass={cs.CL cs.CC} }
eftekhari2004fractal
arxiv-672098
cs/0408042
Dynamic Localization Protocols for Mobile Sensor Networks
<|reference_start|>Dynamic Localization Protocols for Mobile Sensor Networks: The ability of a sensor node to determine its physical location within a network (Localization) is of fundamental importance in sensor networks. Interpreting data from sensors will not be possible unless the context of the data is known; this is most often accomplished by tracking its physical location. Existing research has focused on localization in static sensor networks where localization is a one-time (or low frequency) activity. In contrast, this paper considers localization for mobile sensors: when sensors are mobile, localization must be invoked periodically to enable the sensors to track their location. The higher the frequency of localization, the lower the error introduced because of mobility. However, localization is a costly operation since it involves both communication and computation. In this paper, we propose and investigate adaptive and predictive protocols that control the frequency of localization based on sensor mobility behavior to reduce the energy requirements for localization while bounding the localization error. We show that such protocols can significantly reduce the localization energy without sacrificing accuracy (in fact, improving accuracy for most situations). Using simulation and analysis we explore the tradeoff between energy efficiency and localization error due to mobility for several protocols.<|reference_end|>
arxiv
@article{tilak2004dynamic, title={Dynamic Localization Protocols for Mobile Sensor Networks}, author={Sameer Tilak and Vinay Kolar and Nael B. Abu-Ghazaleh and Kyoung-Don Kang}, journal={arXiv preprint arXiv:cs/0408042}, year={2004}, number={CS-TR-04-NA02}, archivePrefix={arXiv}, eprint={cs/0408042}, primaryClass={cs.NI} }
tilak2004dynamic
arxiv-672099
cs/0408043
The Arithmetical Complexity of Dimension and Randomness
<|reference_start|>The Arithmetical Complexity of Dimension and Randomness: Constructive dimension and constructive strong dimension are effectivizations of the Hausdorff and packing dimensions, respectively. Each infinite binary sequence A is assigned a dimension dim(A) in [0,1] and a strong dimension Dim(A) in [0,1]. Let DIM^alpha and DIMstr^alpha be the classes of all sequences of dimension alpha and of strong dimension alpha, respectively. We show that DIM^0 is properly Pi^0_2, and that for all Delta^0_2-computable alpha in (0,1], DIM^alpha is properly Pi^0_3. To classify the strong dimension classes, we use a more powerful effective Borel hierarchy where a co-enumerable predicate is used rather than an enumerable predicate in the definition of the Sigma^0_1 level. For all Delta^0_2-computable alpha in [0,1), we show that DIMstr^alpha is properly in the Pi^0_3 level of this hierarchy. We show that DIMstr^1 is properly in the Pi^0_2 level of this hierarchy. We also prove that the class of Schnorr random sequences and the class of computably random sequences are properly Pi^0_3.<|reference_end|>
arxiv
@article{hitchcock2004the, title={The Arithmetical Complexity of Dimension and Randomness}, author={John M. Hitchcock and Jack H. Lutz and Sebastiaan A. Terwijn}, journal={arXiv preprint arXiv:cs/0408043}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408043}, primaryClass={cs.LO cs.CC} }
hitchcock2004the
arxiv-672100
cs/0408044
FLUX: A Logic Programming Method for Reasoning Agents
<|reference_start|>FLUX: A Logic Programming Method for Reasoning Agents: FLUX is a programming method for the design of agents that reason logically about their actions and sensor information in the presence of incomplete knowledge. The core of FLUX is a system of Constraint Handling Rules, which enables agents to maintain an internal model of their environment by which they control their own behavior. The general action representation formalism of the fluent calculus provides the formal semantics for the constraint solver. FLUX exhibits excellent computational behavior due to both a carefully restricted expressiveness and the inference paradigm of progression.<|reference_end|>
arxiv
@article{thielscher2004flux:, title={FLUX: A Logic Programming Method for Reasoning Agents}, author={Michael Thielscher}, journal={arXiv preprint arXiv:cs/0408044}, year={2004}, archivePrefix={arXiv}, eprint={cs/0408044}, primaryClass={cs.AI} }
thielscher2004flux: