corpus_id (string, 7-12 chars) | paper_id (string, 9-16 chars) | title (string, 1-261 chars) | abstract (string, 70-4.02k chars) | source (string, 1 distinct value) | bibtex (string, 208-20.9k chars) | citation_key (string, 6-100 chars)
---|---|---|---|---|---|---|
arxiv-5301 | 0810.5263 | Lower bounds for distributed markov chain problems | <|reference_start|>Lower bounds for distributed markov chain problems: We study the worst-case communication complexity of distributed algorithms computing a path problem based on stationary distributions of random walks in a network $G$ with the caveat that $G$ is also the communication network. The problem is a natural generalization of shortest path lengths to expected path lengths, and represents a model used in many practical applications such as pagerank and eigentrust as well as other problems involving Markov chains defined by networks. For the problem of computing a single stationary probability, we prove an $\Omega(n^2 \log n)$ bits lower bound; the trivial centralized algorithm costs $O(n^3)$ bits and no known algorithm beats this. We also prove lower bounds for the related problems of approximately computing the stationary probabilities, computing only the ranking of the nodes, and computing the node with maximal rank. As a corollary, we obtain lower bounds for labelling schemes for the hitting time between two nodes.<|reference_end|> | arxiv | @article{sami2008lower,
title={Lower bounds for distributed markov chain problems},
author={Rahul Sami, Andy Twigg},
journal={arXiv preprint arXiv:0810.5263},
year={2008},
archivePrefix={arXiv},
eprint={0810.5263},
primaryClass={cs.DS}
} | sami2008lower |
arxiv-5302 | 0810.5308 | Typical Performance of Irregular Low-Density Generator-Matrix Codes for Lossy Compression | <|reference_start|>Typical Performance of Irregular Low-Density Generator-Matrix Codes for Lossy Compression: We evaluate typical performance of irregular low-density generator-matrix (LDGM) codes, which is defined by sparse matrices with arbitrary irregular bit degree distribution and arbitrary check degree distribution, for lossy compression. We apply the replica method under one-step replica symmetry breaking (1RSB) ansatz to this problem.<|reference_end|> | arxiv | @article{mimura2008typical,
title={Typical Performance of Irregular Low-Density Generator-Matrix Codes for
Lossy Compression},
author={Kazushi Mimura},
journal={J. Phys. A: Math. Theor., 42, 13, 135002 (2009)},
year={2008},
doi={10.1088/1751-8113/42/13/135002},
archivePrefix={arXiv},
eprint={0810.5308},
primaryClass={cond-mat.dis-nn cs.IT math.IT}
} | mimura2008typical |
arxiv-5303 | 0810.5325 | 3D Face Recognition with Sparse Spherical Representations | <|reference_start|>3D Face Recognition with Sparse Spherical Representations: This paper addresses the problem of 3D face recognition using simultaneous sparse approximations on the sphere. The 3D face point clouds are first aligned with a novel and fully automated registration process. They are then represented as signals on the 2D sphere in order to preserve depth and geometry information. Next, we implement a dimensionality reduction process with simultaneous sparse approximations and subspace projection. It permits to represent each 3D face by only a few spherical functions that are able to capture the salient facial characteristics, and hence to preserve the discriminant facial information. We eventually perform recognition by effective matching in the reduced space, where Linear Discriminant Analysis can be further activated for improved recognition performance. The 3D face recognition algorithm is evaluated on the FRGC v.1.0 data set, where it is shown to outperform classical state-of-the-art solutions that work with depth images.<|reference_end|> | arxiv | @article{llonch20083d,
title={3D Face Recognition with Sparse Spherical Representations},
author={R. Sala Llonch, E. Kokiopoulou, I. Tosic and P. Frossard},
journal={arXiv preprint arXiv:0810.5325},
year={2008},
archivePrefix={arXiv},
eprint={0810.5325},
primaryClass={cs.CV}
} | llonch20083d |
arxiv-5304 | 0810.5351 | An Activity-Based Model for Separation of Duty | <|reference_start|>An Activity-Based Model for Separation of Duty: This paper offers several contributions for separation of duty (SoD) administration in role-based access control (RBAC) systems. We first introduce a new formal framework, based on business perspective, where SoD constraints are analyzed introducing the activity concept. This notion helps organizations define SoD constraints in terms of business requirements and reduces management complexity in large-scale RBAC systems. The model enables the definition of a wide taxonomy of conflict types. In particular, object-based SoD is introduced using the SoD domain concept, namely the set of data in which transaction conflicts may occur. Together with the formalization of the above properties, in this paper we also show the effectiveness of our proposal: we have applied the model to a large, existing organization; results highlight the benefits of adopting the proposed model in terms of reduced administration cost.<|reference_end|> | arxiv | @article{colantonio2008an,
title={An Activity-Based Model for Separation of Duty},
author={Alessandro Colantonio, Roberto Di Pietro, Alberto Ocello},
journal={arXiv preprint arXiv:0810.5351},
year={2008},
archivePrefix={arXiv},
eprint={0810.5351},
primaryClass={cs.CR}
} | colantonio2008an |
arxiv-5305 | 0810.5399 | An axiomatic characterization of a two-parameter extended relative entropy | <|reference_start|>An axiomatic characterization of a two-parameter extended relative entropy: The uniqueness theorem for a two-parameter extended relative entropy is proven. This result extends our previous one, the uniqueness theorem for a one-parameter extended relative entropy, to a two-parameter case. In addition, the properties of a two-parameter extended relative entropy are studied.<|reference_end|> | arxiv | @article{furuichi2008an,
title={An axiomatic characterization of a two-parameter extended relative
entropy},
author={Shigeru Furuichi},
journal={J. Math. Phys. Vol.51 (2010), 123302 (10 pages)},
year={2008},
doi={10.1063/1.3525917},
archivePrefix={arXiv},
eprint={0810.5399},
primaryClass={cond-mat.stat-mech cs.IT math.IT}
} | furuichi2008an |
arxiv-5306 | 0810.5407 | Quasi-metrics, Similarities and Searches: aspects of geometry of protein datasets | <|reference_start|>Quasi-metrics, Similarities and Searches: aspects of geometry of protein datasets: A quasi-metric is a distance function which satisfies the triangle inequality but is not symmetric: it can be thought of as an asymmetric metric. The central result of this thesis, developed in Chapter 3, is that a natural correspondence exists between similarity measures between biological (nucleotide or protein) sequences and quasi-metrics. Chapter 2 presents basic concepts of the theory of quasi-metric spaces and introduces a new examples of them: the universal countable rational quasi-metric space and its bicompletion, the universal bicomplete separable quasi-metric space. Chapter 4 is dedicated to development of a notion of the quasi-metric space with Borel probability measure, or pq-space. The main result of this chapter indicates that `a high dimensional quasi-metric space is close to being a metric space'. Chapter 5 investigates the geometric aspects of the theory of database similarity search in the context of quasi-metrics. The results about $pq$-spaces are used to produce novel theoretical bounds on performance of indexing schemes. Finally, the thesis presents some biological applications. Chapter 6 introduces FSIndex, an indexing scheme that significantly accelerates similarity searches of short protein fragment datasets. Chapter 7 presents the prototype of the system for discovery of short functional protein motifs called PFMFind, which relies on FSIndex for similarity searches.<|reference_end|> | arxiv | @article{stojmirovic2008quasi-metrics,
title={Quasi-metrics, Similarities and Searches: aspects of geometry of protein
datasets},
author={Aleksandar Stojmirovic},
journal={arXiv preprint arXiv:0810.5407},
year={2008},
archivePrefix={arXiv},
eprint={0810.5407},
primaryClass={cs.IR math.GN q-bio.QM}
} | stojmirovic2008quasi-metrics |
arxiv-5307 | 0810.5428 | Relating Web pages to enable information-gathering tasks | <|reference_start|>Relating Web pages to enable information-gathering tasks: We argue that relationships between Web pages are functions of the user's intent. We identify a class of Web tasks - information-gathering - that can be facilitated by a search engine that provides links to pages which are related to the page the user is currently viewing. We define three kinds of intentional relationships that correspond to whether the user is a) seeking sources of information, b) reading pages which provide information, or c) surfing through pages as part of an extended information-gathering process. We show that these three relationships can be productively mined using a combination of textual and link information and provide three scoring mechanisms that correspond to them: {\em SeekRel}, {\em FactRel} and {\em SurfRel}. These scoring mechanisms incorporate both textual and link information. We build a set of capacitated subnetworks - each corresponding to a particular keyword - that mirror the interconnection structure of the World Wide Web. The scores are computed by computing flows on these subnetworks. The capacities of the links are derived from the {\em hub} and {\em authority} values of the nodes they connect, following the work of Kleinberg (1998) on assigning authority to pages in hyperlinked environments. We evaluated our scoring mechanism by running experiments on four data sets taken from the Web. We present user evaluations of the relevance of the top results returned by our scoring mechanisms and compare those to the top results returned by Google's Similar Pages feature, and the {\em Companion} algorithm proposed by Dean and Henzinger (1999).<|reference_end|> | arxiv | @article{bagchi2008relating,
title={Relating Web pages to enable information-gathering tasks},
author={Amitabha Bagchi, Garima Lahoti},
journal={arXiv preprint arXiv:0810.5428},
year={2008},
archivePrefix={arXiv},
eprint={0810.5428},
primaryClass={cs.IR cs.DS}
} | bagchi2008relating |
arxiv-5308 | 0810.5439 | The Multi-Core Era - Trends and Challenges | <|reference_start|>The Multi-Core Era - Trends and Challenges: Since the very beginning of hardware development, computer processors were invented with ever-increasing clock frequencies and sophisticated in-build optimization strategies. Due to physical limitations, this 'free lunch' of speedup has come to an end. The following article gives a summary and bibliography for recent trends and challenges in CMP architectures. It discusses how 40 years of parallel computing research need to be considered in the upcoming multi-core era. We argue that future research must be driven from two sides - a better expression of hardware structures, and a domain-specific understanding of software parallelism.<|reference_end|> | arxiv | @article{tröger2008the,
title={The Multi-Core Era - Trends and Challenges},
author={Peter Tr"oger (Blekinge Institute Of Technology)},
journal={arXiv preprint arXiv:0810.5439},
year={2008},
archivePrefix={arXiv},
eprint={0810.5439},
primaryClass={cs.DC}
} | tröger2008the |
arxiv-5309 | 0810.5477 | Worst-case time decremental connectivity and k-edge witness | <|reference_start|>Worst-case time decremental connectivity and k-edge witness: We give a simple algorithm for decremental graph connectivity that handles edge deletions in worst-case time $O(k \log n)$ and connectivity queries in $O(\log k)$, where $k$ is the number of edges deleted so far, and uses worst-case space $O(m^2)$. We use this to give an algorithm for $k$-edge witness (``does the removal of a given set of $k$ edges disconnect two vertices $u,v$?'') with worst-case time $O(k^2 \log n)$ and space $O(k^2 n^2)$. For $k = o(\sqrt{n})$ these improve the worst-case $O(\sqrt{n})$ bound for deletion due to Eppstein et al. We also give a decremental connectivity algorithm using $O(n^2 \log n / \log \log n)$ space, whose time complexity depends on the toughness and independence number of the input graph. Finally, we show how to construct a distributed data structure for \kvw by giving a labeling scheme. This is the first data structure for \kvw that can efficiently distributed without just giving each vertex a copy of the whole structure. Its complexity depends on being able to construct a linear layout with good properties.<|reference_end|> | arxiv | @article{twigg2008worst-case,
title={Worst-case time decremental connectivity and k-edge witness},
author={Andrew Twigg},
journal={arXiv preprint arXiv:0810.5477},
year={2008},
archivePrefix={arXiv},
eprint={0810.5477},
primaryClass={cs.DS}
} | twigg2008worst-case |
arxiv-5310 | 0810.5482 | On the length of attractors in boolean networks with an interaction graph by layers | <|reference_start|>On the length of attractors in boolean networks with an interaction graph by layers: We consider a boolean network whose interaction graph has no circuit of length >1. Under this hypothesis, we establish an upper bound on the length of the attractors of the network which only depends on its interaction graph.<|reference_end|> | arxiv | @article{richard2008on,
title={On the length of attractors in boolean networks with an interaction
graph by layers},
author={Adrien Richard},
journal={arXiv preprint arXiv:0810.5482},
year={2008},
archivePrefix={arXiv},
eprint={0810.5482},
primaryClass={cs.DM}
} | richard2008on |
arxiv-5311 | 0810.5484 | A Novel Clustering Algorithm Based on a Modified Model of Random Walk | <|reference_start|>A Novel Clustering Algorithm Based on a Modified Model of Random Walk: We introduce a modified model of random walk, and then develop two novel clustering algorithms based on it. In the algorithms, each data point in a dataset is considered as a particle which can move at random in space according to the preset rules in the modified model. Further, this data point may be also viewed as a local control subsystem, in which the controller adjusts its transition probability vector in terms of the feedbacks of all data points, and then its transition direction is identified by an event-generating function. Finally, the positions of all data points are updated. As they move in space, data points collect gradually and some separating parts emerge among them automatically. As a consequence, data points that belong to the same class are located at a same position, whereas those that belong to different classes are away from one another. Moreover, the experimental results have demonstrated that data points in the test datasets are clustered reasonably and efficiently, and the comparison with other algorithms also provides an indication of the effectiveness of the proposed algorithms.<|reference_end|> | arxiv | @article{li2008a,
title={A Novel Clustering Algorithm Based on a Modified Model of Random Walk},
author={Qiang Li, Yan He, Jing-ping Jiang},
journal={arXiv preprint arXiv:0810.5484},
year={2008},
archivePrefix={arXiv},
eprint={0810.5484},
primaryClass={cs.LG cs.AI cs.MA}
} | li2008a |
arxiv-5312 | 0810.5516 | Symbolic model checking of tense logics on rational Kripke models | <|reference_start|>Symbolic model checking of tense logics on rational Kripke models: We introduce the class of rational Kripke models and study symbolic model checking of the basic tense logic Kt and some extensions of it in models from that class. Rational Kripke models are based on (generally infinite) rational graphs, with vertices labeled by the words in some regular language and transitions recognized by asynchronous two-head finite automata, also known as rational transducers. Every atomic proposition in a rational Kripke model is evaluated in a regular set of states. We show that every formula of Kt has an effectively computable regular extension in every rational Kripke model, and therefore local model checking and global model checking of Kt in rational Kripke models are decidable. These results are lifted to a number of extensions of Kt. We study and partly determine the complexity of the model checking procedures.<|reference_end|> | arxiv | @article{bekker2008symbolic,
title={Symbolic model checking of tense logics on rational Kripke models},
author={Wilmari Bekker and Valentin Goranko},
journal={arXiv preprint arXiv:0810.5516},
year={2008},
archivePrefix={arXiv},
eprint={0810.5516},
primaryClass={cs.LO}
} | bekker2008symbolic |
arxiv-5313 | 0810.5517 | Model checking memoryful linear-time logics over one-counter automata | <|reference_start|>Model checking memoryful linear-time logics over one-counter automata: We study complexity of the model-checking problems for LTL with registers (also known as freeze LTL) and for first-order logic with data equality tests over one-counter automata. We consider several classes of one-counter automata (mainly deterministic vs. nondeterministic) and several logical fragments (restriction on the number of registers or variables and on the use of propositional variables for control locations). The logics have the ability to store a counter value and to test it later against the current counter value. We show that model checking over deterministic one-counter automata is PSPACE-complete with infinite and finite accepting runs. By constrast, we prove that model checking freeze LTL in which the until operator is restricted to the eventually operator over nondeterministic one-counter automata is undecidable even if only one register is used and with no propositional variable. As a corollary of our proof, this also holds for first-order logic with data equality tests restricted to two variables. This makes a difference with the facts that several verification problems for one-counter automata are known to be decidable with relatively low complexity, and that finitary satisfiability for the two logics are decidable. Our results pave the way for model-checking memoryful (linear-time) logics over other classes of operational models, such as reversal-bounded counter machines.<|reference_end|> | arxiv | @article{demri2008model,
title={Model checking memoryful linear-time logics over one-counter automata},
author={Stephane Demri, Ranko Lazic, Arnaud Sangnier},
journal={arXiv preprint arXiv:0810.5517},
year={2008},
archivePrefix={arXiv},
eprint={0810.5517},
primaryClass={cs.LO}
} | demri2008model |
arxiv-5314 | 0810.5535 | A Combinatorial-Probabilistic Diagnostic Entropy and Information | <|reference_start|>A Combinatorial-Probabilistic Diagnostic Entropy and Information: A new combinatorial-probabilistic diagnostic entropy has been introduced. It describes the pair-wise sum of probabilities of system conditions that have to be distinguished during the diagnosing process. The proposed measure describes the uncertainty of the system conditions, and at the same time complexity of the diagnosis problem. Treating the assumed combinatorial-diagnostic entropy as a primary notion, the information delivered by the symptoms has been defined. The relationships have been derived to facilitate explicit, quantitative assessment of the information of a single symptom as well as that of a symptoms set. It has been proved that the combinatorial-probabilistic information shows the property of additivity. The presented measures are focused on diagnosis problem, but they can be easily applied to other disciplines such as decision theory and classification.<|reference_end|> | arxiv | @article{borowczyk2008a,
title={A Combinatorial-Probabilistic Diagnostic Entropy and Information},
author={Henryk Borowczyk},
journal={arXiv preprint arXiv:0810.5535},
year={2008},
archivePrefix={arXiv},
eprint={0810.5535},
primaryClass={cs.IT math.IT}
} | borowczyk2008a |
arxiv-5315 | 0810.5551 | A Theory of Truncated Inverse Sampling | <|reference_start|>A Theory of Truncated Inverse Sampling: In this paper, we have established a new framework of truncated inverse sampling for estimating mean values of non-negative random variables such as binomial, Poisson, hyper-geometrical, and bounded variables. We have derived explicit formulas and computational methods for designing sampling schemes to ensure prescribed levels of precision and confidence for point estimators. Moreover, we have developed interval estimation methods.<|reference_end|> | arxiv | @article{chen2008a,
title={A Theory of Truncated Inverse Sampling},
author={Xinjia Chen},
journal={arXiv preprint arXiv:0810.5551},
year={2008},
archivePrefix={arXiv},
eprint={0810.5551},
primaryClass={math.ST cs.LG math.PR stat.ME stat.TH}
} | chen2008a |
arxiv-5316 | 0810.5573 | A branch-and-bound feature selection algorithm for U-shaped cost functions | <|reference_start|>A branch-and-bound feature selection algorithm for U-shaped cost functions: This paper presents the formulation of a combinatorial optimization problem with the following characteristics: i.the search space is the power set of a finite set structured as a Boolean lattice; ii.the cost function forms a U-shaped curve when applied to any lattice chain. This formulation applies for feature selection in the context of pattern recognition. The known approaches for this problem are branch-and-bound algorithms and heuristics, that explore partially the search space. Branch-and-bound algorithms are equivalent to the full search, while heuristics are not. This paper presents a branch-and-bound algorithm that differs from the others known by exploring the lattice structure and the U-shaped chain curves of the search space. The main contribution of this paper is the architecture of this algorithm that is based on the representation and exploration of the search space by new lattice properties proven here. Several experiments, with well known public data, indicate the superiority of the proposed method to SFFS, which is a popular heuristic that gives good results in very short computational time. In all experiments, the proposed method got better or equal results in similar or even smaller computational time.<|reference_end|> | arxiv | @article{ris2008a,
title={A branch-and-bound feature selection algorithm for U-shaped cost
functions},
author={Marcelo Ris, Junior Barrera, David C. Martins Jr},
journal={arXiv preprint arXiv:0810.5573},
year={2008},
archivePrefix={arXiv},
eprint={0810.5573},
primaryClass={cs.CV cs.DS cs.LG}
} | ris2008a |
arxiv-5317 | 0810.5575 | Detection of parallel steps in programs with arrays | <|reference_start|>Detection of parallel steps in programs with arrays: The problem of detecting of information and logically independent (DILD) steps in programs is a key for equivalent program transformations. Here we are considering the problem of independence of loop iterations, the concentration of massive data processing and hence the most challenge construction for parallelizing. We introduced a separated form of loops when loop's body is a sequence of procedures each of them are used array's elements selected in a previous procedure. We prove that any loop may be algorithmically represented in this form and number of such procedures is invariant. We show that for this form of loop the steps connections are determined with some integer equations and hence the independence problem is algorithmically unsolvable if index expressions are more complex than cubical. We suggest a modification of index semantics that made connection equations trivial and loops iterations can be executed in parallel.<|reference_end|> | arxiv | @article{nuriyev2008detection,
title={Detection of parallel steps in programs with arrays},
author={R. Nuriyev},
journal={arXiv preprint arXiv:0810.5575},
year={2008},
archivePrefix={arXiv},
eprint={0810.5575},
primaryClass={cs.PL}
} | nuriyev2008detection |
arxiv-5318 | 0810.5578 | Anonymizing Graphs | <|reference_start|>Anonymizing Graphs: Motivated by recently discovered privacy attacks on social networks, we study the problem of anonymizing the underlying graph of interactions in a social network. We call a graph (k,l)-anonymous if for every node in the graph there exist at least k other nodes that share at least l of its neighbors. We consider two combinatorial problems arising from this notion of anonymity in graphs. More specifically, given an input graph we ask for the minimum number of edges to be added so that the graph becomes (k,l)-anonymous. We define two variants of this minimization problem and study their properties. We show that for certain values of k and l the problems are polynomial-time solvable, while for others they become NP-hard. Approximation algorithms for the latter cases are also given.<|reference_end|> | arxiv | @article{feder2008anonymizing,
title={Anonymizing Graphs},
author={Tomas Feder, Shubha U. Nabar, Evimaria Terzi},
journal={arXiv preprint arXiv:0810.5578},
year={2008},
archivePrefix={arXiv},
eprint={0810.5578},
primaryClass={cs.DB cs.DS}
} | feder2008anonymizing |
arxiv-5319 | 0810.5582 | Anonymizing Unstructured Data | <|reference_start|>Anonymizing Unstructured Data: In this paper we consider the problem of anonymizing datasets in which each individual is associated with a set of items that constitute private information about the individual. Illustrative datasets include market-basket datasets and search engine query logs. We formalize the notion of k-anonymity for set-valued data as a variant of the k-anonymity model for traditional relational datasets. We define an optimization problem that arises from this definition of anonymity and provide O(klogk) and O(1)-approximation algorithms for the same. We demonstrate applicability of our algorithms to the America Online query log dataset.<|reference_end|> | arxiv | @article{motwani2008anonymizing,
title={Anonymizing Unstructured Data},
author={Rajeev Motwani, Shubha U. Nabar},
journal={arXiv preprint arXiv:0810.5582},
year={2008},
archivePrefix={arXiv},
eprint={0810.5582},
primaryClass={cs.DB cs.DS}
} | motwani2008anonymizing |
arxiv-5320 | 0810.5596 | Programming languages with algorithmically parallelizing problem | <|reference_start|>Programming languages with algorithmically parallelizing problem: The study consists of two parts. Objective of the first part is modern language constructions responsible for algorithmically insolvability of parallelizing problem. Second part contains several ways to modify the constructions to make the problem algorithmically solvable<|reference_end|> | arxiv | @article{nuriyev2008programming,
title={Programming languages with algorithmically parallelizing problem},
author={R. Nuriyev},
journal={arXiv preprint arXiv:0810.5596},
year={2008},
archivePrefix={arXiv},
eprint={0810.5596},
primaryClass={cs.DC}
} | nuriyev2008programming |
arxiv-5321 | 0810.5631 | Temporal Difference Updating without a Learning Rate | <|reference_start|>Temporal Difference Updating without a Learning Rate: We derive an equation for temporal difference learning from statistical principles. Specifically, we start with the variational principle and then bootstrap to produce an updating rule for discounted state value estimates. The resulting equation is similar to the standard equation for temporal difference learning with eligibility traces, so called TD(lambda), however it lacks the parameter alpha that specifies the learning rate. In the place of this free parameter there is now an equation for the learning rate that is specific to each state transition. We experimentally test this new learning rule against TD(lambda) and find that it offers superior performance in various settings. Finally, we make some preliminary investigations into how to extend our new temporal difference algorithm to reinforcement learning. To do this we combine our update equation with both Watkins' Q(lambda) and Sarsa(lambda) and find that it again offers superior performance without a learning rate parameter.<|reference_end|> | arxiv | @article{hutter2008temporal,
title={Temporal Difference Updating without a Learning Rate},
author={Marcus Hutter and Shane Legg},
journal={Advances in Neural Information Processing Systems 20 (NIPS 2008)
pages 705-712},
year={2008},
archivePrefix={arXiv},
eprint={0810.5631},
primaryClass={cs.LG cs.AI}
} | hutter2008temporal |
arxiv-5322 | 0810.5633 | Reconstructing Extended Perfect Binary One-Error-Correcting Codes from Their Minimum Distance Graphs | <|reference_start|>Reconstructing Extended Perfect Binary One-Error-Correcting Codes from Their Minimum Distance Graphs: The minimum distance graph of a code has the codewords as vertices and edges exactly when the Hamming distance between two codewords equals the minimum distance of the code. A constructive proof for reconstructibility of an extended perfect binary one-error-correcting code from its minimum distance graph is presented. Consequently, inequivalent such codes have nonisomorphic minimum distance graphs. Moreover, it is shown that the automorphism group of a minimum distance graph is isomorphic to that of the corresponding code.<|reference_end|> | arxiv | @article{mogilnykh2008reconstructing,
title={Reconstructing Extended Perfect Binary One-Error-Correcting Codes from
Their Minimum Distance Graphs},
author={Ivan Yu. Mogilnykh, Patric R. J. \"Osterg{\aa}rd, Olli Pottonen, Faina
I. Solov'eva},
journal={IEEE Trans. Inform. Theory 55 (2009) 2622-2625},
year={2008},
doi={10.1109/TIT.2009.2018338},
archivePrefix={arXiv},
eprint={0810.5633},
primaryClass={cs.IT math.CO math.IT}
} | mogilnykh2008reconstructing |
arxiv-5323 | 0810.5636 | On the Possibility of Learning in Reactive Environments with Arbitrary Dependence | <|reference_start|>On the Possibility of Learning in Reactive Environments with Arbitrary Dependence: We address the problem of reinforcement learning in which observations may exhibit an arbitrary form of stochastic dependence on past observations and actions, i.e. environments more general than (PO)MDPs. The task for an agent is to attain the best possible asymptotic reward where the true generating environment is unknown but belongs to a known countable family of environments. We find some sufficient conditions on the class of environments under which an agent exists which attains the best asymptotic reward for any environment in the class. We analyze how tight these conditions are and how they relate to different probabilistic assumptions known in reinforcement learning and related fields, such as Markov Decision Processes and mixing conditions.<|reference_end|> | arxiv | @article{ryabko2008on,
title={On the Possibility of Learning in Reactive Environments with Arbitrary
Dependence},
author={Daniil Ryabko and Marcus Hutter},
journal={Theoretical Computer Science, 405:3 (2008) pages 274-284},
year={2008},
number={IDSIA-08-08},
archivePrefix={arXiv},
eprint={0810.5636},
primaryClass={cs.LG cs.AI cs.IT math.IT}
} | ryabko2008on |
arxiv-5324 | 0810.5647 | Kaltofen's division-free determinant algorithm differentiated for matrix adjoint computation | <|reference_start|>Kaltofen's division-free determinant algorithm differentiated for matrix adjoint computation: Kaltofen has proposed a new approach in 1992 for computing matrix determinants without divisions. The algorithm is based on a baby steps/giant steps construction of Krylov subspaces, and computes the determinant as the constant term of a characteristic polynomial. For matrices over an abstract ring, by the results of Baur and Strassen, the determinant algorithm, actually a straight-line program, leads to an algorithm with the same complexity for computing the adjoint of a matrix. However, the latter adjoint algorithm is obtained by the reverse mode of automatic differentiation, hence somehow is not "explicit". We present an alternative (still closely related) algorithm for the adjoint thatcan be implemented directly, we mean without resorting to an automatic transformation. The algorithm is deduced by applying program differentiation techniques "by hand" to Kaltofen's method, and is completely decribed. As subproblem, we study the differentiation of programs that compute minimum polynomials of lineraly generated sequences, and we use a lazy polynomial evaluation mechanism for reducing the cost of Strassen's avoidance of divisions in our case.<|reference_end|> | arxiv | @article{villard2008kaltofen's,
title={Kaltofen's division-free determinant algorithm differentiated for matrix
adjoint computation},
author={Gilles Villard (LIP)},
journal={arXiv preprint arXiv:0810.5647},
year={2008},
archivePrefix={arXiv},
eprint={0810.5647},
primaryClass={cs.SC cs.CC}
} | villard2008kaltofen's |
arxiv-5325 | 0810.5663 | Effective Complexity and its Relation to Logical Depth | <|reference_start|>Effective Complexity and its Relation to Logical Depth: Effective complexity measures the information content of the regularities of an object. It has been introduced by M. Gell-Mann and S. Lloyd to avoid some of the disadvantages of Kolmogorov complexity, also known as algorithmic information content. In this paper, we give a precise formal definition of effective complexity and rigorous proofs of its basic properties. In particular, we show that incompressible binary strings are effectively simple, and we prove the existence of strings that have effective complexity close to their lengths. Furthermore, we show that effective complexity is related to Bennett's logical depth: If the effective complexity of a string $x$ exceeds a certain explicit threshold then that string must have astronomically large depth; otherwise, the depth can be arbitrarily small.<|reference_end|> | arxiv | @article{ay2008effective,
title={Effective Complexity and its Relation to Logical Depth},
author={Nihat Ay, Markus Mueller, Arleta Szkola},
journal={IEEE Trans. Inf. Th., Vol. 56/9 pp. 4593-4607 (2010)},
year={2008},
doi={10.1109/TIT.2010.2053892},
archivePrefix={arXiv},
eprint={0810.5663},
primaryClass={cs.IT math.IT}
} | ay2008effective |
arxiv-5326 | 0810.5685 | Interpolation of Shifted-Lacunary Polynomials | <|reference_start|>Interpolation of Shifted-Lacunary Polynomials: Given a "black box" function to evaluate an unknown rational polynomial f in Q[x] at points modulo a prime p, we exhibit algorithms to compute the representation of the polynomial in the sparsest shifted power basis. That is, we determine the sparsity t, the shift s (a rational), the exponents 0 <= e1 < e2 < ... < et, and the coefficients c1,...,ct in Q\{0} such that f(x) = c1(x-s)^e1+c2(x-s)^e2+...+ct(x-s)^et. The computed sparsity t is absolutely minimal over any shifted power basis. The novelty of our algorithm is that the complexity is polynomial in the (sparse) representation size, and in particular is logarithmic in deg(f). Our method combines previous celebrated results on sparse interpolation and computing sparsest shifts, and provides a way to handle polynomials with extremely high degree which are, in some sense, sparse in information.<|reference_end|> | arxiv | @article{giesbrecht2008interpolation,
title={Interpolation of Shifted-Lacunary Polynomials},
author={Mark Giesbrecht and Daniel S. Roche},
journal={Computational Complexity, Vol. 19, No 3., pp. 333-354, 2010},
year={2008},
doi={10.1007/s00037-010-0294-0},
archivePrefix={arXiv},
eprint={0810.5685},
primaryClass={cs.SC cs.DS cs.MS}
} | giesbrecht2008interpolation |
arxiv-5327 | 0810.5717 | On the Conditional Independence Implication Problem: A Lattice-Theoretic Approach | <|reference_start|>On the Conditional Independence Implication Problem: A Lattice-Theoretic Approach: A lattice-theoretic framework is introduced that permits the study of the conditional independence (CI) implication problem relative to the class of discrete probability measures. Semi-lattices are associated with CI statements and a finite, sound and complete inference system relative to semi-lattice inclusions is presented. This system is shown to be (1) sound and complete for saturated CI statements, (2) complete for general CI statements, and (3) sound and complete for stable CI statements. These results yield a criterion that can be used to falsify instances of the implication problem and several heuristics are derived that approximate this "lattice-exclusion" criterion in polynomial time. Finally, we provide experimental results that relate our work to results obtained from other existing inference algorithms.<|reference_end|> | arxiv | @article{niepert2008on,
title={On the Conditional Independence Implication Problem: A Lattice-Theoretic
Approach},
author={Mathias Niepert, Dirk Van Gucht and Marc Gyssens},
journal={Proceedings of the 24th Conference on Uncertainty in Artificial
Intelligence, 2008, pages 435-443},
year={2008},
archivePrefix={arXiv},
eprint={0810.5717},
primaryClass={cs.AI cs.DM}
} | niepert2008on |
arxiv-5328 | 0810.5725 | A triangle-based logic for affine-invariant querying of spatial and spatio-temporal data | <|reference_start|>A triangle-based logic for affine-invariant querying of spatial and spatio-temporal data: In spatial databases, incompatibilities often arise due to different choices of origin or unit of measurement (e.g., centimeters versus inches). By representing and querying the data in an affine-invariant manner, we can avoid these incompatibilities. In practice, spatial (resp., spatio-temporal) data is often represented as a finite union of triangles (resp., moving triangles). As two arbitrary triangles are equal up to a unique affinity of the plane, they seem perfect candidates as basic units for an affine-invariant query language. We propose a so-called "triangle logic", a query language that is affine-generic and has triangles as basic elements. We show that this language has the same expressive power as the affine-generic fragment of first-order logic over the reals on triangle databases. We illustrate that the proposed language is simple and intuitive. It can also serve as a first step towards a "moving-triangle logic" for spatio-temporal data.<|reference_end|> | arxiv | @article{haesevoets2008a,
title={A triangle-based logic for affine-invariant querying of spatial and
spatio-temporal data},
author={Sofie Haesevoets, Bart Kuijpers},
journal={arXiv preprint arXiv:0810.5725},
year={2008},
archivePrefix={arXiv},
eprint={0810.5725},
primaryClass={cs.LO cs.DB}
} | haesevoets2008a |
arxiv-5329 | 0810.5728 | Multi-Objective Model Checking of Markov Decision Processes | <|reference_start|>Multi-Objective Model Checking of Markov Decision Processes: We study and provide efficient algorithms for multi-objective model checking problems for Markov Decision Processes (MDPs). Given an MDP, M, and given multiple linear-time (\omega -regular or LTL) properties \varphi\_i, and probabilities r\_i \epsilon [0,1], i=1,...,k, we ask whether there exists a strategy \sigma for the controller such that, for all i, the probability that a trajectory of M controlled by \sigma satisfies \varphi\_i is at least r\_i. We provide an algorithm that decides whether there exists such a strategy and if so produces it, and which runs in time polynomial in the size of the MDP. Such a strategy may require the use of both randomization and memory. We also consider more general multi-objective \omega -regular queries, which we motivate with an application to assume-guarantee compositional reasoning for probabilistic systems. Note that there can be trade-offs between different properties: satisfying property \varphi\_1 with high probability may necessitate satisfying \varphi\_2 with low probability. Viewing this as a multi-objective optimization problem, we want information about the "trade-off curve" or Pareto curve for maximizing the probabilities of different properties. We show that one can compute an approximate Pareto curve with respect to a set of \omega -regular properties in time polynomial in the size of the MDP. Our quantitative upper bounds use LP methods. We also study qualitative multi-objective model checking problems, and we show that these can be analysed by purely graph-theoretic methods, even though the strategies may still require both randomization and memory.<|reference_end|> | arxiv | @article{etessami2008multi-objective,
title={Multi-Objective Model Checking of Markov Decision Processes},
author={Kousha Etessami, Marta Kwiatkowska, Moshe Y. Vardi, and Mihalis
Yannakakis},
journal={Logical Methods in Computer Science, Volume 4, Issue 4 (November
12, 2008) lmcs:990},
year={2008},
doi={10.2168/LMCS-4(4:8)2008},
archivePrefix={arXiv},
eprint={0810.5728},
primaryClass={cs.LO cs.CC cs.GT}
} | etessami2008multi-objective |
arxiv-5330 | 0810.5732 | Practical language based on systems of definitions | <|reference_start|>Practical language based on systems of definitions: The article suggests a description of a system of tables with a set of special lists absorbing a semantics of data and reflects a fullness of data. It shows how their parallel processing can be constructed based on the descriptions. The approach also might be used for definition intermediate targets for data mining and unstructured data processing.<|reference_end|> | arxiv | @article{nuriyev2008practical,
title={Practical language based on systems of definitions},
author={R. Nuriyev},
journal={arXiv preprint arXiv:0810.5732},
year={2008},
archivePrefix={arXiv},
eprint={0810.5732},
primaryClass={cs.DC}
} | nuriyev2008practical |
arxiv-5331 | 0810.5758 | Non procedural language for parallel programs | <|reference_start|>Non procedural language for parallel programs: Probably building non procedural languages is the most prospective way for parallel programming just because non procedural means no fixed way for execution. The article consists of 3 parts. In first part we consider formal systems for definition a named datasets and studying an expression power of different subclasses. In the second part we consider a complexity of algorithms of building sets by the definitions. In third part we consider a fullness and flexibility of the class of program based data set definitions.<|reference_end|> | arxiv | @article{nuriyev2008non,
title={Non procedural language for parallel programs},
author={Renat Nuriyev},
journal={arXiv preprint arXiv:0810.5758},
year={2008},
archivePrefix={arXiv},
eprint={0810.5758},
primaryClass={cs.DC}
} | nuriyev2008non |
arxiv-5332 | 0810.5763 | Number of wireless sensors needed to detect a wildfire | <|reference_start|>Number of wireless sensors needed to detect a wildfire: The lack of extensive research in the application of inexpensive wireless sensor nodes for the early detection of wildfires motivated us to investigate the cost of such a network. As a first step, in this paper we present several results which relate the time to detection and the burned area to the number of sensor nodes in the region which is protected. We prove that the probability distribution of the burned area at the moment of detection is approximately exponential, given that some hypotheses hold: the positions of the sensor nodes are independent random variables uniformly distributed and the number of sensor nodes is large. This conclusion depends neither on the number of ignition points nor on the propagation model of the fire.<|reference_end|> | arxiv | @article{fierens2008number,
title={Number of wireless sensors needed to detect a wildfire},
author={Pablo Ignacio Fierens},
journal={arXiv preprint arXiv:0810.5763},
year={2008},
archivePrefix={arXiv},
eprint={0810.5763},
primaryClass={cs.NI}
} | fierens2008number |
arxiv-5333 | 0810.5770 | From Multi-Keyholes to Measure of Correlation and Power Imbalance in MIMO Channels: Outage Capacity Analysis | <|reference_start|>From Multi-Keyholes to Measure of Correlation and Power Imbalance in MIMO Channels: Outage Capacity Analysis: An information-theoretic analysis of a multi-keyhole channel, which includes a number of statistically independent keyholes with possibly different correlation matrices, is given. When the number of keyholes or/and the number of Tx/Rx antennas is large, there is an equivalent Rayleigh-fading channel such that the outage capacities of both channels are asymptotically equal. In the case of a large number of antennas and for a broad class of fading distributions, the instantaneous capacity is shown to be asymptotically Gaussian in distribution, and compact, closed-form expressions for the mean and variance are given. Motivated by the asymptotic analysis, a simple, full-ordering scalar measure of spatial correlation and power imbalance in MIMO channels is introduced, which quantifies the negative impact of these two factors on the outage capacity in a simple and well-tractable way. It does not require the eigenvalue decomposition, and has the full-ordering property. The size-asymptotic results are used to prove Telatar's conjecture for semi-correlated multi-keyhole and Rayleigh channels. Since the keyhole channel model approximates well the relay channel in the amplify-and-forward mode in certain scenarios, these results also apply to the latter<|reference_end|> | arxiv | @article{levin2008from,
title={From Multi-Keyholes to Measure of Correlation and Power Imbalance in
MIMO Channels: Outage Capacity Analysis},
author={George Levin, Sergey Loyka},
journal={arXiv preprint arXiv:0810.5770},
year={2008},
doi={10.1109/TIT.2011.2133010},
archivePrefix={arXiv},
eprint={0810.5770},
primaryClass={cs.IT math.IT}
} | levin2008from |
arxiv-5334 | 0811.0037 | A complexity dichotomy for hypergraph partition functions | <|reference_start|>A complexity dichotomy for hypergraph partition functions: We consider the complexity of counting homomorphisms from an $r$-uniform hypergraph $G$ to a symmetric $r$-ary relation $H$. We give a dichotomy theorem for $r>2$, showing for which $H$ this problem is in FP and for which $H$ it is #P-complete. This generalises a theorem of Dyer and Greenhill (2000) for the case $r=2$, which corresponds to counting graph homomorphisms. Our dichotomy theorem extends to the case in which the relation $H$ is weighted, and the goal is to compute the \emph{partition function}, which is the sum of weights of the homomorphisms. This problem is motivated by statistical physics, where it arises as computing the partition function for particle models in which certain combinations of $r$ sites interact symmetrically. In the weighted case, our dichotomy theorem generalises a result of Bulatov and Grohe (2005) for graphs, where $r=2$. When $r=2$, the polynomial time cases of the dichotomy correspond simply to rank-1 weights. Surprisingly, for all $r>2$ the polynomial time cases of the dichotomy have rather more structure. It turns out that the weights must be superimposed on a combinatorial structure defined by solutions of an equation over an Abelian group. Our result also gives a dichotomy for a closely related constraint satisfaction problem.<|reference_end|> | arxiv | @article{dyer2008a,
title={A complexity dichotomy for hypergraph partition functions},
author={Martin Dyer, Leslie Ann Goldberg and Mark Jerrum},
journal={arXiv preprint arXiv:0811.0037},
year={2008},
archivePrefix={arXiv},
eprint={0811.0037},
primaryClass={cs.CC cs.DM}
} | dyer2008a |
arxiv-5335 | 0811.0048 | Conjectural Equilibrium in Water-filling Games | <|reference_start|>Conjectural Equilibrium in Water-filling Games: This paper considers a non-cooperative game in which competing users sharing a frequency-selective interference channel selfishly optimize their power allocation in order to improve their achievable rates. Previously, it was shown that a user having the knowledge of its opponents' channel state information can make foresighted decisions and substantially improve its performance compared with the case in which it deploys the conventional iterative water-filling algorithm, which does not exploit such knowledge. This paper discusses how a foresighted user can acquire this knowledge by modeling its experienced interference as a function of its own power allocation. To characterize the outcome of the multi-user interaction, the conjectural equilibrium is introduced, and the existence of this equilibrium for the investigated water-filling game is proved. Interestingly, both the Nash equilibrium and the Stackelberg equilibrium are shown to be special cases of the generalization of conjectural equilibrium. We develop practical algorithms to form accurate beliefs and search desirable power allocation strategies. Numerical simulations indicate that a foresighted user without any a priori knowledge of its competitors' private information can effectively learn the required information, and induce the entire system to an operating point that improves both its own achievable rate as well as the rates of the other participants in the water-filling game.<|reference_end|> | arxiv | @article{su2008conjectural,
title={Conjectural Equilibrium in Water-filling Games},
author={Yi Su and Mihaela van der Schaar},
journal={arXiv preprint arXiv:0811.0048},
year={2008},
archivePrefix={arXiv},
eprint={0811.0048},
primaryClass={cs.GT cs.MA}
} | su2008conjectural |
arxiv-5336 | 0811.0063 | A variant of Wiener's attack on RSA | <|reference_start|>A variant of Wiener's attack on RSA: Wiener's attack is a well-known polynomial-time attack on a RSA cryptosystem with small secret decryption exponent d, which works if d<n^{0.25}, where n=pq is the modulus of the cryptosystem. Namely, in that case, d is the denominator of some convergent p_m/q_m of the continued fraction expansion of e/n, and therefore d can be computed efficiently from the public key (n,e). There are several extensions of Wiener's attack that allow the RSA cryptosystem to be broken when d is a few bits longer than n^{0.25}. They all have the run-time complexity (at least) O(D^2), where d=Dn^{0.25}. Here we propose a new variant of Wiener's attack, which uses results on Diophantine approximations of the form |\alpha - p/q| < c/q^2, and "meet-in-the-middle" variant for testing the candidates (of the form rq_{m+1} + sq_m) for the secret exponent. This decreases the run-time complexity of the attack to O(D log(D)) (with the space complexity O(D)).<|reference_end|> | arxiv | @article{dujella2008a,
title={A variant of Wiener's attack on RSA},
author={Andrej Dujella},
journal={Computing 85 (2009), 77-83},
year={2008},
doi={10.1007/s00607-009-0037-8},
archivePrefix={arXiv},
eprint={0811.0063},
primaryClass={cs.CR}
} | dujella2008a |
arxiv-5337 | 0811.0071 | Conversion/Preference Games | <|reference_start|>Conversion/Preference Games: We introduce the concept of Conversion/Preference Games, or CP games for short. CP games generalize the standard notion of strategic games. First we exemplify the use of CP games. Second we formally introduce and define the CP-games formalism. Then we sketch two `real-life' applications, namely a connection between CP games and gene regulation networks, and the use of CP games to formalize implied information in Chinese Wall security. We end with a study of a particular fixed-point construction over CP games and of the resulting existence of equilibria in possibly infinite games.<|reference_end|> | arxiv | @article{roux2008conversion/preference,
title={Conversion/Preference Games},
author={St\'ephane Le Roux (LIP), Pierre Lescanne (LIP), Ren\'e Vestergaard},
journal={arXiv preprint arXiv:0811.0071},
year={2008},
archivePrefix={arXiv},
eprint={0811.0071},
primaryClass={cs.GT}
} | roux2008conversion/preference |
arxiv-5338 | 0811.0077 | Approximation of a Fractional Order System by an Integer Order Model Using Particle Swarm Optimization Technique | <|reference_start|>Approximation of a Fractional Order System by an Integer Order Model Using Particle Swarm Optimization Technique: System identification is a necessity in control theory. Classical control theory usually considers processes with integer order transfer functions. Real processes are usually of fractional order as opposed to the ideal integral order models. A simple and elegant scheme is presented for approximation of such a real world fractional order process by an ideal integral order model. A population of integral order process models is generated and updated by PSO technique, the fitness function being the sum of squared deviations from the set of observations obtained from the actual fractional order process. Results show that the proposed scheme offers a high degree of accuracy.<|reference_end|> | arxiv | @article{maiti2008approximation,
title={Approximation of a Fractional Order System by an Integer Order Model
Using Particle Swarm Optimization Technique},
author={Deepyaman Maiti, Amit Konar},
journal={arXiv preprint arXiv:0811.0077},
year={2008},
archivePrefix={arXiv},
eprint={0811.0077},
primaryClass={cs.OH}
} | maiti2008approximation |
arxiv-5339 | 0811.0078 | A Swarm Intelligence Based Scheme for Complete and Fault-tolerant Identification of a Dynamical Fractional Order Process | <|reference_start|>A Swarm Intelligence Based Scheme for Complete and Fault-tolerant Identification of a Dynamical Fractional Order Process: System identification refers to estimation of process parameters and is a necessity in control theory. Physical systems usually have varying parameters. For such processes, accurate identification is particularly important. Online identification schemes are also needed for designing adaptive controllers. Real processes are usually of fractional order as opposed to the ideal integral order models. In this paper, we propose a simple and elegant scheme of estimating the parameters for such a fractional order process. A population of process models is generated and updated by particle swarm optimization (PSO) technique, the fitness function being the sum of squared deviations from the actual set of observations. Results show that the proposed scheme offers a high degree of accuracy even when the observations are corrupted to a significant degree. Additional schemes to improve the accuracy still further are also proposed and analyzed.<|reference_end|> | arxiv | @article{maiti2008a,
title={A Swarm Intelligence Based Scheme for Complete and Fault-tolerant
Identification of a Dynamical Fractional Order Process},
author={Deepyaman Maiti, Ayan Acharya, Amit Konar},
journal={arXiv preprint arXiv:0811.0078},
year={2008},
archivePrefix={arXiv},
eprint={0811.0078},
primaryClass={cs.OH}
} | maiti2008a |
arxiv-5340 | 0811.0079 | The Application of Stochastic Optimization Algorithms to the Design of a Fractional-order PID Controller | <|reference_start|>The Application of Stochastic Optimization Algorithms to the Design of a Fractional-order PID Controller: The Proportional-Integral-Derivative Controller is widely used in industries for process control applications. Fractional-order PID controllers are known to outperform their integer-order counterparts. In this paper, we propose a new technique of fractional-order PID controller synthesis based on peak overshoot and rise-time specifications. Our approach is to construct an objective function, the optimization of which yields a possible solution to the design problem. This objective function is optimized using two popular bio-inspired stochastic search algorithms, namely Particle Swarm Optimization and Differential Evolution. With the help of a suitable example, the superiority of the designed fractional-order PID controller to an integer-order PID controller is affirmed and a comparative study of the efficacy of the two above algorithms in solving the optimization problem is also presented.<|reference_end|> | arxiv | @article{chakraborty2008the,
title={The Application of Stochastic Optimization Algorithms to the Design of a
Fractional-order PID Controller},
author={Mithun Chakraborty, Deepyaman Maiti, Amit Konar},
journal={arXiv preprint arXiv:0811.0079},
year={2008},
archivePrefix={arXiv},
eprint={0811.0079},
primaryClass={cs.OH}
} | chakraborty2008the |
arxiv-5341 | 0811.0080 | A Deterministic Model for Analyzing the Dynamics of Ant System Algorithm and Performance Amelioration through a New Pheromone Deposition Approach | <|reference_start|>A Deterministic Model for Analyzing the Dynamics of Ant System Algorithm and Performance Amelioration through a New Pheromone Deposition Approach: Ant Colony Optimization (ACO) is a metaheuristic for solving difficult discrete optimization problems. This paper presents a deterministic model based on differential equation to analyze the dynamics of basic Ant System algorithm. Traditionally, the deposition of pheromone on different parts of the tour of a particular ant is always kept unvarying. Thus the pheromone concentration remains uniform throughout the entire path of an ant. This article introduces an exponentially increasing pheromone deposition approach by artificial ants to improve the performance of basic Ant System algorithm. The idea here is to introduce an additional attracting force to guide the ants towards destination more easily by constructing an artificial potential field identified by increasing pheromone concentration towards the goal. Apart from carrying out analysis of Ant System dynamics with both traditional and the newly proposed deposition rules, the paper presents an exhaustive set of experiments performed to find out suitable parameter ranges for best performance of Ant System with the proposed deposition approach. Simulations reveal that the proposed deposition rule outperforms the traditional one by a large extent both in terms of solution quality and algorithm convergence. Thus, the contributions of the article can be presented as follows: i) it introduces differential equation and explores a novel method of analyzing the dynamics of ant system algorithms, ii) it initiates an exponentially increasing pheromone deposition approach by artificial ants to improve the performance of algorithm in terms of solution quality and convergence time, iii) exhaustive experimentation performed facilitates the discovery of an algebraic relationship between the parameter set of the algorithm and feature of the problem environment.<|reference_end|> | arxiv | @article{acharya2008a,
title={A Deterministic Model for Analyzing the Dynamics of Ant System Algorithm
and Performance Amelioration through a New Pheromone Deposition Approach},
author={Ayan Acharya, Deepyaman Maiti, Amit Konar, Ramadoss Janarthanan},
journal={arXiv preprint arXiv:0811.0080},
year={2008},
doi={10.1109/ICIAFS.2008.4783979},
archivePrefix={arXiv},
eprint={0811.0080},
primaryClass={cs.OH}
} | acharya2008a |
arxiv-5342 | 0811.0083 | Tuning PID and FOPID Controllers using the Integral Time Absolute Error Criterion | <|reference_start|>Tuning PID and FOPID Controllers using the Integral Time Absolute Error Criterion: Particle swarm optimization (PSO) is extensively used for real parameter optimization in diverse fields of study. This paper describes an application of PSO to the problem of designing a fractional-order proportional-integral-derivative (FOPID) controller whose parameters comprise proportionality constant, integral constant, derivative constant, integral order (lambda) and derivative order (delta). The presence of five optimizable parameters makes the task of designing a FOPID controller more challenging than conventional PID controller design. Our design method focuses on minimizing the Integral Time Absolute Error (ITAE) criterion. The digital realization of the designed system utilizes the Tustin operator-based continued fraction expansion scheme. We carry out a simulation that illustrates the effectiveness of the proposed approach especially for realizing fractional-order plants. This paper also attempts to study the behavior of the fractional PID controller vis-a-vis that of its integer-order counterpart and demonstrates the superiority of the former to the latter.<|reference_end|> | arxiv | @article{maiti2008tuning,
title={Tuning PID and FOPID Controllers using the Integral Time Absolute Error
Criterion},
author={Deepyaman Maiti, Ayan Acharya, Mithun Chakraborty, Amit Konar,
Ramadoss Janarthanan},
journal={arXiv preprint arXiv:0811.0083},
year={2008},
archivePrefix={arXiv},
eprint={0811.0083},
primaryClass={cs.OH}
} | maiti2008tuning |
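The two controller-tuning records above (0811.0079 and 0811.0083) describe the same basic recipe: score a candidate FOPID parameter vector (Kp, Ki, Kd, lambda, delta) by an error criterion such as ITAE and let a swarm-based optimizer search the five-dimensional space. As a rough, hedged illustration only — not the authors' code, and with the closed-loop plant simulation left as a hypothetical stand-in named `simulate_step_error` — a minimal version of that loop could look like this:

```python
import numpy as np

def itae(t, e):
    """Integral Time Absolute Error: integral of t*|e(t)| dt over the simulation horizon."""
    return np.trapz(t * np.abs(e), t)

def pso_minimize(fitness, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO over a box-constrained parameter vector.
    `bounds` is an array of shape (dim, 2) holding per-parameter lower/upper limits."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pcost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([fitness(p) for p in x])
        improved = cost < pcost
        pbest[improved], pcost[improved] = x[improved], cost[improved]
        gbest = pbest[pcost.argmin()]
    return gbest, pcost.min()

# Candidate vector: (Kp, Ki, Kd, lambda, delta).  `simulate_step_error` is a
# hypothetical stand-in for the closed-loop simulation of the chosen plant; it
# should return the time grid and the step-response error for the candidate.
# bounds = np.array([[0, 10], [0, 10], [0, 10], [0, 2], [0, 2]], dtype=float)
# best, best_itae = pso_minimize(lambda p: itae(*simulate_step_error(p)), bounds)
```

Differential Evolution can be swapped in for the same fitness function; only the rule that updates the candidate population changes.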
arxiv-5343 | 0811.0113 | A Bayesian Framework for Opinion Updates | <|reference_start|>A Bayesian Framework for Opinion Updates: Opinion Dynamics lacks a theoretical basis. In this article, I propose to use a decision-theoretic framework, based on the updating of subjective probabilities, as that basis. We will see we get a basic tool for a better understanding of the interaction between the agents in Opinion Dynamics problems and for creating new models. I will review the few existing applications of Bayesian update rules to both discrete and continuous opinion problems and show that several traditional models can be obtained as special cases or approximations from these Bayesian models. The empirical basis and useful properties of the framework will be discussed and examples of how the framework can be used to describe different problems given.<|reference_end|> | arxiv | @article{martins2008a,
title={A Bayesian Framework for Opinion Updates},
author={Andre C. R. Martins},
journal={arXiv preprint arXiv:0811.0113},
year={2008},
number={In Liu Yijun and Zhou Tao, editors, Social Physics Catena (No.3),
pages 146-157. Science Press, Beijing},
archivePrefix={arXiv},
eprint={0811.0113},
primaryClass={physics.soc-ph cs.MA nlin.AO}
} | martins2008a |
arxiv-5344 | 0811.0123 | A computational model of affects | <|reference_start|>A computational model of affects: This article provides a simple logical structure, in which affective concepts (i.e. concepts related to emotions and feelings) can be defined. The set of affects defined is similar to the set of emotions covered in the OCC model (Ortony A., Collins A., and Clore G. L.: The Cognitive Structure of Emotions. Cambridge University Press, 1988), but the model presented in this article is fully computationally defined.<|reference_end|> | arxiv | @article{turkia2008a,
title={A computational model of affects},
author={Mika Turkia},
journal={Dietrich, D.; Fodor, G.; Zucker, G.; Bruckner, D. (Eds.):
Simulating the Mind. A Technical Neuropsychoanalytical Approach. Springer
2009, pp. 277-289},
year={2008},
archivePrefix={arXiv},
eprint={0811.0123},
primaryClass={cs.AI cs.MA}
} | turkia2008a |
arxiv-5345 | 0811.0131 | Balancing Exploration and Exploitation by an Elitist Ant System with Exponential Pheromone Deposition Rule | <|reference_start|>Balancing Exploration and Exploitation by an Elitist Ant System with Exponential Pheromone Deposition Rule: The paper presents an exponential pheromone deposition rule to modify the basic ant system algorithm which employs a constant deposition rule. A stability analysis using a differential equation is carried out to find out the values of parameters that make the ant system dynamics stable for both kinds of deposition rule. A roadmap of connected cities is chosen as the problem environment where the shortest route between two given cities is required to be discovered. Simulations performed with both forms of deposition approach using the Elitist Ant System model reveal that the exponential deposition approach outperforms the classical one by a large extent. Exhaustive experiments are also carried out to find out the optimum setting of different controlling parameters for the exponential deposition approach and an empirical relationship between the major controlling parameters of the algorithm and some features of the problem environment.<|reference_end|> | arxiv | @article{acharya2008balancing,
title={Balancing Exploration and Exploitation by an Elitist Ant System with
Exponential Pheromone Deposition Rule},
author={Ayan Acharya, Deepyaman Maiti, Aritra Banerjee, Amit Konar},
journal={arXiv preprint arXiv:0811.0131},
year={2008},
archivePrefix={arXiv},
eprint={0811.0131},
primaryClass={cs.AI}
} | acharya2008balancing |
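The two ant-system records above (0811.0080 and 0811.0131) build on the same modification of the Ant System update: instead of depositing the same amount of pheromone on every edge of a tour, the deposit grows exponentially toward the destination. The abstracts do not give the exact functional form, so the following sketch is an assumption (exponential growth with edge position, normalized to the same total as the constant rule), meant only to make the contrast concrete:

```python
import numpy as np

def constant_deposit(tour_len, q=1.0):
    """Classical rule: the same amount of pheromone on every edge of the tour."""
    return np.full(tour_len, q / tour_len)

def exponential_deposit(tour_len, q=1.0, a=0.5):
    """Assumed form of the non-uniform rule: the deposit grows exponentially toward
    the destination, so edges closer to the goal receive more pheromone."""
    w = np.exp(a * np.arange(tour_len))
    return q * w / w.sum()          # normalized so the total matches the constant rule

def update_pheromone(tau, tour_edges, deposit, rho=0.5):
    """Standard Ant System update: evaporation followed by deposition on visited edges."""
    tau *= (1.0 - rho)
    for (i, j), d in zip(tour_edges, deposit):
        tau[i, j] += d
        tau[j, i] += d
    return tau
```

With this reading, the classical rule is recovered as the a -> 0 limit of the exponential one; the rate a is a free parameter, not a value taken from the papers.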
arxiv-5346 | 0811.0133 | A Study of the Grunwald-Letnikov Definition for Minimizing the Effects of Random Noise on Fractional Order Differential Equations | <|reference_start|>A Study of the Grunwald-Letnikov Definition for Minimizing the Effects of Random Noise on Fractional Order Differential Equations: Of the many definitions for fractional order differintegral, the Grunwald-Letnikov definition is arguably the most important one. The necessity of this definition for the description and analysis of fractional order systems cannot be overstated. Unfortunately, the Fractional Order Differential Equation (FODE) describing such a system is, in its original form, highly sensitive to the effects of random noise components inevitable in a natural environment. Thus direct application of the definition in a real-life problem can yield erroneous results. In this article, we perform an in-depth mathematical analysis of the Grunwald-Letnikov definition and, as far as we know, we are the first to do so. Based on our analysis, we present a transformation scheme which will allow us to accurately analyze generalized fractional order systems in the presence of significant quantities of random errors. Finally, by a simple experiment, we demonstrate the high degree of robustness to noise offered by the said transformation and thus validate our scheme.<|reference_end|> | arxiv | @article{chakraborty2008a,
title={A Study of the Grunwald-Letnikov Definition for Minimizing the Effects
of Random Noise on Fractional Order Differential Equations},
author={Mithun Chakraborty, Deepyaman Maiti, Amit Konar, Ramadoss Janarthanan},
journal={arXiv preprint arXiv:0811.0133},
year={2008},
doi={10.1109/ICIAFS.2008.4783931},
archivePrefix={arXiv},
eprint={0811.0133},
primaryClass={cs.OH}
} | chakraborty2008a |
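The Grunwald-Letnikov differintegral that the record above (0811.0133) relies on is the limit D^alpha f(t) = lim_{h->0} h^(-alpha) * sum_{j>=0} (-1)^j C(alpha, j) f(t - j*h). A direct numerical transcription of that formula (a textbook sketch, not the transformation scheme proposed in the paper) also makes the noise-sensitivity argument tangible: the h^(-alpha) factor amplifies any additive noise in the samples of f.

```python
from math import pi, sqrt

def gl_derivative(f, alpha, t, h=1e-3):
    """Grunwald-Letnikov differintegral of f at time t:
    D^alpha f(t) ~ h**(-alpha) * sum_{j=0..t/h} (-1)**j * C(alpha, j) * f(t - j*h)."""
    n = int(round(t / h))
    w, acc = 1.0, 0.0
    for j in range(n + 1):
        acc += w * f(t - j * h)
        w *= (j - alpha) / (j + 1)      # recursion for (-1)^j * C(alpha, j)
    return acc / h ** alpha

# Sanity check: the half-derivative of f(t) = t is Gamma(2)/Gamma(1.5)*sqrt(t) = 2*sqrt(t/pi).
t = 1.0
print(gl_derivative(lambda x: x, 0.5, t), 2 * sqrt(t / pi))
```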
arxiv-5347 | 0811.0134 | A Novel Parser Design Algorithm Based on Artificial Ants | <|reference_start|>A Novel Parser Design Algorithm Based on Artificial Ants: This article presents a unique design for a parser using the Ant Colony Optimization algorithm. The paper implements the intuitive thought process of human mind through the activities of artificial ants. The scheme presented here uses a bottom-up approach and the parsing program can directly use ambiguous or redundant grammars. We allocate a node corresponding to each production rule present in the given grammar. Each node is connected to all other nodes (representing other production rules), thereby establishing a completely connected graph susceptible to the movement of artificial ants. Each ant tries to modify this sentential form by the production rule present in the node and upgrades its position until the sentential form reduces to the start symbol S. Successful ants deposit pheromone on the links that they have traversed through. Eventually, the optimum path is discovered by the links carrying maximum amount of pheromone concentration. The design is simple, versatile, robust and effective and obviates the calculation of the above mentioned sets and precedence relation tables. Further advantages of our scheme lie in i) ascertaining whether a given string belongs to the language represented by the grammar, and ii) finding out the shortest possible path from the given string to the start symbol S in case multiple routes exist.<|reference_end|> | arxiv | @article{maiti2008a,
title={A Novel Parser Design Algorithm Based on Artificial Ants},
author={Deepyaman Maiti, Ayan Acharya, Amit Konar, Janarthanan Ramadoss},
journal={arXiv preprint arXiv:0811.0134},
year={2008},
doi={10.1109/ICIAFS.2008.4783925},
archivePrefix={arXiv},
eprint={0811.0134},
primaryClass={cs.AI}
} | maiti2008a |
arxiv-5348 | 0811.0135 | Complete Identification of a Dynamic Fractional Order System Under Non-ideal Conditions Using Fractional Differintegral Definitions | <|reference_start|>Complete Identification of a Dynamic Fractional Order System Under Non-ideal Conditions Using Fractional Differintegral Definitions: This contribution deals with identification of fractional-order dynamical systems. System identification, which refers to estimation of process parameters, is a necessity in control theory. Real processes are usually of fractional order as opposed to the ideal integral order models. A simple and elegant scheme of estimating the parameters for such a fractional order process is proposed. This method employs fractional calculus theory to find equations relating the parameters that are to be estimated, and then estimates the process parameters after solving the simultaneous equations. The data used for the calculations are intentionally corrupted to simulate real-life conditions. Results show that the proposed scheme offers a very high degree of accuracy even for erroneous data.<|reference_end|> | arxiv | @article{maiti2008complete,
title={Complete Identification of a Dynamic Fractional Order System Under
Non-ideal Conditions Using Fractional Differintegral Definitions},
author={Deepyaman Maiti, Ayan Acharya, R. Janarthanan, Amit Konar},
journal={arXiv preprint arXiv:0811.0135},
year={2008},
doi={10.1109/ADCOM.2008.4760462},
archivePrefix={arXiv},
eprint={0811.0135},
primaryClass={cs.OH}
} | maiti2008complete |
arxiv-5349 | 0811.0136 | Extension of Max-Min Ant System with Exponential Pheromone Deposition Rule | <|reference_start|>Extension of Max-Min Ant System with Exponential Pheromone Deposition Rule: The paper presents an exponential pheromone deposition approach to improve the performance of classical Ant System algorithm which employs uniform deposition rule. A simplified analysis using differential equations is carried out to study the stability of basic ant system dynamics with both exponential and constant deposition rules. A roadmap of connected cities, where the shortest path between two specified cities are to be found out, is taken as a platform to compare Max-Min Ant System model (an improved and popular model of Ant System algorithm) with exponential and constant deposition rules. Extensive simulations are performed to find the best parameter settings for non-uniform deposition approach and experiments with these parameter settings revealed that the above approach outstripped the traditional one by a large extent in terms of both solution quality and convergence time.<|reference_end|> | arxiv | @article{acharya2008extension,
title={Extension of Max-Min Ant System with Exponential Pheromone Deposition
Rule},
author={Ayan Acharya, Deepyaman Maiti, Aritra Banerjee, R. Janarthanan, Amit
Konar},
journal={arXiv preprint arXiv:0811.0136},
year={2008},
doi={10.1109/ADCOM.2008.4760419},
archivePrefix={arXiv},
eprint={0811.0136},
primaryClass={cs.AI}
} | acharya2008extension |
arxiv-5350 | 0811.0137 | A Novel Approach for Complete Identification of Dynamic Fractional Order Systems Using Stochastic Optimization Algorithms and Fractional Calculus | <|reference_start|>A Novel Approach for Complete Identification of Dynamic Fractional Order Systems Using Stochastic Optimization Algorithms and Fractional Calculus: This contribution deals with identification of fractional-order dynamical systems. System identification, which refers to estimation of process parameters, is a necessity in control theory. Real processes are usually of fractional order as opposed to the ideal integral order models. A simple and elegant scheme of estimating the parameters for such a fractional order process is proposed. This method employs fractional calculus theory to find equations relating the parameters that are to be estimated, and then estimates the process parameters after solving the simultaneous equations. The said simultaneous equations are generated and updated using particle swarm optimization (PSO) technique, the fitness function being the sum of squared deviations from the actual set of observations. The data used for the calculations are intentionally corrupted to simulate real-life conditions. Results show that the proposed scheme offers a very high degree of accuracy even for erroneous data.<|reference_end|> | arxiv | @article{maiti2008a,
title={A Novel Approach for Complete Identification of Dynamic Fractional Order
Systems Using Stochastic Optimization Algorithms and Fractional Calculus},
author={Deepyaman Maiti, Mithun Chakraborty, Amit Konar},
journal={arXiv preprint arXiv:0811.0137},
year={2008},
doi={10.1109/ICECE.2008.4769333},
archivePrefix={arXiv},
eprint={0811.0137},
primaryClass={cs.OH}
} | maiti2008a |
arxiv-5351 | 0811.0139 | Entropy, Perception, and Relativity | <|reference_start|>Entropy, Perception, and Relativity: In this paper, I expand Shannon's definition of entropy into a new form of entropy that allows integration of information from different random events. Shannon's notion of entropy is a special case of my more general definition of entropy. I define probability using a so-called performance function, which is de facto an exponential distribution. Assuming that my general notion of entropy reflects the true uncertainty about a probabilistic event, I understand that our perceived uncertainty differs. I claim that our perception is the result of two opposing forces similar to the two famous antagonists in Chinese philosophy: Yin and Yang. Based on this idea, I show that our perceived uncertainty matches the true uncertainty in points determined by the golden ratio. I demonstrate that the well-known sigmoid function, which we typically employ in artificial neural networks as a non-linear threshold function, describes the actual performance. Furthermore, I provide a motivation for the time dilation in Einstein's Special Relativity, basically claiming that although time dilation conforms with our perception, it does not correspond to reality. At the end of the paper, I show how to apply this theoretical framework to practical applications. I present recognition rates for a pattern recognition problem, and also propose a network architecture that can take advantage of general entropy to solve complex decision problems.<|reference_end|> | arxiv | @article{jaeger2008entropy,
title={Entropy, Perception, and Relativity},
author={Stefan Jaeger},
journal={arXiv preprint arXiv:0811.0139},
year={2008},
number={LAMP-TR-131/CAR-TR-1012/CS-TR-4799/UMIACS-TR-2006-20},
archivePrefix={arXiv},
eprint={0811.0139},
primaryClass={cs.LG}
} | jaeger2008entropy
arxiv-5352 | 0811.0146 | Effect of Tuned Parameters on a LSA MCQ Answering Model | <|reference_start|>Effect of Tuned Parameters on a LSA MCQ Answering Model: This paper presents the current state of a work in progress, whose objective is to better understand the effects of factors that significantly influence the performance of Latent Semantic Analysis (LSA). A difficult task, which consists in answering (French) biology Multiple Choice Questions, is used to test the semantic properties of the truncated singular space and to study the relative influence of main parameters. A dedicated software has been designed to fine tune the LSA semantic space for the Multiple Choice Questions task. With optimal parameters, the performances of our simple model are quite surprisingly equal or superior to those of 7th and 8th grades students. This indicates that semantic spaces were quite good despite their low dimensions and the small sizes of training data sets. Besides, we present an original entropy global weighting of answers' terms of each question of the Multiple Choice Questions which was necessary to achieve the model's success.<|reference_end|> | arxiv | @article{lifchitz2008effect,
title={Effect of Tuned Parameters on a LSA MCQ Answering Model},
  author={Alain Lifchitz (LIP6), Sandra Jhean-Larose (LPC), Guy Denhière (LPC)},
journal={Behavior Research Methods, 41 (4), p. 1201-1209, November 2009},
year={2008},
doi={10.3758/BRM.41.4.1201},
archivePrefix={arXiv},
eprint={0811.0146},
primaryClass={cs.LG cs.AI stat.ML}
} | lifchitz2008effect |
arxiv-5353 | 0811.0152 | Theoretical Analysis of Compressive Sensing via Random Filter | <|reference_start|>Theoretical Analysis of Compressive Sensing via Random Filter: In this paper, the theoretical analysis of compressive sensing via random filter, first outlined by J. Romberg [compressive sensing by random convolution, submitted to SIAM Journal on Imaging Science on July 9, 2008], has been refined or generalized to the design of general random filters used for compressive sensing. This universal CS measurement consists of two parts: one is from the convolution of the unknown signal with a random waveform followed by random time-domain subsampling; the other is from direct time-domain subsampling of the unknown signal. It has been shown that the proposed approach is a universally efficient data acquisition strategy, which means that the n-dimensional signal which is S-sparse in any sparse representation can be exactly recovered from S log n measurements with overwhelming probability.<|reference_end|> | arxiv | @article{li2008theoretical,
title={Theoretical Analysis of Compressive Sensing via Random Filter},
author={Lianlin Li, Yin Xiang and Fang Li},
journal={arXiv preprint arXiv:0811.0152},
year={2008},
archivePrefix={arXiv},
eprint={0811.0152},
primaryClass={cs.IT math.IT}
} | li2008theoretical |
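The measurement process described in the record above combines convolution with a random waveform, random time-domain subsampling, and direct subsampling of the signal itself. The exact construction is not spelled out in the abstract, so the following is only a plausible toy version of such an acquisition operator (the +/-1 waveform, the split between the two measurement types, and the use of FFT-based circular convolution are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_filter_measurements(x, m_conv, m_direct):
    """Toy two-part acquisition: (1) circular convolution of x with a random +/-1
    waveform followed by random time-domain subsampling, (2) direct time-domain
    subsampling of x itself."""
    n = len(x)
    h = rng.choice([-1.0, 1.0], size=n)                      # random waveform (the "filter")
    y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))  # circular convolution
    idx_conv = rng.choice(n, size=m_conv, replace=False)
    idx_direct = rng.choice(n, size=m_direct, replace=False)
    return y[idx_conv], x[idx_direct], (h, idx_conv, idx_direct)

# Example: an S-sparse signal observed with on the order of S*log(n) measurements.
n, S = 1024, 10
x = np.zeros(n)
x[rng.choice(n, size=S, replace=False)] = rng.standard_normal(S)
m = int(np.ceil(S * np.log(n)))
y_conv, y_direct, seed_info = random_filter_measurements(x, m, m // 4)
```

Recovering x from these measurements would then be handed to a standard sparse solver (for example, l1 minimization), which is outside the scope of this sketch.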
arxiv-5354 | 0811.0166 | Automatic Modular Abstractions for Linear Constraints | <|reference_start|>Automatic Modular Abstractions for Linear Constraints: We propose a method for automatically generating abstract transformers for static analysis by abstract interpretation. The method focuses on linear constraints on programs operating on rational, real or floating-point variables and containing linear assignments and tests. In addition to loop-free code, the same method also applies for obtaining least fixed points as functions of the precondition, which permits the analysis of loops and recursive functions. Our algorithms are based on new quantifier elimination and symbolic manipulation techniques. Given the specification of an abstract domain, and a program block, our method automatically outputs an implementation of the corresponding abstract transformer. It is thus a form of program transformation. The motivation of our work is data-flow synchronous programming languages, used for building control-command embedded systems, but it also applies to imperative and functional programming.<|reference_end|> | arxiv | @article{monniaux2008automatic,
title={Automatic Modular Abstractions for Linear Constraints},
author={David Monniaux (VERIMAG - Imag)},
journal={arXiv preprint arXiv:0811.0166},
year={2008},
archivePrefix={arXiv},
eprint={0811.0166},
primaryClass={cs.PL cs.LO}
} | monniaux2008automatic |
arxiv-5355 | 0811.0174 | A Bit of Information Theory, and the Data Augmentation Algorithm Converges | <|reference_start|>A Bit of Information Theory, and the Data Augmentation Algorithm Converges: The data augmentation (DA) algorithm is a simple and powerful tool in statistical computing. In this note basic information theory is used to prove a nontrivial convergence theorem for the DA algorithm.<|reference_end|> | arxiv | @article{yu2008a,
title={A Bit of Information Theory, and the Data Augmentation Algorithm
Converges},
author={Yaming Yu},
journal={IEEE Transactions on Information Theory 54 (2008) 5186--5188},
year={2008},
doi={10.1109/TIT.2008.929918},
archivePrefix={arXiv},
eprint={0811.0174},
primaryClass={cs.IT math.IT stat.CO}
} | yu2008a |
arxiv-5356 | 0811.0196 | Reduced-Complexity Reed--Solomon Decoders Based on Cyclotomic FFTs | <|reference_start|>Reduced-Complexity Reed--Solomon Decoders Based on Cyclotomic FFTs: In this paper, we reduce the computational complexities of partial and dual partial cyclotomic FFTs (CFFTs), which are discrete Fourier transforms where spectral and temporal components are constrained, based on their properties as well as a common subexpression elimination algorithm. Our partial CFFTs achieve smaller computational complexities than previously proposed partial CFFTs. Utilizing our CFFTs in both transform- and time-domain Reed--Solomon decoders, we achieve significant complexity reductions.<|reference_end|> | arxiv | @article{chen2008reduced-complexity,
title={Reduced-Complexity Reed--Solomon Decoders Based on Cyclotomic FFTs},
author={Ning Chen and Zhiyuan Yan},
journal={arXiv preprint arXiv:0811.0196},
year={2008},
doi={10.1109/LSP.2009.2014292},
archivePrefix={arXiv},
eprint={0811.0196},
primaryClass={cs.IT math.IT}
} | chen2008reduced-complexity |
arxiv-5357 | 0811.0210 | Novel Blind Signal Classification Method Based on Data Compression | <|reference_start|>Novel Blind Signal Classification Method Based on Data Compression: This paper proposes a novel algorithm for signal classification problems. We consider a non-stationary random signal, where samples can be classified into several different classes, and samples in each class are independently and identically distributed with an unknown probability distribution. The problem to be solved is to estimate the probability distributions of the classes and the correct membership of the samples to the classes. We propose a signal classification method based on the data compression principle that the accurate estimation in the classification problems induces the optimal signal models for data compression. The method formulates the classification problem as an optimization problem, where a so-called "classification gain" is maximized. In order to circumvent the difficulties in integer optimization, we propose a continuous relaxation based algorithm. It is proven in this paper that asymptotically vanishing optimality loss is incurred by the continuous relaxation. We show by simulation results that the proposed algorithm is effective, robust and has low computational complexity. The proposed algorithm can be applied to solve various multimedia signal segmentation, analysis, and pattern recognition problems.<|reference_end|> | arxiv | @article{ma2008novel,
title={Novel Blind Signal Classification Method Based on Data Compression},
author={Xudong Ma},
journal={Proceeding of the 6th International Conference on Information
Technology : New Generations, Las Vegas, Nevada, April 27-29, 2009},
year={2008},
archivePrefix={arXiv},
eprint={0811.0210},
primaryClass={cs.IT math.IT}
} | ma2008novel |
arxiv-5358 | 0811.0241 | Joint Transmitter-Receiver Design for the Downlink Multiuser Spatial Multiplexing MIMO System | <|reference_start|>Joint Transmitter-Receiver Design for the Downlink Multiuser Spatial Multiplexing MIMO System: This paper proposes a joint transmitter-receiver design to minimize the weighted sum power under the post-processing signal-to-interference-and-noise ratio (post-SINR) constraints for all subchannels. Simulation results demonstrate that the algorithm can not only satisfy the post-SINR constraints but also easily adjust the power distribution among the users by changing the weights accordingly. Hence the algorithm can be used to alleviate the adjacent-cell interference by reducing the transmitting power to the edge users without performance penalty.<|reference_end|> | arxiv | @article{ma2008joint,
title={Joint Transmitter-Receiver Design for the Downlink Multiuser Spatial
Multiplexing MIMO System},
author={P. Ma (1), W. Wang (1), X. Zhao (1) and K. Zheng (1) ((1) Beijing
University of Posts and Telecommunications, Beijing, China)},
journal={arXiv preprint arXiv:0811.0241},
year={2008},
archivePrefix={arXiv},
eprint={0811.0241},
primaryClass={cs.IT math.IT}
} | ma2008joint |
arxiv-5359 | 0811.0254 | Characterizing Graphs of Zonohedra | <|reference_start|>Characterizing Graphs of Zonohedra: A classic theorem by Steinitz states that a graph G is realizable by a convex polyhedron if and only if G is 3-connected planar. Zonohedra are an important subclass of convex polyhedra having the property that the faces of a zonohedron are parallelograms and are in parallel pairs. In this paper we give a characterization of graphs of zonohedra. We also give a linear time algorithm to recognize such a graph. In our quest for finding the algorithm, we prove that in a zonohedron P both the number of zones and the number of faces in each zone is $O(\sqrt{n})$, where n is the number of vertices of P.<|reference_end|> | arxiv | @article{adnan2008characterizing,
title={Characterizing Graphs of Zonohedra},
author={Muhammad Abdullah Adnan and Masud Hasan},
journal={arXiv preprint arXiv:0811.0254},
year={2008},
archivePrefix={arXiv},
eprint={0811.0254},
primaryClass={cs.CG cs.DM cs.DS}
} | adnan2008characterizing |
arxiv-5360 | 0811.0273 | Efficient Energy Management Policies for Networks with Energy Harvesting Sensor Nodes | <|reference_start|>Efficient Energy Management Policies for Networks with Energy Harvesting Sensor Nodes: We study sensor networks with energy harvesting nodes. The generated energy at a node can be stored in a buffer. A sensor node periodically senses a random field and generates a packet. These packets are stored in a queue and transmitted using the energy available at that time at the node. For such networks we develop efficient energy management policies. First, for a single node, we obtain policies that are throughput optimal, i.e., the data queue stays stable for the largest possible data rate. Next we obtain energy management policies which minimize the mean delay in the queue. We also compare performance of several easily implementable suboptimal policies. A greedy policy is identified which, in low SNR regime, is throughput optimal and also minimizes mean delay. Next using the results for a single node, we develop efficient MAC policies.<|reference_end|> | arxiv | @article{sharma2008efficient,
title={Efficient Energy Management Policies for Networks with Energy Harvesting
Sensor Nodes},
author={Vinod Sharma, Utpal Mukherji and Vinay Joseph},
journal={arXiv preprint arXiv:0811.0273},
year={2008},
archivePrefix={arXiv},
eprint={0811.0273},
primaryClass={cs.NI}
} | sharma2008efficient |
arxiv-5361 | 0811.0285 | Some results on communicating the sum of sources over a network | <|reference_start|>Some results on communicating the sum of sources over a network: We consider the problem of communicating the sum of $m$ sources to $n$ terminals in a directed acyclic network. Recently, it was shown that for a network of unit capacity links with either $m=2$ or $n=2$, the sum of the sources can be communicated to the terminals if and only if every source-terminal pair is connected in the network. We show in this paper that for any finite set of primes, there exists a network where the sum of the sources can be communicated to the terminals only over finite fields of characteristic belonging to that set. As a corollary, this gives networks where the sum can not be communicated over any finite field even though every source is connected to every terminal.<|reference_end|> | arxiv | @article{rai2008some,
title={Some results on communicating the sum of sources over a network},
author={Brijesh Kumar Rai, Bikash Kumar Dey, and Abhay Karandikar},
journal={arXiv preprint arXiv:0811.0285},
year={2008},
archivePrefix={arXiv},
eprint={0811.0285},
primaryClass={cs.IT math.IT}
} | rai2008some |
arxiv-5362 | 0811.0310 | Edhibou: a Customizable Interface for Decision Support in a Semantic Portal | <|reference_start|>Edhibou: a Customizable Interface for Decision Support in a Semantic Portal: The Semantic Web is becoming more and more a reality, as the required technologies have reached an appropriate level of maturity. However, at this stage, it is important to provide tools facilitating the use and deployment of these technologies by end-users. In this paper, we describe EdHibou, an automatically generated, ontology-based graphical user interface that integrates in a semantic portal. The particularity of EdHibou is that it makes use of OWL reasoning capabilities to provide intelligent features, such as decision support, upon the underlying ontology. We present an application of EdHibou to medical decision support based on a formalization of clinical guidelines in OWL and show how it can be customized thanks to an ontology of graphical components.<|reference_end|> | arxiv | @article{badra2008edhibou:,
title={Edhibou: a Customizable Interface for Decision Support in a Semantic
Portal},
author={Fadi Badra (INRIA Lorraine - LORIA), Mathieu D'Aquin (KMI), Jean
Lieber (INRIA Lorraine - LORIA), Thomas Meilender (INRIA Lorraine - LORIA)},
journal={arXiv preprint arXiv:0811.0310},
year={2008},
archivePrefix={arXiv},
eprint={0811.0310},
primaryClass={cs.AI cs.HC}
} | badra2008edhibou: |
arxiv-5363 | 0811.0325 | Energy Benefit of Network Coding for Multiple Unicast in Wireless Networks | <|reference_start|>Energy Benefit of Network Coding for Multiple Unicast in Wireless Networks: We show that the maximum possible energy benefit of network coding for multiple unicast on wireless networks is at least 3. This improves the previously known lower bound of 2.4 from [1].<|reference_end|> | arxiv | @article{goseling2008energy,
title={Energy Benefit of Network Coding for Multiple Unicast in Wireless
Networks},
author={Jasper Goseling and Jos. H. Weber},
journal={Proceedings of the Twenty-ninth Symposium on Information Theory in
the Benelux (ISBN: 978-90-9023135-8), Leuven, Belgium, pp. 85-91, May 29-30,
2008},
year={2008},
archivePrefix={arXiv},
eprint={0811.0325},
primaryClass={cs.IT math.IT}
} | goseling2008energy |
arxiv-5364 | 0811.0335 | Cooperative interface of a swarm of UAVs | <|reference_start|>Cooperative interface of a swarm of UAVs: After presenting the broad context of authority sharing, we outline how introducing more natural interaction in the design of the ground operator interface of UV systems should help in allowing a single operator to manage the complexity of his/her task. Introducing new modalities is one of the means in the realization of our vision of next-generation GOI. A more fundamental aspect resides in the interaction manager which should help balance the workload of the operator between mission and interaction, notably by applying a multi-strategy approach to generation and interpretation. We intend to apply these principles to the context of the Smaart prototype, and in this perspective, we illustrate how to characterize the workload associated with a particular operational situation.<|reference_end|> | arxiv | @article{saget2008cooperative,
title={Cooperative interface of a swarm of UAVs},
author={Sylvie Saget, Francois Legras, Gilles Coppin},
journal={arXiv preprint arXiv:0811.0335},
year={2008},
archivePrefix={arXiv},
eprint={0811.0335},
primaryClass={cs.AI cs.HC cs.MA}
} | saget2008cooperative |
arxiv-5365 | 0811.0340 | Document stream clustering: experimenting an incremental algorithm and AR-based tools for highlighting dynamic trends | <|reference_start|>Document stream clustering: experimenting an incremental algorithm and AR-based tools for highlighting dynamic trends: We address here two major challenges presented by dynamic data mining: 1) the stability challenge: we have implemented a rigorous incremental density-based clustering algorithm, independent of any initial conditions and ordering of the data-vectors stream, 2) the cognitive challenge: we have implemented a stringent selection process of association rules between clusters at time t-1 and time t for directly generating the main conclusions about the dynamics of a data-stream. We illustrate these points with an application to a scientific information database spanning two years and 2600 documents.<|reference_end|> | arxiv | @article{lelu2008document,
title={Document stream clustering: experimenting an incremental algorithm and
AR-based tools for highlighting dynamic trends},
author={Alain Lelu (LASELDI), Martine Cadot, Pascal Cuxac (INIST)},
journal={International Workshop on Webometrics, Informetrics and
Scientometrics & Seventh COLLNET Meeting, France (2006)},
year={2008},
archivePrefix={arXiv},
eprint={0811.0340},
primaryClass={cs.AI}
} | lelu2008document |
arxiv-5366 | 0811.0359 | Embedding Non-Ground Logic Programs into Autoepistemic Logic for Knowledge Base Combination | <|reference_start|>Embedding Non-Ground Logic Programs into Autoepistemic Logic for Knowledge Base Combination: In the context of the Semantic Web, several approaches to the combination of ontologies, given in terms of theories of classical first-order logic and rule bases, have been proposed. They either cast rules into classical logic or limit the interaction between rules and ontologies. Autoepistemic logic (AEL) is an attractive formalism which allows to overcome these limitations, by serving as a uniform host language to embed ontologies and nonmonotonic logic programs into it. For the latter, so far only the propositional setting has been considered. In this paper, we present three embeddings of normal and three embeddings of disjunctive non-ground logic programs under the stable model semantics into first-order AEL. While the embeddings all correspond with respect to objective ground atoms, differences arise when considering non-atomic formulas and combinations with first-order theories. We compare the embeddings with respect to stable expansions and autoepistemic consequences, considering the embeddings by themselves, as well as combinations with classical theories. Our results reveal differences and correspondences of the embeddings and provide useful guidance in the choice of a particular embedding for knowledge combination.<|reference_end|> | arxiv | @article{de bruijn2008embedding,
title={Embedding Non-Ground Logic Programs into Autoepistemic Logic for
Knowledge Base Combination},
author={Jos de Bruijn, Thomas Eiter, Axel Polleres, and Hans Tompits},
journal={arXiv preprint arXiv:0811.0359},
year={2008},
archivePrefix={arXiv},
eprint={0811.0359},
primaryClass={cs.LO cs.AI}
} | de bruijn2008embedding |
arxiv-5367 | 0811.0381 | On the dynamics of Social Balance on general networks (with an application to XOR-SAT) | <|reference_start|>On the dynamics of Social Balance on general networks (with an application to XOR-SAT): We study nondeterministic and probabilistic versions of a discrete dynamical system (due to T. Antal, P. L. Krapivsky, and S. Redner) inspired by Heider's social balance theory. We investigate the convergence time of this dynamics on several classes of graphs. Our contributions include: 1. We point out the connection between the triad dynamics and a generalization of annihilating walks to hypergraphs. In particular, this connection allows us to completely characterize the recurrent states in graphs where each edge belongs to at most two triangles. 2. We also solve the case of hypergraphs that do not contain edges consisting of one or two vertices. 3. We show that on the so-called "triadic cycle" graph, the convergence time is linear. 4. We obtain a cubic upper bound on the convergence time on 2-regular triadic simplexes G. This bound can be further improved to a quantity that depends on the Cheeger constant of G. In particular this provides some rigorous counterparts to previous experimental observations. We also point out an application to the analysis of the random walk algorithm on certain instances of the 3-XOR-SAT problem.<|reference_end|> | arxiv | @article{istrate2008on,
title={On the dynamics of Social Balance on general networks (with an
application to XOR-SAT)},
author={Gabriel Istrate},
journal={Fundamenta Informaticae, 91 (2), pp. 341-356, 2009.},
year={2008},
archivePrefix={arXiv},
eprint={0811.0381},
primaryClass={cs.DM math.CO math.PR physics.soc-ph}
} | istrate2008on |
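The triad dynamics studied in the record above (due to Antal, Krapivsky and Redner) can be stated compactly: repeatedly pick a triangle and, if it is imbalanced, flip one of its edge signs so that it becomes balanced. The sketch below uses the simplest variant — the flipped edge is chosen uniformly at random on the complete graph — which is an assumption; the paper analyses refined update rules and general (hyper)graphs:

```python
import numpy as np
from itertools import combinations

def triad_dynamics(n, steps=10_000, seed=0):
    """Toy triad dynamics on K_n: repeatedly pick a random triangle and, if it is
    imbalanced (product of its edge signs is -1), flip one of its edges chosen
    uniformly at random (a simplified update rule)."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(n, n))
    s = np.triu(s, 1)
    s = s + s.T                                          # symmetric +/-1 edge signs
    for _ in range(steps):
        i, j, k = rng.choice(n, size=3, replace=False)
        if s[i, j] * s[j, k] * s[i, k] == -1:            # imbalanced triad
            a, b = [(i, j), (j, k), (i, k)][rng.integers(3)]
            s[a, b] *= -1
            s[b, a] *= -1
    return s

def imbalanced_fraction(s):
    """Fraction of triads that are still imbalanced, a crude convergence monitor."""
    n = s.shape[0]
    tris = list(combinations(range(n), 3))
    bad = sum(1 for i, j, k in tris if s[i, j] * s[j, k] * s[i, k] == -1)
    return bad / len(tris)
```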
arxiv-5368 | 0811.0405 | Predicting the popularity of online content | <|reference_start|>Predicting the popularity of online content: We present a method for accurately predicting the long time popularity of online content from early measurements of user access. Using two content sharing portals, Youtube and Digg, we show that by modeling the accrual of views and votes on content offered by these services we can predict the long-term dynamics of individual submissions from initial data. In the case of Digg, measuring access to given stories during the first two hours allows us to forecast their popularity 30 days ahead with remarkable accuracy, while downloads of Youtube videos need to be followed for 10 days to attain the same performance. The differing time scales of the predictions are shown to be due to differences in how content is consumed on the two portals: Digg stories quickly become outdated, while Youtube videos are still found long after they are initially submitted to the portal. We show that predictions are more accurate for submissions for which attention decays quickly, whereas predictions for evergreen content will be prone to larger errors.<|reference_end|> | arxiv | @article{szabo2008predicting,
title={Predicting the popularity of online content},
author={Gabor Szabo and Bernardo A. Huberman},
journal={arXiv preprint arXiv:0811.0405},
year={2008},
archivePrefix={arXiv},
eprint={0811.0405},
primaryClass={cs.CY cs.IR physics.soc-ph}
} | szabo2008predicting |
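The prediction method summarized in the record above rests on modelling the relation between early and long-term popularity after a logarithmic transformation. A minimal, hedged version of such a predictor (ordinary least squares on log counts, with a crude error band from the residual spread; not the paper's exact estimator) could be:

```python
import numpy as np

def fit_log_linear(early, late):
    """Fit ln(late) ~ a + b*ln(early) on historical items (least squares).
    Both count arrays are assumed to be strictly positive."""
    x, y = np.log(early), np.log(late)
    b, a = np.polyfit(x, y, 1)            # highest-degree coefficient first
    resid = y - (a + b * x)
    return a, b, resid.std()

def predict_late(early_new, a, b, sigma):
    """Point prediction of long-term popularity from an early count, with a crude
    multiplicative error band derived from the residual spread (an assumption)."""
    mean_log = a + b * np.log(early_new)
    return np.exp(mean_log), np.exp(mean_log - sigma), np.exp(mean_log + sigma)
```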
arxiv-5369 | 0811.0413 | Robust Linear Processing for Downlink Multiuser MIMO System With Imperfectly Known Channel | <|reference_start|>Robust Linear Processing for Downlink Multiuser MIMO System With Imperfectly Known Channel: This paper proposes a robust downlink multiuser MIMO scheme that exploits the channel mean and antenna correlations to alleviate the performance penalty due to the mismatch between the true and estimated CSI.<|reference_end|> | arxiv | @article{ma2008robust,
title={Robust Linear Processing for Downlink Multiuser MIMO System With
Imperfectly Known Channel},
author={Pengfei Ma (1), Xiaochuan Zhao (1), Mugen Peng (1), Wenbo Wang (1)
((1) Beijing University of Posts and Telecommunications, Beijing, China)},
journal={arXiv preprint arXiv:0811.0413},
year={2008},
archivePrefix={arXiv},
eprint={0811.0413},
primaryClass={cs.IT math.IT}
} | ma2008robust |
arxiv-5370 | 0811.0417 | Parametric Channel Estimation by Exploiting Hopping Pilots in Uplink OFDMA | <|reference_start|>Parametric Channel Estimation by Exploiting Hopping Pilots in Uplink OFDMA: This paper proposes a parametric channel estimation algorithm applicable to uplink of OFDMA systems with pseudo-random subchannelization. It exploits the hopping pilots to facilitate ESPRIT to estimate the delay subspace of the multipath fading channel, and utilizes the global pilot tones to interpolate on data subcarriers. Hence, it outperforms the traditional local channel interpolators considerably.<|reference_end|> | arxiv | @article{zhao2008parametric,
title={Parametric Channel Estimation by Exploiting Hopping Pilots in Uplink
OFDMA},
author={Xiaochuan Zhao (1), Tao Peng (1) and Wenbo Wang (1) ((1) Beijing
University of Posts and Telecommunications, Beijing, China)},
journal={arXiv preprint arXiv:0811.0417},
year={2008},
archivePrefix={arXiv},
eprint={0811.0417},
primaryClass={cs.IT math.IT}
} | zhao2008parametric |
arxiv-5371 | 0811.0419 | Doppler Spread Estimation by Subspace Tracking for OFDM Systems | <|reference_start|>Doppler Spread Estimation by Subspace Tracking for OFDM Systems: This paper proposes a novel maximum Doppler spread estimation algorithm for OFDM systems with the comb-type pilot pattern. By tracking the drifting delay subspace of the multipath channel, the time correlation function is measured at a high accuracy, which accordingly improves the estimation accuracy of the maximum Doppler spread considerably.<|reference_end|> | arxiv | @article{zhao2008doppler,
title={Doppler Spread Estimation by Subspace Tracking for OFDM Systems},
author={Xiaochuan Zhao (1), Tao Peng (1), Ming Yang (1) and Wenbo Wang (1)
((1) Beijing University of Posts and Telecommunications, Beijing, China)},
journal={arXiv preprint arXiv:0811.0419},
year={2008},
archivePrefix={arXiv},
eprint={0811.0419},
primaryClass={cs.IT math.IT}
} | zhao2008doppler |
arxiv-5372 | 0811.0430 | An Analysis of the Bias-Property of the Sample Auto-Correlation Matrices of Doubly Selective Fading Channels for OFDM Systems | <|reference_start|>An Analysis of the Bias-Property of the Sample Auto-Correlation Matrices of Doubly Selective Fading Channels for OFDM Systems: This paper derives the analytic expression of the sample auto-correlation matrix from the least-squares channel estimation of doubly selective fading channels for OFDM systems. According to the expression, the sample auto-correlation matrix exhibits a bias, which causes model mismatch and therefore deteriorates the performance of channel estimation. Numerical results demonstrate the bias property and confirm the corresponding analysis.<|reference_end|> | arxiv | @article{zhao2008an,
title={An Analysis of the Bias-Property of the Sample Auto-Correlation Matrices
of Doubly Selective Fading Channels for OFDM Systems},
author={Xiaochuan Zhao (1), Tao Peng (1), Ming Yang (1) and Wenbo Wang (1)
((1) Beijing University of Posts and Telecommunications, Beijing, China)},
journal={arXiv preprint arXiv:0811.0430},
year={2008},
archivePrefix={arXiv},
eprint={0811.0430},
primaryClass={cs.IT math.IT}
} | zhao2008an |
arxiv-5373 | 0811.0431 | On the Cramer-Rao Lower Bound for Frequency Correlation Matrices of Doubly Selective Fading Channels for OFDM Systems | <|reference_start|>On the Cramer-Rao Lower Bound for Frequency Correlation Matrices of Doubly Selective Fading Channels for OFDM Systems: The analytic expression of the CRLB and the maximum likelihood estimator for the sample frequency correlation matrices in doubly selective fading channels for OFDM systems are reported in this paper. According to the analytical and numerical results, the number of samples affects the average mean square error dominantly, while the SNR and the Doppler spread have only a negligible effect.<|reference_end|> | arxiv | @article{zhao2008on,
title={On the Cramer-Rao Lower Bound for Frequency Correlation Matrices of
Doubly Selective Fading Channels for OFDM Systems},
author={Xiaochuan Zhao (1), Ming Yang (1), Tao Peng (1) and Wenbo Wang (1)
((1) Beijing University of Posts and Telecommunications, Beijing, China)},
journal={arXiv preprint arXiv:0811.0431},
year={2008},
archivePrefix={arXiv},
eprint={0811.0431},
primaryClass={cs.IT math.IT}
} | zhao2008on |
arxiv-5374 | 0811.0433 | On the Cramer-Rao Lower Bound for Spatial Correlation Matrices of Doubly Selective Fading Channels for MIMO OFDM Systems | <|reference_start|>On the Cramer-Rao Lower Bound for Spatial Correlation Matrices of Doubly Selective Fading Channels for MIMO OFDM Systems: The analytic expression of the CRLB and the maximum likelihood estimator for spatial correlation matrices in time-varying multipath fading channels for MIMO OFDM systems are reported in this paper. The analytical and numerical results reveal that the number of samples and the order of frequency selectivity have a dominant impact on the CRLB. Moreover, the number of pilot tones, the SNR, and the normalized maximum Doppler spread together influence the effective order of frequency selectivity.<|reference_end|> | arxiv | @article{zhao2008on,
title={On the Cramer-Rao Lower Bound for Spatial Correlation Matrices of Doubly
Selective Fading Channels for MIMO OFDM Systems},
author={Xiaochuan Zhao (1), Tao Peng (1), Ming Yang (1) and Wenbo Wang (1)
((1) Beijing University of Posts and Telecommunications, Beijing, China)},
journal={arXiv preprint arXiv:0811.0433},
year={2008},
archivePrefix={arXiv},
eprint={0811.0433},
primaryClass={cs.IT math.IT}
} | zhao2008on |
arxiv-5375 | 0811.0436 | Instruction sequences for the production of processes | <|reference_start|>Instruction sequences for the production of processes: Single-pass instruction sequences under execution are considered to produce behaviours to be controlled by some execution environment. Threads as considered in thread algebra model such behaviours: upon each action performed by a thread, a reply from its execution environment determines how the thread proceeds. Threads in turn can be looked upon as producing processes as considered in process algebra. We show that, by apposite choice of basic instructions, all processes that can only be in a finite number of states can be produced by single-pass instruction sequences.<|reference_end|> | arxiv | @article{bergstra2008instruction,
title={Instruction sequences for the production of processes},
author={J. A. Bergstra, C. A. Middelburg},
journal={arXiv preprint arXiv:0811.0436},
year={2008},
number={PRG0814},
archivePrefix={arXiv},
eprint={0811.0436},
primaryClass={cs.PL cs.LO}
} | bergstra2008instruction |
arxiv-5376 | 0811.0452 | Doppler Spread Estimation by Tracking the Delay-Subspace for OFDM Systems in Doubly Selective Fading Channels | <|reference_start|>Doppler Spread Estimation by Tracking the Delay-Subspace for OFDM Systems in Doubly Selective Fading Channels: A novel maximum Doppler spread estimation algorithm for OFDM systems with a comb-type pilot pattern is presented in this paper. By tracking the drifting delay subspace of time-varying multipath channels, a Doppler-dependent parameter can be accurately measured and further expanded and transformed into a non-linear high-order polynomial equation, from which the maximum Doppler spread is readily solved by resorting to Newton's method. Its performance is demonstrated by simulations.<|reference_end|> | arxiv | @article{zhao2008doppler,
title={Doppler Spread Estimation by Tracking the Delay-Subspace for OFDM
Systems in Doubly Selective Fading Channels},
author={Xiaochuan Zhao (1), Tao Peng (1), Ming Yang (1) and Wenbo Wang (1)
((1) Beijing University of Posts and Telecommunications, Beijing, China)},
journal={arXiv preprint arXiv:0811.0452},
year={2008},
doi={10.1109/LSP.2008.2010812},
archivePrefix={arXiv},
eprint={0811.0452},
primaryClass={cs.IT math.IT}
} | zhao2008doppler |
arxiv-5377 | 0811.0453 | CoZo+ - A Content Zoning Engine for textual documents | <|reference_start|>CoZo+ - A Content Zoning Engine for textual documents: Content zoning can be understood as a segmentation of textual documents into zones. This is inspired by [6] who initially proposed an approach for the argumentative zoning of textual documents. With the prototypical CoZo+ engine, we focus on content zoning towards an automatic processing of textual streams while considering only the actors as the zones. We gain information that can be used to realize an automatic recognition of content for pre-defined actors. We understand CoZo+ as a necessary pre-step towards an automatic generation of summaries and to make intellectual ownership of documents detectable.<|reference_end|> | arxiv | @article{wagner2008cozo+,
title={CoZo+ - A Content Zoning Engine for textual documents},
author={Cynthia Wagner, and Christoph Schommer},
journal={arXiv preprint arXiv:0811.0453},
year={2008},
archivePrefix={arXiv},
eprint={0811.0453},
primaryClass={cs.CL cs.IR}
} | wagner2008cozo+ |
arxiv-5378 | 0811.0463 | Solving the P/NP Problem under Intrinsic Uncertainty | <|reference_start|>Solving the P/NP Problem under Intrinsic Uncertainty: Heisenberg's uncertainty principle states that it is not possible to compute both the position and momentum of an electron with absolute certainty. However, this computational limitation, which is central to quantum mechanics, has no counterpart in theoretical computer science. Here, I will show that we can distinguish between the complexity classes P and NP when we consider intrinsic uncertainty in our computations, and take uncertainty about whether a bit belongs to the program code or machine input into account. Given intrinsic uncertainty, every output is uncertain, and computations become meaningful only in combination with a confidence level. In particular, it is impossible to compute solutions with absolute certainty as this requires infinite run-time. Considering intrinsic uncertainty, I will present a function that is in NP but not in P, and thus prove that P is a proper subset of NP. I will also show that all traditional hard decision problems have polynomial-time algorithms that provide solutions with confidence under uncertainty.<|reference_end|> | arxiv | @article{jaeger2008solving,
title={Solving the P/NP Problem under Intrinsic Uncertainty},
author={Stefan Jaeger},
journal={arXiv preprint arXiv:0811.0463},
year={2008},
archivePrefix={arXiv},
eprint={0811.0463},
primaryClass={cs.CC}
} | jaeger2008solving |
arxiv-5379 | 0811.0475 | Secure Arithmetic Computation with No Honest Majority | <|reference_start|>Secure Arithmetic Computation with No Honest Majority: We study the complexity of securely evaluating arithmetic circuits over finite rings. This question is motivated by natural secure computation tasks. Focusing mainly on the case of two-party protocols with security against malicious parties, our main goals are to: (1) only make black-box calls to the ring operations and standard cryptographic primitives, and (2) minimize the number of such black-box calls as well as the communication overhead. We present several solutions which differ in their efficiency, generality, and underlying intractability assumptions. These include: 1. An unconditionally secure protocol in the OT-hybrid model which makes a black-box use of an arbitrary ring $R$, but where the number of ring operations grows linearly with (an upper bound on) $\log|R|$. 2. Computationally secure protocols in the OT-hybrid model which make a black-box use of an underlying ring, and in which the number of ring operations does not grow with the ring size. These results extend a previous approach of Naor and Pinkas for secure polynomial evaluation (SIAM J. Comput., 35(5), 2006). 3. A protocol for the rings $\mathbb{Z}_m=\mathbb{Z}/m\mathbb{Z}$ which only makes a black-box use of a homomorphic encryption scheme. When $m$ is prime, the (amortized) number of calls to the encryption scheme for each gate of the circuit is constant. All of our protocols are in fact UC-secure in the OT-hybrid model and can be generalized to multiparty computation with an arbitrary number of malicious parties.<|reference_end|> | arxiv | @article{ishai2008secure,
title={Secure Arithmetic Computation with No Honest Majority},
author={Yuval Ishai, Manoj Prabhakaran and Amit Sahai},
journal={arXiv preprint arXiv:0811.0475},
year={2008},
archivePrefix={arXiv},
eprint={0811.0475},
primaryClass={cs.CR cs.CC}
} | ishai2008secure |
arxiv-5380 | 0811.0537 | First-Order and Temporal Logics for Nested Words | <|reference_start|>First-Order and Temporal Logics for Nested Words: Nested words are a structured model of execution paths in procedural programs, reflecting their call and return nesting structure. Finite nested words also capture the structure of parse trees and other tree-structured data, such as XML. We provide new temporal logics for finite and infinite nested words, which are natural extensions of LTL, and prove that these logics are first-order expressively-complete. One of them is based on adding a "within" modality, evaluating a formula on a subword, to a logic CaRet previously studied in the context of verifying properties of recursive state machines (RSMs). The other logic, NWTL, is based on the notion of a summary path that uses both the linear and nesting structures. For NWTL we show that satisfiability is EXPTIME-complete, and that model-checking can be done in time polynomial in the size of the RSM model and exponential in the size of the NWTL formula (and is also EXPTIME-complete). Finally, we prove that first-order logic over nested words has the three-variable property, and we present a temporal logic for nested words which is complete for the two-variable fragment of first-order.<|reference_end|> | arxiv | @article{alur2008first-order,
title={First-Order and Temporal Logics for Nested Words},
author={Rajeev Alur (UPenn), Marcelo Arenas (PUC, Chile), Pablo Barcelo (U
Chile), Kousha Etessami (U Edinburgh), Neil Immerman (UMass), Leonid Libkin
  (Edinburgh)},
journal={Logical Methods in Computer Science, Volume 4, Issue 4 (November
25, 2008) lmcs:782},
year={2008},
doi={10.2168/LMCS-4(4:11)2008},
archivePrefix={arXiv},
eprint={0811.0537},
primaryClass={cs.LO}
} | alur2008first-order |
arxiv-5381 | 0811.0543 | Incomplete decode-and-forward protocol using distributed space-time block codes | <|reference_start|>Incomplete decode-and-forward protocol using distributed space-time block codes: In this work, we explore the introduction of distributed space-time codes in decode-and-forward (DF) protocols. A first protocol named the Asymmetric DF is presented. It is based on two phases of different lengths, defined so that signals can be fully decoded at relays. This strategy brings full diversity but the symbol rate is not optimal. To solve this problem a second protocol named the Incomplete DF is defined. It is based on an incomplete decoding at the relays reducing the length of the first phase. This last strategy brings both full diversity and full symbol rate. The outage probability and the simulation results show that the Incomplete DF has better performance than any existing DF protocol and than the non-orthogonal amplify-and-forward (NAF) strategy using the same space-time codes. Moreover the diversity-multiplexing gain tradeoff (DMT) of this new DF protocol is proven to be the same as the one of the NAF.<|reference_end|> | arxiv | @article{hucher2008incomplete,
title={Incomplete decode-and-forward protocol using distributed space-time
block codes},
author={Charlotte Hucher, Ghaya Rekaya-Ben Othman and Ahmed Saadani},
journal={arXiv preprint arXiv:0811.0543},
year={2008},
archivePrefix={arXiv},
eprint={0811.0543},
primaryClass={cs.IT math.IT}
} | hucher2008incomplete |
arxiv-5382 | 0811.0573 | A Web-Based Resource Model for eScience: Object Reuse & Exchange | <|reference_start|>A Web-Based Resource Model for eScience: Object Reuse & Exchange: Work in the Open Archives Initiative - Object Reuse and Exchange (OAI-ORE) focuses on an important aspect of infrastructure for eScience: the specification of the data model and a suite of implementation standards to identify and describe compound objects. These are objects that aggregate multiple sources of content including text, images, data, visualization tools, and the like. These aggregations are an essential product of eScience, and will become increasingly common in the age of data-driven scholarship. The OAI-ORE specifications conform to the core concepts of the Web architecture and the semantic Web, ensuring that applications that use them will integrate well into the general Web environment.<|reference_end|> | arxiv | @article{lagoze2008a,
title={A Web-Based Resource Model for eScience: Object Reuse & Exchange},
author={Carl Lagoze, Herbert Van de Sompel, Michael Nelson, Simeon Warner,
Robert Sanderson, Pete Johnston},
journal={arXiv preprint arXiv:0811.0573},
year={2008},
archivePrefix={arXiv},
eprint={0811.0573},
primaryClass={cs.DL}
} | lagoze2008a |
arxiv-5383 | 0811.0579 | UNL-French deconversion as transfer & generation from an interlingua with possible quality enhancement through offline human interaction | <|reference_start|>UNL-French deconversion as transfer & generation from an interlingua with possible quality enhancement through offline human interaction: We present the architecture of the UNL-French deconverter, which "generates" from the UNL interlingua by first "localizing" the UNL form for French, within UNL, and then applying slightly adapted but classical transfer and generation techniques, implemented in GETA's Ariane-G5 environment, supplemented by some UNL-specific tools. Online interaction can be used during deconversion to enhance output quality and is now used for development purposes. We show how interaction could be delayed and embedded in the postedition phase, which would then interact not directly with the output text, but indirectly with several components of the deconverter. Interacting online or offline can improve the quality not only of the utterance at hand, but also of the utterances processed later, as various preferences may be automatically changed to let the deconverter "learn".<|reference_end|> | arxiv | @article{sérasset2008unl-french,
title={UNL-French deconversion as transfer & generation from an interlingua
with possible quality enhancement through offline human interaction},
author={Gilles S\'erasset (IMAG, Clips - Imag, Lig), Christian Boitet (IMAG,
Clips - Imag, Lig)},
journal={MACHINE TRANSLATION SUMMIT VII, Singapour : Singapour (1999)},
year={2008},
archivePrefix={arXiv},
eprint={0811.0579},
primaryClass={cs.CL}
} | sérasset2008unl-french |
arxiv-5384 | 0811.0582 | Optimization of automatically generated multi-core code for the LTE RACH-PD algorithm | <|reference_start|>Optimization of automatically generated multi-core code for the LTE RACH-PD algorithm: Embedded real-time applications in communication systems require high processing power. Manual scheduling developed for single-processor applications is not suited to multi-core architectures. The Algorithm Architecture Matching (AAM) methodology optimizes static application implementation on multi-core architectures. The Random Access Channel Preamble Detection (RACH-PD) is an algorithm for non-synchronized access of Long Term Evolution (LTE) wireless networks. LTE aims to improve the spectral efficiency of the next generation cellular system. This paper describes a complete methodology for implementing the RACH-PD. AAM prototyping is applied to the RACH-PD which is modelled as a Synchronous DataFlow graph (SDF). An efficient implementation of the algorithm onto a multi-core DSP, the TI C6487, is then explained. Benchmarks for the solution are given.<|reference_end|> | arxiv | @article{pelcat2008optimization,
title={Optimization of automatically generated multi-core code for the LTE
RACH-PD algorithm},
author={Maxime Pelcat (IETR), Slaheddine Aridhi, Jean Fran\c{c}ois Nezan
(IETR)},
journal={DASIP 2008, Bruxelles : Belgique (2008)},
year={2008},
archivePrefix={arXiv},
eprint={0811.0582},
primaryClass={cs.MM cs.DC}
} | pelcat2008optimization |
arxiv-5385 | 0811.0602 | Classification dynamique d'un flux documentaire : une \'evaluation statique pr\'ealable de l'algorithme GERMEN | <|reference_start|>Classification dynamique d'un flux documentaire : une \'evaluation statique pr\'ealable de l'algorithme GERMEN: Data-stream clustering is an ever-expanding subdomain of knowledge extraction. Most of the past and present research effort aims at efficient scaling up for the huge data repositories. Our approach focuses on qualitative improvement, mainly for "weak signals" detection and precise tracking of topical evolutions in the framework of information watch - though scalability is intrinsically guaranteed in a possibly distributed implementation. Our GERMEN algorithm exhaustively picks up the whole set of density peaks of the data at time t, by identifying the local perturbations induced by the current document vector, such as changing cluster borders, or new/vanishing clusters. Optimality follows from the uniqueness 1) of the density landscape for any value of our zoom parameter, 2) of the cluster allocation operated by our border propagation rule. This results in a rigorous independence from the data presentation ranking or any initialization parameter. We present here as a first step the only assessment of a static view resulting from one year of the CNRS/INIST Pascal database in the field of geotechnics.<|reference_end|> | arxiv | @article{lelu2008classification,
title={Classification dynamique d'un flux documentaire : une \'evaluation
statique pr\'ealable de l'algorithme GERMEN},
author={Alain Lelu (LASELDI), Pascal Cuxac (INIST), Joel Johansson (INIST)},
journal={JADT 2006 : 8es Journ\'ees internationales d'Analyse statistique
des Donn\'ees Textuelles, France (2006)},
year={2008},
archivePrefix={arXiv},
eprint={0811.0602},
primaryClass={cs.AI}
} | lelu2008classification |
arxiv-5386 | 0811.0603 | Query Refinement by Multi Word Term expansions and semantic synonymy | <|reference_start|>Query Refinement by Multi Word Term expansions and semantic synonymy: We developed a system, TermWatch (https://stid-bdd.iut.univ-metz.fr/TermWatch/index.pl), which combines a linguistic extraction of terms, their structuring into a terminological network with a clustering algorithm. In this paper we explore its ability to integrate the most promising aspects of the studies on query refinement: choice of meaningful text units to cluster (domain terms), choice of tight semantic relations with which to cluster terms, structuring of terms in a network enabling a better perception of domain concepts. We have run this experiment on the 367 645 English abstracts of PASCAL 2005-2006 bibliographic database (http://www.inist.fr) and compared the structured terminological resource automatically built by TermWatch to the English segment of TermScience resource (http://termsciences.inist.fr/) containing 88 211 terms.<|reference_end|> | arxiv | @article{lux-pogodalla2008query,
title={Query Refinement by Multi Word Term expansions and semantic synonymy},
author={Veronika Lux-Pogodalla (INIST), Eric San Juan},
journal={InSciT2006, M\'erida : Espagne (2006)},
year={2008},
archivePrefix={arXiv},
eprint={0811.0603},
primaryClass={cs.IR}
} | lux-pogodalla2008query |
arxiv-5387 | 0811.0623 | Algorithmic complexity and randomness in elastic solids | <|reference_start|>Algorithmic complexity and randomness in elastic solids: A system comprised of an elastic solid and its response to an external random force sequence is shown to behave based on the principles of the theory of algorithmic complexity and randomness. The solid distorts the randomness of an input force sequence in a way proportional to its algorithmic complexity. We demonstrate this by numerical analysis of a one-dimensional vibrating elastic solid (the system) on which we apply a maximally random input force. The level of complexity of the system is controlled via external parameters. The output response is the field of displacements observed at several positions on the body. The algorithmic complexity and stochasticity of the resulting output displacement sequence is measured and compared against the complexity of the system. The results show that the higher the system complexity the more random-deficient the output sequence. This agrees with the theory introduced in [16] which states that physical systems such as this behave as algorithmic selection-rules which act on random actions in their surroundings.<|reference_end|> | arxiv | @article{ratsaby2008algorithmic,
title={Algorithmic complexity and randomness in elastic solids},
author={J. Ratsaby and J. Chaskalovic},
journal={arXiv preprint arXiv:0811.0623},
year={2008},
archivePrefix={arXiv},
eprint={0811.0623},
primaryClass={cs.CC cs.IT math.IT}
} | ratsaby2008algorithmic |
arxiv-5388 | 0811.0637 | Optimality of Myopic Sensing in Multi-Channel Opportunistic Access | <|reference_start|>Optimality of Myopic Sensing in Multi-Channel Opportunistic Access: We consider opportunistic communications over multiple channels where the state ("good" or "bad") of each channel evolves as independent and identically distributed Markov processes. A user, with limited sensing and access capability, chooses one channel to sense and subsequently access (based on the sensed channel state) in each time slot. A reward is obtained when the user senses and accesses a "good" channel. The objective is to design the optimal channel selection policy that maximizes the expected reward accrued over time. This problem can be generally cast as a Partially Observable Markov Decision Process (POMDP) or a restless multi-armed bandit process, to which optimal solutions are often intractable. We show in this paper that the myopic policy, with a simple and robust structure, achieves optimality under certain conditions. This result finds applications in opportunistic communications in fading environment, cognitive radio networks for spectrum overlay, and resource-constrained jamming and anti-jamming.<|reference_end|> | arxiv | @article{ahmad2008optimality,
title={Optimality of Myopic Sensing in Multi-Channel Opportunistic Access},
author={Sahand H.A. Ahmad, Mingyan Liu, Tara Javidi, Qing Zhao, Bhaskar
Krishnamachari},
journal={arXiv preprint arXiv:0811.0637},
year={2008},
archivePrefix={arXiv},
eprint={0811.0637},
primaryClass={cs.NI cs.IT math.IT}
} | ahmad2008optimality |
arxiv-5389 | 0811.0699 | A Note on the Inversion Complexity of Boolean Functions in Boolean Formulas | <|reference_start|>A Note on the Inversion Complexity of Boolean Functions in Boolean Formulas: In this note, we consider the minimum number of NOT operators in a Boolean formula representing a Boolean function. In circuit complexity theory, the minimum number of NOT gates in a Boolean circuit computing a Boolean function $f$ is called the inversion complexity of $f$. In 1958, Markov determined the inversion complexity of every Boolean function and particularly proved that $\lceil \log_2(n+1) \rceil$ NOT gates are sufficient to compute any Boolean function on $n$ variables. As far as we know, no result is known for inversion complexity in Boolean formulas, i.e., the minimum number of NOT operators in a Boolean formula representing a Boolean function. The aim of this note is showing that we can determine the inversion complexity of every Boolean function in Boolean formulas by arguments based on the study of circuit complexity.<|reference_end|> | arxiv | @article{morizumi2008a,
title={A Note on the Inversion Complexity of Boolean Functions in Boolean
Formulas},
author={Hiroki Morizumi},
journal={arXiv preprint arXiv:0811.0699},
year={2008},
archivePrefix={arXiv},
eprint={0811.0699},
primaryClass={cs.CC cs.DM}
} | morizumi2008a |
arxiv-5390 | 0811.0705 | The Design of Sparse Antenna Array | <|reference_start|>The Design of Sparse Antenna Array: The aim of antenna array synthesis is to achieve a desired radiation pattern with the minimum number of antenna elements. In this paper the antenna synthesis problem is studied from a totally new perspective. One of the key principles of compressive sensing is that the signal to be sensed should be sparse or compressible. This coincides with the requirement of a minimum number of elements in the antenna array synthesis problem. In this paper the number of antenna elements in the array can be efficiently reduced via compressive sensing, which shows a great improvement over the existing antenna synthesis method. Moreover, the desired radiation pattern can be achieved in a very short computation time, even shorter than that of the existing method. Numerical examples are presented to show the high efficiency of the proposed method.<|reference_end|> | arxiv | @article{li2008the,
title={The Design of Sparse Antenna Array},
author={Lianlin Li, Wenji Zhang and Fang Li},
journal={arXiv preprint arXiv:0811.0705},
year={2008},
archivePrefix={arXiv},
eprint={0811.0705},
primaryClass={cs.IT math.IT}
} | li2008the |
arxiv-5391 | 0811.0709 | MMOGs as Social Experiments: the Case of Environmental Laws | <|reference_start|>MMOGs as Social Experiments: the Case of Environmental Laws: In this paper we argue that Massively Multiplayer Online Games (MMOGs), also known as Large Games are an interesting research tool for policy experimentation. One of the major problems with lawmaking is that testing the laws is a difficult enterprise. Here we show that the concept of an MMOG can be used to experiment with environmental laws on a large scale, provided that the MMOG is a real game, i.e., it is fun, addictive, presents challenges that last, etc.. We present a detailed game concept as an initial step.<|reference_end|> | arxiv | @article{broekens2008mmogs,
title={MMOGs as Social Experiments: the Case of Environmental Laws},
author={Joost Broekens},
journal={arXiv preprint arXiv:0811.0709},
year={2008},
archivePrefix={arXiv},
eprint={0811.0709},
primaryClass={cs.CY}
} | broekens2008mmogs |
arxiv-5392 | 0811.0717 | Visualization of association graphs for assisting the interpretation of classifications | <|reference_start|>Visualization of association graphs for assisting the interpretation of classifications: Given a query on the PASCAL database maintained by the INIST, we design user interfaces to visualize and browse two types of graphs extracted from abstracts: 1) the graph of all associations between authors (co-author graph), 2) the graph of strong associations between authors and terms automatically extracted from abstracts and grouped using linguistic variations. We adapt for this purpose the TermWatch system that comprises a term extractor, a relation identifier which yields the terminological network and a clustering module. The results are output on two interfaces: a graphic one mapping the clusters in a 2D space and a terminological hypertext network allowing the user to interactively explore results and return to source texts.<|reference_end|> | arxiv | @article{juan2008visualization,
title={Visualization of association graphs for assisting the interpretation of
classifications},
author={Eric San Juan (INIST), Ivana Roche (INIST)},
journal={arXiv preprint arXiv:0811.0717},
year={2008},
archivePrefix={arXiv},
eprint={0811.0717},
primaryClass={stat.AP cs.DL cs.IR}
} | juan2008visualization |
arxiv-5393 | 0811.0719 | Web Usage Analysis: New Science Indicators and Co-usage | <|reference_start|>Web Usage Analysis: New Science Indicators and Co-usage: A new type of statistical analysis of scientific and technical information (STI) in the Web context is produced. We propose a set of indicators about Web users, visualized bibliographic records, and e-commercial transactions. In addition, we introduce two Web usage factors. Finally, we give an overview of the co-usage analysis. For these tasks, we introduce a computer based system, called Miri@d, which produces descriptive statistical information about the Web users' searching behaviour, and what is effectively used from a free access digital bibliographical database. The system is conceived as a server of statistical data which are computed beforehand, and as an interactive server for online statistical work. The results will be made available to analysts, who can use this descriptive statistical information as raw data for their indicator design tasks, and as input for multivariate data analysis, clustering analysis, and mapping. Managers also can exploit the results in order to improve management and decision-making.<|reference_end|> | arxiv | @article{polanco2008web,
title={Web Usage Analysis: New Science Indicators and Co-usage},
author={Xavier Polanco (INIST), Ivana Roche (INIST), Dominique Besagni (INIST)},
journal={S\'eminaire VSST 2006, Lille : France (2006)},
year={2008},
archivePrefix={arXiv},
eprint={0811.0719},
primaryClass={cs.IR stat.AP}
} | polanco2008web |
arxiv-5394 | 0811.0726 | Improved Capacity Scaling in Wireless Networks With Infrastructure | <|reference_start|>Improved Capacity Scaling in Wireless Networks With Infrastructure: This paper analyzes the impact and benefits of infrastructure support in improving the throughput scaling in networks of $n$ randomly located wireless nodes. The infrastructure uses multi-antenna base stations (BSs), in which the number of BSs and the number of antennas at each BS can scale at arbitrary rates relative to $n$. Under the model, capacity scaling laws are analyzed for both dense and extended networks. Two BS-based routing schemes are first introduced in this study: an infrastructure-supported single-hop (ISH) routing protocol with multiple-access uplink and broadcast downlink and an infrastructure-supported multi-hop (IMH) routing protocol. Then, their achievable throughput scalings are analyzed. These schemes are compared against two conventional schemes without BSs: the multi-hop (MH) transmission and hierarchical cooperation (HC) schemes. It is shown that a linear throughput scaling is achieved in dense networks, as in the case without help of BSs. In contrast, the proposed BS-based routing schemes can, under realistic network conditions, improve the throughput scaling significantly in extended networks. The gain comes from the following advantages of these BS-based protocols. First, more nodes can transmit simultaneously in the proposed scheme than in the MH scheme if the number of BSs and the number of antennas are large enough. Second, by improving the long-distance signal-to-noise ratio (SNR), the received signal power can be larger than that of the HC, enabling a better throughput scaling under extended networks. Furthermore, by deriving the corresponding information-theoretic cut-set upper bounds, it is shown under extended networks that a combination of four schemes IMH, ISH, MH, and HC is order-optimal in all operating regimes.<|reference_end|> | arxiv | @article{shin2008improved,
title={Improved Capacity Scaling in Wireless Networks With Infrastructure},
author={Won-Yong Shin, Sang-Woon Jeon, Natasha Devroye, Mai H. Vu, Sae-Young
Chung, Yong H. Lee, and Vahid Tarokh},
journal={arXiv preprint arXiv:0811.0726},
year={2008},
doi={10.1109/TIT.2011.2158881},
archivePrefix={arXiv},
eprint={0811.0726},
primaryClass={cs.IT math.IT}
} | shin2008improved |
arxiv-5395 | 0811.0731 | Cognitive OFDM network sensing: a free probability approach | <|reference_start|>Cognitive OFDM network sensing: a free probability approach: In this paper, a practical power detection scheme for OFDM terminals, based on recent free probability tools, is proposed. The objective is for the receiving terminal to determine the transmission power and the number of the surrounding base stations in the network. However, the system dimensions of the network model turn energy detection into an under-determined problem. The focus of this paper is then twofold: (i) discuss the maximum amount of information that an OFDM terminal can gather from the surrounding base stations in the network, (ii) propose a practical solution for blind cell detection using the free deconvolution tool. The efficiency of this solution is measured through simulations, which show better performance than the classical power detection methods.<|reference_end|> | arxiv | @article{couillet2008cognitive,
title={Cognitive OFDM network sensing: a free probability approach},
author={Romain Couillet, Merouane Debbah},
journal={arXiv preprint arXiv:0811.0731},
year={2008},
archivePrefix={arXiv},
eprint={0811.0731},
primaryClass={cs.IT cs.AI math.IT math.PR}
} | couillet2008cognitive |
arxiv-5396 | 0811.0741 | Data Mining-based Fragmentation of XML Data Warehouses | <|reference_start|>Data Mining-based Fragmentation of XML Data Warehouses: With the multiplication of XML data sources, many XML data warehouse models have been proposed to handle data heterogeneity and complexity in a way relational data warehouses fail to achieve. However, XML-native database systems currently suffer from limited performance, both in terms of manageable data volume and response time. Fragmentation helps address both these issues. Derived horizontal fragmentation is typically used in relational data warehouses and can definitely be adapted to the XML context. However, the number of fragments produced by classical algorithms is difficult to control. In this paper, we propose the use of a k-means-based fragmentation approach that allows the number of fragments to be controlled through its $k$ parameter. We experimentally compare its efficiency to classical derived horizontal fragmentation algorithms adapted to XML data warehouses and show its superiority.<|reference_end|> | arxiv | @article{mahboubi2008data,
title={Data Mining-based Fragmentation of XML Data Warehouses},
author={Hadj Mahboubi (ERIC), J\'er\^ome Darmont (ERIC)},
journal={ACM 11th International Workshop on Data Warehousing and OLAP
(CIKM/DOLAP 08), Napa Valley : \'Etats-Unis d'Am\'erique (2008)},
year={2008},
archivePrefix={arXiv},
eprint={0811.0741},
primaryClass={cs.DB}
} | mahboubi2008data |
arxiv-5397 | 0811.0764 | A Bayesian Framework for Collaborative Multi-Source Signal Detection | <|reference_start|>A Bayesian Framework for Collaborative Multi-Source Signal Detection: This paper introduces a Bayesian framework to detect multiple signals embedded in noisy observations from a sensor array. For various states of knowledge on the communication channel and the noise at the receiving sensors, a marginalization procedure based on recent tools of finite random matrix theory, in conjunction with the maximum entropy principle, is used to compute the hypothesis selection criterion. Quite remarkably, explicit expressions for the Bayesian detector are derived which enable one to decide on the presence of signal sources in a noisy wireless environment. The proposed Bayesian detector is shown to outperform the classical power detector when the noise power is known and provides very good performance for limited knowledge on the noise power. Simulations corroborate the theoretical results and quantify the gain achieved using the proposed Bayesian framework.<|reference_end|> | arxiv | @article{couillet2008a,
title={A Bayesian Framework for Collaborative Multi-Source Signal Detection},
author={Romain Couillet, Merouane Debbah},
journal={arXiv preprint arXiv:0811.0764},
year={2008},
archivePrefix={arXiv},
eprint={0811.0764},
primaryClass={cs.IT cs.AI math.IT math.PR}
} | couillet2008a |
arxiv-5398 | 0811.0777 | A random coding theorem for "modulo-two adder" source network | <|reference_start|>A random coding theorem for "modulo-two adder" source network: This paper has been withdrawn by the author, due to a crucial error in the proof of the main Theorem (Sec. 3). In particular, in deriving the bound on the probability of error (Eq. 10) the contribution of those pairs (x', y') that are not equal to (x, y) has not been considered. By adding the contribution of these pairs, one can verify that a region of rates similar to the Slepian-Wolf region will emerge. The author would like to acknowledge a critical review of the paper by Mr. Paul Cuff of Stanford University who first pointed out the error.<|reference_end|> | arxiv | @article{zia2008a,
title={A random coding theorem for "modulo-two adder" source network},
author={Amin Zia},
journal={arXiv preprint arXiv:0811.0777},
year={2008},
archivePrefix={arXiv},
eprint={0811.0777},
primaryClass={cs.IT math.IT}
} | zia2008a |
arxiv-5399 | 0811.0778 | A maximum entropy approach to OFDM channel estimation | <|reference_start|>A maximum entropy approach to OFDM channel estimation: In this work, a new Bayesian framework for OFDM channel estimation is proposed. Using Jaynes' maximum entropy principle to derive prior information, we successively tackle the situations when only the channel delay spread is a priori known, then when it is not known. Exploitation of the time-frequency dimensions is also considered in this framework, to derive the optimal channel estimation associated with some performance measure under any state of knowledge. Simulations corroborate the optimality claim and always prove as good as or better in performance than classical estimators.<|reference_end|> | arxiv | @article{couillet2008a,
title={A maximum entropy approach to OFDM channel estimation},
author={Romain Couillet, Merouane Debbah},
journal={arXiv preprint arXiv:0811.0778},
year={2008},
archivePrefix={arXiv},
eprint={0811.0778},
primaryClass={cs.IT math.IT math.PR}
} | couillet2008a |
arxiv-5400 | 0811.0811 | When are two algorithms the same? | <|reference_start|>When are two algorithms the same?: People usually regard algorithms as more abstract than the programs that implement them. The natural way to formalize this idea is that algorithms are equivalence classes of programs with respect to a suitable equivalence relation. We argue that no such equivalence relation exists.<|reference_end|> | arxiv | @article{blass2008when,
title={When are two algorithms the same?},
author={Andreas Blass (University of Michigan), Nachum Dershowitz (Tel Aviv
University), and Yuri Gurevich (Microsoft Research)},
journal={Bulletin of Symbolic Logic, vol. 15, no. 2, pp. 145-168, 2009},
year={2008},
archivePrefix={arXiv},
eprint={0811.0811},
primaryClass={cs.GL cs.DS cs.LO}
} | blass2008when |