corpus_id | paper_id | title | abstract | source | bibtex | citation_key |
---|---|---|---|---|---|---|
arxiv-6901 | 0903.4796 | Fast FPT algorithms for vertex subset and vertex partitioning problems using neighborhood unions | <|reference_start|>Fast FPT algorithms for vertex subset and vertex partitioning problems using neighborhood unions: We introduce the graph parameter boolean-width, related to the number of different unions of neighborhoods across a cut of a graph. Boolean-width is similar to rank-width, which is related to the number of $GF[2]$-sums (1+1=0) of neighborhoods instead of the boolean-sums (1+1=1) used for boolean-width. We give algorithms for a large class of NP-hard vertex subset and vertex partitioning problems that are FPT when parameterized by either boolean-width, rank-width or clique-width, with runtime single exponential in either parameter if given the pertinent optimal decomposition. To compare boolean-width versus rank-width or clique-width, we first show that for any graph, the square root of its boolean-width is never more than its rank-width. Next, we exhibit a class of graphs, the Hsu-grids, for which we can solve NP-hard problems in polynomial time, if we use the right parameter. An $n \times \frac{n}{10}$ Hsu-grid on ${1/10}n^2$ vertices has boolean-width $\Theta(\log n)$ and rank-width $\Theta(n)$. Moreover, any optimal rank-decomposition of such a graph will have boolean-width $\Theta(n)$, i.e. exponential in the optimal boolean-width. A main open problem is to approximate the boolean-width better than what is given by the algorithm for rank-width [Hlin\v{e}n\'y and Oum, 2008]<|reference_end|> | arxiv | @article{bui-xuan2009fast,
title={Fast FPT algorithms for vertex subset and vertex partitioning problems
using neighborhood unions},
author={B.-M. Bui-Xuan and J. A. Telle and M. Vatshelle (Department of
Informatics, University of Bergen, Norway)},
journal={arXiv preprint arXiv:0903.4796},
year={2009},
archivePrefix={arXiv},
eprint={0903.4796},
primaryClass={cs.DS cs.DM}
} | bui-xuan2009fast |
arxiv-6902 | 0903.4812 | A computational method for bounding the probability of reconstruction on trees | <|reference_start|>A computational method for bounding the probability of reconstruction on trees: For a tree Markov random field non-reconstruction is said to hold if as the depth of the tree goes to infinity the information that a typical configuration at the leaves gives about the value at the root goes to zero. The distribution of the measure at the root conditioned on a typical boundary can be computed using a distributional recurrence. However the exact computation is not feasible because the support of the distribution grows exponentially with the depth. In this work, we introduce a notion of a survey of a distribution over probability vectors which is a succinct representation of the true distribution. We show that a survey of the distribution of the measure at the root can be constructed by an efficient recursive algorithm. The key properties of surveys are that the size does not grow with the depth, they can be constructed recursively, and they still provide a good bound for the distance between the true conditional distribution and the unconditional distribution at the root. This approach applies to a large class of Markov random field models including randomly generated ones. As an application we show bounds on the reconstruction threshold for the Potts model on small-degree trees.<|reference_end|> | arxiv | @article{bhatnagar2009a,
title={A computational method for bounding the probability of reconstruction on
trees},
author={Nayantara Bhatnagar, Elitza Maneva},
journal={SIAM J. on Discrete Math, 25(2):854-871, 2011},
year={2009},
archivePrefix={arXiv},
eprint={0903.4812},
primaryClass={cs.DM}
} | bhatnagar2009a |
arxiv-6903 | 0903.4817 | An Exponential Lower Bound on the Complexity of Regularization Paths | <|reference_start|>An Exponential Lower Bound on the Complexity of Regularization Paths: For a variety of regularized optimization problems in machine learning, algorithms computing the entire solution path have been developed recently. Most of these methods are quadratic programs that are parameterized by a single parameter, as for example the Support Vector Machine (SVM). Solution path algorithms do not only compute the solution for one particular value of the regularization parameter but the entire path of solutions, making the selection of an optimal parameter much easier. It has been assumed that these piecewise linear solution paths have only linear complexity, i.e. linearly many bends. We prove that for the support vector machine this complexity can be exponential in the number of training points in the worst case. More strongly, we construct a single instance of n input points in d dimensions for an SVM such that at least \Theta(2^{n/2}) = \Theta(2^d) many distinct subsets of support vectors occur as the regularization parameter changes.<|reference_end|> | arxiv | @article{gärtner2009an,
title={An Exponential Lower Bound on the Complexity of Regularization Paths},
author={Bernd G\"artner, Martin Jaggi and Cl\'ement Maria},
journal={Journal of Computational Geometry (JoCG) 3(1), 168-195, 2012},
year={2009},
archivePrefix={arXiv},
eprint={0903.4817},
primaryClass={cs.LG cs.CG cs.CV math.OC stat.ML}
} | gärtner2009an |
arxiv-6904 | 0903.4826 | New Linear Codes from Matrix-Product Codes with Polynomial Units | <|reference_start|>New Linear Codes from Matrix-Product Codes with Polynomial Units: A new construction of codes from old ones is considered, it is an extension of the matrix-product construction. Several linear codes that improve the parameters of the known ones are presented.<|reference_end|> | arxiv | @article{hernando2009new,
title={New Linear Codes from Matrix-Product Codes with Polynomial Units},
author={Fernando Hernando and Diego Ruano},
journal={Adv. Math. Commun. 4 (2010), no. 3, 363-367},
year={2009},
doi={10.3934/amc.2012.6.259},
archivePrefix={arXiv},
eprint={0903.4826},
primaryClass={cs.IT math.IT}
} | hernando2009new |
arxiv-6905 | 0903.4856 | A Combinatorial Algorithm to Compute Regularization Paths | <|reference_start|>A Combinatorial Algorithm to Compute Regularization Paths: For a wide variety of regularization methods, algorithms computing the entire solution path have been developed recently. Solution path algorithms do not only compute the solution for one particular value of the regularization parameter but the entire path of solutions, making the selection of an optimal parameter much easier. Most of the currently used algorithms are not robust in the sense that they cannot deal with general or degenerate input. Here we present a new robust, generic method for parametric quadratic programming. Our algorithm directly applies to nearly all machine learning applications, where so far every application required its own different algorithm. We illustrate the usefulness of our method by applying it to a very low rank problem which could not be solved by existing path tracking methods, namely to compute part-worth values in choice based conjoint analysis, a popular technique from market research to estimate consumers preferences on a class of parameterized options.<|reference_end|> | arxiv | @article{gärtner2009a,
title={A Combinatorial Algorithm to Compute Regularization Paths},
author={Bernd G\"artner, Joachim Giesen, Martin Jaggi and Torsten Welsch},
journal={arXiv preprint arXiv:0903.4856},
year={2009},
archivePrefix={arXiv},
eprint={0903.4856},
primaryClass={cs.LG cs.AI cs.CV}
} | gärtner2009a |
arxiv-6906 | 0903.4860 | Learning Multiple Belief Propagation Fixed Points for Real Time Inference | <|reference_start|>Learning Multiple Belief Propagation Fixed Points for Real Time Inference: In the context of inference with expectation constraints, we propose an approach based on the "loopy belief propagation" algorithm LBP, as a surrogate to an exact Markov Random Field MRF modelling. A prior information composed of correlations among a large set of N variables, is encoded into a graphical model; this encoding is optimized with respect to an approximate decoding procedure LBP, which is used to infer hidden variables from an observed subset. We focus on the situation where the underlying data have many different statistical components, representing a variety of independent patterns. Considering a single parameter family of models we show how LBP may be used to encode and decode efficiently such information, without solving the NP hard inverse problem yielding the optimal MRF. Contrary to usual practice, we work in the non-convex Bethe free energy minimization framework, and manage to associate a belief propagation fixed point to each component of the underlying probabilistic mixture. The mean field limit is considered and yields an exact connection with the Hopfield model at finite temperature and steady state, when the number of mixture components is proportional to the number of variables. In addition, we provide an enhanced learning procedure, based on a straightforward multi-parameter extension of the model in conjunction with an effective continuous optimization procedure. This is performed using the stochastic search heuristic CMAES and yields a significant improvement with respect to the single parameter basic model.<|reference_end|> | arxiv | @article{furtlehner2009learning,
title={Learning Multiple Belief Propagation Fixed Points for Real Time
Inference},
author={Cyril Furtlehner, Jean-Marc Lasgouttes and Anne Auger},
journal={Physica A. Vol. 389(1), pp. 149-163 (2010)},
year={2009},
doi={10.1016/j.physa.2009.08.030},
archivePrefix={arXiv},
eprint={0903.4860},
primaryClass={cs.LG cond-mat.dis-nn physics.data-an}
} | furtlehner2009learning |
arxiv-6907 | 0903.4875 | Extensible Component Based Architecture for FLASH, A Massively Parallel, Multiphysics Simulation Code | <|reference_start|>Extensible Component Based Architecture for FLASH, A Massively Parallel, Multiphysics Simulation Code: FLASH is a publicly available high performance application code which has evolved into a modular, extensible software system from a collection of unconnected legacy codes. FLASH has been successful because its capabilities have been driven by the needs of scientific applications, without compromising maintainability, performance, and usability. In its newest incarnation, FLASH3 consists of inter-operable modules that can be combined to generate different applications. The FLASH architecture allows arbitrarily many alternative implementations of its components to co-exist and interchange with each other, resulting in greater flexibility. Further, a simple and elegant mechanism exists for customization of code functionality without the need to modify the core implementation of the source. A built-in unit test framework providing verifiability, combined with a rigorous software maintenance process, allow the code to operate simultaneously in the dual mode of production and development. In this paper we describe the FLASH3 architecture, with emphasis on solutions to the more challenging conflicts arising from solver complexity, portable performance requirements, and legacy codes. We also include results from user surveys conducted in 2005 and 2007, which highlight the success of the code.<|reference_end|> | arxiv | @article{dubey2009extensible,
title={Extensible Component Based Architecture for FLASH, A Massively Parallel,
Multiphysics Simulation Code},
author={A. Dubey, L.B. Reid, K. Weide, K. Antypas, M.K. Ganapathy, K. Riley,
D. Sheeler, A. Siegal},
journal={arXiv preprint arXiv:0903.4875},
year={2009},
doi={10.1016/j.parco.2009.08.001},
archivePrefix={arXiv},
eprint={0903.4875},
primaryClass={cs.SE}
} | dubey2009extensible |
arxiv-6908 | 0903.4898 | Asymptotic Optimality of the Static Frequency Caching in the Presence of Correlated Requests | <|reference_start|>Asymptotic Optimality of the Static Frequency Caching in the Presence of Correlated Requests: It is well known that the static caching algorithm that keeps the most frequently requested documents in the cache is optimal in case when documents are of the same size and requests are independent and equally distributed. However, it is hard to develop explicit and provably optimal caching algorithms when requests are statistically correlated. In this paper, we show that keeping the most frequently requested documents in the cache is still optimal for large cache sizes even if the requests are strongly correlated.<|reference_end|> | arxiv | @article{jelenkovic2009asymptotic,
title={Asymptotic Optimality of the Static Frequency Caching in the Presence of
Correlated Requests},
author={Predrag R. Jelenkovic and Ana Radovanovic},
journal={arXiv preprint arXiv:0903.4898},
year={2009},
archivePrefix={arXiv},
eprint={0903.4898},
primaryClass={cs.PF cs.DS}
} | jelenkovic2009asymptotic |
arxiv-6909 | 0903.4907 | The hardness of the independence and matching clutter of a graph | <|reference_start|>The hardness of the independence and matching clutter of a graph: A {\it clutter} (or {\it antichain} or {\it Sperner family}) $L$ is a pair $(V,E)$, where $V$ is a finite set and $E$ is a family of subsets of $V$ none of which is a subset of another. Usually, the elements of $V$ are called {\it vertices} of $L$, and the elements of $E$ are called {\it edges} of $L$. A subset $s_e$ of an edge $e$ of a clutter is called {\it recognizing} for $e$, if $s_e$ is not a subset of another edge. The {\it hardness} of an edge $e$ of a clutter is the ratio of the size of $e\textrm{'s}$ smallest recognizing subset to the size of $e$. The hardness of a clutter is the maximum hardness of its edges. We study the hardness of clutters arising from independent sets and matchings of graphs.<|reference_end|> | arxiv | @article{hambardzumyan2009the,
title={The hardness of the independence and matching clutter of a graph},
author={Sasun Hambardzumyan, Vahan V. Mkrtchyan, Vahe L. Musoyan, Hovhannes
Sargsyan},
journal={arXiv preprint arXiv:0903.4907},
year={2009},
archivePrefix={arXiv},
eprint={0903.4907},
primaryClass={cs.DM math.CO}
} | hambardzumyan2009the |
arxiv-6910 | 0903.4930 | Time manipulation technique for speeding up reinforcement learning in simulations | <|reference_start|>Time manipulation technique for speeding up reinforcement learning in simulations: A technique for speeding up reinforcement learning algorithms by using time manipulation is proposed. It is applicable to failure-avoidance control problems running in a computer simulation. Turning the time of the simulation backwards on failure events is shown to speed up the learning by 260% and improve the state space exploration by 12% on the cart-pole balancing task, compared to the conventional Q-learning and Actor-Critic algorithms.<|reference_end|> | arxiv | @article{kormushev2009time,
title={Time manipulation technique for speeding up reinforcement learning in
simulations},
author={Petar Kormushev, Kohei Nomoto, Fangyan Dong, Kaoru Hirota},
journal={International Journal of Cybernetics and Information Technologies,
vol. 8, no. 1, pp. 12-24, 2008},
year={2009},
archivePrefix={arXiv},
eprint={0903.4930},
primaryClass={cs.AI cs.LG cs.RO}
} | kormushev2009time |
arxiv-6911 | 0903.4939 | A Novel Algorithm for Compressive Sensing: Iteratively Reweighed Operator Algorithm (IROA) | <|reference_start|>A Novel Algorithm for Compressive Sensing: Iteratively Reweighed Operator Algorithm (IROA): Compressive sensing claims that the sparse signals can be reconstructed exactly from many fewer measurements than traditionally believed necessary. One of the issues ensuring the successful compressive sensing is to deal with the sparsity-constraint optimization. Up to now, many excellent theories, algorithms and software have been developed, for example, the so-called greedy algorithm and its variants, the sparse Bayesian algorithm, the convex optimization methods, and so on. The formulations for them consist of two terms, in which one is and the other is (, mostly, p=1 is adopted due to good characteristic of the convex function) (NOTE: without the loss of generality, itself is assumed to be sparse). It is noted that all of them specify the sparsity constraint by the second term. Different from them, the developed formulation in this paper consists of two terms where one is with () and the other is . For each iteration the measurement matrix (linear operator) is reweighed by determined by which is obtained in the previous iteration, so the proposed method is called the iteratively reweighed operator algorithm (IROA). Moreover, in order to save the computation time, another reweighed operation has been carried out; in particular, the columns of corresponding to small have been excluded out. Theoretical analysis and numerical simulations have shown that the proposed method overcomes the published algorithms.<|reference_end|> | arxiv | @article{li2009a,
title={A Novel Algorithm for Compressive Sensing: Iteratively Reweighed
Operator Algorithm (IROA)},
author={Lianlin Li, Fang Li},
journal={arXiv preprint arXiv:0903.4939},
year={2009},
archivePrefix={arXiv},
eprint={0903.4939},
primaryClass={cs.IT math.IT}
} | li2009a |
arxiv-6912 | 0903.4959 | TCP over 3G links: Problems and Solutions | <|reference_start|>TCP over 3G links: Problems and Solutions: This review paper presents analytical information regarding the transfer of TCP data flows on paths towards interconnected wireless systems, with emphasis on 3G cellular networks. The focus is on protocol modifications in face of problems arising from terminal mobility and wireless transmission. The objective of this paper is not to present an exhaustive review of the literature, but to filter out the causes of poor TCP performance in such systems and give a rationalized view of measures that can be taken against them.<|reference_end|> | arxiv | @article{koukoutsidis2009tcp,
title={TCP over 3G links: Problems and Solutions},
author={Ioannis Koukoutsidis},
journal={arXiv preprint arXiv:0903.4959},
year={2009},
archivePrefix={arXiv},
eprint={0903.4959},
primaryClass={cs.NI}
} | koukoutsidis2009tcp |
arxiv-6913 | 0903.4961 | Global Clock, Physical Time Order and Pending Period Analysis in Multiprocessor Systems | <|reference_start|>Global Clock, Physical Time Order and Pending Period Analysis in Multiprocessor Systems: In multiprocessor systems, various problems are treated with Lamport's logical clock and the resultant logical time orders between operations. However, one often needs to face the high complexities caused by the lack of logical time order information in practice. In this paper, we utilize the \emph{global clock} to infuse the so-called \emph{pending period} to each operation in a multiprocessor system, where the pending period is a time interval that contains the performed time of the operation. Further, we define the \emph{physical time order} for any two operations with disjoint pending periods. The physical time order is obeyed by any real execution in multiprocessor systems due to that it is part of the truly happened operation orders restricted by global clock, and it is then proven to be independent and consistent with traditional logical time orders. The above novel yet fundamental concepts enables new effective approaches for analyzing multiprocessor systems, which are named \emph{pending period analysis} as a whole. As a consequence of pending period analysis, many important problems of multiprocessor systems can be tackled effectively. As a significant application example, complete memory consistency verification, which was known as an NP-hard problem, can be solved with the complexity of $O(n^2)$ (where $n$ is the number of operations). Moreover, the two event ordering problems, which were proven to be Co-NP-Hard and NP-hard respectively, can both be solved with the time complexity of O(n) if restricted by pending period information.<|reference_end|> | arxiv | @article{chen2009global,
title={Global Clock, Physical Time Order and Pending Period Analysis in
Multiprocessor Systems},
author={Yunji Chen, Tianshi Chen, and Weiwu Hu},
journal={arXiv preprint arXiv:0903.4961},
year={2009},
archivePrefix={arXiv},
eprint={0903.4961},
primaryClass={cs.DC}
} | chen2009global |
arxiv-6914 | 0903.5024 | Analysis Paralysis: when to stop? | <|reference_start|>Analysis Paralysis: when to stop?: Analysis of a system constitutes the most important aspect of the systems development life cycle. But it is also the most confusing and time-consuming of all the stages. The critical question always remains: How much and till when to analyse? Ed Yourdon has called this phenomenon Analysis Paralysis. In this paper, I suggest a model which can actually help in arriving at a satisfactory answer to this problem.<|reference_end|> | arxiv | @article{bhardwaj2009analysis,
title={Analysis Paralysis: when to stop?},
author={Er. Akshay Bhardwaj},
journal={arXiv preprint arXiv:0903.5024},
year={2009},
archivePrefix={arXiv},
eprint={0903.5024},
primaryClass={cs.SE}
} | bhardwaj2009analysis |
arxiv-6915 | 0903.5045 | Digital Restoration of Ancient Papyri | <|reference_start|>Digital Restoration of Ancient Papyri: Image processing can be used for digital restoration of ancient papyri, that is, for a restoration performed on their digital images. The digital manipulation allows reducing the background signals and enhancing the readability of texts. In the case of very old and damaged documents, this is fundamental for identification of the patterns of letters. Some examples of restoration, obtained with an image processing which uses edges detection and Fourier filtering, are shown. One of them concerns 7Q5 fragment of the Dead Sea Scrolls.<|reference_end|> | arxiv | @article{sparavigna2009digital,
title={Digital Restoration of Ancient Papyri},
author={Amelia Sparavigna},
journal={arXiv preprint arXiv:0903.5045},
year={2009},
archivePrefix={arXiv},
eprint={0903.5045},
primaryClass={cs.CV}
} | sparavigna2009digital |
arxiv-6916 | 0903.5049 | SQS-graphs of extended 1-perfect codes | <|reference_start|>SQS-graphs of extended 1-perfect codes: A binary extended 1-perfect code $\mathcal C$ folds over its kernel via the Steiner quadruple systems associated with its codewords. The resulting folding, proposed as a graph invariant for $\mathcal C$, distinguishes among the 361 nonlinear codes $\mathcal C$ of kernel dimension $\kappa$ with $9\geq\kappa\geq 5$ obtained via Solov'eva-Phelps doubling construction. Each of the 361 resulting graphs has most of its nonloop edges expressible in terms of the lexicographically disjoint quarters of the products of the components of two of the ten 1-perfect partitions of length 8 classified by Phelps, and loops mostly expressible in terms of the lines of the Fano plane.<|reference_end|> | arxiv | @article{dejter2009sqs-graphs,
title={SQS-graphs of extended 1-perfect codes},
author={Italo J. Dejter},
journal={Congressus Numerantium, {\bf 193}(2008), 175--194, {\bf
MR}2487725; 94B05, (05C90)},
year={2009},
archivePrefix={arXiv},
eprint={0903.5049},
primaryClass={math.CO cs.IT math.IT}
} | dejter2009sqs-graphs |
arxiv-6917 | 0903.5054 | Flow of Activity in the Ouroboros Model | <|reference_start|>Flow of Activity in the Ouroboros Model: The Ouroboros Model is a new conceptual proposal for an algorithmic structure for efficient data processing in living beings as well as for artificial agents. Its central feature is a general repetitive loop where one iteration cycle sets the stage for the next. Sensory input activates data structures (schemata) with similar constituents encountered before, thus expectations are kindled. This corresponds to the highlighting of empty slots in the selected schema, and these expectations are compared with the actually encountered input. Depending on the outcome of this consumption analysis different next steps like search for further data or a reset, i.e. a new attempt employing another schema, are triggered. Monitoring of the whole process, and in particular of the flow of activation directed by the consumption analysis, yields valuable feedback for the optimum allocation of attention and resources including the selective establishment of useful new memory entries.<|reference_end|> | arxiv | @article{thomsen2009flow,
title={Flow of Activity in the Ouroboros Model},
author={Knud Thomsen},
journal={arXiv preprint arXiv:0903.5054},
year={2009},
archivePrefix={arXiv},
eprint={0903.5054},
primaryClass={cs.AI}
} | thomsen2009flow |
arxiv-6918 | 0903.5066 | Modified-CS: Modifying Compressive Sensing for Problems with Partially Known Support | <|reference_start|>Modified-CS: Modifying Compressive Sensing for Problems with Partially Known Support: We study the problem of reconstructing a sparse signal from a limited number of its linear projections when a part of its support is known, although the known part may contain some errors. The ``known" part of the support, denoted T, may be available from prior knowledge. Alternatively, in a problem of recursively reconstructing time sequences of sparse spatial signals, one may use the support estimate from the previous time instant as the ``known" part. The idea of our proposed solution (modified-CS) is to solve a convex relaxation of the following problem: find the signal that satisfies the data constraint and is sparsest outside of T. We obtain sufficient conditions for exact reconstruction using modified-CS. These are much weaker than those needed for compressive sensing (CS) when the sizes of the unknown part of the support and of errors in the known part are small compared to the support size. An important extension called Regularized Modified-CS (RegModCS) is developed which also uses prior signal estimate knowledge. Simulation comparisons for both sparse and compressible signals are shown.<|reference_end|> | arxiv | @article{vaswani2009modified-cs:,
title={Modified-CS: Modifying Compressive Sensing for Problems with Partially
Known Support},
author={Namrata Vaswani and Wei Lu},
journal={IEEE Trans. Signal Processing, pages 4595--4607, vol. 58 (9),
September 2010},
year={2009},
doi={10.1109/TSP.2010.2051150},
archivePrefix={arXiv},
eprint={0903.5066},
primaryClass={cs.IT math.IT math.ST stat.ME stat.TH}
} | vaswani2009modified-cs: |
arxiv-6919 | 0903.5074 | Analyzing Least Squares and Kalman Filtered Compressed Sensing | <|reference_start|>Analyzing Least Squares and Kalman Filtered Compressed Sensing: In recent work, we studied the problem of causally reconstructing time sequences of spatially sparse signals, with unknown and slow time-varying sparsity patterns, from a limited number of linear "incoherent" measurements. We proposed a solution called Kalman Filtered Compressed Sensing (KF-CS). The key idea is to run a reduced order KF only for the current signal's estimated nonzero coefficients' set, while performing CS on the Kalman filtering error to estimate new additions, if any, to the set. KF may be replaced by Least Squares (LS) estimation and we call the resulting algorithm LS-CS. In this work, (a) we bound the error in performing CS on the LS error and (b) we obtain the conditions under which the KF-CS (or LS-CS) estimate converges to that of a genie-aided KF (or LS), i.e. the KF (or LS) which knows the true nonzero sets.<|reference_end|> | arxiv | @article{vaswani2009analyzing,
title={Analyzing Least Squares and Kalman Filtered Compressed Sensing},
author={Namrata Vaswani},
journal={Proc. IEEE Intl. Conf. Acous. Speech Sig. Proc. (ICASSP), 2009},
year={2009},
doi={10.1109/ICASSP.2009.4960258},
archivePrefix={arXiv},
eprint={0903.5074},
primaryClass={cs.IT math.IT}
} | vaswani2009analyzing |
arxiv-6920 | 0903.5108 | Multi-mode Transmission for the MIMO Broadcast Channel with Imperfect Channel State Information | <|reference_start|>Multi-mode Transmission for the MIMO Broadcast Channel with Imperfect Channel State Information: This paper proposes an adaptive multi-mode transmission strategy to improve the spectral efficiency achieved in the multiple-input multiple-output (MIMO) broadcast channel with delayed and quantized channel state information. The adaptive strategy adjusts the number of active users, denoted as the transmission mode, to balance transmit array gain, spatial division multiplexing gain, and residual inter-user interference. Accurate closed-form approximations are derived for the achievable rates for different modes, which help identify the active mode that maximizes the average sum throughput for given feedback delay and channel quantization error. The proposed transmission strategy is combined with round-robin scheduling, and is shown to provide throughput gain over single-user MIMO at moderate signal-to-noise ratio. It only requires feedback of instantaneous channel state information from a small number of users. With a feedback load constraint, the proposed algorithm provides performance close to that achieved by opportunistic scheduling with instantaneous feedback from a large number of users.<|reference_end|> | arxiv | @article{zhang2009multi-mode,
title={Multi-mode Transmission for the MIMO Broadcast Channel with Imperfect
Channel State Information},
author={Jun Zhang, Marios Kountouris, Jeffrey G. Andrews, and Robert W. Heath
Jr},
journal={arXiv preprint arXiv:0903.5108},
year={2009},
doi={10.1109/TCOMM.2011.121410.100144},
archivePrefix={arXiv},
eprint={0903.5108},
primaryClass={cs.IT math.IT}
} | zhang2009multi-mode |
arxiv-6921 | 0903.5122 | A Constructive Generalization of Nash Equilibrium for Better Payoffs and Stability | <|reference_start|>A Constructive Generalization of Nash Equilibrium for Better Payoffs and Stability: In a society of completely selfish individuals where everybody is only interested in maximizing his own payoff, does any equilibrium exist for the society? John Nash proved more than 50 years ago that an equilibrium always exists such that nobody would benefit from unilaterally changing his strategy. Nash Equilibrium is a central concept in game theory, which offers a mathematical foundation for social science and economy. However, it is important from both a theoretical and a practical point of view to understand game playing where individuals are less selfish. This paper offers a constructive generalization of Nash equilibrium to study n-person games where the selfishness of individuals can be defined at any level, including the extreme of complete selfishness. The generalization is constructive since it offers a protocol for individuals in a society to reach an equilibrium. Most importantly, this paper presents experimental results and theoretical investigation to show that the individuals in a society can reduce their selfishness level together to reach a new equilibrium where they can have better payoffs and the society is more stable at the same time. This study suggests that, for the benefit of everyone in a society (including the financial market), the pursuit of maximal payoff by each individual should be controlled at some level either by voluntary good citizenship or by imposed regulations.<|reference_end|> | arxiv | @article{huang2009a,
title={A Constructive Generalization of Nash Equilibrium for Better Payoffs and
Stability},
author={Xiaofei Huang},
journal={arXiv preprint arXiv:0903.5122},
year={2009},
archivePrefix={arXiv},
eprint={0903.5122},
primaryClass={cs.GT cs.MA}
} | huang2009a |
arxiv-6922 | 0903.5154 | Generalised Proof-Nets for Compact Categories with Biproducts | <|reference_start|>Generalised Proof-Nets for Compact Categories with Biproducts: Just as conventional functional programs may be understood as proofs in an intuitionistic logic, so quantum processes can also be viewed as proofs in a suitable logic. We describe such a logic, the logic of compact closed categories and biproducts, presented both as a sequent calculus and as a system of proof-nets. This logic captures much of the necessary structure needed to represent quantum processes under classical control, while remaining agnostic to the fine details. We demonstrate how to represent quantum processes as proof-nets, and show that the dynamic behaviour of a quantum process is captured by the cut-elimination procedure for the logic. We show that the cut elimination procedure is strongly normalising: that is, that every legal way of simplifying a proof-net leads to the same, unique, normal form. Finally, taking some initial set of operations as non-logical axioms, we show that the resulting category of proof-nets is a representation of the free compact closed category with biproducts generated by those operations.<|reference_end|> | arxiv | @article{duncan2009generalised,
title={Generalised Proof-Nets for Compact Categories with Biproducts},
author={Ross Duncan},
journal={arXiv preprint arXiv:0903.5154},
year={2009},
archivePrefix={arXiv},
eprint={0903.5154},
primaryClass={math.CT cs.LO}
} | duncan2009generalised |
arxiv-6923 | 0903.5168 | Mathematical Model for Transformation of Sentences from Active Voice to Passive Voice | <|reference_start|>Mathematical Model for Transformation of Sentences from Active Voice to Passive Voice: Formal work in linguistics has both produced and used important mathematical tools. Motivated by a survey of models for context and word meaning, syntactic categories, phrase structure rules and trees, an attempt is being made in the present paper to present a mathematical model for structuring of sentences from active voice to passive voice, which is the form of a transitive verb whose grammatical subject serves as the patient, receiving the action of the verb. For this purpose we have parsed all sentences of a corpus and have generated Boolean groups for each of them. It has been observed that when we take constituents of the sentences as subgroups, the sequences of phrases form permutation groups. Application of the isomorphism property yields a permutation mapping between the important subgroups. It has resulted in a model for transformation of sentences from active voice to passive voice. A computer program has been written to enable the software developers to evolve grammar software for sentence transformations.<|reference_end|> | arxiv | @article{pandey2009mathematical,
title={Mathematical Model for Transformation of Sentences from Active Voice to
Passive Voice},
author={Rakesh Pandey, H.S. Dhami},
journal={arXiv preprint arXiv:0903.5168},
year={2009},
archivePrefix={arXiv},
eprint={0903.5168},
primaryClass={cs.CL}
} | pandey2009mathematical |
arxiv-6924 | 0903.5172 | Delocalization transition for the Google matrix | <|reference_start|>Delocalization transition for the Google matrix: We study the localization properties of eigenvectors of the Google matrix, generated both from the World Wide Web and from the Albert-Barabasi model of networks. We establish the emergence of a delocalization phase for the PageRank vector when network parameters are changed. In the phase of localized PageRank, a delocalization takes place in the complex plane of eigenvalues of the matrix, leading to delocalized relaxation modes. We argue that the efficiency of information retrieval by Google-type search is strongly affected in the phase of delocalized PageRank.<|reference_end|> | arxiv | @article{giraud2009delocalization,
title={Delocalization transition for the Google matrix},
author={Olivier Giraud, Bertrand Georgeot, Dima L. Shepelyansky},
journal={Phys. Rev. E 80, 026107 (2009)},
year={2009},
doi={10.1103/PhysRevE.80.026107},
archivePrefix={arXiv},
eprint={0903.5172},
primaryClass={cs.IR cond-mat.dis-nn nlin.AO}
} | giraud2009delocalization |
arxiv-6925 | 0903.5177 | RFID Authentication, Efficient Proactive Information Security within Computational Security | <|reference_start|>RFID Authentication, Efficient Proactive Information Security within Computational Security: We consider repeated communication sessions between an RFID Tag (e.g., Radio Frequency Identification, RFID Tag) and an RFID Verifier. A proactive information theoretic security scheme is proposed. The scheme is based on the assumption that the information exchanged during at least one of every n successive communication sessions is not exposed to an adversary. The Tag and the Verifier maintain a vector of n entries that is repeatedly refreshed by pairwise xoring entries, with a new vector of n entries that is randomly chosen by the Tag and sent to the Verifier as a part of each communication session. The general case in which the adversary does not listen in k > 0 sessions among any n successive communication sessions is also considered. A lower bound of n(k+1) for the number of random numbers used during any n successive communication sessions is proven. In other words, we prove that an algorithm must use at least n(k+1) new random numbers during any n successive communication sessions. Then a randomized scheme that uses only O(n log n) new random numbers is presented. A computationally secure scheme which is based on the information theoretic secure scheme is used to ensure that even in the case that the adversary listens in all the information exchanges, the communication between the Tag and the Verifier is secure.<|reference_end|> | arxiv | @article{dolev2009rfid,
title={RFID Authentication, Efficient Proactive Information Security within
Computational Security},
author={Shlomi Dolev, Marina Kopeetsky, Adi Shamir},
journal={arXiv preprint arXiv:0903.5177},
year={2009},
archivePrefix={arXiv},
eprint={0903.5177},
primaryClass={cs.CR}
} | dolev2009rfid |
arxiv-6926 | 0903.5188 | Quantum decision theory as quantum theory of measurement | <|reference_start|>Quantum decision theory as quantum theory of measurement: We present a general theory of quantum information processing devices, that can be applied to human decision makers, to atomic multimode registers, or to molecular high-spin registers. Our quantum decision theory is a generalization of the quantum theory of measurement, endowed with an action ring, a prospect lattice and a probability operator measure. The algebra of probability operators plays the role of the algebra of local observables. Because of the composite nature of prospects and of the entangling properties of the probability operators, quantum interference terms appear, which make actions noncommutative and the prospect probabilities non-additive. The theory provides the basis for explaining a variety of paradoxes typical of the application of classical utility theory to real human decision making. The principal advantage of our approach is that it is formulated as a self-consistent mathematical theory, which allows us to explain not just one effect but actually all known paradoxes in human decision making. Being general, the approach can serve as a tool for characterizing quantum information processing by means of atomic, molecular, and condensed-matter systems.<|reference_end|> | arxiv | @article{yukalov2009quantum,
title={Quantum decision theory as quantum theory of measurement},
author={V.I. Yukalov and D. Sornette},
journal={Phys. Lett. A 372 (2008) 6867-6871},
year={2009},
doi={10.1016/j.physleta.2008.09.053},
archivePrefix={arXiv},
eprint={0903.5188},
primaryClass={quant-ph cs.AI}
} | yukalov2009quantum |
arxiv-6927 | 0903.5201 | Mapping of transrectal ultrasonographic prostate biopsies: quality control and learning curve assessment by image processing | <|reference_start|>Mapping of transrectal ultrasonographic prostate biopsies: quality control and learning curve assessment by image processing: Objective: Mapping of transrectal ultrasonographic (TRUS) prostate biopsies is of fundamental importance for either diagnostic purposes or the management and treatment of prostate cancer, but the localization of the cores seems inaccurate. Our objective was to evaluate the capacities of an operator to plan transrectal prostate biopsies under 2-dimensional TRUS guidance using a registration algorithm to represent the localization of biopsies in a reference 3-dimensional ultrasonographic volume. Methods: Thirty-two patients underwent a series of 12 prostate biopsies under local anesthesia performed by 1 operator using a TRUS probe combined with specific third-party software to verify that the biopsies were indeed conducted within the planned targets. Results: The operator reached 71% of the planned targets with substantial variability that depended on their localization (100% success rate for targets in the middle and right parasagittal parts versus 53% for targets in the left lateral base). Feedback from this system after each series of biopsies enabled the operator to significantly improve his dexterity over the course of time (first 16 patients: median score, 7 of 10 and cumulated median biopsy length in targets of 90 mm; last 16 patients, median score, 9 of 10 and a cumulated median length of 121 mm; P = .046). Conclusions: In addition to being a useful tool to improve the distribution of prostate biopsies, the potential of this system is above all the preparation of a detailed "map" of each patient showing biopsy zones without substantial changes in routine clinical practices.<|reference_end|> | arxiv | @article{mozer2009mapping,
title={Mapping of transrectal ultrasonographic prostate biopsies: quality
control and learning curve assessment by image processing},
author={Pierre Mozer, Michael Baumann (TIMC), Gregoire Chevreau (TIMC),
Alexandre Moreau-Gaudry (TIMC, CHU-Grenoble CIC), Stephane Bart, Raphaele
Renard-Penna, Eva Comperat, Pierre Conort, Marc-Olivier Bitker, Emmanuel
Chartier-Kastler, Francois Richard, Jocelyne Troccaz (TIMC)},
journal={Journal of ultrasound in medicine : official journal of the
American Institute of Ultrasound in Medicine 28, 4 (2009) 455-60},
year={2009},
archivePrefix={arXiv},
eprint={0903.5201},
primaryClass={physics.med-ph cs.OH}
} | mozer2009mapping |
arxiv-6928 | 0903.5208 | On the Necessary and Sufficient Condition of Greedy Routing Supporting Geographical Data Networks | <|reference_start|>On the Necessary and Sufficient Condition of Greedy Routing Supporting Geographical Data Networks: Large scale decentralized communication systems have introduced the new trend towards online routing where routing decisions are performed based on a limited and localized knowledge of the network. Geometrical greedy routing has been among the simplest and most common online routing schemes. A perfect geometrical routing scheme is expected to deliver each packet to the point in the network that is closest to the packet destination. However greedy routing fails to guarantee such delivery as the greedy forwarding decision sometimes leads the packets to localized minimums. This article investigates the necessary and sufficient properties of the greedy supporting graphs that provide the guaranteed delivery of packets when acting as a routing substrate.<|reference_end|> | arxiv | @article{ghaffari2009on,
title={On the Necessary and Sufficient Condition of Greedy Routing Supporting
Geographical Data Networks},
author={M. Ghaffari, B. Hariri and S. Shirmohammadi},
journal={arXiv preprint arXiv:0903.5208},
year={2009},
archivePrefix={arXiv},
eprint={0903.5208},
primaryClass={cs.CG cs.NI}
} | ghaffari2009on |
arxiv-6929 | 0903.5221 | Computing Cylindrical Algebraic Decomposition via Triangular Decomposition | <|reference_start|>Computing Cylindrical Algebraic Decomposition via Triangular Decomposition: Cylindrical algebraic decomposition is one of the most important tools for computing with semi-algebraic sets, while triangular decomposition is among the most important approaches for manipulating constructible sets. In this paper, for an arbitrary finite set $F \subset {\R}[y_1, ..., y_n]$ we apply comprehensive triangular decomposition in order to obtain an $F$-invariant cylindrical decomposition of the $n$-dimensional complex space, from which we extract an $F$-invariant cylindrical algebraic decomposition of the $n$-dimensional real space. We report on an implementation of this new approach for constructing cylindrical algebraic decompositions.<|reference_end|> | arxiv | @article{chen2009computing,
title={Computing Cylindrical Algebraic Decomposition via Triangular
Decomposition},
author={Changbo Chen (1), Marc Moreno Maza (1), Bican Xia (2), Lu Yang (3)
((1) ORCCA, University of Western Ontario (UWO), London, Ontario, Canada, (2)
School of Mathematical Sciences, Peking University, Beijing, China, (3)
Shanghai Key Laboratory of Trustworthy Computing, East China Normal
University, Shanghai, China)},
journal={arXiv preprint arXiv:0903.5221},
year={2009},
archivePrefix={arXiv},
eprint={0903.5221},
primaryClass={cs.SC}
} | chen2009computing |
arxiv-6930 | 0903.5254 | Comparing Bibliometric Statistics Obtained from the Web of Science and Scopus | <|reference_start|>Comparing Bibliometric Statistics Obtained from the Web of Science and Scopus: For more than 40 years, the Institute for Scientific Information (ISI, now part of Thomson Reuters) produced the only available bibliographic databases from which bibliometricians could compile large-scale bibliometric indicators. ISI's citation indexes, now regrouped under the Web of Science (WoS), were the major sources of bibliometric data until 2004, when Scopus was launched by the publisher Reed Elsevier. For those who perform bibliometric analyses and comparisons of countries or institutions, the existence of these two major databases raises the important question of the comparability and stability of statistics obtained from different data sources. This paper uses macro-level bibliometric indicators to compare results obtained from the WoS and Scopus. It shows that the correlations between the measures obtained with both databases for the number of papers and the number of citations received by countries, as well as for their ranks, are extremely high (R2 > .99). There is also a very high correlation when countries' papers are broken down by field. The paper thus provides evidence that indicators of scientific production and citations at the country level are stable and largely independent of the database.<|reference_end|> | arxiv | @article{archambault2009comparing,
title={Comparing Bibliometric Statistics Obtained from the Web of Science and
Scopus},
author={Eric Archambault, David Campbell, Yves Gingras, Vincent Lariviere},
journal={arXiv preprint arXiv:0903.5254},
year={2009},
doi={10.1002/asi.21062},
archivePrefix={arXiv},
eprint={0903.5254},
primaryClass={cs.IR cs.DL}
} | archambault2009comparing |
arxiv-6931 | 0903.5259 | A System of Interaction and Structure IV: The Exponentials and Decomposition | <|reference_start|>A System of Interaction and Structure IV: The Exponentials and Decomposition: We study a system, called NEL, which is the mixed commutative/non-commutative linear logic BV augmented with linear logic's exponentials. Equivalently, NEL is MELL augmented with the non-commutative self-dual connective seq. In this paper, we show a basic compositionality property of NEL, which we call decomposition. This result leads to a cut-elimination theorem, which is proved in the next paper of this series. To control the induction measure for the theorem, we rely on a novel technique that extracts from NEL proofs the structure of exponentials, into what we call !-?-Flow-Graphs.<|reference_end|> | arxiv | @article{strassburger2009a,
title={A System of Interaction and Structure IV: The Exponentials and
Decomposition},
author={Lutz Strassburger and Alessio Guglielmi},
journal={ACM Transactions on Computational Logic 12(4), pp. 23:1-39. 2011},
year={2009},
doi={10.1145/1970398.1970399},
archivePrefix={arXiv},
eprint={0903.5259},
primaryClass={cs.LO}
} | strassburger2009a |
arxiv-6932 | 0903.5267 | Equitable Partitioning Policies for Mobile Robotic Networks | <|reference_start|>Equitable Partitioning Policies for Mobile Robotic Networks: The most widely applied strategy for workload sharing is to equalize the workload assigned to each resource. In mobile multi-agent systems, this principle directly leads to equitable partitioning policies in which (i) the workspace is divided into subregions of equal measure, (ii) there is a bijective correspondence between agents and subregions, and (iii) each agent is responsible for service requests originating within its own subregion. In this paper, we design provably correct, spatially-distributed and adaptive policies that allow a team of agents to achieve a convex and equitable partition of a convex workspace, where each subregion has the same measure. We also consider the issue of achieving convex and equitable partitions where subregions have shapes similar to those of regular polygons. Our approach is related to the classic Lloyd algorithm, and exploits the unique features of power diagrams. We discuss possible applications to routing of vehicles in stochastic and dynamic environments. Simulation results are presented and discussed.<|reference_end|> | arxiv | @article{pavone2009equitable,
title={Equitable Partitioning Policies for Mobile Robotic Networks},
author={Marco Pavone, Alessandro Arsie, Emilio Frazzoli, Francesco Bullo},
journal={arXiv preprint arXiv:0903.5267},
year={2009},
archivePrefix={arXiv},
eprint={0903.5267},
primaryClass={cs.RO}
} | pavone2009equitable |
arxiv-6933 | 0903.5282 | Multi-agent Q-Learning of Channel Selection in Multi-user Cognitive Radio Systems: A Two by Two Case | <|reference_start|>Multi-agent Q-Learning of Channel Selection in Multi-user Cognitive Radio Systems: A Two by Two Case: Resource allocation is an important issue in cognitive radio systems. It can be done by carrying out negotiation among secondary users. However, significant overhead may be incurred by the negotiation since the negotiation needs to be done frequently due to the rapid change of primary users' activity. In this paper, a channel selection scheme without negotiation is considered for multi-user and multi-channel cognitive radio systems. To avoid collision incurred by non-coordination, each secondary user learns how to select channels according to its experience. Multi-agent reinforcement learning (MARL) is applied in the framework of Q-learning by considering the opponent secondary users as a part of the environment. The dynamics of the Q-learning are illustrated using a Metrick-Polak plot. A rigorous proof of the convergence of Q-learning is provided via the similarity between Q-learning and the Robbins-Monro algorithm, as well as the analysis of convergence of the corresponding ordinary differential equation (via a Lyapunov function). Examples are illustrated and the performance of learning is evaluated by numerical simulations.<|reference_end|> | arxiv | @article{li2009multi-agent,
title={Multi-agent Q-Learning of Channel Selection in Multi-user Cognitive
Radio Systems: A Two by Two Case},
author={Husheng Li},
journal={arXiv preprint arXiv:0903.5282},
year={2009},
doi={10.1109/ICSMC.2009.5346172},
archivePrefix={arXiv},
eprint={0903.5282},
primaryClass={cs.IT math.IT}
} | li2009multi-agent |
arxiv-6934 | 0903.5289 | Heterogeneous knowledge representation using a finite automaton and first order logic: a case study in electromyography | <|reference_start|>Heterogeneous knowledge representation using a finite automaton and first order logic: a case study in electromyography: In a certain number of situations, human cognitive functioning is difficult to represent with classical artificial intelligence structures. Such a difficulty arises in the polyneuropathy diagnosis which is based on the spatial distribution, along the nerve fibres, of lesions, together with the synthesis of several partial diagnoses. Faced with this problem while building up an expert system (NEUROP), we developed a heterogeneous knowledge representation associating a finite automaton with first order logic. A number of knowledge representation problems raised by the electromyography test features are examined in this study and the expert system architecture allowing such a knowledge modeling are laid out.<|reference_end|> | arxiv | @article{rialle2009heterogeneous,
title={Heterogeneous knowledge representation using a finite automaton and
first order logic: a case study in electromyography},
author={Vincent Rialle (TIMC, DMIS), Annick Vila, Yves Besnard (TIMC)},
journal={Artificial Intelligence in Medicine 3, 2 (1991) 65-74},
year={2009},
archivePrefix={arXiv},
eprint={0903.5289},
primaryClass={cs.AI}
} | rialle2009heterogeneous |
arxiv-6935 | 0903.5316 | Sequences close to periodic | <|reference_start|>Sequences close to periodic: The paper is a survey of notions and results related to classical and new generalizations of the notion of a periodic sequence. The topics related to almost periodicity in combinatorics on words, symbolic dynamics, expressibility in logical theories, algorithmic computability, Kolmogorov complexity, number theory, are discussed.<|reference_end|> | arxiv | @article{muchnik2009sequences,
title={Sequences close to periodic},
author={An. A. Muchnik, Yu. L. Pritykin, A. L. Semenov},
journal={arXiv preprint arXiv:0903.5316},
year={2009},
doi={10.1070/RM2009v064n05ABEH004641},
archivePrefix={arXiv},
eprint={0903.5316},
primaryClass={cs.DM cs.FL}
} | muchnik2009sequences |
arxiv-6936 | 0903.5328 | A Stochastic View of Optimal Regret through Minimax Duality | <|reference_start|>A Stochastic View of Optimal Regret through Minimax Duality: We study the regret of optimal strategies for online convex optimization games. Using von Neumann's minimax theorem, we show that the optimal regret in this adversarial setting is closely related to the behavior of the empirical minimization algorithm in a stochastic process setting: it is equal to the maximum, over joint distributions of the adversary's action sequence, of the difference between a sum of minimal expected losses and the minimal empirical loss. We show that the optimal regret has a natural geometric interpretation, since it can be viewed as the gap in Jensen's inequality for a concave functional--the minimizer over the player's actions of expected loss--defined on a set of probability distributions. We use this expression to obtain upper and lower bounds on the regret of an optimal strategy for a variety of online learning problems. Our method provides upper bounds without the need to construct a learning algorithm; the lower bounds provide explicit optimal strategies for the adversary.<|reference_end|> | arxiv | @article{abernethy2009a,
title={A Stochastic View of Optimal Regret through Minimax Duality},
author={Jacob Abernethy, Alekh Agarwal, Peter L. Bartlett, Alexander Rakhlin},
journal={arXiv preprint arXiv:0903.5328},
year={2009},
archivePrefix={arXiv},
eprint={0903.5328},
primaryClass={cs.LG stat.ML}
} | abernethy2009a |
arxiv-6937 | 0903.5341 | Unspecified distribution in single disorder problem | <|reference_start|>Unspecified distribution in single disorder problem: We register a stochastic sequence affected by one disorder. Monitoring of the sequence is carried out in circumstances where full information about the distributions before and after the change is not available. The initial problem of disorder detection is transformed into optimal stopping of the observed sequence. A formula for the optimal decision functions is derived.<|reference_end|> | arxiv | @article{sarnowski2009unspecified,
title={Unspecified distribution in single disorder problem},
author={Wojciech Sarnowski, Krzysztof Szajowski},
journal={Applied Stochastic Models in Business and Industry. 2018, vol. 34,
no. 5, pp. 700-717},
year={2009},
doi={10.1002/asmb.2317},
number={Rap. Inst. Mat. Comp. Sci. PWr. 2009, Ser. PRE; no. 8},
archivePrefix={arXiv},
eprint={0903.5341},
primaryClass={math.PR cs.IT math.IT math.ST stat.TH}
} | sarnowski2009unspecified |
arxiv-6938 | 0903.5342 | Exact Non-Parametric Bayesian Inference on Infinite Trees | <|reference_start|>Exact Non-Parametric Bayesian Inference on Infinite Trees: Given i.i.d. data from an unknown distribution, we consider the problem of predicting future items. An adaptive way to estimate the probability density is to recursively subdivide the domain to an appropriate data-dependent granularity. A Bayesian would assign a data-independent prior probability to "subdivide", which leads to a prior over infinite(ly many) trees. We derive an exact, fast, and simple inference algorithm for such a prior, for the data evidence, the predictive distribution, the effective model dimension, moments, and other quantities. We prove asymptotic convergence and consistency results, and illustrate the behavior of our model on some prototypical functions.<|reference_end|> | arxiv | @article{hutter2009exact,
title={Exact Non-Parametric Bayesian Inference on Infinite Trees},
author={Marcus Hutter},
journal={arXiv preprint arXiv:0903.5342},
year={2009},
archivePrefix={arXiv},
eprint={0903.5342},
primaryClass={math.PR cs.LG math.ST stat.TH}
} | hutter2009exact |
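A minimal sketch of the tree-evidence recursion described in the record above, truncated at a finite depth and assuming the simplest variant in which a split divides the probability mass evenly between the two halves (the paper treats the infinite tree exactly and more general splits); all names are illustrative, not the author's code:

    def evidence(xs, a=0.0, b=1.0, p=0.5, depth=10):
        # With probability 1-p the density is uniform on [a, b); with
        # probability p the interval splits at its midpoint and, in this
        # simplified variant, the mass splits evenly between the halves.
        n = len(xs)
        uniform = (1.0 / (b - a)) ** n
        if depth == 0:
            return uniform
        m = 0.5 * (a + b)
        left = [x for x in xs if x < m]
        right = [x for x in xs if x >= m]
        split = ((0.5 ** n)
                 * evidence(left, a, m, p, depth - 1)
                 * evidence(right, m, b, p, depth - 1))
        return (1.0 - p) * uniform + p * split

With p = 1 the recursion reproduces the uniform density, a quick sanity check on the mass-splitting factor.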
arxiv-6939 | 0903.5346 | Cooperative Update Exchange in the Youtopia System | <|reference_start|>Cooperative Update Exchange in the Youtopia System: Youtopia is a platform for collaborative management and integration of relational data. At the heart of Youtopia is an update exchange abstraction: changes to the data propagate through the system to satisfy user-specified mappings. We present a novel change propagation model that combines a deterministic chase with human intervention. The process is fundamentally cooperative and gives users significant control over how mappings are repaired. An additional advantage of our model is that mapping cycles can be permitted without compromising correctness. We investigate potential harmful interference between updates in our model; we introduce two appropriate notions of serializability that avoid such interference if enforced. The first is very general and related to classical final-state serializability; the second is more restrictive but highly practical and related to conflict-serializability. We present an algorithm to enforce the latter notion. Our algorithm is an optimistic one, and as such may sometimes require updates to be aborted. We develop techniques for reducing the number of aborts and we test these experimentally.<|reference_end|> | arxiv | @article{kot2009cooperative,
title={Cooperative Update Exchange in the Youtopia System},
author={Lucja Kot and Christoph Koch},
journal={arXiv preprint arXiv:0903.5346},
year={2009},
archivePrefix={arXiv},
eprint={0903.5346},
primaryClass={cs.DB}
} | kot2009cooperative |
arxiv-6940 | 0903.5372 | A game theory approach for self-coexistence analysis among IEEE 80222 networks | <|reference_start|>A game theory approach for self-coexistence analysis among IEEE 80222 networks: This paper has been withdrawn by the author due to some errors<|reference_end|> | arxiv | @article{huang2009a,
title={A game theory approach for self-coexistence analysis among IEEE 802.22
networks},
author={Dong Huang and Zhiqi Shen and Chunyan Miao and Yuan Miao and Cyril Leung},
journal={arXiv preprint arXiv:0903.5372},
year={2009},
archivePrefix={arXiv},
eprint={0903.5372},
primaryClass={cs.IT cs.GT math.IT}
} | huang2009a |
arxiv-6941 | 0903.5392 | Quasipolynomial Normalisation in Deep Inference via Atomic Flows and Threshold Formulae | <|reference_start|>Quasipolynomial Normalisation in Deep Inference via Atomic Flows and Threshold Formulae: Je\v{r}\'abek showed that cuts in classical propositional logic proofs in deep inference can be eliminated in quasipolynomial time. The proof is indirect and it relies on a result of Atserias, Galesi and Pudl\'ak about monotone sequent calculus and a correspondence between that system and cut-free deep-inference proofs. In this paper we give a direct proof of Je\v{r}\'abek's result: we give a quasipolynomial-time cut-elimination procedure for classical propositional logic in deep inference. The main new ingredient is the use of a computational trace of deep-inference proofs called atomic flows, which are both very simple (they only trace structural rules and forget logical rules) and strong enough to faithfully represent the cut-elimination procedure.<|reference_end|> | arxiv | @article{bruscoli2009quasipolynomial,
title={Quasipolynomial Normalisation in Deep Inference via Atomic Flows and
Threshold Formulae},
author={Paola Bruscoli and Alessio Guglielmi and Tom Gundersen and Michel Parigot},
journal={Logical Methods in Computer Science, Volume 12, Issue 2 (May 3,
2016) lmcs:1637},
year={2009},
doi={10.1007/978-3-642-17511-4_9},
archivePrefix={arXiv},
eprint={0903.5392},
primaryClass={cs.CC cs.LO}
} | bruscoli2009quasipolynomial |
arxiv-6942 | 0903.5399 | Regret and Jeffreys Integrals in Exp Families | <|reference_start|>The problems of whether minimax redundancy, minimax regret, and Jeffreys integrals are finite or infinite are discussed.<|reference_end|> | arxiv | @article{grunwald2009regret,
title={Regret and Jeffreys Integrals in Exp. Families},
author={Peter Grunwald and Peter Harremoes},
journal={arXiv preprint arXiv:0903.5399},
year={2009},
archivePrefix={arXiv},
eprint={0903.5399},
primaryClass={cs.IT math.IT}
} | grunwald2009regret |
arxiv-6943 | 0903.5426 | Testing Goodness-of-Fit via Rate Distortion | <|reference_start|>A framework for statistical testing is developed using techniques from rate distortion theory. The idea is first to do optimal compression according to a certain distortion function and then to use the information divergence from the compressed empirical distribution to the compressed null hypothesis as the statistic. Only very special cases have been studied in more detail, but they indicate that the approach can be used under very general conditions.<|reference_end|> | arxiv | @article{harremoes2009testing,
title={Testing Goodness-of-Fit via Rate Distortion},
author={Peter Harremoes},
journal={arXiv preprint arXiv:0903.5426},
year={2009},
archivePrefix={arXiv},
eprint={0903.5426},
primaryClass={cs.IT math.IT math.ST stat.TH}
} | harremoes2009testing |
arxiv-6944 | 0903.5505 | Asymptotically almost all \lambda-terms are strongly normalizing | <|reference_start|>We present a quantitative analysis of various (syntactic and behavioral) properties of random \lambda-terms. Our main results are that asymptotically all the terms are strongly normalizing and that any fixed closed term almost never appears in a random term. Surprisingly, in combinatory logic (the translation of the \lambda-calculus into combinators), the result is exactly the opposite. We show that almost all terms are not strongly normalizing. This is due to the fact that any fixed combinator almost always appears in a random combinator.<|reference_end|> | arxiv | @article{david2009asymptotically,
title={Asymptotically almost all \lambda-terms are strongly normalizing},
author={Ren\'e David (LAMA) and Katarzyna Grygiel and Jakub Kozic and
Christophe Raffalli (LAMA) and Guillaume Theyssier (LAMA) and Marek Zaionc},
journal={Logical Methods in Computer Science, Volume 9, Issue 1 (February
15, 2013) lmcs:848},
year={2009},
doi={10.2168/LMCS-9(1:2)2013},
archivePrefix={arXiv},
eprint={0903.5505},
primaryClass={math.LO cs.DM cs.LO math.CO}
} | david2009asymptotically |
arxiv-6945 | 0904.0016 | Stochastic Models of User-Contributory Web Sites | <|reference_start|>Stochastic Models of User-Contributory Web Sites: We describe a general stochastic processes-based approach to modeling user-contributory web sites, where users create, rate and share content. These models describe aggregate measures of activity and how they arise from simple models of individual users. This approach provides a tractable method to understand user activity on the web site and how this activity depends on web site design choices, especially the choice of what information about other users' behaviors is shown to each user. We illustrate this modeling approach in the context of user-created content on the news rating site Digg.<|reference_end|> | arxiv | @article{hogg2009stochastic,
title={Stochastic Models of User-Contributory Web Sites},
author={Tad Hogg and Kristina Lerman},
journal={Proc. of the 3rd Intl Conf on Weblogs and Social Media
(ICWSM2009), pp. 50-57 (2009)},
year={2009},
archivePrefix={arXiv},
eprint={0904.0016},
primaryClass={cs.CY cs.IR}
} | hogg2009stochastic |
arxiv-6946 | 0904.0019 | On Solving Boolean Multilevel Optimization Problems | <|reference_start|>On Solving Boolean Multilevel Optimization Problems: Many combinatorial optimization problems entail a number of hierarchically dependent optimization problems. An often used solution is to associate a suitably large cost with each individual optimization problem, such that the solution of the resulting aggregated optimization problem solves the original set of hierarchically dependent optimization problems. This paper starts by studying the package upgradeability problem in software distributions. Straightforward solutions based on Maximum Satisfiability (MaxSAT) and pseudo-Boolean (PB) optimization are shown to be ineffective, and unlikely to scale for large problem instances. Afterwards, the package upgradeability problem is related to multilevel optimization. The paper then develops new algorithms for Boolean Multilevel Optimization (BMO) and highlights a large number of potential applications. The experimental results indicate that the proposed algorithms for BMO allow solving optimization problems that existing MaxSAT and PB solvers would otherwise be unable to solve.<|reference_end|> | arxiv | @article{argelich2009on,
title={On Solving Boolean Multilevel Optimization Problems},
author={Josep Argelich and Ines Lynce and Joao Marques-Silva},
journal={arXiv preprint arXiv:0904.0019},
year={2009},
archivePrefix={arXiv},
eprint={0904.0019},
primaryClass={cs.LO cs.AI}
} | argelich2009on |
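The weight-aggregation trick mentioned in the record above (a suitably large cost per level, so that one weighted MaxSAT call solves the whole hierarchy) can be sketched as follows; a hypothetical helper, not the paper's implementation:

    def multilevel_weights(levels):
        # levels: groups of soft clauses ordered from least to most
        # important; each group's clause weight strictly exceeds the total
        # weight of everything below it, so minimizing the weighted sum of
        # violated clauses optimizes the levels lexicographically
        weights, below = [], 0
        for group in levels:
            w = below + 1
            weights.append(w)
            below += w * len(group)
        return weights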
arxiv-6947 | 0904.0027 | Faith in the Algorithm, Part 2: Computational Eudaemonics | <|reference_start|>Faith in the Algorithm, Part 2: Computational Eudaemonics: Eudaemonics is the study of the nature, causes, and conditions of human well-being. According to the ethical theory of eudaemonia, reaping satisfaction and fulfillment from life is not only a desirable end, but a moral responsibility. However, in modern society, many individuals struggle to meet this responsibility. Computational mechanisms could better enable individuals to achieve eudaemonia by yielding practical real-world systems that embody algorithms that promote human flourishing. This article presents eudaemonic systems as the evolutionary goal of the present day recommender system.<|reference_end|> | arxiv | @article{rodriguez2009faith,
title={Faith in the Algorithm, Part 2: Computational Eudaemonics},
author={Marko A. Rodriguez and Jennifer H. Watkins},
journal={Proceedings of the International Conference on Knowledge-Based and
Intelligent Information & Engineering Systems, Invited Session: Innovations
in Intelligent Systems, Lecture Notes in Artificial Intelligence,
Springer-Verlag, October 2009.},
year={2009},
number={LA-UR-09-02095},
archivePrefix={arXiv},
eprint={0904.0027},
primaryClass={cs.CY cs.AI}
} | rodriguez2009faith |
arxiv-6948 | 0904.0029 | Learning for Dynamic subsumption | <|reference_start|>In this paper a new dynamic subsumption technique for Boolean CNF formulae is proposed. It exploits simple and sufficient conditions to detect, during conflict analysis, clauses from the original formula that can be reduced by subsumption. During the learnt clause derivation, and at each step of the resolution process, we simply check for backward subsumption between the current resolvent and clauses from the original formula encoded in the implication graph. Our approach gives rise to a strong and dynamic simplification technique that exploits learning to eliminate literals from the original clauses. Experimental results show that the integration of our dynamic subsumption approach within the state-of-the-art SAT solvers Minisat and Rsat achieves interesting improvements, particularly on crafted instances.<|reference_end|> | arxiv | @article{hamadi2009learning,
title={Learning for Dynamic subsumption},
author={Youssef Hamadi and Said Jabbour and Lakhdar Sais},
journal={arXiv preprint arXiv:0904.0029},
year={2009},
archivePrefix={arXiv},
eprint={0904.0029},
primaryClass={cs.AI}
} | hamadi2009learning |
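The subsumption checks at the core of the record above reduce to set inclusion between clauses; a minimal sketch with integer literals (negation by sign), not the solver-internal version:

    def subsumes(c, d):
        # clause c subsumes clause d iff every literal of c occurs in d
        return set(c).issubset(d)

    def try_strengthen(resolvent, clause):
        # backward subsumption: the resolvent replaces a weaker clause
        if subsumes(resolvent, clause):
            return list(resolvent)
        # self-subsuming resolution: if resolvent minus {l} subsumes the
        # clause and ~l occurs in it, the literal ~l can be dropped
        for l in resolvent:
            if -l in clause and subsumes(set(resolvent) - {l}, clause):
                return [x for x in clause if x != -l]
        return clause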
arxiv-6949 | 0904.0034 | CCS-Based Dynamic Logics for Communicating Concurrent Programs | <|reference_start|>CCS-Based Dynamic Logics for Communicating Concurrent Programs: This work presents three increasingly expressive Dynamic Logics in which the programs are CCS processes (sCCS-PDL, CCS-PDL and XCCS-PDL). Their goal is to reason about properties of concurrent programs and systems described using CCS. In order to accomplish that, CCS's operators and constructions are added to a basic modal logic in order to create dynamic logics that are suitable for the description and verification of properties of communicating, concurrent and non-deterministic programs and systems, in a similar way as PDL is used for the sequential case. We provide complete axiomatizations for the three logics. Unlike Peleg's Concurrent PDL with Channels, our logics have a simple Kripke semantics, complete axiomatizations and the finite model property.<|reference_end|> | arxiv | @article{benevides2009ccs-based,
title={CCS-Based Dynamic Logics for Communicating Concurrent Programs},
author={Mario R. F. Benevides and L. Menasch\'e Schechter},
journal={arXiv preprint arXiv:0904.0034},
year={2009},
archivePrefix={arXiv},
eprint={0904.0034},
primaryClass={cs.LO}
} | benevides2009ccs-based |
arxiv-6950 | 0904.0037 | Deterministic Capacity of MIMO Relay Networks | <|reference_start|>The deterministic capacity of a relay network is the capacity of a network when relays are restricted to transmitting \emph{reliable} information, that is, an (asymptotically) deterministic function of the source message. In this paper it is shown that the deterministic capacity of a number of MIMO relay networks can be found in the low power regime where $\mathrm{SNR}\to 0$. This is accomplished by deriving single-letter upper bounds and finding the limit of these as $\mathrm{SNR}\to 0$. The advantage of this technique is that it overcomes the difficulty of finding optimum distributions for mutual information.<|reference_end|> | arxiv | @article{host-madsen2009deterministic,
title={Deterministic Capacity of MIMO Relay Networks},
author={Anders Host-Madsen},
journal={arXiv preprint arXiv:0904.0037},
year={2009},
doi={10.1109/ACSSC.2008.5074667},
archivePrefix={arXiv},
eprint={0904.0037},
primaryClass={cs.IT math.IT}
} | host-madsen2009deterministic |
arxiv-6951 | 0904.0052 | Stiffness Analysis of Overconstrained Parallel Manipulators | <|reference_start|>Stiffness Analysis of Overconstrained Parallel Manipulators: The paper presents a new stiffness modeling method for overconstrained parallel manipulators with flexible links and compliant actuating joints. It is based on a multidimensional lumped-parameter model that replaces the link flexibility by localized 6-dof virtual springs that describe both translational/rotational compliance and the coupling between them. In contrast to other works, the method involves a FEA-based link stiffness evaluation and employs a new solution strategy of the kinetostatic equations for the unloaded manipulator configuration, which allows computing the stiffness matrix for the overconstrained architectures, including singular manipulator postures. The advantages of the developed technique are confirmed by application examples, which deal with comparative stiffness analysis of two translational parallel manipulators of 3-PUU and 3-PRPaR architectures. Accuracy of the proposed approach was evaluated for a case study, which focuses on stiffness analysis of Orthoglide parallel manipulator.<|reference_end|> | arxiv | @article{pashkevich2009stiffness,
title={Stiffness Analysis of Overconstrained Parallel Manipulators},
author={Anatoly Pashkevich (IRCCyN), Damien Chablat (IRCCyN), Philippe Wenger
(IRCCyN)},
journal={Journal of Mechanism and Machine Theory 44, 5 (2009) 966-982},
year={2009},
doi={10.1016/j.mechmachtheory.2008.05.017},
archivePrefix={arXiv},
eprint={0904.0052},
primaryClass={cs.RO}
} | pashkevich2009stiffness |
arxiv-6952 | 0904.0058 | Kinematics of A 3-PRP planar parallel robot | <|reference_start|>Kinematics of A 3-PRP planar parallel robot: Recursive modelling for the kinematics of a 3-PRP planar parallel robot is presented in this paper. Three planar chains connecting to the moving platform of the manipulator are located in a vertical plane. Knowing the motion of the platform, we develop the inverse kinematics and determine the positions, velocities and accelerations of the robot. Several matrix equations offer iterative expressions and graphs for the displacements, velocities and accelerations of three prismatic actuators.<|reference_end|> | arxiv | @article{chablat2009kinematics,
title={Kinematics of A 3-PRP planar parallel robot},
author={Damien Chablat (IRCCyN) and Stefan Staicu},
journal={UPB Scientific Bulletin, Series D: Mechanical Engineering 71, 1
(2009) 3-16},
year={2009},
archivePrefix={arXiv},
eprint={0904.0058},
primaryClass={cs.RO}
} | chablat2009kinematics |
arxiv-6953 | 0904.0071 | Kripke Models for Classical Logic | <|reference_start|>Kripke Models for Classical Logic: We introduce a notion of Kripke model for classical logic for which we constructively prove soundness and cut-free completeness. We discuss the novelty of the notion and its potential applications.<|reference_end|> | arxiv | @article{ilik2009kripke,
title={Kripke Models for Classical Logic},
author={Danko Ilik (PPS, INRIA Paris - Rocquencourt, LIX) and Gyesik Lee
(ROSAEC) and Hugo Herbelin (PPS, INRIA Paris - Rocquencourt)},
journal={Annals of Pure and Applied Logic 161(11), 2010},
year={2009},
doi={10.1016/j.apal.2010.04.007},
archivePrefix={arXiv},
eprint={0904.0071},
primaryClass={math.LO cs.LO}
} | ilik2009kripke |
arxiv-6954 | 0904.0087 | Complex Dependencies in Large Software Systems | <|reference_start|>Complex Dependencies in Large Software Systems: Two large, open source software systems are analyzed from the vantage point of complex adaptive systems theory. For both systems, the full dependency graphs are constructed and their properties are shown to be consistent with the assumption of stochastic growth. In particular, the afferent links are distributed according to Zipf's law for both systems. Using the Small-World criterion for directed graphs, it is shown that contrary to claims in the literature, these software systems do not possess Small-World properties. Furthermore, it is argued that the Small-World property is not of any particular advantage in a standard layered architecture. Finally, it is suggested that the eigenvector centrality can play an important role in deciding which open source software packages to use in mission critical applications. This comes about because knowing the absolute number of afferent links alone is insufficient to decide how important a package is to the system as a whole, instead the importance of the linking package plays a major role as well.<|reference_end|> | arxiv | @article{kohring2009complex,
title={Complex Dependencies in Large Software Systems},
author={G.A. Kohring},
journal={Advances in Complex Systems, Vol. 12, No. 6, pp. 565-581, (2009).},
year={2009},
number={LR-09-357},
archivePrefix={arXiv},
eprint={0904.0087},
primaryClass={nlin.AO cs.SE physics.soc-ph}
} | kohring2009complex |
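Eigenvector centrality, suggested in the record above for ranking packages, is the dominant eigenvector of the dependency adjacency matrix; a bare power-iteration sketch (assumes the iteration converges, which holds when the dominant eigenvalue is simple):

    import numpy as np

    def eigenvector_centrality(A, iters=200, tol=1e-10):
        # A[i, j] = 1 if package j depends on package i, so row i lists the
        # afferent links of i and A @ x credits a package with the
        # centrality of the packages that link to it
        x = np.ones(A.shape[0]) / A.shape[0]
        for _ in range(iters):
            y = A @ x
            norm = np.linalg.norm(y)
            if norm == 0.0:
                break
            y /= norm
            if np.linalg.norm(y - x) < tol:
                return y
            x = y
        return x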
arxiv-6955 | 0904.0109 | Authentication and Secrecy Codes for Equiprobable Source Probability Distributions | <|reference_start|>Authentication and Secrecy Codes for Equiprobable Source Probability Distributions: We give new combinatorial constructions for codes providing authentication and secrecy for equiprobable source probability distributions. In particular, we construct an infinite class of optimal authentication codes which are multiple-fold secure against spoofing and simultaneously achieve perfect secrecy. Several further new optimal codes satisfying these properties will also be constructed and presented in general tables. Almost all of these appear to be the first authentication codes with these properties.<|reference_end|> | arxiv | @article{huber2009authentication,
title={Authentication and Secrecy Codes for Equiprobable Source Probability
Distributions},
author={Michael Huber},
journal={arXiv preprint arXiv:0904.0109},
year={2009},
doi={10.1109/ISIT.2009.5206028},
archivePrefix={arXiv},
eprint={0904.0109},
primaryClass={cs.CR cs.DM}
} | huber2009authentication |
arxiv-6956 | 0904.0125 | Coherence for rewriting 2-theories | <|reference_start|>Coherence for rewriting 2-theories: General coherence theorems are constructed that yield explicit presentations of categorical and algebraic objects. The categorical structures involved are finitary discrete Lawvere 2-theories, though they are approached within the language of term rewriting theory. Two general coherence theorems are obtained. The first applies to terminating and confluent rewriting 2-theories. This result is exploited to construct systematic presentations for the higher Thompson groups and the Higman-Thompson groups. The presentations are categorically interesting as they arise from higher-arity analogues of the Stasheff/Mac Lane coherence axioms, which involve phenomena not present in the classical binary axioms. The second general coherence theorem holds for 2-theories that are not necessarily confluent or terminating and is used to construct a new proof of coherence for iterated monoidal categories, which arise as categorical models of iterated loop spaces and fail to be confluent.<|reference_end|> | arxiv | @article{cohen2009coherence,
title={Coherence for rewriting 2-theories},
author={Jonathan Asher Cohen},
journal={arXiv preprint arXiv:0904.0125},
year={2009},
archivePrefix={arXiv},
eprint={0904.0125},
primaryClass={math.CT cs.LO}
} | cohen2009coherence |
arxiv-6957 | 0904.0145 | Kinematic and Dynamic Analysis of the 2-DOF Spherical Wrist of Orthoglide 5-axis | <|reference_start|>This paper deals with the kinematics and dynamics of a two-degree-of-freedom spherical manipulator, the wrist of the Orthoglide 5-axis. The latter is a parallel kinematics machine composed of two manipulators: i) the Orthoglide 3-axis, a three-dof translational parallel manipulator that belongs to the family of Delta robots, and ii) the Agile Eye, a two-dof parallel spherical wrist. The geometric and inertial parameters used in the model are determined by means of CAD software. The performance of the spherical wrist is emphasized by means of several test trajectories. The effects of machining and/or cutting forces and the length of the cutting tool on the dynamic performance of the wrist are also analyzed. Finally, a preliminary selection of the motors is proposed from the velocities and torques required by the actuators to carry out the test trajectories.<|reference_end|> | arxiv | @article{ur-rehman2009kinematic,
title={Kinematic and Dynamic Analysis of the 2-DOF Spherical Wrist of
Orthoglide 5-axis},
author={Raza Ur-Rehman (IRCCyN) and St\'ephane Caro (IRCCyN) and Damien
Chablat (IRCCyN) and Philippe Wenger (IRCCyN)},
journal={Troisi\`eme Congr\`es International. Conception et Mod\'elisation
des Syst\`emes M\'ecaniques, Hammamet : Tunisie (2009)},
year={2009},
archivePrefix={arXiv},
eprint={0904.0145},
primaryClass={cs.RO physics.class-ph}
} | ur-rehman2009kinematic |
arxiv-6958 | 0904.0166 | Grid porting of Bhabha scattering code through a master-worker scheme | <|reference_start|>A program calculating Bhabha scattering at high energy colliders is considered for porting to the EGEE Grid infrastructure. The program code, which is a result of the aITALC project, is ported by using a master-worker operating scheme. The job submission, execution and monitoring are implemented using the GridWay metascheduler. The unattended execution of jobs turned out to be complete and rather efficient, even when prior knowledge of the grid is absent. While the batch of jobs remains organized at the user's side, the actual computation was carried out within the phenogrid virtual organization. The scientific results support the use of small-angle Bhabha scattering for the luminosity measurements of the International Linear Collider project.<|reference_end|> | arxiv | @article{lorca2009grid,
title={Grid porting of Bhabha scattering code through a master-worker scheme},
author={Alejandro Lorca and Jose Luis Vazquez-Poletti and Eduardo Huedo and
Ignacio M. Llorente},
journal={arXiv preprint arXiv:0904.0166},
year={2009},
archivePrefix={arXiv},
eprint={0904.0166},
primaryClass={hep-ph cs.DC}
} | lorca2009grid |
arxiv-6959 | 0904.0214 | Differential reduction of generalized hypergeometric functions from Feynman diagrams: One-variable case | <|reference_start|>Differential reduction of generalized hypergeometric functions from Feynman diagrams: One-variable case: The differential-reduction algorithm, which allows one to express generalized hypergeometric functions with parameters of arbitrary values in terms of such functions with parameters whose values differ from the original ones by integers, is discussed in the context of evaluating Feynman diagrams. Where this is possible, we compare our results with those obtained using standard techniques. It is shown that the criterion of reducibility of multiloop Feynman integrals can be reformulated in terms of the criterion of reducibility of hypergeometric functions. The relation between the numbers of master integrals obtained by differential reduction and integration by parts is discussed.<|reference_end|> | arxiv | @article{bytev2009differential,
title={Differential reduction of generalized hypergeometric functions from
Feynman diagrams: One-variable case},
author={Vladimir V. Bytev (Hamburg U., Inst. Theor. Phys. II & Dubna, JINR)
and Mikhail Yu. Kalmykov (Hamburg U., Inst. Theor. Phys. II & Dubna, JINR)
and Bernd A. Kniehl (Hamburg U., Inst. Theor. Phys. II)},
journal={Nucl.Phys.B836:129-170, 2010},
year={2009},
doi={10.1016/j.nuclphysb.2010.03.025},
number={DESY-10-027},
archivePrefix={arXiv},
eprint={0904.0214},
primaryClass={hep-th cs.SC hep-ph math-ph math.CA math.MP}
} | bytev2009differential |
arxiv-6960 | 0904.0217 | The mdt algorithm | <|reference_start|>Link state routing protocols such as OSPF or IS-IS currently use only best paths to forward IP packets throughout a domain. The optimality of sub-paths ensures the consistency of hop-by-hop forwarding although paths, calculated using Dijkstra's algorithm, are recursively composed. According to the link metric, the diversity of existing paths can be underestimated using only best paths. Hence, it reduces the potential benefits of multipath applications such as load balancing and fast rerouting. In this paper, we propose a low time complexity multipath computation algorithm able to calculate at least two paths with a different first hop between all pairs of nodes in the network if such next hops exist. Using real and generated topologies, we evaluate and compare the complexity of our proposition with several techniques. Simulation results suggest that the path diversity achieved with our proposition is approximately the same as that obtained using consecutive Dijkstra computations, but with a lower time complexity.<|reference_end|> | arxiv | @article{mérindol2009the,
title={The mdt algorithm},
author={Pascal M\'erindol and Jean-Jacques Pansiot and St\'ephane Cateloin},
journal={arXiv preprint arXiv:0904.0217},
year={2009},
number={LSIIT Research Report RR-PM01-09},
archivePrefix={arXiv},
eprint={0904.0217},
primaryClass={cs.NI}
} | mérindol2009the |
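The consecutive-Dijkstra baseline that the record above compares against can be sketched directly: run one shortest-path computation rooted at each neighbor of the source and rank the neighbors as candidate first hops (the mdt algorithm itself obtains this diversity at lower cost); names are illustrative:

    import heapq

    def dijkstra(adj, src):
        # adj: {node: {neighbor: weight}}; shortest distances from src
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj[u].items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    def best_first_hops(adj, s, t, k=2):
        # cost of reaching t through each neighbor n of s
        costs = {n: w + dijkstra(adj, n).get(t, float("inf"))
                 for n, w in adj[s].items()}
        return sorted(costs, key=costs.get)[:k]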
arxiv-6961 | 0904.0226 | Coding Versus ARQ in Fading Channels: How reliable should the PHY be? | <|reference_start|>Coding Versus ARQ in Fading Channels: How reliable should the PHY be?: This paper studies the tradeoff between channel coding and ARQ (automatic repeat request) in Rayleigh block-fading channels. A heavily coded system corresponds to a low transmission rate with few ARQ re-transmissions, whereas lighter coding corresponds to a higher transmitted rate but more re-transmissions. The optimum error probability, where optimum refers to the maximization of the average successful throughput, is derived and is shown to be a decreasing function of the average signal-to-noise ratio and of the channel diversity order. A general conclusion of the work is that the optimum error probability is quite large (e.g., 10% or larger) for reasonable channel parameters, and that operating at a very small error probability can lead to a significantly reduced throughput. This conclusion holds even when a number of practical ARQ considerations, such as delay constraints and acknowledgement feedback errors, are taken into account.<|reference_end|> | arxiv | @article{wu2009coding,
title={Coding Versus ARQ in Fading Channels: How reliable should the PHY be?},
author={Peng Wu and Nihar Jindal},
journal={arXiv preprint arXiv:0904.0226},
year={2009},
archivePrefix={arXiv},
eprint={0904.0226},
primaryClass={cs.IT math.IT}
} | wu2009coding |
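The rate/reliability trade-off analyzed in the record above can be illustrated with a toy outage model (Rayleigh block fading, diversity approximated as independent branches; purely illustrative, not the paper's exact model). With ARQ, a packet is repeated until success, so the long-run throughput is the rate times the per-attempt success probability, and the maximizing rate implies a fairly large error probability:

    import math

    def throughput(rate, snr, diversity=1):
        p_out = (1.0 - math.exp(-(2.0 ** rate - 1.0) / snr)) ** diversity
        return rate * (1.0 - p_out)

    # grid-search the rate (and hence the operating error probability)
    best = max((r / 10.0 for r in range(1, 100)),
               key=lambda r: throughput(r, snr=10.0, diversity=1))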
arxiv-6962 | 0904.0228 | Safe Reasoning Over Ontologies | <|reference_start|>Safe Reasoning Over Ontologies: As ontologies proliferate and automatic reasoners become more powerful, the problem of protecting sensitive information becomes more serious. In particular, as facts can be inferred from other facts, it becomes increasingly likely that information included in an ontology, while not itself deemed sensitive, may be able to be used to infer other sensitive information. We first consider the problem of testing an ontology for safeness defined as its not being able to be used to derive any sensitive facts using a given collection of inference rules. We then consider the problem of optimizing an ontology based on the criterion of making as much useful information as possible available without revealing any sensitive facts.<|reference_end|> | arxiv | @article{grabarnik2009safe,
title={Safe Reasoning Over Ontologies},
author={Genady Grabarnik and Aaron Kershenbaum (IBM TJ Watson Research)},
journal={arXiv preprint arXiv:0904.0228},
year={2009},
archivePrefix={arXiv},
eprint={0904.0228},
primaryClass={cs.AI cs.DS}
} | grabarnik2009safe |
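The safeness test described in the record above amounts to closing the published facts under the inference rules and intersecting the closure with the sensitive set; a naive forward-chaining sketch with Horn-style rules (illustrative names, not the authors' system):

    def closure(facts, rules):
        # rules: iterable of (body, head) pairs; iterate to a fixpoint
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                if set(body) <= facts and head not in facts:
                    facts.add(head)
                    changed = True
        return facts

    def is_safe(ontology, rules, sensitive):
        # safe iff no sensitive fact becomes derivable
        return not (closure(ontology, rules) & set(sensitive))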
arxiv-6963 | 0904.0262 | Every Large Point Set contains Many Collinear Points or an Empty Pentagon | <|reference_start|>Every Large Point Set contains Many Collinear Points or an Empty Pentagon: We prove the following generalised empty pentagon theorem: for every integer $\ell \geq 2$, every sufficiently large set of points in the plane contains $\ell$ collinear points or an empty pentagon. As an application, we settle the next open case of the "big line or big clique" conjecture of K\'ara, P\'or, and Wood [\emph{Discrete Comput. Geom.} 34(3):497--506, 2005].<|reference_end|> | arxiv | @article{abel2009every,
title={Every Large Point Set contains Many Collinear Points or an Empty
Pentagon},
author={Zachary Abel and Brad Ballinger and Prosenjit Bose and S\'ebastien
Collette and Vida Dujmovi\'c and Ferran Hurtado and Scott D. Kominers and
Stefan Langerman and Attila P\'or and David R. Wood},
journal={Graphs and Combinatorics 27(1), (2011), 47-60},
year={2009},
doi={10.1007/s00373-010-0957-2},
archivePrefix={arXiv},
eprint={0904.0262},
primaryClass={math.CO cs.CG}
} | abel2009every |
arxiv-6964 | 0904.0274 | Interference Alignment with Asymmetric Complex Signaling - Settling the Host-Madsen-Nosratinia Conjecture | <|reference_start|>Interference Alignment with Asymmetric Complex Signaling - Settling the Host-Madsen-Nosratinia Conjecture: It has been conjectured by Host-Madsen and Nosratinia that complex Gaussian interference channels with constant channel coefficients have only one degree-of-freedom regardless of the number of users. While several examples are known of constant channels that achieve more than 1 degree of freedom, these special cases only span a subset of measure zero. In other words, for almost all channel coefficient values, it is not known if more than 1 degree-of-freedom is achievable. In this paper, we settle the Host-Madsen-Nosratinia conjecture in the negative. We show that at least 1.2 degrees-of-freedom are achievable for all values of complex channel coefficients except for a subset of measure zero. For the class of linear beamforming and interference alignment schemes considered in this paper, it is also shown that 1.2 is the maximum number of degrees of freedom achievable on the complex Gaussian 3 user interference channel with constant channel coefficients, for almost all values of channel coefficients. To establish the achievability of 1.2 degrees of freedom we introduce the novel idea of asymmetric complex signaling - i.e., the inputs are chosen to be complex but not circularly symmetric. It is shown that unlike Gaussian point-to-point, multiple-access and broadcast channels where circularly symmetric complex Gaussian inputs are optimal, for interference channels optimal inputs are in general asymmetric. With asymmetric complex signaling, we also show that the 2 user complex Gaussian X channel with constant channel coefficients achieves the outer bound of 4/3 degrees-of-freedom, i.e., the assumption of time-variations/frequency-selectivity used in prior work to establish the same result, is not needed.<|reference_end|> | arxiv | @article{cadambe2009interference,
title={Interference Alignment with Asymmetric Complex Signaling - Settling the
Host-Madsen-Nosratinia Conjecture},
author={Viveck R. Cadambe and Syed A. Jafar and Chenwei Wang},
journal={IEEE Transactions on Information Theory, vol. 56, no. 9, pp.
4552-4565, Sep. 2010},
year={2009},
archivePrefix={arXiv},
eprint={0904.0274},
primaryClass={cs.IT math.IT}
} | cadambe2009interference |
arxiv-6965 | 0904.0292 | Sublinear Time Algorithms for Earth Mover's Distance | <|reference_start|>Sublinear Time Algorithms for Earth Mover's Distance: We study the problem of estimating the Earth Mover's Distance (EMD) between probability distributions when given access only to samples. We give closeness testers and additive-error estimators over domains in $[0, \Delta]^d$, with sample complexities independent of domain size - permitting the testability even of continuous distributions over infinite domains. Instead, our algorithms depend on other parameters, such as the diameter of the domain space, which may be significantly smaller. We also prove lower bounds showing the dependencies on these parameters to be essentially optimal. Additionally, we consider whether natural classes of distributions exist for which there are algorithms with better dependence on the dimension, and show that for highly clusterable data, this is indeed the case. Lastly, we consider a variant of the EMD, defined over tree metrics instead of the usual L1 metric, and give optimal algorithms.<|reference_end|> | arxiv | @article{ba2009sublinear,
title={Sublinear Time Algorithms for Earth Mover's Distance},
author={Khanh Do Ba and Huy L. Nguyen and Huy N. Nguyen and Ronitt Rubinfeld},
journal={arXiv preprint arXiv:0904.0292},
year={2009},
archivePrefix={arXiv},
eprint={0904.0292},
primaryClass={cs.DS}
} | ba2009sublinear |
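For distributions on the line (and, edge by edge, on the tree metrics of the record above), the EMD has a closed form as the L1 distance between the cumulative distributions; a one-pass sketch for histograms on a common unit-spaced grid:

    def emd_1d(p, q):
        # p, q: same-length histograms with equal total mass
        total, cum = 0.0, 0.0
        for pi, qi in zip(p, q):
            cum += pi - qi
            total += abs(cum)
        return total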
arxiv-6966 | 0904.0293 | INFRAWEBS axiom editor - a graphical ontology-driven tool for creating complex logical expressions | <|reference_start|>The current INFRAWEBS European research project aims at developing an ICT framework enabling software and service providers to generate and establish open and extensible development platforms for Web Service applications. One of the concrete project objectives is developing a full-life-cycle software toolset for creating and maintaining Semantic Web Services (SWSs) supporting specific applications based on the Web Service Modelling Ontology (WSMO) framework. According to WSMO, functional and behavioural descriptions of an SWS may be represented by means of complex logical expressions (axioms). The paper describes a specialized user-friendly tool for constructing and editing such axioms - the INFRAWEBS Axiom Editor. After discussing the main design principles of the Editor, its functional architecture is briefly presented. The tool is implemented in the Eclipse Graphical Editing Framework and the Eclipse Rich Client Platform.<|reference_end|> | arxiv | @article{agre2009infrawebs,
title={INFRAWEBS axiom editor - a graphical ontology-driven tool for creating
complex logical expressions},
author={Gennady Agre and Petar Kormushev and Ivan Dilov},
journal={International Journal of Information Theories and Applications,
vol. 13, no. 2, ISSN 1310-0513, pp. 169-178, 2006},
year={2009},
archivePrefix={arXiv},
eprint={0904.0293},
primaryClass={cs.SE}
} | agre2009infrawebs |
arxiv-6967 | 0904.0300 | Design, development and implementation of a tool for construction of declarative functional descriptions of semantic web services based on WSMO methodology | <|reference_start|>Semantic web services (SWS) are self-contained, self-describing, semantically marked-up software resources that can be published, discovered, composed and executed across the Web in a semi-automatic way. They are a key component of the future Semantic Web, in which networked computer programs become providers and users of information at the same time. This work focuses on developing a full-life-cycle software toolset for creating and maintaining Semantic Web Services (SWSs) based on the Web Service Modelling Ontology (WSMO) framework. A main part of a WSMO-based SWS is the service capability - a declarative description of Web service functionality. A formal syntax and semantics for such a description is provided by the Web Service Modeling Language (WSML), which is based on different logical formalisms, namely, Description Logics, First-Order Logic and Logic Programming. A WSML description of a Web service capability is represented as a set of complex logical expressions (axioms). We develop a specialized user-friendly tool for constructing and editing WSMO-based SWS capabilities. Since the users of this tool are not specialists in first-order logic, a graphical way for constructing and editing axioms is proposed. The designed process for constructing logical expressions is ontology-driven, which abstracts away as much as possible from any concrete syntax of the logical language. We propose several mechanisms to guarantee the semantic consistency of the produced logical expressions. The tool is implemented in Java using Eclipse as the IDE and GEF (Graphical Editing Framework) for visualization.<|reference_end|> | arxiv | @article{kormushev2009design,
title={Design, development and implementation of a tool for construction of
declarative functional descriptions of semantic web services based on WSMO
methodology},
author={Petar Kormushev},
journal={arXiv preprint arXiv:0904.0300},
year={2009},
archivePrefix={arXiv},
eprint={0904.0300},
primaryClass={cs.AI cs.LO}
} | kormushev2009design
arxiv-6968 | 0904.0308 | Exponential decreasing rate of leaked information in universal random privacy amplification | <|reference_start|>We derive a new upper bound for Eve's information in secret key generation from a common random number without communication. This bound improves on Bennett et al.'s (1995) bound based on the R\'enyi entropy of order 2 because the bound obtained here uses the R\'enyi entropy of order $1+s$ for $s \in [0,1]$. This bound is applied to a wire-tap channel. Then, we derive an exponential upper bound for Eve's information. Our exponent is compared with Hayashi's (2006) exponent. For the additive case, the bound obtained here is better. The result is applied to secret key agreement by public discussion.<|reference_end|> | arxiv | @article{hayashi2009exponential,
title={Exponential decreasing rate of leaked information in universal random
privacy amplification},
author={Masahito Hayashi},
journal={IEEE Transactions on Information Theory, vol. 57, no. 6, June
2011, pp. 3989-4001},
year={2009},
doi={10.1109/TIT.2011.2110950},
archivePrefix={arXiv},
eprint={0904.0308},
primaryClass={cs.IT cs.CR math.AC math.IT}
} | hayashi2009exponential |
arxiv-6969 | 0904.0313 | Visual approach for data mining on medical information databases using Fastmap algorithm | <|reference_start|>The rapid development of tools for acquisition and storage of information has led to the formation of enormous medical databases. The large quantity of data definitely surpasses the abilities of humans for efficient usage without specialized tools for analysis. The situation is described as rich in data, but poor in information. In order to fill this growing gap, different approaches from the field of Data Mining are applied. These methods perform analysis of large sets of observed data in order to find new dependencies or concise representation of the data, which is more meaningful to humans. One of the possible approaches for discovery of dependencies is the visual approach, in which data is processed and visualized in a way suitable for analysis by a domain expert. This work proposes such a visual approach. We design and implement a software solution for visualization of multi-dimensional, classified medical data using the FastMap algorithm for gradual reduction of dimensions. The implementation of the graphical user interface is described in detail since it is the most important factor for the ease of use of these tools by non-professionals in data mining.<|reference_end|> | arxiv | @article{kormushev2009visual,
title={Visual approach for data mining on medical information databases using
Fastmap algorithm},
author={Petar Kormushev},
journal={arXiv preprint arXiv:0904.0313},
year={2009},
archivePrefix={arXiv},
eprint={0904.0313},
primaryClass={cs.IR cs.DB}
} | kormushev2009visual |
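FastMap, the projection used in the record above, needs only pairwise distances: pick two far-apart pivots, project every object onto the pivot line via the cosine law, and recurse on the residual distances; a compact sketch (illustrative code, not the paper's implementation):

    import math

    def fastmap(objs, dist, k):
        n = len(objs)
        coords = [[0.0] * k for _ in range(n)]

        def d2(i, j, col):
            # squared distance left after removing the first `col` axes
            r = dist(objs[i], objs[j]) ** 2
            for c in range(col):
                r -= (coords[i][c] - coords[j][c]) ** 2
            return max(r, 0.0)

        for col in range(k):
            a = 0
            for _ in range(2):  # heuristic: jump twice to a far pair
                a, b = max(range(n), key=lambda j: d2(a, j, col)), a
            dab2 = d2(a, b, col)
            if dab2 == 0.0:
                break  # all residual distances vanish; stop early
            for i in range(n):
                coords[i][col] = (d2(a, i, col) + dab2
                                  - d2(b, i, col)) / (2.0 * math.sqrt(dab2))
        return coords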
arxiv-6970 | 0904.0322 | Model-free control and intelligent PID controllers: towards a possible trivialization of nonlinear control? | <|reference_start|>We introduce a model-free control and a control with a restricted model for finite-dimensional complex systems. This control design may be viewed as a contribution to "intelligent" PID controllers, the tuning of which becomes quite straightforward, even with highly nonlinear and/or time-varying systems. Our main tool is a newly developed numerical differentiation technique. Differential algebra provides the theoretical framework. Our approach is validated by several numerical experiments.<|reference_end|> | arxiv | @article{fliess2009model-free,
title={Model-free control and intelligent PID controllers: towards a possible
trivialization of nonlinear control?},
author={Michel Fliess (LIX, INRIA Saclay - Ile de France) and C\'edric Join
(INRIA Saclay - Ile de France, CRAN)},
journal={arXiv preprint arXiv:0904.0322},
year={2009},
archivePrefix={arXiv},
eprint={0904.0322},
primaryClass={math.OC cs.NA math.CA}
} | fliess2009model-free |
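The control law in the record above rests on a first-order "ultra-local" model y' = F + alpha*u, with the lumped unknown term F re-estimated at every step; a toy intelligent-P sketch using a crude finite-difference derivative in place of the paper's algebraic differentiators (alpha and kp are the only tuning knobs; names are illustrative):

    def ip_controller(alpha, kp, dt):
        state = {"y_prev": None, "u_prev": 0.0}

        def step(y, y_ref, dy_ref):
            ydot = 0.0 if state["y_prev"] is None else (y - state["y_prev"]) / dt
            f_hat = ydot - alpha * state["u_prev"]   # estimate of the unknown F
            u = (dy_ref - f_hat - kp * (y - y_ref)) / alpha
            state["y_prev"], state["u_prev"] = y, u
            return u

        return step

Closing the loop gives e' = -kp*e for the tracking error e = y - y_ref, regardless of the unmodeled dynamics absorbed into F.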
arxiv-6971 | 0904.0352 | Incremental Deployment of Network Monitors Based on Group Betweenness Centrality | <|reference_start|>Incremental Deployment of Network Monitors Based on Group Betweenness Centrality: In many applications we are required to increase the deployment of a distributed monitoring system on an evolving network. In this paper we present a new method for finding candidate locations for additional deployment in the network. This method is based on the Group Betweenness Centrality (GBC) measure that is used to estimate the influence of a group of nodes over the information flow in the network. The new method assists in finding the location of k additional monitors in the evolving network, such that the portion of additional traffic covered is at least (1-1/e) of the optimal.<|reference_end|> | arxiv | @article{dolev2009incremental,
title={Incremental Deployment of Network Monitors Based on Group Betweenness
Centrality},
author={Shlomi Dolev and Yuval Elovici and Rami Puzis and Polina Zilberman},
journal={Information Processing Letters, 109(20), 1172-1176 (2009)},
year={2009},
doi={10.1016/j.ipl.2009.07.019},
archivePrefix={arXiv},
eprint={0904.0352},
primaryClass={cs.DS}
} | dolev2009incremental |
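The (1 - 1/e) guarantee quoted in the record above is the classical one for greedily maximizing a monotone submodular set function; a sketch where coverage(S) stands for the traffic covered by monitor group S (e.g., derived from its group betweenness):

    def greedy_placement(candidates, coverage, k):
        chosen = set()
        for _ in range(k):
            remaining = set(candidates) - chosen
            if not remaining:
                break
            best = max(remaining,
                       key=lambda v: coverage(chosen | {v}) - coverage(chosen))
            chosen.add(best)
        return chosen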
arxiv-6972 | 0904.0417 | On Computational Complexity of Clifford Algebra | <|reference_start|>On Computational Complexity of Clifford Algebra: After a brief discussion of the computational complexity of Clifford algebras, we present a new basis for even Clifford algebra Cl(2m) that simplifies greatly the actual calculations and, without resorting to the conventional matrix isomorphism formulation, obtains the same complexity. In the last part we apply these results to the Clifford algebra formulation of the NP-complete problem of the maximum clique of a graph introduced in a previous paper.<|reference_end|> | arxiv | @article{budinich2009on,
title={On Computational Complexity of Clifford Algebra},
author={Marco Budinich},
journal={Journal of Mathematical Physics, 50 #5, 053514, 18 May 2009},
year={2009},
doi={10.1063/1.3133042},
archivePrefix={arXiv},
eprint={0904.0417},
primaryClass={math-ph cs.DM math.MP}
} | budinich2009on |
arxiv-6973 | 0904.0437 | What Do Family Caregivers of Alzheimer's Disease Patients Desire in Smart Home Technologies? | <|reference_start|>What Do Family Caregivers of Alzheimer's Disease Patients Desire in Smart Home Technologies?: Objectives - The authors' aim was to investigate the representations, wishes, and fears of family caregivers (FCs) regarding 14 innovative technologies (IT) for care aiding and burden alleviation, given the severe physical and psychological stress induced by dementia care, and the very slow uptake of these technologies in our society. Methods - A cluster sample survey based on a self-administered questionnaire was carried out on data collected from 270 families of patients with Alzheimer's disease or related disorders, located in the greater Paris area. Multiple Correspondence Analysis was used in addition to usual statistical tests to identify homogenous FCs clusters concerning the appreciation or rejection of the considered technologies. Results - Two opposite clusters were clearly defined: FCs in favor of a substantial use of technology, and those rather or totally hostile. Furthermore the distributions of almost all the answers of appreciations were U shaped. Significant relations were demonstrated between IT appreciation and FC's family or gender statuses (e.g., female FCs appreciated more than male FCs a tracking device for quick recovering of wandering patients: p=0.0025, N=195). Conclusions - The study provides further evidence of the contrasted perception of technology in dementia care at home, and suggests the development of public debates based on rigorous assessment of practices and a strict ethical aim to protect against misuse.<|reference_end|> | arxiv | @article{rialle2009what,
title={What Do Family Caregivers of Alzheimer's Disease Patients Desire in
Smart Home Technologies?},
author={Vincent Rialle (TIMC, DMIS) and Catherine Ollivet (FA 93) and Carole
Guigui (TIMC) and Christian Herv\'e (LEM)},
journal={Methods of information in medicine 47, 1 (2008) 63-69},
year={2009},
archivePrefix={arXiv},
eprint={0904.0437},
primaryClass={cs.CY}
} | rialle2009what |
arxiv-6974 | 0904.0471 | Holographic algorithms without matchgates | <|reference_start|>Holographic algorithms without matchgates: The theory of holographic algorithms, which are polynomial time algorithms for certain combinatorial counting problems, yields insight into the hierarchy of complexity classes. In particular, the theory produces algebraic tests for a problem to be in the class P. In this article we streamline the implementation of holographic algorithms by eliminating one of the steps in the construction procedure, and generalize their applicability to new signatures. Instead of matchgates, which are weighted graph fragments that replace vertices of a natural bipartite graph G associated to a problem P, our approach uses only only a natural number-of-edges by number-of-edges matrix associated to G. An easy-to-compute multiple of its Pfaffian is the number of solutions to the counting problem. This simplification improves our understanding of the applicability of holographic algorithms, indicates a more geometric approach to complexity classes, and facilitates practical implementations. The generalized applicability arises because our approach allows for new algebraic tests that are different from the "Grassmann-Plucker identities" used up until now. Natural problems treatable by these new methods have been previously considered in a different context, and we present one such example.<|reference_end|> | arxiv | @article{landsberg2009holographic,
title={Holographic algorithms without matchgates},
author={J.M. Landsberg and Jason Morton and Serguei Norine},
journal={arXiv preprint arXiv:0904.0471},
year={2009},
archivePrefix={arXiv},
eprint={0904.0471},
primaryClass={cs.CC math.CO math.RT}
} | landsberg2009holographic |
arxiv-6975 | 0904.0477 | Message Passing for Optimization and Control of Power Grid: Model of Distribution System with Redundancy | <|reference_start|>We use a power grid model with $M$ generators and $N$ consumption units to optimize the grid and its control. Each consumer demand is drawn from a predefined finite-size-support distribution, thus simulating the instantaneous load fluctuations. Each generator has a maximum power capability. A generator is not overloaded if the sum of the loads of consumers connected to a generator does not exceed its maximum production. In the standard grid each consumer is connected only to its designated generator, while we consider a more general organization of the grid allowing each consumer to select one generator depending on the load from a pre-defined consumer-dependent and sufficiently small set of generators which can all serve the load. The model grid is interconnected in a graph with loops, drawn from an ensemble of random bipartite graphs, while each allowed configuration of loaded links represents a set of graph covering trees. Losses, the reactive character of the grid and the transmission-level connections between generators (and many other details relevant to a realistic power grid) are ignored in this proof-of-principle study. We focus on the asymptotic limit and we show that the interconnects allow significant expansion of the parameter domains for which the probability of a generator overload is asymptotically zero. Our construction explores the formal relation between the problem of grid optimization and the modern theory of sparse graphical models. We also design heuristic algorithms that achieve the asymptotically optimal selection of loaded links. We conclude by discussing the ability of this approach to include other effects, such as a more realistic modeling of the power grid and related optimization and control algorithms.<|reference_end|> | arxiv | @article{zdeborová2009message,
title={Message Passing for Optimization and Control of Power Grid: Model of
Distribution System with Redundancy},
author={Lenka Zdeborov\'a and Aur\'elien Decelle and Michael Chertkov},
journal={Phys. Rev. E 80, 046112 (2009)},
year={2009},
doi={10.1103/PhysRevE.80.046112},
archivePrefix={arXiv},
eprint={0904.0477},
primaryClass={cond-mat.stat-mech cs.CE cs.NI}
} | zdeborová2009message |
arxiv-6976 | 0904.0489 | Persistence and Success in the Attention Economy | <|reference_start|>Persistence and Success in the Attention Economy: A hallmark of the attention economy is the competition for the attention of others. Thus people persistently upload content to social media sites, hoping for the highly unlikely outcome of topping the charts and reaching a wide audience. And yet, an analysis of the production histories and success dynamics of 10 million videos from \texttt{YouTube} revealed that the more frequently an individual uploads content the less likely it is that it will reach a success threshold. This paradoxical result is further compounded by the fact that the average quality of submissions does increase with the number of uploads, with the likelihood of success less than that of playing a lottery.<|reference_end|> | arxiv | @article{wu2009persistence,
title={Persistence and Success in the Attention Economy},
author={Fang Wu and Bernardo A. Huberman},
journal={arXiv preprint arXiv:0904.0489},
year={2009},
archivePrefix={arXiv},
eprint={0904.0489},
primaryClass={cs.CY physics.soc-ph}
} | wu2009persistence |
arxiv-6977 | 0904.0494 | Average Case Analysis of Multichannel Sparse Recovery Using Convex Relaxation | <|reference_start|>Average Case Analysis of Multichannel Sparse Recovery Using Convex Relaxation: In this paper, we consider recovery of jointly sparse multichannel signals from incomplete measurements. Several approaches have been developed to recover the unknown sparse vectors from the given observations, including thresholding, simultaneous orthogonal matching pursuit (SOMP), and convex relaxation based on a mixed matrix norm. Typically, worst-case analysis is carried out in order to analyze conditions under which the algorithms are able to recover any jointly sparse set of vectors. However, such an approach is not able to provide insights into why joint sparse recovery is superior to applying standard sparse reconstruction methods to each channel individually. Previous work considered an average case analysis of thresholding and SOMP by imposing a probability model on the measured signals. In this paper, our main focus is on analysis of convex relaxation techniques. In particular, we focus on the mixed l_2,1 approach to multichannel recovery. We show that under a very mild condition on the sparsity and on the dictionary characteristics, measured for example by the coherence, the probability of recovery failure decays exponentially in the number of channels. This demonstrates that most of the time, multichannel sparse recovery is indeed superior to single channel methods. Our probability bounds are valid and meaningful even for a small number of signals. Using the tools we develop to analyze the convex relaxation method, we also tighten the previous bounds for thresholding and SOMP.<|reference_end|> | arxiv | @article{eldar2009average,
title={Average Case Analysis of Multichannel Sparse Recovery Using Convex
Relaxation},
author={Yonina C. Eldar and Holger Rauhut},
journal={arXiv preprint arXiv:0904.0494},
year={2009},
archivePrefix={arXiv},
eprint={0904.0494},
primaryClass={cs.IT math.IT}
} | eldar2009average |
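The mixed l_2,1 relaxation in the record above penalizes the sum of the row norms of the coefficient matrix, which drives whole rows to zero so that all channels share a support; a sketch of the norm and its row-thresholding proximal map, the building block of iterative solvers (illustrative names):

    import numpy as np

    def mixed_norm_21(X):
        # sum of the l2 norms of the rows of X
        return np.linalg.norm(X, axis=1).sum()

    def prox_21(X, t):
        # proximal map of t * ||.||_{2,1}: shrink each row toward zero
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        scale = np.clip(1.0 - t / np.maximum(norms, 1e-12), 0.0, None)
        return X * scale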
arxiv-6978 | 0904.0525 | The Minimal Polynomial over F_q of Linear Recurring Sequence over F_q^m | <|reference_start|>Recently, motivated by the study of vectorized stream cipher systems, the joint linear complexity and joint minimal polynomial of multisequences have been investigated. Let S be a linear recurring sequence over the finite field F_{q^m} with minimal polynomial h(x) over F_{q^m}. Since F_{q^m} and F_{q}^m are isomorphic vector spaces over the finite field F_q, S is identified with an m-fold multisequence S^{(m)} over the finite field F_q. The joint minimal polynomial and joint linear complexity of the m-fold multisequence S^{(m)} are the minimal polynomial and linear complexity over F_q of S respectively. In this paper, we study the minimal polynomial and linear complexity over F_q of a linear recurring sequence S over F_{q^m} with minimal polynomial h(x) over F_{q^m}. If the canonical factorization of h(x) in F_{q^m}[x] is known, we determine the minimal polynomial and linear complexity over F_q of the linear recurring sequence S over F_{q^m}.<|reference_end|> | arxiv | @article{gao2009the,
title={The Minimal Polynomial over F_q of Linear Recurring Sequence over
F_{q^m}},
author={Zhi-Han Gao and Fang-Wei Fu},
journal={arXiv preprint arXiv:0904.0525},
year={2009},
archivePrefix={arXiv},
eprint={0904.0525},
primaryClass={cs.IT cs.CR math.IT}
} | gao2009the |
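A small worked instance of the identification used in the record above (an illustration, not taken from the paper): let $\mathbb{F}_4=\mathbb{F}_2(\omega)$ with $\omega^2=\omega+1$, and let $S=(s_n)$ with $s_n=\omega^n$, so the minimal polynomial of $S$ over $\mathbb{F}_4$ is $h(x)=x-\omega$. Writing each $s_n$ in the basis $\{1,\omega\}$ identifies $S$ with a $2$-fold binary multisequence, and $\omega^2=\omega+1$ gives $s_{n+2}=s_{n+1}+s_n$ componentwise; hence the minimal polynomial of $S$ over $\mathbb{F}_2$ is $x^2+x+1$ (the minimal polynomial of $\omega$ over $\mathbb{F}_2$) and the linear complexity over $\mathbb{F}_2$ is $2$.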
arxiv-6979 | 0904.0534 | Theory of Carry Value Transformation (CVT) and its Application in Fractal formation | <|reference_start|>In this paper the theory of the Carry Value Transformation (CVT) is developed on pairs of n-bit strings and is used to produce many interesting patterns. One of them is found to be a self-similar fractal whose dimension is the same as the dimension of the Sierpinski triangle. Different construction procedures for this fractal, such as an L-system, a cellular automaton rule, and a tiling, are obtained, which signifies that, like other tools, CVT can also be used for the formation of self-similar fractals. It is shown that CVT can be used for the production of periodic as well as chaotic patterns. Also, the analytical and algebraic properties of CVT are discussed. The definition of CVT in two dimensions is slightly modified and its mathematical properties are highlighted. Finally, CVT and the modified CVT (MCVT) are extended to higher dimensions.<|reference_end|> | arxiv | @article{choudhury2009theory,
title={Theory of Carry Value Transformation (CVT) and its Application in
Fractal formation},
author={Pabitra Pal Choudhury, Sudhakar Sahoo, Birendra Kumar Nayak, and Sk.
Sarif Hassan},
journal={arXiv preprint arXiv:0904.0534},
year={2009},
archivePrefix={arXiv},
eprint={0904.0534},
primaryClass={cs.DM}
} | choudhury2009theory |
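A minimal sketch for the entry above, assuming the standard carry definition CVT(a, b) = (a AND b) << 1 (the column-wise carries of binary addition, so a + b = CVT(a, b) + (a XOR b) always holds); printing the zero/nonzero table of CVT(i, j) reproduces the Sierpinski-style self-similarity the abstract mentions:

```python
def cvt(a: int, b: int) -> int:
    """Carry value of adding the bit strings a and b: bitwise AND, shifted left."""
    return (a & b) << 1

# The sum always splits into a carry part plus a bitwise-XOR part:
assert all(a + b == cvt(a, b) + (a ^ b) for a in range(64) for b in range(64))

# Zero/nonzero table of CVT(i, j): '.' marks cvt == 0 (i.e. i & j == 0),
# tracing the Sierpinski-triangle mask, consistent with the fractal claim above.
for i in range(16):
    print(''.join('.' if cvt(i, j) == 0 else '#' for j in range(16)))
```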
arxiv-6980 | 0904.0544 | Mission-Aware Medium Access Control in Random Access Networks | <|reference_start|>Mission-Aware Medium Access Control in Random Access Networks: We study mission-critical networking in wireless communication networks, where network users are subject to critical events such as emergencies and crises. If a critical event occurs to a user, the user needs to send necessary information for help as early as possible. However, most existing medium access control (MAC) protocols are not adequate to meet the urgent need for information transmission by users in a critical situation. In this paper, we propose a novel class of MAC protocols that utilize available past information as well as current information. Our proposed protocols are mission-aware since they prescribe different transmission decision rules to users in different situations. We show that the proposed protocols perform well not only when the system faces a critical situation but also when there is no critical situation. By utilizing past information, the proposed protocols coordinate transmissions by users to achieve high throughput in the normal phase of operation and to let a user in a critical situation make successful transmissions while it is in the critical situation. Moreover, the proposed protocols require short memory and no message exchanges.<|reference_end|> | arxiv | @article{park2009mission-aware,
title={Mission-Aware Medium Access Control in Random Access Networks},
author={Jaeok Park and Mihaela van der Schaar},
journal={arXiv preprint arXiv:0904.0544},
year={2009},
archivePrefix={arXiv},
eprint={0904.0544},
primaryClass={cs.NI cs.GT cs.IT math.IT}
} | park2009mission-aware |
arxiv-6981 | 0904.0545 | Time Hopping technique for faster reinforcement learning in simulations | <|reference_start|>Time Hopping technique for faster reinforcement learning in simulations: This preprint has been withdrawn by the author for revision<|reference_end|> | arxiv | @article{kormushev2009time,
title={Time Hopping technique for faster reinforcement learning in simulations},
author={Petar Kormushev, Kohei Nomoto, Fangyan Dong, Kaoru Hirota},
journal={arXiv preprint arXiv:0904.0545},
year={2009},
archivePrefix={arXiv},
eprint={0904.0545},
primaryClass={cs.AI cs.LG cs.RO}
} | kormushev2009time |
arxiv-6982 | 0904.0546 | Eligibility Propagation to Speed up Time Hopping for Reinforcement Learning | <|reference_start|>Eligibility Propagation to Speed up Time Hopping for Reinforcement Learning: A mechanism called Eligibility Propagation is proposed to speed up the Time Hopping technique used for faster Reinforcement Learning in simulations. Eligibility Propagation provides Time Hopping with abilities similar to those that eligibility traces provide for conventional Reinforcement Learning. It propagates values from one state to all of its temporal predecessors using a state transitions graph. Experiments on a simulated biped crawling robot confirm that Eligibility Propagation accelerates the learning process by more than a factor of 3.<|reference_end|> | arxiv | @article{kormushev2009eligibility,
title={Eligibility Propagation to Speed up Time Hopping for Reinforcement
Learning},
author={Petar Kormushev, Kohei Nomoto, Fangyan Dong, Kaoru Hirota},
journal={arXiv preprint arXiv:0904.0546},
year={2009},
archivePrefix={arXiv},
eprint={0904.0546},
primaryClass={cs.AI cs.LG cs.RO}
} | kormushev2009eligibility |
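The abstract above only says that a value change is pushed back to all temporal predecessors through a state-transition graph; the sketch below is one plausible reading of that mechanism, not the authors' algorithm (the discount gamma, the cutoff tol, and all names are our assumptions):

```python
from collections import deque

def propagate(V, predecessors, s, delta, gamma=0.95, tol=1e-4):
    """Push a value change `delta` at state `s` back to every temporal
    predecessor recorded in `predecessors` (state -> set of prior states),
    discounting by gamma per backward step and stopping once negligible."""
    queue = deque([(s, delta)])
    while queue:
        state, d = queue.popleft()
        for p in predecessors.get(state, ()):
            back = gamma * d
            if abs(back) > tol:
                V[p] = V.get(p, 0.0) + back
                queue.append((p, back))
    return V

# Tiny chain 0 -> 1 -> 2: an update at state 2 reaches both predecessors.
preds = {2: {1}, 1: {0}}
print(propagate({}, preds, s=2, delta=1.0))   # {1: 0.95, 0: 0.9025}
```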
arxiv-6983 | 0904.0570 | The Derivational Complexity Induced by the Dependency Pair Method | <|reference_start|>The Derivational Complexity Induced by the Dependency Pair Method: We study the derivational complexity induced by the dependency pair method, enhanced with standard refinements. We obtain upper bounds on the derivational complexity induced by the dependency pair method in terms of the derivational complexity of the base techniques employed. In particular we show that the derivational complexity induced by the dependency pair method based on some direct technique, possibly refined by argument filtering, the usable rules criterion, or dependency graphs, is primitive recursive in the derivational complexity induced by the direct method. This implies that the derivational complexity induced by a standard application of the dependency pair method based on traditional termination orders like KBO, LPO, and MPO is exactly the same as if those orders were applied as the only termination technique.<|reference_end|> | arxiv | @article{moser2009the,
title={The Derivational Complexity Induced by the Dependency Pair Method},
author={Georg Moser (University of Innsbruck), Andreas Schnabl (University of
Innsbruck)},
journal={Logical Methods in Computer Science, Volume 7, Issue 3 (July 13,
2011) lmcs:805},
year={2009},
doi={10.2168/LMCS-7(3:1)2011},
archivePrefix={arXiv},
eprint={0904.0570},
primaryClass={cs.LO cs.AI cs.CC cs.PL}
} | moser2009the |
arxiv-6984 | 0904.0578 | Efficient Description Logic Reasoning in Prolog: The DLog system | <|reference_start|>Efficient Description Logic Reasoning in Prolog: The DLog system: This paper describes a resolution based Description Logic reasoning system called DLog. DLog transforms Description Logic axioms into a Prolog program and uses the standard Prolog execution for efficiently answering instance retrieval queries. From the Description Logic point of view, DLog is an ABox reasoning engine for the full SHIQ language. The DLog approach makes it possible to store the individuals in a database instead of memory, which results in better scalability and helps using description logic ontologies directly on top of existing information sources. To appear in Theory and Practice of Logic Programming (TPLP).<|reference_end|> | arxiv | @article{lukácsy2009efficient,
title={Efficient Description Logic Reasoning in Prolog: The DLog system},
author={Gergely Luk'acsy, P'eter Szeredi},
journal={arXiv preprint arXiv:0904.0578},
year={2009},
archivePrefix={arXiv},
eprint={0904.0578},
primaryClass={cs.LO}
} | lukácsy2009efficient |
arxiv-6985 | 0904.0583 | Thin Partitions: Isoperimetric Inequalities and Sampling Algorithms for some Nonconvex Families | <|reference_start|>Thin Partitions: Isoperimetric Inequalities and Sampling Algorithms for some Nonconvex Families: Star-shaped bodies are an important nonconvex generalization of convex bodies (e.g., linear programming with violations). Here we present an efficient algorithm for sampling a given star-shaped body. The complexity of the algorithm grows polynomially in the dimension and inverse polynomially in the fraction of the volume taken up by the kernel of the star-shaped body. The analysis is based on a new isoperimetric inequality. Our main technical contribution is a tool for proving such inequalities when the domain is not convex. As a consequence, we obtain a polynomial algorithm for computing the volume of such a set as well. In contrast, linear optimization over star-shaped sets is NP-hard.<|reference_end|> | arxiv | @article{chandrasekaran2009thin,
title={Thin Partitions: Isoperimetric Inequalities and Sampling Algorithms for
some Nonconvex Families},
author={Karthekeyan Chandrasekaran, Daniel Dadush, Santosh Vempala},
journal={arXiv preprint arXiv:0904.0583},
year={2009},
archivePrefix={arXiv},
eprint={0904.0583},
primaryClass={cs.DS math.FA math.PR}
} | chandrasekaran2009thin |
arxiv-6986 | 0904.0585 | Controller synthesis with very simplified linear constraints in PN model | <|reference_start|>Controller synthesis with very simplified linear constraints in PN model: This paper addresses the problem of forbidden states for safe Petri nets modeling discrete event systems. We present an efficient method to construct a controller. A set of linear constraints allows forbidding the reachability of specific states. The number of these so-called forbidden states, and consequently the number of constraints, can be large and lead to a large number of control places. A systematic method for constructing a very simplified controller is offered. By using a method based on Petri net partial invariants, maximally permissive controllers are determined.<|reference_end|> | arxiv | @article{dideban2009controller,
title={Controller synthesis with very simplified linear constraints in PN model},
author={Abbas Dideban, M. Zareiee, Hassane Alla (GIPSA-lab)},
journal={arXiv preprint arXiv:0904.0585},
year={2009},
archivePrefix={arXiv},
eprint={0904.0585},
primaryClass={cs.IT math.IT}
} | dideban2009controller |
arxiv-6987 | 0904.0586 | Optimal Supervisory Control Synthesis | <|reference_start|>Optimal Supervisory Control Synthesis: The place invariant method is well known as an elegant way to construct a Petri net controller. It is possible to use such constraints to prevent forbidden states. But in the general case, the number of forbidden states can be very large, giving a great number of control places. In this paper, a systematic method to reduce the size and the number of constraints is presented. This method is applicable to safe and conservative Petri nets and gives a maximally permissive controller.<|reference_end|> | arxiv | @article{alla2009optimal,
title={Optimal Supervisory Control Synthesis},
author={Hassane. Alla (GIPSA-lab)},
journal={arXiv preprint arXiv:0904.0586},
year={2009},
archivePrefix={arXiv},
eprint={0904.0586},
primaryClass={cs.IT math.IT}
} | alla2009optimal |
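Both of the preceding Petri-net entries build on the classical place-invariant construction (Yamalidou, Moody, Lemmon, Antsaklis): a constraint L @ m_p <= b on the plant marking is enforced by control places with incidence D_c = -L @ D_p and initial marking m_c0 = b - L @ m_p0, which creates the invariant L @ m_p + m_c = b. A sketch under that assumption (the toy matrices are illustrative, taken from neither paper):

```python
import numpy as np

def supervisor(D_p, m_p0, L, b):
    """Control places enforcing L @ m_p <= b via the invariant L @ m_p + m_c = b."""
    D_c = -L @ D_p            # incidence rows of the control places
    m_c0 = b - L @ m_p0       # their initial marking
    assert np.all(m_c0 >= 0), "initial marking must already satisfy the constraints"
    return D_c, m_c0

# Toy plant: t1 moves a token p1 -> p2, t2 consumes from p2; require m_p2 <= 1.
D_p  = np.array([[-1,  0],
                 [ 1, -1]])
m_p0 = np.array([2, 0])
L, b = np.array([[0, 1]]), np.array([1])
print(supervisor(D_p, m_p0, L, b))   # (array([[-1,  1]]), array([1]))
```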
arxiv-6988 | 0904.0589 | Fuzzy Linguistic Logic Programming and its Applications | <|reference_start|>Fuzzy Linguistic Logic Programming and its Applications: The paper introduces fuzzy linguistic logic programming, which is a combination of fuzzy logic programming, introduced by P. Vojtas, and hedge algebras in order to facilitate the representation and reasoning on human knowledge expressed in natural languages. In fuzzy linguistic logic programming, truth values are linguistic ones, e.g., VeryTrue, VeryProbablyTrue, and LittleFalse, taken from a hedge algebra of a linguistic truth variable, and linguistic hedges (modifiers) can be used as unary connectives in formulae. This is motivated by the fact that humans reason mostly in terms of linguistic terms rather than in terms of numbers, and linguistic hedges are often used in natural languages to express different levels of emphasis. The paper presents: (i) the language of fuzzy linguistic logic programming; (ii) a declarative semantics in terms of Herbrand interpretations and models; (iii) a procedural semantics which directly manipulates linguistic terms to compute a lower bound to the truth value of a query, and proves its soundness; (iv) a fixpoint semantics of logic programs, and based on it, proves the completeness of the procedural semantics; (v) several applications of fuzzy linguistic logic programming; and (vi) an idea of implementing a system to execute fuzzy linguistic logic programs.<|reference_end|> | arxiv | @article{le2009fuzzy,
title={Fuzzy Linguistic Logic Programming and its Applications},
author={Van Hung Le (1), Fei Liu (1), and Dinh Khang Tran (2) ((1)La Trobe
University, Australia (2)Hanoi University of Technology, Vietnam)},
journal={arXiv preprint arXiv:0904.0589},
year={2009},
archivePrefix={arXiv},
eprint={0904.0589},
primaryClass={cs.LO}
} | le2009fuzzy |
arxiv-6989 | 0904.0643 | Performing Nonlinear Blind Source Separation with Signal Invariants | <|reference_start|>Performing Nonlinear Blind Source Separation with Signal Invariants: Given a time series of multicomponent measurements x(t), the usual objective of nonlinear blind source separation (BSS) is to find a "source" time series s(t), comprised of statistically independent combinations of the measured components. In this paper, the source time series is required to have a density function in (s,ds/dt)-space that is equal to the product of density functions of individual components. This formulation of the BSS problem has a solution that is unique, up to permutations and component-wise transformations. Separability is shown to impose constraints on certain locally invariant (scalar) functions of x, which are derived from local higher-order correlations of the data's velocity dx/dt. The data are separable if and only if they satisfy these constraints, and, if the constraints are satisfied, the sources can be explicitly constructed from the data. The method is illustrated by using it to separate two speech-like sounds recorded with a single microphone.<|reference_end|> | arxiv | @article{levin2009performing,
title={Performing Nonlinear Blind Source Separation with Signal Invariants},
author={David N. Levin (University of Chicago)},
journal={arXiv preprint arXiv:0904.0643},
year={2009},
doi={10.1109/TSP.2009.2034916},
archivePrefix={arXiv},
eprint={0904.0643},
primaryClass={cs.AI cs.LG}
} | levin2009performing |
arxiv-6990 | 0904.0644 | Settling the Complexity of Arrow-Debreu Equilibria in Markets with Additively Separable Utilities | <|reference_start|>Settling the Complexity of Arrow-Debreu Equilibria in Markets with Additively Separable Utilities: We prove that the problem of computing an Arrow-Debreu market equilibrium is PPAD-complete even when all traders use additively separable, piecewise-linear and concave utility functions. In fact, our proof shows that this market-equilibrium problem does not have a fully polynomial-time approximation scheme unless every problem in PPAD is solvable in polynomial time.<|reference_end|> | arxiv | @article{chen2009settling,
title={Settling the Complexity of Arrow-Debreu Equilibria in Markets with
Additively Separable Utilities},
author={Xi Chen, Decheng Dai, Ye Du and Shang-Hua Teng},
journal={arXiv preprint arXiv:0904.0644},
year={2009},
archivePrefix={arXiv},
eprint={0904.0644},
primaryClass={cs.CC cs.GT}
} | chen2009settling |
arxiv-6991 | 0904.0648 | Evolvability need not imply learnability | <|reference_start|>Evolvability need not imply learnability: We show that Boolean functions expressible as monotone disjunctive normal forms are PAC-evolvable under a uniform distribution on the Boolean cube if the hypothesis size is allowed to remain fixed. We further show that this result is insufficient to prove the PAC-learnability of monotone Boolean functions, thereby demonstrating a counter-example to a recent claim to the contrary. We further discuss scenarios wherein evolvability and learnability will coincide as well as scenarios under which they differ. The implications of the latter case on the prospects of learning in complex hypothesis spaces is briefly examined.<|reference_end|> | arxiv | @article{srivastava2009evolvability,
title={Evolvability need not imply learnability},
author={Nisheeth Srivastava},
journal={arXiv preprint arXiv:0904.0648},
year={2009},
archivePrefix={arXiv},
eprint={0904.0648},
primaryClass={cs.LG cs.CC}
} | srivastava2009evolvability |
arxiv-6992 | 0904.0682 | Privacy in Search Logs | <|reference_start|>Privacy in Search Logs: Search engine companies collect the "database of intentions", the histories of their users' search queries. These search logs are a gold mine for researchers. Search engine companies, however, are wary of publishing search logs in order not to disclose sensitive information. In this paper we analyze algorithms for publishing frequent keywords, queries and clicks of a search log. We first show how methods that achieve variants of $k$-anonymity are vulnerable to active attacks. We then demonstrate that the stronger guarantee ensured by $\epsilon$-differential privacy unfortunately does not provide any utility for this problem. We then propose an algorithm ZEALOUS and show how to set its parameters to achieve $(\epsilon,\delta)$-probabilistic privacy. We also contrast our analysis of ZEALOUS with an analysis by Korolova et al. [17] that achieves $(\epsilon',\delta')$-indistinguishability. Our paper concludes with a large experimental study using real applications where we compare ZEALOUS and previous work that achieves $k$-anonymity in search log publishing. Our results show that ZEALOUS yields comparable utility to $k$-anonymity while at the same time achieving much stronger privacy guarantees.<|reference_end|> | arxiv | @article{goetz2009privacy,
title={Privacy in Search Logs},
author={Michaela Goetz, Ashwin Machanavajjhala, Guozhang Wang, Xiaokui Xiao,
Johannes Gehrke},
journal={arXiv preprint arXiv:0904.0682},
year={2009},
archivePrefix={arXiv},
eprint={0904.0682},
primaryClass={cs.DB cs.IR}
} | goetz2009privacy |
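The abstract above does not spell out ZEALOUS's mechanics, so the sketch below is only our guess at a typical two-threshold, Laplace-noise shape for frequent-item publishing; every name, phase, and parameter in it is an assumption, and the paper itself is the reference for setting parameters to reach $(\epsilon,\delta)$-probabilistic privacy:

```python
import numpy as np

def zealous_sketch(counts, tau1, tau2, lam, rng=np.random.default_rng(0)):
    """counts: dict item -> true frequency in the search log."""
    published = {}
    for item, c in counts.items():
        if c < tau1:                        # phase 1: drop rare items outright
            continue
        noisy = c + rng.laplace(scale=lam)  # phase 2: perturb surviving counts
        if noisy >= tau2:                   # phase 3: keep confidently frequent items
            published[item] = noisy
    return published

print(zealous_sketch({"weather": 900, "rare query": 3, "news": 450},
                     tau1=10, tau2=50, lam=5.0))
```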
arxiv-6993 | 0904.0698 | About the impossibility to prove P=NP and the pseudo-randomness in NP | <|reference_start|>About the impossibility to prove P=NP and the pseudo-randomness in NP: The relationship between the complexity classes P and NP is an unsolved question in the field of theoretical computer science. In this paper, we look at the link between the P - NP question and the "Deterministic" versus "Non Deterministic" nature of a problem, and more specifically at the temporal nature of the complexity within the NP class of problems. Let us recall that the NP class is called the class of "Non Deterministic Polynomial" languages. Using the meta argument that results in Mathematics should be "time independent" as they are reproducible, the paper shows that the P!=NP assertion is impossible to prove in the a-temporal framework of Mathematics. In a previous version of the report, we used a similar argument based on randomness to show that the P = NP assertion was also impossible to prove, but this part of the paper was shown to be incorrect. So, this version removes it. In fact, this paper highlights the time dependence of the complexity for any NP problem, linked to some pseudo-randomness at its heart.<|reference_end|> | arxiv | @article{rémon2009about,
title={About the impossibility to prove P=NP and the pseudo-randomness in NP},
author={M. R'emon},
journal={arXiv preprint arXiv:0904.0698},
year={2009},
number={2008/18, Dept Math, Namur University},
archivePrefix={arXiv},
eprint={0904.0698},
primaryClass={cs.CC}
} | rémon2009about |
arxiv-6994 | 0904.0719 | Moveable objects and applications, based on them | <|reference_start|>Moveable objects and applications, based on them: The inner views of all our applications are predetermined by the designers; only some insignificant variations are allowed with the help of an adaptive interface. In several programs you can find some moveable objects, but this is extremely rare. However, the design of applications on the basis of moveable and resizable objects opens an absolutely new way of programming; such applications are much more effective in users' work, because each user can adjust an application to his or her purposes. Programs using an adaptive interface only implement the designer's ideas of the best reaction to any of the users' actions or commands. Applications built on moveable elements have no such predetermined system of rules; they are fully controlled by the users. This article describes and demonstrates this new way of designing applications.<|reference_end|> | arxiv | @article{andreyev2009moveable,
title={Moveable objects and applications, based on them},
author={Sergey Andreyev},
journal={arXiv preprint arXiv:0904.0719},
year={2009},
archivePrefix={arXiv},
eprint={0904.0719},
primaryClass={cs.HC}
} | andreyev2009moveable |
arxiv-6995 | 0904.0721 | Optimal Tableau Decision Procedures for PDL | <|reference_start|>Optimal Tableau Decision Procedures for PDL: We reformulate Pratt's tableau decision procedure of checking satisfiability of a set of formulas in PDL. Our formulation is simpler and more direct for implementation. Extending the method we give the first EXPTIME (optimal) tableau decision procedure not based on transformation for checking consistency of an ABox w.r.t. a TBox in PDL (here, PDL is treated as a description logic). We also prove the new result that the data complexity of the instance checking problem in PDL is coNP-complete.<|reference_end|> | arxiv | @article{nguyen2009optimal,
title={Optimal Tableau Decision Procedures for PDL},
author={Linh Anh Nguyen and Andrzej Sza{l}as},
journal={Fund. Inform. 104(4), pp. 349-384, 2010},
year={2009},
archivePrefix={arXiv},
eprint={0904.0721},
primaryClass={cs.LO cs.AI cs.CC}
} | nguyen2009optimal |
arxiv-6996 | 0904.0727 | (Meta) Kernelization | <|reference_start|>(Meta) Kernelization: In a parameterized problem, every instance I comes with a positive integer k. The problem is said to admit a polynomial kernel if, in polynomial time, one can reduce the size of the instance I to a polynomial in k, while preserving the answer. In this work we give two meta-theorems on kernelization. The first theorem says that all problems expressible in Counting Monadic Second Order Logic and satisfying a coverability property admit a polynomial kernel on graphs of bounded genus. Our second result is that all problems that have finite integer index and satisfy a weaker coverability property admit a linear kernel on graphs of bounded genus. These theorems unify and extend all previously known kernelization results for planar graph problems.<|reference_end|> | arxiv | @article{bodlaender2009(meta),
title={(Meta) Kernelization},
author={Hans L. Bodlaender, Fedor V. Fomin, Daniel Lokshtanov, Eelko Penninkx,
Saket Saurabh and Dimitrios M. Thilikos},
journal={arXiv preprint arXiv:0904.0727},
year={2009},
archivePrefix={arXiv},
eprint={0904.0727},
primaryClass={cs.DM cs.DS}
} | bodlaender2009(meta) |
arxiv-6997 | 0904.0747 | Bethe Free Energy Approach to LDPC Decoding on Memory Channels | <|reference_start|>Bethe Free Energy Approach to LDPC Decoding on Memory Channels: We address the problem of the joint sequence detection in partial-response (PR) channels and decoding of low-density parity-check (LDPC) codes. We model the PR channel and the LDPC code as a combined inference problem. We present for the first time the derivation of the belief propagation (BP) equations that allow the simultaneous detection and decoding of a LDPC codeword in a PR channel. To accomplish this we follow an approach from statistical mechanics, in which the Bethe free energy is minimized with respect to the beliefs on the nodes of the PR-LDPC graph. The equations obtained are explicit and are optimal for decoding LDPC codes on PR channels with polynomial $h(D) = 1 - a D^n$ (a real, n positive integer) in the sense that they provide the exact inference of the marginal probabilities on the nodes in a graph free of loops. A simple algorithmic solution to the set of BP equations is proposed and evaluated using numerical simulations, yielding bit-error rate performances that surpass those of turbo equalization.<|reference_end|> | arxiv | @article{anguita2009bethe,
title={Bethe Free Energy Approach to LDPC Decoding on Memory Channels},
author={Jaime A. Anguita, Michael Chertkov, Mark A. Neifeld, Bane Vasic},
journal={arXiv preprint arXiv:0904.0747},
year={2009},
archivePrefix={arXiv},
eprint={0904.0747},
primaryClass={cs.IT math.IT}
} | anguita2009bethe |
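For the channel family named in the entry above, the transfer polynomial $h(D) = 1 - a D^n$ simply means each output is y_t = x_t - a*x_{t-n} plus Gaussian noise. A simulation sketch (parameter values a, n, sigma are illustrative assumptions, not from the paper):

```python
import numpy as np

def pr_channel(x, a=0.5, n=1, sigma=0.1, rng=np.random.default_rng(0)):
    """y_t = x_t - a * x_{t-n} + AWGN, i.e. transfer polynomial h(D) = 1 - a*D^n."""
    x = np.asarray(x, dtype=float)
    delayed = np.concatenate([np.zeros(n), x[:-n]])   # x_{t-n}, zero history
    return x - a * delayed + rng.normal(scale=sigma, size=x.size)

print(pr_channel([1, -1, 1, 1, -1]))   # BPSK symbols through the ISI channel
```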
arxiv-6998 | 0904.0751 | Distributed Source Coding of Correlated Gaussian Remote Sources | <|reference_start|>Distributed Source Coding of Correlated Gaussian Remote Sources: We consider the distributed source coding system for $L$ correlated Gaussian observations $Y_i, i=1,2,...,L$. Let $X_i, i=1,2,...,L$ be $L$ correlated Gaussian random variables and $N_i, i=1,2,...,L$ be independent additive Gaussian noises, also independent of $X_i, i=1,2,...,L$. We consider the case where for each $i=1,2,...,L$, $Y_i$ is a noisy observation of $X_i$, that is, $Y_i=X_i+N_i$. For this coding system, the determination of the rate distortion region remains open. In this paper, we derive explicit outer and inner bounds of the rate distortion region. We further find an explicit sufficient condition for these two bounds to match. We also study the sum rate part of the rate distortion region when the correlation has some symmetrical property and derive a new lower bound of the sum rate part. We derive a sufficient condition for this lower bound to be tight. The derived sufficient condition depends only on the correlation property of the sources and their observations.<|reference_end|> | arxiv | @article{oohama2009distributed,
title={Distributed Source Coding of Correlated Gaussian Remote Sources},
author={Yasutada Oohama},
journal={arXiv preprint arXiv:0904.0751},
year={2009},
archivePrefix={arXiv},
eprint={0904.0751},
primaryClass={cs.IT math.IT}
} | oohama2009distributed |
arxiv-6999 | 0904.0768 | Codes on Planar Graphs | <|reference_start|>Codes on Planar Graphs: Codes defined on graphs and their properties have been subjects of intense recent research. On the practical side, constructions for capacity-approaching codes are graphical. On the theoretical side, codes on graphs provide several intriguing problems in the intersection of coding theory and graph theory. In this paper, we study codes defined by planar Tanner graphs. We derive an upper bound on minimum distance $d$ of such codes as a function of the code rate $R$ for $R \ge 5/8$. The bound is given by $$d\le \lceil \frac{7-8R}{2(2R-1)} \rceil + 3\le 7.$$ Among the interesting conclusions of this result are the following: (1) planar graphs do not support asymptotically good codes, and (2) finite-length, high-rate codes on graphs with high minimum distance will necessarily be non-planar.<|reference_end|> | arxiv | @article{srinivasan2009codes,
title={Codes on Planar Graphs},
author={Srimathy Srinivasan, Andrew Thangaraj},
journal={arXiv preprint arXiv:0904.0768},
year={2009},
archivePrefix={arXiv},
eprint={0904.0768},
primaryClass={cs.IT math.IT}
} | srinivasan2009codes |
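The bound in the entry above is easy to evaluate numerically; the following check (a convenience script, not from the paper) confirms that it never exceeds 7 for rates R >= 5/8 and tightens as the rate grows:

```python
from math import ceil

def planar_d_bound(R):
    """d <= ceil((7 - 8R) / (2(2R - 1))) + 3, valid for rates R >= 5/8."""
    return ceil((7 - 8 * R) / (2 * (2 * R - 1))) + 3

for R in (5 / 8, 0.7, 0.75, 0.9):
    print(f"R = {R:.3f}: d <= {planar_d_bound(R)}")   # prints 7, 5, 4, 3
```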
arxiv-7000 | 0904.0771 | Effect of cell residence time variance on the performance of an advanced paging algorithm | <|reference_start|>Effect of cell residence time variance on the performance of an advanced paging algorithm: The use of advanced sequential paging algorithms has been suggested as a means to reduce the signaling cost in future mobile cellular networks. In a proposed algorithm (Koukoutsidis and Theologou, 2003), the system can use the additional information of the last interaction cell combined with a mobility model to predict the short-term location probabilities at the time of an incoming call arrival. The short-term location probabilities reduce the uncertainty in mobile user position and thus greatly improve the search. In this paper, an analytical model is derived that allows for a general distribution of cell residence times. By considering a Gamma distribution, we study the effect of the variance of cell residence times and derive useful results on the performance of the algorithm.<|reference_end|> | arxiv | @article{koukoutsidis2009effect,
title={Effect of cell residence time variance on the performance of an advanced
paging algorithm},
author={Ioannis Koukoutsidis, Petros Papaioannou, Michael E. Theologou},
journal={arXiv preprint arXiv:0904.0771},
year={2009},
archivePrefix={arXiv},
eprint={0904.0771},
primaryClass={cs.PF cs.NI}
} | koukoutsidis2009effect |