corpus_id: string (length 7-12)
paper_id: string (length 9-16)
title: string (length 1-261)
abstract: string (length 70-4.02k)
source: string (1 class)
bibtex: string (length 208-20.9k)
citation_key: string (length 6-100)
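The seven fields above repeat in that order for each flattened record below. A minimal sketch of regrouping such lines into dicts — the `parse_records` helper, and the assumption that every record contributes exactly seven non-empty lines, are mine, not part of the dataset:

```python
# Field order taken from the schema header above.
FIELDS = ["corpus_id", "paper_id", "title", "abstract",
          "source", "bibtex", "citation_key"]

def parse_records(lines):
    """Group flattened field lines into one dict per record (fixed order)."""
    values = [ln.rstrip("\n") for ln in lines if ln.strip()]
    n = len(FIELDS)
    # Drop a trailing partial record, if any, rather than mis-align fields.
    for i in range(0, len(values) - len(values) % n, n):
        yield dict(zip(FIELDS, values[i:i + n]))

# Hypothetical sample mirroring the layout of the first record below.
sample = [
    "arxiv-7001",
    "0904.0776",
    "Induction of High-level Behaviors from Problem-solving Traces"
    " using Machine Learning Tools",
    "<|reference_start|>...<|reference_end|>",
    "arxiv",
    "@article{robinet2009induction, ...}",
    "robinet2009induction",
]
records = list(parse_records(sample))
```

Each yielded dict can then be keyed by `citation_key` or fed to a BibTeX parser via its `bibtex` field.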
arxiv-7001
0904.0776
Induction of High-level Behaviors from Problem-solving Traces using Machine Learning Tools
<|reference_start|>Induction of High-level Behaviors from Problem-solving Traces using Machine Learning Tools: This paper applies machine learning techniques to student modeling. It presents a method for discovering high-level student behaviors from a very large set of low-level traces corresponding to problem-solving actions in a learning environment. Basic actions are encoded into sets of domain-dependent attribute-value patterns called cases. Then a domain-independent hierarchical clustering identifies what we call general attitudes, yielding automatic diagnosis expressed in natural language, addressed in principle to teachers. The method can be applied to individual students or to entire groups, like a class. We exhibit examples of this system applied to thousands of students' actions in the domain of algebraic transformations.<|reference_end|>
arxiv
@article{robinet2009induction, title={Induction of High-level Behaviors from Problem-solving Traces using Machine Learning Tools}, author={Vivien Robinet (Leibniz - IMAG, TIMC), Gilles Bisson (Leibniz - IMAG, TIMC), Mirta B. Gordon (Leibniz - IMAG, TIMC), Beno\^it Lemaire (Leibniz - IMAG, TIMC)}, journal={IEEE Intelligent Systems 22, 4 (2007) 22}, year={2009}, archivePrefix={arXiv}, eprint={0904.0776}, primaryClass={stat.ML cs.LG} }
robinet2009induction
arxiv-7002
0904.0785
Evaluating and Optimising Models of Network Growth
<|reference_start|>Evaluating and Optimising Models of Network Growth: This paper presents a statistically sound method for measuring the accuracy with which a probabilistic model reflects the growth of a network, and a method for optimising parameters in such a model. The technique is data-driven, and can be used for the modeling and simulation of any kind of evolving network. The overall framework, a Framework for Evolving Topology Analysis (FETA), is tested on data sets collected from the Internet AS-level topology, social networking websites and a co-authorship network. Statistical models of the growth of these networks are produced and tested using a likelihood-based method. The models are then used to generate artificial topologies with the same statistical properties as the originals. This work can be used to predict future growth patterns for a known network, or to generate artificial models of graph topology evolution for simulation purposes. Particular application examples include strategic network planning, user profiling in social networks or infrastructure deployment in managed overlay-based services.<|reference_end|>
arxiv
@article{clegg2009evaluating, title={Evaluating and Optimising Models of Network Growth}, author={Richard Clegg, Raul Landa, Uli Harder, Miguel Rio}, journal={arXiv preprint arXiv:0904.0785}, year={2009}, archivePrefix={arXiv}, eprint={0904.0785}, primaryClass={cs.NI} }
clegg2009evaluating
arxiv-7003
0904.0811
The density of weights of Generalized Reed--Muller codes
<|reference_start|>The density of weights of Generalized Reed--Muller codes: We study the density of the weights of Generalized Reed--Muller codes. Let $RM_p(r,m)$ denote the code of multivariate polynomials over $\F_p$ in $m$ variables of total degree at most $r$. We consider the case of fixed degree $r$, when we let the number of variables $m$ tend to infinity. We prove that the set of relative weights of codewords is quite sparse: for every $\alpha \in [0,1]$ which is not rational of the form $\frac{\ell}{p^k}$, there exists an interval around $\alpha$ in which no relative weight exists, for any value of $m$. This line of research is to the best of our knowledge new, and complements the traditional lines of research, which focus on the weight distribution and the divisibility properties of the weights. Equivalently, we study distributions taking values in a finite field, which can be approximated by distributions coming from constant degree polynomials, where we do not bound the number of variables. We give a complete characterization of all such distributions.<|reference_end|>
arxiv
@article{lovett2009the, title={The density of weights of Generalized Reed--Muller codes}, author={Shachar Lovett}, journal={arXiv preprint arXiv:0904.0811}, year={2009}, archivePrefix={arXiv}, eprint={0904.0811}, primaryClass={cs.IT math.IT} }
lovett2009the
arxiv-7004
0904.0813
Projective Space Codes for the Injection Metric
<|reference_start|>Projective Space Codes for the Injection Metric: In the context of error control in random linear network coding, it is useful to construct codes that comprise well-separated collections of subspaces of a vector space over a finite field. In this paper, the metric used is the so-called "injection distance", introduced by Silva and Kschischang. A Gilbert-Varshamov bound for such codes is derived. Using the code-construction framework of Etzion and Silberstein, new non-constant-dimension codes are constructed; these codes contain more codewords than comparable codes designed for the subspace metric.<|reference_end|>
arxiv
@article{khaleghi2009projective, title={Projective Space Codes for the Injection Metric}, author={Azadeh Khaleghi, Frank R. Kschischang}, journal={arXiv preprint arXiv:0904.0813}, year={2009}, archivePrefix={arXiv}, eprint={0904.0813}, primaryClass={cs.IT math.IT} }
khaleghi2009projective
arxiv-7005
0904.0814
Stability Analysis and Learning Bounds for Transductive Regression Algorithms
<|reference_start|>Stability Analysis and Learning Bounds for Transductive Regression Algorithms: This paper uses the notion of algorithmic stability to derive novel generalization bounds for several families of transductive regression algorithms, using both convexity and closed-form solutions. Our analysis helps compare the stability of these algorithms. It also shows that a number of widely used transductive regression algorithms are in fact unstable. Finally, it reports the results of experiments with local transductive regression demonstrating the benefit of our stability bounds for model selection, for one of the algorithms, in particular for determining the radius of the local neighborhood used by the algorithm.<|reference_end|>
arxiv
@article{cortes2009stability, title={Stability Analysis and Learning Bounds for Transductive Regression Algorithms}, author={Corinna Cortes, Mehryar Mohri, Dmitry Pechyony, Ashish Rastogi}, journal={arXiv preprint arXiv:0904.0814}, year={2009}, archivePrefix={arXiv}, eprint={0904.0814}, primaryClass={cs.LG} }
cortes2009stability
arxiv-7006
0904.0821
Imaging of moving targets with multi-static SAR using an overcomplete dictionary
<|reference_start|>Imaging of moving targets with multi-static SAR using an overcomplete dictionary: This paper presents a method for imaging of moving targets using multi-static SAR by treating the problem as one of spatial reflectivity signal inversion over an overcomplete dictionary of target velocities. Since SAR sensor returns can be related to the spatial frequency domain projections of the scattering field, we exploit insights from compressed sensing theory to show that moving targets can be effectively imaged with transmitters and receivers randomly dispersed in a multi-static geometry within a narrow forward cone around the scene of interest. Existing approaches to dealing with moving targets in SAR solve a coupled non-linear problem of target scattering and motion estimation typically through matched filtering. In contrast, by using an overcomplete dictionary approach we effectively linearize the forward model and solve the moving target problem as a larger, unified regularized inversion problem subject to sparsity constraints.<|reference_end|>
arxiv
@article{stojanovic2009imaging, title={Imaging of moving targets with multi-static SAR using an overcomplete dictionary}, author={Ivana Stojanovic and William C. Karl}, journal={arXiv preprint arXiv:0904.0821}, year={2009}, doi={10.1109/JSTSP.2009.2038982}, archivePrefix={arXiv}, eprint={0904.0821}, primaryClass={cs.IT math.IT} }
stojanovic2009imaging
arxiv-7007
0904.0828
On approximating Gaussian relay networks by deterministic networks
<|reference_start|>On approximating Gaussian relay networks by deterministic networks: We examine the extent to which Gaussian relay networks can be approximated by deterministic networks, and present two results, one negative and one positive. The gap between the capacities of a Gaussian relay network and a corresponding linear deterministic network can be unbounded. The key reasons are that the linear deterministic model fails to capture the phase of received signals, and there is a loss in signal strength in the reduction to a linear deterministic network. On the positive side, Gaussian relay networks are indeed well approximated by certain discrete superposition networks, where the inputs and outputs to the channels are discrete, and channel gains are signed integers. As a corollary, MIMO channels cannot be approximated by the linear deterministic model but can be by the discrete superposition model.<|reference_end|>
arxiv
@article{anand2009on, title={On approximating Gaussian relay networks by deterministic networks}, author={M. Anand, P. R. Kumar}, journal={arXiv preprint arXiv:0904.0828}, year={2009}, archivePrefix={arXiv}, eprint={0904.0828}, primaryClass={cs.IT math.IT} }
anand2009on
arxiv-7008
0904.0859
Approximability of Sparse Integer Programs
<|reference_start|>Approximability of Sparse Integer Programs: The main focus of this paper is a pair of new approximation algorithms for certain integer programs. First, for covering integer programs {min cx: Ax >= b, 0 <= x <= d} where A has at most k nonzeroes per row, we give a k-approximation algorithm. (We assume A, b, c, d are nonnegative.) For any k >= 2 and eps>0, if P != NP this ratio cannot be improved to k-1-eps, and under the unique games conjecture this ratio cannot be improved to k-eps. One key idea is to replace individual constraints by others that have better rounding properties but the same nonnegative integral solutions; another critical ingredient is knapsack-cover inequalities. Second, for packing integer programs {max cx: Ax <= b, 0 <= x <= d} where A has at most k nonzeroes per column, we give a (2k^2+2)-approximation algorithm. Our approach builds on the iterated LP relaxation framework. In addition, we obtain improved approximations for the second problem when k=2, and for both problems when every A_{ij} is small compared to b_i. Finally, we demonstrate a 17/16-inapproximability for covering integer programs with at most two nonzeroes per column.<|reference_end|>
arxiv
@article{pritchard2009approximability, title={Approximability of Sparse Integer Programs}, author={David Pritchard, Deeparnab Chakrabarty}, journal={arXiv preprint arXiv:0904.0859}, year={2009}, archivePrefix={arXiv}, eprint={0904.0859}, primaryClass={cs.DS cs.DM} }
pritchard2009approximability
arxiv-7009
0904.0879
On Superposition Coding for the Wyner-Ziv Problem
<|reference_start|>On Superposition Coding for the Wyner-Ziv Problem: In problems of lossy source/noisy channel coding with side information, the theoretical bounds are achieved using "good" source/channel codes that can be partitioned into "good" channel/source codes. A scheme that achieves optimality in channel coding with side information at the encoder using independent channel and source codes was outlined in previous works. In practice, the original problem is transformed into a multiple-access problem in which the superposition of the two independent codes can be decoded using successive interference cancellation. Inspired by this work, we analyze the superposition approach for source coding with side information at the decoder. We present a random coding analysis that shows achievability of the Wyner-Ziv bound. Then, we discuss some issues related to the practical implementation of this method.<|reference_end|>
arxiv
@article{cappellari2009on, title={On Superposition Coding for the Wyner-Ziv Problem}, author={Lorenzo Cappellari}, journal={arXiv preprint arXiv:0904.0879}, year={2009}, archivePrefix={arXiv}, eprint={0904.0879}, primaryClass={cs.IT math.IT} }
cappellari2009on
arxiv-7010
0904.0942
Boosting the Accuracy of Differentially-Private Histograms Through Consistency
<|reference_start|>Boosting the Accuracy of Differentially-Private Histograms Through Consistency: We show that it is possible to significantly improve the accuracy of a general class of histogram queries while satisfying differential privacy. Our approach carefully chooses a set of queries to evaluate, and then exploits consistency constraints that should hold over the noisy output. In a post-processing phase, we compute the consistent input most likely to have produced the noisy output. The final output is differentially-private and consistent, but in addition, it is often much more accurate. We show, both theoretically and experimentally, that these techniques can be used for estimating the degree sequence of a graph very precisely, and for computing a histogram that can support arbitrary range queries accurately.<|reference_end|>
arxiv
@article{hay2009boosting, title={Boosting the Accuracy of Differentially-Private Histograms Through Consistency}, author={Michael Hay, Vibhor Rastogi, Gerome Miklau, and Dan Suciu}, journal={arXiv preprint arXiv:0904.0942}, year={2009}, archivePrefix={arXiv}, eprint={0904.0942}, primaryClass={cs.DB cs.CR} }
hay2009boosting
arxiv-7011
0904.0962
Color Dipole Moments for Edge Detection
<|reference_start|>Color Dipole Moments for Edge Detection: Dipole and higher moments are physical quantities used to describe a charge distribution. In analogy with electromagnetism, it is possible to define the dipole moments for a gray-scale image, according to the single aspect of a gray-tone map. In this paper we define the color dipole moments for color images. For color maps, in fact, there are three aspects to consider: the three primary colors. By associating three color charges with each pixel, color dipole moments can be easily defined and used for edge detection.<|reference_end|>
arxiv
@article{sparavigna2009color, title={Color Dipole Moments for Edge Detection}, author={Amelia Sparavigna}, journal={arXiv preprint arXiv:0904.0962}, year={2009}, archivePrefix={arXiv}, eprint={0904.0962}, primaryClass={cs.CV} }
sparavigna2009color
arxiv-7012
0904.0973
A statistical mechanical interpretation of algorithmic information theory III: Composite systems and fixed points
<|reference_start|>A statistical mechanical interpretation of algorithmic information theory III: Composite systems and fixed points: The statistical mechanical interpretation of algorithmic information theory (AIT, for short) was introduced and developed by our former works [K. Tadaki, Local Proceedings of CiE 2008, pp.425-434, 2008] and [K. Tadaki, Proceedings of LFCS'09, Springer's LNCS, vol.5407, pp.422-440, 2009], where we introduced the notion of thermodynamic quantities, such as partition function Z(T), free energy F(T), energy E(T), and statistical mechanical entropy S(T), into AIT. We then discovered that, in the interpretation, the temperature T equals the partial randomness of the values of all these thermodynamic quantities, where the notion of partial randomness is a stronger representation of the compression rate by means of program-size complexity. Furthermore, we showed that this situation holds for the temperature itself as a thermodynamic quantity, namely, for each of the thermodynamic quantities above, the computability of its value at temperature T gives a sufficient condition for T in (0,1) to be a fixed point on partial randomness. In this paper, we develop the statistical mechanical interpretation of AIT further and pursue its formal correspondence to normal statistical mechanics. The thermodynamic quantities in AIT are defined based on the halting set of an optimal computer, which is a universal decoding algorithm used to define the notion of program-size complexity. We show that there are infinitely many optimal computers which give completely different sufficient conditions for each of the thermodynamic quantities in AIT. We do this by introducing the notion of composition of computers into AIT, which corresponds to the notion of composition of systems in normal statistical mechanics.<|reference_end|>
arxiv
@article{tadaki2009a, title={A statistical mechanical interpretation of algorithmic information theory III: Composite systems and fixed points}, author={Kohtaro Tadaki}, journal={Mathematical Structures in Computer Science 22 (2012) 752-770}, year={2009}, doi={10.1017/S096012951100051X}, archivePrefix={arXiv}, eprint={0904.0973}, primaryClass={cs.IT cs.CC math.IT math.PR} }
tadaki2009a
arxiv-7013
0904.0981
Dependency Pairs and Polynomial Path Orders
<|reference_start|>Dependency Pairs and Polynomial Path Orders: We show how polynomial path orders can be employed efficiently in conjunction with weak innermost dependency pairs to automatically certify polynomial runtime complexity of term rewrite systems and the polytime computability of the functions computed. The established techniques have been implemented and we provide ample experimental data to assess the new method.<|reference_end|>
arxiv
@article{avanzini2009dependency, title={Dependency Pairs and Polynomial Path Orders}, author={Martin Avanzini and Georg Moser}, journal={arXiv preprint arXiv:0904.0981}, year={2009}, archivePrefix={arXiv}, eprint={0904.0981}, primaryClass={cs.LO cs.AI cs.CC cs.SC} }
avanzini2009dependency
arxiv-7014
0904.0986
Approche conceptuelle par un processus d'annotation pour la repr\'esentation et la valorisation de contenus informationnels en intelligence \'economique (IE)
<|reference_start|>Approche conceptuelle par un processus d'annotation pour la repr\'esentation et la valorisation de contenus informationnels en intelligence \'economique (IE): In the era of the information society, the impact of information systems on the economy of the material and the immaterial is certainly perceptible. With regard to the information resources of an organization, annotation serves to enrich informational content, to track the intellectual activities on a document, and to establish the added value of information for the benefit of solving a decision-making problem in the context of economic intelligence. Our contribution is distinguished by the representation of an annotation process and its inherent concepts, leading the decision-maker to an anticipated decision: the provision of relevant and annotated information. Access to such information in the system is made easy by taking into account the diversity of resources, annotated both formally and informally by the EI actors. An essential research framework consists of integrating into the decision-making process the annotator's activity, the software agent (or the reasoning mechanisms), and the enhancement of information resources.<|reference_end|>
arxiv
@article{sidhom2009approche, title={Approche conceptuelle par un processus d'annotation pour la repr\'esentation et la valorisation de contenus informationnels en intelligence \'economique (IE)}, author={Sahbi Sidhom (LORIA)}, journal={Syst\`emes d'Information et Intelligence Economique (SIIE) 1 ( ISBN 9978-9973868-19-0) (2008) pp. 172-190}, year={2009}, archivePrefix={arXiv}, eprint={0904.0986}, primaryClass={cs.IR} }
sidhom2009approche
arxiv-7015
0904.0994
Breaking through the Thresholds: an Analysis for Iterative Reweighted $\ell_1$ Minimization via the Grassmann Angle Framework
<|reference_start|>Breaking through the Thresholds: an Analysis for Iterative Reweighted $\ell_1$ Minimization via the Grassmann Angle Framework: It is now well understood that $\ell_1$ minimization algorithm is able to recover sparse signals from incomplete measurements [2], [1], [3] and sharp recoverable sparsity thresholds have also been obtained for the $\ell_1$ minimization algorithm. However, even though iterative reweighted $\ell_1$ minimization algorithms or related algorithms have been empirically observed to boost the recoverable sparsity thresholds for certain types of signals, no rigorous theoretical results have been established to prove this fact. In this paper, we try to provide a theoretical foundation for analyzing the iterative reweighted $\ell_1$ algorithms. In particular, we show that for a nontrivial class of signals, the iterative reweighted $\ell_1$ minimization can indeed deliver recoverable sparsity thresholds larger than that given in [1], [3]. Our results are based on a high-dimensional geometrical analysis (Grassmann angle analysis) of the null-space characterization for $\ell_1$ minimization and weighted $\ell_1$ minimization algorithms.<|reference_end|>
arxiv
@article{xu2009breaking, title={Breaking through the Thresholds: an Analysis for Iterative Reweighted $\ell_1$ Minimization via the Grassmann Angle Framework}, author={Weiyu Xu, M. Amin Khajehnejad, Salman Avestimehr, Babak Hassibi}, journal={arXiv preprint arXiv:0904.0994}, year={2009}, archivePrefix={arXiv}, eprint={0904.0994}, primaryClass={math.PR cs.IT math.IT} }
xu2009breaking
arxiv-7016
0904.1002
A Program to Determine the Exact Competitive Ratio of List s-Batching with Unit Jobs
<|reference_start|>A Program to Determine the Exact Competitive Ratio of List s-Batching with Unit Jobs: We consider the online list s-batch problem, where all the jobs have processing time 1 and we seek to minimize the sum of the completion times of the jobs. We give a Java program which is used to verify that the competitiveness of this problem is 619/583.<|reference_end|>
arxiv
@article{bein2009a, title={A Program to Determine the Exact Competitive Ratio of List s-Batching with Unit Jobs}, author={Wolfgang Bein, Leah Epstein, Lawrence L. Larmore, John Noga}, journal={arXiv preprint arXiv:0904.1002}, year={2009}, archivePrefix={arXiv}, eprint={0904.1002}, primaryClass={cs.DS} }
bein2009a
arxiv-7017
0904.1083
5-axis High Speed Milling Optimisation
<|reference_start|>5-axis High Speed Milling Optimisation: Manufacturing of free-form parts relies on the calculation of a tool path based on a CAD model, on a machining strategy, and on a given numerically controlled machine tool. In order to reach the best possible performance, it is necessary to take as many constraints as possible into account during tool path calculation. For this purpose, we have developed a surface representation of the tool paths to manage 5-axis High Speed Milling, which is the most complicated case. This model allows integrating, early in the step of tool path computation, the machine tool's geometrical constraints (axis ranges, part holder orientation), kinematical constraints (speed and acceleration on the axes, singularities), as well as gouging issues between the tool and the part. The aim of the paper is to optimize the step of 5-axis HSM tool path calculation with a bi-parameter surface representation of the tool path. We propose an example of integration of the digital process for tool path computation, ensuring the required quality and maximum productivity.<|reference_end|>
arxiv
@article{tournier20095-axis, title={5-axis High Speed Milling Optimisation}, author={Christophe Tournier (LURPA), Sylvain Lavernhe (LURPA), Claire Lartigue (LURPA)}, journal={Revue Internationale d Ingenierie Numerique 2, 1-2 (2006) pages 173-184}, year={2009}, archivePrefix={arXiv}, eprint={0904.1083}, primaryClass={cs.OH} }
tournier20095-axis
arxiv-7018
0904.1084
Usinage de poches en UGV - Aide au choix de strat\'egies
<|reference_start|>Usinage de poches en UGV - Aide au choix de strat\'egies: The paper deals with associating the optimal machining strategy to a given pocket geometry, within the context of High-Speed Machining (HSM) of aeronautical pockets. First, we define different classes of pocket features according to geometrical criteria. Next, we propose a method for associating a set of capable tools with the features. Each capable tool defines a machined zone with a specific geometry. The last part of the paper is thus dedicated to associating the optimal machining strategy with a given geometry within the context of HSM. Results highlight that analyses must be conducted from a dynamical as well as a geometrical viewpoint. In particular, it becomes necessary to integrate, in the tool path calculation, the dynamical specificities associated with the behavior of the machine/NC-unit pair.<|reference_end|>
arxiv
@article{mawussi2009usinage, title={Usinage de poches en UGV - Aide au choix de strat\'egies}, author={Kwamiwi Mawussi (LURPA), Sylvain Lavernhe (LURPA), Claire Lartigue (LURPA)}, journal={Revue Internationale de CFAO et d'informatique graphique 18, 3 (2003) 337-349}, year={2009}, archivePrefix={arXiv}, eprint={0904.1084}, primaryClass={cs.OH} }
mawussi2009usinage
arxiv-7019
0904.1110
On formal verification of arithmetic-based cryptographic primitives
<|reference_start|>On formal verification of arithmetic-based cryptographic primitives: Cryptographic primitives are fundamental for information security: they are used as basic components for cryptographic protocols or public-key cryptosystems. In many cases, their security proofs consist in showing that they are reducible to computationally hard problems. Those reductions can be subtle and tedious, and thus not easily checkable. On top of the proof assistant Coq, we had implemented in previous work a toolbox for writing and checking game-based security proofs of cryptographic primitives. In this paper we describe its extension with number-theoretic capabilities so that it is now possible to write and check arithmetic-based cryptographic primitives in our toolbox. We illustrate our work by machine checking the game-based proofs of unpredictability of the pseudo-random bit generator of Blum, Blum and Shub, and semantic security of the public-key cryptographic scheme of Goldwasser and Micali.<|reference_end|>
arxiv
@article{nowak2009on, title={On formal verification of arithmetic-based cryptographic primitives}, author={David Nowak}, journal={In Information Security and Cryptology - ICISC 2008, 11th International Conference, Seoul, Korea, December 3-5, 2008, Proceedings, volume 5461 of Lecture Notes in Computer Science, pages 368-382, Springer}, year={2009}, doi={10.1007/978-3-642-00730-9_23}, archivePrefix={arXiv}, eprint={0904.1110}, primaryClass={cs.CR cs.LO} }
nowak2009on
arxiv-7020
0904.1113
k-Means has Polynomial Smoothed Complexity
<|reference_start|>k-Means has Polynomial Smoothed Complexity: The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, the k-means method has been studied in the model of smoothed analysis. But even the smoothed analyses so far are unsatisfactory as the bounds are still super-polynomial in the number n of data points. In this paper, we settle the smoothed running time of the k-means method. We show that the smoothed number of iterations is bounded by a polynomial in n and 1/\sigma, where \sigma is the standard deviation of the Gaussian perturbations. This means that if an arbitrary input data set is randomly perturbed, then the k-means method will run in expected polynomial time on that input set.<|reference_end|>
arxiv
@article{arthur2009k-means, title={k-Means has Polynomial Smoothed Complexity}, author={David Arthur and Bodo Manthey and Heiko R\"oglin}, journal={arXiv preprint arXiv:0904.1113}, year={2009}, archivePrefix={arXiv}, eprint={0904.1113}, primaryClass={cs.DS cs.CC cs.CG} }
arthur2009k-means
arxiv-7021
0904.1144
Alternative evaluation of statistical indicators in atoms: the non-relativistic and relativistic cases
<|reference_start|>Alternative evaluation of statistical indicators in atoms: the non-relativistic and relativistic cases: In this work, the calculation of a statistical measure of complexity and the Fisher-Shannon information is performed for all the atoms in the periodic table. Non-relativistic and relativistic cases are considered. We follow the method suggested in [C.P. Panos, N.S. Nikolaidis, K. Ch. Chatzisavvas, C.C. Tsouros, arXiv:0812.3963v1] that uses the fractional occupation probabilities of electrons in atomic orbitals, instead of the continuous electronic wave functions. For the order of shell filling in the relativistic case, we take into account the effect due to the electronic spin-orbit interaction. An increase of both magnitudes, the statistical complexity and the Fisher-Shannon information, with the atomic number $Z$ is observed. The shell structure and the irregular shell filling are well displayed by the Fisher-Shannon information in the relativistic case.<|reference_end|>
arxiv
@article{sanudo2009alternative, title={Alternative evaluation of statistical indicators in atoms: the non-relativistic and relativistic cases}, author={Jaime Sanudo and Ricardo Lopez-Ruiz}, journal={arXiv preprint arXiv:0904.1144}, year={2009}, doi={10.1016/j.physleta.2009.05.030}, archivePrefix={arXiv}, eprint={0904.1144}, primaryClass={nlin.AO cs.IT math.IT physics.atom-ph} }
sanudo2009alternative
arxiv-7022
0904.1149
Chaitin \Omega numbers and halting problems
<|reference_start|>Chaitin \Omega numbers and halting problems: Chaitin [G. J. Chaitin, J. Assoc. Comput. Mach., vol.22, pp.329-340, 1975] introduced the \Omega number as a concrete example of a random real. The real \Omega is defined as the probability that an optimal computer halts, where the optimal computer is a universal decoding algorithm used to define the notion of program-size complexity. Chaitin showed \Omega to be random by discovering the property that the first n bits of the base-two expansion of \Omega solve the halting problem of the optimal computer for all binary inputs of length at most n. In the present paper we investigate this property from various aspects. We consider the relative computational power between the base-two expansion of \Omega and the halting problem by imposing a restriction to finite size on both problems. It is known that the base-two expansion of \Omega and the halting problem are Turing equivalent. We thus consider an elaboration of this Turing equivalence in a certain manner.<|reference_end|>
arxiv
@article{tadaki2009chaitin, title={Chaitin \Omega numbers and halting problems}, author={Kohtaro Tadaki}, journal={In: Ambos-Spies K., L\"owe B., Merkle W. (eds) Mathematical Theory and Computational Practice. CiE 2009. LNCS, vol 5635 (2009) Springer}, year={2009}, doi={10.1007/978-3-642-03073-4_46}, archivePrefix={arXiv}, eprint={0904.1149}, primaryClass={math.LO cs.CC cs.IT math.IT} }
tadaki2009chaitin
arxiv-7023
0904.1150
Upper Bounds on the Capacities of Noncontrollable Finite-State Channels with/without Feedback
<|reference_start|>Upper Bounds on the Capacities of Noncontrollable Finite-State Channels with/without Feedback: Noncontrollable finite-state channels (FSCs) are FSCs in which the channel inputs have no influence on the channel states, i.e., the channel states evolve freely. Since single-letter formulae for the channel capacities are rarely available for general noncontrollable FSCs, computable bounds are usually utilized to numerically bound the capacities. In this paper, we take the delayed channel state as part of the channel input and then define the {\em directed information rate} from the new channel input (including the source and the delayed channel state) sequence to the channel output sequence. With this technique, we derive a series of upper bounds on the capacities of noncontrollable FSCs with/without feedback. These upper bounds can be achieved by conditional Markov sources and computed by solving an average reward per stage stochastic control problem (ARSCP) with a compact state space and a compact action space. By showing that the ARSCP has a uniformly continuous reward function, we transform the original ARSCP into a finite-state and finite-action ARSCP that can be solved by a value iteration method. Under a mild assumption, the value iteration algorithm is convergent and delivers a near-optimal stationary policy and a numerical upper bound.<|reference_end|>
arxiv
@article{huang2009upper, title={Upper Bounds on the Capacities of Noncontrollable Finite-State Channels with/without Feedback}, author={Xiujie Huang, Aleksandar Kavcic and Xiao Ma}, journal={arXiv preprint arXiv:0904.1150}, year={2009}, doi={10.1109/TIT.2012.2201341}, number={CLN: 9-255}, archivePrefix={arXiv}, eprint={0904.1150}, primaryClass={cs.IT cs.SY math.IT math.OC} }
huang2009upper
arxiv-7024
0904.1186
A New Key-Agreement-Protocol
<|reference_start|>A New Key-Agreement-Protocol: A new 4-pass Key-Agreement Protocol is presented. The security of the protocol mainly relies on the existence of a (polynomial-computable) One-Way-Function and the supposed computational hardness of solving a specific system of equations.<|reference_end|>
arxiv
@article{grohmann2009a, title={A New Key-Agreement-Protocol}, author={Bjoern Grohmann}, journal={arXiv preprint arXiv:0904.1186}, year={2009}, archivePrefix={arXiv}, eprint={0904.1186}, primaryClass={cs.CR} }
grohmann2009a
arxiv-7025
0904.1193
Coherence Analysis of Iterative Thresholding Algorithms
<|reference_start|>Coherence Analysis of Iterative Thresholding Algorithms: There is a recent surge of interest in developing algorithms for finding sparse solutions of underdetermined systems of linear equations $y = \Phi x$. In many applications, extremely large problem sizes are envisioned, with at least tens of thousands of equations and hundreds of thousands of unknowns. For such problem sizes, low computational complexity is paramount. The best studied $\ell_1$ minimization algorithm is not fast enough to fulfill this need. Iterative thresholding algorithms have been proposed to address this problem. In this paper we want to analyze two of these algorithms theoretically, and give sufficient conditions under which they recover the sparsest solution.<|reference_end|>
arxiv
@article{maleki2009coherence, title={Coherence Analysis of Iterative Thresholding Algorithms}, author={Arian Maleki}, journal={arXiv preprint arXiv:0904.1193}, year={2009}, archivePrefix={arXiv}, eprint={0904.1193}, primaryClass={cs.IT math.IT} }
maleki2009coherence
arxiv-7026
0904.1227
Learning convex bodies is hard
<|reference_start|>Learning convex bodies is hard: We show that learning a convex body in $\mathbb{R}^d$, given random samples from the body, requires $2^{\Omega(\sqrt{d/\epsilon})}$ samples. By learning a convex body we mean finding a set having at most $\epsilon$ relative symmetric difference with the input body. To prove the lower bound we construct a hard to learn family of convex bodies. Our construction of this family is very simple and based on error correcting codes.<|reference_end|>
arxiv
@article{goyal2009learning, title={Learning convex bodies is hard}, author={Navin Goyal, Luis Rademacher}, journal={arXiv preprint arXiv:0904.1227}, year={2009}, archivePrefix={arXiv}, eprint={0904.1227}, primaryClass={cs.LG cs.CG} }
goyal2009learning
arxiv-7027
0904.1229
Finding an Unknown Acyclic Orientation of a Given Graph
<|reference_start|>Finding an Unknown Acyclic Orientation of a Given Graph: Let c(G) be the smallest number of edges we have to test in order to determine an unknown acyclic orientation of the given graph G in the worst case. For example, if G is the complete graph on n vertices, then c(G) is the smallest number of comparisons needed to sort n numbers. We prove that c(G)\le (1/4+o(1))n^2 for any graph G on n vertices, answering in the affirmative a question of Aigner, Triesch, and Tuza [Discrete Mathematics, 144 (1995) 3-10]. Also, we show that, for every e>0, it is NP-hard to approximate the parameter c(G) within a multiplicative factor 74/73-e.<|reference_end|>
arxiv
@article{pikhurko2009finding, title={Finding an Unknown Acyclic Orientation of a Given Graph}, author={Oleg Pikhurko}, journal={arXiv preprint arXiv:0904.1229}, year={2009}, archivePrefix={arXiv}, eprint={0904.1229}, primaryClass={math.CO cs.IT math.IT} }
pikhurko2009finding
arxiv-7028
0904.1234
Mapping the evolution of scientific fields
<|reference_start|>Mapping the evolution of scientific fields: Despite the apparent cross-disciplinary interactions among scientific fields, a formal description of their evolution is lacking. Here we describe a novel approach to study the dynamics and evolution of scientific fields using a network-based analysis. We build an idea network consisting of American Physical Society Physics and Astronomy Classification Scheme (PACS) numbers as nodes representing scientific concepts. Two PACS numbers are linked if there exist publications that reference them simultaneously. We locate scientific fields using a community finding algorithm, and describe the time evolution of these fields over the course of 1985-2006. The communities we identify map to known scientific fields, and their age depends on their size and activity. We expect our approach to quantifying the evolution of ideas to be relevant for making predictions about the future of science and thus help to guide its development.<|reference_end|>
arxiv
@article{herrera2009mapping, title={Mapping the evolution of scientific fields}, author={Mark Herrera, David C. Roberts, Natali Gulbahce}, journal={PLoS ONE 5(5): e10355. 2010}, year={2009}, doi={10.1371/journal.pone.0010355}, archivePrefix={arXiv}, eprint={0904.1234}, primaryClass={physics.soc-ph cs.DL cs.IR} }
herrera2009mapping
arxiv-7029
0904.1242
The Distribution and Deposition Algorithm for Multiple Sequences Sets
<|reference_start|>The Distribution and Deposition Algorithm for Multiple Sequences Sets: A sequences set is a mathematical model used in many applications. As the number of sequences becomes larger, the single sequence set model is not appropriate for the rapidly increasing problem sizes. For example, more and more text processing applications separate a single big text file into multiple files before processing. For these applications, the underlying mathematical model is multiple sequences sets (MSS). Though there is increasing use of MSS, there is little research on how to process MSS efficiently. To process multiple sequences sets, sequences are first distributed to different sets, and then the sequences in each set are processed. Deriving an effective algorithm for MSS processing is both interesting and challenging. In this paper, we define cost functions and a performance ratio for analyzing the quality of synthesis sequences. Based on these, the problem of Process of Multiple Sequences Sets (PMSS) is formulated. We first propose two greedy algorithms for the PMSS problem, which are based on generalizations of algorithms for a single sequences set. Then, based on an analysis of the characteristics of multiple sequences sets, we propose the Distribution and Deposition (DDA) algorithm and the DDA* algorithm for the PMSS problem. In the DDA algorithm, the sequences are first distributed to multiple sets according to their alphabet contents; then the sequences in each set are deposited by the deposition algorithm. The DDA* algorithm differs from the DDA algorithm in that the DDA* algorithm distributes sequences by clustering based on sequence profiles. Experiments show that DDA and DDA* always output results with smaller costs than other algorithms, and DDA* outperforms DDA in most instances. The DDA and DDA* algorithms are also efficient in both time and space.<|reference_end|>
arxiv
@article{ning2009the, title={The Distribution and Deposition Algorithm for Multiple Sequences Sets}, author={Kang Ning and Hon Wai Leong}, journal={arXiv preprint arXiv:0904.1242}, year={2009}, archivePrefix={arXiv}, eprint={0904.1242}, primaryClass={cs.DS cs.DC cs.DM} }
ning2009the
arxiv-7030
0904.1243
Maximizing the number of accepted flows in TDMA-based wireless ad hoc networks is APX-complete
<|reference_start|>Maximizing the number of accepted flows in TDMA-based wireless ad hoc networks is APX-complete: Full exploitation of the bandwidth resources of wireless networks is challenging because of the sharing of the radio medium among neighboring nodes. Practical algorithms and distributed schemes try to optimise the use of the network radio resources. In this technical report we present the proof that maximising the network capacity is an APX-complete problem (not approximable within 1/(1 - 2^(-k)) - eps for eps > 0).<|reference_end|>
arxiv
@article{bruno2009maximizing, title={Maximizing the number of accepted flows in TDMA-based wireless ad hoc networks is APX-complete}, author={Raffaele Bruno, Vania Conan, Stephane Rousseau}, journal={arXiv preprint arXiv:0904.1243}, year={2009}, archivePrefix={arXiv}, eprint={0904.1243}, primaryClass={cs.NI} }
bruno2009maximizing
arxiv-7031
0904.1258
An Investigation Report on Auction Mechanism Design
<|reference_start|>An Investigation Report on Auction Mechanism Design: Auctions are markets with strict regulations governing the information available to traders in the market and the possible actions they can take. Since well designed auctions achieve desirable economic outcomes, they have been widely used in solving real-world optimization problems, and in structuring stock or futures exchanges. Auctions also provide a very valuable testing-ground for economic theory, and they play an important role in computer-based control systems. Auction mechanism design aims to manipulate the rules of an auction in order to achieve specific goals. Economists traditionally use mathematical methods, mainly game theory, to analyze auctions and design new auction forms. However, due to the high complexity of auctions, the mathematical models are typically simplified to obtain results, and this makes it difficult to apply results derived from such models to market environments in the real world. As a result, researchers are turning to empirical approaches. This report aims to survey the theoretical and empirical approaches to designing auction mechanisms and trading strategies with more weight on empirical ones, and build the foundation for further research in the field.<|reference_end|>
arxiv
@article{niu2009an, title={An Investigation Report on Auction Mechanism Design}, author={Jinzhong Niu, Simon Parsons}, journal={arXiv preprint arXiv:0904.1258}, year={2009}, archivePrefix={arXiv}, eprint={0904.1258}, primaryClass={cs.AI cs.MA} }
niu2009an
arxiv-7032
0904.1281
Asymptotically Optimal Joint Source-Channel Coding with Minimal Delay
<|reference_start|>Asymptotically Optimal Joint Source-Channel Coding with Minimal Delay: We present and analyze a joint source-channel coding strategy for the transmission of a Gaussian source across a Gaussian channel in n channel uses per source symbol. Among all such strategies, our scheme has the following properties: i) the resulting mean-squared error scales optimally with the signal-to-noise ratio, and ii) the scheme is easy to implement and the incurred delay is minimal, in the sense that a single source symbol is encoded at a time.<|reference_end|>
arxiv
@article{kleiner2009asymptotically, title={Asymptotically Optimal Joint Source-Channel Coding with Minimal Delay}, author={Marius Kleiner, Bixio Rimoldi}, journal={arXiv preprint arXiv:0904.1281}, year={2009}, doi={10.1109/GLOCOM.2009.5425427}, archivePrefix={arXiv}, eprint={0904.1281}, primaryClass={cs.IT math.IT} }
kleiner2009asymptotically
arxiv-7033
0904.1284
Theoretical framework for constructing matching algorithms in biometric authentication systems
<|reference_start|>Theoretical framework for constructing matching algorithms in biometric authentication systems: In this paper, we propose a theoretical framework to construct matching algorithms for any biometric authentication system. Conventional matching algorithms are not necessarily secure against strong intentional impersonation attacks such as wolf attacks. The wolf attack is an attempt to impersonate a genuine user by presenting a "wolf" to a biometric authentication system without the knowledge of a genuine user's biometric sample. A wolf is a sample which can be accepted as a match with multiple templates. The wolf attack probability (WAP), proposed by Une, Otsuka, and Imai as a measure for evaluating the security of biometric authentication systems, is the maximum success probability of the wolf attack. We present a principle for the construction of matching algorithms secure against the wolf attack for any biometric authentication system. The ideal matching algorithm determines a threshold for each input value depending on the entropy of the probability distribution of the (Hamming) distances. We then show that if the information about the probability distribution for each input value is perfectly given, our matching algorithm is secure against the wolf attack. Our generalized matching algorithm gives a theoretical framework to construct secure matching algorithms. How low a WAP is achievable depends on how accurately the entropy is estimated; there is thus a trade-off between efficiency and the achievable WAP. Almost every conventional matching algorithm employs a fixed threshold and hence can be regarded as an efficient but insecure instance of our theoretical framework. Daugman's IrisCode recognition algorithm can also be regarded as a non-optimal instance of our framework.<|reference_end|>
arxiv
@article{inuma2009theoretical, title={Theoretical framework for constructing matching algorithms in biometric authentication systems}, author={Manabu Inuma, Akira Otsuka, Hideki Imai}, journal={arXiv preprint arXiv:0904.1284}, year={2009}, archivePrefix={arXiv}, eprint={0904.1284}, primaryClass={cs.CR cs.DS} }
inuma2009theoretical
arxiv-7034
0904.1289
Language Diversity across the Consonant Inventories: A Study in the Framework of Complex Networks
<|reference_start|>Language Diversity across the Consonant Inventories: A Study in the Framework of Complex Networks: In this paper, we attempt to explain the emergence of the linguistic diversity that exists across the consonant inventories of some of the major language families of the world through a complex network based growth model. There is only a single parameter for this model that is meant to introduce a small amount of randomness in the otherwise preferential attachment based growth process. The experiments with this model parameter indicate that the choice of consonants among the languages within a family is far more preferential than it is across the families. The implications of this result are twofold -- (a) there is an innate preference of the speakers towards acquiring certain linguistic structures over others and (b) shared ancestry propels the stronger preferential connection between the languages within a family than across them. Furthermore, our observations indicate that this parameter might bear a correlation with the period of existence of the language families under investigation.<|reference_end|>
arxiv
@article{choudhury2009language, title={Language Diversity across the Consonant Inventories: A Study in the Framework of Complex Networks}, author={Monojit Choudhury, Animesh Mukherjee, Anupam Basu, Niloy Ganguly, Ashish Garg, Vaibhav Jalan}, journal={arXiv preprint arXiv:0904.1289}, year={2009}, archivePrefix={arXiv}, eprint={0904.1289}, primaryClass={cs.CL physics.comp-ph physics.soc-ph} }
choudhury2009language
arxiv-7035
0904.1296
On the perfect matching index of bridgeless cubic graphs
<|reference_start|>On the perfect matching index of bridgeless cubic graphs: If $G$ is a bridgeless cubic graph, Fulkerson conjectured that we can find 6 perfect matchings $M_1,...,M_6$ of $G$ with the property that every edge of $G$ is contained in exactly two of them, and Berge conjectured that its edge set can be covered by 5 perfect matchings. We define $\tau(G)$ as the least number of perfect matchings needed to cover the edge set of a bridgeless cubic graph, and we study this parameter. The set of graphs with perfect matching index 4 seems interesting and we give some information on this class.<|reference_end|>
arxiv
@article{fouquet2009on, title={On the perfect matching index of bridgeless cubic graphs}, author={Jean-Luc Fouquet (LIFO), Jean-Marie Vanherpe (LIFO)}, journal={arXiv preprint arXiv:0904.1296}, year={2009}, archivePrefix={arXiv}, eprint={0904.1296}, primaryClass={cs.DM} }
fouquet2009on
arxiv-7036
0904.1299
On the Communication of Scientific Results: The Full-Metadata Format
<|reference_start|>On the Communication of Scientific Results: The Full-Metadata Format: In this paper, we introduce a scientific format for text-based data files, which facilitates storing and communicating tabular data sets. The so-called Full-Metadata Format builds on the widely used INI-standard and is based on four principles: readable self-documentation, flexible structure, fail-safe compatibility, and searchability. As a consequence, all metadata required to interpret the tabular data are stored in the same file, allowing for the automated generation of publication-ready tables and graphs and the semantic searchability of data file collections. The Full-Metadata Format is introduced on the basis of three comprehensive examples. The complete format and syntax is given in the appendix.<|reference_end|>
arxiv
@article{riede2009on, title={On the Communication of Scientific Results: The Full-Metadata Format}, author={Moritz Riede, Rico Schueppel, Kristian O. Sylvester-Hvid, Martin Kuehne, Michael C. Roettger, Klaus Zimmermann and Andreas W. Liehr}, journal={Comput.Phys.Commun.181:651-662,2010}, year={2009}, doi={10.1016/j.cpc.2009.11.014}, number={SI20090302a}, archivePrefix={arXiv}, eprint={0904.1299}, primaryClass={cs.DL cs.IR physics.comp-ph physics.ins-det} }
riede2009on
arxiv-7037
0904.1302
On the Parameterised Intractability of Monadic Second-Order Logic
<|reference_start|>On the Parameterised Intractability of Monadic Second-Order Logic: One of Courcelle's celebrated results states that if C is a class of graphs of bounded tree-width, then model-checking for monadic second order logic is fixed-parameter tractable on C by linear time parameterised algorithms. An immediate question is whether this is best possible or whether the result can be extended to classes of unbounded tree-width. In this paper we show that in terms of tree-width, the theorem can not be extended much further. More specifically, we show that if C is a class of graphs which is closed under colourings and satisfies certain constructibility conditions such that the tree-width of C is not bounded by log^{16}(n) then MSO_2-model checking is not fixed-parameter tractable unless the satisfiability problem SAT for propositional logic can be solved in sub-exponential time. If the tree-width of C is not poly-logarithmically bounded, then MSO_2-model checking is not fixed-parameter tractable unless all problems in the polynomial-time hierarchy, and hence in particular all problems in NP, can be solved in sub-exponential time.<|reference_end|>
arxiv
@article{kreutzer2009on, title={On the Parameterised Intractability of Monadic Second-Order Logic}, author={Stephan Kreutzer}, journal={arXiv preprint arXiv:0904.1302}, year={2009}, archivePrefix={arXiv}, eprint={0904.1302}, primaryClass={cs.LO cs.CC} }
kreutzer2009on
arxiv-7038
0904.1313
A Class of Novel STAP Algorithms Using Sparse Recovery Technique
<|reference_start|>A Class of Novel STAP Algorithms Using Sparse Recovery Technique: A class of novel STAP algorithms based on the sparse recovery technique is presented. The intrinsic sparsity of the distribution of clutter and target energy on the spatial-frequency plane is exploited from the viewpoint of compressed sensing. The original sample data and the distribution of target and clutter energy are connected by an ill-posed linear algebraic equation, and the popular $L_1$ optimization method can be utilized to search for its solution with sparse characteristics. Several new filtering algorithms acting on this solution are designed to clean the clutter component on the spatial-frequency plane effectively for detecting invisible targets buried in clutter. The method above is called CS-STAP in general. CS-STAP shows its advantage compared with conventional STAP techniques, such as SMI, in two ways: Firstly, the resolution of CS-STAP in estimating the distribution of clutter and target energy is ultra-high, such that clutter energy can be annihilated almost completely by a carefully tuned filter. The output SCR of CS-STAP algorithms is far superior to the requirement of detection; Secondly, a much smaller training sample support compared with the SMI method is required for the CS-STAP method. Even with only one snapshot (from the target range cell), the CS-STAP method is able to reveal the existence of a target clearly. The CS-STAP method displays great potential to be used in heterogeneous situations. Experimental results on a dataset from the Mountaintop program provide the evidence for our assertion on CS-STAP.<|reference_end|>
arxiv
@article{zhang2009a, title={A Class of Novel STAP Algorithms Using Sparse Recovery Technique}, author={Hao Zhang, Gang Li, Huadong Meng}, journal={arXiv preprint arXiv:0904.1313}, year={2009}, archivePrefix={arXiv}, eprint={0904.1313}, primaryClass={cs.IT math.IT} }
zhang2009a
arxiv-7039
0904.1331
Primitive Polynomials, Singer Cycles, and Word-Oriented Linear Feedback Shift Registers
<|reference_start|>Primitive Polynomials, Singer Cycles, and Word-Oriented Linear Feedback Shift Registers: Using the structure of Singer cycles in general linear groups, we prove that a conjecture of Zeng, Han and He (2007) holds in the affirmative in a special case, and outline a plausible approach to prove it in the general case. This conjecture is about the number of primitive $\sigma$-LFSRs of a given order over a finite field, and it generalizes a known formula for the number of primitive LFSRs, which, in turn, is the number of primitive polynomials of a given degree over a finite field. Moreover, this conjecture is intimately related to an open question of Niederreiter (1995) on the enumeration of splitting subspaces of a given dimension.<|reference_end|>
arxiv
@article{ghorpade2009primitive, title={Primitive Polynomials, Singer Cycles, and Word-Oriented Linear Feedback Shift Registers}, author={Sudhir R. Ghorpade, Sartaj Ul Hasan, and Meena Kumari}, journal={Designs, Codes and Cryptography, Vol. 58, No. 2 (2011), pp. 123-134}, year={2009}, doi={10.1007/s10623-010-9387-7}, archivePrefix={arXiv}, eprint={0904.1331}, primaryClass={math.CO cs.IT math.IT} }
ghorpade2009primitive
arxiv-7040
0904.1366
A Unified Approach to Ranking in Probabilistic Databases
<|reference_start|>A Unified Approach to Ranking in Probabilistic Databases: The dramatic growth in the number of application domains that naturally generate probabilistic, uncertain data has resulted in a need for efficiently supporting complex querying and decision-making over such data. In this paper, we present a unified approach to ranking and top-k query processing in probabilistic databases by viewing it as a multi-criteria optimization problem, and by deriving a set of features that capture the key properties of a probabilistic dataset that dictate the ranked result. We contend that a single, specific ranking function may not suffice for probabilistic databases, and we instead propose two parameterized ranking functions, called PRF-w and PRF-e, that generalize or can approximate many of the previously proposed ranking functions. We present novel generating functions-based algorithms for efficiently ranking large datasets according to these ranking functions, even if the datasets exhibit complex correlations modeled using probabilistic and/xor trees or Markov networks. We further propose that the parameters of the ranking function be learned from user preferences, and we develop an approach to learn those parameters. Finally, we present a comprehensive experimental study that illustrates the effectiveness of our parameterized ranking functions, especially PRF-e, at approximating other ranking functions and the scalability of our proposed algorithms for exact or approximate ranking.<|reference_end|>
arxiv
@article{li2009a, title={A Unified Approach to Ranking in Probabilistic Databases}, author={Jian Li, Barna Saha, Amol Deshpande}, journal={arXiv preprint arXiv:0904.1366}, year={2009}, archivePrefix={arXiv}, eprint={0904.1366}, primaryClass={cs.DB cs.DS} }
li2009a
arxiv-7041
0904.1369
Cooperative Transmission for Wireless Relay Networks Using Limited Feedback
<|reference_start|>Cooperative Transmission for Wireless Relay Networks Using Limited Feedback: To achieve the available performance gains in half-duplex wireless relay networks, several cooperative schemes have been earlier proposed using either distributed space-time coding or distributed beamforming for the transmitter without and with channel state information (CSI), respectively. However, these schemes typically have rather high implementation and/or decoding complexities, especially when the number of relays is high. In this paper, we propose a simple low-rate feedback-based approach to achieve maximum diversity with a low decoding and implementation complexity. To further improve the performance of the proposed scheme, the knowledge of the second-order channel statistics is exploited to design long-term power loading through maximizing the receiver signal-to-noise ratio (SNR) with appropriate constraints. This maximization problem is approximated by a convex feasibility problem whose solution is shown to be close to the optimal one in terms of the error probability. Subsequently, to provide robustness against feedback errors and further decrease the feedback rate, an extended version of the distributed Alamouti code is proposed. It is also shown that our scheme can be generalized to the differential transmission case, where it can be applied to wireless relay networks with no CSI available at the receiver.<|reference_end|>
arxiv
@article{paredes2009cooperative, title={Cooperative Transmission for Wireless Relay Networks Using Limited Feedback}, author={Javier M. Paredes, Babak H. Khalaj, and Alex B. Gershman}, journal={arXiv preprint arXiv:0904.1369}, year={2009}, doi={10.1109/TSP.2010.2046079}, archivePrefix={arXiv}, eprint={0904.1369}, primaryClass={cs.IT math.IT} }
paredes2009cooperative
arxiv-7042
0904.1409
MIMO Downlink Scheduling with Non-Perfect Channel State Knowledge
<|reference_start|>MIMO Downlink Scheduling with Non-Perfect Channel State Knowledge: Downlink scheduling schemes are well-known and widely investigated under the assumption that the channel state is perfectly known to the scheduler. In the multiuser MIMO (broadcast) case, downlink scheduling in the presence of non-perfect channel state information (CSI) is only scantly treated. In this paper we provide a general framework that addresses the problem systematically. Also, we illuminate the key role played by the channel state prediction error: our scheme treats in a fundamentally different way users with small channel prediction error ("predictable" users) and users with large channel prediction error ("non-predictable" users), and can be interpreted as a near-optimal opportunistic time-sharing strategy between MIMO downlink beamforming to predictable users and space-time coding to nonpredictable users. Our results, based on a realistic MIMO channel model used in 3GPP standardization, show that the proposed algorithms can significantly outperform a conventional "mismatched" scheduling scheme that treats the available CSI as if it was perfect.<|reference_end|>
arxiv
@article{shirani-mehr2009mimo, title={MIMO Downlink Scheduling with Non-Perfect Channel State Knowledge}, author={Hooman Shirani-Mehr, Giuseppe Caire and Michael J. Neely}, journal={arXiv preprint arXiv:0904.1409}, year={2009}, archivePrefix={arXiv}, eprint={0904.1409}, primaryClass={cs.IT math.IT} }
shirani-mehr2009mimo
arxiv-7043
0904.1435
Reducibility Among Fractional Stability Problems
<|reference_start|>Reducibility Among Fractional Stability Problems: In this paper, we resolve the computational complexity of a number of outstanding open problems with practical applications. Here is the list of problems we show to be PPAD-complete, along with the domains of practical significance: Fractional Stable Paths Problem (FSPP) [21] - Internet routing; Core of Balanced Games [41] - Economics and Game theory; Scarf's Lemma [41] - Combinatorics; Hypergraph Matching [1]- Social Choice and Preference Systems; Fractional Bounded Budget Connection Games (FBBC) [30] - Social networks; and Strong Fractional Kernel [2]- Graph Theory. In fact, we show that no fully polynomial-time approximation schemes exist (unless PPAD is in FP). This paper is entirely a series of reductions that build in nontrivial ways on the framework established in previous work. In the course of deriving these reductions, we created two new concepts - preference games and personalized equilibria. The entire set of new reductions can be presented as a lattice with the above problems sandwiched between preference games (at the "easy" end) and personalized equilibria (at the "hard" end). Our completeness results extend to natural approximate versions of most of these problems. On a technical note, we wish to highlight our novel "continuous-to-discrete" reduction from exact personalized equilibria to approximate personalized equilibria using a linear program augmented with an exponential number of "min" constraints of a specific form. In addition to enhancing our repertoire of PPAD-complete problems, we expect the concepts and techniques in this paper to find future use in algorithmic game theory.<|reference_end|>
arxiv
@article{kintali2009reducibility, title={Reducibility Among Fractional Stability Problems}, author={Shiva Kintali, Laura J. Poplawski, Rajmohan Rajaraman, Ravi Sundaram, Shang-Hua Teng}, journal={arXiv preprint arXiv:0904.1435}, year={2009}, archivePrefix={arXiv}, eprint={0904.1435}, primaryClass={cs.CC cs.GT} }
kintali2009reducibility
arxiv-7044
0904.1439
Towards an explanatory and computational theory of scientific discovery
<|reference_start|>Towards an explanatory and computational theory of scientific discovery: We propose an explanatory and computational theory of transformative discoveries in science. The theory is derived from a recurring theme found in a diverse range of scientific change, scientific discovery, and knowledge diffusion theories in philosophy of science, sociology of science, social network analysis, and information science. The theory extends the concept of structural holes from social networks to a broader range of associative networks found in science studies, especially including networks that reflect underlying intellectual structures such as co-citation networks and collaboration networks. The central premise is that connecting otherwise disparate patches of knowledge is a valuable mechanism of creative thinking in general and transformative scientific discovery in particular.<|reference_end|>
arxiv
@article{chen2009towards, title={Towards an explanatory and computational theory of scientific discovery}, author={Chaomei Chen (1 and 2), Yue Chen (2), Mark Horowitz (1), Haiyan Hou (2), Zeyuan Liu (2) and Don Pellegrino (1) ((1) Drexel University, (2) Dalian University of Technology)}, journal={Journal of Informetirics, 3(2009), 191-209}, year={2009}, archivePrefix={arXiv}, eprint={0904.1439}, primaryClass={cs.GL cs.CY} }
chen2009towards
arxiv-7045
0904.1444
Spatial and Temporal Correlation of the Interference in ALOHA Ad Hoc Networks
<|reference_start|>Spatial and Temporal Correlation of the Interference in ALOHA Ad Hoc Networks: Interference is a main limiting factor of the performance of a wireless ad hoc network. The temporal and the spatial correlation of the interference make the outages temporally correlated (important for retransmissions) and spatially correlated (important for routing). In this letter we quantify the temporal and spatial correlation of the interference in a wireless ad hoc network whose nodes are distributed as a Poisson point process on the plane when ALOHA is used as the multiple-access scheme.<|reference_end|>
arxiv
@article{ganti2009spatial, title={Spatial and Temporal Correlation of the Interference in ALOHA Ad Hoc Networks}, author={Radha Krishna Ganti and Martin Haenggi}, journal={arXiv preprint arXiv:0904.1444}, year={2009}, doi={10.1109/LCOMM.2009.090837}, archivePrefix={arXiv}, eprint={0904.1444}, primaryClass={cs.IT cs.NI math.IT math.PR} }
ganti2009spatial
arxiv-7046
0904.1446
Concavity of entropy under thinning
<|reference_start|>Concavity of entropy under thinning: Building on the recent work of Johnson (2007) and Yu (2008), we prove that entropy is a concave function with respect to the thinning operation T_a. That is, if X and Y are independent random variables on Z_+ with ultra-log-concave probability mass functions, then H(T_a X+T_{1-a} Y)>= a H(X)+(1-a)H(Y), 0 <= a <= 1, where H denotes the discrete entropy. This is a discrete analogue of the inequality (h denotes the differential entropy) h(sqrt(a) X + sqrt{1-a} Y)>= a h(X)+(1-a) h(Y), 0 <= a <= 1, which holds for continuous X and Y with finite variances and is equivalent to Shannon's entropy power inequality. As a consequence we establish a special case of a conjecture of Shepp and Olkin (1981).<|reference_end|>
arxiv
@article{yu2009concavity, title={Concavity of entropy under thinning}, author={Yaming Yu and Oliver Johnson}, journal={IEEE International Symposium on Information Theory, June 28 2009-July 3, 2009, pp. 144 -- 148}, year={2009}, doi={10.1109/ISIT.2009.5205880}, archivePrefix={arXiv}, eprint={0904.1446}, primaryClass={cs.IT math.IT} }
yu2009concavity
arxiv-7047
0904.1488
Computing Stuttering Simulations
<|reference_start|>Computing Stuttering Simulations: Stuttering bisimulation is a well-known behavioral equivalence that preserves CTL-X, namely CTL without the next-time operator X. Correspondingly, the stuttering simulation preorder induces a coarser behavioral equivalence that preserves the existential fragment ECTL-{X,G}, namely ECTL without the next-time X and globally G operators. While stuttering bisimulation equivalence can be computed by the well-known algorithm of Groote and Vaandrager [1990], to the best of our knowledge, no algorithm for computing the stuttering simulation preorder and equivalence is available. This paper presents such an algorithm for finite state systems.<|reference_end|>
arxiv
@article{ranzato2009computing, title={Computing Stuttering Simulations}, author={Francesco Ranzato and Francesco Tapparo}, journal={arXiv preprint arXiv:0904.1488}, year={2009}, archivePrefix={arXiv}, eprint={0904.1488}, primaryClass={cs.LO} }
ranzato2009computing
arxiv-7048
0904.1529
On the word problem for SP-categories, and the properties of two-way communication
<|reference_start|>On the word problem for SP-categories, and the properties of two-way communication: The word problem for categories with free products and coproducts (sums), SP-categories, is directly related to the problem of determining the equivalence of certain processes. Indeed, the maps in these categories may be directly interpreted as processes which communicate by two-way channels. The maps of an SP-category may also be viewed as a proof theory for a simple logic with a game-theoretic interpretation. The cut-elimination procedure for this logic determines equality only up to certain permuting conversions. As the equality classes under these permuting conversions are finite, it is easy to see that equality between cut-free terms (even in the presence of the additive units) is decidable. Unfortunately, this does not yield a tractable decision algorithm as these equivalence classes can contain exponentially many terms. However, the rather special properties of these free categories -- and, thus, of two-way communication -- allow one to devise a tractable algorithm for equality. We show that, restricted to cut-free terms s,t : X --> A, the decision procedure runs in time polynomial in |X||A|, the product of the sizes of the domain and codomain type.<|reference_end|>
arxiv
@article{santocanale2009on, title={On the word problem for SP-categories, and the properties of two-way communication}, author={Luigi Santocanale (LIF), Robin Cockett}, journal={arXiv preprint arXiv:0904.1529}, year={2009}, archivePrefix={arXiv}, eprint={0904.1529}, primaryClass={cs.LO math.CT math.LO} }
santocanale2009on
arxiv-7049
0904.1534
Estimating nonlinearities in two-phase flow in porous media
<|reference_start|>Estimating nonlinearities in two-phase flow in porous media: In order to numerically analyze inverse problems, several techniques based on linear and nonlinear stability analysis are presented. These techniques are illustrated on the problem of estimating mobilities and capillary pressure in one-dimensional two-phase displacements in porous media that are performed in laboratories. This is an example of the problem of estimating nonlinear coefficients in a system of nonlinear partial differential equations.<|reference_end|>
arxiv
@article{zhang2009estimating, title={Estimating nonlinearities in two-phase flow in porous media}, author={Jianfeng Zhang, Guy Chavent (INRIA Rocquencourt, CEREMADE), Jérôme Jaffré (INRIA Rocquencourt)}, journal={arXiv preprint arXiv:0904.1534}, year={2009}, archivePrefix={arXiv}, eprint={0904.1534}, primaryClass={cs.NA math.AP physics.class-ph} }
zhang2009estimating
arxiv-7050
0904.1538
Shannon-Kotel'nikov Mappings for Analog Point-to-Point Communications
<|reference_start|>Shannon-Kotel'nikov Mappings for Analog Point-to-Point Communications: In this paper an approach to joint source-channel coding (JSCC) named Shannon-Kotel'nikov mappings (S-K mappings) is presented. S-K mappings are continuous, or piecewise continuous, direct source-to-channel mappings operating directly on amplitude-continuous, discrete-time signals. Such mappings include several existing JSCC schemes as special cases. Many existing approaches to analog or hybrid discrete-analog JSCC provide both excellent performance and robustness to variations in noise level, at low delay and relatively low complexity. However, a theory explaining their performance and behaviour on a general basis, as well as guidelines on how to construct close to optimal mappings in general, does not currently exist. Therefore, such mappings are often found based on educated guesses inspired by configurations that are known in advance to produce good solutions, combinations of already existing mappings, numerical optimization or machine learning methods. The objective of this paper is to introduce a theoretical framework for analysis of analog or hybrid discrete-analog S-K mappings. This framework will enable calculation of distortion when applying such schemes on point-to-point links, reveal more about their fundamental nature, and provide guidelines on how they should be constructed in order to perform well at both low and arbitrary complexity and delay. Such guidelines will likely help constrain solutions to numerical approaches and help explain why machine learning approaches find the solutions they do. This task is difficult and we do not provide a complete framework at this stage: We focus on high SNR and memoryless sources with an arbitrary continuous unimodal density function and memoryless Gaussian channels. We also provide examples of mappings based on surfaces which are chosen based on the provided theory.<|reference_end|>
arxiv
@article{floor2009shannon-kotel'nikov, title={Shannon-Kotel'nikov Mappings for Analog Point-to-Point Communications}, author={Pål Anders Floor and Tor A. Ramstad}, journal={arXiv preprint arXiv:0904.1538}, year={2009}, archivePrefix={arXiv}, eprint={0904.1538}, primaryClass={cs.IT math.IT} }
floor2009shannon-kotel'nikov
arxiv-7051
0904.1579
Online prediction of ovarian cancer
<|reference_start|>Online prediction of ovarian cancer: In this paper we apply computer learning methods to diagnosing ovarian cancer using the level of the standard biomarker CA125 in conjunction with information provided by mass-spectrometry. We are working with a new data set collected over a period of 7 years. Using the level of CA125 and mass-spectrometry peaks, our algorithm gives probability predictions for the disease. To estimate classification accuracy we convert probability predictions into strict predictions. Our algorithm makes fewer errors than almost any linear combination of the CA125 level and one peak's intensity (taken on the log scale). To check the power of our algorithm we use it to test the hypothesis that CA125 and the peaks do not contain useful information for the prediction of the disease at a particular time before the diagnosis. Our algorithm produces $p$-values that are better than those produced by the algorithm that has been previously applied to this data set. Our conclusion is that the proposed algorithm is more reliable for prediction on new data.<|reference_end|>
arxiv
@article{zhdanov2009online, title={Online prediction of ovarian cancer}, author={Fedor Zhdanov, Vladimir Vovk, Brian Burford, Dmitry Devetyarov, Ilia Nouretdinov and Alex Gammerman}, journal={arXiv preprint arXiv:0904.1579}, year={2009}, archivePrefix={arXiv}, eprint={0904.1579}, primaryClass={cs.AI cs.LG} }
zhdanov2009online
arxiv-7052
0904.1613
On the closed-form solution of the rotation matrix arising in computer vision problems
<|reference_start|>On the closed-form solution of the rotation matrix arising in computer vision problems: We show the closed-form solution to the maximization of trace(A'R), where A is given and R is an unknown rotation matrix. This problem occurs in many computer vision tasks involving optimal rotation matrix estimation. The solution has been continuously reinvented in different fields as part of specific problems. We summarize the historical evolution of the problem and present the general proof of the solution. We contribute to the proof by considering the degenerate cases of A and discuss the uniqueness of R.<|reference_end|>
arxiv
@article{myronenko2009on, title={On the closed-form solution of the rotation matrix arising in computer vision problems}, author={Andriy Myronenko, Xubo Song}, journal={arXiv preprint arXiv:0904.1613}, year={2009}, archivePrefix={arXiv}, eprint={0904.1613}, primaryClass={cs.CV} }
myronenko2009on
arxiv-7053
0904.1616
Mathematical and Statistical Opportunities in Cyber Security
<|reference_start|>Mathematical and Statistical Opportunities in Cyber Security: The role of mathematics in a complex system such as the Internet has yet to be deeply explored. In this paper, we summarize some of the important and pressing problems in cyber security from the viewpoint of open science environments. We start by posing the question "What fundamental problems exist within cyber security research that can be helped by advanced mathematics and statistics?" Our first and most important assumption is that access to real-world data is necessary to understand large and complex systems like the Internet. Our second assumption is that many proposed cyber security solutions could critically damage both the openness and the productivity of scientific research. After examining a range of cyber security problems, we come to the conclusion that the field of cyber security poses a rich set of new and exciting research opportunities for the mathematical and statistical sciences.<|reference_end|>
arxiv
@article{meza2009mathematical, title={Mathematical and Statistical Opportunities in Cyber Security}, author={Juan Meza, Scott Campbell, and David Bailey}, journal={arXiv preprint arXiv:0904.1616}, year={2009}, number={LBNL-1667E}, archivePrefix={arXiv}, eprint={0904.1616}, primaryClass={cs.CR} }
meza2009mathematical
arxiv-7054
0904.1629
Fuzzy inference based mentality estimation for eye robot agent
<|reference_start|>Fuzzy inference based mentality estimation for eye robot agent: Household robots need to communicate with human beings in a friendly fashion. To achieve better understanding of displayed information, an importance and a certainty of the information should be communicated together with the main information. The proposed intent expression system aims to convey this additional information using an eye robot. The eye motions are represented as states in a pleasure-arousal space model. Change of the model state is calculated by fuzzy inference according to the importance and certainty of the displayed information. This change influences the arousal-sleep coordinate in the space which corresponds to activeness in communication. The eye robot provides a basic interface for the mascot robot system which is an easy-to-understand information terminal for home environments in a humatronics society.<|reference_end|>
arxiv
@article{yamazaki2009fuzzy, title={Fuzzy inference based mentality estimation for eye robot agent}, author={Yoichi Yamazaki, Fangyan Dong, Yuta Masuda, Yukiko Uehara, Petar Kormushev, Hai An Vu, Phuc Quang Le, Kaoru Hirota}, journal={Proceedings of 23rd Fuzzy System Symposium (FSS 2007), pp. 387-388, 2007}, year={2009}, archivePrefix={arXiv}, eprint={0904.1629}, primaryClass={cs.RO cs.AI cs.HC} }
yamazaki2009fuzzy
arxiv-7055
0904.1630
Self-Assembly of a Statistically Self-Similar Fractal
<|reference_start|>Self-Assembly of a Statistically Self-Similar Fractal: We demonstrate existence of a tile assembly system that self-assembles the statistically self-similar Sierpinski Triangle in the Winfree-Rothemund Tile Assembly Model. This appears to be the first paper that considers self-assembly of a random fractal, instead of a deterministic fractal or a finite, bounded shape. Our technical contributions include a way to remember, and use, unboundedly-long prefixes of an infinite coding sequence at each stage of fractal construction; a tile assembly mechanism for nested recursion; and a definition of "almost-everywhere local determinism," to describe a tileset whose assembly is locally determined, conditional upon a zeta-dimension zero set of (infinitely many) "input" tiles. This last is similar to the definition of randomized computation for Turing machines, in which an algorithm is deterministic relative to an oracle sequence of coin flips that provides advice but does not itself compute. Keywords: tile self-assembly, statistically self-similar Sierpinski Triangle.<|reference_end|>
arxiv
@article{sterling2009self-assembly, title={Self-Assembly of a Statistically Self-Similar Fractal}, author={Aaron Sterling}, journal={arXiv preprint arXiv:0904.1630}, year={2009}, archivePrefix={arXiv}, eprint={0904.1630}, primaryClass={cs.CC cs.DS cs.OH} }
sterling2009self-assembly
arxiv-7056
0904.1631
Intent expression using eye robot for mascot robot system
<|reference_start|>Intent expression using eye robot for mascot robot system: An intent expression system using eye robots is proposed for a mascot robot system from a viewpoint of humatronics. The eye robot aims at providing a basic interface method for an information terminal robot system. To achieve better understanding of the displayed information, the importance and the degree of certainty of the information should be communicated along with the main content. The proposed intent expression system aims at conveying this additional information using the eye robot system. Eye motions are represented as the states in a pleasure-arousal space model. Changes in the model state are calculated by fuzzy inference according to the importance and degree of certainty of the displayed information. These changes influence the arousal-sleep coordinates in the space that correspond to levels of liveliness during communication. The eye robot provides a basic interface for the mascot robot system that is easily understood as an information terminal for home environments in a humatronics society.<|reference_end|>
arxiv
@article{yamazaki2009intent, title={Intent expression using eye robot for mascot robot system}, author={Yoichi Yamazaki, Fangyan Dong, Yuta Masuda, Yukiko Uehara, Petar Kormushev, Hai An Vu, Phuc Quang Le, Kaoru Hirota}, journal={8th International Symposium on Advanced Intelligent Systems (ISIS2007), pp. 576-580, 2007}, year={2009}, archivePrefix={arXiv}, eprint={0904.1631}, primaryClass={cs.RO cs.AI cs.HC} }
yamazaki2009intent
arxiv-7057
0904.1645
A 3-approximation algorithm for computing a parsimonious first speciation in the gene duplication model
<|reference_start|>A 3-approximation algorithm for computing a parsimonious first speciation in the gene duplication model: We consider the following problem: from a given set of gene family trees on a set of genomes, find a first speciation that splits these genomes into two subsets and minimizes the number of gene duplications that happened before this speciation. We call this problem the Minimum Duplication Bipartition Problem. Using a generalization of the Minimum Edge-Cut Problem, known as Submodular Function Minimization, we propose a polynomial time and space 3-approximation algorithm for the Minimum Duplication Bipartition Problem.<|reference_end|>
arxiv
@article{chauve2009a, title={A 3-approximation algorithm for computing a parsimonious first speciation in the gene duplication model}, author={Cedric Chauve, Aïda Ouangraoua (LaBRI)}, journal={arXiv preprint arXiv:0904.1645}, year={2009}, archivePrefix={arXiv}, eprint={0904.1645}, primaryClass={cs.DM cs.DS q-bio.QM} }
chauve2009a
arxiv-7058
0904.1672
CP-logic: A Language of Causal Probabilistic Events and Its Relation to Logic Programming
<|reference_start|>CP-logic: A Language of Causal Probabilistic Events and Its Relation to Logic Programming: This paper develops a logical language for representing probabilistic causal laws. Our interest in such a language is twofold. First, it can be motivated as a fundamental study of the representation of causal knowledge. Causality has an inherent dynamic aspect, which has been studied at the semantical level by Shafer in his framework of probability trees. In such a dynamic context, where the evolution of a domain over time is considered, the idea of a causal law as something which guides this evolution is quite natural. In our formalization, a set of probabilistic causal laws can be used to represent a class of probability trees in a concise, flexible and modular way. In this way, our work extends Shafer's by offering a convenient logical representation for his semantical objects. Second, this language also has relevance for the area of probabilistic logic programming. In particular, we prove that the formal semantics of a theory in our language can be equivalently defined as a probability distribution over the well-founded models of certain logic programs, rendering it formally quite similar to existing languages such as ICL or PRISM. Because we can motivate and explain our language in a completely self-contained way as a representation of probabilistic causal laws, this provides a new way of explaining the intuitions behind such probabilistic logic programs: we can say precisely which knowledge such a program expresses, in terms that are equally understandable by a non-logician. Moreover, we also obtain an additional piece of knowledge representation methodology for probabilistic logic programs, by showing how they can express probabilistic causal laws.<|reference_end|>
arxiv
@article{vennekens2009cp-logic:, title={CP-logic: A Language of Causal Probabilistic Events and Its Relation to Logic Programming}, author={Joost Vennekens, Marc Denecker, Maurice Bruynooghe}, journal={arXiv preprint arXiv:0904.1672}, year={2009}, archivePrefix={arXiv}, eprint={0904.1672}, primaryClass={cs.AI cs.LO} }
vennekens2009cp-logic:
arxiv-7059
0904.1692
Error Bounds for Repeat-Accumulate Codes Decoded via Linear Programming
<|reference_start|>Error Bounds for Repeat-Accumulate Codes Decoded via Linear Programming: We examine regular and irregular repeat-accumulate (RA) codes with repetition degrees which are all even. For these codes and with a particular choice of an interleaver, we give an upper bound on the decoding error probability of a linear-programming based decoder which is an inverse polynomial in the block length. Our bound is valid for any memoryless, binary-input, output-symmetric (MBIOS) channel. This result generalizes the bound derived by Feldman et al., which was for regular RA(2) codes.<|reference_end|>
arxiv
@article{goldenberg2009error, title={Error Bounds for Repeat-Accumulate Codes Decoded via Linear Programming}, author={Idan Goldenberg and David Burshtein}, journal={arXiv preprint arXiv:0904.1692}, year={2009}, archivePrefix={arXiv}, eprint={0904.1692}, primaryClass={cs.IT math.IT} }
goldenberg2009error
arxiv-7060
0904.1696
Undirected Graphs of Entanglement 3
<|reference_start|>Undirected Graphs of Entanglement 3: Entanglement is a complexity measure of digraphs that originates in fixed-point logics. Its combinatorial purpose is to measure the nested depth of cycles in digraphs. We address the problem of characterizing the structure of graphs of entanglement at most $k$. Only partial results are known so far: digraphs for $k=1$, and undirected graphs for $k=2$. In this paper we investigate the structure of undirected graphs for $k=3$. Our main tool is the so-called \emph{Tutte's decomposition} of 2-connected graphs into cycles and 3-connected components in a tree-like fashion. We shall give necessary conditions on Tutte's tree to be a tree decomposition of a 2-connected graph of entanglement 3.<|reference_end|>
arxiv
@article{belkhir2009undirected, title={Undirected Graphs of Entanglement 3}, author={Walid Belkhir}, journal={arXiv preprint arXiv:0904.1696}, year={2009}, archivePrefix={arXiv}, eprint={0904.1696}, primaryClass={cs.GT cs.DM} }
belkhir2009undirected
arxiv-7061
0904.1700
Recovering the state sequence of hidden Markov models using mean-field approximations
<|reference_start|>Recovering the state sequence of hidden Markov models using mean-field approximations: Inferring the sequence of states from observations is one of the most fundamental problems in Hidden Markov Models. In statistical physics language, this problem is equivalent to computing the marginals of a one-dimensional model with a random external field. While this task can be accomplished through transfer matrix methods, it becomes quickly intractable when the underlying state space is large. This paper develops several low-complexity approximate algorithms to address this inference problem when the state space becomes large. The new algorithms are based on various mean-field approximations of the transfer matrix. Their performances are studied in detail on a simple realistic model for DNA pyrosequencing.<|reference_end|>
arxiv
@article{sinton2009recovering, title={Recovering the state sequence of hidden Markov models using mean-field approximations}, author={Antoine Sinton}, journal={arXiv preprint arXiv:0904.1700}, year={2009}, doi={10.1088/1742-5468/2009/07/P07026}, archivePrefix={arXiv}, eprint={0904.1700}, primaryClass={cond-mat.dis-nn cond-mat.stat-mech cs.LG} }
sinton2009recovering
arxiv-7062
0904.1701
The Star Height Hierarchy Vs The Variable Hierarchy
<|reference_start|>The Star Height Hierarchy Vs The Variable Hierarchy: The star height hierarchy (resp. the variable hierarchy) classifies $\mu$-terms according to the nested depth of fixed-point operators (resp. the number of bound variables). We prove, under some assumptions, that the variable hierarchy is a proper refinement of the star height hierarchy: the non-collapse of the variable hierarchy implies the non-collapse of the star height hierarchy. The proof relies on the combinatorial characterization of the two hierarchies.<|reference_end|>
arxiv
@article{belkhir2009the, title={The Star Height Hierarchy Vs. The Variable Hierarchy}, author={Walid Belkhir}, journal={arXiv preprint arXiv:0904.1701}, year={2009}, archivePrefix={arXiv}, eprint={0904.1701}, primaryClass={cs.LO cs.GT} }
belkhir2009the
arxiv-7063
0904.1703
Closure Under Minors of Undirected Entanglement
<|reference_start|>Closure Under Minors of Undirected Entanglement: Entanglement is a digraph complexity measure that originates in fixed-point theory. Its purpose is to count the nested depth of cycles in digraphs. In this paper we prove that the class of undirected graphs of entanglement at most $k$, for arbitrary fixed $k \in \mathbb{N}$, is closed under taking minors. Our proof relies on the game theoretic characterization of entanglement in terms of Robber and Cops games.<|reference_end|>
arxiv
@article{belkhir2009closure, title={Closure Under Minors of Undirected Entanglement}, author={Walid Belkhir}, journal={arXiv preprint arXiv:0904.1703}, year={2009}, archivePrefix={arXiv}, eprint={0904.1703}, primaryClass={cs.DM cs.GT} }
belkhir2009closure
arxiv-7064
0904.1705
Bounded Max-Colorings of Graphs
<|reference_start|>Bounded Max-Colorings of Graphs: In a bounded max-coloring of a vertex/edge weighted graph, each color class is of cardinality at most $b$ and of weight equal to the weight of the heaviest vertex/edge in this class. The bounded max-vertex/edge-coloring problems ask for such a coloring minimizing the sum of all color classes' weights. In this paper we present complexity results and approximation algorithms for those problems on general graphs, bipartite graphs and trees. We first show that both problems are polynomial for trees, when the number of colors is fixed, and $H_b$ approximable for general graphs, when the bound $b$ is fixed. For the bounded max-vertex-coloring problem, we show a 17/11-approximation algorithm for bipartite graphs, a PTAS for trees as well as for bipartite graphs when $b$ is fixed. For unit weights, we show that the known 4/3 lower bound for bipartite graphs is tight by providing a simple 4/3 approximation algorithm. For the bounded max-edge-coloring problem, we prove approximation factors of $3-2/\sqrt{2b}$, for general graphs, $\min\{e, 3-2/\sqrt{b}\}$, for bipartite graphs, and 2, for trees. Furthermore, we show that this problem is NP-complete even for trees. This is the first complexity result for max-coloring problems on trees.<|reference_end|>
arxiv
@article{bampis2009bounded, title={Bounded Max-Colorings of Graphs}, author={Evripidis Bampis, Alexander Kononov, Giorgio Lucarelli, Ioannis Milis}, journal={arXiv preprint arXiv:0904.1705}, year={2009}, archivePrefix={arXiv}, eprint={0904.1705}, primaryClass={cs.DS} }
bampis2009bounded
arxiv-7065
0904.1712
Turbo Packet Combining for Broadband Space-Time BICM Hybrid-ARQ Systems with Co-Channel Interference
<|reference_start|>Turbo Packet Combining for Broadband Space-Time BICM Hybrid-ARQ Systems with Co-Channel Interference: In this paper, efficient turbo packet combining for single carrier (SC) broadband multiple-input--multiple-output (MIMO) hybrid--automatic repeat request (ARQ) transmission with unknown co-channel interference (CCI) is studied. We propose a new frequency domain soft minimum mean square error (MMSE)-based signal level combining technique where received signals and channel frequency responses (CFRs) corresponding to all retransmissions are used to decode the data packet. We provide a recursive implementation algorithm for the introduced scheme, and show that both its computational complexity and memory requirements are quite insensitive to the ARQ delay, i.e., maximum number of ARQ rounds. Furthermore, we analyze the asymptotic performance, and show that under a sum-rank condition on the CCI MIMO ARQ channel, the proposed packet combining scheme is not interference-limited. Simulation results are provided to demonstrate the gains offered by the proposed technique.<|reference_end|>
arxiv
@article{ait-idir2009turbo, title={Turbo Packet Combining for Broadband Space-Time BICM Hybrid-ARQ Systems with Co-Channel Interference}, author={Tarik Ait-Idir, Houda Chafnaji, and Samir Saoudi}, journal={IEEE Transactions on Wireless Communications, vol. 9, no. 5, pp. 1686-1697, May 2010}, year={2009}, doi={10.1109/TWC.2010.05.090441}, archivePrefix={arXiv}, eprint={0904.1712}, primaryClass={cs.IT math.IT} }
ait-idir2009turbo
arxiv-7066
0904.1729
Joint Opportunistic Scheduling in Multi-Cellular Systems
<|reference_start|>Joint Opportunistic Scheduling in Multi-Cellular Systems: We address the problem of multiuser scheduling with partial channel information in a multi-cell environment. The scheduling problem is formulated jointly with the ARQ based channel learning process and the intercell interference mitigating cell breathing protocol. The optimal joint scheduling policy under various system constraints is established. The general problem is posed as a generalized Restless Multiarmed Bandit process and the notion of indexability is studied. We conjecture, with numerical support, that the multicell multiuser scheduling problem is indexable and obtain a partial structure of the index policy.<|reference_end|>
arxiv
@article{murugesan2009joint, title={Joint Opportunistic Scheduling in Multi-Cellular Systems}, author={Sugumar Murugesan, Philip Schniter}, journal={arXiv preprint arXiv:0904.1729}, year={2009}, archivePrefix={arXiv}, eprint={0904.1729}, primaryClass={cs.NI} }
murugesan2009joint
arxiv-7067
0904.1730
Feedback-based online network coding
<|reference_start|>Feedback-based online network coding: Current approaches to the practical implementation of network coding are batch-based, and often do not use feedback, except possibly to signal completion of a file download. In this paper, the various benefits of using feedback in a network coded system are studied. It is shown that network coding can be performed in a completely online manner, without the need for batches or generations, and that such online operation does not affect the throughput. Although these ideas are presented in a single-hop packet erasure broadcast setting, they naturally extend to more general lossy networks which employ network coding in the presence of feedback. The impact of feedback on queue size at the sender and decoding delay at the receivers is studied. Strategies for adaptive coding based on feedback are presented, with the goal of minimizing the queue size and delay. The asymptotic behavior of these metrics is characterized, in the limit of the traffic load approaching capacity. Different notions of decoding delay are considered, including an order-sensitive notion which assumes that packets are useful only when delivered in order. Our work may be viewed as a natural extension of Automatic Repeat reQuest (ARQ) schemes to coded networks.<|reference_end|>
arxiv
@article{sundararajan2009feedback-based, title={Feedback-based online network coding}, author={Jay Kumar Sundararajan, Devavrat Shah, Muriel Medard}, journal={arXiv preprint arXiv:0904.1730}, year={2009}, archivePrefix={arXiv}, eprint={0904.1730}, primaryClass={cs.NI cs.IT math.IT} }
sundararajan2009feedback-based
arxiv-7068
0904.1752
On polynomial growth functions of D0L-systems
<|reference_start|>On polynomial growth functions of D0L-systems: The aim of this paper is to prove that every polynomial function that maps the natural integers to the positive integers is the growth function of some D0L-system.<|reference_end|>
arxiv
@article{cassaigne2009on, title={On polynomial growth functions of D0L-systems}, author={Julien Cassaigne and Francois Nicolas}, journal={arXiv preprint arXiv:0904.1752}, year={2009}, archivePrefix={arXiv}, eprint={0904.1752}, primaryClass={cs.DM cs.FL} }
cassaigne2009on
arxiv-7069
0904.1754
Opportunistic Multiuser Scheduling in a Three State Markov-modeled Downlink
<|reference_start|>Opportunistic Multiuser Scheduling in a Three State Markov-modeled Downlink: We consider the downlink of a cellular system and address the problem of multiuser scheduling with partial channel information. In our setting, the channel of each user is modeled by a three-state Markov chain. The scheduler indirectly estimates the channel via accumulated Automatic Repeat Request (ARQ) feedback from the scheduled users and uses this information in future scheduling decisions. Using a Partially Observable Markov Decision Process (POMDP), we formulate a throughput maximization problem that is an extension of our previous work where the channels were modeled using two states. We recall the greedy policy that was shown to be optimal and easy to implement in the two state case and study the implementation structure of the greedy policy in the considered downlink. We classify the system into two types based on the channel statistics and obtain round robin structures for the greedy policy for each system type. We obtain performance bounds for the downlink system using these structures and study the conditions under which the greedy policy is optimal.<|reference_end|>
arxiv
@article{murugesan2009opportunistic, title={Opportunistic Multiuser Scheduling in a Three State Markov-modeled Downlink}, author={Sugumar Murugesan and Philip Schniter}, journal={arXiv preprint arXiv:0904.1754}, year={2009}, archivePrefix={arXiv}, eprint={0904.1754}, primaryClass={cs.NI} }
murugesan2009opportunistic
arxiv-7070
0904.1783
Exact Join Detection for Convex Polyhedra and Other Numerical Abstractions
<|reference_start|>Exact Join Detection for Convex Polyhedra and Other Numerical Abstractions: Deciding whether the union of two convex polyhedra is itself a convex polyhedron is a basic problem in polyhedral computations, with important applications in the field of constrained control and in the synthesis, analysis, verification and optimization of hardware and software systems. In such application fields though, general convex polyhedra are just one among many so-called numerical abstractions, which range from restricted families of (not necessarily closed) convex polyhedra to non-convex geometrical objects. We thus tackle the problem from an abstract point of view: for a wide range of numerical abstractions that can be modeled as bounded join-semilattices --that is, partial orders where any finite set of elements has a least upper bound--, we show necessary and sufficient conditions for the equivalence between the lattice-theoretic join and the set-theoretic union. For the case of closed convex polyhedra --which, as far as we know, is the only one already studied in the literature-- we improve upon the state-of-the-art by providing a new algorithm with a better worst-case complexity. The results and algorithms presented for the other numerical abstractions are new to this paper. All the algorithms have been implemented, experimentally validated, and made available in the Parma Polyhedra Library.<|reference_end|>
arxiv
@article{bagnara2009exact, title={Exact Join Detection for Convex Polyhedra and Other Numerical Abstractions}, author={Roberto Bagnara and Patricia M. Hill and Enea Zaffanella}, journal={arXiv preprint arXiv:0904.1783}, year={2009}, archivePrefix={arXiv}, eprint={0904.1783}, primaryClass={cs.CG} }
bagnara2009exact
arxiv-7071
0904.1812
Two Designs of Space-Time Block Codes Achieving Full Diversity with Partial Interference Cancellation Group Decoding
<|reference_start|>Two Designs of Space-Time Block Codes Achieving Full Diversity with Partial Interference Cancellation Group Decoding: A partial interference cancellation (PIC) group decoding based space-time block code (STBC) design criterion was recently proposed by Guo and Xia, where the trade-off between decoding complexity and code rate is addressed while full diversity is achieved. In this paper, two designs of STBC are proposed for any number of transmit antennas that can obtain full diversity when a PIC group decoding (with a particular grouping scheme) is applied at the receiver. With the PIC group decoding and an appropriate grouping scheme for the decoding, the proposed STBC are shown to obtain the same diversity gain as the ML decoding, but have a low decoding complexity. The first proposed STBC is designed with multiple diagonal layers and it can obtain the full diversity for the two-layer design with the PIC group decoding, and the rate is up to 2 symbols per channel use. But with PIC-SIC group decoding, the first proposed STBC can obtain full diversity for any number of layers and the rate can be full. The second proposed STBC can obtain full diversity and a rate up to 9/4 with the PIC group decoding. Some code design examples are given and simulation results show that the newly proposed STBC can well address the rate-performance-complexity tradeoff of MIMO systems.<|reference_end|>
arxiv
@article{zhang2009two, title={Two Designs of Space-Time Block Codes Achieving Full Diversity with Partial Interference Cancellation Group Decoding}, author={Wei Zhang and Tianyi Xu and Xiang-Gen Xia}, journal={arXiv preprint arXiv:0904.1812}, year={2009}, archivePrefix={arXiv}, eprint={0904.1812}, primaryClass={cs.IT math.IT} }
zhang2009two
arxiv-7072
0904.1840
Higher Dimensional Consensus: Learning in Large-Scale Networks
<|reference_start|>Higher Dimensional Consensus: Learning in Large-Scale Networks: The paper presents higher dimension consensus (HDC) for large-scale networks. HDC generalizes the well-known average-consensus algorithm. It divides the nodes of the large-scale network into anchors and sensors. Anchors are nodes whose states are fixed over the HDC iterations, whereas sensors are nodes that update their states as a linear combination of the neighboring states. Under appropriate conditions, we show that the sensor states converge to a linear combination of the anchor states. Through the concept of anchors, HDC captures in a unified framework several interesting network tasks, including distributed sensor localization, leader-follower, distributed Jacobi to solve linear systems of algebraic equations, and, of course, average-consensus. In many network applications, it is of interest to learn the weights of the distributed linear algorithm so that the sensors converge to a desired state. We term this inverse problem the HDC learning problem. We pose learning in HDC as a constrained non-convex optimization problem, which we cast in the framework of multi-objective optimization (MOP) and to which we apply Pareto optimality. We prove analytically relevant properties of the MOP solutions and of the Pareto front from which we derive the solution to learning in HDC. Finally, the paper shows how the MOP approach resolves interesting tradeoffs (speed of convergence versus quality of the final state) arising in learning in HDC in resource constrained networks.<|reference_end|>
arxiv
@article{khan2009higher, title={Higher Dimensional Consensus: Learning in Large-Scale Networks}, author={Usman A. Khan and Soummya Kar and Jose M. F. Moura}, journal={U. A. Khan, S. Kar, and J. M. F. Moura, "Higher dimensional consensus: Learning in large-scale networks," IEEE Transactions on Signal Processing, vol. 58, no. 5, pp. 2836-2849, May 2010}, year={2009}, doi={10.1109/TSP.2010.2042482}, archivePrefix={arXiv}, eprint={0904.1840}, primaryClass={cs.IT cs.DC math.IT math.OC} }
khan2009higher
arxiv-7073
0904.1888
On Fodor on Darwin on Evolution
<|reference_start|>On Fodor on Darwin on Evolution: Jerry Fodor argues that Darwin was wrong about "natural selection" because (1) it is only a tautology rather than a scientific law that can support counterfactuals ("If X had happened, Y would have happened") and because (2) only minds can select. Hence Darwin's analogy with "artificial selection" by animal breeders was misleading and evolutionary explanation is nothing but post-hoc historical narrative. I argue that Darwin was right on all counts.<|reference_end|>
arxiv
@article{harnad2009on, title={On Fodor on Darwin on Evolution}, author={Stevan Harnad}, journal={arXiv preprint arXiv:0904.1888}, year={2009}, archivePrefix={arXiv}, eprint={0904.1888}, primaryClass={cs.NE cs.LG} }
harnad2009on
arxiv-7074
0904.1889
First Person Singular
<|reference_start|>First Person Singular: Brian Rotman argues that (one) "mind" and (one) "god" are only conceivable, literally, because of (alphabetic) literacy, which allowed us to designate each of these ghosts as an incorporeal, speaker-independent "I" (or, in the case of infinity, a notional agent that goes on counting forever). I argue that to have a mind is to have the capacity to feel. No one can be sure which organisms feel, hence have minds, but it seems likely that one-celled organisms and plants do not, whereas animals do. So minds originated before humans and before language --hence, a fortiori, before writing, whether alphabetic or ideographic.<|reference_end|>
arxiv
@article{harnad2009first, title={First Person Singular}, author={Stevan Harnad}, journal={arXiv preprint arXiv:0904.1889}, year={2009}, archivePrefix={arXiv}, eprint={0904.1889}, primaryClass={cs.OH} }
harnad2009first
arxiv-7075
0904.1892
Lattice Strategies for the Dirty Multiple Access Channel
<|reference_start|>Lattice Strategies for the Dirty Multiple Access Channel: A generalization of the Gaussian dirty-paper problem to a multiple access setup is considered. There are two additive interference signals, one known to each transmitter but none to the receiver. The rates achievable using Costa's strategies (i.e. by a random binning scheme induced by Costa's auxiliary random variables) vanish in the limit when the interference signals are strong. In contrast, it is shown that lattice strategies ("lattice precoding") can achieve positive rates independent of the interferences, and in fact in some cases - which depend on the noise variance and power constraints - they are optimal. In particular, lattice strategies are optimal in the limit of high SNR. It is also shown that the gap between the achievable rate region and the capacity region is at most 0.167 bit. Thus, the dirty MAC is another instance of a network setup, like the Korner-Marton modulo-two sum problem, where linear coding is potentially better than random binning. Lattice transmission schemes and conditions for optimality for the asymmetric case, where there is only one interference which is known to one of the users (who serves as a "helper" to the other user), and for the "common interference" case are also derived. In the former case the gap between the helper achievable rate and its capacity is at most 0.085 bit.<|reference_end|>
arxiv
@article{philosof2009lattice, title={Lattice Strategies for the Dirty Multiple Access Channel}, author={Tal Philosof and Ram Zamir and Uri Erez and Ashish Khisti}, journal={arXiv preprint arXiv:0904.1892}, year={2009}, archivePrefix={arXiv}, eprint={0904.1892}, primaryClass={cs.IT math.IT} }
philosof2009lattice
arxiv-7076
0904.1897
Refined Coding Bounds and Code Constructions for Coherent Network Error Correction
<|reference_start|>Refined Coding Bounds and Code Constructions for Coherent Network Error Correction: Coherent network error correction is the error-control problem in network coding with the knowledge of the network codes at the source and sink nodes. With respect to a given set of local encoding kernels defining a linear network code, we obtain refined versions of the Hamming bound, the Singleton bound and the Gilbert-Varshamov bound for coherent network error correction. Similar to its classical counterpart, this refined Singleton bound is tight for linear network codes. The tightness of this refined bound is shown by two construction algorithms of linear network codes achieving this bound. These two algorithms illustrate different design methods: one makes use of existing network coding algorithms for error-free transmission and the other makes use of classical error-correcting codes. The implication of the tightness of the refined Singleton bound is that the sink nodes with higher maximum flow values can have higher error correction capabilities.<|reference_end|>
arxiv
@article{yang2009refined, title={Refined Coding Bounds and Code Constructions for Coherent Network Error Correction}, author={Shenghao Yang and Raymond W. Yeung and Chi-Kin Ngai}, journal={IEEE-J-IT 57 (2011) 1409 - 1424}, year={2009}, doi={10.1109/TIT.2011.2106930}, archivePrefix={arXiv}, eprint={0904.1897}, primaryClass={cs.IT math.IT} }
yang2009refined
arxiv-7077
0904.1902
On Distributed Model Checking of MSO on Graphs
<|reference_start|>On Distributed Model Checking of MSO on Graphs: We consider distributed model-checking of Monadic Second-Order logic (MSO) on graphs which constitute the topology of communication networks. The graph is thus both the structure being checked and the system on which the distributed computation is performed. We prove that MSO can be distributively model-checked with only a constant number of messages sent over each link for planar networks with bounded diameter, as well as for networks with bounded degree and bounded tree-length. The distributed algorithms rely on nontrivial transformations of linear time sequential algorithms for tree decompositions of bounded tree-width graphs.<|reference_end|>
arxiv
@article{grumbach2009on, title={On Distributed Model Checking of MSO on Graphs}, author={Stephane Grumbach (INRIA Liama) and Zhilin Wu (CASIA Liama)}, journal={arXiv preprint arXiv:0904.1902}, year={2009}, archivePrefix={arXiv}, eprint={0904.1902}, primaryClass={cs.LO cs.DC} }
grumbach2009on
arxiv-7078
0904.1907
Average Entropy Functions
<|reference_start|>Average Entropy Functions: The closure of the set of entropy functions associated with n discrete variables, Gamma*_n, is a convex cone in (2^n - 1)-dimensional space, but its full characterization remains an open problem. In this paper, we map Gamma*_n to an n-dimensional region Phi*_n by averaging the joint entropies involving the same number of variables, and show that the simpler Phi*_n can be characterized solely by Shannon-type information inequalities.<|reference_end|>
arxiv
@article{chen2009average, title={Average Entropy Functions}, author={Qi Chen and Chen He and Lingge Jiang and Qingchuan Wang}, journal={arXiv preprint arXiv:0904.1907}, year={2009}, archivePrefix={arXiv}, eprint={0904.1907}, primaryClass={cs.IT cs.RO math.IT} }
chen2009average
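The averaging map of this abstract can be sketched concretely. An illustrative example (my own toy distribution, not the paper's code): for n = 2 variables, the k-th averaged coordinate is the mean of the joint entropies over all k-element subsets, so phi_1 = (H(X) + H(Y)) / 2 and phi_2 = H(X, Y).

```python
# Illustrative sketch: map the entropy function of n = 2 binary variables
# to the averaged vector (phi_1, ..., phi_n) described in the abstract.
from itertools import combinations
from math import log2

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

def marginal(joint, keep):
    """Marginalize a joint pmf (keyed by outcome tuples) onto coordinates `keep`."""
    out = {}
    for outcome, prob in joint.items():
        key = tuple(outcome[i] for i in keep)
        out[key] = out.get(key, 0.0) + prob
    return out

# Hypothetical joint distribution of (X, Y).
p = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.25, (1, 1): 0.25}

n = 2
phi = []
for k in range(1, n + 1):
    subsets = list(combinations(range(n), k))
    avg = sum(entropy(marginal(p, s).values()) for s in subsets) / len(subsets)
    phi.append(avg)

print(phi)  # phi[0] averages H(X) and H(Y); phi[1] is H(X, Y)
```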
arxiv-7079
0904.1910
Compressive Sampling with Known Spectral Energy Density
<|reference_start|>Compressive Sampling with Known Spectral Energy Density: A method to improve l1 performance of the CS (Compressive Sampling) for A-scan SFCW-GPR (Stepped Frequency Continuous Wave-Ground Penetrating Radar) signals with known spectral energy density is proposed. Instead of random sampling, the proposed method selects the location of samples to follow the distribution of the spectral energy. Samples collected from three different measurement methods; the uniform sampling, random sampling, and energy equipartition sampling, are used to reconstruct a given monocycle signal whose spectral energy density is known. Objective performance evaluation in terms of PSNR (Peak Signal to Noise Ratio) indicates empirically that the CS reconstruction of random sampling outperforms the uniform sampling, while the energy equipartition sampling outperforms both of them. These results suggest that similar performance improvement can be achieved for the compressive SFCW (Stepped Frequency Continuous Wave) radar, allowing even higher acquisition speed.<|reference_end|>
arxiv
@article{suksmono2009compressive, title={Compressive Sampling with Known Spectral Energy Density}, author={Andriyan Bayu Suksmono}, journal={arXiv preprint arXiv:0904.1910}, year={2009}, archivePrefix={arXiv}, eprint={0904.1910}, primaryClass={cs.IT cs.CE math.FA math.IT} }
suksmono2009compressive
arxiv-7080
0904.1915
Logical locality entails frugal distributed computation over graphs
<|reference_start|>Logical locality entails frugal distributed computation over graphs: First-order logic is known to have limited expressive power over finite structures. It enjoys in particular the locality property, which states that first-order formulae cannot have a global view of a structure. This limitation ensures their low sequential computational complexity. We show that the locality impacts their distributed computational complexity as well. We use first-order formulae to describe the properties of finite connected graphs, which are the topology of communication networks, on which the first-order formulae are also evaluated. We show that over bounded degree networks and planar networks, first-order properties can be frugally evaluated, that is, with only a bounded number of messages, of size logarithmic in the number of nodes, sent over each link. Moreover, we show that the result carries over for the extension of first-order logic with unary counting.<|reference_end|>
arxiv
@article{grumbach2009logical, title={Logical locality entails frugal distributed computation over graphs}, author={Stephane Grumbach (INRIA Liama) and Zhilin Wu (CASIA Liama)}, journal={arXiv preprint arXiv:0904.1915}, year={2009}, archivePrefix={arXiv}, eprint={0904.1915}, primaryClass={cs.LO cs.DC} }
grumbach2009logical
arxiv-7081
0904.1920
Feasibility of Motion Planning on Acyclic and Strongly Connected Directed Graphs
<|reference_start|>Feasibility of Motion Planning on Acyclic and Strongly Connected Directed Graphs: Motion planning is a fundamental problem of robotics with applications in many areas of computer science and beyond. Its restriction to graphs has been investigated in the literature, for it allows one to concentrate on the combinatorial problem, abstracting from geometric considerations. In this paper, we consider motion planning over directed graphs, which are of interest for asymmetric communication networks. Directed graphs generalize undirected graphs, while introducing a new source of complexity to the motion planning problem: moves are not reversible. We first consider the class of acyclic directed graphs and show that the feasibility can be solved in time linear in the product of the number of vertices and the number of arcs. We then turn to strongly connected directed graphs. We first prove a structural theorem for decomposing strongly connected directed graphs into strongly biconnected components. Based on the structural decomposition, we give an algorithm for the feasibility of motion planning on strongly connected directed graphs, and show that it can also be decided in time linear in the product of the number of vertices and the number of arcs.<|reference_end|>
arxiv
@article{wu2009feasibility, title={Feasibility of Motion Planning on Acyclic and Strongly Connected Directed Graphs}, author={Zhilin Wu (CASIA Liama) and Stephane Grumbach (INRIA Liama)}, journal={arXiv preprint arXiv:0904.1920}, year={2009}, archivePrefix={arXiv}, eprint={0904.1920}, primaryClass={cs.DM cs.DS} }
wu2009feasibility
arxiv-7082
0904.1923
Seidel Minor, Permutation Graphs and Combinatorial Properties
<|reference_start|>Seidel Minor, Permutation Graphs and Combinatorial Properties: A permutation graph is an intersection graph of segments lying between two parallel lines. A Seidel complementation of a finite graph at one of its vertices $v$ consists in complementing the edges between the neighborhood and the non-neighborhood of $v$. Two graphs are Seidel complement equivalent if one can be obtained from the other by successive applications of Seidel complementation. In this paper we introduce the new concepts of Seidel complementation and Seidel minor, and then show that this operation preserves cographs and the structure of modular decomposition. The main contribution of this paper is to provide a new and succinct characterization of permutation graphs, namely: a graph is a permutation graph if and only if it does not contain any of the following graphs: $C_5$, $C_7$, $XF_{6}^{2}$, $XF_{5}^{2n+3}$, $C_{2n}$ ($n\geqslant6$), or their complements as a Seidel minor. In addition we provide an $O(n+m)$-time algorithm that outputs one of the forbidden Seidel minors if the graph is not a permutation graph.<|reference_end|>
arxiv
@article{limouzy2009seidel, title={Seidel Minor, Permutation Graphs and Combinatorial Properties}, author={Vincent Limouzy}, journal={arXiv preprint arXiv:0904.1923}, year={2009}, archivePrefix={arXiv}, eprint={0904.1923}, primaryClass={cs.DM} }
limouzy2009seidel
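The Seidel complementation this abstract defines is a purely local edge toggle. A minimal sketch (my own toy example, not the paper's code): at a vertex v, flip exactly the adjacencies between N(v) and the non-neighbors of v, leaving everything else untouched.

```python
# Illustrative sketch of Seidel complementation at a vertex v:
# complement the edges between N(v) and V \ (N(v) ∪ {v}).

def seidel_complement(edges, vertices, v):
    """Return the edge set (frozensets of endpoints) after complementing at v."""
    nbrs = {u for e in edges if v in e for u in e} - {v}
    others = set(vertices) - nbrs - {v}
    new_edges = set(edges)
    for a in nbrs:
        for b in others:
            e = frozenset((a, b))
            if e in new_edges:
                new_edges.remove(e)   # edge present: delete it
            else:
                new_edges.add(e)      # edge absent: create it
    return new_edges

# Path 1-2-3-4, complemented at vertex 2 (neighbors {1, 3}, non-neighbor {4}):
# edge (3, 4) is removed and edge (1, 4) appears.
E = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4)]}
print(seidel_complement(E, {1, 2, 3, 4}, 2))
```

Applying the operation twice at the same vertex restores the original graph, which is why it generates an equivalence ("Seidel complement equivalent") on graphs.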
arxiv-7083
0904.1931
KiWi: A Scalable Subspace Clustering Algorithm for Gene Expression Analysis
<|reference_start|>KiWi: A Scalable Subspace Clustering Algorithm for Gene Expression Analysis: Subspace clustering has gained increasing popularity in the analysis of gene expression data. Among subspace cluster models, the recently introduced order-preserving sub-matrix (OPSM) has demonstrated high promise. An OPSM, essentially a pattern-based subspace cluster, is a subset of rows and columns in a data matrix for which all the rows induce the same linear ordering of columns. Existing OPSM discovery methods do not scale well to increasingly large expression datasets. In particular, twig clusters having few genes and many experiments incur explosive computational costs and are completely pruned off by existing methods. However, it is of particular interest to determine small groups of genes that are tightly coregulated across many conditions. In this paper, we present KiWi, an OPSM subspace clustering algorithm that is scalable to massive datasets, capable of discovering twig clusters and identifying negative as well as positive correlations. We extensively validate KiWi using relevant biological datasets and show that KiWi correctly assigns redundant probes to the same cluster, groups experiments with common clinical annotations, differentiates real promoter sequences from negative control sequences, and shows good association with cis-regulatory motif predictions.<|reference_end|>
arxiv
@article{griffith2009kiwi:, title={KiWi: A Scalable Subspace Clustering Algorithm for Gene Expression Analysis}, author={Obi L. Griffith and Byron J. Gao and Mikhail Bilenky and Yuliya Prichyna and Martin Ester and Steven J.M. Jones}, journal={arXiv preprint arXiv:0904.1931}, year={2009}, archivePrefix={arXiv}, eprint={0904.1931}, primaryClass={cs.DB cs.AI q-bio.GN} }
griffith2009kiwi:
arxiv-7084
0904.1956
Ergodic Layered Erasure One-Sided Interference Channels
<|reference_start|>Ergodic Layered Erasure One-Sided Interference Channels: The sum capacity of a class of layered erasure one-sided interference channels is developed under the assumption of no channel state information at the transmitters. Outer bounds are presented for this model and are shown to be tight for the following sub-classes: i) weak, ii) strong (mix of strong but not very strong (SnVS) and very strong (VS)), iii) ergodic very strong (mix of strong and weak), and iv) a sub-class of mixed interference (mix of SnVS and weak). Each sub-class is uniquely defined by the fading statistics.<|reference_end|>
arxiv
@article{aggarwal2009ergodic, title={Ergodic Layered Erasure One-Sided Interference Channels}, author={Vaneet Aggarwal and Lalitha Sankar and A. Robert Calderbank and H. Vincent Poor}, journal={arXiv preprint arXiv:0904.1956}, year={2009}, archivePrefix={arXiv}, eprint={0904.1956}, primaryClass={cs.IT math.IT} }
aggarwal2009ergodic
arxiv-7085
0904.1989
Personalized Recommendation via Integrated Diffusion on User-Item-Tag Tripartite Graphs
<|reference_start|>Personalized Recommendation via Integrated Diffusion on User-Item-Tag Tripartite Graphs: Personalized recommender systems are confronting great challenges of accuracy, diversification and novelty, especially when the data set is sparse and lacks accessorial information, such as user profiles, item attributes and explicit ratings. Collaborative tags contain rich information about personalized preferences and item contents, and are therefore potential to help in providing better recommendations. In this paper, we propose a recommendation algorithm based on an integrated diffusion on user-item-tag tripartite graphs. We use three benchmark data sets, Del.icio.us, MovieLens and BibSonomy, to evaluate our algorithm. Experimental results demonstrate that the usage of tag information can significantly improve accuracy, diversification and novelty of recommendations.<|reference_end|>
arxiv
@article{zhang2009personalized, title={Personalized Recommendation via Integrated Diffusion on User-Item-Tag Tripartite Graphs}, author={Zi-Ke Zhang and Tao Zhou and Yi-Cheng Zhang}, journal={Physica A 389 (2010) 179-186}, year={2009}, doi={10.1016/j.physa.2009.08.036}, archivePrefix={arXiv}, eprint={0904.1989}, primaryClass={cs.IR} }
zhang2009personalized
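The diffusion this abstract builds on can be sketched in its plain bipartite form (the baseline the tripartite method extends). An illustrative example with assumed toy data, not the paper's algorithm: resource starts on the target user's collected items, spreads to the users of those items, then back to items; uncollected items with the most resource are recommended.

```python
# Illustrative sketch: two-step mass diffusion on a user-item bipartite graph.
from collections import defaultdict

likes = {  # hypothetical data: user -> set of collected items
    "u1": {"i1", "i2"},
    "u2": {"i2", "i3"},
    "u3": {"i1", "i3", "i4"},
}

def diffuse(likes, target):
    item_deg = defaultdict(int)
    for items in likes.values():
        for i in items:
            item_deg[i] += 1
    # Step 1: each item collected by `target` holds one unit of resource,
    # split equally among the users connected to it.
    user_res = defaultdict(float)
    for i in likes[target]:
        for u, items in likes.items():
            if i in items:
                user_res[u] += 1.0 / item_deg[i]
    # Step 2: each user redistributes its resource equally over its items.
    item_res = defaultdict(float)
    for u, r in user_res.items():
        for i in likes[u]:
            item_res[i] += r / len(likes[u])
    # Recommend uncollected items, highest resource first.
    return sorted((i for i in item_res if i not in likes[target]),
                  key=item_res.get, reverse=True)

print(diffuse(likes, "u1"))  # ['i3', 'i4']
```

The paper's contribution is to run an integrated version of this diffusion over the user-item-tag tripartite graph, so tag co-occurrence also channels resource.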
arxiv-7086
0904.2012
Simplicial Databases
<|reference_start|>Simplicial Databases: In this paper, we define a category DB, called the category of simplicial databases, whose objects are databases and whose morphisms are data-preserving maps. Along the way we give a precise formulation of the category of relational databases, and prove that it is a full subcategory of DB. We also prove that limits and colimits always exist in DB and that they correspond to queries such as select, join, union, etc. One feature of our construction is that the schema of a simplicial database has a natural geometric structure: an underlying simplicial set. The geometry of a schema is a way of keeping track of relationships between distinct tables, and can be thought of as a system of foreign keys. The shape of a schema is generally intuitive (e.g. the schema for round-trip flights is a circle consisting of an edge from $A$ to $B$ and an edge from $B$ to $A$), and as such, may be useful for analyzing data. We give several applications of our approach, as well as possible advantages it has over the relational model. We also indicate some directions for further research.<|reference_end|>
arxiv
@article{spivak2009simplicial, title={Simplicial Databases}, author={David I. Spivak}, journal={arXiv preprint arXiv:0904.2012}, year={2009}, archivePrefix={arXiv}, eprint={0904.2012}, primaryClass={cs.DB cs.IR} }
spivak2009simplicial
arxiv-7087
0904.2018
Stochastic Service Guarantee Analysis Based on Time-Domain Models
<|reference_start|>Stochastic Service Guarantee Analysis Based on Time-Domain Models: Stochastic network calculus is a theory for stochastic service guarantee analysis of computer communication networks. In the current stochastic network calculus literature, its traffic and server models are typically based on the cumulative amount of traffic and cumulative amount of service respectively. However, there are network scenarios where the applicability of such models is limited, and hence new ways of modeling traffic and service are needed to address this limitation. This paper presents time-domain models and results for stochastic network calculus. Particularly, we define traffic models, which are based on probabilistic lower-bounds on cumulative packet inter-arrival time, and server models, which are based on probabilistic upper-bounds on cumulative packet service time. In addition, examples demonstrating the use of the proposed time-domain models are provided. On the basis of the proposed models, the five basic properties of stochastic network calculus are also proved, which implies broad applicability of the proposed time-domain approach.<|reference_end|>
arxiv
@article{xie2009stochastic, title={Stochastic Service Guarantee Analysis Based on Time-Domain Models}, author={J. Xie and Y. Jiang}, journal={arXiv preprint arXiv:0904.2018}, year={2009}, archivePrefix={arXiv}, eprint={0904.2018}, primaryClass={cs.NI cs.PF} }
xie2009stochastic
arxiv-7088
0904.2022
Absdet-Pseudo-Codewords and Perm-Pseudo-Codewords: Definitions and Properties
<|reference_start|>Absdet-Pseudo-Codewords and Perm-Pseudo-Codewords: Definitions and Properties: The linear-programming decoding performance of a binary linear code crucially depends on the structure of the fundamental cone of the parity-check matrix that describes the code. Towards a better understanding of fundamental cones and the vectors therein, we introduce the notion of absdet-pseudo-codewords and perm-pseudo-codewords: we give the definitions, we discuss some simple examples, and we list some of their properties.<|reference_end|>
arxiv
@article{smarandache2009absdet-pseudo-codewords, title={Absdet-Pseudo-Codewords and Perm-Pseudo-Codewords: Definitions and Properties}, author={Roxana Smarandache and Pascal O. Vontobel}, journal={Proc. IEEE Int. Symp. Information Theory, Seoul, Korea, June 28 - July 3, 2009}, year={2009}, doi={10.1109/ISIT.2009.5205910}, archivePrefix={arXiv}, eprint={0904.2022}, primaryClass={cs.IT cs.DM math.IT} }
smarandache2009absdet-pseudo-codewords
arxiv-7089
0904.2023
A new Protocol for 1-2 Oblivious Transfer
<|reference_start|>A new Protocol for 1-2 Oblivious Transfer: A new protocol for 1-2 (String) Oblivious Transfer is proposed. The protocol uses 5 rounds of message exchange.<|reference_end|>
arxiv
@article{grohmann2009a, title={A new Protocol for 1-2 Oblivious Transfer}, author={Bjoern Grohmann}, journal={arXiv preprint arXiv:0904.2023}, year={2009}, archivePrefix={arXiv}, eprint={0904.2023}, primaryClass={cs.CR} }
grohmann2009a
arxiv-7090
0904.2027
A Near-Optimal Algorithm for L1-Difference
<|reference_start|>A Near-Optimal Algorithm for L1-Difference: We give the first L_1-sketching algorithm for integer vectors which produces nearly optimal sized sketches in nearly linear time. This answers the first open problem in the list of open problems from the 2006 IITK Workshop on Algorithms for Data Streams. Specifically, suppose Alice receives a vector x in {-M,...,M}^n and Bob receives y in {-M,...,M}^n, and the two parties share randomness. Each party must output a short sketch of their vector such that a third party can later quickly recover a (1 +/- eps)-approximation to ||x-y||_1 with 2/3 probability given only the sketches. We give a sketching algorithm which produces O(eps^{-2}log(1/eps)log(nM))-bit sketches in O(n*log^2(nM)) time, independent of eps. The previous best known sketching algorithm for L_1 is due to [Feigenbaum et al., SICOMP 2002], which achieved the optimal sketch length of O(eps^{-2}log(nM)) bits but had a running time of O(n*log(nM)/eps^2). Notice that our running time is near-linear for every eps, whereas for sufficiently small values of eps, the running time of the previous algorithm can be as large as quadratic. Like their algorithm, our sketching procedure also yields a small-space, one-pass streaming algorithm which works even if the entries of x,y are given in arbitrary order.<|reference_end|>
arxiv
@article{nelson2009a, title={A Near-Optimal Algorithm for L1-Difference}, author={Jelani Nelson and David P. Woodruff}, journal={arXiv preprint arXiv:0904.2027}, year={2009}, archivePrefix={arXiv}, eprint={0904.2027}, primaryClass={cs.DS} }
nelson2009a
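The sketching setting of this abstract can be illustrated with the classic stable-distribution baseline it improves on (not the paper's own algorithm): project x and y with shared Cauchy random vectors; since Cauchy variables are 1-stable, the median of the absolute sketch differences concentrates around ||x - y||_1.

```python
# Illustrative sketch (classic Cauchy/1-stable baseline, not the paper's
# near-optimal algorithm): estimate ||x - y||_1 from short linear sketches.
import random
from statistics import median

random.seed(0)
n, k = 50, 400  # vector dimension, sketch length
# Ratio of two independent standard normals is a standard Cauchy variable.
C = [[random.gauss(0, 1) / random.gauss(0, 1) for _ in range(n)]
     for _ in range(k)]

def sketch(v):
    """Shared-randomness linear sketch: k Cauchy inner products."""
    return [sum(c * vi for c, vi in zip(row, v)) for row in C]

x = [random.randint(-5, 5) for _ in range(n)]  # Alice's vector
y = [random.randint(-5, 5) for _ in range(n)]  # Bob's vector

# The referee sees only the sketches; the median of |<c_j, x - y>| is the
# estimator, since the median of |standard Cauchy| equals 1.
est = median(abs(sx - sy) for sx, sy in zip(sketch(x), sketch(y)))
true = sum(abs(a - b) for a, b in zip(x, y))
print(true, round(est, 1))
```

This baseline costs O(nk) time per sketch; the paper's point is achieving near-linear time in n with nearly optimal sketch size.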
arxiv-7091
0904.2028
Cloud Networking Formation in CogMesh Environment
<|reference_start|>Cloud Networking Formation in CogMesh Environment: As radio spectrum usage paradigm moving from the traditional command and control allocation scheme to the open spectrum allocation scheme, wireless networks meet new opportunities and challenges. In this article we introduce the concept of cognitive wireless mesh (CogMesh) networks and address the unique problem in such a network. CogMesh is a self-organized distributed network architecture combining cognitive technologies with the mesh structure in order to provide a uniform service platform over a wide range of networks. It is based on dynamic spectrum access (DSA) and featured by self-organization, self-configuration and self-healing. The unique problem in CogMesh is the common control channel problem, which is caused by the opportunistic spectrum sharing nature of secondary users (SU) in the network. More precisely, since the channels of SUs are fluctuating according to the radio environment, it is difficult to find always available global common control channels. This poses a significant challenge for the network design. We develop the control cloud based control channel selection and cluster based network formation techniques to tackle this problem. Moreover, we show in this article that swarm intelligence is a good candidate to deal with the control channel problem in CogMesh. Since the study of cognitive wireless networks (CWN) is still in its early phase, the ideas provided in this article act as a catalyst to inspire new solutions in this field.<|reference_end|>
arxiv
@article{chen2009cloud, title={Cloud Networking Formation in CogMesh Environment}, author={Tao Chen and Honggang Zhang and Marcos Katz}, journal={arXiv preprint arXiv:0904.2028}, year={2009}, archivePrefix={arXiv}, eprint={0904.2028}, primaryClass={cs.NI} }
chen2009cloud
arxiv-7092
0904.2037
Boosting through Optimization of Margin Distributions
<|reference_start|>Boosting through Optimization of Margin Distributions: Boosting has attracted much research attention in the past decade. The success of boosting algorithms may be interpreted in terms of the margin theory. Recently it has been shown that generalization error of classifiers can be obtained by explicitly taking the margin distribution of the training data into account. Most of the current boosting algorithms in practice usually optimize a convex loss function and do not make use of the margin distribution. In this work we design a new boosting algorithm, termed margin-distribution boosting (MDBoost), which directly maximizes the average margin and minimizes the margin variance simultaneously. This way the margin distribution is optimized. A totally-corrective optimization algorithm based on column generation is proposed to implement MDBoost. Experiments on UCI datasets show that MDBoost outperforms AdaBoost and LPBoost in most cases.<|reference_end|>
arxiv
@article{shen2009boosting, title={Boosting through Optimization of Margin Distributions}, author={Chunhua Shen and Hanxi Li}, journal={arXiv preprint arXiv:0904.2037}, year={2009}, archivePrefix={arXiv}, eprint={0904.2037}, primaryClass={cs.LG cs.CV} }
shen2009boosting
arxiv-7093
0904.2051
Joint-sparse recovery from multiple measurements
<|reference_start|>Joint-sparse recovery from multiple measurements: The joint-sparse recovery problem aims to recover, from sets of compressed measurements, unknown sparse matrices with nonzero entries restricted to a subset of rows. This is an extension of the single-measurement-vector (SMV) problem widely studied in compressed sensing. We analyze the recovery properties for two types of recovery algorithms. First, we show that recovery using sum-of-norm minimization cannot exceed the uniform recovery rate of sequential SMV using $\ell_1$ minimization, and that there are problems that can be solved with one approach but not with the other. Second, we analyze the performance of the ReMBo algorithm [M. Mishali and Y. Eldar, IEEE Trans. Sig. Proc., 56 (2008)] in combination with $\ell_1$ minimization, and show how recovery improves as more measurements are taken. From this analysis it follows that having more measurements than number of nonzero rows does not improve the potential theoretical recovery rate.<|reference_end|>
arxiv
@article{berg2009joint-sparse, title={Joint-sparse recovery from multiple measurements}, author={Ewout van den Berg and Michael P. Friedlander}, journal={IEEE Transactions on Information Theory, 56(5):2516-2527, 2010}, year={2009}, doi={10.1109/TIT.2010.2043876}, number={University of British Columbia, Department of Computer Science Tech. Rep. 2009-07}, archivePrefix={arXiv}, eprint={0904.2051}, primaryClass={cs.IT math.IT} }
berg2009joint-sparse
arxiv-7094
0904.2058
The Power of Depth 2 Circuits over Algebras
<|reference_start|>The Power of Depth 2 Circuits over Algebras: We study the problem of polynomial identity testing (PIT) for depth 2 arithmetic circuits over matrix algebra. We show that identity testing of depth 3 (Sigma-Pi-Sigma) arithmetic circuits over a field F is polynomial time equivalent to identity testing of depth 2 (Pi-Sigma) arithmetic circuits over U_2(F), the algebra of upper-triangular 2 x 2 matrices with entries from F. Such a connection is a bit surprising since we also show that, as computational models, Pi-Sigma circuits over U_2(F) are strictly `weaker' than Sigma-Pi-Sigma circuits over F. The equivalence further shows that PIT of depth 3 arithmetic circuits reduces to PIT of width-2 planar commutative Algebraic Branching Programs (ABP). Thus, identity testing for commutative ABPs is interesting even in the case of width-2. Further, we give a deterministic polynomial time identity testing algorithm for a Pi-Sigma circuit over any constant dimensional commutative algebra over F. While over commutative algebras of polynomial dimension, identity testing is at least as hard as that of Sigma-Pi-Sigma circuits over F.<|reference_end|>
arxiv
@article{saha2009the, title={The Power of Depth 2 Circuits over Algebras}, author={Chandan Saha and Ramprasad Saptharishi and Nitin Saxena}, journal={arXiv preprint arXiv:0904.2058}, year={2009}, archivePrefix={arXiv}, eprint={0904.2058}, primaryClass={cs.CC cs.DS} }
saha2009the
arxiv-7095
0904.2060
Complementary cooperation, minimal winning coalitions, and power indices
<|reference_start|>Complementary cooperation, minimal winning coalitions, and power indices: We introduce a new simple game, which is referred to as the complementary weighted multiple majority game (C-WMMG for short). C-WMMG models a basic cooperation rule, the complementary cooperation rule, and can be taken as a sister model of the famous weighted majority game (WMG for short). In this paper, we concentrate on the two dimensional C-WMMG. An interesting property of this case is that there are at most $n+1$ minimal winning coalitions (MWC for short), and they can be enumerated in time $O(n\log n)$, where $n$ is the number of players. This property guarantees that the two dimensional C-WMMG is more handleable than WMG. In particular, we prove that the main power indices, i.e. the Shapley-Shubik index, the Penrose-Banzhaf index, the Holler-Packel index, and the Deegan-Packel index, are all polynomially computable. To make a comparison with WMG, we know that it may have exponentially many MWCs, and none of the four power indices is polynomially computable (unless P=NP). Still for the two dimensional case, we show that local monotonicity holds for all of the four power indices. In WMG, this property is possessed by the Shapley-Shubik index and the Penrose-Banzhaf index, but not by the Holler-Packel index or the Deegan-Packel index. Since our model fits very well the cooperation and competition in team sports, we hope that it can be potentially applied in measuring the values of players in team sports, say help people give more objective ranking of NBA players and select MVPs, and consequently bring new insights into contest theory and the more general field of sports economics. It may also provide some interesting enlightenments into the design of non-additive voting mechanisms. Last but not least, the threshold version of C-WMMG is a generalization of WMG, and natural variants of it are closely related with the famous airport game and the stable marriage/roommates problem.<|reference_end|>
arxiv
@article{cao2009complementary, title={Complementary cooperation, minimal winning coalitions, and power indices}, author={Zhigang Cao and Xiaoguang Yang}, journal={Theoretical Computer Science, 470 (2013) 53-92}, year={2009}, doi={10.1016/j.tcs.2012.11.033}, archivePrefix={arXiv}, eprint={0904.2060}, primaryClass={cs.GT} }
cao2009complementary
arxiv-7096
0904.2061
Selfish Bin Covering
<|reference_start|>Selfish Bin Covering: In this paper, we address the selfish bin covering problem, which is closely related both to the bin covering problem and to the weighted majority game. What mainly concerns us is how much the lack of coordination harms the social welfare. Besides the standard PoA and PoS, which are based on Nash equilibrium, we also take into account the strong Nash equilibrium, and several other new equilibria. For each equilibrium, the corresponding PoA and PoS are given, and the problems of computing an arbitrary equilibrium, as well as approximating the best one, are also considered.<|reference_end|>
arxiv
@article{cao2009selfish, title={Selfish Bin Covering}, author={Zhigang Cao and Xiaoguang Yang}, journal={Theoretical Computer Science, 412, 2011, 7049-7058}, year={2009}, doi={10.1016/j.tcs.2011.09.017}, archivePrefix={arXiv}, eprint={0904.2061}, primaryClass={cs.GT} }
cao2009selfish
arxiv-7097
0904.2076
On stratified regions
<|reference_start|>On stratified regions: Type and effect systems are a tool to analyse statically the behaviour of programs with effects. We present a proof based on the so called reducibility candidates that a suitable stratification of the type and effect system entails the termination of the typable programs. The proof technique covers a simply typed, multi-threaded, call-by-value lambda-calculus, equipped with a variety of scheduling (preemptive, cooperative) and interaction mechanisms (references, channels, signals).<|reference_end|>
arxiv
@article{amadio2009on, title={On stratified regions}, author={Roberto Amadio (PPS)}, journal={Programming Languages and Systems, 7th Asian Symposium, APLAS 2009, Republic of Korea (2009)}, year={2009}, archivePrefix={arXiv}, eprint={0904.2076}, primaryClass={cs.LO} }
amadio2009on
arxiv-7098
0904.2096
A Distributed Software Architecture for Collaborative Teleoperation based on a VR Platform and Web Application Interoperability
<|reference_start|>A Distributed Software Architecture for Collaborative Teleoperation based on a VR Platform and Web Application Interoperability: Augmented Reality and Virtual Reality can provide to a Human Operator (HO) a real help to complete complex tasks, such as robot teleoperation and cooperative teleassistance. Using appropriate augmentations, the HO can interact faster, more safely and more easily with the remote real world. In this paper, we present an extension of an existing distributed software and network architecture for collaborative teleoperation based on networked human-scaled mixed reality and mobile platform. The first teleoperation system was composed of a VR application and a Web application. However, the two systems cannot be used together and it is impossible to control a distant robot simultaneously. Our goal is to update the teleoperation system to permit a heterogeneous collaborative teleoperation between the two platforms. An important feature of this interface is based on different Mobile platforms to control one or many robots.<|reference_end|>
arxiv
@article{domingues2009a, title={A Distributed Software Architecture for Collaborative Teleoperation based on a VR Platform and Web Application Interoperability}, author={Christophe Domingues (IBISC) and Samir Otmane (IBISC) and Fr\'ed\'eric Davesne (IBISC) and Malik Mallem (IBISC)}, journal={18th International Conference on Artificial Reality and Telexistence (ACM ICAT 2008), Yokohama, Japan (2008)}, year={2009}, archivePrefix={arXiv}, eprint={0904.2096}, primaryClass={cs.HC cs.GR cs.MM cs.RO} }
domingues2009a
arxiv-7099
0904.2115
Colorful Strips
<|reference_start|>Colorful Strips: Given a planar point set and an integer $k$, we wish to color the points with $k$ colors so that any axis-aligned strip containing enough points contains all colors. The goal is to bound the necessary size of such a strip, as a function of $k$. We show that if the strip size is at least $2k{-}1$, such a coloring can always be found. We prove that the size of the strip is also bounded in any fixed number of dimensions. In contrast to the planar case, we show that deciding whether a 3D point set can be 2-colored so that any strip containing at least three points contains both colors is NP-complete. We also consider the problem of coloring a given set of axis-aligned strips, so that any sufficiently covered point in the plane is covered by $k$ colors. We show that in $d$ dimensions the required coverage is at most $d(k{-}1)+1$. Lower bounds are given for the two problems. This complements recent impossibility results on decomposition of strip coverings with arbitrary orientations. Finally, we study a variant where strips are replaced by wedges.<|reference_end|>
arxiv
@article{aloupis2009colorful, title={Colorful Strips}, author={G. Aloupis and J. Cardinal and S. Collette and S. Imahori and M. Korman and S. Langerman and O. Schwartz and S. Smorodinsky and P. Taslakian}, journal={arXiv preprint arXiv:0904.2115}, year={2009}, archivePrefix={arXiv}, eprint={0904.2115}, primaryClass={cs.CG} }
aloupis2009colorful
arxiv-7100
0904.2129
Crossing-Optimal Acyclic HP-Completion for Outerplanar st-Digraphs
<|reference_start|>Crossing-Optimal Acyclic HP-Completion for Outerplanar st-Digraphs: Given an embedded planar acyclic digraph G, we define the problem of acyclic hamiltonian path completion with crossing minimization (Acyclic-HPCCM) to be the problem of determining a hamiltonian path completion set of edges such that, when these edges are embedded on G, they create the smallest possible number of edge crossings and turn G to a hamiltonian acyclic digraph. Our results include: 1. We provide a characterization under which a planar st-digraph G is hamiltonian. 2. For an outerplanar st-digraph G, we define the st-polygon decomposition of G and, based on its properties, we develop a linear-time algorithm that solves the Acyclic-HPCCM problem. 3. For the class of planar st-digraphs, we establish an equivalence between the Acyclic-HPCCM problem and the problem of determining an upward 2-page topological book embedding with minimum number of spine crossings. We infer (based on this equivalence) for the class of outerplanar st-digraphs an upward topological 2-page book embedding with minimum number of spine crossings. To the best of our knowledge, it is the first time that edge-crossing minimization is studied in conjunction with the acyclic hamiltonian completion problem and the first time that an optimal algorithm with respect to spine crossing minimization is presented for upward topological book embeddings.<|reference_end|>
arxiv
@article{mchedlidze2009crossing-optimal, title={Crossing-Optimal Acyclic HP-Completion for Outerplanar st-Digraphs}, author={Tamara Mchedlidze and Antonios Symvonis}, journal={arXiv preprint arXiv:0904.2129}, year={2009}, archivePrefix={arXiv}, eprint={0904.2129}, primaryClass={cs.DS} }
mchedlidze2009crossing-optimal