corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---|
arxiv-675301 | cs/0612088 | Non-Clairvoyant Batch Sets Scheduling: Fairness is Fair enough | <|reference_start|>Non-Clairvoyant Batch Sets Scheduling: Fairness is Fair enough: Scheduling questions arise naturally in many different areas, among which operating system design, compiling,... In real-life systems, the characteristics of the jobs (such as release time and processing time) are usually unknown and unpredictable beforehand. The system is typically unaware of the remaining work in each job or of the ability of the job to take advantage of more resources. Following these observations, we adopt the job model by Edmonds et al. (2000, 2003) in which the jobs go through a sequence of different phases. Each phase consists of a certain quantity of work and a speed-up function that models how it takes advantage of the number of processors it receives. We consider the non-clairvoyant online setting where a collection of jobs arrives at time 0. We consider the setflowtime metric introduced by Robert et al. (2007). The goal is to minimize the sum of the completion times of the sets, where a set is completed when all of its jobs are done. If the input consists of a single set of jobs, this is simply the makespan of the jobs; and if the input consists of a collection of singleton sets, it is simply the flowtime of the jobs. We show that the non-clairvoyant strategy EQUIoEQUI, which evenly splits the available processors among the still unserved sets and then evenly splits these processors among the still uncompleted jobs of each unserved set, achieves a competitive ratio of $(2+\sqrt{3}+o(1))\frac{\ln n}{\ln\ln n}$ for setflowtime minimization and that this is asymptotically optimal (up to a constant factor), where n is the size of the largest set. For makespan minimization, we show that the non-clairvoyant strategy EQUI achieves a competitive ratio of $(1+o(1))\frac{\ln n}{\ln\ln n}$, which is again asymptotically optimal.<|reference_end|> | arxiv | @article{robert2006non-clairvoyant,
title={Non-Clairvoyant Batch Sets Scheduling: Fairness is Fair enough},
author={Julien Robert and Nicolas Schabanel},
journal={arXiv preprint arXiv:cs/0612088},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612088},
primaryClass={cs.DC cs.DS}
} | robert2006non-clairvoyant |
arxiv-675302 | cs/0612089 | On the time complexity of 2-tag systems and small universal Turing machines | <|reference_start|>On the time complexity of 2-tag systems and small universal Turing machines: We show that 2-tag systems efficiently simulate Turing machines. As a corollary we find that the small universal Turing machines of Rogozhin, Minsky and others simulate Turing machines in polynomial time. This is an exponential improvement on the previously known simulation time overhead and improves a forty year old result in the area of small universal Turing machines.<|reference_end|> | arxiv | @article{woods2006on,
title={On the time complexity of 2-tag systems and small universal Turing
machines},
author={Damien Woods, Turlough Neary},
journal={FOCS 2006: 47th Annual IEEE Symposium on Foundations of Computer
Science, IEEE, pages 439-446, Berkeley, CA},
year={2006},
doi={10.1109/FOCS.2006.58},
archivePrefix={arXiv},
eprint={cs/0612089},
primaryClass={cs.CC cs.DS}
} | woods2006on |
arxiv-675303 | cs/0612090 | A Review of Papers that have a bearing on an Analysis of User Interactions in A Collaborative On-line Laboratory | <|reference_start|>A Review of Papers that have a bearing on an Analysis of User Interactions in A Collaborative On-line Laboratory: A number of papers have been reviewed in the areas of HCI, CSCW, and CSCL. These have been analyzed with a view to extracting the ideas relevant to a consideration of user interactions in a collaborative on-line laboratory which is under development for use in the ITO BSc course at Southampton University. The construction of new theoretical models is to be based upon principles of collaborative HCI design and constructivist and situational educational theory. It is hoped that an investigation of the reviewed papers will lead towards a methodology/framework that can be used as guidance for collaborative learning systems, and this will need to be developed alongside the requirements as they change during the development cycles. The primary outcome will be the analysis and re-design of the online e-learning laboratory together with a measure of its efficacy in the learning process.<|reference_end|> | arxiv | @article{hinze-hoare2006a,
title={A Review of Papers that have a bearing on an Analysis of User
Interactions in A Collaborative On-line Laboratory},
author={Vita Hinze-Hoare},
journal={arXiv preprint arXiv:cs/0612090},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612090},
primaryClass={cs.HC}
} | hinze-hoare2006a |
arxiv-675304 | cs/0612091 | Bias in the journal impact factor | <|reference_start|>Bias in the journal impact factor: The ISI journal impact factor (JIF) is based on a sample that may represent half the whole-of-life citations to some journals, but a small fraction (<10%) of the citations accruing to other journals. This disproportionate sampling means that the JIF provides a misleading indication of the true impact of journals, biased in favour of journals that have a rapid rather than a prolonged impact. Many journals exhibit a consistent pattern of citation accrual from year to year, so it may be possible to adjust the JIF to provide a more reliable indication of a journal's impact.<|reference_end|> | arxiv | @article{vanclay2006bias,
title={Bias in the journal impact factor},
author={Jerome K Vanclay},
journal={Scientometrics 78(1):3-12 (2009)},
year={2006},
doi={10.1007/s11192-008-1778-4},
archivePrefix={arXiv},
eprint={cs/0612091},
primaryClass={cs.DL q-bio.OT}
} | vanclay2006bias |
arxiv-675305 | cs/0612092 | Agile Adoption Process Framework | <|reference_start|>Agile Adoption Process Framework: Today many organizations aspire to adopt agile processes in the hope of overcoming some of the difficulties they are facing with their current software development process. There is no structured framework for the agile adoption process. This paper presents a 3-Stage process framework that assists and guides organizations through their agile adoption efforts. The Process Framework has received significantly positive feedback from experts and leaders in the agile adoption industry.<|reference_end|> | arxiv | @article{sidky2006agile,
title={Agile Adoption Process Framework},
author={Ahmed Sidky, James Arthur},
journal={arXiv preprint arXiv:cs/0612092},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612092},
primaryClass={cs.SE}
} | sidky2006agile |
arxiv-675306 | cs/0612093 | A Calculus for Sensor Networks | <|reference_start|>A Calculus for Sensor Networks: We consider the problem of providing a rigorous model for programming wireless sensor networks. Assuming that collisions, packet losses, and errors are dealt with at the lower layers of the protocol stack, we propose a Calculus for Sensor Networks (CSN) that captures the main abstractions for programming applications for this class of devices. Besides providing the syntax and semantics for the calculus, we show its expressiveness by providing implementations for several examples of typical operations on sensor networks. Also included is a detailed discussion of possible extensions to CSN that enable the modeling of other important features of these networks such as sensor state, sampling strategies, and network security.<|reference_end|> | arxiv | @article{silva2006a,
title={A Calculus for Sensor Networks},
author={Miguel S. Silva, Francisco Martins, Luis Lopes, Joao Barros},
journal={arXiv preprint arXiv:cs/0612093},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612093},
primaryClass={cs.DC cs.PL}
} | silva2006a |
arxiv-675307 | cs/0612094 | Reduction of Algebraic Parametric Systems by Rectification of their Affine Expanded Lie Symmetries | <|reference_start|>Reduction of Algebraic Parametric Systems by Rectification of their Affine Expanded Lie Symmetries: Lie group theory states that knowledge of an $m$-parameter solvable group of symmetries of a system of ordinary differential equations allows one to reduce the number of equations by $m$. We apply this principle by finding some \emph{affine derivations} that induce \emph{expanded} Lie point symmetries of the considered system. By rewriting the original problem in a coordinate set invariant under these symmetries, we \emph{reduce} the number of involved parameters. We present an algorithm based on this standpoint whose arithmetic complexity is \emph{quasi-polynomial} in the input size.<|reference_end|> | arxiv | @article{sedoglavic2006reduction,
title={Reduction of Algebraic Parametric Systems by Rectification of their
Affine Expanded Lie Symmetries},
author={Alexandre Sedoglavic (INRIA Futurs, LIFL)},
journal={In Algebraic Biology 2007 4545 (2007) 277--291},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612094},
primaryClass={cs.SC}
} | sedoglavic2006reduction |
arxiv-675308 | cs/0612095 | Approximation of the Two-Part MDL Code | <|reference_start|>Approximation of the Two-Part MDL Code: Approximation of the optimal two-part MDL code for given data, through successive monotonically length-decreasing two-part MDL codes, has the following properties: (i) computation of each step may take arbitrarily long; (ii) we may not know when we reach the optimum, or whether we will reach the optimum at all; (iii) the sequence of models generated may not monotonically improve the goodness of fit; but (iv) the model associated with the optimum has (almost) the best goodness of fit. To express the practically interesting goodness of fit of individual models for individual data sets we have to rely on Kolmogorov complexity.<|reference_end|> | arxiv | @article{adriaans2006approximation,
title={Approximation of the Two-Part MDL Code},
author={Pieter Adriaans (University of Amsterdam), Paul Vitanyi (CWI and
University of Amsterdam)},
journal={arXiv preprint arXiv:cs/0612095},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612095},
primaryClass={cs.LG cs.AI cs.IT math.IT}
} | adriaans2006approximation |
arxiv-675309 | cs/0612096 | Using state space differential geometry for nonlinear blind source separation | <|reference_start|>Using state space differential geometry for nonlinear blind source separation: Given a time series of multicomponent measurements of an evolving stimulus, nonlinear blind source separation (BSS) seeks to find a "source" time series, comprised of statistically independent combinations of the measured components. In this paper, we seek a source time series with local velocity cross correlations that vanish everywhere in stimulus state space. However, in an earlier paper the local velocity correlation matrix was shown to constitute a metric on state space. Therefore, nonlinear BSS maps onto a problem of differential geometry: given the metric observed in the measurement coordinate system, find another coordinate system in which the metric is diagonal everywhere. We show how to determine if the observed data are separable in this way, and, if they are, we show how to construct the required transformation to the source coordinate system, which is essentially unique except for an unknown rotation that can be found by applying the methods of linear BSS. Thus, the proposed technique solves nonlinear BSS in many situations or, at least, reduces it to linear BSS, without the use of probabilistic, parametric, or iterative procedures. This paper also describes a generalization of this methodology that performs nonlinear independent subspace separation. In every case, the resulting decomposition of the observed data is an intrinsic property of the stimulus' evolution in the sense that it does not depend on the way the observer chooses to view it (e.g., the choice of the observing machine's sensors). In other words, the decomposition is a property of the evolution of the "real" stimulus that is "out there" broadcasting energy to the observer. The technique is illustrated with analytic and numerical examples.<|reference_end|> | arxiv | @article{levin2006using,
title={Using state space differential geometry for nonlinear blind source
separation},
author={David N. Levin},
journal={Journal of Applied Physics 103, 044906 (2008)},
year={2006},
doi={10.1063/1.2826943},
archivePrefix={arXiv},
eprint={cs/0612096},
primaryClass={cs.LG cs.SD}
} | levin2006using |
arxiv-675310 | cs/0612097 | Error Exponents for Variable-length Block Codes with Feedback and Cost Constraints | <|reference_start|>Error Exponents for Variable-length Block Codes with Feedback and Cost Constraints: Variable-length block-coding schemes are investigated for discrete memoryless channels with ideal feedback under cost constraints. Upper and lower bounds are found for the minimum achievable probability of decoding error $P_{e,\min}$ as a function of constraints $R, \AV$, and $\bar \tau$ on the transmission rate, average cost, and average block length respectively. For given $R$ and $\AV$, the lower and upper bounds to the exponent $-(\ln P_{e,\min})/\bar \tau$ are asymptotically equal as $\bar \tau \to \infty$. The resulting reliability function, $\lim_{\bar \tau\to \infty} (-\ln P_{e,\min})/\bar \tau$, as a function of $R$ and $\AV$, is concave in the pair $(R, \AV)$ and generalizes the linear reliability function of Burnashev to include cost constraints. The results are generalized to a class of discrete-time memoryless channels with arbitrary alphabets, including additive Gaussian noise channels with amplitude and power constraints.<|reference_end|> | arxiv | @article{nakiboglu2006error,
title={Error Exponents for Variable-length Block Codes with Feedback and Cost
Constraints},
author={B. Nakiboglu, R. G. Gallager},
journal={IEEE Transactions on Information Theory, 54(3):945-963, March 2008},
year={2006},
doi={10.1109/TIT.2007.915913 10.1109/ISIT.2006.261677},
archivePrefix={arXiv},
eprint={cs/0612097},
primaryClass={cs.IT math.IT}
} | nakiboglu2006error |
arxiv-675311 | cs/0612098 | Algorithms and Approaches of Proxy Signature: A Survey | <|reference_start|>Algorithms and Approaches of Proxy Signature: A Survey: Proxy signatures have been investigated in numerous research studies over the last decade. This survey reviews the research progress on proxy signatures, analyzes a few notable proposals, and provides overall remarks on these proposals.<|reference_end|> | arxiv | @article{das2006algorithms,
title={Algorithms and Approaches of Proxy Signature: A Survey},
author={Manik Lal Das, Ashutosh Saxena, Deepak B. Phatak},
journal={arXiv preprint arXiv:cs/0612098},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612098},
primaryClass={cs.CR}
} | das2006algorithms |
arxiv-675312 | cs/0612099 | Network Information Flow in Small World Networks | <|reference_start|>Network Information Flow in Small World Networks: Recent results from statistical physics show that large classes of complex networks, both man-made and of natural origin, are characterized by high clustering properties yet strikingly short path lengths between pairs of nodes. This class of networks is said to have a small-world topology. In the context of communication networks, navigable small-world topologies, i.e. those which admit efficient distributed routing algorithms, are deemed particularly effective, for example in resource discovery tasks and peer-to-peer applications. Breaking with the traditional approach to small-world topologies that privileges graph parameters pertaining to connectivity, and intrigued by the fundamental limits of communication in networks that exploit this type of topology, we investigate the capacity of these networks from the perspective of network information flow. Our contribution includes upper and lower bounds for the capacity of standard and navigable small-world models, and the somewhat surprising result that, with high probability, random rewiring does not alter the capacity of a small-world network.<|reference_end|> | arxiv | @article{costa2006network,
title={Network Information Flow in Small World Networks},
author={Rui A. Costa, Joao Barros},
journal={arXiv preprint arXiv:cs/0612099},
year={2006},
doi={10.1109/WIOPT.2006.1666504},
archivePrefix={arXiv},
eprint={cs/0612099},
primaryClass={cs.IT cs.DM math.IT}
} | costa2006network |
arxiv-675313 | cs/0612100 | Improved results for a memory allocation problem | <|reference_start|>Improved results for a memory allocation problem: We consider a memory allocation problem that can be modeled as a version of bin packing where items may be split, but each bin may contain at most two (parts of) items. A 3/2-approximation algorithm and an NP-hardness proof for this problem were given by Chung et al. We give a simpler 3/2-approximation algorithm for it, which is in fact an online algorithm. This algorithm also has good performance for the more general case where each bin may contain at most k parts of items. We show that this general case is also strongly NP-hard. Additionally, we give an efficient 7/5-approximation algorithm.<|reference_end|> | arxiv | @article{epstein2006improved,
title={Improved results for a memory allocation problem},
author={Leah Epstein and Rob van Stee},
journal={arXiv preprint arXiv:cs/0612100},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612100},
primaryClass={cs.DS}
} | epstein2006improved |
arxiv-675314 | cs/0612101 | Maximum Entropy MIMO Wireless Channel Models | <|reference_start|>Maximum Entropy MIMO Wireless Channel Models: In this contribution, models of wireless channels are derived from the maximum entropy principle, for several cases where only limited information about the propagation environment is available. First, analytical models are derived for the cases where certain parameters (channel energy, average energy, spatial correlation matrix) are known deterministically. Frequently, these parameters are unknown (typically because the received energy or the spatial correlation varies with the user position), but still known to represent meaningful system characteristics. In these cases, analytical channel models are derived by assigning entropy-maximizing distributions to these parameters, and marginalizing them out. For the MIMO case with spatial correlation, we show that the distribution of the covariance matrices is conveniently handled through its eigenvalues. The entropy-maximizing distribution of the covariance matrix is shown to be a Wishart distribution. Furthermore, the corresponding probability density function of the channel matrix is shown to be described analytically by a function of the channel Frobenius norm. This technique can provide channel models incorporating the effect of shadow fading and spatial correlation between antennas without the need to assume explicit values for these parameters. The results are compared in terms of mutual information to the classical i.i.d. Gaussian model.<|reference_end|> | arxiv | @article{guillaud2006maximum,
title={Maximum Entropy MIMO Wireless Channel Models},
author={M. Guillaud, M. Debbah, A. L. Moustakas},
journal={arXiv preprint arXiv:cs/0612101},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612101},
primaryClass={cs.IT math.IT}
} | guillaud2006maximum |
arxiv-675315 | cs/0612102 | The Dichotomy of Conjunctive Queries on Probabilistic Structures | <|reference_start|>The Dichotomy of Conjunctive Queries on Probabilistic Structures: We show that for every conjunctive query, the complexity of evaluating it on a probabilistic database is either PTIME or #P-complete, and we give an algorithm for deciding whether a given conjunctive query is PTIME or #P-complete. The dichotomy property is a fundamental result on query evaluation on probabilistic databases and it gives a complete classification of the complexity of conjunctive queries.<|reference_end|> | arxiv | @article{dalvi2006the,
title={The Dichotomy of Conjunctive Queries on Probabilistic Structures},
author={Nilesh Dalvi and Dan Suciu},
journal={arXiv preprint arXiv:cs/0612102},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612102},
primaryClass={cs.DB}
} | dalvi2006the |
arxiv-675316 | cs/0612103 | The Boundary Between Privacy and Utility in Data Anonymization | <|reference_start|>The Boundary Between Privacy and Utility in Data Anonymization: We consider the privacy problem in data publishing: given a relation I containing sensitive information, 'anonymize' it to obtain a view V such that, on the one hand attackers cannot learn any sensitive information from V, and on the other hand legitimate users can use V to compute useful statistics on I. These are conflicting goals. We use a definition of privacy that is derived from existing ones in the literature, which relates the a priori probability of a given tuple t, Pr(t), with the a posteriori probability, Pr(t | V), and propose a novel and quite practical definition for utility. Our main result is the following. Denoting by n the size of I and by m the size of the domain from which I was drawn (i.e. n < m), then: when the a priori probability is Pr(t) = Omega(n/sqrt(m)) for some t, there exists no useful anonymization algorithm, while when Pr(t) = O(n/m) for all tuples t, then we give a concrete anonymization algorithm that is both private and useful. Our algorithm is quite different from the k-anonymization algorithm studied intensively in the literature, and is based on random deletions and insertions to I.<|reference_end|> | arxiv | @article{rastogi2006the,
title={The Boundary Between Privacy and Utility in Data Anonymization},
author={Vibhor Rastogi, Dan Suciu, Sungho Hong},
journal={arXiv preprint arXiv:cs/0612103},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612103},
primaryClass={cs.DB}
} | rastogi2006the |
arxiv-675317 | cs/0612104 | Sufficient Conditions for Coarse-Graining Evolutionary Dynamics | <|reference_start|>Sufficient Conditions for Coarse-Graining Evolutionary Dynamics: It is commonly assumed that the ability to track the frequencies of a set of schemata in the evolving population of an infinite population genetic algorithm (IPGA) under different fitness functions will advance efforts to obtain a theory of adaptation for the simple GA. Unfortunately, for IPGAs with long genomes and non-trivial fitness functions there do not currently exist theoretical results that allow such a study. We develop a simple framework for analyzing the dynamics of an infinite population evolutionary algorithm (IPEA). This framework derives its simplicity from its abstract nature. In particular we make no commitment to the data-structure of the genomes, the kind of variation performed, or the number of parents involved in a variation operation. We use this framework to derive abstract conditions under which the dynamics of an IPEA can be coarse-grained. We then use this result to derive concrete conditions under which it becomes computationally feasible to closely approximate the frequencies of a family of schemata of relatively low order over multiple generations, even when the bitstrings in the evolving population of the IPGA are long.<|reference_end|> | arxiv | @article{burjorjee2006sufficient,
title={Sufficient Conditions for Coarse-Graining Evolutionary Dynamics},
author={Keki Burjorjee},
journal={arXiv preprint arXiv:cs/0612104},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612104},
primaryClass={cs.NE cs.AI}
} | burjorjee2006sufficient |
arxiv-675318 | cs/0612105 | Towards Parallel Computing on the Internet: Applications, Architectures, Models and Programming Tools | <|reference_start|>Towards Parallel Computing on the Internet: Applications, Architectures, Models and Programming Tools: The development of Internet wide resources for general purpose parallel computing poses the challenging task of matching computation and communication complexity. A number of parallel computing models exist that address this for traditional parallel architectures, and there are a number of emerging models that attempt to do this for large scale Internet-based systems like computational grids. In this survey we cover the three fundamental aspects -- application, architecture and model, and we show how they have been developed over the last decade. We also cover programming tools that are currently being used for parallel programming in computational grids. The trend in conventional computational models is to put emphasis on efficient communication between participating nodes by adapting different types of communication to network conditions. Effects of dynamism and uncertainties that arise in large scale systems are evidently important to understand and yet there is currently little work that addresses this from a parallel computing perspective.<|reference_end|> | arxiv | @article{sundararajan2006towards,
title={Towards Parallel Computing on the Internet: Applications, Architectures,
Models and Programming Tools},
author={Elankovan Sundararajan and Aaron Harwood},
journal={arXiv preprint arXiv:cs/0612105},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612105},
primaryClass={cs.DC cs.PF}
} | sundararajan2006towards |
arxiv-675319 | cs/0612106 | On Completeness of Logical Relations for Monadic Types | <|reference_start|>On Completeness of Logical Relations for Monadic Types: Software security can be ensured by specifying and verifying security properties of software using formal methods with strong theoretical bases. In particular, programs can be modeled in the framework of lambda-calculi, and interesting properties can be expressed formally by contextual equivalence (a.k.a. observational equivalence). Furthermore, imperative features, which exist in most real-life software, can be nicely expressed in the so-called computational lambda-calculus. Contextual equivalence is difficult to prove directly, but we can often use logical relations as a tool to establish it in lambda-calculi. We have already defined logical relations for the computational lambda-calculus in previous work. We devote this paper to the study of their completeness w.r.t. contextual equivalence in the computational lambda-calculus.<|reference_end|> | arxiv | @article{lasota2006on,
title={On Completeness of Logical Relations for Monadic Types},
author={Slawomir Lasota, David Nowak, Yu Zhang},
journal={Advances in Computer Science - ASIAN 2006, Secure Software, 11th
Asian Computing Science Conference, Tokyo, Japan, December 6-8, 2006,
Proceedings, volume 4435 of Lecture Notes in Computer Science, pages 223-230,
Springer},
year={2006},
doi={10.1007/978-3-540-77505-8_17},
archivePrefix={arXiv},
eprint={cs/0612106},
primaryClass={cs.LO cs.PL}
} | lasota2006on |
arxiv-675320 | cs/0612107 | Voiced speech as secondary response of a self-consistent fundamental drive | <|reference_start|>Voiced speech as secondary response of a self-consistent fundamental drive: Voiced segments of speech are assumed to be composed of non-stationary acoustic objects which can be described as stationary response of a non-stationary fundamental drive (FD) process and which are furthermore suited to reconstruct the hidden FD by using a voice adapted (self-consistent) part-tone decomposition of the speech signal. The universality and robustness of human pitch perception encourages the reconstruction of a band-limited FD in the frequency range of the pitch. The self-consistent decomposition of voiced continuants generates several part-tones which can be confirmed to be topologically equivalent to corresponding acoustic modes of the excitation on the transmitter side. As topologically equivalent image of a glottal master oscillator, the self-consistent FD is suited to serve as low frequency part of the basic time-scale separation of auditive perception and to describe the broadband voiced excitation as entrained (synchronized) and/or modulated primary response. Being guided by the acoustic correlates of pitch and loudness perception, the time-scale separation avoids the conventional assumption of stationary excitation and represents the basic decoding step of an advanced precision transmission protocol of self-consistent (voiced) acoustic objects. The present study is focussed on the adaptation of the trajectories (contours) of the centre filter frequency of the part-tones to the chirp of the glottal master oscillator.<|reference_end|> | arxiv | @article{drepper2006voiced,
title={Voiced speech as secondary response of a self-consistent fundamental
drive},
author={Friedhelm R. Drepper},
journal={arXiv preprint arXiv:cs/0612107},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612107},
primaryClass={cs.SD nlin.AO}
} | drepper2006voiced |
arxiv-675321 | cs/0612108 | On Using Matching Theory to Understand P2P Network Design | <|reference_start|>On Using Matching Theory to Understand P2P Network Design: This paper aims to provide insight into the stability of collaboration choices in P2P networks. We study networks where exchanges between nodes are driven by the desire to receive the best service available. This is the case for most existing P2P networks. We explore an evolution model derived from stable roommates theory that accounts for heterogeneity between nodes. We show that most P2P applications can be modeled using stable matching theory. This is the case whenever preference lists can be deduced from the exchange policy. In many cases, the preference lists are characterized by an interesting acyclic property. We show that P2P networks with acyclic preferences possess a unique stable state with good convergence properties.<|reference_end|> | arxiv | @article{lebedev2006on,
title={On Using Matching Theory to Understand P2P Network Design},
author={Dmitry Lebedev (INRIA Rocquencourt), Fabien Mathieu (INRIA
Rocquencourt), Laurent Viennot (INRIA Rocquencourt), Anh-Tuan Gai (INRIA
Rocquencourt), Julien Reynier (INRIA Rocquencourt),
Fabien De Montgolfier (INRIA Rocquencourt)},
journal={arXiv preprint arXiv:cs/0612108},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612108},
primaryClass={cs.NI cs.GT}
} | lebedev2006on |
arxiv-675322 | cs/0612109 | Truncating the loop series expansion for Belief Propagation | <|reference_start|>Truncating the loop series expansion for Belief Propagation: Recently, M. Chertkov and V.Y. Chernyak derived an exact expression for the partition sum (normalization constant) corresponding to a graphical model, which is an expansion around the Belief Propagation solution. By adding correction terms to the BP free energy, one for each "generalized loop" in the factor graph, the exact partition sum is obtained. However, the usually enormous number of generalized loops generally prohibits summation over all correction terms. In this article we introduce Truncated Loop Series BP (TLSBP), a particular way of truncating the loop series of M. Chertkov and V.Y. Chernyak by considering generalized loops as compositions of simple loops. We analyze the performance of TLSBP in different scenarios, including the Ising model, regular random graphs and on Promedas, a large probabilistic medical diagnostic system. We show that TLSBP often improves upon the accuracy of the BP solution, at the expense of increased computation time. We also show that the performance of TLSBP strongly depends on the degree of interaction between the variables. For weak interactions, truncating the series leads to significant improvements, whereas for strong interactions it can be ineffective, even if a high number of terms is considered.<|reference_end|> | arxiv | @article{gomez2006truncating,
title={Truncating the loop series expansion for Belief Propagation},
author={Vicenc Gomez, J. M. Mooij, H. J. Kappen},
journal={The Journal of Machine Learning Research, 8(Sep):1987--2016, 2007},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612109},
primaryClass={cs.AI}
} | gomez2006truncating |
arxiv-675323 | cs/0612110 | Architecture for Modular Data Centers | <|reference_start|>Architecture for Modular Data Centers: Several factors are driving high-scale deployments of large data centers built upon commodity components. These commodity clusters are far cheaper than mainframe systems of the past but they bring serious heat and power density issues. Also the high failure rate of the individual components drives significant administrative costs. This proposal outlines an architecture for data center design based upon 20'x8'x8' modules that substantially changes how these systems are acquired, administered, and then later recycled.<|reference_end|> | arxiv | @article{hamilton2006architecture,
title={Architecture for Modular Data Centers},
author={James Hamilton},
journal={arXiv preprint arXiv:cs/0612110},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612110},
primaryClass={cs.DB}
} | hamilton2006architecture |
arxiv-675324 | cs/0612111 | Fragmentation in Large Object Repositories | <|reference_start|>Fragmentation in Large Object Repositories: Fragmentation leads to unpredictable and degraded application performance. While these problems have been studied in detail for desktop filesystem workloads, this study examines newer systems such as scalable object stores and multimedia repositories. Such systems use a get/put interface to store objects. In principle, databases and filesystems can support such applications efficiently, allowing system designers to focus on complexity, deployment cost and manageability. Although theoretical work proves that certain storage policies behave optimally for some workloads, these policies often behave poorly in practice. Most storage benchmarks focus on short-term behavior or do not measure fragmentation. We compare SQL Server to NTFS and find that fragmentation dominates performance when object sizes exceed 256KB-1MB. NTFS handles fragmentation better than SQL Server. Although the performance curves will vary with other systems and workloads, we expect the same interactions between fragmentation and free space to apply. It is well-known that fragmentation is related to the percentage free space. We found that the ratio of free space to object size also impacts performance. Surprisingly, in both systems, storing objects of a single size causes fragmentation, and changing the size of write requests affects fragmentation. These problems could be addressed with simple changes to the filesystem and database interfaces. It is our hope that an improved understanding of fragmentation will lead to predictable storage systems that require less maintenance after deployment.<|reference_end|> | arxiv | @article{sears2006fragmentation,
title={Fragmentation in Large Object Repositories},
author={Russell Sears, Catharine van Ingen},
journal={arXiv preprint arXiv:cs/0612111},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612111},
primaryClass={cs.DB}
} | sears2006fragmentation |
arxiv-675325 | cs/0612112 | Managing Query Compilation Memory Consumption to Improve DBMS Throughput | <|reference_start|>Managing Query Compilation Memory Consumption to Improve DBMS Throughput: While there are known performance trade-offs between database page buffer pool and query execution memory allocation policies, little has been written on the impact of query compilation memory use on overall throughput of the database management system (DBMS). We present a new aspect of the query optimization problem and offer a solution implemented in Microsoft SQL Server 2005. The solution provides stable throughput for a range of workloads even when memory requests outstrip the ability of the hardware to service those requests.<|reference_end|> | arxiv | @article{baryshnikov2006managing,
title={Managing Query Compilation Memory Consumption to Improve DBMS Throughput},
author={Boris Baryshnikov, Cipri Clinciu, Conor Cunningham, Leo Giakoumakis,
Slava Oks, Stefano Stefani},
journal={arXiv preprint arXiv:cs/0612112},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612112},
primaryClass={cs.DB}
} | baryshnikov2006managing |
arxiv-675326 | cs/0612113 | Isolation Support for Service-based Applications: A Position Paper | <|reference_start|>Isolation Support for Service-based Applications: A Position Paper: In this paper, we propose an approach to providing the benefits of isolation in service-oriented applications where it is not feasible to hold traditional locks for ACID transactions. Our technique, called "Promises", provides a uniform view for clients which covers a wide range of implementation techniques on the service side, all allowing the client to check a condition and then later rely on that condition still holding.<|reference_end|> | arxiv | @article{greenfield2006isolation,
title={Isolation Support for Service-based Applications: A Position Paper},
author={Paul Greenfield, Alan Fekete, Julian Jang, Dean Kuo, Surya Nepal},
journal={arXiv preprint arXiv:cs/0612113},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612113},
primaryClass={cs.DB}
} | greenfield2006isolation |
arxiv-675327 | cs/0612114 | Demaq: A Foundation for Declarative XML Message Processing | <|reference_start|>Demaq: A Foundation for Declarative XML Message Processing: This paper gives an overview of Demaq, an XML message processing system operating on the foundation of transactional XML message queues. We focus on the syntax and semantics of its fully declarative, rule-based application language and demonstrate our message-based programming paradigm in the context of a case study. Further, we discuss optimization opportunities for executing Demaq programs.<|reference_end|> | arxiv | @article{böhm2006demaq:,
title={Demaq: A Foundation for Declarative XML Message Processing},
author={Alexander B"ohm, Carl-Christian Kanne, Guido Moerkotte},
journal={arXiv preprint arXiv:cs/0612114},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612114},
primaryClass={cs.DB}
} | böhm2006demaq: |
arxiv-675328 | cs/0612115 | Consistent Streaming Through Time: A Vision for Event Stream Processing | <|reference_start|>Consistent Streaming Through Time: A Vision for Event Stream Processing: Event processing will play an increasingly important role in constructing enterprise applications that can immediately react to business critical events. Various technologies have been proposed in recent years, such as event processing, data streams and asynchronous messaging (e.g. pub/sub). We believe these technologies share a common processing model and differ only in target workload, including query language features and consistency requirements. We argue that integrating these technologies is the next step in a natural progression. In this paper, we present an overview and discuss the foundations of CEDR, an event streaming system that embraces a temporal stream model to unify and further enrich query language features, handle imperfections in event delivery and define correctness guarantees. We describe specific contributions made so far and outline next steps in developing the CEDR system.<|reference_end|> | arxiv | @article{barga2006consistent,
title={Consistent Streaming Through Time: A Vision for Event Stream Processing},
author={Roger S. Barga, Jonathan Goldstein, Mohamed Ali, Mingsheng Hong},
journal={arXiv preprint arXiv:cs/0612115},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612115},
primaryClass={cs.DB}
} | barga2006consistent |
arxiv-675329 | cs/0612116 | Adventures in time and space | <|reference_start|>Adventures in time and space: This paper investigates what is essentially a call-by-value version of PCF under a complexity-theoretically motivated type system. The programming formalism, ATR, has its first-order programs characterize the polynomial-time computable functions, and its second-order programs characterize the type-2 basic feasible functionals of Mehlhorn and of Cook and Urquhart. (The ATR-types are confined to levels 0, 1, and 2.) The type system comes in two parts, one that primarily restricts the sizes of values of expressions and a second that primarily restricts the time required to evaluate expressions. The size-restricted part is motivated by Bellantoni and Cook's and Leivant's implicit characterizations of polynomial-time. The time-restricting part is an affine version of Barber and Plotkin's DILL. Two semantics are constructed for ATR. The first is a pruning of the naive denotational semantics for ATR. This pruning removes certain functions that cause otherwise feasible forms of recursion to go wrong. The second semantics is a model for ATR's time complexity relative to a certain abstract machine. This model provides a setting for complexity recurrences arising from ATR recursions, the solutions of which yield second-order polynomial time bounds. The time-complexity semantics is also shown to be sound relative to the costs of interpretation on the abstract machine.<|reference_end|> | arxiv | @article{danner2006adventures,
title={Adventures in time and space},
author={Norman Danner and James S. Royer},
journal={Logical Methods in Computer Science, Volume 3, Issue 1 (March 12,
2007) lmcs:2231},
year={2006},
doi={10.2168/LMCS-3(1:9)2007},
archivePrefix={arXiv},
eprint={cs/0612116},
primaryClass={cs.LO cs.PL}
} | danner2006adventures |
arxiv-675330 | cs/0612117 | Statistical Mechanics of On-line Learning when a Moving Teacher Goes around an Unlearnable True Teacher | <|reference_start|>Statistical Mechanics of On-line Learning when a Moving Teacher Goes around an Unlearnable True Teacher: In the framework of on-line learning, a learning machine might move around a teacher due to the differences in structures or output functions between the teacher and the learning machine. In this paper we analyze the generalization performance of a new student supervised by a moving machine. A model composed of a fixed true teacher, a moving teacher, and a student is treated theoretically using statistical mechanics, where the true teacher is a nonmonotonic perceptron and the others are simple perceptrons. Calculating the generalization errors numerically, we show that the generalization errors of a student can temporarily become smaller than those of a moving teacher, even if the student only uses examples from the moving teacher. However, the generalization error of the student eventually becomes the same as that of the moving teacher. This behavior is qualitatively different from that of a linear model.<|reference_end|> | arxiv | @article{urakami2006statistical,
title={Statistical Mechanics of On-line Learning when a Moving Teacher Goes
around an Unlearnable True Teacher},
author={Masahiro Urakami, Seiji Miyoshi, Masato Okada},
journal={arXiv preprint arXiv:cs/0612117},
year={2006},
doi={10.1143/JPSJ.76.044003},
archivePrefix={arXiv},
eprint={cs/0612117},
primaryClass={cs.LG cond-mat.dis-nn}
} | urakami2006statistical |
arxiv-675331 | cs/0612118 | Gossiping with Multiple Messages | <|reference_start|>Gossiping with Multiple Messages: This paper investigates the dissemination of multiple pieces of information in large networks where users contact each other in a random uncoordinated manner, and users upload one piece per unit time. The underlying motivation is the design and analysis of piece selection protocols for peer-to-peer networks which disseminate files by dividing them into pieces. We first investigate one-sided protocols, where piece selection is based on the states of either the transmitter or the receiver. We show that any such protocol relying only on pushes, or alternatively only on pulls, is inefficient in disseminating all pieces to all users. We propose a hybrid one-sided piece selection protocol -- INTERLEAVE -- and show that by using both pushes and pulls it disseminates $k$ pieces from a single source to $n$ users in $10(k+\log n)$ time, while obeying the constraint that each user can upload at most one piece in one unit of time, with high probability for large $n$. An optimal, unrealistic centralized protocol would take $k+\log_2 n$ time in this setting. Moreover, efficient dissemination is also possible if the source implements forward erasure coding, and users push the latest-released coded pieces (but do not pull). We also investigate two-sided protocols where piece selection is based on the states of both the transmitter and the receiver. We show that it is possible to disseminate $n$ pieces to $n$ users in $n+O(\log n)$ time, starting from an initial state where each user has a unique piece.<|reference_end|> | arxiv | @article{sanghavi2006gossiping,
title={Gossiping with Multiple Messages},
author={Sujay Sanghavi, Bruce Hajek, and Laurent Massoulie},
journal={arXiv preprint arXiv:cs/0612118},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612118},
primaryClass={cs.NI cs.IT math.IT}
} | sanghavi2006gossiping |
arxiv-675332 | cs/0612119 | Symmetric Subresultants and Applications | <|reference_start|>Symmetric Subresultants and Applications: Schur's transforms of a polynomial are used to count its roots in the unit disk. We generalize them by introducing the sequence of symmetric sub-resultants of two polynomials. Although they do have a determinantal definition, we show that they satisfy a structure theorem which allows us to compute them with a type of Euclidean division. As a consequence, a fast algorithm based on a dichotomic process and FFT is designed. We also prove that these symmetric sub-resultants have a deep link with Toeplitz matrices. Finally, we propose a new inversion algorithm for such matrices. It has the same cost as those already known; however, it is fraction-free and consequently well adapted to computer algebra.<|reference_end|> | arxiv | @article{brunie2006symmetric,
title={Symmetric Subresultants and Applications},
author={Cyril Brunie (LACO), Philippe Saux Picart (LM)},
journal={arXiv preprint arXiv:cs/0612119},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612119},
primaryClass={cs.SC}
} | brunie2006symmetric |
arxiv-675333 | cs/0612120 | Generalizing the Paige-Tarjan Algorithm by Abstract Interpretation | <|reference_start|>Generalizing the Paige-Tarjan Algorithm by Abstract Interpretation: The Paige and Tarjan algorithm (PT) for computing the coarsest refinement of a state partition which is a bisimulation on some Kripke structure is well known. It is also well known in model checking that bisimulation is equivalent to strong preservation of CTL, or, equivalently, of Hennessy-Milner logic. Drawing on these observations, we analyze the basic steps of the PT algorithm from an abstract interpretation perspective, which allows us to reason on strong preservation in the context of generic inductively defined (temporal) languages and of possibly non-partitioning abstract models specified by abstract interpretation. This leads us to design a generalized Paige-Tarjan algorithm, called GPT, for computing the minimal refinement of an abstract interpretation-based model that strongly preserves some given language. It turns out that PT is a straight instance of GPT on the domain of state partitions for the case of strong preservation of Hennessy-Milner logic. We provide a number of examples showing that GPT is of general use. We first show how a well-known efficient algorithm for computing stuttering equivalence can be viewed as a simple instance of GPT. We then instantiate GPT in order to design a new efficient algorithm for computing simulation equivalence that is competitive with the best available algorithms. Finally, we show how GPT allows to compute new strongly preserving abstract models by providing an efficient algorithm that computes the coarsest refinement of a given partition that strongly preserves the language generated by the reachability operator.<|reference_end|> | arxiv | @article{ranzato2006generalizing,
title={Generalizing the Paige-Tarjan Algorithm by Abstract Interpretation},
author={Francesco Ranzato and Francesco Tapparo},
journal={arXiv preprint arXiv:cs/0612120},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612120},
primaryClass={cs.LO}
} | ranzato2006generalizing |
arxiv-675334 | cs/0612121 | Power Assignment Problems in Wireless Communication | <|reference_start|>Power Assignment Problems in Wireless Communication: A fundamental class of problems in wireless communication is concerned with the assignment of suitable transmission powers to wireless devices/stations such that the resulting communication graph satisfies certain desired properties and the overall energy consumed is minimized. Many concrete communication tasks in a wireless network like broadcast, multicast, point-to-point routing, creation of a communication backbone, etc. can be regarded as such a power assignment problem. This paper considers several problems of that kind; for example one problem studied before in \cite{Carrots, Bilo} aims to select and assign powers to $k$ of the stations such that all other stations are within reach of at least one of the selected stations. We improve the running time for obtaining a $(1+\epsilon)$-approximate solution for this problem from $n^{((\alpha/\epsilon)^{O(d)})}$ as reported by Bilo et al. (\cite{Bilo}) to $O(n+ {(\frac{k^{2d+1}}{\epsilon^d})}^{\min{\{2k, (\alpha/\epsilon)^{O(d)} \}}})$ that is, we obtain a running time that is \emph{linear} in the network size. Further results include a constant approximation algorithm for the TSP problem under squared (non-metric!) edge costs, which can be employed to implement a novel data aggregation protocol, as well as efficient schemes to perform $k$-hop multicasts.<|reference_end|> | arxiv | @article{funke2006power,
title={Power Assignment Problems in Wireless Communication},
author={Stefan Funke, Soeren Laue, Zvi Lotker, Rouven Naujoks},
journal={arXiv preprint arXiv:cs/0612121},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612121},
primaryClass={cs.CG cs.AR cs.NI}
} | funke2006power |
arxiv-675335 | cs/0612122 | Large N Analysis of Amplify-and-Forward MIMO Relay Channels with Correlated Rayleigh Fading | <|reference_start|>Large N Analysis of Amplify-and-Forward MIMO Relay Channels with Correlated Rayleigh Fading: In this correspondence the cumulants of the mutual information of the flat Rayleigh fading amplify-and-forward MIMO relay channel without direct link between source and destination are derived in the large array limit. The analysis is based on the replica trick and covers both spatially independent and correlated fading in the first and the second hop, while beamforming at all terminals is restricted to deterministic weight matrices. Expressions for mean and variance of the mutual information are obtained. Their parameters are determined by a nonlinear equation system. All higher cumulants are shown to vanish as the number of antennas n goes to infinity. In conclusion the distribution of the mutual information I becomes Gaussian in the large n limit and is completely characterized by the expressions obtained for mean and variance of I. Comparisons with simulation results show that the asymptotic results serve as excellent approximations for systems with only few antennas at each node. The derivation of the results follows the technique formalized by Moustakas et al. in [1]. Although the evaluations are more involved for the MIMO relay channel compared to point-to-point MIMO channels, the structure of the results is surprisingly simple again. In particular an elegant formula for the mean of the mutual information is obtained, i.e., the ergodic capacity of the two-hop amplify-and-forward MIMO relay channel without direct link.<|reference_end|> | arxiv | @article{wagner2006large,
title={Large N Analysis of Amplify-and-Forward MIMO Relay Channels with
Correlated Rayleigh Fading},
author={Joerg Wagner, Boris Rankov, and Armin Wittneben},
journal={arXiv preprint arXiv:cs/0612122},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612122},
primaryClass={cs.IT math.IT}
} | wagner2006large |
arxiv-675336 | cs/0612123 | Electronic Laboratory Notebook Assisting Reflectance Spectrometry in Legal Medicine | <|reference_start|>Electronic Laboratory Notebook Assisting Reflectance Spectrometry in Legal Medicine: Reflectance spectrometry is a fast and reliable method for the characterisation of human skin if the spectra are analysed with respect to a physical model describing the optical properties of human skin. For a field study performed at the Institute of Legal Medicine and the Freiburg Materials Research Center of the University of Freiburg an electronic laboratory notebook has been developed, which assists in the recording, management, and analysis of reflectance spectra. The core of the electronic laboratory notebook is a MySQL database. It is filled with primary data via a web interface programmed in Java, which also enables the user to browse the database and access the results of data analysis. These are carried out by Matlab, Tcl and Python scripts, which retrieve the primary data from the electronic laboratory notebook, perform the analysis, and store the results in the database for further usage.<|reference_end|> | arxiv | @article{belenkaia2006electronic,
title={Electronic Laboratory Notebook Assisting Reflectance Spectrometry in
Legal Medicine},
author={Lioudmila Belenkaia, Michael Bohnert and Andreas W. Liehr},
journal={arXiv preprint arXiv:cs/0612123},
year={2006},
doi={10.1177/2211068212443960},
archivePrefix={arXiv},
eprint={cs/0612123},
primaryClass={cs.DB cs.DL cs.IR}
} | belenkaia2006electronic |
arxiv-675337 | cs/0612124 | Highly robust error correction by convex programming | <|reference_start|>Highly robust error correction by convex programming: This paper discusses a stylized communications problem where one wishes to transmit a real-valued signal x in R^n (a block of n pieces of information) to a remote receiver. We ask whether it is possible to transmit this information reliably when a fraction of the transmitted codeword is corrupted by arbitrary gross errors, and when in addition, all the entries of the codeword are contaminated by smaller errors (e.g. quantization errors). We show that if one encodes the information as Ax where A is a suitable m by n coding matrix (m >= n), there are two decoding schemes that allow the recovery of the block of n pieces of information x with nearly the same accuracy as if no gross errors occur upon transmission (or equivalently as if one has an oracle supplying perfect information about the sites and amplitudes of the gross errors). Moreover, both decoding strategies are very concrete and only involve solving simple convex optimization programs, either a linear program or a second-order cone program. We complement our study with numerical simulations showing that the encoder/decoder pair performs remarkably well.<|reference_end|> | arxiv | @article{candes2006highly,
title={Highly robust error correction by convex programming},
author={Emmanuel J. Candes and Paige A. Randall},
journal={arXiv preprint arXiv:cs/0612124},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612124},
primaryClass={cs.IT math.IT math.PR math.ST stat.TH}
} | candes2006highly |
arxiv-675338 | cs/0612125 | Heterogeneous Strong Computation Migration | <|reference_start|>Heterogeneous Strong Computation Migration: The continuous increase in performance requirements, for both scientific computation and industry, motivates the need of a powerful computing infrastructure. The Grid appeared as a solution for inexpensive execution of heavy applications in a parallel and distributed manner. It allows combining resources independently of their physical location and architecture to form a global resource pool available to all grid users. However, grid environments are highly unstable and unpredictable. Adaptability is a crucial issue in this context, in order to guarantee an appropriate quality of service to users. Migration is a technique frequently used for achieving adaptation. The objective of this report is to survey the problem of strong migration in heterogeneous environments like the grids', the related implementation issues and the current solutions.<|reference_end|> | arxiv | @article{milanés2006heterogeneous,
title={Heterogeneous Strong Computation Migration},
author={Anolan Milan\'es, Noemi Rodriguez and Bruno Schulze},
journal={Concurrency and Computation: Practice and Experience, 20:
1485-1508},
year={2006},
doi={10.1002/cpe.1287},
archivePrefix={arXiv},
eprint={cs/0612125},
primaryClass={cs.DC}
} | milanés2006heterogeneous |
arxiv-675339 | cs/0612126 | The virtual reality framework for engineering objects | <|reference_start|>The virtual reality framework for engineering objects: A framework for virtual reality of engineering objects has been developed. This framework can simulate various kinds of equipment related to virtual reality. The framework supports 6D dynamics, ordinary differential equations, finite formulas, and vector and matrix operations. It also supports embedding of external software.<|reference_end|> | arxiv | @article{ivankov2006the,
title={The virtual reality framework for engineering objects},
author={Petr R. Ivankov, Nikolay P. Ivankov},
journal={arXiv preprint arXiv:cs/0612126},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612126},
primaryClass={cs.CE cs.MS}
} | ivankov2006the |
arxiv-675340 | cs/0612127 | bdbms -- A Database Management System for Biological Data | <|reference_start|>bdbms -- A Database Management System for Biological Data: Biologists are increasingly using databases for storing and managing their data. Biological databases typically consist of a mixture of raw data, metadata, sequences, annotations, and related data obtained from various sources. Current database technology lacks several functionalities that are needed by biological databases. In this paper, we introduce bdbms, an extensible prototype database management system for supporting biological data. bdbms extends the functionalities of current DBMSs to include: (1) Annotation and provenance management including storage, indexing, manipulation, and querying of annotation and provenance as first class objects in bdbms, (2) Local dependency tracking to track the dependencies and derivations among data items, (3) Update authorization to support data curation via content-based authorization, in contrast to identity-based authorization, and (4) New access methods and their supporting operators that support pattern matching on various types of compressed biological data types. This paper presents the design of bdbms along with the techniques proposed to support these functionalities including an extension to SQL. We also outline some open issues in building bdbms.<|reference_end|> | arxiv | @article{eltabakh2006bdbms,
title={bdbms -- A Database Management System for Biological Data},
author={Mohamed Y. Eltabakh, Mourad Ouzzani, Walid G. Aref},
journal={arXiv preprint arXiv:cs/0612127},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612127},
primaryClass={cs.DB}
} | eltabakh2006bdbms |
arxiv-675341 | cs/0612128 | SASE: Complex Event Processing over Streams | <|reference_start|>SASE: Complex Event Processing over Streams: RFID technology is gaining adoption on an increasing scale for tracking and monitoring purposes. Wide deployments of RFID devices will soon generate an unprecedented volume of data. Emerging applications require the RFID data to be filtered and correlated for complex pattern detection and transformed to events that provide meaningful, actionable information to end applications. In this work, we design and develop SASE, a complex event processing system that performs such data-information transformation over real-time streams. We design a complex event language for specifying application logic for such transformation, devise new query processing techniques to efficiently implement the language, and develop a comprehensive system that collects, cleans, and processes RFID data for delivery of relevant, timely information as well as storing necessary data for future querying. We demonstrate an initial prototype of SASE through a real-world retail management scenario.<|reference_end|>
title={SASE: Complex Event Processing over Streams},
author={Daniel Gyllstrom, Eugene Wu, Hee-Jin Chae, Yanlei Diao, Patrick
Stahlberg, Gordon Anderson},
journal={arXiv preprint arXiv:cs/0612128},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612128},
primaryClass={cs.DB}
} | gyllstrom2006sase: |
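
An illustrative aside on gyllstrom2006sase: a toy matcher for a SEQ(A, B) pattern within a time window, in the spirit of complex event processing over streams; the event encoding and window semantics below are assumptions, not SASE's actual language.

```python
# Toy complex-event matcher: find SEQ(A, B) pairs occurring within `window`
# time units over a timestamped event stream. Not SASE's actual language.
def seq_within(events, first_type, second_type, window):
    pending = []          # timestamps of candidate `first_type` events
    matches = []
    for ts, etype in events:
        if etype == first_type:
            pending.append(ts)
        elif etype == second_type:
            pending = [t for t in pending if ts - t <= window]  # drop expired
            matches.extend((t, ts) for t in pending)
    return matches

stream = [(1, "A"), (2, "C"), (4, "B"), (9, "A"), (20, "B")]
print(seq_within(stream, "A", "B", window=5))   # -> [(1, 4)]
```
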
arxiv-675342 | cs/0612129 | Impliance: A Next Generation Information Management Appliance | <|reference_start|>Impliance: A Next Generation Information Management Appliance: Although database technology has been remarkably successful in building a large market and adapting to the changes of the last three decades, its impact on the broader market of information management is surprisingly limited. If we were to design an information management system from scratch, based upon today's requirements and hardware capabilities, would it look anything like today's database systems? In this paper, we introduce Impliance, a next-generation information management system consisting of hardware and software components integrated to form an easy-to-administer appliance that can store, retrieve, and analyze all types of structured, semi-structured, and unstructured information. We first summarize the trends that will shape information management for the foreseeable future. Those trends imply three major requirements for Impliance: (1) to be able to store, manage, and uniformly query all data, not just structured records; (2) to be able to scale out as the volume of this data grows; and (3) to be simple and robust in operation. We then describe four key ideas that are uniquely combined in Impliance to address these requirements, namely the ideas of: (a) integrating software and off-the-shelf hardware into a generic information appliance; (b) automatically discovering, organizing, and managing all data - unstructured as well as structured - in a uniform way; (c) achieving scale-out by exploiting simple, massive parallel processing, and (d) virtualizing compute and storage resources to unify, simplify, and streamline the management of Impliance. Impliance is an ambitious, long-term effort to define simpler, more robust, and more scalable information systems for tomorrow's enterprises.<|reference_end|>
title={Impliance: A Next Generation Information Management Appliance},
author={Bishwaranjan Bhattacharjee, Vuk Ercegovac, Joseph Glider, Richard
Golding, Guy Lohman, Volker Markl, Hamid Pirahesh, Jun Rao, Robert Rees,
Frederick Reiss, Eugene Shekita, Garret Swart},
journal={arXiv preprint arXiv:cs/0612129},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612129},
primaryClass={cs.DB}
} | bhattacharjee2006impliance: |
arxiv-675343 | cs/0612130 | Stratification in P2P Networks - Application to BitTorrent | <|reference_start|>Stratification in P2P Networks - Application to BitTorrent: We introduce a model for decentralized networks with collaborating peers. The model is based on the stable matching theory which is applied to systems with a global ranking utility function. We consider the dynamics of peers searching for efficient collaborators and we prove that a unique stable solution exists. We prove that the system converges towards the stable solution and analyze its speed of convergence. We also study the stratification properties of the model, both when all collaborations are possible and for random possible collaborations. We present the corresponding fluid limit on the choice of collaborators in the random case. As a practical example, we study the BitTorrent Tit-for-Tat policy. For this system, our model provides an interesting insight on peer download rates and a possible way to optimize peer strategy.<|reference_end|> | arxiv | @article{gai2006stratification,
title={Stratification in P2P Networks - Application to BitTorrent},
author={Anh-Tuan Gai (INRIA Rocquencourt), Fabien Mathieu (INRIA
Rocquencourt), Julien Reynier (INRIA Rocquencourt), Fabien De Montgolfier
(INRIA Rocquencourt)},
journal={arXiv preprint arXiv:cs/0612130},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612130},
primaryClass={cs.NI}
} | gai2006stratification |
arxiv-675344 | cs/0612131 | Architecting Network-Centric Software Systems: A Style-Based Beginning | <|reference_start|>Architecting Network-Centric Software Systems: A Style-Based Beginning: With the advent of potent network technology, software development has evolved from traditional platform-centric construction to network-centric evolution. This change involves largely the way we reason about systems as evidenced in the introduction of Network- Centric Operations (NCO). Unfortunately, it has resulted in conflicting interpretations of how to map NCO concepts to the field of software architecture. In this paper, we capture the core concepts and goals of NCO, investigate the implications of these concepts and goals on software architecture, and identify the operational characteristics that distinguish network-centric software systems from other systems. More importantly, we use architectural design principles to propose an outline for a network-centric architectural style that helps in characterizing network-centric software systems and that provides a means by which their distinguishing operational characteristics can be realized.<|reference_end|> | arxiv | @article{bohner2006architecting,
title={Architecting Network-Centric Software Systems: A Style-Based Beginning},
author={Amine Chigani, James D. Arthur, Shawn Bohner},
journal={arXiv preprint arXiv:cs/0612131},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612131},
primaryClass={cs.SE}
} | bohner2006architecting |
arxiv-675345 | cs/0612132 | A New Era in Citation and Bibliometric Analyses: Web of Science, Scopus, and Google Scholar | <|reference_start|>A New Era in Citation and Bibliometric Analyses: Web of Science, Scopus, and Google Scholar: Academic institutions, federal agencies, publishers, editors, authors, and librarians increasingly rely on citation analysis for making hiring, promotion, tenure, funding, and/or reviewer and journal evaluation and selection decisions. The Institute for Scientific Information's (ISI) citation databases have been used for decades as a starting point and often as the only tools for locating citations and/or conducting citation analyses. ISI databases (or Web of Science), however, may no longer be adequate as the only or even the main sources of citations because new databases and tools that allow citation searching are now available. Whether these new databases and tools complement or represent alternatives to Web of Science (WoS) is important to explore. Using a group of 15 library and information science faculty members as a case study, this paper examines the effects of using Scopus and Google Scholar (GS) on the citation counts and rankings of scholars as measured by WoS. The paper discusses the strengths and weaknesses of WoS, Scopus, and GS, their overlap and uniqueness, quality and language of the citations, and the implications of the findings for citation analysis. The project involved citation searching for approximately 1,100 scholarly works published by the study group and over 200 works by a test group (an additional 10 faculty members). Overall, more than 10,000 citing and purportedly citing documents were examined. WoS data took about 100 hours of collecting and processing time, Scopus consumed 200 hours, and GS a grueling 3,000 hours.<|reference_end|> | arxiv | @article{meho2006a,
title={A New Era in Citation and Bibliometric Analyses: Web of Science, Scopus,
and Google Scholar},
author={Lokman I. Meho, Kiduk Yang},
journal={arXiv preprint arXiv:cs/0612132},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612132},
primaryClass={cs.DL cs.IR}
} | meho2006a |
arxiv-675346 | cs/0612133 | Tales of Huffman | <|reference_start|>Tales of Huffman: We study the new problem of Huffman-like codes subject to individual restrictions on the code-word lengths of a subset of the source words. These are prefix codes with minimal expected code-word length for a random source where additionally the code-word lengths of a subset of the source words is prescribed, possibly differently for every such source word. Based on a structural analysis of properties of optimal solutions, we construct an efficient dynamic programming algorithm for this problem, and for an integer programming problem that may be of independent interest.<|reference_end|> | arxiv | @article{vitanyi2006tales,
title={Tales of Huffman},
author={Paul M.B. Vitanyi (CWI and University of Amsterdam), Zvi Lotker (Ben
Gurion University, Beer Sheva)},
journal={arXiv preprint arXiv:cs/0612133},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612133},
primaryClass={cs.IT cs.CC math.IT}
} | vitanyi2006tales |
arxiv-675347 | cs/0612134 | Geometric Complexity Theory II: Towards explicit obstructions for embeddings among class varieties | <|reference_start|>Geometric Complexity Theory II: Towards explicit obstructions for embeddings among class varieties: In part I we reduced the arithmetic (characteristic zero) version of the $P \not\subseteq NP$ conjecture to the problem of showing that a variety associated with the complexity class NP cannot be embedded in the variety associated with the complexity class P. We call these class varieties. In this paper, this approach is developed further, reducing the nonexistence problems, such as the P vs. NP and related lower bound problems, to existence problems: specifically to proving existence of obstructions to such embeddings among class varieties. It gives two results towards explicit construction of such obstructions. The first result is a generalization of the Borel-Weil theorem to a class of orbit closures, which include class varieties. The second result is a weaker form of a conjectured analogue of the second fundamental theorem of invariant theory for the class variety associated with the complexity class NC. These results indicate that the fundamental lower bound problems in complexity theory are intimately linked with explicit construction problems in algebraic geometry and representation theory.<|reference_end|>
title={Geometric Complexity Theory II: Towards explicit obstructions for
embeddings among class varieties},
author={Ketan D Mulmuley, Milind Sohoni},
journal={arXiv preprint arXiv:cs/0612134},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612134},
primaryClass={cs.CC math.AG math.RT}
} | mulmuley2006geometric |
arxiv-675348 | cs/0612135 | Accommodation of the Service Offered by the Network for Networked Control Systems | <|reference_start|>Accommodation of the Service Offered by the Network for Networked Control Systems: Networked Control Systems (NCSs) are increasingly used in industrial applications. They are subject to strong real-time constraints, because significant delays induced by the network can destabilize the controlled process. Usually, the network used in an NCS is shared with many other applications requiring different levels of Quality of Service. The objective of this paper is to optimize the tuning of the network scheduling mechanisms while taking into account the level of Quality of Control. The goal is to maximize the bandwidth allocated to unconstrained frames while guaranteeing that the control constraints are respected. In this paper, we focus on a switched Ethernet network implementing Classification of Service (IEEE 802.1p) based on a Weighted Round Robin policy.<|reference_end|>
title={Accommodation of the Service Offered by the Network for Networked
Control Systems},
author={Idriss Diouri (CRAN), Jean-Philippe Georges (CRAN), Eric Rondeau
(CRAN)},
journal={2nd workshop on Networked Control Systems : Tolerant to faults
(23/11/2006) 8 pages},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612135},
primaryClass={cs.NI}
} | diouri2006accommodation |
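
An illustrative aside on diouri2006accommodation: a minimal sketch of the Weighted Round Robin policy it tunes, where in each cycle queue i may send up to weights[i] frames; the queues and weights below are invented.

```python
from collections import deque

def wrr_schedule(queues, weights, cycles):
    """Serve up to weights[i] frames from queues[i] per cycle (WRR)."""
    served = []
    for _ in range(cycles):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    served.append(q.popleft())
    return served

high = deque(["h1", "h2", "h3", "h4"])   # e.g., control traffic (802.1p high class)
low = deque(["l1", "l2", "l3", "l4"])    # best-effort traffic
print(wrr_schedule([high, low], weights=[3, 1], cycles=2))
# -> ['h1', 'h2', 'h3', 'l1', 'h4', 'l2']
```
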
arxiv-675349 | cs/0612136 | Experiments on predictability of word in context and information rate in natural language | <|reference_start|>Experiments on predictability of word in context and information rate in natural language: Based on data from a large-scale experiment with human subjects, we conclude that the logarithm of probability to guess a word in context (unpredictability) depends linearly on the word length. This result holds both for poetry and prose, even though with prose, the subjects don't know the length of the omitted word. We hypothesize that this effect reflects a tendency of natural language to have an even information rate.<|reference_end|> | arxiv | @article{manin2006experiments,
title={Experiments on predictability of word in context and information rate in
natural language},
author={Dmitrii Manin},
journal={Manin, D.Yu. 2006. Experiments on predictability of word in
context and information rate in natural language. J. Information Processes
(electronic publication, http://www.jip.ru), 6 (3), 229-236},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612136},
primaryClass={cs.IT math.IT}
} | manin2006experiments |
arxiv-675350 | cs/0612137 | Turning Cluster Management into Data Management: A System Overview | <|reference_start|>Turning Cluster Management into Data Management: A System Overview: This paper introduces the CondorJ2 cluster management system. Traditionally, cluster management systems such as Condor employ a process-oriented approach with little or no use of modern database system technology. In contrast, CondorJ2 employs a data-centric, 3-tier web-application architecture for all system functions (e.g., job submission, monitoring and scheduling; node configuration, monitoring and management, etc.) except for job execution. Employing a data-oriented approach allows the core challenge (i.e., managing and coordinating a large set of distributed computing resources) to be transformed from a relatively low-level systems problem into a more abstract, higher-level data management problem. Preliminary results suggest that CondorJ2's use of standard 3-tier software represents a significant step forward in the design and implementation of large clusters (1,000 to 10,000 nodes).<|reference_end|>
title={Turning Cluster Management into Data Management: A System Overview},
author={Eric Robinson, David DeWitt},
journal={arXiv preprint arXiv:cs/0612137},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612137},
primaryClass={cs.DB}
} | robinson2006turning |
arxiv-675351 | cs/0612138 | Accommodating Sample Size Effect on Similarity Measures in Speaker Clustering | <|reference_start|>Accommodating Sample Size Effect on Similarity Measures in Speaker Clustering: We investigate the symmetric Kullback-Leibler (KL2) distance in speaker clustering and its unreported effects for differently-sized feature matrices. Speaker data is represented as Mel Frequency Cepstral Coefficient (MFCC) vectors, and features are compared using the KL2 metric to form clusters of speech segments for each speaker. We make two observations with respect to clustering based on KL2: 1.) The accuracy of clustering is strongly dependent on the absolute lengths of the speech segments and their extracted feature vectors. 2.) The accuracy of the similarity measure strongly degrades with the length of the shorter of the two speech segments. These effects of length can be attributed to the measure of covariance used in KL2. We demonstrate an empirical correction of this sample-size effect that increases clustering accuracy. We draw parallels to two Vector Quantization-based (VQ) similarity measures, one which exhibits an equivalent effect of sample size, and the second being less influenced by it.<|reference_end|> | arxiv | @article{haubold2006accommodating,
title={Accommodating Sample Size Effect on Similarity Measures in Speaker
Clustering},
author={Alexander Haubold, John R. Kender},
journal={arXiv preprint arXiv:cs/0612138},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612138},
primaryClass={cs.SD cs.MM}
} | haubold2006accommodating |
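
An illustrative aside on haubold2006accommodating: the symmetric KL (KL2) distance between Gaussians fitted to two MFCC feature matrices, sketched here under a diagonal-covariance simplification (an assumption, not necessarily the paper's setup). The sample-size effect the abstract reports enters through the noisier variance estimates of the shorter segment.

```python
import numpy as np

def kl2_diag_gauss(X, Y):
    """Symmetric KL between diagonal Gaussians fitted to two feature
    matrices (rows = MFCC frames). Log-det terms cancel in the symmetric sum."""
    mx, my = X.mean(0), Y.mean(0)
    vx, vy = X.var(0) + 1e-8, Y.var(0) + 1e-8   # sample variances, regularized
    d = X.shape[1]
    tr = np.sum(vx / vy + vy / vx)
    quad = np.sum((mx - my) ** 2 * (1.0 / vx + 1.0 / vy))
    return 0.5 * (tr + quad) - d

rng = np.random.default_rng(1)
spk = rng.standard_normal((2000, 13))       # long segment of one "speaker"
short = spk[:50]                            # short segment of the same speaker
print(kl2_diag_gauss(spk, spk[1000:]))      # long vs. long: near 0
print(kl2_diag_gauss(spk, short))           # long vs. short: inflated by variance noise
```
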
arxiv-675352 | cs/0612139 | Alignment of Speech to Highly Imperfect Text Transcriptions | <|reference_start|>Alignment of Speech to Highly Imperfect Text Transcriptions: We introduce a novel and inexpensive approach for the temporal alignment of speech to highly imperfect transcripts from automatic speech recognition (ASR). Transcripts are generated for extended lecture and presentation videos, which in some cases feature more than 30 speakers with different accents, resulting in highly varying transcription qualities. In our approach we detect a subset of phonemes in the speech track, and align them to the sequence of phonemes extracted from the transcript. We report on the results for 4 speech-transcript sets ranging from 22 to 108 minutes. The alignment performance is promising, showing a correct matching of phonemes within 10, 20, 30 second error margins for more than 60%, 75%, 90% of text, respectively, on average.<|reference_end|> | arxiv | @article{haubold2006alignment,
title={Alignment of Speech to Highly Imperfect Text Transcriptions},
author={Alexander Haubold, John R. Kender},
journal={arXiv preprint arXiv:cs/0612139},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612139},
primaryClass={cs.SD cs.MM}
} | haubold2006alignment |
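
An illustrative aside on haubold2006alignment: one simple way to anchor a detected phoneme sequence to the transcript's phoneme sequence is longest-matching-block alignment; difflib is used below as a stand-in for the paper's method, and the phoneme strings are invented.

```python
from difflib import SequenceMatcher

detected = ["AH", "L", "AY", "N", "M", "AH", "N", "T"]         # phonemes from audio
transcript = ["AH", "L", "AY", "G", "N", "M", "EH", "N", "T"]  # phonemes from ASR text

# Matching blocks give anchor points; timestamps attached to `detected`
# (not shown) would then be interpolated onto the transcript between anchors.
sm = SequenceMatcher(None, detected, transcript, autojunk=False)
for block in sm.get_matching_blocks():
    print(block)   # Match(a=start_in_detected, b=start_in_transcript, size=...)
```
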
arxiv-675353 | cs/0612140 | On simulating nondeterministic stochastic activity networks | <|reference_start|>On simulating nondeterministic stochastic activity networks: In this work we deal with a mechanism for process simulation called a NonDeterministic Stochastic Activity Network (NDSAN). An NDSAN consists basically of a set of activities along with precedence relations involving these activities, which determine their order of execution. Activity durations are stochastic, given by continuous, nonnegative random variables. The nondeterministic behavior of an NDSAN is based on two additional possibilities: (i) by associating choice probabilities with groups of activities, some branches of execution may not be taken; (ii) by allowing iterated executions of groups of activities according to predetermined probabilities, the number of times an activity must be executed is not determined a priori. These properties lead to a rich variety of activity networks, capable of modeling many real situations in process engineering, project design, and troubleshooting. We describe a recursive simulation algorithm for NDSANs, whose repeated execution produces a close approximation to the probability distribution of the completion time of the entire network. We also report on real-world case studies.<|reference_end|> | arxiv | @article{barbosa2006on,
title={On simulating nondeterministic stochastic activity networks},
author={Valmir C. Barbosa, Fernando M.L. Ferreira, Daniel V. Kling, Eduardo
Lopes, Fabio Protti, Eber A. Schmitz},
journal={European Journal of Operational Research 198 (2009), 266-274},
year={2006},
doi={10.1016/j.ejor.2008.06.010},
archivePrefix={arXiv},
eprint={cs/0612140},
primaryClass={cs.DM}
} | barbosa2006on |
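
An illustrative aside on barbosa2006on: a minimal recursive simulator in the spirit of an NDSAN, with randomly timed activities composed in series or parallel plus probabilistic branching; repeated runs approximate the completion-time distribution. The network below is invented and omits the iteration construct.

```python
import random

# Node forms: ("act", sampler) | ("seq", [children]) | ("par", [children])
#           | ("choice", [(prob, child), ...])   -- probabilistic branch
def sample_completion(node):
    kind = node[0]
    if kind == "act":
        return node[1]()
    if kind == "seq":                  # activities in sequence: durations add
        return sum(sample_completion(c) for c in node[1])
    if kind == "par":                  # parallel activities: wait for all
        return max(sample_completion(c) for c in node[1])
    if kind == "choice":               # pick one branch with given probability
        r, acc = random.random(), 0.0
        for p, child in node[1]:
            acc += p
            if r <= acc:
                return sample_completion(child)
        return sample_completion(node[1][-1][1])

net = ("seq", [("act", lambda: random.expovariate(1.0)),
               ("par", [("act", lambda: random.uniform(1, 3)),
                        ("choice", [(0.7, ("act", lambda: random.expovariate(0.5))),
                                    (0.3, ("seq", []))])])])
times = [sample_completion(net) for _ in range(10000)]
print(sum(times) / len(times))   # Monte Carlo estimate of mean completion time
```
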
arxiv-675354 | cs/0612141 | Exact Failure Frequency Calculations for Extended Systems | <|reference_start|>Exact Failure Frequency Calculations for Extended Systems: This paper shows how the steady-state availability and failure frequency can be calculated in a single pass for very large systems, when the availability is expressed as a product of matrices. We apply the general procedure to $k$-out-of-$n$:G and linear consecutive $k$-out-of-$n$:F systems, and to a simple ladder network in which each edge and node may fail. We also give the associated generating functions when the components have identical availabilities and failure rates. For large systems, the failure rate of the whole system is asymptotically proportional to its size. This paves the way to ready-to-use formulae for various architectures, as well as proof that the differential operator approach to failure frequency calculations is very useful and straightforward.<|reference_end|> | arxiv | @article{druault-vicard2006exact,
title={Exact Failure Frequency Calculations for Extended Systems},
author={Annie Druault-Vicard, Christian Tanguy},
journal={arXiv preprint arXiv:cs/0612141},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612141},
primaryClass={cs.PF}
} | druault-vicard2006exact |
arxiv-675355 | cs/0612142 | What is the probability of connecting two points ? | <|reference_start|>What is the probability of connecting two points ?: The two-terminal reliability, known as the pair connectedness or connectivity function in percolation theory, may actually be expressed as a product of transfer matrices in which the probability of operation of each link and site is exactly taken into account. When link and site probabilities are $p$ and $\rho$, it obeys an asymptotic power-law behavior, for which the scaling factor is the transfer matrix's eigenvalue of largest modulus. The location of the complex zeros of the two-terminal reliability polynomial exhibits structural transitions as $0 \leq \rho \leq 1$.<|reference_end|> | arxiv | @article{tanguy2006what,
title={What is the probability of connecting two points ?},
author={Christian Tanguy},
journal={J. Phys. A: Math. Theor. 40 (2007) 14099-14116},
year={2006},
doi={10.1088/1751-8113/40/47/005},
archivePrefix={arXiv},
eprint={cs/0612142},
primaryClass={cs.DM cs.NI}
} | tanguy2006what |
arxiv-675356 | cs/0612143 | Exact solutions for the two- and all-terminal reliabilities of a simple ladder network | <|reference_start|>Exact solutions for the two- and all-terminal reliabilities of a simple ladder network: The exact calculation of network reliability in a probabilistic context has been a long-standing issue of practical importance, but a difficult one, even for planar graphs, with perfect nodes and with edges of identical reliability p. Many approaches (determination of bounds, sums of disjoint products algorithms, Monte Carlo evaluations, studies of the reliability polynomials, etc.) can only provide approximations when the network's size increases. We consider here a ladder graph of arbitrary size corresponding to real-life network configurations, and give the exact, analytical solutions for the all- and two-terminal reliabilities. These solutions use transfer matrices, in which individual reliabilities of edges and nodes are taken into account. The special case of identical edge and node reliabilities -- p and rho, respectively -- is solved. We show that the zeros of the two-terminal reliability polynomial exhibit structures which differ substantially for seemingly similar networks, and we compare the sensitivity of various edges. We discuss how the present work may be further extended to lead to a catalog of exactly solvable networks in terms of reliability, which could be useful as elementary bricks for a new and improved set of bounds or benchmarks in the general case.<|reference_end|> | arxiv | @article{tanguy2006exact,
title={Exact solutions for the two- and all-terminal reliabilities of a simple
ladder network},
author={Christian Tanguy},
journal={arXiv preprint arXiv:cs/0612143},
year={2006},
archivePrefix={arXiv},
eprint={cs/0612143},
primaryClass={cs.PF}
} | tanguy2006exact |
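
An illustrative aside on tanguy2006exact: a brute-force computation of two-terminal reliability for a tiny ladder with independently failing edges and perfect nodes; closed-form transfer-matrix expressions like the paper's reproduce such values for arbitrary ladder length. The 2-rung ladder below is illustrative.

```python
from itertools import product

def two_terminal_reliability(nodes, edges, p, s, t):
    """Probability that s and t are connected when each edge works
    independently with probability p (nodes assumed perfect)."""
    total = 0.0
    for state in product([0, 1], repeat=len(edges)):
        up = [e for e, on in zip(edges, state) if on]
        parent = {v: v for v in nodes}      # union-find over surviving edges
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, v in up:
            parent[find(u)] = find(v)
        if find(s) == find(t):
            k = sum(state)
            total += p ** k * (1 - p) ** (len(edges) - k)
    return total

# 2-rung ladder: 0-1-2 on top, 3-4-5 on bottom, rungs 0-3, 1-4, 2-5
nodes = range(6)
edges = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]
print(two_terminal_reliability(nodes, edges, p=0.9, s=0, t=5))
```
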
arxiv-675357 | cs/0701001 | On High Spatial Reuse Link Scheduling in STDMA Wireless Ad Hoc Networks | <|reference_start|>On High Spatial Reuse Link Scheduling in STDMA Wireless Ad Hoc Networks: Graph-based algorithms for point-to-point link scheduling in Spatial reuse Time Division Multiple Access (STDMA) wireless ad hoc networks often result in a significant number of transmissions having low Signal to Interference and Noise density Ratio (SINR) at intended receivers, leading to low throughput. To overcome this problem, we propose a new algorithm for STDMA link scheduling based on a graph model of the network as well as SINR computations. The performance of our algorithm is evaluated in terms of spatial reuse and computational complexity. Simulation results demonstrate that our algorithm achieves better performance than existing algorithms.<|reference_end|> | arxiv | @article{gore2007on,
title={On High Spatial Reuse Link Scheduling in STDMA Wireless Ad Hoc Networks},
author={Ashutosh Deepak Gore, Srikanth Jagabathula, Abhay Karandikar},
journal={arXiv preprint arXiv:cs/0701001},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701001},
primaryClass={cs.PF cs.NI}
} | gore2007on |
arxiv-675358 | cs/0701002 | Relay Assisted F/TDMA Ad Hoc Networks: Node Classification, Power Allocation and Relaying Strategies | <|reference_start|>Relay Assisted F/TDMA Ad Hoc Networks: Node Classification, Power Allocation and Relaying Strategies: This paper considers the design of relay assisted F/TDMA ad hoc networks with multiple relay nodes each of which assists the transmission of a predefined subset of source nodes to their respective destinations. Considering the sum capacity as the performance metric, we solve the problem of optimally allocating the total power of each relay node between the transmissions it is assisting. We consider four different relay transmission strategies, namely regenerative decode-and-forward (RDF), nonregenerative decode-and-forward (NDF), amplify-and-forward (AF) and compress-and-forward (CF). We first obtain the optimum power allocation policies for the relay nodes that employ a uniform relaying strategy for all nodes. We show that the optimum power allocation for the RDF and NDF cases are modified water-filling solutions. We observe that for a given relay transmit power, NDF always outperforms RDF whereas CF always provides higher sum capacity than AF. When CF and NDF are compared, it is observed that either of CF or NDF may outperform the other in different scenarios. This observation suggests that the sum capacity can be further improved by having each relay adopt its relaying strategy in helping different source nodes. We investigate this problem next and determine the optimum power allocation and relaying strategy for each source node that relay nodes assist. We observe that optimum power allocation for relay nodes with hybrid relaying strategies provides higher sum capacity than pure RDF, NDF, AF or CF relaying strategies.<|reference_end|> | arxiv | @article{serbetli2007relay,
title={Relay Assisted F/TDMA Ad Hoc Networks: Node Classification, Power
Allocation and Relaying Strategies},
author={Semih Serbetli and Aylin Yener},
journal={arXiv preprint arXiv:cs/0701002},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701002},
primaryClass={cs.IT math.IT}
} | serbetli2007relay |
arxiv-675359 | cs/0701003 | Magnification Laws of Winner-Relaxing and Winner-Enhancing Kohonen Feature Maps | <|reference_start|>Magnification Laws of Winner-Relaxing and Winner-Enhancing Kohonen Feature Maps: Self-Organizing Maps are models for unsupervised representation formation of cortical receptor fields by stimuli-driven self-organization in laterally coupled winner-take-all feedforward structures. This paper discusses modifications of the original Kohonen model, motivated by a potential function, with respect to their ability to set up a neural mapping of maximal mutual information. Enhancing the winner update, instead of relaxing it, results in an algorithm that generates an infomax map corresponding to a magnification exponent of one. Although more than one algorithm may show the same magnification exponent, the magnification law is an experimentally accessible quantity and therefore suitable for a quantitative description of neural optimization principles.<|reference_end|>
title={Magnification Laws of Winner-Relaxing and Winner-Enhancing Kohonen
Feature Maps},
author={Jens Christian Claussen (University Kiel)},
journal={pp. 17-22 in : V. Capasso (Ed.): Mathematical Modeling & Computing
in Biology and Medicine, Miriam Series, Progetto Leonardo, Bologna (2003)},
year={2006},
archivePrefix={arXiv},
eprint={cs/0701003},
primaryClass={cs.NE cs.IT math.IT}
} | claussen2006magnification |
arxiv-675360 | cs/0701004 | An algebraic approach to complexity of data stream computations | <|reference_start|>An algebraic approach to complexity of data stream computations: We consider a basic problem in the general data streaming model, namely, to estimate a vector $f \in \mathbb{Z}^n$ that is arbitrarily updated (i.e., incremented or decremented) coordinate-wise. The estimate $\hat{f} \in \mathbb{Z}^n$ must satisfy $\|\hat{f}-f\|_{\infty} \le \epsilon \|f\|_1$, that is, $\forall i~(|\hat{f}_i - f_i| \le \epsilon \|f\|_1)$. The problem is known to have an $\tilde{O}(\epsilon^{-1})$ randomized space upper bound [cm:jalgo], an $\Omega(\epsilon^{-1} \log (\epsilon n))$ space lower bound [bkmt:sirocco03], and a deterministic space upper bound of $\tilde{O}(\epsilon^{-2})$ bits. (The $\tilde{O}$ and $\tilde{\Omega}$ notations suppress poly-logarithmic factors in $n$, $\log \epsilon^{-1}$, $\|f\|_{\infty}$ and $\log \delta^{-1}$, where $\delta$ is the error probability of a randomized algorithm.) We show that any deterministic algorithm for this problem requires space $\Omega(\epsilon^{-2} \log \|f\|_1)$ bits.<|reference_end|>
title={An algebraic approach to complexity of data stream computations},
author={Sumit Ganguly},
journal={arXiv preprint arXiv:cs/0701004},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701004},
primaryClass={cs.CC}
} | ganguly2007an |
arxiv-675361 | cs/0701005 | Exact solutions for the two- and all-terminal reliabilities of the Brecht-Colbourn ladder and the generalized fan | <|reference_start|>Exact solutions for the two- and all-terminal reliabilities of the Brecht-Colbourn ladder and the generalized fan: The two- and all-terminal reliabilities of the Brecht-Colbourn ladder and the generalized fan have been calculated exactly for arbitrary size as well as arbitrary individual edge and node reliabilities, using transfer matrices of dimension four at most. While the all-terminal reliabilities of these graphs are identical, the special case of identical edge ($p$) and node ($\rho$) reliabilities shows that their two-terminal reliabilities are quite distinct, as demonstrated by their generating functions and the locations of the zeros of the reliability polynomials, which undergo structural transitions at $\rho = 1/2$.<|reference_end|>
title={Exact solutions for the two- and all-terminal reliabilities of the
Brecht-Colbourn ladder and the generalized fan},
author={Christian Tanguy},
journal={arXiv preprint arXiv:cs/0701005},
year={2006},
archivePrefix={arXiv},
eprint={cs/0701005},
primaryClass={cs.PF}
} | tanguy2006exact |
arxiv-675362 | cs/0701006 | The Trapping Redundancy of Linear Block Codes | <|reference_start|>The Trapping Redundancy of Linear Block Codes: We generalize the notion of the stopping redundancy in order to study the smallest size of a trapping set in Tanner graphs of linear block codes. In this context, we introduce the notion of the trapping redundancy of a code, which quantifies the relationship between the number of redundant rows in any parity-check matrix of a given code and the size of its smallest trapping set. Trapping sets with certain parameter sizes are known to cause error-floors in the performance curves of iterative belief propagation decoders, and it is therefore important to identify decoding matrices that avoid such sets. Bounds on the trapping redundancy are obtained using probabilistic and constructive methods, and the analysis covers both general and elementary trapping sets. Numerical values for these bounds are computed for the [2640,1320] Margulis code and the class of projective geometry codes, and compared with some new code-specific trapping set size estimates.<|reference_end|> | arxiv | @article{laendner2006the,
title={The Trapping Redundancy of Linear Block Codes},
author={Stefan Laendner, Thorsten Hehn, Olgica Milenkovic, and Johannes B.
Huber},
journal={arXiv preprint arXiv:cs/0701006},
year={2006},
doi={10.1109/TIT.2008.2008134},
archivePrefix={arXiv},
eprint={cs/0701006},
primaryClass={cs.IT math.IT}
} | laendner2006the |
arxiv-675363 | cs/0701007 | On the Complexity of the Circular Chromatic Number | <|reference_start|>On the Complexity of the Circular Chromatic Number: The circular chromatic number $\chi_c$ is a natural generalization of the chromatic number. It is known that it is NP-hard to determine whether or not an arbitrary graph $G$ satisfies $\chi(G) = \chi_c(G)$. In this paper we prove that this problem is NP-hard even if the chromatic number of the graph is known. This answers a question of Xuding Zhu. Also we prove that for all positive integers $k \ge 2$ and $n \ge 3$, for a given graph $G$ with $\chi(G)=n$, it is NP-complete to verify if $\chi_c(G) \le n- \frac{1}{k}$.<|reference_end|>
title={On the Complexity of the Circular Chromatic Number},
author={Hamed Hatami and Ruzbeh Tusserkani},
journal={Journal of Graph Theory. 47(3) (2004) pp. 226-230},
year={2006},
archivePrefix={arXiv},
eprint={cs/0701007},
primaryClass={cs.CG}
} | hatami2006on |
arxiv-675364 | cs/0701008 | On the Computational Complexity of Defining Sets | <|reference_start|>On the Computational Complexity of Defining Sets: Suppose we have a family ${\cal F}$ of sets. For every $S \in {\cal F}$, a set $D \subseteq S$ is a defining set for $({\cal F},S)$ if $S$ is the only element of ${\cal F}$ that contains $D$ as a subset. This concept has been studied in numerous cases, such as vertex colorings, perfect matchings, dominating sets, block designs, geodetics, orientations, and Latin squares. In this paper, first, we propose the concept of a defining set of a logical formula, and we prove that the computational complexity of such a problem is $\Sigma_2$-complete. We also show that the computational complexity of the following problem about the defining set of vertex colorings of graphs is $\Sigma_2$-complete: Instance: A graph $G$ with a vertex coloring $c$ and an integer $k$. Question: Letting ${\cal C}(G)$ be the set of all $\chi(G)$-colorings of $G$, does $({\cal C}(G),c)$ have a defining set of size at most $k$? Moreover, we study the computational complexity of some other variants of this problem.<|reference_end|>
title={On the Computational Complexity of Defining Sets},
author={Hamed Hatami and Hossein Maserrat},
journal={Journal of Discrete Applied Mathematics .149(1-3) (2005) pp.
101-110},
year={2006},
archivePrefix={arXiv},
eprint={cs/0701008},
primaryClass={cs.CC}
} | hatami2006on |
arxiv-675365 | cs/0701009 | Approximation and Inapproximability Results for Maximum Clique of Disc Graphs in High Dimensions | <|reference_start|>Approximation and Inapproximability Results for Maximum Clique of Disc Graphs in High Dimensions: We prove algorithmic and hardness results for the problem of finding the largest set of a fixed diameter in the Euclidean space. In particular, we prove that if $A^*$ is the largest subset of diameter $r$ of $n$ points in the Euclidean space, then for every $\epsilon>0$ there exists a polynomial time algorithm that outputs a set $B$ of size at least $|A^*|$ and of diameter at most $r(\sqrt{2}+\epsilon)$. On the hardness side, roughly speaking, we show that unless $P=NP$ for every $\epsilon>0$ it is not possible to guarantee the diameter $r(\sqrt{4/3}-\epsilon)$ for $B$ even if the algorithm is allowed to output a set of size $({95\over 94}-\epsilon)^{-1}|A^*|$.<|reference_end|> | arxiv | @article{afshani2006approximation,
title={Approximation and Inapproximability Results for Maximum Clique of Disc
Graphs in High Dimensions},
author={Peyman Afshani and Hamed Hatami},
journal={Information Processing Letters. 105(3) (2008) pp. 83-87},
year={2006},
archivePrefix={arXiv},
eprint={cs/0701009},
primaryClass={cs.CG math.MG}
} | afshani2006approximation |
arxiv-675366 | cs/0701010 | Determining the Applicability of Agile Practices to Mission and Life-critical Systems | <|reference_start|>Determining the Applicability of Agile Practices to Mission and Life-critical Systems: Adopting agile practices brings about many benefits and improvements to the system being developed. However, in mission- and life-critical systems, adopting an inappropriate agile practice has detrimental impacts on the system in various phases of its lifecycle and precludes desired qualities from being actualized. This paper presents a three-stage process that provides guidance to organizations on how to identify the agile practices they can benefit from without adversely impacting the mission- and life-critical system being developed.<|reference_end|>
title={Determining the Applicability of Agile Practices to Mission and
Life-critical Systems},
author={Ahmed Sidky, James Arthur},
journal={arXiv preprint arXiv:cs/0701010},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701010},
primaryClass={cs.SE}
} | sidky2007determining |
arxiv-675367 | cs/0701011 | Infinite-Alphabet Prefix Codes Optimal for $\beta$-Exponential Penalties | <|reference_start|>Infinite-Alphabet Prefix Codes Optimal for $\beta$-Exponential Penalties: Let $P = \{p(i)\}$ be a measure of strictly positive probabilities on the set of nonnegative integers. Although the countable number of inputs prevents usage of the Huffman algorithm, there are nontrivial $P$ for which known methods find a source code that is optimal in the sense of minimizing expected codeword length. For some applications, however, a source code should instead minimize one of a family of nonlinear objective functions, $\beta$-exponential means, those of the form $\log_a \sum_i p(i) a^{n(i)}$, where $n(i)$ is the length of the $i$th codeword and $a$ is a positive constant. Applications of such minimizations include a problem of maximizing the chance of message receipt in single-shot communications ($a<1$) and a problem of minimizing the chance of buffer overflow in a queueing system ($a>1$). This paper introduces methods for finding codes optimal for such exponential means. One method applies to geometric distributions, while another applies to distributions with lighter tails. The latter algorithm is applied to Poisson distributions. Both are extended to minimizing maximum pointwise redundancy.<|reference_end|> | arxiv | @article{baer2007infinite-alphabet,
title={Infinite-Alphabet Prefix Codes Optimal for $\beta$-Exponential Penalties},
author={Michael B. Baer},
journal={arXiv preprint arXiv:cs/0701011},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701011},
primaryClass={cs.IT cs.DS math.IT}
} | baer2007infinite-alphabet |
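
An illustrative aside on baer2007infinite-alphabet: for finite alphabets, a code minimizing the β-exponential mean can be found by a classical Huffman variant that merges the two smallest weights w1, w2 into a(w1 + w2); the paper's subject, infinite alphabets, is beyond this sketch.

```python
import heapq, math

def exponential_huffman(probs, a):
    """Codeword lengths minimizing log_a sum_i p_i * a**n_i for a binary code
    (a > 0, a != 1): repeatedly merge the two smallest weights w1, w2
    into a * (w1 + w2), a classical generalization of Huffman's algorithm."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    depth = [0] * len(probs)
    while len(heap) > 1:
        w1, s1 = heapq.heappop(heap)
        w2, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            depth[i] += 1                      # symbols in merged subtree go one level deeper
        heapq.heappush(heap, (a * (w1 + w2), s1 + s2))
    return depth

p = [0.5, 0.25, 0.125, 0.125]
n = exponential_huffman(p, a=2.0)              # a > 1 penalizes long codewords heavily
print(n, math.log(sum(pi * 2.0 ** ni for pi, ni in zip(p, n)), 2.0))
```
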
arxiv-675368 | cs/0701012 | $D$-ary Bounded-Length Huffman Coding | <|reference_start|>$D$-ary Bounded-Length Huffman Coding: Efficient optimal prefix coding has long been accomplished via the Huffman algorithm. However, there is still room for improvement and exploration regarding variants of the Huffman problem. Length-limited Huffman coding, useful for many practical applications, is one such variant, in which codes are restricted to the set of codes in which none of the $n$ codewords is longer than a given length, $l_{\max}$. Binary length-limited coding can be done in $O(n l_{\max})$ time and O(n) space via the widely used Package-Merge algorithm. In this paper the Package-Merge approach is generalized without increasing complexity in order to introduce a minimum codeword length, $l_{\min}$, to allow for objective functions other than the minimization of expected codeword length, and to be applicable to both binary and nonbinary codes; nonbinary codes were previously addressed using a slower dynamic programming approach. These extensions have various applications -- including faster decompression -- and can be used to solve the problem of finding an optimal code with limited fringe, that is, finding the best code among codes with a maximum difference between the longest and shortest codewords. The previously proposed method for solving this problem was nonpolynomial time, whereas solving this using the novel algorithm requires only $O(n (l_{\max}- l_{\min})^2)$ time and O(n) space.<|reference_end|> | arxiv | @article{baer2007$d$-ary,
title={$D$-ary Bounded-Length Huffman Coding},
author={Michael B. Baer},
journal={arXiv preprint arXiv:cs/0701012},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701012},
primaryClass={cs.IT cs.DS math.IT}
} | baer2007$d$-ary |
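
An illustrative aside on baer2007$d$-ary: a compact sketch of the binary Package-Merge algorithm referenced in the abstract, computing optimal codeword lengths under a length limit l_max; the weights below are invented.

```python
def package_merge(weights, l_max):
    """Optimal binary codeword lengths with every length <= l_max
    (Larmore-Hirschberg Package-Merge). Requires 2**l_max >= len(weights)."""
    n = len(weights)
    assert 2 ** l_max >= n
    leaves = sorted((w, (i,)) for i, w in enumerate(weights))
    current = list(leaves)
    for _ in range(l_max - 1):
        # package adjacent pairs, then merge with a fresh copy of the leaves
        packages = [(a[0] + b[0], a[1] + b[1])
                    for a, b in zip(current[::2], current[1::2])]
        current = sorted(leaves + packages)
    lengths = [0] * n
    for _, symbols in current[:2 * n - 2]:     # 2n-2 cheapest items
        for s in symbols:
            lengths[s] += 1                    # length = #selected items containing symbol
    return lengths

print(package_merge([1, 1, 2, 4, 8], l_max=3))   # -> [3, 3, 3, 3, 1]
```
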
arxiv-675369 | cs/0701013 | Attribute Value Weighting in K-Modes Clustering | <|reference_start|>Attribute Value Weighting in K-Modes Clustering: In this paper, the traditional k-modes clustering algorithm is extended by weighting attribute value matches in dissimilarity computation. The use of attribute value weighting technique makes it possible to generate clusters with stronger intra-similarities, and therefore achieve better clustering performance. Experimental results on real life datasets show that these value weighting based k-modes algorithms are superior to the standard k-modes algorithm with respect to clustering accuracy.<|reference_end|> | arxiv | @article{he2007attribute,
title={Attribute Value Weighting in K-Modes Clustering},
author={Zengyou He, Xiaofei Xu, Shengchun Deng},
journal={arXiv preprint arXiv:cs/0701013},
year={2007},
number={Tr-06-0615},
archivePrefix={arXiv},
eprint={cs/0701013},
primaryClass={cs.AI}
} | he2007attribute |
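
An illustrative aside on he2007attribute: one plausible frequency-based value-weighting scheme for k-modes, in which a match on an attribute value that is dominant in the cluster counts as stronger agreement; the paper's exact weighting may differ from this assumed variant.

```python
def weighted_kmodes_dissim(x, mode, cluster):
    """Dissimilarity between object x and a cluster mode. A match on
    attribute j costs 1 - freq(mode_j within the cluster) instead of 0,
    so matches on values dominant in the cluster count as stronger agreement.
    One plausible weighting; the paper's exact scheme may differ."""
    d = 0.0
    for j, (xj, mj) in enumerate(zip(x, mode)):
        if xj != mj:
            d += 1.0
        else:
            col = [obj[j] for obj in cluster]
            d += 1.0 - col.count(mj) / len(col)
    return d

cluster = [("red", "suv"), ("red", "sedan"), ("red", "suv")]
mode = ("red", "suv")
print(weighted_kmodes_dissim(("red", "suv"), mode, cluster))   # 0.0 + 1/3
print(weighted_kmodes_dissim(("blue", "suv"), mode, cluster))  # 1.0 + 1/3
```
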
arxiv-675370 | cs/0701014 | A Reply to Hofman On: "Why LP cannot solve large instances of NP-complete problems in polynomial time" | <|reference_start|>A Reply to Hofman On: "Why LP cannot solve large instances of NP-complete problems in polynomial time": Using an approach that seems to be patterned after that of Yannakakis, Hofman argues that an NP-complete problem cannot be formulated as a polynomial bounded-sized linear programming problem. He then goes on to propose a "construct" that he claims to be a counter-example to recently published linear programming formulations of the Traveling Salesman Problem (TSP) and the Quadratic Assignment Problems (QAP), respectively. In this paper, we show that Hofman's construct is flawed, and provide further proof that his "counter-example" is invalid.<|reference_end|> | arxiv | @article{diaby2007a,
title={A Reply to Hofman On: "Why LP cannot solve large instances of
NP-complete problems in polynomial time"},
author={Moustapha Diaby},
journal={arXiv preprint arXiv:cs/0701014},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701014},
primaryClass={cs.CC}
} | diaby2007a |
arxiv-675371 | cs/0701015 | Asynchronous Implementation of Failure Detectors with partial connectivity and unknown participants | <|reference_start|>Asynchronous Implementation of Failure Detectors with partial connectivity and unknown participants: We consider the problem of failure detection in dynamic networks such as MANETs. Unreliable failure detectors are classical mechanisms which provide information about process failures. However, most current implementations assume that the network is fully connected and that the initial number of nodes of the system is known. This assumption is not applicable to dynamic environments. Furthermore, such implementations are usually timer-based, while in dynamic networks there is no upper bound on communication delays since nodes can move. This paper presents an asynchronous implementation of a failure detector for unknown and mobile networks. Our approach does not rely on timers, and neither the composition nor the number of nodes in the system is known. We prove that our algorithm can implement failure detectors of class <>S (eventually strong) when behavioral properties and connectivity conditions are satisfied by the underlying system.<|reference_end|>
title={Asynchronous Implementation of Failure Detectors with partial
connectivity and unknown participants},
author={Pierre Sens (INRIA Rocquencourt), Luciana Arantes (INRIA
Rocquencourt), Mathieu Bouillaguet (INRIA Rocquencourt), Véronique Martin
(INRIA Rocquencourt), Fabiola Greve (DCC)},
journal={arXiv preprint arXiv:cs/0701015},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701015},
primaryClass={cs.DC}
} | sens2007asynchronous |
arxiv-675372 | cs/0701016 | The Second Law and Informatics | <|reference_start|>The Second Law and Informatics: A unification of thermodynamics and information theory is proposed. It is argued that similarly to the randomness due to collisions in thermal systems, the quenched randomness that exists in data files in informatics systems contributes to entropy. Therefore, it is possible to define equilibrium and to calculate temperature for informatics systems. The obtained temperature yields correctly the Shannon information balance in informatics systems and is consistent with the Clausius inequality and the Carnot cycle.<|reference_end|> | arxiv | @article{kafri2007the,
title={The Second Law and Informatics},
author={Oded Kafri},
journal={arXiv preprint arXiv:cs/0701016},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701016},
primaryClass={cs.IT math.IT}
} | kafri2007the |
arxiv-675373 | cs/0701017 | Energy-Efficient Power Control in Impulse Radio UWB Wireless Networks | <|reference_start|>Energy-Efficient Power Control in Impulse Radio UWB Wireless Networks: In this paper, a game-theoretic model for studying power control for wireless data networks in frequency-selective multipath environments is analyzed. The uplink of an impulse-radio ultrawideband system is considered. The effects of self-interference and multiple-access interference on the performance of generic Rake receivers are investigated for synchronous systems. Focusing on energy efficiency, a noncooperative game is proposed in which users in the network are allowed to choose their transmit powers to maximize their own utilities, and the Nash equilibrium for the proposed game is derived. It is shown that, due to the frequency selective multipath, the noncooperative solution is achieved at different signal-to-interference-plus-noise ratios, depending on the channel realization and the type of Rake receiver employed. A large-system analysis is performed to derive explicit expressions for the achieved utilities. The Pareto-optimal (cooperative) solution is also discussed and compared with the noncooperative approach.<|reference_end|> | arxiv | @article{bacci2007energy-efficient,
title={Energy-Efficient Power Control in Impulse Radio UWB Wireless Networks},
author={Giacomo Bacci, Marco Luise, H. Vincent Poor and Antonia M. Tulino},
journal={arXiv preprint arXiv:cs/0701017},
year={2007},
doi={10.1109/JSTSP.2007.906588},
archivePrefix={arXiv},
eprint={cs/0701017},
primaryClass={cs.IT math.IT}
} | bacci2007energy-efficient |
arxiv-675374 | cs/0701018 | Performance Analysis of Algebraic Soft-Decision Decoding of Reed-Solomon Codes | <|reference_start|>Performance Analysis of Algebraic Soft-Decision Decoding of Reed-Solomon Codes: We investigate the decoding region for Algebraic Soft-Decision Decoding (ASD) of Reed-Solomon codes in a discrete, memoryless, additive-noise channel. An expression is derived for the error correction radius within which the soft-decision decoder produces a list that contains the transmitted codeword. The error radius for ASD is shown to be larger than that of Guruswami-Sudan hard-decision decoding for a subset of low-rate codes. These results are also extended to multivariable interpolation in the sense of Parvaresh and Vardy. An upper bound is then presented for ASD's probability of error, where an error is defined as the event that the decoder selects an erroneous codeword from its list. This new definition gives a more accurate bound on the probability of error of ASD than the results available in the literature.<|reference_end|> | arxiv | @article{duggan2007performance,
title={Performance Analysis of Algebraic Soft-Decision Decoding of Reed-Solomon
Codes},
author={Andrew Duggan and Alexander Barg},
journal={arXiv preprint arXiv:cs/0701018},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701018},
primaryClass={cs.IT math.IT}
} | duggan2007performance |
arxiv-675375 | cs/0701019 | Flow-optimized Cooperative Transmission for the Relay Channel | <|reference_start|>Flow-optimized Cooperative Transmission for the Relay Channel: This paper describes an approach for half-duplex cooperative transmission in a classical three-node relay channel. Assuming availability of channel state information at nodes, the approach makes use of this information to optimize distinct flows through the direct link from the source to the destination and the path via the relay, respectively. It is shown that such a design can effectively harness diversity advantage of the relay channel in both high-rate and low-rate scenarios. When the rate requirement is low, the proposed design gives a second-order outage diversity performance approaching that of full-duplex relaying. When the rate requirement becomes asymptotically large, the design still gives a close-to-second-order outage diversity performance. The design also achieves the best diversity-multiplexing tradeoff possible for the relay channel. With optimal long-term power control over the fading relay channel, the proposed design achieves a delay-limited rate performance that is only 3.0dB (5.4dB) worse than the capacity performance of the additive white Gaussian channel in low- (high-) rate scenarios.<|reference_end|> | arxiv | @article{wong2007flow-optimized,
title={Flow-optimized Cooperative Transmission for the Relay Channel},
author={Tan F. Wong, Tat M. Lok, and John M. Shea},
journal={arXiv preprint arXiv:cs/0701019},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701019},
primaryClass={cs.IT math.IT}
} | wong2007flow-optimized |
arxiv-675376 | cs/0701020 | A nearly optimal and deterministic summary structure for update data streams | <|reference_start|>A nearly optimal and deterministic summary structure for update data streams: The paper has been withdrawn due to an error in Lemma 1.<|reference_end|> | arxiv | @article{ganguly2007a,
title={A nearly optimal and deterministic summary structure for update data
streams},
author={Sumit Ganguly},
journal={arXiv preprint arXiv:cs/0701020},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701020},
primaryClass={cs.DS}
} | ganguly2007a |
arxiv-675377 | cs/0701021 | The Unix KISS: A Case Study | <|reference_start|>The Unix KISS: A Case Study: In this paper we show that the initial philosophy used in designing and developing UNIX in its early days has been forgotten due to "fast practices". We question the leitmotif that microkernels, though adherent by design to the KISS principle, incur a higher number of context switches than their monolithic counterparts, by running a test suite and verifying the results with standard statistical validation tests. We advocate a wiser distribution of shared libraries by statistically analyzing the weight of each shared object in a typical UNIX system, showing that the majority of shared libraries reside in a common space with no real evidence of need. Finally, we examine the UNIX heritage from a historical point of view, noticing how habits swiftly replaced the intents of the original authors, moving the focus away from the earliest purpose: avoiding complications and keeping a system simple to use and maintain.<|reference_end|>
title={The Unix KISS: A Case Study},
author={Franco Milicchio},
journal={arXiv preprint arXiv:cs/0701021},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701021},
primaryClass={cs.OS cs.GL}
} | milicchio2007the |
arxiv-675378 | cs/0701022 | Definable functions in the simply typed lambda-calculus | <|reference_start|>Definable functions in the simply typed lambda-calculus: It is common knowledge that the integer functions definable in simply typed lambda-calculus are exactly the extended polynomials. This is indeed the case when one interprets integers over the type (p->p)->p->p where p is a base type and/or equality is taken as beta-conversion. It is commonly believed that the same holds for beta-eta equality and for integers represented over any fixed type of the form (t->t)->t->t. In this paper we show that this opinion is not quite true. We prove that the class of functions strictly definable in simply typed lambda-calculus is considerably larger than the extended polynomials. Namely, we define F as the class of strictly definable functions and G as a class that contains extended polynomials and two additional functions, or more precisely, two function schemas, and is closed under composition. We prove that G is a subset of F. We conjecture that G exactly characterizes strictly definable functions, i.e. G=F, and we gather some evidence for this conjecture by proving, for example, that every skewly representable finite-range function is strictly representable over (t->t)->t->t for some t.<|reference_end|>
title={Definable functions in the simply typed lambda-calculus},
author={Mateusz Zakrzewski},
journal={arXiv preprint arXiv:cs/0701022},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701022},
primaryClass={cs.LO}
} | zakrzewski2007definable |
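
An illustrative aside on zakrzewski2007definable: the representation of integers over (p->p)->p->p discussed above is the Church encoding; Python lambdas can stand in for simply typed terms to show numerals and extended-polynomial-style definable functions.

```python
# Church numerals: n = lambda f: lambda x: f applied n times to x,
# i.e. integers represented over the type (p -> p) -> p -> p.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):              # decode by counting function applications
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))   # 5
print(to_int(mul(two)(three)))   # 6 -- addition and multiplication are definable
```
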
arxiv-675379 | cs/0701023 | A Polynomial Time Algorithm for 3-SAT | <|reference_start|>A Polynomial Time Algorithm for 3-SAT: This article describes a class of efficient algorithms for 3-SAT and their generalizations to SAT.<|reference_end|>
title={A Polynomial Time Algorithm for 3-SAT},
author={Sergey Gubin},
journal={arXiv preprint arXiv:cs/0701023},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701023},
primaryClass={cs.CC cs.DM cs.DS cs.LO}
} | gubin2007a |
arxiv-675380 | cs/0701024 | Secure Communication over Fading Channels | <|reference_start|>Secure Communication over Fading Channels: The fading broadcast channel with confidential messages (BCC) is investigated, where a source node has common information for two receivers (receivers 1 and 2), and has confidential information intended only for receiver 1. The confidential information needs to be kept as secret as possible from receiver 2. The broadcast channel from the source node to receivers 1 and 2 is corrupted by multiplicative fading gain coefficients in addition to additive Gaussian noise terms. The channel state information (CSI) is assumed to be known at both the transmitter and the receivers. The parallel BCC with independent subchannels is first studied, which serves as an information-theoretic model for the fading BCC. The secrecy capacity region of the parallel BCC is established. This result is then specialized to give the secrecy capacity region of the parallel BCC with degraded subchannels. The secrecy capacity region is then established for the parallel Gaussian BCC, and the optimal source power allocations that achieve the boundary of the secrecy capacity region are derived. In particular, the secrecy capacity region is established for the basic Gaussian BCC. The secrecy capacity results are then applied to study the fading BCC. Both the ergodic and outage performances are studied.<|reference_end|> | arxiv | @article{liang2007secure,
title={Secure Communication over Fading Channels},
author={Yingbin Liang, H. Vincent Poor and Shlomo Shamai (Shitz)},
journal={arXiv preprint arXiv:cs/0701024},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701024},
primaryClass={cs.IT cs.CR math.IT}
} | liang2007secure |
arxiv-675381 | cs/0701025 | Free deconvolution for signal processing applications | <|reference_start|>Free deconvolution for signal processing applications: Situations in many fields of research, such as digital communications, nuclear physics and mathematical finance, can be modelled with random matrices. When the matrices get large, free probability theory is an invaluable tool for describing the asymptotic behaviour of many systems. It will be shown how free probability can be used to aid in source detection for certain systems. Sample covariance matrices for systems with noise are the starting point in our source detection problem. Multiplicative free deconvolution is shown to be a method which can aid in expressing limit eigenvalue distributions for sample covariance matrices, and to simplify estimators for eigenvalue distributions of covariance matrices.<|reference_end|> | arxiv | @article{ryan2007free,
title={Free deconvolution for signal processing applications},
author={O. Ryan and M. Debbah},
journal={arXiv preprint arXiv:cs/0701025},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701025},
primaryClass={cs.IT math.IT}
} | ryan2007free |
arxiv-675382 | cs/0701026 | Analysis of Sequential Decoding Complexity Using the Berry-Esseen Inequality | <|reference_start|>Analysis of Sequential Decoding Complexity Using the Berry-Esseen Inequality: This study presents a novel technique to estimate the computational complexity of sequential decoding using the Berry-Esseen theorem. Unlike the theoretical bounds determined by the conventional central limit theorem argument, which often hold only for sufficiently large codeword lengths, the new bound obtained from the Berry-Esseen theorem is valid for any blocklength. The accuracy of the new bound is then examined for two sequential decoding algorithms: an ordering-free variant of the generalized Dijkstra's algorithm (GDA), or simplified GDA, and the maximum-likelihood sequential decoding algorithm (MLSDA). Empirically investigating codes of small blocklength reveals that the theoretical upper bound for the simplified GDA almost matches the simulation results when the signal-to-noise ratio (SNR) per information bit ($\gamma_b$) is greater than or equal to 8 dB. However, the theoretical bound may become markedly higher than the simulated average complexity when $\gamma_b$ is small. For the MLSDA, the theoretical upper bound is quite close to the simulation results for both high SNR ($\gamma_b\geq 6$ dB) and low SNR ($\gamma_b\leq 2$ dB). Even for moderate SNR, the simulation results and the theoretical bound differ by at most 0.8 on a $\log_{10}$ scale.<|reference_end|> | arxiv | @article{chen2007analysis,
title={Analysis of Sequential Decoding Complexity Using the Berry-Esseen
Inequality},
author={Po-Ning Chen, Yunghsiang S. Han, Carlos R. P. Hartmann and Hong-Bin Wu},
journal={arXiv preprint arXiv:cs/0701026},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701026},
primaryClass={cs.IT math.IT}
} | chen2007analysis |
arxiv-675383 | cs/0701027 | The source coding game with a cheating switcher | <|reference_start|>The source coding game with a cheating switcher: Berger's paper `The Source Coding Game', IEEE Trans. Inform. Theory, 1971, considers the problem of finding the rate-distortion function for an adversarial source comprised of multiple known IID sources. The adversary, called the `switcher', was allowed only causal access to the source realizations and the rate-distortion function was obtained through the use of a type covering lemma. In this paper, the rate-distortion function of the adversarial source is described, under the assumption that the switcher has non-causal access to all source realizations. The proof utilizes the type covering lemma and simple conditional, random `switching' rules. The rate-distortion function is once again the maximization of the R(D) function for a region of attainable IID distributions.<|reference_end|> | arxiv | @article{palaiyanur2007the,
title={The source coding game with a cheating switcher},
author={Hari Palaiyanur, Cheng Chang, and Anant Sahai},
journal={arXiv preprint arXiv:cs/0701027},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701027},
primaryClass={cs.IT math.IT}
} | palaiyanur2007the |
arxiv-675384 | cs/0701028 | Statistical keyword detection in literary corpora | <|reference_start|>Statistical keyword detection in literary corpora: Understanding the complexity of human language requires an appropriate analysis of the statistical distribution of words in texts. We consider the information retrieval problem of detecting and ranking the relevant words of a text by means of statistical information referring to the "spatial" use of the words. Shannon's entropy of information is used as a tool for automatic keyword extraction. By using The Origin of Species by Charles Darwin as a representative text sample, we show the performance of our detector and compare it with another proposals in the literature. The random shuffled text receives special attention as a tool for calibrating the ranking indices.<|reference_end|> | arxiv | @article{herrera2007statistical,
title={Statistical keyword detection in literary corpora},
author={Juan P. Herrera and Pedro A. Pury},
journal={Eur. Phys. J. B 63, 135-146 (2008)},
year={2007},
doi={10.1140/epjb/e2008-00206-x},
archivePrefix={arXiv},
eprint={cs/0701028},
primaryClass={cs.CL cs.IR physics.soc-ph}
} | herrera2007statistical |
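A minimal sketch of the entropy-based ranking idea summarized in the record above: split the text into equal parts, compute each frequent word's distribution over the parts, and rank words by how far their spatial entropy falls below the uniform (shuffled-text) baseline. The partition count, frequency threshold, and scoring detail here are illustrative assumptions, not the paper's exact estimator.

```python
import math
from collections import Counter

def keyword_scores(words, n_parts=20, min_count=30):
    """Rank words by spatial heterogeneity: low entropy of a word's
    distribution over equal-sized text parts suggests the word
    clusters in specific sections, i.e. is likely topical."""
    part_len = max(1, len(words) // n_parts)
    counts = Counter(words)
    # occurrence counts of each sufficiently frequent word per part
    per_part = {w: [0] * n_parts for w, c in counts.items() if c >= min_count}
    for i, w in enumerate(words):
        if w in per_part:
            per_part[w][min(i // part_len, n_parts - 1)] += 1
    scores = {}
    for w, parts in per_part.items():
        total = sum(parts)
        h = -sum((k / total) * math.log2(k / total) for k in parts if k)
        # entropy deficit relative to the uniform baseline log2(n_parts),
        # which is what a randomly shuffled text would approach
        scores[w] = math.log2(n_parts) - h
    return sorted(scores.items(), key=lambda kv: -kv[1])

# usage (hypothetical file name):
# top = keyword_scores(open("origin_of_species.txt").read().lower().split())[:20]
```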
arxiv-675385 | cs/0701029 | The Inhabitation Problem for Rank Two Intersection Types | <|reference_start|>The Inhabitation Problem for Rank Two Intersection Types: We prove that the inhabitation problem for rank two intersection types is decidable, but (contrary to common belief) EXPTIME-hard. The exponential time hardness is shown by reduction from the in-place acceptance problem for alternating Turing machines.<|reference_end|> | arxiv | @article{kusmierek2007the,
title={The Inhabitation Problem for Rank Two Intersection Types},
author={Dariusz Kusmierek},
journal={arXiv preprint arXiv:cs/0701029},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701029},
primaryClass={cs.LO}
} | kusmierek2007the |
arxiv-675386 | cs/0701030 | New Constructions of a Family of 2-Generator Quasi-Cyclic Two-Weight Codes and Related Codes | <|reference_start|>New Constructions of a Family of 2-Generator Quasi-Cyclic Two-Weight Codes and Related Codes: Based on cyclic simplex codes, a new construction of a family of 2-generator quasi-cyclic two-weight codes is given. New optimal binary quasi-cyclic [195, 8, 96], [210, 8, 104] and [240, 8, 120] codes, as well as good QC ternary [195, 6, 126], [208, 6, 135] and [221, 6, 144] codes, are thus obtained. Furthermore, binary quasi-cyclic self-complementary codes are also constructed.<|reference_end|> | arxiv | @article{chen2007new,
title={New Constructions of a Family of 2-Generator Quasi-Cyclic Two-Weight
Codes and Related Codes},
author={Eric Zhi Chen},
journal={arXiv preprint arXiv:cs/0701030},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701030},
primaryClass={cs.IT math.IT}
} | chen2007new |
arxiv-675387 | cs/0701031 | On the implementation of construction functions for non-free concrete data types | <|reference_start|>On the implementation of construction functions for non-free concrete data types: Many algorithms use concrete data types with some additional invariants. The set of values satisfying the invariants is often a set of representatives for the equivalence classes of some equational theory. For instance, a sorted list is a particular representative wrt commutativity. Theories like associativity, neutral element, idempotence, etc. are also very common. Now, when one wants to combine various invariants, it may be difficult to find the suitable representatives and to efficiently implement the invariants. The preservation of invariants throughout the whole program is even more difficult and error prone. Classically, the programmer solves this problem using a combination of two techniques: the definition of appropriate construction functions for the representatives and the consistent usage of these functions ensured via compiler verifications. The common way of ensuring consistency is to use an abstract data type for the representatives; unfortunately, pattern matching on representatives is lost. A more appealing alternative is to define a concrete data type with private constructors so that both compiler verification and pattern matching on representatives are granted. In this paper, we detail the notion of private data type and study the existence of construction functions. We also describe a prototype, called Moca, that addresses the entire problem of...<|reference_end|> | arxiv | @article{blanqui2007on,
title={On the implementation of construction functions for non-free concrete
data types},
author={Fr'ed'eric Blanqui (INRIA Lorraine - LORIA), Th'er`ese Hardin
(LIP6), Pierre Weis (INRIA Rocquencourt)},
journal={16th European Symposium on Programming (2006)},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701031},
primaryClass={cs.LO cs.PL}
} | blanqui2007on |
arxiv-675388 | cs/0701032 | Polygraphic programs and polynomial-time functions | <|reference_start|>Polygraphic programs and polynomial-time functions: We study the computational model of polygraphs. For that, we consider polygraphic programs, a subclass of these objects, as a formal description of first-order functional programs. We explain their semantics and prove that they form a Turing-complete computational model. Their algebraic structure is used by analysis tools, called polygraphic interpretations, for complexity analysis. In particular, we delineate a subclass of polygraphic programs that compute exactly the functions that are Turing-computable in polynomial time.<|reference_end|> | arxiv | @article{bonfante2007polygraphic,
title={Polygraphic programs and polynomial-time functions},
author={Guillaume Bonfante, Yves Guiraud},
journal={Logical Methods in Computer Science, Volume 5, Issue 2 (June 3,
2009) lmcs:764},
year={2007},
doi={10.2168/LMCS-5(2:14)2009},
archivePrefix={arXiv},
eprint={cs/0701032},
primaryClass={cs.LO cs.CC math.CT}
} | bonfante2007polygraphic |
arxiv-675389 | cs/0701033 | A Counterexample to a Proposed Proof of P=NP by S Gubin | <|reference_start|>A Counterexample to a Proposed Proof of P=NP by S Gubin: In a recent paper by S. Gubin [cs/0701023v1], a polynomial-time solution to the 3SAT problem was presented as proof that P=NP. The proposed algorithm cannot be made to work, which I shall demonstrate.<|reference_end|> | arxiv | @article{hegerle2007a,
title={A Counterexample to a Proposed Proof of P=NP by S. Gubin},
author={Blake Hegerle},
journal={arXiv preprint arXiv:cs/0701033},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701033},
primaryClass={cs.CC}
} | hegerle2007a |
arxiv-675390 | cs/0701034 | Performance of Rake Receivers in IR-UWB Networks Using Energy-Efficient Power Control | <|reference_start|>Performance of Rake Receivers in IR-UWB Networks Using Energy-Efficient Power Control: This paper studies the performance of partial-Rake (PRake) receivers in impulse-radio ultrawideband wireless networks when an energy-efficient power control scheme is adopted. Due to the large bandwidth of the system, the multipath channel is assumed to be frequency-selective. By making use of noncooperative game-theoretic models and large-system analysis tools, explicit expressions are derived in terms of network parameters to measure the effects of self-interference and multiple-access interference at a receiving access point. Performance of the PRake receivers is thus compared in terms of achieved utilities and loss to that of the all-Rake receiver. Simulation results are provided to validate the analysis.<|reference_end|> | arxiv | @article{bacci2007performance,
title={Performance of Rake Receivers in IR-UWB Networks Using Energy-Efficient
Power Control},
author={Giacomo Bacci, Marco Luise and H. Vincent Poor},
journal={arXiv preprint arXiv:cs/0701034},
year={2007},
doi={10.1109/TWC.2008.070019},
archivePrefix={arXiv},
eprint={cs/0701034},
primaryClass={cs.IT math.IT}
} | bacci2007performance |
arxiv-675391 | cs/0701035 | Finding Astronomical Communities Through Co-readership Analysis | <|reference_start|>Finding Astronomical Communities Through Co-readership Analysis: Whenever a large group of people are engaged in an activity, communities will form. The nature of these communities depends on the relationship considered. In the group of people who regularly use scholarly literature, a relationship like ``person i and person j have cited the same paper'' might reveal communities of people working in a particular field. On this poster, we will investigate the relationship ``person i and person j have read the same paper''. Using the data logs of the NASA/Smithsonian Astrophysics Data System (ADS), we first determine the population that will participate by requiring that a user queries the ADS at a certain rate. Next, we apply the relationship to this population. The result of this will be an abstract ``relationship space'', which we will describe in terms of various ``representations''. Examples of such ``representations'' are the projection of co-read vectors onto Principal Components and the spectral density of the co-read network. We will show that the co-read relationship results in structure, we will describe this structure and we will provide a first attempt at classifying this structure in terms of astronomical communities. The ADS is funded by NASA Grant NNG06GG68G.<|reference_end|> | arxiv | @article{henneken2007finding,
title={Finding Astronomical Communities Through Co-readership Analysis},
author={Edwin A. Henneken, Michael J. Kurtz, Guenther Eichhorn, Alberto
Accomazzi, Carolyn S. Grant, Donna Thompson, Elizabeth Bohlen, Stephen S.
Murray},
journal={arXiv preprint arXiv:cs/0701035},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701035},
primaryClass={cs.DL astro-ph}
} | henneken2007finding |
arxiv-675392 | cs/0701036 | Compression-based methods for nonparametric density estimation, on-line prediction, regression and classification for time series | <|reference_start|>Compression-based methods for nonparametric density estimation, on-line prediction, regression and classification for time series: We address the problem of nonparametric estimation of characteristics for stationary and ergodic time series. We consider finite-alphabet time series and real-valued ones, and the following four problems: i) estimation of the (limiting) probability (or estimation of the density for real-valued time series), ii) on-line prediction, iii) regression and iv) classification (or so-called problems with side information). We show that so-called archivers (or data compressors) can be used as a tool for solving these problems. In particular, firstly, it is proven that any so-called universal code (or universal data compressor) can be used as a basis for constructing asymptotically optimal methods for the above problems. (By definition, a universal code can "compress" any sequence generated by a stationary and ergodic source asymptotically down to the Shannon entropy of the source.) And, secondly, we show experimentally that estimates based on practically used methods of data compression achieve reasonable precision.<|reference_end|> | arxiv | @article{ryabko2007compression-based,
title={Compression-based methods for nonparametric density estimation, on-line
prediction, regression and classification for time series},
author={Boris Ryabko},
journal={arXiv preprint arXiv:cs/0701036},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701036},
primaryClass={cs.IT math.IT}
} | ryabko2007compression-based |
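A toy illustration of the compressor-as-estimator idea from the record above, using zlib as a stand-in for a universal code: the conditional probability of the next symbol is estimated from the increase in compressed length when each candidate symbol is appended to the history. This is a heuristic sketch, not the paper's construction; zlib is not a true universal code for all stationary ergodic sources, and the softmax-style normalization is an assumption for illustration.

```python
import zlib

def codelength(s: bytes) -> float:
    """Compressed length in bits, used as a proxy for -log2 P(s)."""
    return 8 * len(zlib.compress(s, 9))

def next_symbol_probs(history: bytes, alphabet: bytes) -> dict:
    """Estimate P(next symbol | history) from code-length increments:
    a shorter continuation gets a higher estimated probability."""
    increments = {bytes([a]): codelength(history + bytes([a])) - codelength(history)
                  for a in alphabet}
    weights = {a: 2.0 ** (-d) for a, d in increments.items()}
    z = sum(weights.values())
    return {a: w / z for a, w in weights.items()}

# usage: on a long alternating history the estimate should favor b"a"
# after a trailing b"b"; note that for short inputs zlib's byte/block
# granularity makes the increments coarse, so long histories work better.
# probs = next_symbol_probs(b"ab" * 500, b"ab")
```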
arxiv-675393 | cs/0701037 | DMTCP: Transparent Checkpointing for Cluster Computations and the Desktop | <|reference_start|>DMTCP: Transparent Checkpointing for Cluster Computations and the Desktop: DMTCP (Distributed MultiThreaded CheckPointing) is a transparent user-level checkpointing package for distributed applications. Checkpointing and restart is demonstrated for a wide range of over 20 well known applications, including MATLAB, Python, TightVNC, MPICH2, OpenMPI, and runCMS. RunCMS runs as a 680 MB image in memory that includes 540 dynamic libraries, and is used for the CMS experiment of the Large Hadron Collider at CERN. DMTCP transparently checkpoints general cluster computations consisting of many nodes, processes, and threads; as well as typical desktop applications. On 128 distributed cores (32 nodes), checkpoint and restart times are typically 2 seconds, with negligible run-time overhead. Typical checkpoint times are reduced to 0.2 seconds when using forked checkpointing. Experimental results show that checkpoint time remains nearly constant as the number of nodes increases on a medium-size cluster. DMTCP automatically accounts for fork, exec, ssh, mutexes/semaphores, TCP/IP sockets, UNIX domain sockets, pipes, ptys (pseudo-terminals), terminal modes, ownership of controlling terminals, signal handlers, open file descriptors, shared open file descriptors, I/O (including the readline library), shared memory (via mmap), parent-child process relationships, pid virtualization, and other operating system artifacts. By emphasizing an unprivileged, user-space approach, compatibility is maintained across Linux kernels from 2.6.9 through the current 2.6.28. Since DMTCP is unprivileged and does not require special kernel modules or kernel patches, DMTCP can be incorporated and distributed as a checkpoint-restart module within some larger package.<|reference_end|> | arxiv | @article{ansel2007dmtcp:,
title={DMTCP: Transparent Checkpointing for Cluster Computations and the
Desktop},
author={Jason Ansel, Kapil Arya, Gene Cooperman},
journal={arXiv preprint arXiv:cs/0701037},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701037},
primaryClass={cs.DC cs.OS}
} | ansel2007dmtcp: |
arxiv-675394 | cs/0701038 | Approximate Eigenstructure of LTV Channels with Compactly Supported Spreading | <|reference_start|>Approximate Eigenstructure of LTV Channels with Compactly Supported Spreading: In this article we obtain estimates on the approximate eigenstructure of channels with a spreading function supported only on a set of finite measure $|U|$. Because in typical applications like wireless communication the spreading function is a random process corresponding to a random Hilbert-Schmidt channel operator $\mathbf{H}$, we measure this approximation in terms of the ratio of the $p$-norm of the deviation from variants of the Weyl symbol calculus to the $a$-norm of the spreading function itself. This generalizes recent results obtained for the case $p=2$ and $a=1$. We provide a general approach to this topic and then consider operators with $|U|<\infty$ in more detail. We show the relation to pulse shaping and weighted norms of ambiguity functions. Finally, we derive several necessary conditions on $|U|$ such that the approximation error is below certain levels.<|reference_end|> | arxiv | @article{jung2007approximate,
title={Approximate Eigenstructure of LTV Channels with Compactly Supported
Spreading},
author={Peter Jung},
journal={arXiv preprint arXiv:cs/0701038},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701038},
primaryClass={cs.IT math.IT}
} | jung2007approximate |
arxiv-675395 | cs/0701039 | On the Complexity of the Numerically Definite Syllogistic and Related Fragments | <|reference_start|>On the Complexity of the Numerically Definite Syllogistic and Related Fragments: In this paper, we determine the complexity of the satisfiability problem for various logics obtained by adding numerical quantifiers, and other constructions, to the traditional syllogistic. In addition, we demonstrate the incompleteness of some recently proposed proof-systems for these logics.<|reference_end|> | arxiv | @article{pratt-hartmann2007on,
title={On the Complexity of the Numerically Definite Syllogistic and Related
Fragments},
author={Ian Pratt-Hartmann},
journal={Bulletin of Symbolic Logic, 14(1), 2008, pp. 1--28},
year={2007},
doi={10.2178/bsl/1208358842},
archivePrefix={arXiv},
eprint={cs/0701039},
primaryClass={cs.LO cs.AI cs.CC}
} | pratt-hartmann2007on |
arxiv-675396 | cs/0701040 | Curve Tracking Control for Legged Locomotion in Horizontal Plane | <|reference_start|>Curve Tracking Control for Legged Locomotion in Horizontal Plane: We derive a hybrid feedback control law for the lateral leg spring (LLS) model so that the center of mass of a legged runner follows a curved path in horizontal plane. The control law enables the runner to change the placement and the elasticity of its legs to move in a desired direction. Stable motion along a curved path is achieved using curvature, bearing and relative distance between the runner and the curve as feedback. Constraints on leg parameters determine the class of curves that can be followed. We also derive an optimal control law that stabilizes the orientation of the runner's body relative to the velocity of the runner's center of mass.<|reference_end|> | arxiv | @article{zhang2007curve,
title={Curve Tracking Control for Legged Locomotion in Horizontal Plane},
author={F. Zhang},
journal={arXiv preprint arXiv:cs/0701040},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701040},
primaryClass={cs.RO}
} | zhang2007curve |
arxiv-675397 | cs/0701041 | A Coding Theorem for a Class of Stationary Channels with Feedback | <|reference_start|>A Coding Theorem for a Class of Stationary Channels with Feedback: A coding theorem is proved for a class of stationary channels with feedback in which the output $Y_n = f(X_{n-m}^n, Z_{n-m}^n)$ is a function of the current and past $m$ symbols of the channel input $X_n$ and the stationary ergodic channel noise $Z_n$. In particular, it is shown that the feedback capacity is equal to $$ \lim_{n\to\infty} \sup_{p(x^n||y^{n-1})} \frac{1}{n} I(X^n \to Y^n), $$ where $I(X^n \to Y^n) = \sum_{i=1}^n I(X^i; Y_i|Y^{i-1})$ denotes the Massey directed information from the channel input to the output, and the supremum is taken over all causally conditioned distributions $p(x^n||y^{n-1}) = \prod_{i=1}^n p(x_i|x^{i-1},y^{i-1})$. The main ideas of the proof are the Shannon strategy for coding with side information and a new elementary coding technique for the given channel model without feedback, which is in a sense dual to Gallager's lossy coding of stationary ergodic sources. A similar approach gives a simple alternative proof of coding theorems for finite state channels by Yang-Kavcic-Tatikonda, Chen-Berger, and Permuter-Weissman-Goldsmith.<|reference_end|> | arxiv | @article{kim2007a,
title={A Coding Theorem for a Class of Stationary Channels with Feedback},
author={Young-Han Kim},
journal={arXiv preprint arXiv:cs/0701041},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701041},
primaryClass={cs.IT math.IT}
} | kim2007a |
arxiv-675398 | cs/0701042 | Sending a Bivariate Gaussian Source over a Gaussian MAC with Feedback | <|reference_start|>Sending a Bivariate Gaussian Source over a Gaussian MAC with Feedback: We consider the problem of transmitting a bivariate Gaussian source over a two-user additive Gaussian multiple-access channel with feedback. Each of the transmitters observes one of the source components and tries to describe it to the common receiver. We are interested in the minimal mean squared error at which the receiver can reconstruct each of the source components. In the ``symmetric case'' we show that, below a certain signal-to-noise ratio threshold which is determined by the source correlation, feedback is useless and the minimal distortion is achieved by uncoded transmission. For the general case we give necessary conditions for the achievability of a distortion pair.<|reference_end|> | arxiv | @article{lapidoth2007sending,
title={Sending a Bivariate Gaussian Source over a Gaussian MAC with Feedback},
author={Amos Lapidoth, Stephan Tinguely},
journal={arXiv preprint arXiv:cs/0701042},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701042},
primaryClass={cs.IT math.IT}
} | lapidoth2007sending |
arxiv-675399 | cs/0701043 | Adaptive Alternating Minimization Algorithms | <|reference_start|>Adaptive Alternating Minimization Algorithms: The classical alternating minimization (or projection) algorithm has been successful in the context of solving optimization problems over two variables. The iterative nature and simplicity of the algorithm has led to its application to many areas such as signal processing, information theory, control, and finance. A general set of sufficient conditions for the convergence and correctness of the algorithm is quite well-known when the underlying problem parameters are fixed. In many practical situations, however, the underlying problem parameters are changing over time, and the use of an adaptive algorithm is more appropriate. In this paper, we study such an adaptive version of the alternating minimization algorithm. As a main result of this paper, we provide a general set of sufficient conditions for the convergence and correctness of the adaptive algorithm. Perhaps surprisingly, these conditions seem to be the minimal ones one would expect in such an adaptive setting. We present applications of our results to adaptive decomposition of mixtures, adaptive log-optimal portfolio selection, and adaptive filter design.<|reference_end|> | arxiv | @article{niesen2007adaptive,
title={Adaptive Alternating Minimization Algorithms},
author={Urs Niesen, Devavrat Shah, Gregory Wornell},
journal={IEEE Transactions on Information Theory, vol. 55, pp. 1423-1429,
March 2009},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701043},
primaryClass={cs.IT math.IT math.OC}
} | niesen2007adaptive |
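To make the algorithm class in the preceding record concrete, here is a generic, non-adaptive alternating minimization loop over two variable blocks; the stopping tolerance and the quadratic toy objective are illustrative choices, not taken from the paper.

```python
def alternating_minimization(f, argmin_x, argmin_y, x0, y0,
                             tol=1e-9, max_iter=1000):
    """Minimize f(x, y) by alternating exact minimization in each
    coordinate block while holding the other block fixed."""
    x, y = x0, y0
    cur = f(x, y)
    for _ in range(max_iter):
        x = argmin_x(y)          # x-step: minimize f(., y)
        y = argmin_y(x)          # y-step: minimize f(x, .)
        new = f(x, y)
        if cur - new < tol:      # objective decreases monotonically;
            cur = new            # stop once the decrease stalls
            break
        cur = new
    return x, y, cur

# toy example: f(x, y) = (x - 1)^2 + (y + 2)^2 + x*y, whose exact
# coordinate minimizers are x = 1 - y/2 and y = -2 - x/2 (solved by hand)
f = lambda x, y: (x - 1) ** 2 + (y + 2) ** 2 + x * y
x, y, val = alternating_minimization(
    f, argmin_x=lambda y: 1 - y / 2, argmin_y=lambda x: -2 - x / 2,
    x0=0.0, y0=0.0)
# converges to the global minimizer (8/3, -10/3) of this convex objective
```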
arxiv-675400 | cs/0701044 | New ID Based Multi-Proxy Multi-Signcryption Scheme from Pairings | <|reference_start|>New ID Based Multi-Proxy Multi-Signcryption Scheme from Pairings: This paper presents an identity based multi-proxy multi-signcryption scheme from pairings. In this scheme, a proxy signcrypter group can be authorized as a proxy agent through the cooperation of all members of the original signcryption group. The proxy signcryption can then be generated by the cooperation of all the signcrypters in the authorized proxy signcrypter group on behalf of the original signcrypter group. Compared to the scheme of Liu and Xiao, the proposed scheme provides public verifiability of the signature along with simplified key management.<|reference_end|> | arxiv | @article{lal2007new,
title={New ID Based Multi-Proxy Multi-Signcryption Scheme from Pairings},
author={Sunder Lal and Tej Singh},
journal={arXiv preprint arXiv:cs/0701044},
year={2007},
archivePrefix={arXiv},
eprint={cs/0701044},
primaryClass={cs.CR}
} | lal2007new |