corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---|
arxiv-6801 | 0903.3218 | Nation-State Routing: Censorship, Wiretapping, and BGP | <|reference_start|>Nation-State Routing: Censorship, Wiretapping, and BGP: The treatment of Internet traffic is increasingly affected by national policies that require the ISPs in a country to adopt common protocols or practices. Examples include government enforced censorship, wiretapping, and protocol deployment mandates for IPv6 and DNSSEC. If an entire nation's worth of ISPs apply common policies to Internet traffic, the global implications could be significant. For instance, how many countries rely on China or Great Britain (known traffic censors) to transit their traffic? These kinds of questions are surprisingly difficult to answer, as they require combining information collected at the prefix, Autonomous System, and country level, and grappling with incomplete knowledge about the AS-level topology and routing policies. In this paper we develop the first framework for country-level routing analysis, which allows us to answer questions about the influence of each country on the flow of international traffic. Our results show that some countries known for their national policies, such as Iran and China, have relatively little effect on interdomain routing, while three countries (the United States, Great Britain, and Germany) are central to international reachability, and their policies thus have huge potential impact.<|reference_end|> | arxiv | @article{karlin2009nation-state,
title={Nation-State Routing: Censorship, Wiretapping, and BGP},
author={Josh Karlin (University of New Mexico), Stephanie Forrest (University
of New Mexico and the Santa Fe Institute), Jennifer Rexford (Princeton
University)},
journal={arXiv preprint arXiv:0903.3218},
year={2009},
archivePrefix={arXiv},
eprint={0903.3218},
primaryClass={cs.NI cs.CR cs.CY}
} | karlin2009nation-state |
arxiv-6802 | 0903.3228 | The Smithsonian/NASA Astrophysics Data System (ADS) Decennial Report | <|reference_start|>The Smithsonian/NASA Astrophysics Data System (ADS) Decennial Report: Eight years after the ADS first appeared the last decadal survey wrote: "NASA's initiative for the Astrophysics Data System has vastly increased the accessibility of the scientific literature for astronomers. NASA deserves credit for this valuable initiative and is urged to continue it." Here we summarize some of the changes concerning the ADS which have occurred in the past ten years, and we describe the current status of the ADS. We then point out two areas where the ADS is building an improved capability which could benefit from a policy statement of support in the ASTRO2010 report. These are: The Semantic Interlinking of Astronomy Observations and Datasets and The Indexing of the Full Text of Astronomy Research Publications.<|reference_end|> | arxiv | @article{kurtz2009the,
title={The Smithsonian/NASA Astrophysics Data System (ADS) Decennial Report},
author={Michael J. Kurtz, Alberto Accomazzi, Stephen S. Murray},
journal={arXiv preprint arXiv:0903.3228},
year={2009},
archivePrefix={arXiv},
eprint={0903.3228},
primaryClass={astro-ph.IM cs.DL}
} | kurtz2009the |
arxiv-6803 | 0903.3257 | A New Local Distance-Based Outlier Detection Approach for Scattered Real-World Data | <|reference_start|>A New Local Distance-Based Outlier Detection Approach for Scattered Real-World Data: Detecting outliers which are grossly different from or inconsistent with the remaining dataset is a major challenge in real-world KDD applications. Existing outlier detection methods are ineffective on scattered real-world datasets due to implicit data patterns and parameter setting issues. We define a novel "Local Distance-based Outlier Factor" (LDOF) to measure the {outlier-ness} of objects in scattered datasets which addresses these issues. LDOF uses the relative location of an object to its neighbours to determine the degree to which the object deviates from its neighbourhood. Properties of LDOF are theoretically analysed including LDOF's lower bound and its false-detection probability, as well as parameter settings. In order to facilitate parameter settings in real-world applications, we employ a top-n technique in our outlier detection approach, where only the objects with the highest LDOF values are regarded as outliers. Compared to conventional approaches (such as top-n KNN and top-n LOF), our method top-n LDOF is more effective at detecting outliers in scattered data. It is also easier to set parameters, since its performance is relatively stable over a large range of parameter values, as illustrated by experimental results on both real-world and synthetic datasets.<|reference_end|> | arxiv | @article{zhang2009a,
title={A New Local Distance-Based Outlier Detection Approach for Scattered
Real-World Data},
author={Ke Zhang and Marcus Hutter and Huidong Jin},
journal={Proc. 13th Pacific-Asia Conf. on Knowledge Discovery and Data
Mining (PAKDD 2009) pages 813-822},
year={2009},
archivePrefix={arXiv},
eprint={0903.3257},
primaryClass={cs.LG cs.IR}
} | zhang2009a |
arxiv-6804 | 0903.3261 | The Secrecy Capacity Region of the Gaussian MIMO Broadcast Channel | <|reference_start|>The Secrecy Capacity Region of the Gaussian MIMO Broadcast Channel: In this paper, we consider a scenario where a source node wishes to broadcast two confidential messages for two respective receivers via a Gaussian MIMO broadcast channel. A wire-tapper also receives the transmitted signal via another MIMO channel. First, we assume that the channels are degraded and the wire-tapper has the worst channel. We establish the capacity region of this scenario. Our achievability scheme is a combination of the superposition of Gaussian codes and randomization within the layers, which we will refer to as Secret Superposition Coding. For the outer bound, we use the notion of enhanced channel to show that the secret superposition of Gaussian codes is optimal. We show that we only need to enhance the channels of the legitimate receivers, and the channel of the eavesdropper remains unchanged. Then we extend the result of the degraded case to the non-degraded case. We show that the secret superposition of Gaussian codes along with successive decoding cannot work when the channels are not degraded. We develop a Secret Dirty Paper Coding (SDPC) scheme and show that SDPC is optimal for this channel. Finally, we investigate practical characterizations for the specific scenario in which the transmitter and the eavesdropper have multiple antennas, while both intended receivers have a single antenna. We characterize the secrecy capacity region in terms of generalized eigenvalues of the receivers' channel and the eavesdropper channel. We refer to this configuration as the MISOME case. In the high-SNR regime we show that the capacity region is a convex closure of two rectangular regions.<|reference_end|> | arxiv | @article{bagherikaram2009the,
title={The Secrecy Capacity Region of the Gaussian MIMO Broadcast Channel},
author={Ghadamali Bagherikaram, Abolfazl S. Motahari, Amir K. Khandani},
journal={arXiv preprint arXiv:0903.3261},
year={2009},
archivePrefix={arXiv},
eprint={0903.3261},
primaryClass={cs.IT math.IT}
} | bagherikaram2009the |
arxiv-6805 | 0903.3276 | De-anonymizing Social Networks | <|reference_start|>De-anonymizing Social Networks: Operators of online social networks are increasingly sharing potentially sensitive information about users and their relationships with advertisers, application developers, and data-mining researchers. Privacy is typically protected by anonymization, i.e., removing names, addresses, etc. We present a framework for analyzing privacy and anonymity in social networks and develop a new re-identification algorithm targeting anonymized social-network graphs. To demonstrate its effectiveness on real-world networks, we show that a third of the users who can be verified to have accounts on both Twitter, a popular microblogging service, and Flickr, an online photo-sharing site, can be re-identified in the anonymous Twitter graph with only a 12% error rate. Our de-anonymization algorithm is based purely on the network topology, does not require creation of a large number of dummy "sybil" nodes, is robust to noise and all existing defenses, and works even when the overlap between the target network and the adversary's auxiliary information is small.<|reference_end|> | arxiv | @article{narayanan2009de-anonymizing,
title={De-anonymizing Social Networks},
author={Arvind Narayanan, Vitaly Shmatikov},
journal={arXiv preprint arXiv:0903.3276},
year={2009},
doi={10.1109/SP.2009.22},
archivePrefix={arXiv},
eprint={0903.3276},
primaryClass={cs.CR cs.DS}
} | narayanan2009de-anonymizing |
arxiv-6806 | 0903.3278 | On Oligopoly Spectrum Allocation Game in Cognitive Radio Networks with Capacity Constraints | <|reference_start|>On Oligopoly Spectrum Allocation Game in Cognitive Radio Networks with Capacity Constraints: Dynamic spectrum sharing is a promising technology to improve spectrum utilization in the future wireless networks. The flexible spectrum management provides new opportunities for licensed primary user and unlicensed secondary users to reallocate the spectrum resource efficiently. In this paper, we present an oligopoly pricing framework for dynamic spectrum allocation in which the primary users sell excessive spectrum to the secondary users for monetary return. We present two approaches, the strict constraints (type-I) and the QoS penalty (type-II), to model the realistic situation that the primary users have limited capacities. In the oligopoly model with strict constraints, we propose a low-complexity searching method to obtain the Nash Equilibrium and prove its uniqueness. When reduced to a duopoly game, we analytically show the interesting gaps in the leader-follower pricing strategy. In the QoS penalty based oligopoly model, a novel variable transformation method is developed to derive the unique Nash Equilibrium. When the market information is limited, we provide three myopically optimal algorithms "StrictBEST", "StrictBR" and "QoSBEST" that enable price adjustment for duopoly primary users based on the Best Response Function (BRF) and the bounded rationality (BR) principles. Numerical results validate the effectiveness of our analysis and demonstrate the fast convergence of "StrictBEST" as well as "QoSBEST" to the Nash Equilibrium. For the "StrictBR" algorithm, we reveal the chaotic behaviors of dynamic price adaptation in response to the learning rates.<|reference_end|> | arxiv | @article{xu2009on,
title={On Oligopoly Spectrum Allocation Game in Cognitive Radio Networks with
Capacity Constraints},
author={Yuedong Xu, John C.S. Lui, Dah-Ming Chiu},
journal={Elsevier, Computer Networks, 2010},
year={2009},
doi={10.1016/j.comnet.2009.11.018},
archivePrefix={arXiv},
eprint={0903.3278},
primaryClass={cs.NI cs.GT}
} | xu2009on |
arxiv-6807 | 0903.3287 | Hyperbolic Voronoi diagrams made easy | <|reference_start|>Hyperbolic Voronoi diagrams made easy: We present a simple framework to compute hyperbolic Voronoi diagrams of finite point sets as affine diagrams. We prove that bisectors in Klein's non-conformal disk model are hyperplanes that can be interpreted as power bisectors of Euclidean balls. Therefore our method simply consists in computing an equivalent clipped power diagram followed by a mapping transformation depending on the selected representation of the hyperbolic space (e.g., Poincar\'e conformal disk or upper-plane representations). We discuss on extensions of this approach to weighted and $k$-order diagrams, and describe their dual triangulations. Finally, we consider two useful primitives on the hyperbolic Voronoi diagrams for designing tailored user interfaces of an image catalog browsing application in the hyperbolic disk: (1) finding nearest neighbors, and (2) computing smallest enclosing balls.<|reference_end|> | arxiv | @article{nielsen2009hyperbolic,
title={Hyperbolic Voronoi diagrams made easy},
author={Frank Nielsen and Richard Nock},
journal={International Conference on Computational Science and Its
Applications. IEEE, 2010},
year={2009},
doi={10.1109/ICCSA.2010.37},
archivePrefix={arXiv},
eprint={0903.3287},
primaryClass={cs.CG}
} | nielsen2009hyperbolic |
arxiv-6808 | 0903.3311 | Cartesian effect categories are Freyd-categories | <|reference_start|>Cartesian effect categories are Freyd-categories: Most often, in a categorical semantics for a programming language, the substitution of terms is expressed by composition and finite products. However this does not deal with the order of evaluation of arguments, which may have major consequences when there are side-effects. In this paper Cartesian effect categories are introduced for solving this issue, and they are compared with strong monads, Freyd-categories and Haskell's Arrows. It is proved that a Cartesian effect category is a Freyd-category where the premonoidal structure is provided by a kind of binary product, called the sequential product. The universal property of the sequential product provides Cartesian effect categories with a powerful tool for constructions and proofs. To our knowledge, both effect categories and sequential products are new notions.<|reference_end|> | arxiv | @article{dumas2009cartesian,
title={Cartesian effect categories are Freyd-categories},
author={Jean-Guillaume Dumas (LJK), Dominique Duval (LJK), Jean-Claude Reynaud
(RC)},
journal={arXiv preprint arXiv:0903.3311},
year={2009},
archivePrefix={arXiv},
eprint={0903.3311},
primaryClass={cs.LO math.CT}
} | dumas2009cartesian |
arxiv-6809 | 0903.3317 | Discovering Matching Dependencies | <|reference_start|>Discovering Matching Dependencies: The concept of matching dependencies (MDs) was recently proposed for specifying matching rules for object identification. Similar to functional dependencies (with conditions), MDs can also be applied to various data quality applications such as violation detection. In this paper, we study the problem of discovering matching dependencies from a given database instance. First, we formally define the measures, support and confidence, for evaluating the utility of MDs in the given database instance. Then, we study the discovery of MDs with certain utility requirements of support and confidence. Exact algorithms are developed, together with pruning strategies to improve the time performance. Since the exact algorithm has to traverse all the data during the computation, we propose an approximate solution which uses only some of the data. A bound on the relative errors introduced by the approximation is also developed. Finally, our experimental evaluation demonstrates the efficiency of the proposed methods.<|reference_end|> | arxiv | @article{song2009discovering,
title={Discovering Matching Dependencies},
author={Shaoxu Song and Lei Chen},
journal={arXiv preprint arXiv:0903.3317},
year={2009},
archivePrefix={arXiv},
eprint={0903.3317},
primaryClass={cs.DB}
} | song2009discovering |
arxiv-6810 | 0903.3329 | Optimal Policies Search for Sensor Management | <|reference_start|>Optimal Policies Search for Sensor Management: This paper introduces a new approach to solving sensor management problems. Classically, sensor management problems can be well formalized as Partially Observed Markov Decision Processes (POMDPs). The original approach developed here consists in deriving the optimal parameterized policy based on a stochastic gradient estimation. We assume in this work that it is possible to learn the optimal policy off-line (in simulation) using models of the environment and of the sensor(s). The learned policy can then be used to manage the sensor(s). In order to approximate the gradient in a stochastic context, we introduce a new method to approximate the gradient, based on Infinitesimal Perturbation Approximation (IPA). The effectiveness of this general framework is illustrated by the management of an Electronically Scanned Array Radar. First simulation results are finally presented.<|reference_end|> | arxiv | @article{bréhard2009optimal,
title={Optimal Policies Search for Sensor Management},
author={Thomas Br\'ehard (INRIA Futurs), Emmanuel Duflos (INRIA Futurs,
LAGIS), Philippe Vanheeghe (LAGIS), Pierre-Arnaud Coquelin (INRIA Futurs)},
journal={arXiv preprint arXiv:0903.3329},
year={2009},
archivePrefix={arXiv},
eprint={0903.3329},
primaryClass={cs.LG stat.AP}
} | bréhard2009optimal |
arxiv-6811 | 0903.3433 | Fixed point theorems on partial randomness | <|reference_start|>Fixed point theorems on partial randomness: In our former work [K. Tadaki, Local Proceedings of CiE 2008, pp.425-434, 2008], we developed a statistical mechanical interpretation of algorithmic information theory by introducing the notion of thermodynamic quantities at temperature T, such as free energy F(T), energy E(T), and statistical mechanical entropy S(T), into the theory. These quantities are real functions of real argument T>0. We then discovered that, in the interpretation, the temperature T equals to the partial randomness of the values of all these thermodynamic quantities, where the notion of partial randomness is a stronger representation of the compression rate by program-size complexity. Furthermore, we showed that this situation holds for the temperature itself as a thermodynamic quantity. Namely, the computability of the value of partition function Z(T) gives a sufficient condition for T in (0,1) to be a fixed point on partial randomness. In this paper, we show that the computability of each of all the thermodynamic quantities above gives the sufficient condition also. Moreover, we show that the computability of F(T) gives completely different fixed points from the computability of Z(T).<|reference_end|> | arxiv | @article{tadaki2009fixed,
title={Fixed point theorems on partial randomness},
author={Kohtaro Tadaki},
journal={Ann. Pure Appl. Logic 163 (2012) 763-774},
year={2009},
doi={10.1016/j.apal.2011.09.018},
archivePrefix={arXiv},
eprint={0903.3433},
primaryClass={cs.IT cs.CC math.IT math.LO math.PR}
} | tadaki2009fixed |
arxiv-6812 | 0903.3461 | Fault-Tolerant Consensus in Unknown and Anonymous Networks | <|reference_start|>Fault-Tolerant Consensus in Unknown and Anonymous Networks: This paper investigates under which conditions information can be reliably shared and consensus can be solved in unknown and anonymous message-passing networks that suffer from crash-failures. We provide algorithms to emulate registers and solve consensus under different synchrony assumptions. For this, we introduce a novel pseudo leader-election approach which allows a leader-based consensus implementation without breaking symmetry.<|reference_end|> | arxiv | @article{delporte-gallet2009fault-tolerant,
title={Fault-Tolerant Consensus in Unknown and Anonymous Networks},
author={Carole Delporte-Gallet (LIAFA), Hugues Fauconnier (LIAFA), Andreas
Tielmann (LIAFA)},
journal={arXiv preprint arXiv:0903.3461},
year={2009},
archivePrefix={arXiv},
eprint={0903.3461},
primaryClass={cs.DS cs.DC}
} | delporte-gallet2009fault-tolerant |
arxiv-6813 | 0903.3462 | A Nice Labelling for Tree-Like Event Structures of Degree 3 (Extended Version) | <|reference_start|>A Nice Labelling for Tree-Like Event Structures of Degree 3 (Extended Version): We address the problem of finding nice labellings for event structures of degree 3. We develop a minimum theory by which we prove that the labelling number of an event structure of degree 3 is bounded by a linear function of the height. The main theorem we present in this paper states that event structures of degree 3 whose causality order is a tree have a nice labelling with 3 colors. Finally, we exemplify how to use this theorem to construct upper bounds for the labelling number of other event structures of degree 3.<|reference_end|> | arxiv | @article{santocanale2009a,
title={A Nice Labelling for Tree-Like Event Structures of Degree 3 (Extended
Version)},
author={Luigi Santocanale (LIF)},
journal={arXiv preprint arXiv:0903.3462},
year={2009},
archivePrefix={arXiv},
eprint={0903.3462},
primaryClass={cs.DC}
} | santocanale2009a |
arxiv-6814 | 0903.3480 | Worst case attacks against binary probabilistic traitor tracing codes | <|reference_start|>Worst case attacks against binary probabilistic traitor tracing codes: An insightful view into the design of traitor tracing codes should necessarily consider the worst case attacks that the colluders can mount. This paper takes an information-theoretic point of view where the worst case attack is defined as the collusion strategy minimizing the achievable rate of the traitor tracing code. Two different decoders are envisaged, the joint decoder and the simple decoder, as recently defined by P. Moulin \cite{Moulin08universal}. Several classes of colluders are defined with increasing power. The worst case attack is derived for each class and each decoder when applied to Tardos' codes and a probabilistic version of the Boneh-Shaw construction. This contextual study gives the real rates achievable by binary probabilistic traitor tracing codes. Attacks usually considered in the literature, such as majority or minority votes, are indeed largely suboptimal. This article also shows the utmost importance of the time-sharing concept in probabilistic codes.<|reference_end|> | arxiv | @article{furon2009worst,
title={Worst case attacks against binary probabilistic traitor tracing codes},
author={Teddy Furon and Luis Perez-Freire},
journal={arXiv preprint arXiv:0903.3480},
year={2009},
archivePrefix={arXiv},
eprint={0903.3480},
primaryClass={cs.IT cs.CR math.IT}
} | furon2009worst |
arxiv-6815 | 0903.3487 | Sending a Bivariate Gaussian Source over a Gaussian MAC with Feedback | <|reference_start|>Sending a Bivariate Gaussian Source over a Gaussian MAC with Feedback: We study the power-versus-distortion trade-off for the transmission of a memoryless bivariate Gaussian source over a two-to-one Gaussian multiple-access channel with perfect causal feedback. In this problem, each of two separate transmitters observes a different component of a memoryless bivariate Gaussian source as well as the feedback from the channel output of the previous time-instants. Based on the observed source sequence and the feedback, each transmitter then describes its source component to the common receiver via an average-power constrained Gaussian multiple-access channel. From the resulting channel output, the receiver wishes to reconstruct both source components with the least possible expected squared-error distortion. We study the set of distortion pairs that can be achieved by the receiver on the two source components. We present sufficient conditions and necessary conditions for the achievability of a distortion pair. These conditions are expressed in terms of the source correlation and of the signal-to-noise ratio (SNR) of the channel. In several cases the necessary conditions and sufficient conditions coincide. This allows us to show that if the channel SNR is below a certain threshold, then an uncoded transmission scheme that ignores the feedback is optimal. Thus, below this SNR-threshold feedback is useless. We also derive the precise high-SNR asymptotics of optimal schemes.<|reference_end|> | arxiv | @article{lapidoth2009sending,
title={Sending a Bivariate Gaussian Source over a Gaussian MAC with Feedback},
author={Amos Lapidoth and Stephan Tinguely},
journal={arXiv preprint arXiv:0903.3487},
year={2009},
archivePrefix={arXiv},
eprint={0903.3487},
primaryClass={cs.IT math.IT}
} | lapidoth2009sending |
arxiv-6816 | 0903.3513 | Fuzzy Chemical Abstract Machines | <|reference_start|>Fuzzy Chemical Abstract Machines: Fuzzy set theory opens new vistas in computability theory and here I show this by defining a new computational metaphor--the fuzzy chemical metaphor. This metaphor is an extension of the chemical metaphor. In particular, I introduce the idea of a state of a system as a solution of fuzzy molecules, that is molecules that are not just different but rather similar, that react according to a set of fuzzy reaction rules. These notions become precise by introducing fuzzy labeled transition systems. Solutions of fuzzy molecules and fuzzy reaction rules are used to define the general notion of a fuzzy chemical abstract machine, which is a {\em realization} of the fuzzy chemical metaphor. Based on the idea that these machines can be used to describe the operational semantics of process calculi and algebras that include fuzziness as a fundamental property, I present a toy calculus that is a fuzzy equivalent of the $\pi$-calculus.<|reference_end|> | arxiv | @article{syropoulos2009fuzzy,
title={Fuzzy Chemical Abstract Machines},
author={Apostolos Syropoulos},
journal={arXiv preprint arXiv:0903.3513},
year={2009},
archivePrefix={arXiv},
eprint={0903.3513},
primaryClass={cs.FL}
} | syropoulos2009fuzzy |
arxiv-6817 | 0903.3524 | Ambient Isotopic Meshing of Implicit Algebraic Surface with Singularities | <|reference_start|>Ambient Isotopic Meshing of Implicit Algebraic Surface with Singularities: A complete method is proposed to compute a certified, or ambient isotopic, meshing for an implicit algebraic surface with singularities. By certified, we mean a meshing with correct topology and any given geometric precision. We propose a symbolic-numeric method to compute a certified meshing for the surface inside a box containing singularities and use a modified Plantinga-Vegter marching cube method to compute a certified meshing for the surface inside a box without singularities. Nontrivial examples are given to show the effectiveness of the algorithm. To our knowledge, this is the first method to compute a certified meshing for surfaces with singularities.<|reference_end|> | arxiv | @article{cheng2009ambient,
title={Ambient Isotopic Meshing of Implicit Algebraic Surface with
Singularities},
author={Jin-San Cheng, Xiao-Shan Gao and Jia Li},
journal={arXiv preprint arXiv:0903.3524},
year={2009},
doi={10.1007/978-3-642-04103-7_9},
number={MM-preprints, vol. 27, 2008},
archivePrefix={arXiv},
eprint={0903.3524},
primaryClass={cs.CG cs.GR}
} | cheng2009ambient |
arxiv-6818 | 0903.3537 | Optimization and Analysis of Distributed Averaging with Short Node Memory | <|reference_start|>Optimization and Analysis of Distributed Averaging with Short Node Memory: In this paper, we demonstrate, both theoretically and by numerical examples, that adding a local prediction component to the update rule can significantly improve the convergence rate of distributed averaging algorithms. We focus on the case where the local predictor is a linear combination of the node's two previous values (i.e., two memory taps), and our update rule computes a combination of the predictor and the usual weighted linear combination of values received from neighbouring nodes. We derive the optimal mixing parameter for combining the predictor with the neighbors' values, and carry out a theoretical analysis of the improvement in convergence rate that can be obtained using this acceleration methodology. For a chain topology on n nodes, this leads to a factor of n improvement over the one-step algorithm, and for a two-dimensional grid, our approach achieves a factor of n^1/2 improvement, in terms of the number of iterations required to reach a prescribed level of accuracy.<|reference_end|> | arxiv | @article{oreshkin2009optimization,
title={Optimization and Analysis of Distributed Averaging with Short Node
Memory},
author={Boris N. Oreshkin, Mark J. Coates, Michael G. Rabbat},
journal={arXiv preprint arXiv:0903.3537},
year={2009},
doi={10.1109/TSP.2010.2043127},
archivePrefix={arXiv},
eprint={0903.3537},
primaryClass={cs.DC cs.IT cs.MA math.IT}
} | oreshkin2009optimization |
arxiv-6819 | 0903.3545 | Complexity, time and music | <|reference_start|>Complexity, time and music: The concept of complexity as considered in terms of its algorithmic definition proposed by G.J. Chaitin and A.N. Kolmogorov is revisited for the dynamical complexity of music. When music pieces are cast in the form of time series of pitch variations, concepts of dynamical systems theory can be used to define new quantities such as the {\em dimensionality} as a measure of the {\em global temporal dynamics} of a music piece, and the Shannon {\em entropy} as an evaluation of its {\em local dynamics}. When these quantities are computed explicitly for sequences sampled in the music literature from the 18th to the 20th century, no indication is found of a systematic increase in complexity paralleling historically the evolution of classical western music, but the analysis suggests that the fractional nature of art might have an intrinsic value of more general significance.<|reference_end|> | arxiv | @article{boon2009complexity,
title={Complexity, time and music},
author={Jean Pierre Boon},
journal={arXiv preprint arXiv:0903.3545},
year={2009},
archivePrefix={arXiv},
eprint={0903.3545},
primaryClass={physics.soc-ph cs.SD physics.data-an}
} | boon2009complexity |
arxiv-6820 | 0903.3562 | Visual Conceptualizations and Models of Science | <|reference_start|>Visual Conceptualizations and Models of Science: This Journal of Informetrics special issue aims to improve our understanding of the structure and dynamics of science by reviewing and advancing existing conceptualizations and models of scholarly activity. Several of these conceptualizations and models have visual manifestations supporting the combination and comparison of theories and approaches developed in different disciplines of science. Subsequently, we discuss challenges towards a theoretically grounded and practically useful science of science and provide a brief chronological review of relevant work. Then, we exemplarily present three conceptualizations of science that attempt to provide frameworks for the comparison and combination of existing approaches, theories, laws, and measurements. Finally, we discuss the contributions of and interlinkages among the eight papers included in this issue. Each paper makes a unique contribution towards conceptualizations and models of science and roots this contribution in a review and comparison with existing work.<|reference_end|> | arxiv | @article{boerner2009visual,
title={Visual Conceptualizations and Models of Science},
author={Katy Boerner, Andrea Scharnhorst},
journal={arXiv preprint arXiv:0903.3562},
year={2009},
archivePrefix={arXiv},
eprint={0903.3562},
primaryClass={cs.DL physics.soc-ph}
} | boerner2009visual |
arxiv-6821 | 0903.3579 | String comparison by transposition networks | <|reference_start|>String comparison by transposition networks: Computing string or sequence alignments is a classical method of comparing strings and has applications in many areas of computing, such as signal processing and bioinformatics. Semi-local string alignment is a recent generalisation of this method, in which the alignment of a given string and all substrings of another string are computed simultaneously at no additional asymptotic cost. In this paper, we show that there is a close connection between semi-local string alignment and a certain class of traditional comparison networks known as transposition networks. The transposition network approach can be used to represent different string comparison algorithms in a unified form, and in some cases provides generalisations or improvements on existing algorithms. This approach allows us to obtain new algorithms for sparse semi-local string comparison and for comparison of highly similar and highly dissimilar strings, as well as of run-length compressed strings. We conclude that the transposition network method is a very general and flexible way of understanding and improving different string comparison algorithms, as well as their efficient implementation.<|reference_end|> | arxiv | @article{krusche2009string,
title={String comparison by transposition networks},
author={Peter Krusche, Alexander Tiskin},
journal={arXiv preprint arXiv:0903.3579},
year={2009},
archivePrefix={arXiv},
eprint={0903.3579},
primaryClass={cs.DS cs.DM}
} | krusche2009string |
arxiv-6822 | 0903.3622 | Algorithmic Solutions to Some Transportation Optimization Problems with Applications in the Metallurgical Industry | <|reference_start|>Algorithmic Solutions to Some Transportation Optimization Problems with Applications in the Metallurgical Industry: In this paper we address several constrained transportation optimization problems (e.g. vehicle routing, shortest Hamiltonian path), for which we present novel algorithmic solutions and extensions, considering several optimization objectives, like minimizing costs and resource usage. All the considered problems are motivated by practical situations arising, for instance, in the mining and metallurgical industry or in data communication. We restrict our attention to transportation networks with path, tree or geometric structures, for which the developed polynomial-time algorithms are optimal or nearly optimal.<|reference_end|> | arxiv | @article{andreica2009algorithmic,
title={Algorithmic Solutions to Some Transportation Optimization Problems with
Applications in the Metallurgical Industry},
author={Mugurel Ionut Andreica, Sorin Briciu, Madalina Ecaterina Andreica},
journal={Metalurgia International, vol. 14, special issue no. 5, pp. 46-53,
2009. (ISSN: 1582-2214) ; http://www.metalurgia.ro/metalurgia_int.html},
year={2009},
archivePrefix={arXiv},
eprint={0903.3622},
primaryClass={cs.DS cs.CG cs.DM}
} | andreica2009algorithmic |
arxiv-6823 | 0903.3623 | Matrix plots of reordered bistochastized transaction flow tables: A United States intercounty migration example | <|reference_start|>Matrix plots of reordered bistochastized transaction flow tables: A United States intercounty migration example: We present a number of variously rearranged matrix plots of the $3, 107 \times 3, 107$ 1995-2000 (asymmetric) intercounty migration table for the United States, principally in its bistochasticized form (all 3,107 row and column sums iteratively proportionally fitted to equal 1). In one set of plots, the counties are seriated on the bases of the subdominant (left and right) eigenvectors of the bistochastic matrix. In another set, we use the ordering of counties in the dendrogram generated by the associated strong component hierarchical clustering. Interesting, diverse features of U. S. intercounty migration emerge--such as a contrast in centralized, hub-like (cosmopolitan/provincial) properties between cosmopolitan "Sunbelt" and provincial "Black Belt" counties. The methodologies employed should also be insightful for the many other diverse forms of interesting transaction flow-type data--interjournal citations being an obvious, much-studied example, where one might expect that the journals Science, Nature and PNAS would display "cosmopolitan" characteristics.<|reference_end|> | arxiv | @article{slater2009matrix,
title={Matrix plots of reordered bistochastized transaction flow tables: A
United States intercounty migration example},
author={Paul B. Slater},
journal={arXiv preprint arXiv:0903.3623},
year={2009},
archivePrefix={arXiv},
eprint={0903.3623},
primaryClass={physics.soc-ph cs.SI physics.data-an stat.AP}
} | slater2009matrix |
arxiv-6824 | 0903.3624 | Distributed and Adaptive Algorithms for Vehicle Routing in a Stochastic and Dynamic Environment | <|reference_start|>Distributed and Adaptive Algorithms for Vehicle Routing in a Stochastic and Dynamic Environment: In this paper we present distributed and adaptive algorithms for motion coordination of a group of m autonomous vehicles. The vehicles operate in a convex environment with bounded velocity and must service demands whose time of arrival, location and on-site service are stochastic; the objective is to minimize the expected system time (wait plus service) of the demands. The general problem is known as the m-vehicle Dynamic Traveling Repairman Problem (m-DTRP). The best previously known control algorithms rely on centralized a-priori task assignment and are not robust against changes in the environment, e.g. changes in load conditions; therefore, they are of limited applicability in scenarios involving ad-hoc networks of autonomous vehicles operating in a time-varying environment. First, we present a new class of policies for the 1-DTRP problem that: (i) are provably optimal both in light- and heavy-load condition, and (ii) are adaptive, in particular, they are robust against changes in load conditions. Second, we show that partitioning policies, whereby the environment is partitioned among the vehicles and each vehicle follows a certain set of rules in its own region, are optimal in heavy-load conditions. Finally, by combining the new class of algorithms for the 1-DTRP with suitable partitioning policies, we design distributed algorithms for the m-DTRP problem that (i) are spatially distributed, scalable to large networks, and adaptive to network changes, (ii) are within a constant-factor of optimal in heavy-load conditions and stabilize the system in any load condition. Simulation results are presented and discussed.<|reference_end|> | arxiv | @article{pavone2009distributed,
title={Distributed and Adaptive Algorithms for Vehicle Routing in a Stochastic
and Dynamic Environment},
author={Marco Pavone, Emilio Frazzoli, Francesco Bullo},
journal={arXiv preprint arXiv:0903.3624},
year={2009},
doi={10.1109/TAC.2010.2092850},
archivePrefix={arXiv},
eprint={0903.3624},
primaryClass={cs.RO}
} | pavone2009distributed |
arxiv-6825 | 0903.3627 | Statistical RIP and Semi-Circle Distribution of Incoherent Dictionaries | <|reference_start|>Statistical RIP and Semi-Circle Distribution of Incoherent Dictionaries: In this paper we formulate and prove a statistical version of the Candes-Tao restricted isometry property (SRIP for short) which holds in general for any incoherent dictionary which is a disjoint union of orthonormal bases. In addition, we prove that, under appropriate normalization, the eigenvalues of the associated Gram matrix fluctuate around 1 according to the Wigner semicircle distribution. The result is then applied to various dictionaries that arise naturally in the setting of finite harmonic analysis, giving, in particular, a better understanding on a remark of Applebaum-Howard-Searle-Calderbank concerning RIP for the Heisenberg dictionary of chirp like functions.<|reference_end|> | arxiv | @article{gurevich2009statistical,
title={Statistical RIP and Semi-Circle Distribution of Incoherent Dictionaries},
author={Shamgar Gurevich (Berkeley) and Ronny Hadani (Chicago)},
journal={arXiv preprint arXiv:0903.3627},
year={2009},
archivePrefix={arXiv},
eprint={0903.3627},
primaryClass={cs.IT cs.DM math.IT math.PR}
} | gurevich2009statistical |
arxiv-6826 | 0903.3667 | How random are a learner's mistakes? | <|reference_start|>How random are a learner's mistakes?: Given a random binary sequence $X^{(n)}$ of random variables, $X_{t},$ $t=1,2,...,n$, for instance, one that is generated by a Markov source (teacher) of order $k^{*}$ (each state represented by $k^{*}$ bits). Assume that the probability of the event $X_{t}=1$ is constant and denote it by $\beta$. Consider a learner which is based on a parametric model, for instance a Markov model of order $k$, who trains on a sequence $x^{(m)}$ which is randomly drawn by the teacher. Test the learner's performance by giving it a sequence $x^{(n)}$ (generated by the teacher) and check its predictions on every bit of $x^{(n)}.$ An error occurs at time $t$ if the learner's prediction $Y_{t}$ differs from the true bit value $X_{t}$. Denote by $\xi^{(n)}$ the sequence of errors where the error bit $\xi_{t}$ at time $t$ equals 1 or 0 according to whether the event of an error occurs or not, respectively. Consider the subsequence $\xi^{(\nu)}$ of $\xi^{(n)}$ which corresponds to the errors of predicting a 0, i.e., $\xi^{(\nu)}$ consists of the bits of $\xi^{(n)}$ only at times $t$ such that $Y_{t}=0.$ In this paper we compute an estimate on the deviation of the frequency of 1s of $\xi^{(\nu)}$ from $\beta$. The result shows that the level of randomness of $\xi^{(\nu)}$ decreases relative to an increase in the complexity of the learner.<|reference_end|> | arxiv | @article{ratsaby2009how,
title={How random are a learner's mistakes?},
author={Joel Ratsaby},
journal={arXiv preprint arXiv:0903.3667},
year={2009},
archivePrefix={arXiv},
eprint={0903.3667},
primaryClass={cs.LG cs.IT math.IT math.PR}
} | ratsaby2009how |
arxiv-6827 | 0903.3669 | Comment on "Language Trees and Zipping" arXiv:cond-mat/0108530 | <|reference_start|>Comment on "Language Trees and Zipping" arXiv:cond-mat/0108530: Every encoding carries a priori information if the encoding represents any semantic information of the universe or object. Encoding means mapping from the universe to a string or strings of digits. "Semantic" here is used in the model-theoretic sense, i.e. the denotation of the object. If the encoding, or string of symbols, is an adequate and true mapping of the model or object, and the mapping is recursive or computable, then the distance between two strings (texts) maps the distance between the models. We are then able to measure that distance by computing the distance between the two strings. Otherwise, we may take a misleading course. A "language tree" may not be a family tree in the sense of historical linguistics; rather, it may just reflect similarity.<|reference_end|> | arxiv | @article{wang2009comment,
title={Comment on "Language Trees and Zipping" arXiv:cond-mat/0108530},
author={Xiuli Wang},
journal={arXiv preprint arXiv:0903.3669},
year={2009},
archivePrefix={arXiv},
eprint={0903.3669},
primaryClass={cs.AI cs.IT math.IT}
} | wang2009comment |
arxiv-6828 | 0903.3676 | Combinatorial Ricci Curvature and Laplacians for Image Processing | <|reference_start|>Combinatorial Ricci Curvature and Laplacians for Image Processing: A new Combinatorial Ricci curvature and Laplacian operators for grayscale images are introduced and tested on 2D synthetic, natural and medical images. Analogue formulae for voxels are also obtained. These notions are based upon more general concepts developed by R. Forman. Further applications, in particular a fitting Ricci flow, are discussed.<|reference_end|> | arxiv | @article{saucan2009combinatorial,
title={Combinatorial Ricci Curvature and Laplacians for Image Processing},
  author={Emil Saucan, Eli Appleboim, Gershon Wolansky, Yehoshua Y. Zeevi},
journal={Proceedings of CISP'09, Vol. 2, 992-997, 2009},
year={2009},
number={CCIT Report # 722 March 2009 (EE Pub. No. 1679)},
archivePrefix={arXiv},
eprint={0903.3676},
primaryClass={cs.CV cs.CG}
} | saucan2009combinatorial |
arxiv-6829 | 0903.3685 | Quasiperfect domination in triangular lattices | <|reference_start|>Quasiperfect domination in triangular lattices: A vertex subset $S$ of a graph $G$ is a perfect (resp. quasiperfect) dominating set in $G$ if each vertex $v$ of $G\setminus S$ is adjacent to only one vertex ($d_v\in\{1,2\}$ vertices) of $S$. Perfect and quasiperfect dominating sets in the regular tessellation graph of Schl\"afli symbol $\{3,6\}$ and in its toroidal quotients are investigated, yielding the classification of their perfect dominating sets and most of their quasiperfect dominating sets $S$ with induced components of the form $K_{\nu}$, where $\nu\in\{1,2,3\}$ depends only on $S$.<|reference_end|> | arxiv | @article{dejter2009quasiperfect,
title={Quasiperfect domination in triangular lattices},
author={Italo J. Dejter},
journal={Discussiones Mathematicae Graph Theory 29(2009) 179-198},
year={2009},
archivePrefix={arXiv},
eprint={0903.3685},
primaryClass={math.CO cs.IT math.IT}
} | dejter2009quasiperfect |
arxiv-6830 | 0903.3690 | Computations modulo regular chains | <|reference_start|>Computations modulo regular chains: The computation of triangular decompositions is based on two fundamental operations: polynomial GCDs modulo regular chains and regularity tests modulo saturated ideals. We propose new algorithms for these core operations relying on modular methods and fast polynomial arithmetic. Our strategies also take advantage of the context in which these operations are performed. We report on extensive experimentation, comparing our code to pre-existing Maple implementations, as well as more optimized Magma functions. In most cases, our new code outperforms the other packages by several orders of magnitude.<|reference_end|> | arxiv | @article{li2009computations,
title={Computations modulo regular chains},
author={Xin Li and Marc Moreno Maza and Wei Pan},
journal={arXiv preprint arXiv:0903.3690},
year={2009},
archivePrefix={arXiv},
eprint={0903.3690},
primaryClass={cs.SC}
} | li2009computations |
arxiv-6831 | 0903.3696 | Notes on solving and playing peg solitaire on a computer | <|reference_start|>Notes on solving and playing peg solitaire on a computer: We consider the one-person game of peg solitaire played on a computer. Two popular board shapes are the 33-hole cross-shaped board, and the 15-hole triangle board---we use them as examples throughout. The basic game begins from a full board with one peg missing and the goal is to finish at a board position with one peg. First, we discuss ways to solve the basic game on a computer. Then we consider the problem of quickly distinguishing board positions where the goal can still be reached ("winning" board positions) from those where it cannot. This enables a computer to alert the player if a jump under consideration leads to a dead end. On the 15-hole triangle board, it is possible to identify all winning board positions (from any single vacancy start) by storing a key set of 437 board positions. For the "central game" on the 33-hole cross-shaped board, we can identify all winning board positions by storing 839,536 board positions. By viewing a successful game as a traversal of a directed graph of winning board positions, we apply a simple algorithm to count the number of ways to traverse this graph, and calculate that the total number of solutions to the central game is 40,861,647,040,079,968. Our analysis can also determine how quickly we can reach a "dead board position", where a one peg finish is no longer possible.<|reference_end|> | arxiv | @article{bell2009notes,
title={Notes on solving and playing peg solitaire on a computer},
author={George I. Bell},
journal={arXiv preprint arXiv:0903.3696},
year={2009},
archivePrefix={arXiv},
eprint={0903.3696},
primaryClass={math.CO cs.DM math.HO}
} | bell2009notes |
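The solution count quoted in the abstract above comes from viewing a successful game as a traversal of a directed graph of winning board positions and counting the traversals of that graph. A minimal sketch of the counting step on a hypothetical toy graph (the node names and edges below are illustrative, not real board positions):

```python
# Hypothetical miniature game graph: nodes stand for board positions,
# edges for legal jumps that stay inside the set of winning positions.
GRAPH = {
    "start": ["a", "b"],
    "a": ["c"],
    "b": ["c", "goal"],
    "c": ["goal"],
    "goal": [],
}

def count_paths(node, goal="goal", graph=GRAPH, memo=None):
    """Count distinct directed paths from node to goal by memoized DFS;
    applied to the real board graph, this totals the number of solutions."""
    if memo is None:
        memo = {}
    if node == goal:
        return 1
    if node not in memo:
        memo[node] = sum(count_paths(nxt, goal, graph, memo)
                         for nxt in graph[node])
    return memo[node]
```

Memoization is what makes the count tractable: each board position is expanded once, even though it may lie on astronomically many solution paths.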
arxiv-6832 | 0903.3715 | Optimal sparse CDMA detection at high load | <|reference_start|>Optimal sparse CDMA detection at high load: Balancing efficiency of bandwidth use and complexity of detection involves choosing a suitable load for a multi-access channel. In the case of synchronous CDMA, with random codes, it is possible to demonstrate the existence of a threshold in the load beyond which there is an apparent jump in computational complexity. At small load unit clause propagation can determine a jointly optimal detection of sources on a noiseless channel, but fails at high load. Analysis provides insight into the difference between the standard dense random codes and sparse codes, and the limitations of optimal detection in the sparse case.<|reference_end|> | arxiv | @article{raymond2009optimal,
title={Optimal sparse CDMA detection at high load},
author={Jack Raymond},
journal={arXiv preprint arXiv:0903.3715},
year={2009},
archivePrefix={arXiv},
eprint={0903.3715},
primaryClass={cs.IT math.IT}
} | raymond2009optimal |
arxiv-6833 | 0903.3741 | A System F accounting for scalars | <|reference_start|>A System F accounting for scalars: The Algebraic lambda-calculus and the Linear-Algebraic lambda-calculus extend the lambda-calculus with the possibility of making arbitrary linear combinations of terms. In this paper we provide a fine-grained, System F-like type system for the linear-algebraic lambda-calculus. We show that this "scalar" type system enjoys both the subject-reduction property and the strong-normalisation property, our main technical results. The latter yields a significant simplification of the linear-algebraic lambda-calculus itself, by removing the need for some restrictions in its reduction rules. But the more important, original feature of this scalar type system is that it keeps track of 'the amount of a type' that is present in each term. As an example of its use, we shown that it can serve as a guarantee that the normal form of a term is barycentric, i.e that its scalars are summing to one.<|reference_end|> | arxiv | @article{arrighi2009a,
title={A System F accounting for scalars},
author={Pablo Arrighi (ENS-Lyon, LIP and Universite de Grenoble, LIG),
Alejandro Diaz-Caro (Universite de Grenoble, LIG, and Universite Paris-Nord,
Laboratoire LIPN)},
journal={Logical Methods in Computer Science, Volume 8, Issue 1 (February
27, 2012) lmcs:846},
year={2009},
doi={10.2168/LMCS-8(1:11)2012},
archivePrefix={arXiv},
eprint={0903.3741},
primaryClass={cs.LO cs.PL quant-ph}
} | arrighi2009a |
arxiv-6834 | 0903.3759 | GeoP2P: An adaptive peer-to-peer overlay for efficient search and update of spatial information | <|reference_start|>GeoP2P: An adaptive peer-to-peer overlay for efficient search and update of spatial information: This paper proposes a fully decentralized peer-to-peer overlay structure GeoP2P, to facilitate geographic location based search and retrieval of information. Certain limitations of centralized geographic indexes favor peer-to-peer organization of the information, which, in addition to avoiding performance bottleneck, allows autonomy over local information. Peer-to-peer systems for geographic or multidimensional range queries built on existing DHTs suffer from the inaccuracy in linearization of the multidimensional space. Other overlay structures that are based on hierarchical partitioning of the search space are not scalable because they use special super-peers to represent the nodes in the hierarchy. GeoP2P partitions the search space hierarchically, maintains the overlay structure and performs the routing without the need of any super-peers. Although similar fully-decentralized overlays have been previously proposed, they lack the ability to dynamically grow and retract the partition hierarchy when the number of peers change. GeoP2P provides such adaptive features with minimum perturbation of the system state. Such adaptation makes both the routing delay and the state size of each peer logarithmic to the total number of peers, irrespective of the size of the multidimensional space. Our analysis also reveals that the overlay structure and the routing algorithm are generic and independent of several aspects of the partitioning hierarchy, such as the geometric shape of the zones or the dimensionality of the search space.<|reference_end|> | arxiv | @article{asaduzzaman2009geop2p:,
title={GeoP2P: An adaptive peer-to-peer overlay for efficient search and update
of spatial information},
author={Shah Asaduzzaman, Gregor v. Bochmann},
journal={arXiv preprint arXiv:0903.3759},
year={2009},
archivePrefix={arXiv},
eprint={0903.3759},
primaryClass={cs.NI cs.DB cs.DC}
} | asaduzzaman2009geop2p: |
arxiv-6835 | 0903.3786 | Multiple-Input Multiple-Output Gaussian Broadcast Channels with Confidential Messages | <|reference_start|>Multiple-Input Multiple-Output Gaussian Broadcast Channels with Confidential Messages: This paper considers the problem of secret communication over a two-receiver multiple-input multiple-output (MIMO) Gaussian broadcast channel. The transmitter has two independent messages, each of which is intended for one of the receivers but needs to be kept asymptotically perfectly secret from the other. It is shown that, surprisingly, under a matrix power constraint both messages can be simultaneously transmitted at their respective maximal secrecy rates. To prove this result, the MIMO Gaussian wiretap channel is revisited and a new characterization of its secrecy capacity is provided via a new coding scheme that uses artificial noise and random binning.<|reference_end|> | arxiv | @article{liu2009multiple-input,
title={Multiple-Input Multiple-Output Gaussian Broadcast Channels with
Confidential Messages},
author={Ruoheng Liu, Tie Liu, H. Vincent Poor, Shlomo Shamai (Shitz)},
journal={arXiv preprint arXiv:0903.3786},
year={2009},
archivePrefix={arXiv},
eprint={0903.3786},
primaryClass={cs.IT cs.CR math.IT}
} | liu2009multiple-input |
arxiv-6836 | 0903.3797 | Personal report of the 3rd ECMDA-FA'07 conference | <|reference_start|>Personal report of the 3rd ECMDA-FA'07 conference: Manuscript notes taken during the ECMDA 2008 conference. I give the full conference program (the title of each article and the name of the presenter), detailing some of the presentations.<|reference_end|> | arxiv | @article{combemale2009personal,
title={Personal report of the 3rd ECMDA-FA'07 conference},
  author={Beno\^it Combemale (IRIT)},
journal={arXiv preprint arXiv:0903.3797},
year={2009},
archivePrefix={arXiv},
eprint={0903.3797},
primaryClass={cs.SE}
} | combemale2009personal |
arxiv-6837 | 0903.3850 | Using Structural Recursion for Corecursion | <|reference_start|>Using Structural Recursion for Corecursion: We propose a (limited) solution to the problem of constructing stream values defined by recursive equations that do not respect the guardedness condition. The guardedness condition is imposed on definitions of corecursive functions in Coq, AGDA, and other higher-order proof assistants. In this paper, we concentrate in particular on those non-guarded equations where recursive calls appear under functions. We use a correspondence between streams and functions over natural numbers to show that some classes of non-guarded definitions can be modelled through the encoding as structural recursive functions. In practice, this work extends the class of stream values that can be defined in a constructive type theory-based theorem prover with inductive and coinductive types, structural recursion and guarded corecursion<|reference_end|> | arxiv | @article{bertot2009using,
title={Using Structural Recursion for Corecursion},
author={Yves Bertot (INRIA Sophia Antipolis), Ekaterina Komendantskaya (INRIA
Sophia Antipolis)},
journal={Types 2008 5497 (2008) 220-236},
year={2009},
archivePrefix={arXiv},
eprint={0903.3850},
primaryClass={cs.LO}
} | bertot2009using |
arxiv-6838 | 0903.3889 | On generating independent random strings | <|reference_start|>On generating independent random strings: It is shown that from two strings that are partially random and independent (in the sense of Kolmogorov complexity) it is possible to effectively construct polynomially many strings that are random and pairwise independent. If the two initial strings are random, then the above task can be performed in polynomial time. It is also possible to construct in polynomial time a random string, from two strings that have constant randomness rate.<|reference_end|> | arxiv | @article{zimand2009on,
title={On generating independent random strings},
author={Marius Zimand},
journal={arXiv preprint arXiv:0903.3889},
year={2009},
archivePrefix={arXiv},
eprint={0903.3889},
primaryClass={cs.IT cs.CC math.IT}
} | zimand2009on |
arxiv-6839 | 0903.3900 | Optimized Implementation of Elliptic Curve Based Additive Homomorphic Encryption for Wireless Sensor Networks | <|reference_start|>Optimized Implementation of Elliptic Curve Based Additive Homomorphic Encryption for Wireless Sensor Networks: When deploying wireless sensor networks (WSNs) in public environments it may become necessary to secure their data storage and transmission against possible attacks such as node-compromise and eavesdropping. The nodes feature only small computational and energy resources, thus requiring efficient algorithms. As a solution for this problem the TinyPEDS approach was proposed in [7], which utilizes the Elliptic Curve ElGamal (EC-ElGamal) cryptosystem for additive homomorphic encryption allowing concealed data aggregation. This work presents an optimized implementation of EC-ElGamal on a MicaZ mote, which is a typical sensor node platform with 8-bit processor for WSNs. Compared to the best previous result, our implementation is at least 44% faster for fixed-point multiplication. Because most parts of the algorithm are similar to standard Elliptic Curve algorithms, the results may be reused in other realizations on constrained devices as well.<|reference_end|> | arxiv | @article{ugus2009optimized,
title={Optimized Implementation of Elliptic Curve Based Additive Homomorphic
Encryption for Wireless Sensor Networks},
author={Osman Ugus, Dirk Westhoff, Ralf Laue, Abdulhadi Shoufan, and Sorin A.
Huss},
journal={arXiv preprint arXiv:0903.3900},
year={2009},
archivePrefix={arXiv},
eprint={0903.3900},
primaryClass={cs.CR cs.PF}
} | ugus2009optimized |
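The concealed data aggregation that motivates the paper above relies on the additive homomorphism of EC-ElGamal: ciphertexts of m1 and m2 combine component-wise into a ciphertext of m1 + m2. A sketch of that property with the elliptic-curve group replaced by the additive group Z_N as a stand-in (all parameters below are hypothetical; this is not the paper's MicaZ implementation, and Z_N offers no security):

```python
import random

N = 7919   # group order (hypothetical parameter, prime)
G = 2      # generator; "scalar multiplication" is just multiplication mod N

def keygen():
    """Private key x, public key Q = x*G."""
    x = random.randrange(1, N)
    return x, (x * G) % N

def enc(m, Q):
    """Enc(m) = (r*G, m*G + r*Q) for a fresh random r."""
    r = random.randrange(1, N)
    return ((r * G) % N, (m * G + r * Q) % N)

def add(c1, c2):
    """Ciphertexts add component-wise: Enc(m1) + Enc(m2) = Enc(m1 + m2)."""
    return ((c1[0] + c2[0]) % N, (c1[1] + c2[1]) % N)

def dec(c, x):
    """Recover m*G = c[1] - x*c[0], then m by brute force (small m only)."""
    mG = (c[1] - x * c[0]) % N
    for m in range(N):
        if (m * G) % N == mG:
            return m
```

The brute-force final step mirrors the small discrete-log search needed in real additive EC-ElGamal, which is why the scheme suits small aggregated sensor readings.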
arxiv-6840 | 0903.3926 | Designing a GUI for Proofs - Evaluation of an HCI Experiment | <|reference_start|>Designing a GUI for Proofs - Evaluation of an HCI Experiment: Often user interfaces of theorem proving systems focus on assisting particularly trained and skilled users, i.e., proof experts. As a result, the systems are difficult to use for non-expert users. This paper describes a paper and pencil HCI experiment, in which (non-expert) students were asked to make suggestions for a GUI for an interactive system for mathematical proofs. They had to explain the usage of the GUI by applying it to construct a proof sketch for a given theorem. The evaluation of the experiment provides insights for the interaction design for non-expert users and the needs and wants of this user group.<|reference_end|> | arxiv | @article{homik2009designing,
title={Designing a GUI for Proofs - Evaluation of an HCI Experiment},
author={Martin Homik, Andreas Meier},
journal={arXiv preprint arXiv:0903.3926},
year={2009},
number={SEKI Working-Paper SWP-2005-01},
archivePrefix={arXiv},
eprint={0903.3926},
primaryClass={cs.AI}
} | homik2009designing |
arxiv-6841 | 0903.3995 | Gradient-based adaptive interpolation in super-resolution image restoration | <|reference_start|>Gradient-based adaptive interpolation in super-resolution image restoration: This paper presents a super-resolution method based on gradient-based adaptive interpolation. In this method, in addition to considering the distance between the interpolated pixel and the neighboring valid pixels, the interpolation coefficients take the local gradient of the original image into account. The smaller the local gradient of a pixel is, the more influence it should have on the interpolated pixel. The interpolated high-resolution image is finally deblurred by applying a Wiener filter. Experimental results show that our proposed method not only substantially improves the subjective and objective quality of restored images, especially by enhancing edges, but also is robust to registration error and has low computational complexity.<|reference_end|> | arxiv | @article{chu2009gradient-based,
title={Gradient-based adaptive interpolation in super-resolution image
restoration},
author={Jinyu Chu, Ju Liu, Jianping Qiao, Xiaoling Wang and Yujun Li},
journal={arXiv preprint arXiv:0903.3995},
year={2009},
archivePrefix={arXiv},
eprint={0903.3995},
primaryClass={cs.MM cs.CV}
} | chu2009gradient-based |
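The weighting rule described in the abstract above (a neighbor's influence shrinking with both its distance and its local gradient) can be sketched in one dimension; the exact weight formula below is an assumed illustration, not the paper's:

```python
def adaptive_interpolate(samples, x):
    """Interpolate a value at position x from (position, value, gradient)
    samples. Each neighbor's weight falls with its distance AND with its
    local gradient magnitude, so flat regions dominate the estimate.
    The weight formula 1/(d * (1 + |grad|)) is an assumed illustration.
    """
    num = den = 0.0
    for pos, val, grad in samples:
        d = abs(x - pos)
        if d == 0.0:
            return val  # exactly on a valid sample
        w = 1.0 / (d * (1.0 + abs(grad)))
        num += w * val
        den += w
    return num / den
```

With two equidistant neighbors, the one sitting in a flat region (small gradient) pulls the interpolated value toward itself, which is the edge-preserving behavior the abstract claims.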
arxiv-6842 | 0903.4014 | Construction of Codes for Wiretap Channel and Secret Key Agreement from Correlated Source Outputs by Using Sparse Matrices | <|reference_start|>Construction of Codes for Wiretap Channel and Secret Key Agreement from Correlated Source Outputs by Using Sparse Matrices: The aim of this paper is to prove coding theorems for the wiretap channel coding problem and the secret key agreement problem based on the notion of a hash property for an ensemble of functions. These theorems imply that codes using sparse matrices can achieve the optimal rate. Furthermore, fixed-rate universal coding theorems for a wiretap channel and a secret key agreement are also proved.<|reference_end|> | arxiv | @article{muramatsu2009construction,
title={Construction of Codes for Wiretap Channel and Secret Key Agreement from
Correlated Source Outputs by Using Sparse Matrices},
author={Jun Muramatsu and Shigeki Miyake},
journal={IEEE Transactions on Information Theory, vol. 58, no. 2, pp.
671-692, Feb. 2012},
year={2009},
archivePrefix={arXiv},
eprint={0903.4014},
primaryClass={cs.IT cs.CR math.IT}
} | muramatsu2009construction |
arxiv-6843 | 0903.4035 | BLOGRANK: Ranking Weblogs Based On Connectivity And Similarity Features | <|reference_start|>BLOGRANK: Ranking Weblogs Based On Connectivity And Similarity Features: A large part of the hidden web resides in weblog servers. New content is produced in a daily basis and the work of traditional search engines turns to be insufficient due to the nature of weblogs. This work summarizes the structure of the blogosphere and highlights the special features of weblogs. In this paper we present a method for ranking weblogs based on the link graph and on several similarity characteristics between weblogs. First we create an enhanced graph of connected weblogs and add new types of edges and weights utilising many weblog features. Then, we assign a ranking to each weblog using our algorithm, BlogRank, which is a modified version of PageRank. For the validation of our method we run experiments on a weblog dataset, which we process and adapt to our search engine. (http://spiderwave.aueb.gr/Blogwave). The results suggest that the use of the enhanced graph and the BlogRank algorithm is preferred by the users.<|reference_end|> | arxiv | @article{kritikopoulos2009blogrank:,
title={BLOGRANK: Ranking Weblogs Based On Connectivity And Similarity Features},
author={A. Kritikopoulos, M. Sideri, I. Varlamis},
journal={Proceedings of the 2nd international Workshop on Advanced
Architectures and Algorithms For internet Delivery and Applications (Pisa,
Italy, October 10 - 10, 2006). AAA-IDEA '06},
year={2009},
archivePrefix={arXiv},
eprint={0903.4035},
primaryClass={cs.IR}
} | kritikopoulos2009blogrank: |
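BlogRank is described above as a modified PageRank run over an enhanced graph with new edge types and weights. A generic sketch of PageRank-style power iteration on such a weighted link graph (the actual edge types and similarity weights BlogRank uses are not reproduced here):

```python
def blogrank_like(weights, damping=0.85, iters=50):
    """PageRank-style power iteration on a weighted link graph.

    weights: dict mapping each node to a dict {target: edge_weight};
    rank flowing out of a node is split proportionally to edge weights.
    """
    nodes = list(weights)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for u, out in weights.items():
            total = sum(out.values())
            if total == 0:
                # Dangling node: spread its rank uniformly.
                for v in nodes:
                    new[v] += damping * rank[u] / n
            else:
                for v, w in out.items():
                    new[v] += damping * rank[u] * w / total
        rank = new
    return rank
```

Splitting outgoing rank by edge weight, rather than uniformly, is exactly the hook that lets similarity features between weblogs bias the ranking.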
arxiv-6844 | 0903.4036 | Feedback control logic synthesis for non safe Petri nets | <|reference_start|>Feedback control logic synthesis for non safe Petri nets: This paper addresses the problem of forbidden states of non-safe Petri nets (PN) modelling discrete event systems. To prevent the forbidden states, it is possible to use conditions or predicates associated with transitions. Generally, there are many forbidden states, and thus many complex conditions are associated with the transitions. A new idea for computing predicates in non-safe Petri nets will be presented. Using this method, we can construct a maximally permissive controller if it exists.<|reference_end|> | arxiv | @article{dideban2009feedback,
title={Feedback control logic synthesis for non safe Petri nets},
author={Abbas Dideban, Hassane Alla (GIPSA-lab)},
journal={13th IFAC Symposium on INFORMATION CONTROL PROBLEMS IN
MANUFACTURING, Moscou : Russie (2009)},
year={2009},
archivePrefix={arXiv},
eprint={0903.4036},
primaryClass={cs.IT math.IT}
} | dideban2009feedback |
arxiv-6845 | 0903.4046 | Boolean Logic with Fault Tolerant Coding | <|reference_start|>Boolean Logic with Fault Tolerant Coding: Error-detectable and error-correctable coding in Hamming space was investigated to discover possible fault-tolerant coding constellations that can implement Boolean logic with a fault-tolerant property. Basic logic operators of Boolean algebra were developed to apply fault-tolerant coding in logic circuits. It was shown that the application of three-bit fault-tolerant codes provides digital systems with an auto-recovery capability, without the need to design additional fault-tolerance mechanisms.<|reference_end|> | arxiv | @article{alagoz2009boolean,
title={Boolean Logic with Fault Tolerant Coding},
author={B. Baykant Alagoz},
journal={arXiv preprint arXiv:0903.4046},
year={2009},
archivePrefix={arXiv},
eprint={0903.4046},
primaryClass={cs.OH}
} | alagoz2009boolean |
arxiv-6846 | 0903.4053 | The generating of Fractal Images Using MathCAD Program | <|reference_start|>The generating of Fractal Images Using MathCAD Program: This paper presents the graphic representation in the z-plane of the first three iterations of the algorithm that generates the Sierpinski Gasket. It analyzes the influence of the f(z) map when we represent fractal images.<|reference_end|> | arxiv | @article{stefan2009the,
title={The generating of Fractal Images Using MathCAD Program},
author={Laura Stefan},
journal={Ann. Univ. Tibiscus Comp. Sci. Series 6 (2008), 211-220},
year={2009},
archivePrefix={arXiv},
eprint={0903.4053},
primaryClass={cs.MS}
} | stefan2009the |
arxiv-6847 | 0903.4100 | Decentralized Management of Bi-modal Network Resources in a Distributed Stream Processing Platform | <|reference_start|>Decentralized Management of Bi-modal Network Resources in a Distributed Stream Processing Platform: This paper presents resource management techniques for allocating communication and computational resources in a distributed stream processing platform. The platform is designed to exploit the synergy of two classes of network connections -- dedicated and opportunistic. Previous studies we conducted have demonstrated the benefits of such bi-modal resource organization that combines small pools of dedicated computers with a very large pool of opportunistic computing capacities of idle computers to serve high throughput computing applications. This paper extends the idea of bi-modal resource organization into the management of communication resources. Since distributed stream processing applications demand large volume of data transmission between processing sites at a consistent rate, adequate control over the network resources is important to assure a steady flow of processing. The system model used in this paper is a platform where stream processing servers at distributed sites are interconnected with a combination of dedicated and opportunistic communication links. Two pertinent resource allocation problems are analyzed in details and solved using decentralized algorithms. One is the mapping of the stream processing tasks on the processing and the communication resources. The other is the adaptive re-allocation of the opportunistic communication links due to the variations in their capacities. Overall optimization goal is higher task throughput and better utilization of the expensive dedicated links. The evaluation demonstrates that the algorithms are able to exploit the synergy of bi-modal communication links towards achieving the optimization goals.<|reference_end|> | arxiv | @article{asaduzzaman2009decentralized,
title={Decentralized Management of Bi-modal Network Resources in a Distributed
Stream Processing Platform},
author={Shah Asaduzzaman, Muthucumaru Maheswaran},
journal={arXiv preprint arXiv:0903.4100},
year={2009},
archivePrefix={arXiv},
eprint={0903.4100},
primaryClass={cs.DC cs.NI}
} | asaduzzaman2009decentralized |
arxiv-6848 | 0903.4101 | Polylog space compression, pushdown compression, and Lempel-Ziv are incomparable | <|reference_start|>Polylog space compression, pushdown compression, and Lempel-Ziv are incomparable: The pressing need for efficient compression schemes for XML documents has recently been focused on stack computation, and in particular calls for a formulation of information-lossless stack or pushdown compressors that allows a formal analysis of their performance and a more ambitious use of the stack in XML compression, where so far it is mainly connected to parsing mechanisms. In this paper we introduce the model of pushdown compressor, based on pushdown transducers that compute a single injective function while keeping the widest generality regarding stack computation. We also consider online compression algorithms that use at most polylogarithmic space (plogon). These algorithms correspond to compressors in the data stream model. We compare the performance of these two families of compressors with each other and with the general purpose Lempel-Ziv algorithm. This comparison is made without any a priori assumption on the data's source and considering the asymptotic compression ratio for infinite sequences. We prove that in all cases they are incomparable.<|reference_end|> | arxiv | @article{mayordomo2009polylog,
title={Polylog space compression, pushdown compression, and Lempel-Ziv are
incomparable},
author={Elvira Mayordomo, Philippe Moser, Sylvain Perifel},
journal={arXiv preprint arXiv:0903.4101},
year={2009},
archivePrefix={arXiv},
eprint={0903.4101},
primaryClass={cs.CC cs.IR}
} | mayordomo2009polylog |
arxiv-6849 | 0903.4113 | Overlay Structure for Large Scale Content Sharing: Leveraging Geography as the Basis for Routing Locality | <|reference_start|>Overlay Structure for Large Scale Content Sharing: Leveraging Geography as the Basis for Routing Locality: In this paper we place our arguments on two related issues in the design of generalized structured peer-to-peer overlays. First, we argue that for the large-scale content-sharing applications, lookup and content transport functions need to be treated separately. Second, to create a location-based routing overlay suitable for content sharing and other applications, we argue that off-the-shelf geographic coordinates of Internet-connected hosts can be used as a basis. We then outline the design principles and present a design for the generalized routing overlay based on adaptive hierarchical partitioning of the geographical space.<|reference_end|> | arxiv | @article{asaduzzaman2009overlay,
title={Overlay Structure for Large Scale Content Sharing: Leveraging Geography
as the Basis for Routing Locality},
author={Shah Asaduzzaman, Gregor v. Bochmann},
journal={arXiv preprint arXiv:0903.4113},
year={2009},
archivePrefix={arXiv},
eprint={0903.4113},
primaryClass={cs.NI cs.DC cs.MM}
} | asaduzzaman2009overlay |
arxiv-6850 | 0903.4128 | Rate Adaptation via Link-Layer Feedback for Goodput Maximization over a Time-Varying Channel | <|reference_start|>Rate Adaptation via Link-Layer Feedback for Goodput Maximization over a Time-Varying Channel: We consider adapting the transmission rate to maximize the goodput, i.e., the amount of data transmitted without error, over a continuous Markov flat-fading wireless channel. In particular, we consider schemes in which transmitter channel state is inferred from degraded causal error-rate feedback, such as packet-level ACK/NAKs in an automatic repeat request (ARQ) system. In such schemes, the choice of transmission rate affects not only the subsequent goodput but also the subsequent feedback, implying that the optimal rate schedule is given by a partially observable Markov decision process (POMDP). Because solution of the POMDP is computationally impractical, we consider simple suboptimal greedy rate assignment and show that the optimal scheme would itself be greedy if the error-rate feedback was non-degraded. Furthermore, we show that greedy rate assignment using non-degraded feedback yields a total goodput that upper bounds that of optimal rate assignment using degraded feedback. We then detail the implementation of the greedy scheme and propose a reduced-complexity greedy scheme that adapts the transmission rate only once per block of packets. We also investigate the performance of the schemes numerically, and show that the proposed greedy scheme achieves steady-state goodputs that are reasonably close to the upper bound on goodput calculated using non-degraded feedback. A similar improvement is obtained in steady-state goodput, drop rate, and average buffer occupancy in the presence of data buffers. 
We also investigate an upper bound on the performance of optimal rate assignment for a discrete approximation of the channel and show that such quantization leads to a significant loss in achievable goodput.<|reference_end|> | arxiv | @article{aggarwal2009rate,
title={Rate Adaptation via Link-Layer Feedback for Goodput Maximization over a
Time-Varying Channel},
author={Rohit Aggarwal, Philip Schniter, and C. Emre Koksal},
journal={arXiv preprint arXiv:0903.4128},
year={2009},
archivePrefix={arXiv},
eprint={0903.4128},
primaryClass={cs.IT cs.NI math.IT math.OC}
} | aggarwal2009rate |
arxiv-6851 | 0903.4130 | Pairing Heaps with Costless Meld | <|reference_start|>Pairing Heaps with Costless Meld: Improving the structure and analysis in \cite{elm0}, we give a variation of the pairing heaps that has amortized zero cost per meld (compared to an $O(\log \log{n})$ in \cite{elm0}) and the same amortized bounds for all other operations. More precisely, the new pairing heap requires: no cost per meld, O(1) per find-min and insert, $O(\log{n})$ per delete-min, and $O(\log\log{n})$ per decrease-key. These bounds are the best known for any self-adjusting heap, and match the lower bound proved by Fredman for a family of such heaps. Moreover, the changes we have done make our structure even simpler than that in \cite{elm0}.<|reference_end|> | arxiv | @article{elmasry2009pairing,
title={Pairing Heaps with Costless Meld},
author={Amr Elmasry},
journal={arXiv preprint arXiv:0903.4130},
year={2009},
archivePrefix={arXiv},
eprint={0903.4130},
primaryClass={cs.DS}
} | elmasry2009pairing |
arxiv-6852 | 0903.4132 | Switcher-random-walks: a cognitive-inspired mechanism for network exploration | <|reference_start|>Semantic memory is the subsystem of human memory that stores knowledge of concepts or meanings, as opposed to life-specific experiences. The organization of concepts within semantic memory can be understood as a semantic network, where the concepts (nodes) are associated (linked) to others depending on perceptions, similarities, etc. Lexical access is the complementary part of this system and allows the retrieval of such organized knowledge. While conceptual information is stored under a certain underlying organization (and thus gives rise to a specific topology), it is crucial to have accurate access to any of the information units, e.g. the concepts, for efficiently retrieving semantic information for real-time needs. An example of an information retrieval process occurs in verbal fluency tasks, and it is known to involve two different mechanisms: -clustering-, or generating words within a subcategory, and, when a subcategory is exhausted, -switching- to a new subcategory. We extended this approach to random-walking on a network (clustering) in combination with jumping (switching) to any node with a certain probability, and derived its analytical expression based on Markov chains. Results show that this dual mechanism contributes to optimizing the exploration of different network models in terms of the mean first passage time. Additionally, this cognitively inspired dual mechanism opens a new framework to better understand and evaluate exploration, propagation and transport phenomena in other complex systems where switching-like phenomena are feasible.<|reference_end|> | arxiv | @article{goñi2009switcher-random-walks:,
title={Switcher-random-walks: a cognitive-inspired mechanism for network
exploration},
author={Joaquín Goñi, Iñigo Martincorena, Bernat Corominas-Murtra,
Gonzalo Arrondo, Sergio Ardanza-Trevijano, Pablo Villoslada},
journal={International Journal of Bifurcation and Chaos 20, 913-922 (2010)},
year={2009},
doi={10.1142/S0218127410026204},
archivePrefix={arXiv},
eprint={0903.4132},
primaryClass={cs.AI cond-mat.dis-nn physics.soc-ph}
} | goñi2009switcher-random-walks: |
arxiv-6853 | 0903.4207 | MacWilliams Identities for Codes on Graphs | <|reference_start|>MacWilliams Identities for Codes on Graphs: The MacWilliams identity for linear time-invariant convolutional codes that has recently been found by Gluesing-Luerssen and Schneider is proved concisely, and generalized to arbitrary group codes on graphs. A similar development yields a short, transparent proof of the dual sum-product update rule.<|reference_end|> | arxiv | @article{forney2009macwilliams,
title={MacWilliams Identities for Codes on Graphs},
author={G. David Forney Jr},
journal={arXiv preprint arXiv:0903.4207},
year={2009},
archivePrefix={arXiv},
eprint={0903.4207},
primaryClass={cs.IT math.IT}
} | forney2009macwilliams |
arxiv-6854 | 0903.4217 | Conditional Probability Tree Estimation Analysis and Algorithms | <|reference_start|>We consider the problem of estimating the conditional probability of a label in time $O(\log n)$, where $n$ is the number of possible labels. We analyze a natural reduction of this problem to a set of binary regression problems organized in a tree structure, proving a regret bound that scales with the depth of the tree. Motivated by this analysis, we propose the first online algorithm which provably constructs a logarithmic depth tree on the set of labels to solve this problem. We test the algorithm empirically, showing that it works successfully on a dataset with roughly $10^6$ labels.<|reference_end|> | arxiv | @article{beygelzimer2009conditional,
title={Conditional Probability Tree Estimation Analysis and Algorithms},
author={Alina Beygelzimer, John Langford, Yuri Lifshits, Gregory Sorkin, and
Alex Strehl},
journal={arXiv preprint arXiv:0903.4217},
year={2009},
archivePrefix={arXiv},
eprint={0903.4217},
primaryClass={cs.LG cs.AI}
} | beygelzimer2009conditional |
arxiv-6855 | 0903.4237 | Projection-Forcing Multisets of Weight Changes | <|reference_start|>Projection-Forcing Multisets of Weight Changes: Let $F$ be a finite field. A multiset $S$ of integers is projection-forcing if for every linear function $\phi : F^n \to F^m$ whose multiset of weight changes is $S$, $\phi$ is a coordinate projection up to permutation and scaling of entries. The MacWilliams Extension Theorem from coding theory says that $S = \{0, 0, ..., 0\}$ is projection-forcing. We give a (super-polynomial) algorithm to determine whether or not a given $S$ is projection-forcing. We also give a condition that can be checked in polynomial time that implies that $S$ is projection-forcing. This result is a generalization of the MacWilliams Extension Theorem and work by the first author.<|reference_end|> | arxiv | @article{kramer2009projection-forcing,
title={Projection-Forcing Multisets of Weight Changes},
author={Josh Brown Kramer and Lucas Sabalka},
journal={Journal of Combinatorial Theory, Series A, 117(8): 1136-1142, 2010},
year={2009},
doi={10.1016/j.jcta.2010.01.005},
archivePrefix={arXiv},
eprint={0903.4237},
primaryClass={math.CO cs.IT math.IT}
} | kramer2009projection-forcing |
arxiv-6856 | 0903.4251 | On the Use of Suffix Arrays for Memory-Efficient Lempel-Ziv Data Compression | <|reference_start|>On the Use of Suffix Arrays for Memory-Efficient Lempel-Ziv Data Compression: Much research has been devoted to optimizing algorithms of the Lempel-Ziv (LZ) 77 family, both in terms of speed and memory requirements. Binary search trees and suffix trees (ST) are data structures that have been often used for this purpose, as they allow fast searches at the expense of memory usage. In recent years, there has been interest on suffix arrays (SA), due to their simplicity and low memory requirements. One key issue is that an SA can solve the sub-string problem almost as efficiently as an ST, using less memory. This paper proposes two new SA-based algorithms for LZ encoding, which require no modifications on the decoder side. Experimental results on standard benchmarks show that our algorithms, though not faster, use 3 to 5 times less memory than the ST counterparts. Another important feature of our SA-based algorithms is that the amount of memory is independent of the text to search, thus the memory that has to be allocated can be defined a priori. These features of low and predictable memory requirements are of the utmost importance in several scenarios, such as embedded systems, where memory is at a premium and speed is not critical. Finally, we point out that the new algorithms are general, in the sense that they are adequate for applications other than LZ compression, such as text retrieval and forward/backward sub-string search.<|reference_end|> | arxiv | @article{ferreira2009on,
title={On the Use of Suffix Arrays for Memory-Efficient Lempel-Ziv Data
Compression},
author={Artur Ferreira, Arlindo Oliveira, Mario Figueiredo},
journal={arXiv preprint arXiv:0903.4251},
year={2009},
doi={10.1109/DCC.2009.50},
archivePrefix={arXiv},
eprint={0903.4251},
primaryClass={cs.DS}
} | ferreira2009on |
arxiv-6857 | 0903.4258 | SEPIA: Security through Private Information Aggregation | <|reference_start|>SEPIA: Security through Private Information Aggregation: Secure multiparty computation (MPC) allows joint privacy-preserving computations on data of multiple parties. Although MPC has been studied substantially, building solutions that are practical in terms of computation and communication cost is still a major challenge. In this paper, we investigate the practical usefulness of MPC for multi-domain network security and monitoring. We first optimize MPC comparison operations for processing high volume data in near real-time. We then design privacy-preserving protocols for event correlation and aggregation of network traffic statistics, such as addition of volume metrics, computation of feature entropy, and distinct item count. Optimizing performance of parallel invocations, we implement our protocols along with a complete set of basic operations in a library called SEPIA. We evaluate the running time and bandwidth requirements of our protocols in realistic settings on a local cluster as well as on PlanetLab and show that they work in near real-time for up to 140 input providers and 9 computation nodes. Compared to implementations using existing general-purpose MPC frameworks, our protocols are significantly faster, requiring, for example, 3 minutes for a task that takes 2 days with general-purpose frameworks. This improvement paves the way for new applications of MPC in the area of networking. Finally, we run SEPIA's protocols on real traffic traces of 17 networks and show how they provide new possibilities for distributed troubleshooting and early anomaly detection.<|reference_end|> | arxiv | @article{burkhart2009sepia:,
title={SEPIA: Security through Private Information Aggregation},
author={Martin Burkhart, Mario Strasser, Dilip Many, Xenofontas Dimitropoulos},
journal={arXiv preprint arXiv:0903.4258},
year={2009},
number={TIK-Report No. 298},
archivePrefix={arXiv},
eprint={0903.4258},
primaryClass={cs.NI cs.CR}
} | burkhart2009sepia: |
arxiv-6858 | 0903.4261 | On-Line Tests | <|reference_start|>This paper presents an interactive implementation that links a human operator with a MySQL relational database administration system. The application, conceived as a multimedia presentation, illustrates how the transfer and reshaping of information between the human operator, the data-processing module, and the database that stores the information can be handled (with the help of the PHP language and the web).<|reference_end|> | arxiv | @article{pintea2009on-line,
title={On-Line Tests},
author={Florentina Anica Pintea},
journal={Ann. Univ. Tibiscus Comp. Sci. Series V (2007), 77-84},
year={2009},
archivePrefix={arXiv},
eprint={0903.4261},
primaryClass={cs.HC}
} | pintea2009on-line |
arxiv-6859 | 0903.4266 | The Risk-Utility Tradeoff for IP Address Truncation | <|reference_start|>The Risk-Utility Tradeoff for IP Address Truncation: Network operators are reluctant to share traffic data due to security and privacy concerns. Consequently, there is a lack of publicly available traces for validating and generalizing the latest results in network and security research. Anonymization is a possible solution in this context; however, it is unclear how the sanitization of data preserves characteristics important for traffic analysis. In addition, the privacy-preserving property of state-of-the-art IP address anonymization techniques has come into question by recent attacks that successfully identified a large number of hosts in anonymized traces. In this paper, we examine the tradeoff between data utility for anomaly detection and the risk of host identification for IP address truncation. Specifically, we analyze three weeks of unsampled and non-anonymized network traces from a medium-sized backbone network to assess data utility. The risk of de-anonymizing individual IP addresses is formally evaluated, using a metric based on conditional entropy. Our results indicate that truncation effectively prevents host identification but degrades the utility of data for anomaly detection. However, the degree of degradation depends on the metric used and whether network-internal or external addresses are considered. Entropy metrics are more resistant to truncation than unique counts and the detection quality of anomalies degrades much faster in internal addresses than in external addresses. In particular, the usefulness of internal address counts is lost even for truncation of only 4 bits whereas utility of external address entropy is virtually unchanged even for truncation of 20 bits.<|reference_end|> | arxiv | @article{burkhart2009the,
title={The Risk-Utility Tradeoff for IP Address Truncation},
author={Martin Burkhart, Daniela Brauckhoff, Martin May, Elisa Boschi},
journal={1st ACM Workshop on Network Data Anonymization (NDA), Fairfax,
Virginia, USA, October 2008},
year={2009},
archivePrefix={arXiv},
eprint={0903.4266},
primaryClass={cs.NI}
} | burkhart2009the |
arxiv-6860 | 0903.4267 | Delving into Transition to the Semantic Web | <|reference_start|>Semantic technologies pose a new challenge for the way in which we build and operate systems. They are tools used to represent meanings, associations, and theories, separated from data and code. Their goal is to create, discover, represent, organize, process, manage, reason about, share, and use meanings and knowledge to fulfill business, personal, or social goals.<|reference_end|> | arxiv | @article{despi2009delving,
title={Delving into Transition to the Semantic Web},
author={Ioan Despi, Lucian Luca},
journal={Ann. Univ. Tibiscus Comp. Sci. Series V (2007), 7-16},
year={2009},
archivePrefix={arXiv},
eprint={0903.4267},
primaryClass={cs.SE}
} | despi2009delving |
arxiv-6861 | 0903.4270 | Analysis of some properties for a basic Petri net model | <|reference_start|>The formalism of Petri net models provides a sound theoretical base, supported by powerful mathematical methods able to extract the information necessary for the formalization and simulation of real systems that exhibit concurrency and synchronization. The paper presents a model based on a Petri net, used to extract information about the technological production process of a food additive.<|reference_end|> | arxiv | @article{fortis2009analysis,
title={Analysis of some properties for a basic Petri net model},
author={Alexandra Emilia Fortis},
journal={Ann. Univ. Tibiscus Comp. Sci. Series V (2007), 17-24},
year={2009},
archivePrefix={arXiv},
eprint={0903.4270},
primaryClass={cs.OH}
} | fortis2009analysis |
arxiv-6862 | 0903.4283 | Pipeline Leak Detection Techniques | <|reference_start|>Leak detection systems range from simple visual line-walking and checking to complex arrangements of hardware and software. No one method is universally applicable, and operating requirements dictate which method is the most cost-effective. The aim of the paper is to review the basic techniques of leak detection that are currently in use. The advantages and disadvantages of each method are discussed and some indications of applicability are outlined.<|reference_end|> | arxiv | @article{chis2009pipeline,
title={Pipeline Leak Detection Techniques},
author={Timur Chis},
journal={Ann. Univ. Tibiscus Comp. Sci. Series V (2007), 25-34},
year={2009},
archivePrefix={arXiv},
eprint={0903.4283},
primaryClass={cs.OH}
} | chis2009pipeline |
arxiv-6863 | 0903.4286 | Computer Systems to Oil Pipeline Transporting | <|reference_start|>Computer systems in oil pipeline transport ensure that the greatest amount of data can be gathered, analyzed, and acted upon in the shortest amount of time. Most operators now have some form of computer-based monitoring system, employing either commercially available or custom-developed software to run the system. This paper presents SCADA systems for oil pipelines in accordance with Romanian environmental regulations.<|reference_end|> | arxiv | @article{chis2009computer,
title={Computer Systems to Oil Pipeline Transporting},
author={Timur Chis},
journal={Ann. Univ. Tibiscus Comp. Sci. Series V (2007), 35-44},
year={2009},
archivePrefix={arXiv},
eprint={0903.4286},
primaryClass={cs.OH}
} | chis2009computer |
arxiv-6864 | 0903.4293 | Non linear system become linear system | <|reference_start|>The present paper addresses the theory and practice of non-linear systems and their applications. We aimed to integrate these systems in order to elaborate their response as well as to highlight some outstanding features.<|reference_end|> | arxiv | @article{bucur2009non,
title={Non linear system become linear system},
author={Petre Bucur, Lucian Luca},
journal={Ann. Univ. Tibiscus Comp. Sci. Series V (2007), 55-62},
year={2009},
archivePrefix={arXiv},
eprint={0903.4293},
primaryClass={cs.DM}
} | bucur2009non |
arxiv-6865 | 0903.4298 | Design of Log-Map / Max-Log-Map Decoder | <|reference_start|>Design of Log-Map / Max-Log-Map Decoder: The process of turbo-code decoding starts with the formation of a posteriori probabilities (APPs) for each data bit, which is followed by choosing the data-bit value that corresponds to the maximum a posteriori (MAP) probability for that data bit. Upon reception of a corrupted code-bit sequence, the process of decision making with APPs allows the MAP algorithm to determine the most likely information bit to have been transmitted at each bit time.<|reference_end|> | arxiv | @article{timis2009design,
title={Design of Log-Map / Max-Log-Map Decoder},
author={Mihai Timis},
journal={Ann. Univ. Tibiscus Comp. Sci. Series V (2007), 63-70},
year={2009},
archivePrefix={arXiv},
eprint={0903.4298},
primaryClass={cs.IT math.IT}
} | timis2009design |
arxiv-6866 | 0903.4299 | Token Ring Project | <|reference_start|>Ring topology is a simple configuration used to connect processes that communicate among themselves. A number of network standards such as token ring, token bus, and FDDI are based on the ring connectivity. This article will develop an implementation of a ring of processes that communicate among themselves via pipe links. The processes are nodes in the ring. Each process reads from its standard input and writes to its standard output. Each process redirects its standard output to the standard input of the next process through a pipe. When the ring structure is designed, the project can be extended to simulate networks or to implement algorithms for mutual exclusion.<|reference_end|> | arxiv | @article{streian2009token,
title={Token Ring Project},
author={Virgiliu Streian, Adela Ionescu},
journal={Ann. Univ. Tibiscus Comp. Sci. Series V (2007), 85-90},
year={2009},
archivePrefix={arXiv},
eprint={0903.4299},
primaryClass={cs.NI}
} | streian2009token |
arxiv-6867 | 0903.4302 | ShopList: Programming PDA applications for Windows Mobile using C# | <|reference_start|>This paper is focused on a C# and SQL Server Mobile 2005 application that keeps track of a shopping list. The purpose of the application is to offer the user an easier way to manage his shopping options.<|reference_end|> | arxiv | @article{ilea2009shoplist:,
title={ShopList: Programming PDA applications for Windows Mobile using C#},
author={Daniela Ilea, Dan L. Lacrama},
journal={Ann. Univ. Tibiscus Comp. Sci. Series V (2007), 91-98},
year={2009},
archivePrefix={arXiv},
eprint={0903.4302},
primaryClass={cs.OH}
} | ilea2009shoplist: |
arxiv-6868 | 0903.4305 | Evaluation d'une requete en SQL | <|reference_start|>The objective of this paper is to show how the query processor responds to an SQL query. The query processor is split into two parts. The first, called the query compiler, translates an SQL query into a physical execution plan. The second, called the query evaluator, runs the execution plan.<|reference_end|> | arxiv | @article{codat2009evaluation,
title={Evaluation d'une requete en SQL},
author={Diana Sophia Codat},
journal={Ann. Univ. Tibiscus Comp. Sci. Series V (2007), 99-104},
year={2009},
archivePrefix={arXiv},
eprint={0903.4305},
primaryClass={cs.DB}
} | codat2009evaluation |
arxiv-6869 | 0903.4307 | FISLAB - the Fuzzy Inference Tool-box for SCILAB | <|reference_start|>The present study presents "The Fislab package of programs meant to develop fuzzy regulators in the Scilab environment", in which we present some general issues, usage requirements, and the working mode of the Fislab environment. In the second part of the article, some features of the Scilab functions from the Fislab package are described.<|reference_end|> | arxiv | @article{apostol2009fislab,
title={FISLAB - the Fuzzy Inference Tool-box for SCILAB},
author={Simona Apostol},
journal={Ann. Univ. Tibiscus Comp. Sci. Series V (2007), 105-114},
year={2009},
archivePrefix={arXiv},
eprint={0903.4307},
primaryClass={cs.MS}
} | apostol2009fislab |
arxiv-6870 | 0903.4313 | The development of a fuzzy regulator with an entry and an output in Fislab | <|reference_start|>The present article is a sequel to the article "Fislab the Fuzzy Inference Tool-Box for Scilab" and represents the practical application of "The development of the Fuzzy regulator with an input and an output in Fislab". Besides this application, the article contains some functions to be used in the program, namely Scilab functions for the fuzzification of crisp information, functions for the de-fuzzification operation, and functions for the implementation of.<|reference_end|> | arxiv | @article{apostol2009the,
title={The development of a fuzzy regulator with an entry and an output in
Fislab},
author={Simona Apostol},
journal={Ann. Univ. Tibiscus Comp. Sci. Series V (2007), 115-120},
year={2009},
archivePrefix={arXiv},
eprint={0903.4313},
primaryClass={cs.MS}
} | apostol2009the |
arxiv-6871 | 0903.4314 | Virtual Reality | <|reference_start|>Virtual Reality: This paper is focused on the presentation of Virtual Reality principles together with the main implementation methods and techniques. An overview of the main development directions is included.<|reference_end|> | arxiv | @article{lacrama2009virtual,
title={Virtual Reality},
author={Dan L. Lacrama, Dorina Fera},
journal={Ann. Univ. Tibiscus Comp. Sci. Series V (2007), 137-144},
year={2009},
archivePrefix={arXiv},
eprint={0903.4314},
primaryClass={cs.MM}
} | lacrama2009virtual |
arxiv-6872 | 0903.4358 | Sums of powers via integration | <|reference_start|>Sums of powers via integration: Sum of powers 1^p+...+n^p, with n and p being natural numbers and n>=1, can be expressed as a polynomial function of n of degree p+1. Such representations are often called Faulhaber formulae. A simple recursive algorithm for computing coefficients of Faulhaber formulae is presented. The correctness of the algorithm is proved by giving a recurrence relation on Faulhaber formulae.<|reference_end|> | arxiv | @article{dashti2009sums,
title={Sums of powers via integration},
author={M. Torabi Dashti},
journal={arXiv preprint arXiv:0903.4358},
year={2009},
archivePrefix={arXiv},
eprint={0903.4358},
primaryClass={cs.DM}
} | dashti2009sums |
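The recursive computation of Faulhaber coefficients described in the abstract above can be sketched as follows. This is an illustrative reconstruction built on the classical telescoping recurrence, not the paper's own algorithm, and the function names are invented:

```python
from fractions import Fraction
from math import comb

def faulhaber_coeffs(p):
    # Coefficients c[0..p+1] (constant term first) of the polynomial
    # S_p(n) = 1^p + 2^p + ... + n^p, built from the telescoping identity
    #   (n+1)^{q+1} - 1 = sum_{k=0}^{q} C(q+1, k) * S_k(n).
    polys = []
    for q in range(p + 1):
        # Expand (n+1)^{q+1} - 1 by the binomial theorem.
        c = [Fraction(comb(q + 1, j)) for j in range(q + 2)]
        c[0] -= 1
        # Subtract C(q+1, k) * S_k(n) for every k < q.
        for k in range(q):
            for j, ck in enumerate(polys[k]):
                c[j] -= comb(q + 1, k) * ck
        # What remains equals C(q+1, q) * S_q(n) = (q+1) * S_q(n).
        polys.append([cj / (q + 1) for cj in c])
    return polys[p]

def eval_poly(coeffs, n):
    return sum(c * n ** j for j, c in enumerate(coeffs))
```

For example, `faulhaber_coeffs(1)` yields `[0, 1/2, 1/2]`, i.e. the familiar n/2 + n^2/2 = n(n+1)/2, and the returned polynomial always has degree p+1 as the abstract states.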
arxiv-6873 | 0903.4365 | CliqueStream: an efficient and fault-resilient live streaming network on a clustered peer-to-peer overlay | <|reference_start|>CliqueStream: an efficient and fault-resilient live streaming network on a clustered peer-to-peer overlay: Several overlay-based live multimedia streaming platforms have been proposed in the recent peer-to-peer streaming literature. In most of the cases, the overlay neighbors are chosen randomly for robustness of the overlay. However, this causes nodes that are distant in terms of proximity in the underlying physical network to become neighbors, and thus data travels unnecessary distances before reaching the destination. For efficiency of bulk data transmission like multimedia streaming, the overlay neighborhood should resemble the proximity in the underlying network. In this paper, we exploit the proximity and redundancy properties of a recently proposed clique-based clustered overlay network, named eQuus, to build efficient as well as robust overlays for multimedia stream dissemination. To combine the efficiency of content pushing over tree structured overlays and the robustness of data-driven mesh overlays, higher capacity stable nodes are organized in tree structure to carry the long haul traffic and less stable nodes with intermittent presence are organized in localized meshes. The overlay construction and fault-recovery procedures are explained in detail. A simulation study demonstrates the good locality properties of the platform. The outage time and control overhead induced by the failure recovery mechanism are minimal as demonstrated by the analysis.<|reference_end|> | arxiv | @article{asaduzzaman2009cliquestream:,
title={CliqueStream: an efficient and fault-resilient live streaming network on
a clustered peer-to-peer overlay},
author={Shah Asaduzzaman, Ying Qiao, Gregor v. Bochmann},
journal={Proc. 8th IEEE Intl. Conf. on Peer-to-Peer Computing (P2P'08),
Sep. 2008, Aachen, Germany},
year={2009},
doi={10.1109/P2P.2008.35},
archivePrefix={arXiv},
eprint={0903.4365},
primaryClass={cs.NI cs.DC cs.MM}
} | asaduzzaman2009cliquestream: |
arxiv-6874 | 0903.4366 | Complexity of Fractran and Productivity | <|reference_start|>Complexity of Fractran and Productivity: In functional programming languages the use of infinite structures is common practice. For total correctness of programs dealing with infinite structures one must guarantee that every finite part of the result can be evaluated in finitely many steps. This is known as productivity. For programming with infinite structures, productivity is what termination in well-defined results is for programming with finite structures. Fractran is a simple Turing-complete programming language invented by Conway. We prove that the question whether a Fractran program halts on all positive integers is Pi^0_2-complete. In functional programming, productivity typically is a property of individual terms with respect to the inbuilt evaluation strategy. By encoding Fractran programs as specifications of infinite lists, we establish that this notion of productivity is Pi^0_2-complete even for the most simple specifications. Therefore it is harder than termination of individual terms. In addition, we explore possible generalisations of the notion of productivity in the framework of term rewriting, and prove that their computational complexity is Pi^1_1-complete, thus exceeding the expressive power of first-order logic.<|reference_end|> | arxiv | @article{endrullis2009complexity,
title={Complexity of Fractran and Productivity},
author={Joerg Endrullis, Clemens Grabmayer, Dimitri Hendriks},
journal={arXiv preprint arXiv:0903.4366},
year={2009},
archivePrefix={arXiv},
eprint={0903.4366},
primaryClass={cs.LO}
} | endrullis2009complexity |
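As a concrete reference point for readers unfamiliar with Conway's language, a minimal Fractran interpreter fits in a few lines. This is an illustrative sketch added by the editor, not code from the paper:

```python
from fractions import Fraction

def run_fractran(program, n, max_steps=10_000):
    # Conway's rule: replace n by n*f for the FIRST fraction f in the
    # program such that n*f is an integer; halt when no fraction applies.
    fracs = [Fraction(a, b) for a, b in program]
    for _ in range(max_steps):
        for f in fracs:
            m = n * f
            if m.denominator == 1:
                n = int(m)
                break
        else:
            return n  # halted: no fraction applies
    raise RuntimeError("step budget exhausted; Fractran programs need not halt")
```

For example, the one-fraction program `[(3, 2)]` acts as an adder: started on 2^a * 3^b it halts on 3^(a+b), so `run_fractran([(3, 2)], 72)` (72 = 2^3 * 3^2) halts on 243 = 3^5.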
arxiv-6875 | 0903.4378 | Using Dedicated and Opportunistic Networks in Synergy for a Cost-effective Distributed Stream Processing Platform | <|reference_start|>Using Dedicated and Opportunistic Networks in Synergy for a Cost-effective Distributed Stream Processing Platform: This paper presents a case for exploiting the synergy of dedicated and opportunistic network resources in a distributed hosting platform for data stream processing applications. Our previous studies have demonstrated the benefits of combining dedicated reliable resources with opportunistic resources in case of high-throughput computing applications, where timely allocation of the processing units is the primary concern. Since distributed stream processing applications demand large volume of data transmission between the processing sites at a consistent rate, adequate control over the network resources is important here to assure a steady flow of processing. In this paper, we propose a system model for the hybrid hosting platform where stream processing servers installed at distributed sites are interconnected with a combination of dedicated links and public Internet. Decentralized algorithms have been developed for allocation of the two classes of network resources among the competing tasks with an objective towards higher task throughput and better utilization of expensive dedicated resources. Results from extensive simulation study show that with proper management, systems exploiting the synergy of dedicated and opportunistic resources yield considerably higher task throughput and thus, higher return on investment over the systems solely using expensive dedicated resources.<|reference_end|> | arxiv | @article{asaduzzaman2009using,
title={Using Dedicated and Opportunistic Networks in Synergy for a
Cost-effective Distributed Stream Processing Platform},
author={Shah Asaduzzaman, Muthucumaru Maheswaran},
journal={Proc. 14th IEEE Intl. Conf. on Parallel and Distributed Systems
(ICPADS), Dec. 2008, Melbourne, Australia},
year={2009},
doi={10.1109/ICPADS.2008.116},
archivePrefix={arXiv},
eprint={0903.4378},
primaryClass={cs.DC cs.NI}
} | asaduzzaman2009using |
arxiv-6876 | 0903.4382 | Ranking Functions for Size-Change Termination II | <|reference_start|>Ranking Functions for Size-Change Termination II: Size-Change Termination is an increasingly popular technique for verifying program termination. These termination proofs are deduced from an abstract representation of the program in the form of "size-change graphs". We present algorithms that, for certain classes of size-change graphs, deduce a global ranking function: an expression that ranks program states, and decreases on every transition. A ranking function serves as a witness for a termination proof, and is therefore interesting for program certification. The particular form of the ranking expressions that represent SCT termination proofs sheds light on the scope of the proof method. The complexity of the expressions is also interesting, both practically and theoretically. While deducing ranking functions from size-change graphs has already been shown possible, the constructions in this paper are simpler and more transparent than previously known. They improve the upper bound on the size of the ranking expression from triply exponential down to singly exponential (for certain classes of instances). We claim that this result is, in some sense, optimal. To this end, we introduce a framework for lower bounds on the complexity of ranking expressions and prove exponential lower bounds.<|reference_end|> | arxiv | @article{ben-amram2009ranking,
title={Ranking Functions for Size-Change Termination II},
author={Amir M. Ben-Amram and Chin Soon Lee},
journal={Logical Methods in Computer Science, Volume 5, Issue 2 (May 25,
2009) lmcs:1000},
year={2009},
doi={10.2168/LMCS-5(2:8)2009},
archivePrefix={arXiv},
eprint={0903.4382},
primaryClass={cs.LO}
} | ben-amram2009ranking |
arxiv-6877 | 0903.4386 | Error-and-Erasure Decoding for Block Codes with Feedback | <|reference_start|>Error-and-Erasure Decoding for Block Codes with Feedback: Inner and outer bounds are derived on the optimal performance of fixed length block codes on discrete memoryless channels with feedback and errors-and-erasures decoding. First an inner bound is derived using a two phase encoding scheme with communication and control phases together with the optimal decoding rule for the given encoding scheme, among decoding rules that can be represented in terms of pairwise comparisons between the messages. Then an outer bound is derived using a generalization of the straight-line bound to errors-and-erasures decoders and the optimal error exponent trade off of a feedback encoder with two messages. In addition upper and lower bounds are derived, for the optimal erasure exponent of error free block codes in terms of the rate. Finally we present a proof of the fact that the optimal trade off between error exponents of a two message code does not increase with feedback on DMCs.<|reference_end|> | arxiv | @article{nakiboglu2009error-and-erasure,
title={Error-and-Erasure Decoding for Block Codes with Feedback},
author={Baris Nakiboglu and Lizhong Zheng},
journal={IEEE Transactions on Information Theory, 58(1):24-49, Jan 2012},
year={2009},
doi={10.1109/TIT.2011.2169529 10.1109/ISIT.2008.4595079},
archivePrefix={arXiv},
eprint={0903.4386},
primaryClass={cs.IT math.IT}
} | nakiboglu2009error-and-erasure |
arxiv-6878 | 0903.4392 | Towards a decentralized algorithm for mapping network and computational resources for distributed data-flow computations | <|reference_start|>Towards a decentralized algorithm for mapping network and computational resources for distributed data-flow computations: Several high-throughput distributed data-processing applications require multi-hop processing of streams of data. These applications include continual processing on data streams originating from a network of sensors, composing a multimedia stream through embedding several component streams originating from different locations, etc. These data-flow computing applications require multiple processing nodes interconnected according to the data-flow topology of the application, for on-stream processing of the data. Since the applications usually sustain for a long period, it is important to optimally map the component computations and communications on the nodes and links in the network, fulfilling the capacity constraints and optimizing some quality metric such as end-to-end latency. The mapping problem is unfortunately NP-complete and heuristics have been previously proposed to compute the approximate solution in a centralized way. However, because of the dynamicity of the network, it is practically impossible to aggregate the correct state of the whole network in a single node. In this paper, we present a distributed algorithm for optimal mapping of the components of the data flow applications. We propose several heuristics to minimize the message complexity of the algorithm while maintaining the quality of the solution.<|reference_end|> | arxiv | @article{asaduzzaman2009towards,
title={Towards a decentralized algorithm for mapping network and computational
resources for distributed data-flow computations},
author={Shah Asaduzzaman, Muthucumaru Maheswaran},
journal={Proc. 21st IEEE Intl. Symp. on High Performance Computing Systems
and Applications (HPCS), May 2007, Saskatoon, SK, Canada},
year={2009},
archivePrefix={arXiv},
eprint={0903.4392},
primaryClass={cs.DC cs.NI}
} | asaduzzaman2009towards |
arxiv-6879 | 0903.4426 | Capacity Scaling Laws for Underwater Networks | <|reference_start|>Capacity Scaling Laws for Underwater Networks: The underwater acoustic channel is characterized by a path loss that depends not only on the transmission distance, but also on the signal frequency. Signals transmitted from one user to another over a distance $l$ are subject to a power loss of $l^{-\alpha}{a(f)}^{-l}$. Although a terrestrial radio channel can be modeled similarly, the underwater acoustic channel has different characteristics. The spreading factor $\alpha$, related to the geometry of propagation, has values in the range $1 \leq \alpha \leq 2$. The absorption coefficient $a(f)$ is a rapidly increasing function of frequency: it is three orders of magnitude greater at 100 kHz than at a few Hz. Existing results for capacity of wireless networks correspond to scenarios for which $a(f) = 1$, or a constant greater than one, and $\alpha \geq 2$. These results cannot be applied to underwater acoustic networks in which the attenuation varies over the system bandwidth. We use a water-filling argument to assess the minimum transmission power and optimum transmission band as functions of the link distance and desired data rate, and study the capacity scaling laws under this model.<|reference_end|> | arxiv | @article{lucani2009capacity,
title={Capacity Scaling Laws for Underwater Networks},
author={Daniel E. Lucani, Muriel M'edard, Milica Stojanovic},
journal={arXiv preprint arXiv:0903.4426},
year={2009},
archivePrefix={arXiv},
eprint={0903.4426},
primaryClass={cs.IT math.IT}
} | lucani2009capacity |
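The frequency-dependent loss model A(l, f) = l^alpha * a(f)^l in the abstract above can be made concrete with Thorp's empirical absorption formula, which is standard in the underwater-acoustics literature. The formula, units, and the 1 km reference distance below are illustrative choices by the editor, not taken from this paper:

```python
import math

def thorp_absorption_db_per_km(f_khz):
    # Thorp's empirical formula for acoustic absorption in sea water,
    # in dB/km, with frequency f in kHz.
    f2 = f_khz ** 2
    return (0.11 * f2 / (1 + f2)
            + 44.0 * f2 / (4100 + f2)
            + 2.75e-4 * f2
            + 0.003)

def path_loss_db(l_km, f_khz, alpha=1.5):
    # 10*log10 of A(l, f) = l^alpha * a(f)^l, with distance l measured
    # relative to a 1 km reference (an assumption made for this sketch).
    return 10 * alpha * math.log10(l_km) + thorp_absorption_db_per_km(f_khz) * l_km
```

Absorption grows rapidly with frequency, mirroring the abstract's observation that a(f) is orders of magnitude greater at 100 kHz than at a few Hz; this is what makes the optimum transmission band shrink with distance.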
arxiv-6880 | 0903.4434 | Random Linear Network Coding for Time-Division Duplexing: Queueing Analysis | <|reference_start|>Random Linear Network Coding for Time-Division Duplexing: Queueing Analysis: We study the performance of random linear network coding for time division duplexing channels with Poisson arrivals. We model the system as a bulk-service queue with variable bulk size. A full characterization for random linear network coding is provided for time division duplexing channels [1] by means of the moment generating function. We present numerical results for the mean number of packets in the queue and consider the effect of the range of allowable bulk sizes. We show that there exists an optimal choice of this range that minimizes the mean number of data packets in the queue.<|reference_end|> | arxiv | @article{lucani2009random,
title={Random Linear Network Coding for Time-Division Duplexing: Queueing
Analysis},
author={Daniel E. Lucani, Muriel M'edard, Milica Stojanovic},
journal={arXiv preprint arXiv:0903.4434},
year={2009},
archivePrefix={arXiv},
eprint={0903.4434},
primaryClass={cs.IT math.IT}
} | lucani2009random |
arxiv-6881 | 0903.4435 | Tree decomposition and postoptimality analysis in discrete optimization | <|reference_start|>Tree decomposition and postoptimality analysis in discrete optimization: Many real discrete optimization problems (DOPs) are $NP$-hard and contain a huge number of variables and/or constraints that make the models intractable for currently available solvers. Large DOPs can be solved due to their special structure using decomposition approaches. An important example of decomposition approaches is tree decomposition with local decomposition algorithms using the special block matrix structure of constraints which can exploit sparsity in the interaction graph of a discrete optimization problem. In this paper, discrete optimization problems whose structural graph is a tree are solved by local decomposition algorithms. Local decomposition algorithms generate a family of related DO problems which have the same structure but differ in the right-hand sides. Due to this fact, postoptimality techniques in DO are applied.<|reference_end|> | arxiv | @article{shcherbina2009tree,
title={Tree decomposition and postoptimality analysis in discrete optimization},
author={O. Shcherbina},
journal={arXiv preprint arXiv:0903.4435},
year={2009},
archivePrefix={arXiv},
eprint={0903.4435},
primaryClass={cs.DM}
} | shcherbina2009tree |
arxiv-6882 | 0903.4443 | Broadcasting in Time-Division Duplexing: A Random Linear Network Coding Approach | <|reference_start|>Broadcasting in Time-Division Duplexing: A Random Linear Network Coding Approach: We study random linear network coding for broadcasting in time division duplexing channels. We assume a packet erasure channel with nodes that cannot transmit and receive information simultaneously. The sender transmits coded data packets back-to-back before stopping to wait for the receivers to acknowledge the number of degrees of freedom, if any, that are required to decode correctly the information. We study the mean time to complete the transmission of a block of packets to all receivers. We also present a bound on the number of stops to wait for acknowledgement in order to complete transmission with probability at least $1-\epsilon$, for any $\epsilon>0$. We present analysis and numerical results showing that our scheme outperforms optimal scheduling policies for broadcast, in terms of the mean completion time. We provide a simple heuristic to compute the number of coded packets to be sent before stopping that achieves close to optimal performance with the advantage of a considerable reduction in the search time.<|reference_end|> | arxiv | @article{lucani2009broadcasting,
title={Broadcasting in Time-Division Duplexing: A Random Linear Network Coding
Approach},
author={Daniel E. Lucani, Muriel M'edard, Milica Stojanovic},
journal={arXiv preprint arXiv:0903.4443},
year={2009},
archivePrefix={arXiv},
eprint={0903.4443},
primaryClass={cs.IT math.IT}
} | lucani2009broadcasting |
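The "degrees of freedom" bookkeeping that drives these TDD broadcast schemes can be illustrated with a toy random linear code over GF(2). Real systems typically code over GF(2^8), so this sketch (all names invented by the editor) only shows why a receiver needs K linearly independent combinations to decode K source packets:

```python
import random

def rlnc_encode(source, n_coded, seed=0):
    # Each coded packet: (coefficient vector over GF(2), XOR of the
    # selected source packets). Packets are modeled as integers.
    rng = random.Random(seed)
    K = len(source)
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in range(K)]
        payload = 0
        for c, p in zip(coeffs, source):
            if c:
                payload ^= p
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, K):
    # Gaussian elimination over GF(2); returns the K source packets,
    # or None if the received combinations span fewer than K dimensions.
    basis = []  # rows with pairwise-distinct leading columns
    for coeffs, payload in coded:
        c, p = coeffs[:], payload
        for bc, bp in basis:
            if c[bc.index(1)]:
                c = [x ^ y for x, y in zip(c, bc)]
                p ^= bp
        if any(c):
            basis.append((c, p))
    if len(basis) < K:
        return None
    basis.sort(key=lambda row: row[0].index(1))
    for i in range(K - 1, -1, -1):  # back-substitution to reduced form
        lead = basis[i][0].index(1)
        for j in range(i):
            if basis[j][0][lead]:
                basis[j] = ([x ^ y for x, y in zip(basis[j][0], basis[i][0])],
                            basis[j][1] ^ basis[i][1])
    return [p for _, p in basis]
```

A receiver acknowledging "missing degrees of freedom", as in the abstract, is reporting K minus the rank accumulated so far by the elimination above.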
arxiv-6883 | 0903.4510 | Differentially Private Combinatorial Optimization | <|reference_start|>Differentially Private Combinatorial Optimization: Consider the following problem: given a metric space, some of whose points are "clients", open a set of at most $k$ facilities to minimize the average distance from the clients to these facilities. This is just the well-studied $k$-median problem, for which many approximation algorithms and hardness results are known. Note that the objective function encourages opening facilities in areas where there are many clients, and given a solution, it is often possible to get a good idea of where the clients are located. However, this poses the following quandary: what if the identity of the clients is sensitive information that we would like to keep private? Is it even possible to design good algorithms for this problem that preserve the privacy of the clients? In this paper, we initiate a systematic study of algorithms for discrete optimization problems in the framework of differential privacy (which formalizes the idea of protecting the privacy of individual input elements). We show that many such problems indeed have good approximation algorithms that preserve differential privacy; this is even in cases where it is impossible to preserve cryptographic definitions of privacy while computing any non-trivial approximation to even the _value_ of an optimal solution, let alone the entire solution. Apart from the $k$-median problem, we study the problems of vertex and set cover, min-cut, facility location, Steiner tree, and the recently introduced submodular maximization problem, "Combinatorial Public Projects" (CPP).<|reference_end|> | arxiv | @article{gupta2009differentially,
title={Differentially Private Combinatorial Optimization},
author={Anupam Gupta, Katrina Ligett, Frank McSherry, Aaron Roth and Kunal
Talwar},
journal={arXiv preprint arXiv:0903.4510},
year={2009},
archivePrefix={arXiv},
eprint={0903.4510},
primaryClass={cs.DS cs.CR cs.GT}
} | gupta2009differentially |
arxiv-6884 | 0903.4513 | Building the information kernel and the problem of recognition | <|reference_start|>Building the information kernel and the problem of recognition: There is currently a need for new representations of information that identify and order its characteristics by importance. Numbers are among science's most powerful tools for describing reality, and their most important property is positional significance. Suppose we have the number 0.2351734: its digits clearly appear in order of importance, so if necessary we can round it to some value, e.g. 0.235; arguably, 0.235 carries the most important information of 0.2351734. We can thus reduce the size of a number without losing much accuracy. Similarly, if we learn to extract an information kernel from graphical or audio data, we can present the most relevant information and discard the rest. Reducing various kinds of information to an information kernel is an important task whose solution bears on many problems in artificial intelligence and information theory.<|reference_end|> | arxiv | @article{vishnevskaya2009building,
title={Building the information kernel and the problem of recognition},
author={Elena S. Vishnevskaya},
journal={arXiv preprint arXiv:0903.4513},
year={2009},
archivePrefix={arXiv},
eprint={0903.4513},
primaryClass={cs.CV cs.AI}
} | vishnevskaya2009building |
arxiv-6885 | 0903.4521 | Solving Dominating Set in Larger Classes of Graphs: FPT Algorithms and Polynomial Kernels | <|reference_start|>Solving Dominating Set in Larger Classes of Graphs: FPT Algorithms and Polynomial Kernels: We show that the k-Dominating Set problem is fixed parameter tractable (FPT) and has a polynomial kernel for any class of graphs that exclude K_{i,j} as a subgraph, for any fixed i, j >= 1. This strictly includes every class of graphs for which this problem has been previously shown to have FPT algorithms and/or polynomial kernels. In particular, our result implies that the problem restricted to bounded-degenerate graphs has a polynomial kernel, solving an open problem posed by Alon and Gutner.<|reference_end|> | arxiv | @article{philip2009solving,
title={Solving Dominating Set in Larger Classes of Graphs: FPT Algorithms and
Polynomial Kernels},
author={Geevarghese Philip, Venkatesh Raman, Somnath Sikdar},
journal={arXiv preprint arXiv:0903.4521},
year={2009},
archivePrefix={arXiv},
eprint={0903.4521},
primaryClass={cs.DS}
} | philip2009solving |
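To make the problem being parameterized concrete, here is a brute-force minimum dominating set finder. It is an exponential-time illustration only; the paper's contribution is the FPT algorithm and polynomial kernel, which are not reproduced here:

```python
from itertools import combinations

def min_dominating_set(adj):
    # adj: dict mapping each vertex to its list of neighbors.
    # A set S dominates the graph if every vertex is in S or adjacent to S.
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for S in combinations(vertices, k):
            dominated = set(S)
            for v in S:
                dominated.update(adj[v])
            if len(dominated) == len(vertices):
                return set(S)  # first hit is minimum, since k increases
    return set()
```

On a 5-vertex path the minimum dominating set has size 2 (e.g. the second and fourth vertices), while a star is dominated by its center alone.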
arxiv-6886 | 0903.4526 | On the Achievable Rate of the Fading Dirty Paper Channel with Imperfect CSIT | <|reference_start|>On the Achievable Rate of the Fading Dirty Paper Channel with Imperfect CSIT: The problem of dirty paper coding (DPC) over the (multi-antenna) fading dirty paper channel (FDPC) Y = H(X + S) + Z is considered when there is imperfect knowledge of the channel state information H at the transmitter (CSIT). The case of FDPC with positive definite (p.d.) input covariance matrix was studied by the authors in a recent paper, and here the more general case of positive semi-definite (p.s.d.) input covariance is dealt with. Towards this end, the choice of auxiliary random variable is modified. The algorithms for determination of inflation factor proposed in the p.d. case are then generalized to the case of p.s.d. input covariance. Subsequently, the largest DPC-achievable high-SNR (signal-to-noise ratio) scaling factor over the no-CSIT FDPC with p.s.d. input covariance matrix is derived. This scaling factor is seen to be a non-trivial generalization of the one achieved for the p.d. case. Next, in the limit of low SNR, it is proved that the choice of all-zero inflation factor (thus treating interference as noise) is optimal in the 'ratio' sense, regardless of the covariance matrix used. Further, in the p.d. covariance case, the inflation factor optimal at high SNR is obtained when the number of transmit antennas is greater than the number of receive antennas, with the other case having been already considered in the earlier paper. Finally, the problem of joint optimization of the input covariance matrix and the inflation factor is dealt with, and an iterative numerical algorithm is developed.<|reference_end|> | arxiv | @article{vaze2009on,
title={On the Achievable Rate of the Fading Dirty Paper Channel with Imperfect
CSIT},
author={Chinmay S. Vaze and Mahesh K. Varanasi},
journal={arXiv preprint arXiv:0903.4526},
year={2009},
archivePrefix={arXiv},
eprint={0903.4526},
primaryClass={cs.IT math.IT}
} | vaze2009on |
arxiv-6887 | 0903.4527 | Graph polynomials and approximation of partition functions with Loopy Belief Propagation | <|reference_start|>Graph polynomials and approximation of partition functions with Loopy Belief Propagation: The Bethe approximation, or loopy belief propagation algorithm, is a successful method for approximating partition functions of probabilistic models associated with a graph. Chertkov and Chernyak derived an interesting formula called the Loop Series Expansion, which is an expansion of the partition function. The main term of the series is the Bethe approximation, while the other terms are labeled by subgraphs called generalized loops. In our recent paper, we derive the loop series expansion in the form of a polynomial with positive integer coefficients, and extend the result to the expansion of marginals. In this paper, we give a clearer derivation of these results and discuss the properties of the polynomial introduced there.<|reference_end|> | arxiv | @article{watanabe2009graph,
title={Graph polynomials and approximation of partition functions with Loopy
Belief Propagation},
author={Yusuke Watanabe, Kenji Fukumizu},
journal={arXiv preprint arXiv:0903.4527},
year={2009},
archivePrefix={arXiv},
eprint={0903.4527},
primaryClass={cs.DM cs.LG}
} | watanabe2009graph |
arxiv-6888 | 0903.4530 | Nonnegative approximations of nonnegative tensors | <|reference_start|>Nonnegative approximations of nonnegative tensors: We study the decomposition of a nonnegative tensor into a minimal sum of outer product of nonnegative vectors and the associated parsimonious naive Bayes probabilistic model. We show that the corresponding approximation problem, which is central to nonnegative PARAFAC, will always have optimal solutions. The result holds for any choice of norms and, under a mild assumption, even Bregman divergences.<|reference_end|> | arxiv | @article{lim2009nonnegative,
title={Nonnegative approximations of nonnegative tensors},
author={Lek-Heng Lim, Pierre Comon},
journal={arXiv preprint arXiv:0903.4530},
year={2009},
archivePrefix={arXiv},
eprint={0903.4530},
primaryClass={cs.NA cs.IR}
} | lim2009nonnegative |
arxiv-6889 | 0903.4545 | Computer- and robot-assisted Medical Intervention | <|reference_start|>Computer- and robot-assisted Medical Intervention: Medical robotics includes assistive devices used by the physician in order to make his/her diagnostic or therapeutic practices easier and more efficient. This chapter focuses on such systems. It introduces the general field of Computer-Assisted Medical Interventions, its aims, its different components and describes the place of robots in that context. The evolutions in terms of general design and control paradigms in the development of medical robots are presented and issues specific to that application domain are discussed. A view of existing systems, on-going developments and future trends is given. A case-study is detailed. Other types of robotic help in the medical environment (such as for assisting a handicapped person, for rehabilitation of a patient or for replacement of some damaged/suppressed limbs or organs) are out of the scope of this chapter.<|reference_end|> | arxiv | @article{troccaz2009computer-,
title={Computer- and robot-assisted Medical Intervention},
author={Jocelyne Troccaz (TIMC)},
journal={Handbook of Automation, Shimon Nof (Ed.) (2009) 1451-1466},
year={2009},
archivePrefix={arXiv},
eprint={0903.4545},
primaryClass={cs.RO}
} | troccaz2009computer- |
arxiv-6890 | 0903.4554 | Fountain Codes and Invertible Matrices | <|reference_start|>Fountain Codes and Invertible Matrices: This paper deals with Fountain codes, and especially with their encoding matrices, which are required here to be invertible. One result states that an encoding matrix induces a permutation; another is that encoding matrices form a group under the multiplication operation. An encoding is a transformation that reduces the entropy of an initially high-entropy input vector. A special encoding matrix is formed with which the entropy reduction is more effective than with matrices created by the Ideal Soliton distribution. Experimental results on entropy reduction are shown.<|reference_end|> | arxiv | @article{malinen2009fountain,
title={Fountain Codes and Invertible Matrices},
author={Mikko Malinen},
journal={arXiv preprint arXiv:0903.4554},
year={2009},
archivePrefix={arXiv},
eprint={0903.4554},
primaryClass={cs.IT math.IT}
} | malinen2009fountain |
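For reference, the Ideal Soliton distribution mentioned in the abstract assigns degree probabilities as follows; this is the standard definition from the LT-code literature, sketched here for illustration rather than taken from the paper:

```python
def ideal_soliton(K):
    # Ideal Soliton distribution over output degrees 1..K:
    #   rho(1) = 1/K,  rho(d) = 1/(d*(d-1)) for d = 2, ..., K.
    rho = {1: 1.0 / K}
    for d in range(2, K + 1):
        rho[d] = 1.0 / (d * (d - 1))
    return rho
```

The probabilities telescope to exactly 1, since sum over d from 2 to K of 1/(d(d-1)) equals 1 - 1/K.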
arxiv-6891 | 0903.4582 | On the Achievable Diversity-Multiplexing Tradeoff in MIMO Fading Channels with Imperfect CSIT | <|reference_start|>On the Achievable Diversity-Multiplexing Tradeoff in MIMO Fading Channels with Imperfect CSIT: In this paper, we analyze the fundamental tradeoff of diversity and multiplexing in multi-input multi-output (MIMO) channels with imperfect channel state information at the transmitter (CSIT). We show that with imperfect CSIT, a higher diversity gain as well as a more efficient diversity-multiplexing tradeoff (DMT) can be achieved. In the case of multi-input single-output (MISO)/single-input multi-output (SIMO) channels with K transmit/receive antennas, one can achieve a diversity gain of d(r)=K(1-r+K\alpha) at spatial multiplexing gain r, where \alpha is the CSIT quality defined in this paper. For general MIMO channels with M (M>1) transmit and N (N>1) receive antennas, we show that depending on the value of \alpha, different DMT can be derived and the value of \alpha has a great impact on the achievable diversity, especially at high multiplexing gains. Specifically, when \alpha is above a certain threshold, one can achieve a diversity gain of d(r)=MN(1+MN\alpha)-(M+N-1)r; otherwise, the achievable DMT is much lower and can be described as a collection of discontinuous line segments depending on M, N, r and \alpha. Our analysis reveals that imperfect CSIT significantly improves the achievable diversity gain while enjoying high spatial multiplexing gains.<|reference_end|> | arxiv | @article{zhang2009on,
title={On the Achievable Diversity-Multiplexing Tradeoff in MIMO Fading
Channels with Imperfect CSIT},
author={Xiao Juan Zhang and Yi Gong},
journal={arXiv preprint arXiv:0903.4582},
year={2009},
archivePrefix={arXiv},
eprint={0903.4582},
primaryClass={cs.IT math.IT}
} | zhang2009on |
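The closed-form tradeoffs quoted in this abstract are easy to tabulate. The two functions below are direct transcriptions of the formulas stated above; the piecewise behaviour for alpha below the paper's threshold is omitted:

```python
def miso_dmt(r, K, alpha):
    # MISO/SIMO achievable diversity with K antennas and CSIT quality
    # alpha, as quoted in the abstract: d(r) = K * (1 - r + K*alpha).
    return K * (1 - r + K * alpha)

def mimo_dmt_high_alpha(r, M, N, alpha):
    # MIMO diversity when alpha is above the paper's threshold:
    #   d(r) = M*N*(1 + M*N*alpha) - (M + N - 1) * r.
    return M * N * (1 + M * N * alpha) - (M + N - 1) * r
```

With alpha = 0 and r = 0, both expressions reduce to the familiar maximal diversity K (MISO/SIMO) and M*N (MIMO), which is a quick sanity check on the transcription.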
arxiv-6892 | 0903.4594 | Dynamic Control of Tunable Sub-optimal Algorithms for Scheduling of Time-varying Wireless Networks | <|reference_start|>Dynamic Control of Tunable Sub-optimal Algorithms for Scheduling of Time-varying Wireless Networks: It is well known that for ergodic channel processes the Generalized Max-Weight Matching (GMWM) scheduling policy stabilizes the network for any supportable arrival rate vector within the network capacity region. This policy, however, often requires the solution of an NP-hard optimization problem. This has motivated many researchers to develop sub-optimal algorithms that approximate the GMWM policy in selecting schedule vectors. One implicit assumption commonly shared in this context is that during the algorithm runtime, the channel states remain effectively unchanged. This assumption may not hold as the time needed to select near-optimal schedule vectors usually increases quickly with the network size. In this paper, we incorporate channel variations and the time-efficiency of sub-optimal algorithms into the scheduler design, to dynamically tune the algorithm runtime considering the tradeoff between algorithm efficiency and its robustness to changing channel states. Specifically, we propose a Dynamic Control Policy (DCP) that operates on top of a given sub-optimal algorithm, and dynamically but in a large time-scale adjusts the time given to the algorithm according to queue backlog and channel correlations. This policy does not require knowledge of the structure of the given sub-optimal algorithm, and with low overhead can be implemented in a distributed manner. Using a novel Lyapunov analysis, we characterize the throughput stability region induced by DCP and show that our characterization can be tight. We also show that the throughput stability region of DCP is at least as large as that of any other static policy. Finally, we provide two case studies to gain further intuition into the performance of DCP.<|reference_end|> | arxiv | @article{lotfinezhad2009dynamic,
title={Dynamic Control of Tunable Sub-optimal Algorithms for Scheduling of
Time-varying Wireless Networks},
author={Mahdi Lotfinezhad and Ben Liang and Elvino S. Sousa},
journal={arXiv preprint arXiv:0903.4594},
year={2009},
doi={10.1109/IWQOS.2008.22},
archivePrefix={arXiv},
eprint={0903.4594},
primaryClass={cs.IT math.IT}
} | lotfinezhad2009dynamic |
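The GMWM rule this abstract builds on — serve the schedule vector that maximizes the queue-weighted sum of service rates — can be sketched in a few lines. This is a generic illustration on assumed toy data (the `feasible`/`rates` names and the two-link example are ours, not from the paper), and it enumerates every feasible schedule, which is exactly the exponential step that the sub-optimal algorithms discussed in the abstract approximate.

```python
def max_weight_schedule(queues, rates, feasible):
    """GMWM rule: pick the schedule s maximizing sum_l queues[l] * rates[s][l]."""
    return max(feasible, key=lambda s: sum(q * r for q, r in zip(queues, rates[s])))

# Toy 2-link network: interference allows only one link to transmit per slot.
feasible = [(1, 0), (0, 1)]
rates = {(1, 0): (3, 0), (0, 1): (0, 2)}  # channel-state-dependent service rates

print(max_weight_schedule([5, 10], rates, feasible))  # serves link 2: 10*2 > 5*3
print(max_weight_schedule([10, 2], rates, feasible))  # serves link 1: 10*3 > 2*2
```

Because the rule re-evaluates the argmax every slot from current backlogs and rates, it adapts to channel variations — the very sensitivity that motivates tuning the runtime of approximate solvers in the paper.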
arxiv-6893 | 0903.4615 | On Decidability Properties of One-Dimensional Cellular Automata | <|reference_start|>On Decidability Properties of One-Dimensional Cellular Automata: In a recent paper Sutner proved that the first-order theory of the phase-space $\mathcal{S}_\mathcal{A}=(Q^\mathbb{Z}, \longrightarrow)$ of a one-dimensional cellular automaton $\mathcal{A}$ whose configurations are elements of $Q^\mathbb{Z}$, for a finite set of states $Q$, and where $\longrightarrow$ is the "next configuration relation", is decidable. He asked whether this result could be extended to a more expressive logic. We prove in this paper that this is actually the case. We first show that, for each one-dimensional cellular automaton $\mathcal{A}$, the phase-space $\mathcal{S}_\mathcal{A}$ is an omega-automatic structure. Then, applying recent results of Kuske and Lohrey on omega-automatic structures, it follows that the first-order theory, extended with some counting and cardinality quantifiers, of the structure $\mathcal{S}_\mathcal{A}$, is decidable. We give some examples of new decidable properties for one-dimensional cellular automata. In the case of surjective cellular automata, some more efficient algorithms can be deduced from results of Kuske and Lohrey on structures of bounded degree. On the other hand we show that the case of cellular automata gives new results on automatic graphs.<|reference_end|> | arxiv | @article{finkel2009on,
title={On Decidability Properties of One-Dimensional Cellular Automata},
author={Olivier Finkel (ELM, Lip)},
journal={Journal of Cellular Automata 6, 2-3 (2011) 181-193},
year={2009},
archivePrefix={arXiv},
eprint={0903.4615},
primaryClass={cs.LO cs.CC math.LO}
} | finkel2009on |
arxiv-6894 | 0903.4616 | Methods for detection and characterization of signals in noisy data with the Hilbert-Huang Transform | <|reference_start|>Methods for detection and characterization of signals in noisy data with the Hilbert-Huang Transform: The Hilbert-Huang Transform is a novel, adaptive approach to time series analysis that does not make assumptions about the data form. Its adaptive, local character allows the decomposition of non-stationary signals with high time-frequency resolution but also renders it susceptible to degradation from noise. We show that complementing the HHT with techniques such as zero-phase filtering, kernel density estimation and Fourier analysis allows it to be used effectively to detect and characterize signals with low signal to noise ratio.<|reference_end|> | arxiv | @article{stroeer2009methods,
title={Methods for detection and characterization of signals in noisy data with
the Hilbert-Huang Transform},
author={Alexander Stroeer and John K. Cannizzo and Jordan B. Camp and Nicolas Gagarin},
journal={Phys.Rev.D79:124022,2009},
year={2009},
doi={10.1103/PhysRevD.79.124022},
archivePrefix={arXiv},
eprint={0903.4616},
primaryClass={physics.data-an cs.NA gr-qc}
} | stroeer2009methods |
arxiv-6895 | 0903.4696 | Multidimensional Online Robot Motion | <|reference_start|>Multidimensional Online Robot Motion: We consider three related problems of robot movement in arbitrary dimensions: coverage, search, and navigation. For each problem, a spherical robot is asked to accomplish a motion-related task in an unknown environment whose geometry is learned by the robot during navigation. The robot is assumed to have tactile and global positioning sensors. We view these problems from the perspective of (non-linear) competitiveness as defined by Gabriely and Rimon. We first show that in 3 dimensions and higher, there is no upper bound on competitiveness: every online algorithm can do arbitrarily badly compared to the optimal. We then modify the problems by assuming a fixed clearance parameter. We are able to give optimally competitive algorithms under this assumption.<|reference_end|> | arxiv | @article{kramer2009multidimensional,
title={Multidimensional Online Robot Motion},
author={Joshua Brown Kramer and Lucas Sabalka},
journal={International Journal of Computational Geometry and Applications,
20(6):653-684, 2010},
year={2009},
doi={10.1142/S0218195910003475},
archivePrefix={arXiv},
eprint={0903.4696},
primaryClass={cs.CG cs.RO}
} | kramer2009multidimensional |
arxiv-6896 | 0903.4726 | Range Quantile Queries: Another Virtue of Wavelet Trees | <|reference_start|>Range Quantile Queries: Another Virtue of Wavelet Trees: We show how to use a balanced wavelet tree as a data structure that stores a list of numbers and supports efficient {\em range quantile queries}. A range quantile query takes a rank and the endpoints of a sublist and returns the number with that rank in that sublist. For example, if the rank is half the sublist's length, then the query returns the sublist's median. We also show how these queries can be used to support space-efficient {\em coloured range reporting} and {\em document listing}.<|reference_end|> | arxiv | @article{gagie2009range,
title={Range Quantile Queries: Another Virtue of Wavelet Trees},
author={Travis Gagie and Simon J. Puglisi and Andrew Turpin},
journal={arXiv preprint arXiv:0903.4726},
year={2009},
doi={10.1007/978-3-642-03784-9_1},
archivePrefix={arXiv},
eprint={0903.4726},
primaryClass={cs.DS}
} | gagie2009range |
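The range quantile query this abstract describes has a compact recursive realization on a balanced wavelet tree: at each node, rank information tells how many elements of the queried sublist fall in the lower half of the value range, and the query descends left or right accordingly. Below is a minimal pure-Python sketch — pointer-based, using plain prefix-count arrays instead of the succinct bitvectors a space-efficient implementation would use; the class and method names are ours.

```python
class WaveletTree:
    def __init__(self, seq, lo=None, hi=None):
        if lo is None:
            lo, hi = min(seq), max(seq)
        self.lo, self.hi = lo, hi
        if lo == hi or not seq:
            return  # leaf: every remaining element shares one value
        mid = (lo + hi) // 2
        # left_counts[i] = how many of seq[:i] belong to the lower half of values
        self.left_counts = [0]
        for x in seq:
            self.left_counts.append(self.left_counts[-1] + (x <= mid))
        self.left = WaveletTree([x for x in seq if x <= mid], lo, mid)
        self.right = WaveletTree([x for x in seq if x > mid], mid + 1, hi)

    def quantile(self, l, r, k):
        """Return the k-th smallest element (0-indexed) of seq[l:r]."""
        if self.lo == self.hi:
            return self.lo
        in_left = self.left_counts[r] - self.left_counts[l]
        if k < in_left:  # answer lies in the lower half of the value range
            return self.left.quantile(self.left_counts[l], self.left_counts[r], k)
        return self.right.quantile(l - self.left_counts[l],
                                   r - self.left_counts[r], k - in_left)

wt = WaveletTree([3, 1, 4, 1, 5, 9, 2, 6])
print(wt.quantile(2, 7, 2))  # median of the sublist [4, 1, 5, 9, 2] -> 4
```

Each query touches one node per level, so with value alphabet size sigma it runs in O(log sigma) steps; taking k = (r - l) // 2 yields the sublist's median, as in the abstract's example.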
arxiv-6897 | 0903.4728 | Graph Homomorphisms with Complex Values: A Dichotomy Theorem | <|reference_start|>Graph Homomorphisms with Complex Values: A Dichotomy Theorem: Graph homomorphism has been studied intensively. Given an m x m symmetric matrix A, the graph homomorphism function is defined as \[Z_A (G) = \sum_{f:V->[m]} \prod_{(u,v)\in E} A_{f(u),f(v)}, \] where G = (V,E) is any undirected graph. The function Z_A can encode many interesting graph properties, including counting vertex covers and k-colorings. We study the computational complexity of Z_A for arbitrary symmetric matrices A with algebraic complex values. Building on work by Dyer and Greenhill, Bulatov and Grohe, and especially the recent beautiful work by Goldberg, Grohe, Jerrum and Thurley, we prove a complete dichotomy theorem for this problem. We show that Z_A is either computable in polynomial-time or #P-hard, depending explicitly on the matrix A. We further prove that the tractability criterion on A is polynomial-time decidable.<|reference_end|> | arxiv | @article{cai2009graph,
title={Graph Homomorphisms with Complex Values: A Dichotomy Theorem},
author={Jin-Yi Cai and Xi Chen and Pinyan Lu},
journal={arXiv preprint arXiv:0903.4728},
year={2009},
archivePrefix={arXiv},
eprint={0903.4728},
primaryClass={cs.CC}
} | cai2009graph |
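The defining sum for Z_A is direct to evaluate by brute force, which is handy for checking the encodings the abstract mentions (vertex covers, k-colorings) on small graphs; the enumeration over all m^|V| maps is of course exponential, consistent with the #P-hard side of the dichotomy. A sketch (the function and argument names are ours):

```python
from itertools import product

def Z(A, n, edges):
    """Z_A(G) = sum over maps f: V -> [m] of the product of A[f(u)][f(v)]
    over edges (u, v), for a graph on vertices 0..n-1 and an m x m matrix A."""
    m = len(A)
    total = 0
    for f in product(range(m), repeat=n):
        w = 1
        for u, v in edges:
            w *= A[f[u]][f[v]]
        total += w
    return total

triangle = [(0, 1), (1, 2), (0, 2)]
# Hard-core matrix: state 1 = "in the set"; adjacent vertices both in state 1
# contribute a zero factor, so Z counts independent sets of the triangle.
print(Z([[1, 1], [1, 0]], 3, triangle))  # 4
# Zero diagonal, ones elsewhere: Z counts proper 3-colorings of the triangle.
print(Z([[0, 1, 1], [1, 0, 1], [1, 1, 0]], 3, triangle))  # 6
```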
arxiv-6898 | 0903.4738 | Constellation Precoded Beamforming | <|reference_start|>Constellation Precoded Beamforming: We present and analyze the performance of constellation precoded beamforming. This multi-input multi-output transmission technique is based on the singular value decomposition of a channel matrix. In this work, the beamformer is precoded to improve its diversity performance. It was shown previously that while single beamforming achieves full diversity without channel coding, multiple beamforming results in diversity loss. In this paper, we show that a properly designed constellation precoder makes uncoded multiple beamforming achieve full diversity order. We also show that partially precoded multiple beamforming gets better diversity order than multiple beamforming without constellation precoder if the subchannels to be precoded are properly chosen. We propose several criteria to design the constellation precoder. Simulation results match the analysis, and show that precoded multiple beamforming actually outperforms single beamforming without precoding at the same system data rate while achieving full diversity order.<|reference_end|> | arxiv | @article{park2009constellation,
title={Constellation Precoded Beamforming},
author={Hong Ju Park and Ender Ayanoglu},
journal={arXiv preprint arXiv:0903.4738},
year={2009},
archivePrefix={arXiv},
eprint={0903.4738},
primaryClass={cs.IT math.IT}
} | park2009constellation |
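The SVD-based single-beamforming baseline that this abstract starts from can be sketched with NumPy: transmitting along the strongest right singular vector of the channel and combining with the matching left singular vector collapses the MIMO channel to the scalar gain sigma_1. This illustrates plain (unprecoded) single beamforming only, not the paper's constellation precoder; the matrix size and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # 4x4 MIMO channel

U, sigma, Vh = np.linalg.svd(H)        # H = U @ diag(sigma) @ Vh
v1, u1 = Vh.conj().T[:, 0], U[:, 0]    # strongest right/left singular vectors

symbol = 1 + 1j                        # transmitted constellation point
received = u1.conj() @ (H @ (v1 * symbol))
# The beamformed channel reduces to the scalar sigma[0]:
assert np.allclose(received, sigma[0] * symbol)
```

Multiple beamforming uses several singular-vector pairs at once; the weaker subchannels' small gains are what cause the diversity loss that the constellation precoder is designed to repair.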
arxiv-6899 | 0903.4742 | Guaranteed Minimum Rank Approximation from Linear Observations by Nuclear Norm Minimization with an Ellipsoidal Constraint | <|reference_start|>Guaranteed Minimum Rank Approximation from Linear Observations by Nuclear Norm Minimization with an Ellipsoidal Constraint: The rank minimization problem is to find the lowest-rank matrix in a given set. Nuclear norm minimization has been proposed as a convex relaxation of rank minimization. Recht, Fazel, and Parrilo have shown that nuclear norm minimization subject to an affine constraint is equivalent to rank minimization under a certain condition given in terms of the rank-restricted isometry property. However, in the presence of measurement noise, or with an only approximately low-rank generative model, the appropriate constraint set is an ellipsoid rather than an affine space. There exist polynomial-time algorithms to solve the nuclear norm minimization with an ellipsoidal constraint, but no performance guarantee has been shown for these algorithms. In this paper, we derive such an explicit performance guarantee, bounding the error in the approximate solution provided by nuclear norm minimization with an ellipsoidal constraint.<|reference_end|> | arxiv | @article{lee2009guaranteed,
title={Guaranteed Minimum Rank Approximation from Linear Observations by
Nuclear Norm Minimization with an Ellipsoidal Constraint},
author={Kiryung Lee and Yoram Bresler},
journal={arXiv preprint arXiv:0903.4742},
year={2009},
archivePrefix={arXiv},
eprint={0903.4742},
primaryClass={cs.IT math.IT}
} | lee2009guaranteed |
arxiv-6900 | 0903.4770 | Act of CVT and EVT In The Formation of Number-Theoretic Fractals | <|reference_start|>Act of CVT and EVT In The Formation of Number-Theoretic Fractals: In this paper we have defined two functions that have been used to construct different fractals having fractal dimensions between 1 and 2. More precisely, we can say that one of our defined functions produces the fractals whose fractal dimension lies in [1.58, 2) and the other function produces the fractals whose fractal dimension lies in (1, 1.58]. Also we tried to calculate the amount of increment of the fractal dimension in accordance with the base of the number system. And in switching fractals from one base to another, the increment of the fractal dimension is constant, which is 1.58; this is quite surprising!<|reference_end|> | arxiv | @article{pabitra2009act,
title={Act of CVT and EVT In The Formation of Number-Theoretic Fractals},
author={Pal Choudhury Pabitra and Sahoo Sudhakar and Nayak Birendra Kumar and
Hassan Sk. Sarif},
journal={arXiv preprint arXiv:0903.4770},
year={2009},
archivePrefix={arXiv},
eprint={0903.4770},
primaryClass={cs.DM}
} | pabitra2009act |
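The constant 1.58 quoted in the abstract above matches log2(3) ~ 1.585, the similarity dimension of a fractal built from 3 self-similar copies at scale 1/2 (Sierpinski-triangle-like); that identification is our reading of the entry, not something it states. A one-line check:

```python
from math import log2

d = log2(3)  # similarity dimension log 3 / log 2 for 3 copies at scale 1/2
print(round(d, 2))  # 1.58
```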