corpus_id | paper_id | title | abstract | source | bibtex | citation_key |
---|---|---|---|---|---|---|
arxiv-3201 | 0804.0050 | Outage Probability of the Gaussian MIMO Free-Space Optical Channel with PPM | <|reference_start|>Outage Probability of the Gaussian MIMO Free-Space Optical Channel with PPM: The free-space optical channel has the potential to facilitate inexpensive, wireless communication with fiber-like bandwidth under short deployment timelines. However, atmospheric effects can significantly degrade the reliability of a free-space optical link. In particular, atmospheric turbulence causes random fluctuations in the irradiance of the received laser beam, commonly referred to as scintillation. The scintillation process is slow compared to the large data rates typical of optical transmission. As such, we adopt a quasi-static block fading model and study the outage probability of the channel under the assumption of orthogonal pulse-position modulation. We investigate the mitigation of scintillation through the use of multiple lasers and multiple apertures, thereby creating a multiple-input multiple output (MIMO) channel. Non-ideal photodetection is also assumed such that the combined shot noise and thermal noise are considered as signal-independent Additive Gaussian white noise. Assuming perfect receiver channel state information (CSI), we compute the signal-to-noise ratio exponents for the cases when the scintillation is lognormal, exponential and gamma-gamma distributed, which cover a wide range of atmospheric turbulence conditions. Furthermore, we illustrate very large gains, in some cases larger than 15 dB, when transmitter CSI is also available by adapting the transmitted electrical power.<|reference_end|> | arxiv | @article{letzepis2008outage,
title={Outage Probability of the Gaussian MIMO Free-Space Optical Channel with
PPM},
author={Nick Letzepis and Albert Guillen i Fabregas},
journal={arXiv preprint arXiv:0804.0050},
year={2008},
archivePrefix={arXiv},
eprint={0804.0050},
primaryClass={cs.IT math.IT}
} | letzepis2008outage |
arxiv-3202 | 0804.0066 | Binary Decision Diagrams for Affine Approximation | <|reference_start|>Binary Decision Diagrams for Affine Approximation: Selman and Kautz's work on ``knowledge compilation'' established how approximation (strengthening and/or weakening) of a propositional knowledge-base can be used to speed up query processing, at the expense of completeness. In this classical approach, querying uses Horn over- and under-approximations of a given knowledge-base, which is represented as a propositional formula in conjunctive normal form (CNF). Along with the class of Horn functions, one could imagine other Boolean function classes that might serve the same purpose, owing to attractive deduction-computational properties similar to those of the Horn functions. Indeed, Zanuttini has suggested that the class of affine Boolean functions could be useful in knowledge compilation and has presented an affine approximation algorithm. Since CNF is awkward for presenting affine functions, Zanuttini considers both a sets-of-models representation and the use of modulo 2 congruence equations. In this paper, we propose an algorithm based on reduced ordered binary decision diagrams (ROBDDs). This leads to a representation which is more compact than the sets of models and, once we have established some useful properties of affine Boolean functions, a more efficient algorithm.<|reference_end|> | arxiv | @article{henshall2008binary,
title={Binary Decision Diagrams for Affine Approximation},
author={Kevin Henshall, Peter Schachte, Harald S{\o}ndergaard and Leigh
Whiting},
journal={arXiv preprint arXiv:0804.0066},
year={2008},
archivePrefix={arXiv},
eprint={0804.0066},
primaryClass={cs.LO cs.AI}
} | henshall2008binary |
arxiv-3203 | 0804.0074 | Private Handshakes | <|reference_start|>Private Handshakes: Private handshaking allows pairs of users to determine which (secret) groups they are both a member of. Group membership is kept secret to everybody else. Private handshaking is a more private form of secret handshaking, because it does not allow the group administrator to trace users. We extend the original definition of a handshaking protocol to allow and test for membership of multiple groups simultaneously. We present simple and efficient protocols for both the single group and multiple group membership case. Private handshaking is a useful tool for mutual authentication, demanded by many pervasive applications (including RFID) for privacy. Our implementations are efficient enough to support such usually resource constrained scenarios.<|reference_end|> | arxiv | @article{hoepman2008private,
title={Private Handshakes},
author={Jaap-Henk Hoepman},
journal={In F. Stajano, editor, 4th Eur. Symp. on Security and Privacy in Ad
hoc and Sensor Networks, LNCS 4572, pages 31-42, Cambridge, UK, June 2-3 2007},
year={2008},
archivePrefix={arXiv},
eprint={0804.0074},
primaryClass={cs.CR}
} | hoepman2008private |
arxiv-3204 | 0804.0134 | SecMon: End-to-End Quality and Security Monitoring System | <|reference_start|>SecMon: End-to-End Quality and Security Monitoring System: The Voice over Internet Protocol (VoIP) is becoming a more available and popular way of communicating for Internet users. This also applies to Peer-to-Peer (P2P) systems and merging these two have already proven to be successful (e.g. Skype). Even the existing standards of VoIP provide an assurance of security and Quality of Service (QoS), however, these features are usually optional and supported by limited number of implementations. As a result, the lack of mandatory and widely applicable QoS and security guaranties makes the contemporary VoIP systems vulnerable to attacks and network disturbances. In this paper we are facing these issues and propose the SecMon system, which simultaneously provides a lightweight security mechanism and improves quality parameters of the call. SecMon is intended specially for VoIP service over P2P networks and its main advantage is that it provides authentication, data integrity services, adaptive QoS and (D)DoS attack detection. Moreover, the SecMon approach represents a low-bandwidth consumption solution that is transparent to the users and possesses a self-organizing capability. The above-mentioned features are accomplished mainly by utilizing two information hiding techniques: digital audio watermarking and network steganography. These techniques are used to create covert channels that serve as transport channels for lightweight QoS measurement's results. Furthermore, these metrics are aggregated in a reputation system that enables best route path selection in the P2P network. The reputation system helps also to mitigate (D)DoS attacks, maximize performance and increase transmission efficiency in the network.<|reference_end|> | arxiv | @article{ciszkowski2008secmon:,
title={SecMon: End-to-End Quality and Security Monitoring System},
author={Tomasz Ciszkowski, Charlott Eliasson, Markus Fiedler, Zbigniew
Kotulski, Radu Lupu, Wojciech Mazurczyk},
journal={arXiv preprint arXiv:0804.0134},
year={2008},
archivePrefix={arXiv},
eprint={0804.0134},
primaryClass={cs.MM}
} | ciszkowski2008secmon: |
arxiv-3205 | 0804.0143 | Effects of High-Order Co-occurrences on Word Semantic Similarities | <|reference_start|>Effects of High-Order Co-occurrences on Word Semantic Similarities: A computational model of the construction of word meaning through exposure to texts is built in order to simulate the effects of co-occurrence values on word semantic similarities, paragraph by paragraph. Semantic similarity is here viewed as association. It turns out that the similarity between two words W1 and W2 strongly increases with a co-occurrence, decreases with the occurrence of W1 without W2 or W2 without W1, and slightly increases with high-order co-occurrences. Therefore, operationalizing similarity as a frequency of co-occurrence probably introduces a bias: first, there are cases in which there is similarity without co-occurrence and, second, the frequency of co-occurrence overestimates similarity.<|reference_end|> | arxiv | @article{lemaire2008effects,
title={Effects of High-Order Co-occurrences on Word Semantic Similarities},
author={Beno{\^i}t Lemaire (TIMC), Guy Denhi{\`e}re (LPC)},
journal={Current Psychology Letters - Behaviour, Brain and Cognition 18, 1
(2006) 1},
year={2008},
archivePrefix={arXiv},
eprint={0804.0143},
primaryClass={cs.CL}
} | lemaire2008effects |
arxiv-3206 | 0804.0149 | From Random Graph to Small World by Wandering | <|reference_start|>From Random Graph to Small World by Wandering: Numerous studies show that most known real-world complex networks share similar properties in their connectivity and degree distribution. They are called small worlds. This article gives a method to turn random graphs into Small World graphs by the dint of random walks.<|reference_end|> | arxiv | @article{gaume2008from,
title={From Random Graph to Small World by Wandering},
author={Bruno Gaume (IRIT), Fabien Mathieu (FT R\&D, INRIA Rocquencourt)},
journal={arXiv preprint arXiv:0804.0149},
year={2008},
number={RR-6489},
archivePrefix={arXiv},
eprint={0804.0149},
primaryClass={cs.DS}
} | gaume2008from |
arxiv-3207 | 0804.0188 | Support Vector Machine Classification with Indefinite Kernels | <|reference_start|>Support Vector Machine Classification with Indefinite Kernels: We propose a method for support vector machine classification using indefinite kernels. Instead of directly minimizing or stabilizing a nonconvex loss function, our algorithm simultaneously computes support vectors and a proxy kernel matrix used in forming the loss. This can be interpreted as a penalized kernel learning problem where indefinite kernel matrices are treated as a noisy observations of a true Mercer kernel. Our formulation keeps the problem convex and relatively large problems can be solved efficiently using the projected gradient or analytic center cutting plane methods. We compare the performance of our technique with other methods on several classic data sets.<|reference_end|> | arxiv | @article{luss2008support,
title={Support Vector Machine Classification with Indefinite Kernels},
author={Ronny Luss, Alexandre d'Aspremont},
journal={arXiv preprint arXiv:0804.0188},
year={2008},
archivePrefix={arXiv},
eprint={0804.0188},
primaryClass={cs.LG cs.AI}
} | luss2008support |
arxiv-3208 | 0804.0273 | A proof theoretic analysis of intruder theories | <|reference_start|>A proof theoretic analysis of intruder theories: We consider the problem of intruder deduction in security protocol analysis: that is, deciding whether a given message $M$ can be deduced from a set of messages $\Gamma$ under the theory of blind signatures and arbitrary convergent equational theories modulo associativity and commutativity (AC) of certain binary operators. The traditional formulations of intruder deduction are usually given in natural-deduction-like systems and proving decidability requires significant effort in showing that the rules are "local" in some sense. By using the well-known translation between natural deduction and sequent calculus, we recast the intruder deduction problem as proof search in sequent calculus, in which locality is immediate. Using standard proof theoretic methods, such as permutability of rules and cut elimination, we show that the intruder deduction problem can be reduced, in polynomial time, to the elementary deduction problems, which amounts to solving certain equations in the underlying individual equational theories. We further show that this result extends to combinations of disjoint AC-convergent theories whereby the decidability of intruder deduction under the combined theory reduces to the decidability of elementary deduction in each constituent theory. Although various researchers have reported similar results for individual cases, our work shows that these results can be obtained using a systematic and uniform methodology based on the sequent calculus.<|reference_end|> | arxiv | @article{tiu2008a,
title={A proof theoretic analysis of intruder theories},
author={Alwen Tiu and Rajeev Gore},
journal={arXiv preprint arXiv:0804.0273},
year={2008},
archivePrefix={arXiv},
eprint={0804.0273},
primaryClass={cs.LO cs.CR}
} | tiu2008a |
arxiv-3209 | 0804.0277 | Mapping Semantic Networks to Undirected Networks | <|reference_start|>Mapping Semantic Networks to Undirected Networks: There exists an injective, information-preserving function that maps a semantic network (i.e a directed labeled network) to a directed network (i.e. a directed unlabeled network). The edge label in the semantic network is represented as a topological feature of the directed network. Also, there exists an injective function that maps a directed network to an undirected network (i.e. an undirected unlabeled network). The edge directionality in the directed network is represented as a topological feature of the undirected network. Through function composition, there exists an injective function that maps a semantic network to an undirected network. Thus, aside from space constraints, the semantic network construct does not have any modeling functionality that is not possible with either a directed or undirected network representation. Two proofs of this idea will be presented. The first is a proof of the aforementioned function composition concept. The second is a simpler proof involving an undirected binary encoding of a semantic network.<|reference_end|> | arxiv | @article{rodriguez2008mapping,
title={Mapping Semantic Networks to Undirected Networks},
author={Marko A. Rodriguez},
journal={International Journal of Applied Mathematics and Computer
Sciences, volume 5, issue 1, pages 39-42, ISSN:2070-3902, LA-UR-07-5287, 2009},
year={2008},
number={LAUR-07-5287},
archivePrefix={arXiv},
eprint={0804.0277},
primaryClass={cs.DS}
} | rodriguez2008mapping |
arxiv-3210 | 0804.0317 | Parts-of-Speech Tagger Errors Do Not Necessarily Degrade Accuracy in Extracting Information from Biomedical Text | <|reference_start|>Parts-of-Speech Tagger Errors Do Not Necessarily Degrade Accuracy in Extracting Information from Biomedical Text: A recent study reported development of Muscorian, a generic text processing tool for extracting protein-protein interactions from text that achieved comparable performance to biomedical-specific text processing tools. This result was unexpected since potential errors from a series of text analysis processes is likely to adversely affect the outcome of the entire process. Most biomedical entity relationship extraction tools have used biomedical-specific parts-of-speech (POS) tagger as errors in POS tagging and are likely to affect subsequent semantic analysis of the text, such as shallow parsing. This study aims to evaluate the parts-of-speech (POS) tagging accuracy and attempts to explore whether a comparable performance is obtained when a generic POS tagger, MontyTagger, was used in place of MedPost, a tagger trained in biomedical text. Our results demonstrated that MontyTagger, Muscorian's POS tagger, has a POS tagging accuracy of 83.1% when tested on biomedical text. Replacing MontyTagger with MedPost did not result in a significant improvement in entity relationship extraction from text; precision of 55.6% from MontyTagger versus 56.8% from MedPost on directional relationships and 86.1% from MontyTagger compared to 81.8% from MedPost on nondirectional relationships. This is unexpected as the potential for poor POS tagging by MontyTagger is likely to affect the outcome of the information extraction. An analysis of POS tagging errors demonstrated that 78.5% of tagging errors are being compensated by shallow parsing. Thus, despite 83.1% tagging accuracy, MontyTagger has a functional tagging accuracy of 94.6%.<|reference_end|> | arxiv | @article{ling2008parts-of-speech,
title={Parts-of-Speech Tagger Errors Do Not Necessarily Degrade Accuracy in
Extracting Information from Biomedical Text},
author={Maurice HT Ling, Christophe Lefevre, Kevin R. Nicholas},
journal={Ling, Maurice HT, Lefevre, Christophe, Nicholas, Kevin R. 2008.
Parts-of-Speech Tagger Errors Do Not Necessarily Degrade Accuracy in
Extracting Information from Biomedical Text. The Python Papers 3 (1): 65-80},
year={2008},
archivePrefix={arXiv},
eprint={0804.0317},
primaryClass={cs.CL cs.IR}
} | ling2008parts-of-speech |
arxiv-3211 | 0804.0318 | Moore and more and symmetry | <|reference_start|>Moore and more and symmetry: In any spatially discrete model of pedestrian motion which uses a regular lattice as basis, there is the question of how the symmetry between the different directions of motion can be restored as far as possible but with limited computational effort. This question is equivalent to the question ''How important is the orientation of the axis of discretization for the result of the simulation?'' An optimization in terms of symmetry can be combined with the implementation of higher and heterogeniously distributed walking speeds by representing different walking speeds via different amounts of cells an agent may move during one round. Therefore all different possible neighborhoods for speeds up to v = 10 (cells per round) will be examined for the amount of deviation from radial symmetry. Simple criteria will be stated which will allow find an optimal neighborhood for each speed. It will be shown that following these criteria even the best mixture of steps in Moore and von Neumann neighborhoods is unable to reproduce the optimal neighborhood for a speed as low as 4.<|reference_end|> | arxiv | @article{kretz2008moore,
title={Moore and more and symmetry},
author={Tobias Kretz and Michael Schreckenberg},
journal={arXiv preprint arXiv:0804.0318},
year={2008},
doi={10.1007/978-3-540-47064-9_26},
archivePrefix={arXiv},
eprint={0804.0318},
primaryClass={cs.MA physics.comp-ph}
} | kretz2008moore |
arxiv-3212 | 0804.0337 | On the Convexity of the MSE Region of Single-Antenna Users | <|reference_start|>On the Convexity of the MSE Region of Single-Antenna Users: We prove convexity of the sum-power constrained mean square error (MSE) region in case of two single-antenna users communicating with a multi-antenna base station. Due to the MSE duality this holds both for the vector broadcast channel and the dual multiple access channel. Increasing the number of users to more than two, we show by means of a simple counter-example that the resulting MSE region is not necessarily convex any longer, even under the assumption of single-antenna users. In conjunction with our former observation that the two user MSE region is not necessarily convex for two multi-antenna users, this extends and corrects the hitherto existing notion of the MSE region geometry.<|reference_end|> | arxiv | @article{hunger2008on,
title={On the Convexity of the MSE Region of Single-Antenna Users},
author={Raphael Hunger, Michael Joham},
journal={arXiv preprint arXiv:0804.0337},
year={2008},
archivePrefix={arXiv},
eprint={0804.0337},
primaryClass={cs.IT math.IT}
} | hunger2008on |
arxiv-3213 | 0804.0352 | Permeability Analysis based on information granulation theory | <|reference_start|>Permeability Analysis based on information granulation theory: This paper describes application of information granulation theory, on the analysis of "lugeon data". In this manner, using a combining of Self Organizing Map (SOM) and Neuro-Fuzzy Inference System (NFIS), crisp and fuzzy granules are obtained. Balancing of crisp granules and sub-fuzzy granules, within non fuzzy information (initial granulation), is rendered in open-close iteration. Using two criteria, "simplicity of rules" and "suitable adaptive threshold error level", stability of algorithm is guaranteed. In other part of paper, rough set theory (RST), to approximate analysis, has been employed. Validation of the proposed methods, on the large data set of in-situ permeability in rock masses, in the Shivashan dam, Iran, has been highlighted. By the implementation of the proposed algorithm on the lugeon data set, was proved the suggested method, relating the approximate analysis on the permeability, could be applied.<|reference_end|> | arxiv | @article{sharifzadeh2008permeability,
title={Permeability Analysis based on information granulation theory},
author={M. Sharifzadeh, H. Owladeghaffari, K. Shahriar, E. Bakhtavar},
journal={arXiv preprint arXiv:0804.0352},
year={2008},
archivePrefix={arXiv},
eprint={0804.0352},
primaryClass={cs.NE cs.AI}
} | sharifzadeh2008permeability |
arxiv-3214 | 0804.0353 | Graphical Estimation of Permeability Using RST&NFIS | <|reference_start|>Graphical Estimation of Permeability Using RST&NFIS: This paper pursues some applications of Rough Set Theory (RST) and neural-fuzzy model to analysis of "lugeon data". In the manner, using Self Organizing Map (SOM) as a pre-processing the data are scaled and then the dominant rules by RST, are elicited. Based on these rules variations of permeability in the different levels of Shivashan dam, Iran has been highlighted. Then, via using a combining of SOM and an adaptive Neuro-Fuzzy Inference System (NFIS) another analysis on the data was carried out. Finally, a brief comparison between the obtained results of RST and SOM-NFIS (briefly SONFIS) has been rendered.<|reference_end|> | arxiv | @article{owladeghaffari2008graphical,
title={Graphical Estimation of Permeability Using RST&NFIS},
author={H. Owladeghaffari, K. Shahriar, W. Pedrycz},
journal={arXiv preprint arXiv:0804.0353},
year={2008},
archivePrefix={arXiv},
eprint={0804.0353},
primaryClass={cs.NE cs.AI}
} | owladeghaffari2008graphical |
arxiv-3215 | 0804.0362 | Exhaustive enumeration unveils clustering and freezing in random 3-SAT | <|reference_start|>Exhaustive enumeration unveils clustering and freezing in random 3-SAT: We study geometrical properties of the complete set of solutions of the random 3-satisfiability problem. We show that even for moderate system sizes the number of clusters corresponds surprisingly well with the theoretic asymptotic prediction. We locate the freezing transition in the space of solutions which has been conjectured to be relevant in explaining the onset of computational hardness in random constraint satisfaction problems.<|reference_end|> | arxiv | @article{ardelius2008exhaustive,
title={Exhaustive enumeration unveils clustering and freezing in random 3-SAT},
author={John Ardelius, Lenka Zdeborov{\'a}},
journal={Phys. Rev. E 78, 040101(R) (2008)},
year={2008},
doi={10.1103/PhysRevE.78.040101},
archivePrefix={arXiv},
eprint={0804.0362},
primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.CC cs.DS}
} | ardelius2008exhaustive |
arxiv-3216 | 0804.0366 | Merging Object and Process Diagrams for Business Information Modeling | <|reference_start|>Merging Object and Process Diagrams for Business Information Modeling: While developing an information system for the University of Bern, we were faced with two major issues: managing software changes and adapting Business Information Models. Software techniques well-suited to software development teams exist, yet the models obtained are often too complex for the business user. We will first highlight the conceptual problems encountered while designing the Business Information Model. We will then propose merging class diagrams and business process modeling to achieve a necessary transparency. We will finally present a modeling tool we developed which, using pilot case studies, helps to show some of the advantages of a dual model approach.<|reference_end|> | arxiv | @article{chénais2008merging,
title={Merging Object and Process Diagrams for Business Information Modeling},
author={Patrick Ch{\'e}nais},
journal={arXiv preprint arXiv:0804.0366},
year={2008},
archivePrefix={arXiv},
eprint={0804.0366},
primaryClass={cs.SE}
} | chénais2008merging |
arxiv-3217 | 0804.0385 | On the Sum-Capacity of Degraded Gaussian Multiaccess Relay Channels | <|reference_start|>On the Sum-Capacity of Degraded Gaussian Multiaccess Relay Channels: The sum-capacity is studied for a K-user degraded Gaussian multiaccess relay channel (MARC) where the multiaccess signal received at the destination from the K sources and relay is a degraded version of the signal received at the relay from all sources, given the transmit signal at the relay. An outer bound on the capacity region is developed using cutset bounds. An achievable rate region is obtained for the decode-and-forward (DF) strategy. It is shown that for every choice of input distribution, the rate regions for the inner (DF) and outer bounds are given by the intersection of two K-dimensional polymatroids, one resulting from the multiaccess link at the relay and the other from that at the destination. Although the inner and outer bound rate regions are not identical in general, for both cases, a classical result on the intersection of two polymatroids is used to show that the intersection belongs to either the set of active cases or inactive cases, where the two bounds on the K-user sum-rate are active or inactive, respectively. It is shown that DF achieves the capacity region for a class of degraded Gaussian MARCs in which the relay has a high SNR link to the destination relative to the multiaccess link from the sources to the relay. Otherwise, DF is shown to achieve the sum-capacity for an active class of degraded Gaussian MARCs for which the DF sum-rate is maximized by a polymatroid intersection belonging to the set of active cases. This class is shown to include the class of symmetric Gaussian MARCs where all users transmit at the same power.<|reference_end|> | arxiv | @article{sankar2008on,
title={On the Sum-Capacity of Degraded Gaussian Multiaccess Relay Channels},
author={Lalitha Sankar, Narayan B. Mandayam, H. Vincent Poor},
journal={arXiv preprint arXiv:0804.0385},
year={2008},
archivePrefix={arXiv},
eprint={0804.0385},
primaryClass={cs.IT math.IT}
} | sankar2008on |
arxiv-3218 | 0804.0396 | On the approximability of minmax (regret) network optimization problems | <|reference_start|>On the approximability of minmax (regret) network optimization problems: In this paper the minmax (regret) versions of some basic polynomially solvable deterministic network problems are discussed. It is shown that if the number of scenarios is unbounded, then the problems under consideration are not approximable within $\log^{1-\epsilon} K$ for any $\epsilon>0$ unless NP $\subseteq$ DTIME$(n^{\mathrm{poly} \log n})$, where $K$ is the number of scenarios.<|reference_end|> | arxiv | @article{kasperski2008on,
title={On the approximability of minmax (regret) network optimization problems},
author={Adam Kasperski, Pawel Zielinski},
journal={arXiv preprint arXiv:0804.0396},
year={2008},
archivePrefix={arXiv},
eprint={0804.0396},
primaryClass={cs.CC cs.DM}
} | kasperski2008on |
arxiv-3219 | 0804.0409 | Cryptanalysis of Two McEliece Cryptosystems Based on Quasi-Cyclic Codes | <|reference_start|>Cryptanalysis of Two McEliece Cryptosystems Based on Quasi-Cyclic Codes: We cryptanalyse here two variants of the McEliece cryptosystem based on quasi-cyclic codes. Both aim at reducing the key size by restricting the public and secret generator matrices to be in quasi-cyclic form. The first variant considers subcodes of a primitive BCH code. We prove that this variant is not secure by finding and solving a linear system satisfied by the entries of the secret permutation matrix. The other variant uses quasi-cyclic low density parity-check codes. This scheme was devised to be immune against general attacks working for McEliece type cryptosystems based on low density parity-check codes by choosing in the McEliece scheme more general one-to-one mappings than permutation matrices. We suggest here a structural attack exploiting the quasi-cyclic structure of the code and a certain weakness in the choice of the linear transformations that hide the generator matrix of the code. Our analysis shows that with high probability a parity-check matrix of a punctured version of the secret code can be recovered in cubic time complexity in its length. The complete reconstruction of the secret parity-check matrix of the quasi-cyclic low density parity-check codes requires the search of codewords of low weight which can be done with about $2^{37}$ operations for the specific parameters proposed.<|reference_end|> | arxiv | @article{otmani2008cryptanalysis,
title={Cryptanalysis of Two McEliece Cryptosystems Based on Quasi-Cyclic Codes},
author={Ayoub Otmani, Jean-Pierre Tillich, Leonard Dallot},
journal={arXiv preprint arXiv:0804.0409},
year={2008},
archivePrefix={arXiv},
eprint={0804.0409},
primaryClass={cs.CR cs.DM}
} | otmani2008cryptanalysis |
arxiv-3220 | 0804.0441 | Joint Beamforming for Multiaccess MIMO Systems with Finite Rate Feedback | <|reference_start|>Joint Beamforming for Multiaccess MIMO Systems with Finite Rate Feedback: This paper considers multiaccess multiple-input multiple-output (MIMO) systems with finite rate feedback. The goal is to understand how to efficiently employ the given finite feedback resource to maximize the sum rate by characterizing the performance analytically. Towards this, we propose a joint quantization and feedback strategy: the base station selects the strongest users, jointly quantizes their strongest eigen-channel vectors and broadcasts a common feedback to all the users. This joint strategy is different from an individual strategy, in which quantization and feedback are performed across users independently, and it improves upon the individual strategy in the same way that vector quantization improves upon scalar quantization. In our proposed strategy, the effect of user selection is analyzed by extreme order statistics, while the effect of joint quantization is quantified by what we term ``the composite Grassmann manifold''. The achievable sum rate is then estimated by random matrix theory. Due to its simple implementation and solid performance analysis, the proposed scheme provides a benchmark for multiaccess MIMO systems with finite rate feedback.<|reference_end|> | arxiv | @article{dai2008joint,
title={Joint Beamforming for Multiaccess MIMO Systems with Finite Rate Feedback},
author={Wei Dai, Brian C. Rider and Youjian Liu},
journal={arXiv preprint arXiv:0804.0441},
year={2008},
archivePrefix={arXiv},
eprint={0804.0441},
primaryClass={cs.IT math.IT}
} | dai2008joint |
arxiv-3221 | 0804.0506 | Distributed Consensus over Wireless Sensor Networks Affected by Multipath Fading | <|reference_start|>Distributed Consensus over Wireless Sensor Networks Affected by Multipath Fading: The design of sensor networks capable of reaching a consensus on a globally optimal decision test, without the need for a fusion center, is a problem that has received considerable attention in the last years. Many consensus algorithms have been proposed, with convergence conditions depending on the graph describing the interaction among the nodes. In most works, the graph is undirected and there are no propagation delays. Only recently, the analysis has been extended to consensus algorithms incorporating propagation delays. In this work, we propose a consensus algorithm able to converge to a globally optimal decision statistic, using a wideband wireless network, governed by a fairly simple MAC mechanism, where each link is a multipath, frequency-selective, channel. The main contribution of the paper is to derive necessary and sufficient conditions on the network topology and sufficient conditions on the channel transfer functions guaranteeing the exponential convergence of the consensus algorithm to a globally optimal decision value, for any bounded delay condition.<|reference_end|> | arxiv | @article{scutari2008distributed,
title={Distributed Consensus over Wireless Sensor Networks Affected by
Multipath Fading},
author={Gesualdo Scutari and Sergio Barbarossa},
journal={arXiv preprint arXiv:0804.0506},
year={2008},
doi={10.1109/TSP.2008.924857},
archivePrefix={arXiv},
eprint={0804.0506},
primaryClass={cs.DC cs.MA}
} | scutari2008distributed |
arxiv-3222 | 0804.0510 | Nonparametric Statistical Inference for Ergodic Processes | <|reference_start|>Nonparametric Statistical Inference for Ergodic Processes: In this work a method for statistical analysis of time series is proposed, which is used to obtain solutions to some classical problems of mathematical statistics under the only assumption that the process generating the data is stationary ergodic. Namely, three problems are considered: goodness-of-fit (or identity) testing, process classification, and the change point problem. For each of the problems a test is constructed that is asymptotically accurate for the case when the data is generated by stationary ergodic processes. The tests are based on empirical estimates of distributional distance.<|reference_end|> | arxiv | @article{ryabko2008nonparametric,
title={Nonparametric Statistical Inference for Ergodic Processes},
author={Daniil Ryabko (INRIA Lille - Nord Europe), Boris Ryabko (SIBSUTI, ICT
SBRAS)},
journal={IEEE Transactions on Information Theory 56, 3 (2010) 1430-1435},
year={2008},
archivePrefix={arXiv},
eprint={0804.0510},
primaryClass={cs.IT math.IT math.ST stat.TH}
} | ryabko2008nonparametric |
arxiv-3223 | 0804.0524 | Bayesian Optimisation Algorithm for Nurse Scheduling | <|reference_start|>Bayesian Optimisation Algorithm for Nurse Scheduling: Our research has shown that schedules can be built mimicking a human scheduler by using a set of rules that involve domain knowledge. This chapter presents a Bayesian Optimization Algorithm (BOA) for the nurse scheduling problem that chooses suitable scheduling rules from a set for each nurse's assignment. Based on the idea of using probabilistic models, the BOA builds a Bayesian network for the set of promising solutions and samples these networks to generate new candidate solutions. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed algorithm may be suitable for other scheduling problems.<|reference_end|> | arxiv | @article{li2008bayesian,
title={Bayesian Optimisation Algorithm for Nurse Scheduling},
author={Jingpeng Li and Uwe Aickelin},
journal={Scalable Optimization via Probabilistic Modeling: From Algorithms
to Applications (Studies in Computational Intelligence), edited by M Pelikan,
K Sastry and E Cantu Paz, Chapter 17, pp 315-332, Springer, 2006},
year={2008},
doi={10.1007/978-3-540-34954-9_14},
archivePrefix={arXiv},
eprint={0804.0524},
primaryClass={cs.NE cs.CE}
} | li2008bayesian |
arxiv-3224 | 0804.0528 | Application of Rough Set Theory to Analysis of Hydrocyclone Operation | <|reference_start|>Application of Rough Set Theory to Analysis of Hydrocyclone Operation: This paper describes the application of rough set theory to the analysis of hydrocyclone operation. First, using a Self-Organizing Map (SOM) as a preprocessing step, the best crisp granules of the data are obtained. Then, using a combination of SOM and rough set theory (RST), called SORST, the dominant rules of the information table, obtained from laboratory tests, are extracted. Based on these rules, an approximate estimate of the decision attribute is made. Finally, a brief comparison of this method with the SOM-NFIS system (briefly, SONFIS) is highlighted.<|reference_end|> | arxiv | @article{owladeghaffari2008application,
title={Application of Rough Set Theory to Analysis of Hydrocyclone Operation},
author={H. Owladeghaffari and M. Ejtemaei and M. Irannajad},
journal={arXiv preprint arXiv:0804.0528},
year={2008},
archivePrefix={arXiv},
eprint={0804.0528},
primaryClass={cs.AI}
} | owladeghaffari2008application |
arxiv-3225 | 0804.0539 | Irregular turbo code design for the binary erasure channel | <|reference_start|>Irregular turbo code design for the binary erasure channel: In this paper, the design of irregular turbo codes for the binary erasure channel is investigated. An analytic expression of the erasure probability of punctured recursive systematic convolutional codes is derived. This exact expression will be used to track the density evolution of turbo codes over the erasure channel, that will allow for the design of capacity-approaching irregular turbo codes. Next, we propose a graph-optimal interleaver for irregular turbo codes. Simulation results for different coding rates is shown at the end.<|reference_end|> | arxiv | @article{kraidy2008irregular,
title={Irregular turbo code design for the binary erasure channel},
author={Ghassan M. Kraidy, Valentin Savin},
journal={arXiv preprint arXiv:0804.0539},
year={2008},
archivePrefix={arXiv},
eprint={0804.0539},
primaryClass={cs.IT math.IT}
} | kraidy2008irregular |
arxiv-3226 | 0804.0556 | RubberEdge: Reducing Clutching by Combining Position and Rate Control with Elastic Feedback | <|reference_start|>RubberEdge: Reducing Clutching by Combining Position and Rate Control with Elastic Feedback: Position control devices enable precise selection, but significant clutching degrades performance. Clutching can be reduced with high control-display gain or pointer acceleration, but there are human and device limits. Elastic rate control eliminates clutching completely, but can make precise selection difficult. We show that hybrid position-rate control can outperform position control by 20% when there is significant clutching, even when using pointer acceleration. Unlike previous work, our RubberEdge technique eliminates trajectory and velocity discontinuities. We derive predictive models for position control with clutching and hybrid control, and present a prototype RubberEdge position-rate control device including initial user feedback.<|reference_end|> | arxiv | @article{casiez2008rubberedge:,
title={RubberEdge: Reducing Clutching by Combining Position and Rate Control
with Elastic Feedback},
author={Géry Casiez (LIFL, INRIA Lille - Nord Europe), Daniel Vogel, Qing
Pan (LIFL, INRIA Lille - Nord Europe), Christophe Chaillou (INRIA Lille -
Nord Europe)},
journal={In Proceedings of the 20th Annual ACM Symposium on User Interface
Software and Technology, Lille, France (2007)},
year={2008},
doi={10.1145/1294211.1294234},
archivePrefix={arXiv},
eprint={0804.0556},
primaryClass={cs.HC}
} | casiez2008rubberedge: |
arxiv-3227 | 0804.0558 | Agent-Based Perception of an Environment in an Emergency Situation | <|reference_start|>Agent-Based Perception of an Environment in an Emergency Situation: We are interested in the problem of multiagent systems development for risk detecting and emergency response in an uncertain and partially perceived environment. The evaluation of the current situation passes by three stages inside the multiagent system. In a first time, the situation is represented in a dynamic way. The second step, consists to characterise the situation and finally, it is compared with other similar known situations. In this paper, we present an information modelling of an observed environment, that we have applied on the RoboCupRescue Simulation System. Information coming from the environment are formatted according to a taxonomy and using semantic features. The latter are defined thanks to a fine ontology of the domain and are managed by factual agents that aim to represent dynamically the current situation.<|reference_end|> | arxiv | @article{kebair2008agent-based,
title={Agent-Based Perception of an Environment in an Emergency Situation},
author={Fahem Kebair (LITIS), Frédéric Serin (LITIS), Cyrille Bertelle
(LITIS)},
journal={In Proceedings of the World Congress on Engineering, London, United
Kingdom (2007)},
year={2008},
doi={10.1007/978-988-98671-5-7,978-988-98671-2-6},
archivePrefix={arXiv},
eprint={0804.0558},
primaryClass={cs.AI}
} | kebair2008agent-based |
arxiv-3228 | 0804.0561 | Realistic Haptic Rendering of Interacting Deformable Objects in Virtual Environments | <|reference_start|>Realistic Haptic Rendering of Interacting Deformable Objects in Virtual Environments: A new computer haptics algorithm to be used in general interactive manipulations of deformable virtual objects is presented. In multimodal interactive simulations, haptic feedback computation often comes from contact forces. Subsequently, the fidelity of haptic rendering depends significantly on contact space modeling. Contact and friction laws between deformable models are often simplified in up to date methods. They do not allow a "realistic" rendering of the subtleties of contact space physical phenomena (such as slip and stick effects due to friction or mechanical coupling between contacts). In this paper, we use Signorini's contact law and Coulomb's friction law as a computer haptics basis. Real-time performance is made possible thanks to a linearization of the behavior in the contact space, formulated as the so-called Delassus operator, and iteratively solved by a Gauss-Seidel type algorithm. Dynamic deformation uses corotational global formulation to obtain the Delassus operator in which the mass and stiffness ratio are dissociated from the simulation time step. This last point is crucial to keep stable haptic feedback. This global approach has been packaged, implemented, and tested. Stable and realistic 6D haptic feedback is demonstrated through a clipping task experiment.<|reference_end|> | arxiv | @article{duriez2008realistic,
title={Realistic Haptic Rendering of Interacting Deformable Objects in Virtual
Environments},
author={Christian Duriez (INRIA Lille - Nord Europe), Frédéric Dubois
(LMGC), Abderrahmane Kheddar (JRL), Claude Andriot (LIST)},
journal={IEEE Transactions on Visualization and Computer Graphics 12, 1
(2006) 36-47},
year={2008},
archivePrefix={arXiv},
eprint={0804.0561},
primaryClass={cs.GR}
} | duriez2008realistic |
arxiv-3229 | 0804.0570 | A Parameterized Perspective on $P_2$-Packings | <|reference_start|>We study (vertex-disjoint) $P_2$-packings in graphs under a parameterized perspective. Starting from a maximal $P_2$-packing $\p$ of size $j$ we use extremal arguments for determining how many vertices of $\p$ appear in some $P_2$-packing of size $(j+1)$. We basically can 'reuse' $2.5j$ vertices. We also present a kernelization algorithm that gives a kernel of size bounded by $7k$. With these two results we build an algorithm which constructs a $P_2$-packing of size $k$ in time $\Oh^*(2.482^{3k})$.<|reference_end|> | arxiv | @article{chen2008a,
title={A Parameterized Perspective on $P_2$-Packings},
author={Jianer Chen, Henning Fernau, Dan Ning, Daniel Raible, Jianxin Wang},
journal={arXiv preprint arXiv:0804.0570},
year={2008},
archivePrefix={arXiv},
eprint={0804.0570},
primaryClass={cs.DS cs.CC cs.DM}
} | chen2008a |
arxiv-3230 | 0804.0573 | An Artificial Immune System as a Recommender System for Web Sites | <|reference_start|>An Artificial Immune System as a Recommender System for Web Sites: Artificial Immune Systems have been used successfully to build recommender systems for film databases. In this research, an attempt is made to extend this idea to web site recommendation. A collection of more than 1000 individuals' web profiles (alternatively called preferences / favourites / bookmarks files) will be used. URLs will be classified using the DMOZ (Directory Mozilla) database of the Open Directory Project as our ontology. This will then be used as the data for the Artificial Immune Systems rather than the actual addresses. The first attempt will involve using a simple classification code number coupled with the number of pages within that classification code. However, this implementation does not make use of the hierarchical tree-like structure of DMOZ. Consideration will then be given to the construction of a similarity measure for web profiles that makes use of this hierarchical information to build a better-informed Artificial Immune System.<|reference_end|> | arxiv | @article{morrison2008an,
title={An Artificial Immune System as a Recommender System for Web Sites},
author={Tom Morrison and Uwe Aickelin},
journal={Proceedings of the 1st International Conference on Artificial
Immune Systems (ICARIS 2002), pp 161-169, Canterbury, UK, 2002},
year={2008},
archivePrefix={arXiv},
eprint={0804.0573},
primaryClass={cs.NE cs.AI}
} | morrison2008an |
arxiv-3231 | 0804.0577 | Decentralized Search with Random Costs | <|reference_start|>Decentralized Search with Random Costs: A decentralized search algorithm is a method of routing on a random graph that uses only limited, local, information about the realization of the graph. In some random graph models it is possible to define such algorithms which produce short paths when routing from any vertex to any other, while for others it is not. We consider random graphs with random costs assigned to the edges. In this situation, we use the methods of stochastic dynamic programming to create a decentralized search method which attempts to minimize the total cost, rather than the number of steps, of each path. We show that it succeeds in doing so among all decentralized search algorithms which monotonically approach the destination. Our algorithm depends on knowing the expected cost of routing from every vertex to any other, but we show that this may be calculated iteratively, and in practice can be easily estimated from the cost of previous routes and compressed into a small routing table. The methods applied here can also be applied directly in other situations, such as efficient searching in graphs with varying vertex degrees.<|reference_end|> | arxiv | @article{sandberg2008decentralized,
title={Decentralized Search with Random Costs},
author={Oskar Sandberg},
journal={arXiv preprint arXiv:0804.0577},
year={2008},
archivePrefix={arXiv},
eprint={0804.0577},
primaryClass={math.PR cs.DS}
} | sandberg2008decentralized |
arxiv-3232 | 0804.0580 | Explicit Learning: an Effort towards Human Scheduling Algorithms | <|reference_start|>Explicit Learning: an Effort towards Human Scheduling Algorithms: Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem.<|reference_end|> | arxiv | @article{li2008explicit,
title={Explicit Learning: an Effort towards Human Scheduling Algorithms},
author={Jingpeng Li and Uwe Aickelin},
journal={Proceedings of the 1st Multidisciplinary International Conference
on Scheduling: Theory and Applications (MISTA 2003), pp 240-241, Nottingham,
UK 2003},
year={2008},
archivePrefix={arXiv},
eprint={0804.0580},
primaryClass={cs.NE cs.AI}
} | li2008explicit |
arxiv-3233 | 0804.0581 | Computing a Finite Size Representation of the Set of Approximate Solutions of an MOP | <|reference_start|>Computing a Finite Size Representation of the Set of Approximate Solutions of an MOP: Recently, a framework for the approximation of the entire set of $\epsilon$-efficient solutions (denoted by $E_\epsilon$) of a multi-objective optimization problem with stochastic search algorithms has been proposed. It was proven that such an algorithm produces -- under mild assumptions on the process to generate new candidate solutions -- a sequence of archives which converges to $E_{\epsilon}$ in the limit and in the probabilistic sense. The result, though satisfactory for most discrete MOPs, is at least from the practical viewpoint not sufficient for continuous models: in this case, the set of approximate solutions typically forms an $n$-dimensional object, where $n$ denotes the dimension of the parameter space, and thus, it may lead to performance problems since in practice one has to cope with a finite archive. Here we focus on obtaining finite and tight approximations of $E_\epsilon$, the latter measured by the Hausdorff distance. We propose and investigate a novel archiving strategy theoretically and empirically. For this, we analyze the convergence behavior of the algorithm, yielding bounds on the obtained approximation quality as well as on the cardinality of the resulting approximation, and present some numerical results.<|reference_end|> | arxiv | @article{schuetze2008computing,
title={Computing a Finite Size Representation of the Set of Approximate
Solutions of an MOP},
author={Oliver Schuetze (INRIA Futurs), Carlos A. Coello Coello (INRIA Lille -
Nord Europe), Emilia Tantar (INRIA Lille - Nord Europe), El-Ghazali Talbi
(INRIA Futurs, LIFIA, LGI - IMAG, LIFL)},
journal={arXiv preprint arXiv:0804.0581},
year={2008},
number={RR-6492},
archivePrefix={arXiv},
eprint={0804.0581},
primaryClass={cs.NA}
} | schuetze2008computing |
arxiv-3234 | 0804.0599 | Symmetry Breaking for Maximum Satisfiability | <|reference_start|>Symmetry Breaking for Maximum Satisfiability: Symmetries are intrinsic to many combinatorial problems including Boolean Satisfiability (SAT) and Constraint Programming (CP). In SAT, the identification of symmetry breaking predicates (SBPs) is a well-known, often effective, technique for solving hard problems. The identification of SBPs in SAT has been the subject of significant improvements in recent years, resulting in more compact SBPs and more effective algorithms. The identification of SBPs has also been applied to pseudo-Boolean (PB) constraints, showing that symmetry breaking can also be an effective technique for PB constraints. This paper extends further the application of SBPs, and shows that SBPs can be identified and used in Maximum Satisfiability (MaxSAT), as well as in its most well-known variants, including partial MaxSAT, weighted MaxSAT and weighted partial MaxSAT. As with SAT and PB, symmetry breaking predicates for MaxSAT and variants are shown to be effective for a representative number of problem domains, allowing solving problem instances that current state of the art MaxSAT solvers could not otherwise solve.<|reference_end|> | arxiv | @article{marques-silva2008symmetry,
title={Symmetry Breaking for Maximum Satisfiability},
author={Joao Marques-Silva, Ines Lynce and Vasco Manquinho},
journal={arXiv preprint arXiv:0804.0599},
year={2008},
number={RT/039/08-CDIL},
archivePrefix={arXiv},
eprint={0804.0599},
primaryClass={cs.AI cs.LO}
} | marques-silva2008symmetry |
arxiv-3235 | 0804.0611 | Channel State Feedback Schemes for Multiuser MIMO-OFDM Downlink | <|reference_start|>Channel State Feedback Schemes for Multiuser MIMO-OFDM Downlink: Channel state feedback schemes for the MIMO broadcast downlink have been widely studied in the frequency-flat case. This work focuses on the more relevant frequency selective case, where some important new aspects emerge. We consider a MIMO-OFDM broadcast channel and compare achievable ergodic rates under three channel state feedback schemes: analog feedback, direction quantized feedback and "time-domain" channel quantized feedback. The first two schemes are direct extensions of previously proposed schemes. The third scheme is novel, and it is directly inspired by rate-distortion theory of Gaussian correlated sources. For each scheme we derive the conditions under which the system achieves full multiplexing gain. The key difference with respect to the widely treated frequency-flat case is that in MIMO-OFDM the frequency-domain channel transfer function is a Gaussian correlated source. The new time-domain quantization scheme takes advantage of the channel frequency correlation structure and outperforms the other schemes. Furthermore, it is by far simpler to implement than complicated spherical vector quantization. In particular, we observe that no structured codebook design and vector quantization is actually needed for efficient channel state information feedback.<|reference_end|> | arxiv | @article{shirani-mehr2008channel,
title={Channel State Feedback Schemes for Multiuser MIMO-OFDM Downlink},
author={Hooman Shirani-Mehr and Giuseppe Caire},
journal={arXiv preprint arXiv:0804.0611},
year={2008},
archivePrefix={arXiv},
eprint={0804.0611},
primaryClass={cs.IT math.IT}
} | shirani-mehr2008channel |
arxiv-3236 | 0804.0629 | Short expressions of permutations as products and cryptanalysis of the Algebraic Eraser | <|reference_start|>Short expressions of permutations as products and cryptanalysis of the Algebraic Eraser: In March 2004, Anshel, Anshel, Goldfeld, and Lemieux introduced the \emph{Algebraic Eraser} scheme for key agreement over an insecure channel, using a novel hybrid of infinite and finite noncommutative groups. They also introduced the \emph{Colored Burau Key Agreement Protocol (CBKAP)}, a concrete realization of this scheme. We present general, efficient heuristic algorithms, which extract the shared key out of the public information provided by CBKAP. These algorithms are, according to heuristic reasoning and according to massive experiments, successful for all sizes of the security parameters, assuming that the keys are chosen with standard distributions. Our methods come from probabilistic group theory (permutation group actions and expander graphs). In particular, we provide a simple algorithm for finding short expressions of permutations in $S_n$, as products of given random permutations. Heuristically, our algorithm gives expressions of length $O(n^2\log n)$, in time and space $O(n^3)$. Moreover, this is provable from \emph{the Minimal Cycle Conjecture}, a simply stated hypothesis concerning the uniform distribution on $S_n$. Experiments show that the constants in these estimations are small. This is the first practical algorithm for this problem for $n\ge 256$. Remark: \emph{Algebraic Eraser} is a trademark of SecureRF. The variant of CBKAP actually implemented by SecureRF uses proprietary distributions, and thus our results do not imply its vulnerability. See also arXiv:1202.0598.<|reference_end|> | arxiv | @article{kalka2008short,
title={Short expressions of permutations as products and cryptanalysis of the
Algebraic Eraser},
author={Arkadius Kalka, Mina Teicher, and Boaz Tsaban},
journal={Advances in Applied Mathematics 49 (2012) 57-76},
year={2008},
doi={10.1016/j.aam.2012.03.001},
archivePrefix={arXiv},
eprint={0804.0629},
primaryClass={math.GR cs.CR math.CO math.PR}
} | kalka2008short |
arxiv-3237 | 0804.0635 | Source Coding with Mismatched Distortion Measures | <|reference_start|>Source Coding with Mismatched Distortion Measures: We consider the problem of lossy source coding with a mismatched distortion measure. That is, we investigate what distortion guarantees can be made with respect to distortion measure $\tilde{\rho}$, for a source code designed such that it achieves distortion less than $D$ with respect to distortion measure $\rho$. We find a single-letter characterization of this mismatch distortion and study properties of this quantity. These results give insight into the robustness of lossy source coding with respect to modeling errors in the distortion measure. They also provide guidelines on how to choose a good tractable approximation of an intractable distortion measure.<|reference_end|> | arxiv | @article{niesen2008source,
title={Source Coding with Mismatched Distortion Measures},
author={Urs Niesen, Devavrat Shah, Gregory Wornell},
journal={arXiv preprint arXiv:0804.0635},
year={2008},
archivePrefix={arXiv},
eprint={0804.0635},
primaryClass={cs.IT math.IT}
} | niesen2008source |
arxiv-3238 | 0804.0659 | Steganography from weak cryptography | <|reference_start|>Steganography from weak cryptography: We introduce a problem setting which we call ``the freedom fighters' problem''. It subtly differs from the prisoners' problem. We propose a steganographic method that allows Alice and Bob to fool Wendy the warden in this setting. Their messages are hidden in encryption keys. The recipient has no prior knowledge of these keys, and has to cryptanalyze ciphertexts in order to recover them. We show an example of the protocol and give a partial security analysis.<|reference_end|> | arxiv | @article{skoric2008steganography,
title={Steganography from weak cryptography},
author={Boris Skoric},
journal={arXiv preprint arXiv:0804.0659},
year={2008},
archivePrefix={arXiv},
eprint={0804.0659},
primaryClass={cs.CR}
} | skoric2008steganography |
arxiv-3239 | 0804.0660 | Weak Affine Light Typing is complete with respect to Safe Recursion on Notation | <|reference_start|>Weak Affine Light Typing is complete with respect to Safe Recursion on Notation: Weak affine light typing (WALT) assigns light affine linear formulae as types to a subset of lambda-terms of System F. WALT is poly-time sound: if a lambda-term M has a type in WALT, M can be evaluated with a polynomial cost in the dimension of the derivation that gives it a type. The evaluation proceeds under any strategy of a rewriting relation which is a mix of both call-by-name and call-by-value beta-reductions. WALT weakens, namely generalizes, the notion of "stratification of deductions", common to some Light Systems -- those logical systems, derived from Linear logic, to characterize the set of Polynomial functions --. A weaker stratification allows one to define a compositional embedding of Safe recursion on notation (SRN) into WALT. It turns out that the expressivity of WALT is strictly stronger than that of the known Light Systems. The embedding passes through the representation of a subsystem of SRN. It is obtained by restricting the composition scheme of SRN to one that can only use its safe variables linearly. On one side, this suggests that SRN, in fact, can be redefined in terms of more primitive constructs. On the other, the embedding of SRN into WALT enjoys the two following remarkable aspects. Every datatype, required by the embedding, is represented from scratch, showing the strong structural proof-theoretical roots of WALT. Moreover, the embedding highlights a stratification structure of the normal and safe arguments, normally hidden inside the world of SRN-normal/safe variables: the less an argument is "polynomially impredicative", the deeper, in a formal, proof-theoretical sense, it is represented inside WALT.
Finally, since WALT is SRN-complete it is also polynomial-time complete since SRN is.<|reference_end|> | arxiv | @article{roversi2008weak,
title={Weak Affine Light Typing is complete with respect to Safe Recursion on
Notation},
author={Luca Roversi},
journal={arXiv preprint arXiv:0804.0660},
year={2008},
archivePrefix={arXiv},
eprint={0804.0660},
primaryClass={cs.LO}
} | roversi2008weak |
arxiv-3240 | 0804.0686 | Discrimination of two channels by adaptive methods and its application to quantum system | <|reference_start|>Discrimination of two channels by adaptive methods and its application to quantum system: The optimal exponential error rate for adaptive discrimination of two channels is discussed. In this problem, adaptive choice of input signal is allowed. This problem is discussed in various settings. It is proved that adaptive choice does not improve the exponential error rate in these settings. These results are applied to quantum state discrimination.<|reference_end|> | arxiv | @article{hayashi2008discrimination,
title={Discrimination of two channels by adaptive methods and its application
to quantum system},
author={Masahito Hayashi},
journal={IEEE Transactions on Information Theory, Volume 55, Issue 8, 3807
- 3820 (2009)},
year={2008},
doi={10.1109/TIT.2009.2023726},
archivePrefix={arXiv},
eprint={0804.0686},
primaryClass={quant-ph cs.IT math.IT math.ST stat.TH}
} | hayashi2008discrimination |
arxiv-3241 | 0804.0722 | A Memetic Algorithm for the Generalized Traveling Salesman Problem | <|reference_start|>A Memetic Algorithm for the Generalized Traveling Salesman Problem: The generalized traveling salesman problem (GTSP) is an extension of the well-known traveling salesman problem. In GTSP, we are given a partition of cities into groups and we are required to find a minimum length tour that includes exactly one city from each group. The recent studies on this subject consider different variations of a memetic algorithm approach to the GTSP. The aim of this paper is to present a new memetic algorithm for GTSP with a powerful local search procedure. The experiments show that the proposed algorithm clearly outperforms all of the known heuristics with respect to both solution quality and running time. While the other memetic algorithms were designed only for the symmetric GTSP, our algorithm can solve both symmetric and asymmetric instances.<|reference_end|> | arxiv | @article{gutin2008a,
title={A Memetic Algorithm for the Generalized Traveling Salesman Problem},
author={Gregory Gutin, Daniel Karapetyan},
journal={Natural Computing 9(1) (2010) 47-60},
year={2008},
doi={10.1007/s11047-009-9111-6},
archivePrefix={arXiv},
eprint={0804.0722},
primaryClass={cs.DS}
} | gutin2008a |
arxiv-3242 | 0804.0735 | Generalized Traveling Salesman Problem Reduction Algorithms | <|reference_start|>Generalized Traveling Salesman Problem Reduction Algorithms: The generalized traveling salesman problem (GTSP) is an extension of the well-known traveling salesman problem. In GTSP, we are given a partition of cities into groups and we are required to find a minimum length tour that includes exactly one city from each group. The aim of this paper is to present a problem reduction algorithm that deletes redundant vertices and edges, preserving the optimal solution. The algorithm's running time is O(N^3) in the worst case, but it is significantly faster in practice. The algorithm has reduced the problem size by 15-20% on average in our experiments and this has decreased the solution time by 10-60% for each of the considered solvers.<|reference_end|> | arxiv | @article{gutin2008generalized,
title={Generalized Traveling Salesman Problem Reduction Algorithms},
author={Gregory Gutin and Daniel Karapetyan},
journal={Algorithmic Operations Research 4 (2009) 144-154},
year={2008},
archivePrefix={arXiv},
eprint={0804.0735},
primaryClass={cs.DS}
} | gutin2008generalized |
arxiv-3243 | 0804.0742 | Statistical-mechanics approach to a reinforcement learning model with memory | <|reference_start|>Statistical-mechanics approach to a reinforcement learning model with memory: We introduce a two-player model of reinforcement learning with memory. Past actions of an iterated game are stored in a memory and used to determine the player's next action. To examine the behaviour of the model, some approximate methods are used and confronted with numerical simulations and the exact master equation. When the length of memory of the players increases to infinity, the model undergoes an absorbing-state phase transition. Performance of the examined strategies is checked in the prisoner's dilemma game. It turns out that it is advantageous to have a large memory in symmetric games, but it is better to have a short memory in asymmetric ones.<|reference_end|> | arxiv | @article{lipowski2008statistical-mechanics,
title={Statistical-mechanics approach to a reinforcement learning model with
memory},
author={Adam Lipowski and Krzysztof Gontarek and Marcel Ausloos},
journal={Physica A vol. 388 (2009) pp. 1849-1856},
year={2008},
doi={10.1016/j.physa.2009.01.028},
archivePrefix={arXiv},
eprint={0804.0742},
primaryClass={cond-mat.stat-mech cs.GT}
} | lipowski2008statistical-mechanics |
arxiv-3244 | 0804.0743 | Scalable Distributed Video-on-Demand: Theoretical Bounds and Practical Algorithms | <|reference_start|>Scalable Distributed Video-on-Demand: Theoretical Bounds and Practical Algorithms: We analyze a distributed system where n nodes called boxes store a large set of videos and collaborate to serve simultaneously n videos or less. We explore under which conditions such a system can be scalable while serving any sequence of demands. We model this problem through a combination of two algorithms: a video allocation algorithm and a connection scheduling algorithm. The latter plays against an adversary that incrementally proposes video requests.<|reference_end|> | arxiv | @article{viennot2008scalable,
title={Scalable Distributed Video-on-Demand: Theoretical Bounds and Practical
Algorithms},
author={Laurent Viennot (INRIA Rocquencourt) and Yacine Boufkhad (INRIA
Rocquencourt, LIAFA) and Fabien Mathieu (INRIA Rocquencourt, FT R&D) and
Fabien De Montgolfier (INRIA Rocquencourt, LIAFA) and Diego Perino (INRIA
Rocquencourt, FT R&D)},
journal={arXiv preprint arXiv:0804.0743},
year={2008},
number={RR-6496},
archivePrefix={arXiv},
eprint={0804.0743},
primaryClass={cs.NI cs.DS}
} | viennot2008scalable |
arxiv-3245 | 0804.0790 | Outage behavior of slow fading channels with power control using noisy quantized CSIT | <|reference_start|>Outage behavior of slow fading channels with power control using noisy quantized CSIT: The topic of this study is the outage behavior of multiple-antenna slow fading channels with quantized feedback and partial power control. A fixed-rate communication system is considered. It is known from the literature that with error-free feedback, the outage-optimal quantizer for power control has a circular structure. Moreover, the diversity gain of the system increases polynomially with the cardinality of the power control codebook. Here, a similar system is studied, but when the feedback link is error-prone. We prove that in the high-SNR regime, the optimal quantizer structure with noisy feedback is still circular and the optimal Voronoi regions are contiguous non-zero probability intervals. Furthermore, the optimal power control codebook resembles a channel optimized scalar quantizer (COSQ), i.e., the Voronoi regions merge with erroneous feedback information. Using a COSQ, the outage performance of the system is superior to that of a no-feedback scheme. However, asymptotic analysis shows that the diversity gain of the system is the same as a no-CSIT scheme if there is a non-zero and non-vanishing feedback error probability.<|reference_end|> | arxiv | @article{ekbatani2008outage,
title={Outage behavior of slow fading channels with power control using noisy
quantized CSIT},
author={Siavash Ekbatani and Farzad Etemadi and Hamid Jafarkhani},
journal={arXiv preprint arXiv:0804.0790},
year={2008},
archivePrefix={arXiv},
eprint={0804.0790},
primaryClass={cs.IT math.IT}
} | ekbatani2008outage |
arxiv-3246 | 0804.0797 | Sarbanes-Oxley: What About all the Spreadsheets? | <|reference_start|>Sarbanes-Oxley: What About all the Spreadsheets?: The Sarbanes-Oxley Act of 2002 has finally forced corporations to examine the validity of their spreadsheets. They are beginning to understand the spreadsheet error literature, including what it tells them about the need for comprehensive spreadsheet testing. However, controlling for fraud will require a completely new set of capabilities, and a great deal of new research will be needed to develop fraud control capabilities. This paper discusses the riskiness of spreadsheets, which can now be quantified to a considerable degree. It then discusses how to use control frameworks to reduce the dangers created by spreadsheets. It focuses especially on testing, which appears to be the most crucial element in spreadsheet controls.<|reference_end|> | arxiv | @article{panko2008sarbanes-oxley:,
title={Sarbanes-Oxley: What About all the Spreadsheets?},
author={Raymond R. Panko and Nicholas Ordway},
journal={European Spreadsheet Risks Int. Grp. (EuSpRIG) 2005 15-47
ISBN:1-902724-16-X},
year={2008},
archivePrefix={arXiv},
eprint={0804.0797},
primaryClass={cs.SE cs.CY}
} | panko2008sarbanes-oxley: |
arxiv-3247 | 0804.0813 | Spatial Interference Cancelation for Mobile Ad Hoc Networks: Perfect CSI | <|reference_start|>Spatial Interference Cancelation for Mobile Ad Hoc Networks: Perfect CSI: Interference between nodes directly limits the capacity of mobile ad hoc networks. This paper focuses on spatial interference cancelation with perfect channel state information (CSI), and analyzes the corresponding network capacity. Specifically, by using multiple antennas, zero-forcing beamforming is applied at each receiver for canceling the strongest interferers. Given spatial interference cancelation, the network transmission capacity is analyzed in this paper, which is defined as the maximum transmitting node density under constraints on outage and the signal-to-interference-noise ratio. Assuming the Poisson distribution for the locations of network nodes and spatially i.i.d. Rayleigh fading channels, mathematical tools from stochastic geometry are applied for deriving scaling laws for transmission capacity. Specifically, for small target outage probability, transmission capacity is proved to increase following a power law, where the exponent is the inverse of the size of the antenna array or larger depending on the path loss exponent. As shown by simulations, spatial interference cancelation increases transmission capacity by an order of magnitude or more even if only one extra antenna is added to each node.<|reference_end|> | arxiv | @article{huang2008spatial,
title={Spatial Interference Cancelation for Mobile Ad Hoc Networks: Perfect CSI},
author={Kaibin Huang and Jeffrey G. Andrews and Heath, Jr., Robert W. and
Dongning Guo and Randall A. Berry},
journal={arXiv preprint arXiv:0804.0813},
year={2008},
doi={10.1109/ACSSC.2008.5074377},
archivePrefix={arXiv},
eprint={0804.0813},
primaryClass={cs.IT cs.NI math.IT}
} | huang2008spatial |
arxiv-3248 | 0804.0819 | Kalman Filtered Compressed Sensing | <|reference_start|>Kalman Filtered Compressed Sensing: We consider the problem of reconstructing time sequences of spatially sparse signals (with unknown and time-varying sparsity patterns) from a limited number of linear "incoherent" measurements, in real-time. The signals are sparse in some transform domain referred to as the sparsity basis. For a single spatial signal, the solution is provided by Compressed Sensing (CS). The question that we address is, for a sequence of sparse signals, can we do better than CS, if (a) the sparsity pattern of the signal's transform coefficients' vector changes slowly over time, and (b) a simple prior model on the temporal dynamics of its current non-zero elements is available. The overall idea of our solution is to use CS to estimate the support set of the initial signal's transform vector. At future times, run a reduced order Kalman filter with the currently estimated support and estimate new additions to the support set by applying CS to the Kalman innovations or filtering error (whenever it is "large").<|reference_end|> | arxiv | @article{vaswani2008kalman,
title={Kalman Filtered Compressed Sensing},
author={Namrata Vaswani},
journal={arXiv preprint arXiv:0804.0819},
year={2008},
doi={10.1109/ICIP.2008.4711899},
archivePrefix={arXiv},
eprint={0804.0819},
primaryClass={cs.IT math.IT math.ST stat.TH}
} | vaswani2008kalman |
arxiv-3249 | 0804.0852 | On the Influence of Selection Operators on Performances in Cellular Genetic Algorithms | <|reference_start|>On the Influence of Selection Operators on Performances in Cellular Genetic Algorithms: In this paper, we study the influence of the selective pressure on the performance of cellular genetic algorithms. Cellular genetic algorithms are genetic algorithms where the population is embedded on a toroidal grid. This structure slows down the propagation of the best-so-far individual and allows potentially good solutions to be kept in the population. We present two strategies for reducing the selective pressure in order to slow down the propagation of the best solution even more. We experiment with these strategies on a hard optimization problem, the quadratic assignment problem, and we show that for both there is a value of the control parameter which gives the best performance. This optimal value is not explained by the selective pressure alone, measured either by takeover time or by the evolution of diversity. This study makes us conclude that we need tools other than the sole selective pressure measures to explain the performance of cellular genetic algorithms.<|reference_end|> | arxiv | @article{simoncini2008on,
title={On the Influence of Selection Operators on Performances in Cellular
Genetic Algorithms},
author={David Simoncini (I3S) and Philippe Collard (I3S) and S\'ebastien
Verel (I3S) and Manuel Clergue (I3S)},
journal={Proceedings of the IEEE Congress on Evolutionary Computation
CEC2007, Singapore (2007)},
year={2008},
archivePrefix={arXiv},
eprint={0804.0852},
primaryClass={cs.AI}
} | simoncini2008on |
arxiv-3250 | 0804.0862 | On ad hoc routing with guaranteed delivery | <|reference_start|>On ad hoc routing with guaranteed delivery: We point out a simple poly-time log-space routing algorithm in ad hoc networks with guaranteed delivery using universal exploration sequences.<|reference_end|> | arxiv | @article{braverman2008on,
title={On ad hoc routing with guaranteed delivery},
author={Mark Braverman},
journal={arXiv preprint arXiv:0804.0862},
year={2008},
archivePrefix={arXiv},
eprint={0804.0862},
primaryClass={cs.DC cs.CC}
} | braverman2008on |
arxiv-3251 | 0804.0876 | Semi-continuous Sized Types and Termination | <|reference_start|>Semi-continuous Sized Types and Termination: Some type-based approaches to termination use sized types: an ordinal bound for the size of a data structure is stored in its type. A recursive function over a sized type is accepted if it is visible in the type system that recursive calls occur just at a smaller size. This approach is only sound if the type of the recursive function is admissible, i.e., depends on the size index in a certain way. To explore the space of admissible functions in the presence of higher-kinded data types and impredicative polymorphism, a semantics is developed where sized types are interpreted as functions from ordinals into sets of strongly normalizing terms. It is shown that upper semi-continuity of such functions is a sufficient semantic criterion for admissibility. To provide a syntactical criterion, a calculus for semi-continuous functions is developed.<|reference_end|> | arxiv | @article{abel2008semi-continuous,
title={Semi-continuous Sized Types and Termination},
author={Andreas Abel},
journal={Logical Methods in Computer Science, Volume 4, Issue 2 (April 10,
2008) lmcs:1236},
year={2008},
doi={10.2168/LMCS-4(2:3)2008},
archivePrefix={arXiv},
eprint={0804.0876},
primaryClass={cs.PL cs.LO}
} | abel2008semi-continuous |
arxiv-3252 | 0804.0879 | The equations of the ideal latches | <|reference_start|>The equations of the ideal latches: The latches are simple circuits with feedback from the digital electrical engineering. We have included in our work the C element of Muller, the RS latch, the clocked RS latch, the D latch and also circuits containing two interconnected latches: the edge triggered RS flip-flop, the D flip-flop, the JK flip-flop, the T flip-flop. The purpose of this study is to model with equations the previous circuits, considered to be ideal, i.e. non-inertial. The technique of analysis is the pseudoboolean differential calculus.<|reference_end|> | arxiv | @article{vlad2008the,
title={The equations of the ideal latches},
author={Serban E. Vlad},
journal={arXiv preprint arXiv:0804.0879},
year={2008},
archivePrefix={arXiv},
eprint={0804.0879},
primaryClass={cs.GL}
} | vlad2008the |
arxiv-3253 | 0804.0924 | A Unified Semi-Supervised Dimensionality Reduction Framework for Manifold Learning | <|reference_start|>A Unified Semi-Supervised Dimensionality Reduction Framework for Manifold Learning: We present a general framework of semi-supervised dimensionality reduction for manifold learning which naturally generalizes existing supervised and unsupervised learning frameworks that apply spectral decomposition. Algorithms derived under our framework are able to employ both labeled and unlabeled examples and are able to handle complex problems where data form separate clusters of manifolds. Our framework offers simple views, explains relationships among existing frameworks and provides further extensions which can improve existing algorithms. Furthermore, a new semi-supervised kernelization framework called the ``KPCA trick'' is proposed to handle non-linear problems.<|reference_end|> | arxiv | @article{chatpatanasiri2008a,
title={A Unified Semi-Supervised Dimensionality Reduction Framework for
Manifold Learning},
author={Ratthachat Chatpatanasiri and Boonserm Kijsirikul},
journal={arXiv preprint arXiv:0804.0924},
year={2008},
archivePrefix={arXiv},
eprint={0804.0924},
primaryClass={cs.LG cs.AI}
} | chatpatanasiri2008a |
arxiv-3254 | 0804.0936 | Cache-Oblivious Selection in Sorted X+Y Matrices | <|reference_start|>Cache-Oblivious Selection in Sorted X+Y Matrices: Let X[0..n-1] and Y[0..m-1] be two sorted arrays, and define the mxn matrix A by A[j][i]=X[i]+Y[j]. Frederickson and Johnson gave an efficient algorithm for selecting the k-th smallest element from A. We show how to make this algorithm IO-efficient. Our cache-oblivious algorithm performs O((m+n)/B) IOs, where B is the block size of memory transfers.<|reference_end|> | arxiv | @article{de berg2008cache-oblivious,
title={Cache-Oblivious Selection in Sorted X+Y Matrices},
author={Mark de Berg and Shripad Thite},
journal={arXiv preprint arXiv:0804.0936},
year={2008},
archivePrefix={arXiv},
eprint={0804.0936},
primaryClass={cs.DS}
} | de berg2008cache-oblivious |
arxiv-3255 | 0804.0937 | Issues in Strategic Decision Modelling | <|reference_start|>Issues in Strategic Decision Modelling: [Spreadsheet] Models are invaluable tools for strategic planning. Models help key decision makers develop a shared conceptual understanding of complex decisions, identify sensitivity factors and test management scenarios. Different modelling approaches are specialist areas in themselves. Model development can be onerous, expensive, time consuming, and often bewildering. It is also an iterative process where the true magnitude of the effort, time and data required is often not fully understood until well into the process. This paper explores the traditional approaches to strategic planning modelling commonly used in organisations and considers the application of a real-options approach to match and benefit from the increasing uncertainty in today's rapidly changing world.<|reference_end|> | arxiv | @article{jennings2008issues,
title={Issues in Strategic Decision Modelling},
author={Paula Jennings},
journal={Proc. European Spreadsheet Risks Int. Grp. 2003 111-116 ISBN 1
86166 199 1},
year={2008},
archivePrefix={arXiv},
eprint={0804.0937},
primaryClass={cs.HC}
} | jennings2008issues |
arxiv-3256 | 0804.0940 | Optimum Binary Search Trees on the Hierarchical Memory Model | <|reference_start|>Optimum Binary Search Trees on the Hierarchical Memory Model: The Hierarchical Memory Model (HMM) of computation is similar to the standard Random Access Machine (RAM) model except that the HMM has a non-uniform memory organized in a hierarchy of levels numbered 1 through h. The cost of accessing a memory location increases with the level number, and accesses to memory locations belonging to the same level cost the same. Formally, the cost of a single access to the memory location at address a is given by m(a), where m: N -> N is the memory cost function, and the h distinct values of m model the different levels of the memory hierarchy. We study the problem of constructing and storing a binary search tree (BST) of minimum cost, over a set of keys, with probabilities for successful and unsuccessful searches, on the HMM with an arbitrary number of memory levels, and for the special case h=2. While the problem of constructing optimum binary search trees has been well studied for the standard RAM model, the additional parameter m for the HMM increases the combinatorial complexity of the problem. We present two dynamic programming algorithms to construct optimum BSTs bottom-up. These algorithms run efficiently under some natural assumptions about the memory hierarchy. We also give an efficient algorithm to construct a BST that is close to optimum, by modifying a well-known linear-time approximation algorithm for the RAM model. We conjecture that the problem of constructing an optimum BST for the HMM with an arbitrary memory cost function m is NP-complete.<|reference_end|> | arxiv | @article{thite2008optimum,
title={Optimum Binary Search Trees on the Hierarchical Memory Model},
author={Shripad Thite},
journal={arXiv preprint arXiv:0804.0940},
year={2008},
number={UILU-ENG-00-2215 ACT-142},
archivePrefix={arXiv},
eprint={0804.0940},
primaryClass={cs.DS}
} | thite2008optimum |
arxiv-3257 | 0804.0941 | Reducing Overconfidence in Spreadsheet Development | <|reference_start|>Reducing Overconfidence in Spreadsheet Development: Despite strong evidence of widespread errors, spreadsheet developers rarely subject their spreadsheets to post-development testing to reduce errors. This may be because spreadsheet developers are overconfident in the accuracy of their spreadsheets. This conjecture is plausible because overconfidence is present in a wide variety of human cognitive domains, even among experts. This paper describes two experiments in overconfidence in spreadsheet development. The first is a pilot study to determine the existence of overconfidence. The second tests a manipulation to reduce overconfidence and errors. The manipulation is modestly successful, indicating that overconfidence reduction is a promising avenue to pursue.<|reference_end|> | arxiv | @article{panko2008reducing,
title={Reducing Overconfidence in Spreadsheet Development},
author={Raymond R. Panko},
journal={Proc. European Spreadsheet Risks Int. Grp. 2003 49-66 ISBN 1 86166
199 1},
year={2008},
archivePrefix={arXiv},
eprint={0804.0941},
primaryClass={cs.HC cs.SE}
} | panko2008reducing |
arxiv-3258 | 0804.0942 | Spacetime Meshing for Discontinuous Galerkin Methods | <|reference_start|>Spacetime Meshing for Discontinuous Galerkin Methods: Spacetime discontinuous Galerkin (SDG) finite element methods are used to solve such PDEs involving space and time variables arising from wave propagation phenomena in important applications in science and engineering. To support an accurate and efficient solution procedure using SDG methods and to exploit the flexibility of these methods, we give a meshing algorithm to construct an unstructured simplicial spacetime mesh over an arbitrary simplicial space domain. Our algorithm is the first spacetime meshing algorithm suitable for efficient solution of nonlinear phenomena in anisotropic media using novel discontinuous Galerkin finite element methods for implicit solutions directly in spacetime. Given a triangulated d-dimensional Euclidean space domain M (a simplicial complex) and initial conditions of the underlying hyperbolic spacetime PDE, we construct an unstructured simplicial mesh of the (d+1)-dimensional spacetime domain M x [0,infinity). Our algorithm uses a near-optimal number of spacetime elements, each with bounded temporal aspect ratio for any finite prefix M x [0,T] of spacetime. Our algorithm is an advancing front procedure that constructs the spacetime mesh incrementally, an extension of the Tent Pitcher algorithm of Ungor and Sheffer (2000). In 2DxTime, our algorithm simultaneously adapts the size and shape of spacetime tetrahedra to a spacetime error indicator. We are able to incorporate more general front modification operations, such as edge flips and limited mesh smoothing. Our algorithm represents recent progress towards a meshing algorithm in 2DxTime to track moving domain boundaries and other singular surfaces such as shock fronts.<|reference_end|> | arxiv | @article{thite2008spacetime,
title={Spacetime Meshing for Discontinuous Galerkin Methods},
author={Shripad Thite},
journal={arXiv preprint arXiv:0804.0942},
year={2008},
number={UIUCDCS-R-2005-2612},
archivePrefix={arXiv},
eprint={0804.0942},
primaryClass={cs.CG}
} | thite2008spacetime |
arxiv-3259 | 0804.0943 | The Wall and The Ball: A Study of Domain Referent Spreadsheet Errors | <|reference_start|>The Wall and The Ball: A Study of Domain Referent Spreadsheet Errors: The Cell Error Rate in simple spreadsheets averages about 2% to 5%. This CER has been measured in domain free environments. This paper compares the CERs occurring in domain free and applied domain tasks. The applied domain task requires the application of simple linear algebra to a costing problem. The results show that domain referent knowledge influences participants' approaches to spreadsheet creation and spreadsheet usage. The conclusion is that spreadsheet error making is influenced by domain knowledge and domain perception. Qualitative findings also suggest that spreadsheet error making is a part of overall human behaviour, and ought to be analyzed against this wider canvas.<|reference_end|> | arxiv | @article{irons2008the,
title={The Wall and The Ball: A Study of Domain Referent Spreadsheet Errors},
author={Richard J. Irons},
journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2003 33-48
ISBN 1 86166 199 1},
year={2008},
archivePrefix={arXiv},
eprint={0804.0943},
primaryClass={cs.HC cs.SE}
} | irons2008the |
arxiv-3260 | 0804.0946 | Efficient Spacetime Meshing with Nonlocal Cone Constraints | <|reference_start|>Efficient Spacetime Meshing with Nonlocal Cone Constraints: Spacetime Discontinuous Galerkin (DG) methods are used to solve hyperbolic PDEs describing wavelike physical phenomena. When the PDEs are nonlinear, the speed of propagation of the phenomena, called the wavespeed, at any point in the spacetime domain is computed as part of the solution. We give an advancing front algorithm to construct a simplicial mesh of the spacetime domain suitable for DG solutions. Given a simplicial mesh of a bounded linear or planar space domain M, we incrementally construct a mesh of the spacetime domain M x [0,infinity) such that the solution can be computed in constant time per element. We add a patch of spacetime elements to the mesh at every step. The boundary of every patch is causal which means that the elements in the patch can be solved immediately and that the patches in the mesh are partially ordered by dependence. The elements in a single patch are coupled because they share implicit faces; however, the number of elements in each patch is bounded. The main contribution of this paper is sufficient constraints on the progress in time made by the algorithm at each step which guarantee that a new patch with causal boundary can be added to the mesh at every step even when the wavespeed is increasing discontinuously. Our algorithm adapts to the local gradation of the space mesh as well as the wavespeed that most constrains progress at each step. Previous algorithms have been restricted at each step by the maximum wavespeed throughout the entire spacetime domain.<|reference_end|> | arxiv | @article{thite2008efficient,
title={Efficient Spacetime Meshing with Nonlocal Cone Constraints},
author={Shripad Thite},
journal={arXiv preprint arXiv:0804.0946},
year={2008},
archivePrefix={arXiv},
eprint={0804.0946},
primaryClass={cs.CG}
} | thite2008efficient |
arxiv-3261 | 0804.0957 | Derandomizing the Isolation Lemma and Lower Bounds for Circuit Size | <|reference_start|>Derandomizing the Isolation Lemma and Lower Bounds for Circuit Size: The isolation lemma of Mulmuley et al \cite{MVV87} is an important tool in the design of randomized algorithms and has played an important role in several nontrivial complexity upper bounds. On the other hand, polynomial identity testing is a well-studied algorithmic problem with efficient randomized algorithms and the problem of obtaining efficient \emph{deterministic} identity tests has received a lot of attention recently. The goal of this note is to compare the isolation lemma with polynomial identity testing: 1. We show that derandomizing reasonably restricted versions of the isolation lemma implies circuit size lower bounds. We derive the circuit lower bounds by examining the connection between the isolation lemma and polynomial identity testing. We give a randomized polynomial-time identity test for noncommutative circuits of polynomial degree based on the isolation lemma. Using this result, we show that derandomizing the isolation lemma implies noncommutative circuit size lower bounds. The restricted versions of the isolation lemma we consider are natural and would suffice for the standard applications of the isolation lemma. 2. From the result of Klivans-Spielman \cite{KS01} we observe that there is a randomized polynomial-time identity test for commutative circuits of polynomial degree, also based on a more general isolation lemma for linear forms. Consequently, derandomization of (a suitable version of) this isolation lemma also implies circuit size lower bounds in the commutative setting.<|reference_end|> | arxiv | @article{arvind2008derandomizing,
title={Derandomizing the Isolation Lemma and Lower Bounds for Circuit Size},
author={V. Arvind and Partha Mukhopadhyay},
journal={arXiv preprint arXiv:0804.0957},
year={2008},
archivePrefix={arXiv},
eprint={0804.0957},
primaryClass={cs.CC}
} | arvind2008derandomizing |
arxiv-3262 | 0804.0970 | Testing data types implementations from algebraic specifications | <|reference_start|>Testing data types implementations from algebraic specifications: Algebraic specifications of data types provide a natural basis for testing data types implementations. In this framework, the conformance relation is based on the satisfaction of axioms. This makes it possible to formally state the fundamental concepts of testing: exhaustive test set, testability hypotheses, oracle. Various criteria for selecting finite test sets have been proposed. They depend on the form of the axioms, and on the possibilities of observation of the implementation under test. This last point is related to the well-known oracle problem. As the main interest of algebraic specifications is data type abstraction, testing a concrete implementation raises the issue of the gap between the abstract description and the concrete representation. The observational semantics of algebraic specifications bring solutions on the basis of the so-called observable contexts. After a description of testing methods based on algebraic specifications, the chapter gives a brief presentation of some tools and case studies, and presents some applications to other formal methods involving datatypes.<|reference_end|> | arxiv | @article{gaudel2008testing,
title={Testing data types implementations from algebraic specifications},
author={Marie-Claude Gaudel (LRI) and Pascale Le Gall (IBISC)},
journal={Formal Methods and Testing, Springer-Verlag (Ed.) (2008) 209-239},
year={2008},
archivePrefix={arXiv},
eprint={0804.0970},
primaryClass={cs.PL}
} | gaudel2008testing |
arxiv-3263 | 0804.0980 | Large MIMO Detection: A Low-Complexity Detector at High Spectral Efficiencies | <|reference_start|>Large MIMO Detection: A Low-Complexity Detector at High Spectral Efficiencies: We consider large MIMO systems, where by `{\em large}' we mean number of transmit and receive antennas of the order of tens to hundreds. Such large MIMO systems will be of immense interest because of the very high spectral efficiencies possible in such systems. We present a low-complexity detector which achieves uncoded near-exponential diversity performance for hundreds of antennas (i.e., achieves near SISO AWGN performance in a large MIMO fading environment) with an average per-bit complexity of just $O(N_tN_r)$, where $N_t$ and $N_r$ denote the number of transmit and receive antennas, respectively. With an outer turbo code, the proposed detector achieves good coded bit error performance as well. For example, in a 600 transmit and 600 receive antennas V-BLAST system with a high spectral efficiency of 200 bps/Hz (using BPSK and rate-1/3 turbo code), our simulation results show that the proposed detector performs close to within about 4.6 dB from theoretical capacity. We also adopt the proposed detector for the low-complexity decoding of high-rate non-orthogonal space-time block codes (STBC) from division algebras (DA). For example, we have decoded the $16\times 16$ full-rate non-orthogonal STBC from DA using the proposed detector and show that it performs close to within about 5.5 dB of the capacity using 4-QAM and rate-3/4 turbo code at a spectral efficiency of 24 bps/Hz. The practical feasibility of the proposed high-performance low-complexity detector could potentially trigger wide interest in the implementation of large MIMO systems. 
In large MC-CDMA systems with hundreds of users, the proposed detector is shown to achieve near single-user performance at an average per-bit complexity linear in number of users, which is quite appealing for its use in practical CDMA systems.<|reference_end|> | arxiv | @article{vardhan2008large,
title={Large MIMO Detection: A Low-Complexity Detector at High Spectral
Efficiencies},
author={K. Vishnu Vardhan and Saif K. Mohammed and A. Chockalingam and B.
Sundar Rajan},
journal={arXiv preprint arXiv:0804.0980},
year={2008},
archivePrefix={arXiv},
eprint={0804.0980},
primaryClass={cs.IT math.IT}
} | vardhan2008large |
arxiv-3264 | 0804.0986 | Cauchy's Arm Lemma on a Growing Sphere | <|reference_start|>Cauchy's Arm Lemma on a Growing Sphere: We propose a variant of Cauchy's Lemma, proving that when a convex chain on one sphere is redrawn (with the same lengths and angles) on a larger sphere, the distance between its endpoints increases. The main focus of this work is a comparison of three alternate proofs, to show the links between Toponogov's Comparison Theorem, Legendre's Theorem and Cauchy's Arm Lemma.<|reference_end|> | arxiv | @article{abel2008cauchy's,
title={Cauchy's Arm Lemma on a Growing Sphere},
author={Zachary Abel and David Charlton and Sebastien Collette and Erik D.
Demaine and Martin L. Demaine and Stefan Langerman and Joseph O'Rourke and
Val Pinciu and Godfried Toussaint},
journal={arXiv preprint arXiv:0804.0986},
year={2008},
archivePrefix={arXiv},
eprint={0804.0986},
primaryClass={cs.CG}
} | abel2008cauchy's |
arxiv-3265 | 0804.0996 | Woven Graph Codes: Asymptotic Performances and Examples | <|reference_start|>Woven Graph Codes: Asymptotic Performances and Examples: Constructions of woven graph codes based on constituent block and convolutional codes are studied. It is shown that within the random ensemble of such codes based on $s$-partite, $s$-uniform hypergraphs, where $s$ depends only on the code rate, there exist codes satisfying the Varshamov-Gilbert (VG) and the Costello lower bound on the minimum distance and the free distance, respectively. A connection between regular bipartite graphs and tailbiting codes is shown. Some examples of woven graph codes are presented. Among them an example of a rate $R_{\rm wg}=1/3$ woven graph code with $d_{\rm free}=32$ based on Heawood's bipartite graph and containing $n=7$ constituent rate $R^{c}=2/3$ convolutional codes with overall constraint lengths $\nu^{c}=5$ is given. An encoding procedure for woven graph codes with complexity proportional to the number of constituent codes and their overall constraint length $\nu^{c}$ is presented.<|reference_end|> | arxiv | @article{bocharova2008woven,
title={Woven Graph Codes: Asymptotic Performances and Examples},
author={Irina E. Bocharova, Rolf Johannesson, Boris D. Kudryashov, Victor V.
Zyablov},
journal={arXiv preprint arXiv:0804.0996},
year={2008},
doi={10.1109/TIT.2009.2034787},
archivePrefix={arXiv},
eprint={0804.0996},
primaryClass={cs.IT math.IT}
} | bocharova2008woven |
arxiv-3266 | 0804.1021 | Differentiation of Kaltofen's division-free determinant algorithm | <|reference_start|>Differentiation of Kaltofen's division-free determinant algorithm: Kaltofen has proposed a new approach in [Kaltofen 1992] for computing matrix determinants. The algorithm is based on a baby steps/giant steps construction of Krylov subspaces, and computes the determinant as the constant term of a characteristic polynomial. For matrices over an abstract field and by the results of Baur and Strassen 1983, the determinant algorithm, actually a straight-line program, leads to an algorithm with the same complexity for computing the adjoint of a matrix [Kaltofen 1992]. However, the latter is obtained by the reverse mode of automatic differentiation and somehow is not ``explicit''. We study this adjoint algorithm, show how it can be implemented (without resorting to an automatic transformation), and demonstrate its use on polynomial matrices.<|reference_end|> | arxiv | @article{villard2008differentiation,
title={Differentiation of Kaltofen's division-free determinant algorithm},
author={Gilles Villard (LIP)},
journal={Journal of Symbolic Computation 7, 46 (2011) 773-790},
year={2008},
doi={10.1016/j.jsc.2010.08.012},
archivePrefix={arXiv},
eprint={0804.1021},
primaryClass={cs.SC cs.MS}
} | villard2008differentiation |
arxiv-3267 | 0804.1033 | A Semi-Automatic Framework to Discover Epistemic Modalities in Scientific Articles | <|reference_start|>A Semi-Automatic Framework to Discover Epistemic Modalities in Scientific Articles: Documents in scientific newspapers are often marked by attitudes and opinions of the author and/or other persons, who contribute with objective and subjective statements and arguments as well. In this respect, the attitude is often accomplished by a linguistic modality. As in languages like English, French and German, the modality is expressed by special verbs like can, must, may, etc. and the subjunctive mood, an occurrence of modalities often induces that these verbs take over the role of modality. This is not correct as it is proven that modality is the instrument of the whole sentence where both the adverbs, modal particles, punctuation marks, and the intonation of a sentence contribute. Often, a combination of all these instruments is necessary to express a modality. In this work, we are concerned with finding modal verbs in scientific texts as a pre-step towards the discovery of the attitude of an author. Whereas the input will be an arbitrary text, the output consists of zones representing modalities.<|reference_end|> | arxiv | @article{danilava2008a,
title={A Semi-Automatic Framework to Discover Epistemic Modalities in
Scientific Articles},
author={Sviatlana Danilava, Christoph Schommer},
journal={arXiv preprint arXiv:0804.1033},
year={2008},
archivePrefix={arXiv},
eprint={0804.1033},
primaryClass={cs.CL cs.LO}
} | danilava2008a |
arxiv-3268 | 0804.1041 | On the Stretch Factor of Convex Delaunay Graphs | <|reference_start|>On the Stretch Factor of Convex Delaunay Graphs: Let C be a compact and convex set in the plane that contains the origin in its interior, and let S be a finite set of points in the plane. The Delaunay graph DG_C(S) of S is defined to be the dual of the Voronoi diagram of S with respect to the convex distance function defined by C. We prove that DG_C(S) is a t-spanner for S, for some constant t that depends only on the shape of the set C. Thus, for any two points p and q in S, the graph DG_C(S) contains a path between p and q whose Euclidean length is at most t times the Euclidean distance between p and q.<|reference_end|> | arxiv | @article{bose2008on,
title={On the Stretch Factor of Convex Delaunay Graphs},
author={Prosenjit Bose, Paz Carmi, Sebastien Collette and Michiel Smid},
journal={arXiv preprint arXiv:0804.1041},
year={2008},
archivePrefix={arXiv},
eprint={0804.1041},
primaryClass={cs.CG}
} | bose2008on |
arxiv-3269 | 0804.1046 | Discrete schemes for Gaussian curvature and their convergence | <|reference_start|>Discrete schemes for Gaussian curvature and their convergence: In this paper, several discrete schemes for Gaussian curvature are surveyed. The convergence property of a modified discrete scheme for the Gaussian curvature is considered. Furthermore, a new discrete scheme for Gaussian curvature is presented. We prove that the new scheme converges at the regular vertex with valence not less than 5. By constructing a counterexample, we also show that it is impossible to build a discrete scheme for Gaussian curvature which converges over the regular vertex with valence 4. Finally, asymptotic errors of several discrete schemes for Gaussian curvature are compared.<|reference_end|> | arxiv | @article{xu2008discrete,
title={Discrete schemes for Gaussian curvature and their convergence},
author={Zhiqiang Xu, Guoliang Xu},
journal={arXiv preprint arXiv:0804.1046},
year={2008},
archivePrefix={arXiv},
eprint={0804.1046},
primaryClass={cs.CV cs.CG cs.GR cs.NA}
} | xu2008discrete |
arxiv-3270 | 0804.1079 | P is a proper subset of NP | <|reference_start|>P is a proper subset of NP: The purpose of this article is to examine and limit the conditions in which the P complexity class could be equivalent to the NP complexity class. Proof is provided by demonstrating that as the number of clauses in a NP-complete problem approaches infinity, the number of input sets processed per computation performed also approaches infinity when solved by a polynomial time solution. It is then possible to determine that the only deterministic optimization of a NP-complete problem that could prove P = NP would be one that examines no more than a polynomial number of input sets for a given problem. It is then shown that subdividing the set of all possible input sets into a representative polynomial search partition is a problem in the FEXP complexity class. The findings of this article are combined with the findings of other articles in this series of 4 articles. The final conclusion will be demonstrated that P =/= NP.<|reference_end|> | arxiv | @article{meek2008p,
title={P is a proper subset of NP},
author={Jerrald Meek},
journal={arXiv preprint arXiv:0804.1079},
year={2008},
archivePrefix={arXiv},
eprint={0804.1079},
primaryClass={cs.CC}
} | meek2008p |
arxiv-3271 | 0804.1083 | Towards algebraic methods for maximum entropy estimation | <|reference_start|>Towards algebraic methods for maximum entropy estimation: We show that various formulations (e.g., dual and Kullback-Csiszar iterations) of estimation of maximum entropy (ME) models can be transformed to solving systems of polynomial equations in several variables for which one can use celebrated Grobner bases methods. Posing of ME estimation as solving polynomial equations is possible, in the cases where feature functions (sufficient statistic) that provides the information about the underlying random variable in the form of expectations are integer valued.<|reference_end|> | arxiv | @article{dukkipati2008towards,
title={Towards algebraic methods for maximum entropy estimation},
author={Ambedkar Dukkipati},
journal={arXiv preprint arXiv:0804.1083},
year={2008},
archivePrefix={arXiv},
eprint={0804.1083},
primaryClass={cs.IT math.IT}
} | dukkipati2008towards |
arxiv-3272 | 0804.1115 | Adaptive Dynamics of Realistic Small-World Networks | <|reference_start|>Adaptive Dynamics of Realistic Small-World Networks: Continuing in the steps of Jon Kleinberg's and others' celebrated work on decentralized search in small-world networks, we conduct an experimental analysis of a dynamic algorithm that produces small-world networks. We find that the algorithm adapts robustly to a wide variety of situations in realistic geographic networks with synthetic test data and with real world data, even when vertices are uneven and non-homogeneously distributed. We investigate the same algorithm in the case where some vertices are more popular destinations for searches than others, for example obeying power-laws. We find that the algorithm adapts and adjusts the networks according to the distributions, leading to improved performance. The ability of the dynamic process to adapt and create small worlds in such diverse settings suggests a possible mechanism by which such networks appear in nature.<|reference_end|> | arxiv | @article{mogren2008adaptive,
title={Adaptive Dynamics of Realistic Small-World Networks},
author={Olof Mogren, Oskar Sandberg, Vilhelm Verendel and Devdatt Dubhashi},
journal={arXiv preprint arXiv:0804.1115},
year={2008},
archivePrefix={arXiv},
eprint={0804.1115},
primaryClass={cs.DS cs.DC}
} | mogren2008adaptive |
arxiv-3273 | 0804.1117 | Network Beamforming Using Relays with Perfect Channel Information | <|reference_start|>Network Beamforming Using Relays with Perfect Channel Information: This paper is on beamforming in wireless relay networks with perfect channel information at relays, the receiver, and the transmitter if there is a direct link between the transmitter and receiver. It is assumed that every node in the network has its own power constraint. A two-step amplify-and-forward protocol is used, in which the transmitter and relays not only use match filters to form a beam at the receiver but also adaptively adjust their transmit powers according to the channel strength information. For a network with any number of relays and no direct link, the optimal power control is solved analytically. The complexity of finding the exact solution is linear in the number of relays. Our results show that the transmitter should always use its maximal power and the optimal power used at a relay is not a binary function. It can take any value between zero and its maximum transmit power. Also, this value depends on the quality of all other channels in addition to the relay's own channels. Despite this coupling fact, distributive strategies are proposed in which, with the aid of a low-rate broadcast from the receiver, a relay needs only its own channel information to implement the optimal power control. Simulated performance shows that network beamforming achieves the maximal diversity and outperforms other existing schemes. Then, beamforming in networks with a direct link is considered. We show that when the direct link exists during the first step only, the optimal power control is the same as that of networks with no direct link. For networks with a direct link during the second step, recursive numerical algorithms are proposed to solve the power control problem. Simulation shows that by adjusting the transmitter and relays' powers adaptively, network performance is significantly improved.<|reference_end|> | arxiv | @article{jing2008network,
title={Network Beamforming Using Relays with Perfect Channel Information},
author={Y. Jing, H. Jafarkhani},
journal={arXiv preprint arXiv:0804.1117},
year={2008},
archivePrefix={arXiv},
eprint={0804.1117},
primaryClass={cs.IT math.IT}
} | jing2008network |
arxiv-3274 | 0804.1118 | A Survey of Quantum Programming Languages: History, Methods, and Tools | <|reference_start|>A Survey of Quantum Programming Languages: History, Methods, and Tools: Quantum computer programming is emerging as a new subject domain from multidisciplinary research in quantum computing, computer science, mathematics (especially quantum logic, lambda calculi, and linear logic), and engineering attempts to build the first non-trivial quantum computer. This paper briefly surveys the history, methods, and proposed tools for programming quantum computers circa late 2007. It is intended to provide an extensive but non-exhaustive look at work leading up to the current state-of-the-art in quantum computer programming. Further, it is an attempt to analyze the needed programming tools for quantum programmers, to use this analysis to predict the direction in which the field is moving, and to make recommendations for further development of quantum programming language tools.<|reference_end|> | arxiv | @article{sofge2008a,
title={A Survey of Quantum Programming Languages: History, Methods, and Tools},
author={Donald A. Sofge},
journal={arXiv preprint arXiv:0804.1118},
year={2008},
archivePrefix={arXiv},
eprint={0804.1118},
primaryClass={cs.PL}
} | sofge2008a |
arxiv-3275 | 0804.1133 | Prospective Algorithms for Quantum Evolutionary Computation | <|reference_start|>Prospective Algorithms for Quantum Evolutionary Computation: This effort examines the intersection of the emerging field of quantum computing and the more established field of evolutionary computation. The goal is to understand what benefits quantum computing might offer to computational intelligence and how computational intelligence paradigms might be implemented as quantum programs to be run on a future quantum computer. We critically examine proposed algorithms and methods for implementing computational intelligence paradigms, primarily focused on heuristic optimization methods including and related to evolutionary computation, with particular regard for their potential for eventual implementation on quantum computing hardware.<|reference_end|> | arxiv | @article{sofge2008prospective,
title={Prospective Algorithms for Quantum Evolutionary Computation},
author={Donald A. Sofge},
journal={arXiv preprint arXiv:0804.1133},
year={2008},
archivePrefix={arXiv},
eprint={0804.1133},
primaryClass={cs.NE}
} | sofge2008prospective |
arxiv-3276 | 0804.1170 | Approximating L1-distances between mixture distributions using random projections | <|reference_start|>Approximating L1-distances between mixture distributions using random projections: We consider the problem of computing L1-distances between every pair of probability densities from a given family. We point out that the technique of Cauchy random projections (Indyk'06) in this context turns into stochastic integrals with respect to Cauchy motion. For piecewise-linear densities these integrals can be sampled from if one can sample from the stochastic integral of the function x->(1,x). We give an explicit density function for this stochastic integral and present an efficient sampling algorithm. As a consequence we obtain an efficient algorithm to approximate the L1-distances with a small relative error. For piecewise-polynomial densities we show how to approximately sample from the distributions resulting from the stochastic integrals. This also results in an efficient algorithm to approximate the L1-distances, although our inability to get exact samples worsens the dependence on the parameters.<|reference_end|> | arxiv | @article{mahalanabis2008approximating,
title={Approximating L1-distances between mixture distributions using random
projections},
author={Satyaki Mahalanabis, Daniel Stefankovic},
journal={arXiv preprint arXiv:0804.1170},
year={2008},
archivePrefix={arXiv},
eprint={0804.1170},
primaryClass={cs.DS}
} | mahalanabis2008approximating |
arxiv-3277 | 0804.1172 | Transceiver Design with Low-Precision Analog-to-Digital Conversion : An Information-Theoretic Perspective | <|reference_start|>Transceiver Design with Low-Precision Analog-to-Digital Conversion : An Information-Theoretic Perspective: Modern communication receiver architectures center around digital signal processing (DSP), with the bulk of the receiver processing being performed on digital signals obtained after analog-to-digital conversion (ADC). In this paper, we explore Shannon-theoretic performance limits when ADC precision is drastically reduced, from typical values of 8-12 bits used in current communication transceivers, to 1-3 bits. The goal is to obtain insight on whether DSP-centric transceiver architectures are feasible as communication bandwidths scale up, recognizing that high-precision ADC at high sampling rates is either unavailable, or too costly or power-hungry. Specifically, we evaluate the communication limits imposed by low-precision ADC for the ideal real discrete-time Additive White Gaussian Noise (AWGN) channel, under an average power constraint on the input. For an ADC with K quantization bins (i.e., a precision of log2 K bits), we show that the Shannon capacity is achievable by a discrete input distribution with at most K + 1 mass points. For 2-bin (1-bit) symmetric ADC, this result is tightened to show that binary antipodal signaling is optimum for any signal-to-noise ratio (SNR). For multi-bit ADC, the capacity is computed numerically, and the results obtained are used to make the following encouraging observations regarding system design with low-precision ADC : (a) even at moderately high SNR of up to 20 dB, 2-3 bit quantization results in only 10-20% reduction of spectral efficiency, which is acceptable for large communication bandwidths, (b) standard equiprobable pulse amplitude modulation with ADC thresholds set to implement maximum likelihood hard decisions is asymptotically optimum at high SNR, and works well at low to moderate SNRs as well.<|reference_end|> | arxiv | @article{singh2008transceiver,
title={Transceiver Design with Low-Precision Analog-to-Digital Conversion : An
Information-Theoretic Perspective},
author={Jaspreet Singh, Onkar Dabeer, Upamanyu Madhow},
journal={arXiv preprint arXiv:0804.1172},
year={2008},
archivePrefix={arXiv},
eprint={0804.1172},
primaryClass={cs.IT math.IT}
} | singh2008transceiver |
arxiv-3278 | 0804.1173 | A Lower Bound on the Area of a 3-Coloured Disk Packing | <|reference_start|>A Lower Bound on the Area of a 3-Coloured Disk Packing: Given a set of unit-disks in the plane with union area $A$, what fraction of $A$ can be covered by selecting a pairwise disjoint subset of the disks? Rado conjectured 1/4 and proved $1/4.41$. Motivated by the problem of channel-assignment for wireless access points, in which use of 3 channels is a standard practice, we consider a variant where the selected subset of disks must be 3-colourable with disks of the same colour pairwise-disjoint. For this variant of the problem, we conjecture that it is always possible to cover at least $1/1.41$ of the union area and prove $1/2.09$. We also provide an $O(n^2)$ algorithm to select a subset achieving a $1/2.77$ bound.<|reference_end|> | arxiv | @article{brass2008a,
title={A Lower Bound on the Area of a 3-Coloured Disk Packing},
author={Peter Brass, Ferran Hurtado, Benjamin Lafreniere, Anna Lubiw},
journal={arXiv preprint arXiv:0804.1173},
year={2008},
archivePrefix={arXiv},
eprint={0804.1173},
primaryClass={cs.CG cs.DM}
} | brass2008a |
arxiv-3279 | 0804.1179 | A Dynamical Boolean Network | <|reference_start|>A Dynamical Boolean Network: We propose a Dynamical Boolean Network (DBN), which is a Virtual Boolean Network (VBN) whose set of states is fixed but whose transition matrix can change from one discrete time step to another. The transition matrix $T_{k}$ of our DBN for time step $k$ is of the form $Q^{-1}TQ$, where $T$ is a transition matrix (of a VBN) defined at time step $k$ in the course of the construction of our DBN and $Q$ is the matrix representation of some randomly chosen permutation $P$ of the states of our DBN. For each of several classes of such permutations, we carried out a number of simulations of a DBN with two nodes; each of our simulations consisted of 1,000 trials of 10,000 time steps each. In one of our simulations, only six of the 16 possible single-node transition rules for a VBN with two nodes were visited a total of 300,000 times (over all 1,000 trials). In that simulation, linearity appears to play a significant role in that three of those six single-node transition rules are transition rules of a Linear Virtual Boolean Network (LVBN); the other three are the negations of the first three. We also discuss the notions of a Probabilistic Boolean Network and a Hidden Markov Model--in both cases, in the context of using an arbitrary (though not necessarily one-to-one) function to label the states of a VBN.<|reference_end|> | arxiv | @article{ito2008a,
title={A Dynamical Boolean Network},
author={Genta Ito},
journal={arXiv preprint arXiv:0804.1179},
year={2008},
archivePrefix={arXiv},
eprint={0804.1179},
primaryClass={cs.DM}
} | ito2008a |
arxiv-3280 | 0804.1183 | Hash Property and Fixed-rate Universal Coding Theorems | <|reference_start|>Hash Property and Fixed-rate Universal Coding Theorems: The aim of this paper is to prove the achievability of fixed-rate universal coding problems by using our previously introduced notion of hash property. These problems are the fixed-rate lossless universal source coding problem and the fixed-rate universal channel coding problem. Since an ensemble of sparse matrices satisfies the hash property requirement, it is proved that we can construct universal codes by using sparse matrices.<|reference_end|> | arxiv | @article{muramatsu2008hash,
title={Hash Property and Fixed-rate Universal Coding Theorems},
author={Jun Muramatsu and Shigeki Miyake},
journal={IEEE Transactions on Information Theory, vol. 56, no. 6, pp.
2688-2698, June 2010. Corrections: IEEE Transactions on Information Theory,
vol. 58, no. 5, pp. 3305-3307, May 2012},
year={2008},
archivePrefix={arXiv},
eprint={0804.1183},
primaryClass={cs.IT math.IT}
} | muramatsu2008hash |
arxiv-3281 | 0804.1184 | Securing U-Healthcare Sensor Networks using Public Key Based Scheme | <|reference_start|>Securing U-Healthcare Sensor Networks using Public Key Based Scheme: Recent emergence of electronic culture uplifts healthcare facilities to a new era with the aid of wireless sensor network (WSN) technology. Due to the sensitiveness of medical data, austere privacy and security are inevitable for all parts of healthcare systems. However, the constantly evolving nature and constrained resources of sensors in WSN inflict unavailability of a lucid line of defense to ensure perfect security. In order to provide holistic security, protections must be incorporated in every component of healthcare sensor networks. This paper proposes an efficient security scheme for healthcare applications of WSN which uses the notion of public key cryptosystem. Our entire security scheme comprises basically of two parts; a key handshaking scheme based on simple linear operations and the derivation of decryption key by a receiver node for a particular sender in the network. Our architecture allows both base station to node or node to base station secure communications, and node-to-node secure communications. We consider both the issues of stringent security and network performance to propose our security scheme.<|reference_end|> | arxiv | @article{haque2008securing,
title={Securing U-Healthcare Sensor Networks using Public Key Based Scheme},
author={Md. Mokammel Haque, Al-Sakib Khan Pathan, and Choong Seon Hong},
journal={arXiv preprint arXiv:0804.1184},
year={2008},
archivePrefix={arXiv},
eprint={0804.1184},
primaryClass={cs.CR}
} | haque2008securing |
arxiv-3282 | 0804.1185 | Cryptanalysis of Yang-Wang-Chang's Password Authentication Scheme with Smart Cards | <|reference_start|>Cryptanalysis of Yang-Wang-Chang's Password Authentication Scheme with Smart Cards: In 2005, Yang, Wang, and Chang proposed an improved timestamp-based password authentication scheme in an attempt to overcome the flaws of Yang-Shieh's legendary timestamp-based remote authentication scheme using smart cards. After analyzing the improved scheme proposed by Yang-Wang-Chang, we have found that their scheme is still insecure and vulnerable to four types of forgery attacks. Hence, in this paper, we prove that their claim that their scheme is intractable is incorrect. Also, we show that even an attack based on Sun et al.'s attack could be launched against their scheme which they claimed to resolve with their proposal.<|reference_end|> | arxiv | @article{pathan2008cryptanalysis,
title={Cryptanalysis of Yang-Wang-Chang's Password Authentication Scheme with
Smart Cards},
author={Al-Sakib Khan Pathan and Choong Seon Hong},
journal={10th IEEE International Conference on Advanced Communication
Technology (IEEE ICACT 2008), Volume III, February 17-20, 2008, Phoenix Park,
Korea, pp. 1618-1620},
year={2008},
archivePrefix={arXiv},
eprint={0804.1185},
primaryClass={cs.CR}
} | pathan2008cryptanalysis |
arxiv-3283 | 0804.1187 | M\'ethode de calcul du rayonnement acoustique de structures complexes | <|reference_start|>M\'ethode de calcul du rayonnement acoustique de structures complexes: In the automotive industry, predicting noise during design cycle is a necessary step. Well-known methods exist to answer this issue in low frequency domain. Among these, Finite Element Methods, adapted to closed domains, are quite easy to implement whereas Boundary Element Methods are more adapted to infinite domains, but may induce singularity problems. In this article, the described method, the SDM, allows to use both methods in their best application domain. A new method is also presented to solve the SDM exterior problem. Instead of using Boundary Element Methods, an original use of Finite Elements is made. Efficiency of this new version of the Substructure Deletion Method is discussed.<|reference_end|> | arxiv | @article{viallet2008m\'ethode,
title={M\'ethode de calcul du rayonnement acoustique de structures complexes},
author={Marianne Viallet (LTDS), G\'erald Poum\'erol, Olivier Dessombz (LTDS),
Louis Jezequel (LTDS)},
journal={Dans Actes du huiti\`eme colloque national en Calcul des
structures - GIENS 2007 - Huiti\`eme colloque national en Calcul des
structures, Giens : France (2007)},
year={2008},
archivePrefix={arXiv},
eprint={0804.1187},
primaryClass={cs.CE}
} | viallet2008m\'ethode |
arxiv-3284 | 0804.1193 | Spreading Signals in the Wideband Limit | <|reference_start|>Spreading Signals in the Wideband Limit: Wideband communications are impossible with signals that are spread over a very large band and are transmitted over multipath channels unknown ahead of time. This work exploits the I-mmse connection to bound the achievable data-rate of spreading signals in wideband settings, and to conclude that the achievable data-rate diminishes as the bandwidth increases due to channel uncertainty. The result applies to all spreading modulations, i.e. signals that are evenly spread over the bandwidth available to the communication system, with SNR smaller than log(W/L)/(W/L), and holds for communications over channels where the number of paths L is unbounded but sub-linear in the bandwidth W.<|reference_end|> | arxiv | @article{zwecher2008spreading,
title={Spreading Signals in the Wideband Limit},
author={Elchanan Zwecher and Dana Porrat},
journal={arXiv preprint arXiv:0804.1193},
year={2008},
archivePrefix={arXiv},
eprint={0804.1193},
primaryClass={cs.IT math.IT}
} | zwecher2008spreading |
arxiv-3285 | 0804.1214 | New Lower Bounds for the Maximum Number of Runs in a String | <|reference_start|>New Lower Bounds for the Maximum Number of Runs in a String: We show a new lower bound for the maximum number of runs in a string. We prove that for any e > 0, (a - e)n is an asymptotic lower bound, where a = 56733/60064 = 0.944542. It is superior to the previous bound 0.927 given by Franek et al. Moreover, our construction of the strings and the proof is much simpler than theirs.<|reference_end|> | arxiv | @article{kusano2008new,
title={New Lower Bounds for the Maximum Number of Runs in a String},
author={Kazuhiko Kusano, Wataru Matsubara, Akira Ishino, Hideo Bannai, Ayumi
Shinohara},
journal={arXiv preprint arXiv:0804.1214},
year={2008},
archivePrefix={arXiv},
eprint={0804.1214},
primaryClass={cs.DM}
} | kusano2008new |
arxiv-3286 | 0804.1244 | Geometric Data Analysis, From Correspondence Analysis to Structured Data Analysis (book review) | <|reference_start|>Geometric Data Analysis, From Correspondence Analysis to Structured Data Analysis (book review): Review of: Brigitte Le Roux and Henry Rouanet, Geometric Data Analysis, From Correspondence Analysis to Structured Data Analysis, Kluwer, Dordrecht, 2004, xi+475 pp.<|reference_end|> | arxiv | @article{murtagh2008geometric,
title={Geometric Data Analysis, From Correspondence Analysis to Structured Data
Analysis (book review)},
author={Fionn Murtagh},
journal={Journal of Classification 25, 137-141, 2008},
year={2008},
doi={10.1007/s00357-008-9007-7},
archivePrefix={arXiv},
eprint={0804.1244},
primaryClass={cs.AI}
} | murtagh2008geometric |
arxiv-3287 | 0804.1266 | Immune System Approaches to Intrusion Detection - A Review | <|reference_start|>Immune System Approaches to Intrusion Detection - A Review: The use of artificial immune systems in intrusion detection is an appealing concept for two reasons. Firstly, the human immune system provides the human body with a high level of protection from invading pathogens, in a robust, self-organised and distributed manner. Secondly, current techniques used in computer security are not able to cope with the dynamic and increasingly complex nature of computer systems and their security. It is hoped that biologically inspired approaches in this area, including the use of immune-based systems will be able to meet this challenge. Here we review the algorithms used, the development of the systems and the outcome of their implementation. We provide an introduction and analysis of the key developments within this field, in addition to making suggestions for future research.<|reference_end|> | arxiv | @article{kim2008immune,
title={Immune System Approaches to Intrusion Detection - A Review},
author={Jungwon Kim, Peter J. Bentley, Uwe Aickelin, Julie Greensmith, Gianni
Tedesco, Jamie Twycross},
journal={Natural Computing, 6(4), pp 413-466, 2007},
year={2008},
doi={10.1007/s11047-006-9026-4},
archivePrefix={arXiv},
eprint={0804.1266},
primaryClass={cs.NE cs.CR}
} | kim2008immune |
arxiv-3288 | 0804.1270 | The quest for rings on bipolar scales | <|reference_start|>The quest for rings on bipolar scales: We consider the interval $]{-1},1[$ and intend to endow it with an algebraic structure like a ring. The motivation lies in decision making, where scales that are symmetric w.r.t. 0 are needed in order to represent a kind of symmetry in the behaviour of the decision maker. A former proposal due to Grabisch was based on maximum and minimum. In this paper, we propose to build our structure on t-conorms and t-norms, and we relate this construction to uninorms. We show that the only way to build a group is to use strict t-norms, and that there is no way to build a ring. Lastly, we show that the main result of this paper is connected to the theory of ordered Abelian groups.<|reference_end|> | arxiv | @article{grabisch2008the,
title={The quest for rings on bipolar scales},
author={Michel Grabisch (LIP6), Bernard De Baets, Janos Fodor},
journal={International Journal of Uncertainty Fuzziness and Knowledge-Based
Systems (2004) 499-512},
year={2008},
archivePrefix={arXiv},
eprint={0804.1270},
primaryClass={cs.DM math.RA}
} | grabisch2008the |
arxiv-3289 | 0804.1281 | Data Reduction in Intrusion Alert Correlation | <|reference_start|>Data Reduction in Intrusion Alert Correlation: Network intrusion detection sensors are usually built around low level models of network traffic. This means that their output is of a similarly low level and as a consequence, is difficult to analyze. Intrusion alert correlation is the task of automating some of this analysis by grouping related alerts together. Attack graphs provide an intuitive model for such analysis. Unfortunately alert flooding attacks can still cause a loss of service on sensors, and when performing attack graph correlation, there can be a large number of extraneous alerts included in the output graph. This obscures the fine structure of genuine attacks and makes them more difficult for human operators to discern. This paper explores modified correlation algorithms which attempt to minimize the impact of this attack.<|reference_end|> | arxiv | @article{tedesco2008data,
title={Data Reduction in Intrusion Alert Correlation},
author={Gianni Tedesco and Uwe Aickelin},
journal={WSEAS Transactions on Computers, 5(1), pp 186-193, 2006},
year={2008},
archivePrefix={arXiv},
eprint={0804.1281},
primaryClass={cs.CR cs.NE}
} | tedesco2008data |
arxiv-3290 | 0804.1302 | Bolasso: model consistent Lasso estimation through the bootstrap | <|reference_start|>Bolasso: model consistent Lasso estimation through the bootstrap: We consider the least-square linear regression problem with regularization by the l1-norm, a problem usually referred to as the Lasso. In this paper, we present a detailed asymptotic analysis of model consistency of the Lasso. For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection (i.e., variable selection). For a specific rate decay, we show that the Lasso selects all the variables that should enter the model with probability tending to one exponentially fast, while it selects all other variables with strictly positive probability. We show that this property implies that if we run the Lasso for several bootstrapped replications of a given sample, then intersecting the supports of the Lasso bootstrap estimates leads to consistent model selection. This novel variable selection algorithm, referred to as the Bolasso, is compared favorably to other linear regression methods on synthetic data and datasets from the UCI machine learning repository.<|reference_end|> | arxiv | @article{bach2008bolasso:,
title={Bolasso: model consistent Lasso estimation through the bootstrap},
author={Francis Bach (INRIA Rocquencourt)},
journal={arXiv preprint arXiv:0804.1302},
year={2008},
archivePrefix={arXiv},
eprint={0804.1302},
primaryClass={cs.LG math.ST stat.ML stat.TH}
} | bach2008bolasso: |
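The Bolasso procedure described in the abstract is short to sketch: fit the Lasso on several bootstrap replications and intersect the selected supports. The coordinate-descent solver, the regularization level `lam`, and the bootstrap count below are illustrative stand-ins, not the paper's experimental setup.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=100):
    """Coordinate-descent Lasso: min 0.5*||y - Xw||^2 + lam*||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]           # partial residual
            rho = X[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

def bolasso(X, y, lam, n_boot=16, rng=None):
    """Intersect Lasso supports over bootstrap replications."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    support = None
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)             # resample with replacement
        w = lasso_cd(X[idx], y[idx], lam)
        s = set(np.flatnonzero(np.abs(w) > 1e-8))
        support = s if support is None else support & s
    return sorted(support)
```

On data with a sparse true model, irrelevant variables are selected only with some probability per replication, so the intersection quickly removes them, which is the paper's consistency argument in miniature.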
arxiv-3291 | 0804.1327 | Asymptotic behavior of growth functions of D0L-systems | <|reference_start|>Asymptotic behavior of growth functions of D0L-systems: A D0L-system is a triple (A, f, w) where A is a finite alphabet, f is an endomorphism of the free monoid over A, and w is a word over A. The D0L-sequence generated by (A, f, w) is the sequence of words (w, f(w), f(f(w)), f(f(f(w))), ...). The corresponding sequence of lengths, that is the function mapping each non-negative integer n to |f^n(w)|, is called the growth function of (A, f, w). In 1978, Salomaa and Soittola deduced the following result from their thorough study of the theory of rational power series: if the D0L-sequence generated by (A, f, w) is not eventually the empty word then there exist a non-negative integer d and a real number b greater than or equal to one such that |f^n(w)| behaves like n^d b^n as n tends to infinity. The aim of the present paper is to present a short, direct, elementary proof of this theorem.<|reference_end|> | arxiv | @article{cassaigne2008asymptotic,
title={Asymptotic behavior of growth functions of D0L-systems},
author={Julien Cassaigne and Christian Mauduit and Francois Nicolas},
journal={arXiv preprint arXiv:0804.1327},
year={2008},
archivePrefix={arXiv},
eprint={0804.1327},
primaryClass={cs.DM}
} | cassaigne2008asymptotic |
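The growth function of a D0L-system can be computed without ever building the (exponentially long) words, by tracking letter counts through the morphism. A small sketch (function and variable names are ours, not the paper's):

```python
def growth(morphism, w, n):
    """Lengths |f^k(w)| for k = 0..n of the D0L-system (A, f, w),
    computed from letter counts so the words themselves are never built."""
    letters = sorted(morphism)
    counts = {a: w.count(a) for a in letters}
    lengths = []
    for _ in range(n + 1):
        lengths.append(sum(counts.values()))
        # count of a in f^(k+1)(w) = sum over b of (#b in f^k(w)) * (#a in f(b))
        counts = {a: sum(counts[b] * morphism[b].count(a) for b in letters)
                  for a in letters}
    return lengths

# Fibonacci morphism a -> ab, b -> a: |f^n(a)| grows like phi^n (d = 0, b = phi)
print(growth({"a": "ab", "b": "a"}, "a", 8))   # [1, 2, 3, 5, 8, 13, 21, 34, 55]
```

The two regimes of the Salomaa-Soittola theorem are easy to observe: the Fibonacci morphism gives pure exponential growth, while a -> ab, b -> b gives the polynomial case |f^n(a)| = n + 1 (d = 1, b = 1).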
arxiv-3292 | 0804.1366 | Scale-free networks as preasymptotic regimes of superlinear preferential attachment | <|reference_start|>Scale-free networks as preasymptotic regimes of superlinear preferential attachment: We study the following paradox associated with networks growing according to superlinear preferential attachment: superlinear preference cannot produce scale-free networks in the thermodynamic limit, but there are superlinearly growing network models that perfectly match the structure of some real scale-free networks, such as the Internet. We obtain an analytic solution, supported by extensive simulations, for the degree distribution in superlinearly growing networks with arbitrary average degree, and confirm that in the true thermodynamic limit these networks are indeed degenerate, i.e., almost all nodes have low degrees. We then show that superlinear growth has vast preasymptotic regimes whose depths depend both on the average degree in the network and on how superlinear the preference kernel is. We demonstrate that a superlinearly growing network model can reproduce, in its preasymptotic regime, the structure of a real network, if the model captures some sufficiently strong structural constraints -- rich-club connectivity, for example. These findings suggest that real scale-free networks of finite size may exist in preasymptotic regimes of network evolution processes that lead to degenerate network formations in the thermodynamic limit.<|reference_end|> | arxiv | @article{krapivsky2008scale-free,
title={Scale-free networks as preasymptotic regimes of superlinear preferential
attachment},
author={Paul Krapivsky and Dmitri Krioukov},
journal={Phys. Rev. E 78, 026114 (2008)},
year={2008},
doi={10.1103/PhysRevE.78.026114},
archivePrefix={arXiv},
eprint={0804.1366},
primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.NI physics.soc-ph}
} | krapivsky2008scale-free |
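The degenerate thermodynamic limit the abstract describes is easy to see in a minimal simulation of attachment with kernel k^alpha; the seed graph and parameters below are illustrative, not the paper's model with arbitrary average degree.

```python
import random

def grow_degrees(n, alpha, seed=0):
    """Grow a tree under preferential attachment with kernel k^alpha:
    each new node attaches to an existing node with probability
    proportional to degree**alpha; alpha > 1 is the superlinear case."""
    rng = random.Random(seed)
    deg = [1, 1]                                    # seed graph: a single edge
    for _ in range(n - 2):
        target = rng.choices(range(len(deg)),
                             weights=[d ** alpha for d in deg])[0]
        deg[target] += 1
        deg.append(1)                               # the newcomer has degree 1
    return deg

# alpha = 3: one hub swallows almost every edge (the degenerate regime),
# while almost all other nodes keep degree 1
deg = grow_degrees(500, 3.0)
```

For alpha well above 1 the maximum degree approaches n - 1 (a star), which is the degeneracy; the paper's point is that at finite n and moderate alpha the preasymptotic regime can still look scale-free.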
arxiv-3293 | 0804.1382 | Interference-Assisted Secret Communication | <|reference_start|>Interference-Assisted Secret Communication: Wireless communication is susceptible to adversarial eavesdropping due to the broadcast nature of the wireless medium. In this paper it is shown how eavesdropping can be alleviated by exploiting the superposition property of the wireless medium. A wiretap channel with a helping interferer (WT-HI), in which a transmitter sends a confidential message to its intended receiver in the presence of a passive eavesdropper, and with the help of an independent interferer, is considered. The interferer, which does not know the confidential message, helps in ensuring the secrecy of the message by sending independent signals. An achievable secrecy rate for the WT-HI is given. The results show that interference can be exploited to assist secrecy in wireless communications. An important example of the Gaussian case, in which the interferer has a better channel to the intended receiver than to the eavesdropper, is considered. In this situation, the interferer can send a (random) codeword at a rate that ensures that it can be decoded and subtracted from the received signal by the intended receiver but cannot be decoded by the eavesdropper. Hence, only the eavesdropper is interfered with and the secrecy level of the confidential message is increased.<|reference_end|> | arxiv | @article{tang2008interference-assisted,
title={Interference-Assisted Secret Communication},
author={Xiaojun Tang and Ruoheng Liu and Predrag Spasojevic and H. Vincent Poor},
journal={arXiv preprint arXiv:0804.1382},
year={2008},
doi={10.1109/TIT.2011.2121450},
archivePrefix={arXiv},
eprint={0804.1382},
primaryClass={cs.IT cs.CR math.IT}
} | tang2008interference-assisted |
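As a reference point for the gain the abstract describes, the classical Gaussian wiretap channel without a helper has secrecy capacity [log2(1+SNR_main) - log2(1+SNR_eve)]^+. The sketch below computes only this no-helper baseline; it is not the paper's achievable rate for the WT-HI, whose helper raises the rate by interfering only with the eavesdropper.

```python
import math

def gaussian_secrecy_rate(snr_main, snr_eve):
    """Classical Gaussian wiretap baseline (no helping interferer):
    C_s = [log2(1 + SNR_main) - log2(1 + SNR_eve)]^+ bits per channel use."""
    return max(0.0, math.log2(1.0 + snr_main) - math.log2(1.0 + snr_eve))

# one bit per channel use when the main SNR is 3 and the eavesdropper's is 1
print(gaussian_secrecy_rate(3.0, 1.0))
```

Note the baseline is zero whenever the eavesdropper's channel is at least as good as the main one, which is exactly the regime where a helping interferer matters most.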
arxiv-3294 | 0804.1409 | Discovering More Accurate Frequent Web Usage Patterns | <|reference_start|>Discovering More Accurate Frequent Web Usage Patterns: Web usage mining is a type of web mining, which exploits data mining techniques to discover valuable information from navigation behavior of World Wide Web users. As in classical data mining, data preparation and pattern discovery are the main issues in web usage mining. The first phase of web usage mining is the data processing phase, which includes the session reconstruction operation from server logs. Session reconstruction success directly affects the quality of the frequent patterns discovered in the next phase. In reactive web usage mining techniques, the source data is web server logs and the topology of the web pages served by the web server domain. Other kinds of information collected during the interactive browsing of web site by user, such as cookies or web logs containing similar information, are not used. The next phase of web usage mining is discovering frequent user navigation patterns. In this phase, pattern discovery methods are applied on the reconstructed sessions obtained in the first phase in order to discover frequent user patterns. In this paper, we propose a frequent web usage pattern discovery method that can be applied after session reconstruction phase. In order to compare accuracy performance of session reconstruction phase and pattern discovery phase, we have used an agent simulator, which models behavior of web users and generates web user navigation as well as the log data kept by the web server.<|reference_end|> | arxiv | @article{bayir2008discovering,
title={Discovering More Accurate Frequent Web Usage Patterns},
author={Murat Ali Bayir and Ismail Hakki Toroslu and Ahmet Cosar and Guven Fidan},
journal={arXiv preprint arXiv:0804.1409},
year={2008},
archivePrefix={arXiv},
eprint={0804.1409},
primaryClass={cs.DB cs.DS}
} | bayir2008discovering |
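A common building block of the session reconstruction phase is the time-oriented heuristic: a gap longer than a timeout starts a new session. The sketch below shows only that heuristic for a single user's log; the paper's reactive method additionally uses the site topology, which this sketch omits.

```python
from datetime import datetime, timedelta

def sessionize(log, timeout_minutes=30):
    """Group one user's (timestamp, url) log entries into sessions:
    a gap longer than the timeout opens a new session."""
    timeout = timedelta(minutes=timeout_minutes)
    sessions = []
    last = None
    for ts, url in sorted(log):                    # server logs are time-ordered
        if last is None or ts - last > timeout:
            sessions.append([])                    # start a new session
        sessions[-1].append(url)
        last = ts
    return sessions
```

The reconstructed sessions are the input to the pattern discovery phase, so errors made here (e.g. merging two visits separated by a long pause) propagate into the frequent patterns, which is why the paper measures accuracy of both phases together.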
arxiv-3295 | 0804.1421 | A $O(\log m)$, deterministic, polynomial-time computable approximation of Lewis Carroll's scoring rule | <|reference_start|>A $O(\log m)$, deterministic, polynomial-time computable approximation of Lewis Carroll's scoring rule: We provide deterministic, polynomial-time computable voting rules that approximate Dodgson's and (the ``minimization version'' of) Young's scoring rules to within a logarithmic factor. Our approximation of Dodgson's rule is tight up to a constant factor, as Dodgson's rule is $\NP$-hard to approximate to within some logarithmic factor. The ``maximization version'' of Young's rule is known to be $\NP$-hard to approximate by any constant factor. Both approximations are simple, and natural as rules in their own right: Given a candidate we wish to score, we can regard either its Dodgson or Young score as the edit distance between a given set of voter preferences and one in which the candidate to be scored is the Condorcet winner. (The difference between the two scoring rules is the type of edits allowed.) We regard the marginal cost of a sequence of edits to be the number of edits divided by the number of reductions (in the candidate's deficit against any of its opponents in the pairwise race against that opponent) that the edits yield. Over a series of rounds, our scoring rules greedily choose a sequence of edits that modify exactly one voter's preferences and whose marginal cost is no greater than any other such single-vote-modifying sequence.<|reference_end|> | arxiv | @article{covey2008a,
title={A $O(\log m)$, deterministic, polynomial-time computable approximation
of Lewis Carroll's scoring rule},
author={Jason Covey and Christopher Homan},
journal={arXiv preprint arXiv:0804.1421},
year={2008},
archivePrefix={arXiv},
eprint={0804.1421},
primaryClass={cs.GT cs.AI cs.MA}
} | covey2008a |
arxiv-3296 | 0804.1435 | The Geometry of Interaction of Differential Interaction Nets | <|reference_start|>The purpose of the Geometry of Interaction is to give a semantics of proofs or programs accounting for their dynamics. The initial presentation, translated as an algebraic weighting of paths in proofnets, led to a better characterization of the lambda-calculus optimal reduction. Recently Ehrhard and Regnier have introduced an extension of the Multiplicative Exponential fragment of Linear Logic (MELL) that is able to express non-deterministic behaviour of programs and a proofnet-like calculus: Differential Interaction Nets. This paper constructs a proper Geometry of Interaction (GoI) for this extension. We consider it both as an algebraic theory and as a concrete reversible computation. We draw links between this GoI and the one of MELL. As a by-product we give for the first time an equational theory suitable for the GoI of the Multiplicative Additive fragment of Linear Logic.<|reference_end|> | arxiv | @article{de falco2008the,
title={The Geometry of Interaction of Differential Interaction Nets},
author={Marc de Falco},
journal={arXiv preprint arXiv:0804.1435},
year={2008},
archivePrefix={arXiv},
eprint={0804.1435},
primaryClass={cs.LO cs.PL}
} | de falco2008the |
arxiv-3297 | 0804.1440 | Adversary lower bounds for nonadaptive quantum algorithms | <|reference_start|>Adversary lower bounds for nonadaptive quantum algorithms: We present general methods for proving lower bounds on the query complexity of nonadaptive quantum algorithms. Our results are based on the adversary method of Ambainis.<|reference_end|> | arxiv | @article{koiran2008adversary,
title={Adversary lower bounds for nonadaptive quantum algorithms},
author={Pascal Koiran (LIP) and J{\"u}rgen Landes and Natacha Portier (LIP) and
Penghui Yao},
journal={In Proceedings of WoLLIC 2008, 15th Workshop on Logic, Language,
Information and Computation, Edinburgh, United Kingdom},
year={2008},
archivePrefix={arXiv},
eprint={0804.1440},
primaryClass={cs.CC quant-ph}
} | koiran2008adversary |
arxiv-3298 | 0804.1441 | On Kernelization of Supervised Mahalanobis Distance Learners | <|reference_start|>On Kernelization of Supervised Mahalanobis Distance Learners: This paper focuses on the problem of kernelizing an existing supervised Mahalanobis distance learner. The following features are included in the paper. Firstly, three popular learners, namely, "neighborhood component analysis", "large margin nearest neighbors" and "discriminant neighborhood embedding", which do not have kernel versions are kernelized in order to improve their classification performances. Secondly, an alternative kernelization framework called "KPCA trick" is presented. Implementing a learner in the new framework gains several advantages over the standard framework, e.g. no mathematical formulas and no reprogramming are required for a kernel implementation, the framework avoids troublesome problems such as singularity, etc. Thirdly, while the truths of representer theorems are just assumptions in previous papers related to ours, here, representer theorems are formally proven. The proofs validate both the kernel trick and the KPCA trick in the context of Mahalanobis distance learning. Fourthly, unlike previous works which always apply brute force methods to select a kernel, we investigate two approaches which can be efficiently adopted to construct an appropriate kernel for a given dataset. Finally, numerical results on various real-world datasets are presented.<|reference_end|> | arxiv | @article{chatpatanasiri2008on,
title={On Kernelization of Supervised Mahalanobis Distance Learners},
author={Ratthachat Chatpatanasiri and Teesid Korsrilabutr and Pasakorn
Tangchanachaianan and Boonserm Kijsirikul},
journal={arXiv preprint arXiv:0804.1441},
year={2008},
archivePrefix={arXiv},
eprint={0804.1441},
primaryClass={cs.LG cs.AI}
} | chatpatanasiri2008on |
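The "KPCA trick" of this abstract amounts to: replace each input by its kernel-PCA coordinates, then run any existing *linear* Mahalanobis-distance learner on the new coordinates, with no kernelized formulas or reprogramming. A minimal numpy sketch, assuming an RBF kernel (the kernel choice and parameter names are illustrative, not the paper's):

```python
import numpy as np

def kpca_features(X, gamma, dim):
    """Map the rows of X to their kernel-PCA coordinates.
    Feed the returned features to any linear metric learner."""
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(Kc)                 # ascending eigenvalues
    order = np.argsort(vals)[::-1][:dim]            # keep the top-dim pairs
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```

With `dim` equal to the sample size, pairwise Euclidean distances between the features reproduce the kernel-induced feature-space distances exactly, so a linear learner run downstream behaves like its kernel version, which is the point of the trick.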
arxiv-3299 | 0804.1448 | Fast k Nearest Neighbor Search using GPU | <|reference_start|>Fast k Nearest Neighbor Search using GPU: The recent improvements of graphics processing units (GPU) offer to the computer vision community a powerful processing platform. Indeed, a lot of highly-parallelizable computer vision problems can be significantly accelerated using GPU architecture. Among these algorithms, the k nearest neighbor search (KNN) is a well-known problem linked with many applications such as classification, estimation of statistical properties, etc. The main drawback of this task lies in its computation burden, as it grows polynomially with the data size. In this paper, we show that the use of the NVIDIA CUDA API accelerates the search for the KNN up to a factor of 120.<|reference_end|> | arxiv | @article{garcia2008fast,
title={Fast k Nearest Neighbor Search using GPU},
author={Vincent Garcia and Eric Debreuve and Michel Barlaud},
journal={arXiv preprint arXiv:0804.1448},
year={2008},
archivePrefix={arXiv},
eprint={0804.1448},
primaryClass={cs.CV cs.DC}
} | garcia2008fast |
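The brute-force KNN computation the abstract accelerates is dominated by an n_query x n_ref distance matrix whose entries are independent, which is exactly what maps onto GPU threads in the CUDA version. The numpy sketch below shows the same data-parallel shape on the CPU (the paper's speedup comes from CUDA, not from this code):

```python
import numpy as np

def knn(ref, query, k):
    """Brute-force k nearest neighbors (Euclidean).
    Returns, per query row, the indices of its k nearest reference rows."""
    # squared distances via ||q||^2 - 2 q.r + ||r||^2, all pairs at once
    d2 = ((query ** 2).sum(1)[:, None]
          - 2.0 * query @ ref.T
          + (ref ** 2).sum(1)[None, :])
    idx = np.argpartition(d2, k - 1, axis=1)[:, :k]    # k smallest, unordered
    order = np.take_along_axis(d2, idx, 1).argsort(1)  # sort only those k
    return np.take_along_axis(idx, order, 1)
```

Using `argpartition` before the final sort avoids ordering the whole distance row, mirroring the usual trick of only fully sorting the k candidates.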
arxiv-3300 | 0804.1490 | Distributed Space-Time Block Codes for the MIMO Multiple Access Channel | <|reference_start|>Distributed Space-Time Block Codes for the MIMO Multiple Access Channel: In this work, the Multiple transmit antennas Multiple Access Channel is considered. A construction of a family of distributed space-time codes for this channel is proposed. No Channel Side Information at the transmitters is assumed and users are not allowed to cooperate together. It is shown that the proposed code achieves the Diversity Multiplexing Tradeoff of the channel. As an example, we consider the two-user MIMO-MAC channel. Simulation results show the significant gain offered by the new coding scheme compared to an orthogonal transmission scheme, e.g. time sharing.<|reference_end|> | arxiv | @article{badr2008distributed,
title={Distributed Space-Time Block Codes for the MIMO Multiple Access Channel},
author={Maya Badr and Jean-Claude Belfiore},
journal={arXiv preprint arXiv:0804.1490},
year={2008},
archivePrefix={arXiv},
eprint={0804.1490},
primaryClass={cs.IT math.IT}
} | badr2008distributed |