corpus_id (stringlengths 7-12) | paper_id (stringlengths 9-16) | title (stringlengths 1-261) | abstract (stringlengths 70-4.02k) | source (stringclasses, 1 value) | bibtex (stringlengths 208-20.9k) | citation_key (stringlengths 6-100) |
---|---|---|---|---|---|---|
arxiv-3701 | 0805.1401 | Approximation Algorithms for Shortest Descending Paths in Terrains | <|reference_start|>Approximation Algorithms for Shortest Descending Paths in Terrains: A path from s to t on a polyhedral terrain is descending if the height of a point p never increases while we move p along the path from s to t. No efficient algorithm is known to find a shortest descending path (SDP) from s to t in a polyhedral terrain. We give two approximation algorithms (more precisely, FPTASs) that solve the SDP problem on general terrains. Both algorithms are simple, robust and easy to implement.<|reference_end|> | arxiv | @article{ahmed2008approximation,
arxiv-3701 | 0805.1401 | Approximation Algorithms for Shortest Descending Paths in Terrains | <|reference_start|>Approximation Algorithms for Shortest Descending Paths in Terrains: A path from s to t on a polyhedral terrain is descending if the height of a point p never increases while we move p along the path from s to t. No efficient algorithm is known to find a shortest descending path (SDP) from s to t in a polyhedral terrain. We give two approximation algorithms (more precisely, FPTASs) that solve the SDP problem on general terrains. Both algorithms are simple, robust and easy to implement.<|reference_end|> | arxiv | @article{ahmed2008approximation,
title={Approximation Algorithms for Shortest Descending Paths in Terrains},
author={Mustaq Ahmed, Sandip Das, Sachin Lodha, Anna Lubiw, Anil Maheshwari,
Sasanka Roy},
journal={arXiv preprint arXiv:0805.1401},
year={2008},
archivePrefix={arXiv},
eprint={0805.1401},
primaryClass={cs.CG cs.DS}
} | ahmed2008approximation |
arxiv-3702 | 0805.1437 | On the Spectrum of Large Random Hermitian Finite-Band Matrices | <|reference_start|>On the Spectrum of Large Random Hermitian Finite-Band Matrices: The open problem of calculating the limiting spectrum (or its Shannon transform) of increasingly large random Hermitian finite-band matrices is described. In general, these matrices include a finite number of non-zero diagonals around their main diagonal regardless of their size. Two different communication setups which may be modeled using such matrices are presented: a simple cellular uplink channel, and a time varying inter-symbol interference channel. Selected recent information-theoretic works dealing directly with such channels are reviewed. Finally, several characteristics of the still unknown limiting spectrum of such matrices are listed, and some reflections are touched upon.<|reference_end|> | arxiv | @article{somekh2008on,
title={On the Spectrum of Large Random Hermitian Finite-Band Matrices},
author={Oren Somekh, Osvaldo Simeone, Benjamin M. Zaidel, H. Vincent Poor, and
Shlomo Shamai (Shitz)},
journal={arXiv preprint arXiv:0805.1437},
year={2008},
archivePrefix={arXiv},
eprint={0805.1437},
primaryClass={cs.IT math.IT}
} | somekh2008on |
arxiv-3703 | 0805.1442 | How Many Users should be Turned On in a Multi-Antenna Broadcast Channel? | <|reference_start|>How Many Users should be Turned On in a Multi-Antenna Broadcast Channel?: This paper considers broadcast channels with L antennas at the base station and m single-antenna users, where L and m are typically of the same order. We assume that only partial channel state information is available at the base station through a finite rate feedback. Our key observation is that the optimal number of on-users (users turned on), say s, is a function of signal-to-noise ratio (SNR) and feedback rate. In support of this, an asymptotic analysis is employed where L, m and the feedback rate approach infinity linearly. We derive the asymptotic optimal feedback strategy as well as a realistic criterion to decide which users should be turned on. The corresponding asymptotic throughput per antenna, which we define as the spatial efficiency, turns out to be a function of the number of on-users s, and therefore s must be chosen appropriately. Based on the asymptotics, a scheme is developed for systems with finite many antennas and users. Compared with other studies in which s is presumed constant, our scheme achieves a significant gain. Furthermore, our analysis and scheme are valid for heterogeneous systems where different users may have different path loss coefficients and feedback rates.<|reference_end|> | arxiv | @article{dai2008how,
title={How Many Users should be Turned On in a Multi-Antenna Broadcast Channel?},
author={Wei Dai, Youjian (Eugene) Liu, Brian C. Rider and Wen Gao},
journal={arXiv preprint arXiv:0805.1442},
year={2008},
archivePrefix={arXiv},
eprint={0805.1442},
primaryClass={cs.IT math.IT}
} | dai2008how |
arxiv-3704 | 0805.1457 | Model Checking One-clock Priced Timed Automata | <|reference_start|>Model Checking One-clock Priced Timed Automata: We consider the model of priced (a.k.a. weighted) timed automata, an extension of timed automata with cost information on both locations and transitions, and we study various model-checking problems for that model based on extensions of classical temporal logics with cost constraints on modalities. We prove that, under the assumption that the model has only one clock, model-checking this class of models against the logic WCTL, CTL with cost-constrained modalities, is PSPACE-complete (while it has been shown undecidable as soon as the model has three clocks). We also prove that model-checking WMTL, LTL with cost-constrained modalities, is decidable only if there is a single clock in the model and a single stopwatch cost variable (i.e., whose slopes lie in {0,1}).<|reference_end|> | arxiv | @article{bouyer2008model,
title={Model Checking One-clock Priced Timed Automata},
author={Patricia Bouyer, Kim G. Larsen, and Nicolas Markey},
journal={Logical Methods in Computer Science, Volume 4, Issue 2 (June 20,
2008) lmcs:828},
year={2008},
doi={10.2168/LMCS-4(2:9)2008},
archivePrefix={arXiv},
eprint={0805.1457},
primaryClass={cs.LO cs.CC cs.GT}
} | bouyer2008model |
arxiv-3705 | 0805.1464 | Efficiently Simulating Higher-Order Arithmetic by a First-Order Theory Modulo | <|reference_start|>Efficiently Simulating Higher-Order Arithmetic by a First-Order Theory Modulo: In deduction modulo, a theory is not represented by a set of axioms but by a congruence on propositions modulo which the inference rules of standard deductive systems---such as for instance natural deduction---are applied. Therefore, the reasoning that is intrinsic of the theory does not appear in the length of proofs. In general, the congruence is defined through a rewrite system over terms and propositions. We define a rigorous framework to study proof lengths in deduction modulo, where the congruence must be computed in polynomial time. We show that even very simple rewrite systems lead to arbitrary proof-length speed-ups in deduction modulo, compared to using axioms. As higher-order logic can be encoded as a first-order theory in deduction modulo, we also study how to reinterpret, thanks to deduction modulo, the speed-ups between higher-order and first-order arithmetics that were stated by G\"odel. We define a first-order rewrite system with a congruence decidable in polynomial time such that proofs of higher-order arithmetic can be linearly translated into first-order arithmetic modulo that system. We also present the whole higher-order arithmetic as a first-order system without resorting to any axiom, where proofs have the same length as in the axiomatic presentation.<|reference_end|> | arxiv | @article{burel2008efficiently,
title={Efficiently Simulating Higher-Order Arithmetic by a First-Order Theory
Modulo},
author={Guillaume Burel (Max Planck Institute for Informatics)},
journal={Logical Methods in Computer Science, Volume 7, Issue 1 (March 17,
2011) lmcs:861},
year={2008},
doi={10.2168/LMCS-7(1:3)2011},
archivePrefix={arXiv},
eprint={0805.1464},
primaryClass={cs.LO cs.CC}
} | burel2008efficiently |
arxiv-3706 | 0805.1473 | A Fast Algorithm and Datalog Inexpressibility for Temporal Reasoning | <|reference_start|>A Fast Algorithm and Datalog Inexpressibility for Temporal Reasoning: We introduce a new tractable temporal constraint language, which strictly contains the Ord-Horn language of Buerkert and Nebel and the class of AND/OR precedence constraints. The algorithm we present for this language decides whether a given set of constraints is consistent in time that is quadratic in the input size. We also prove that (unlike Ord-Horn) this language cannot be solved by Datalog or by establishing local consistency.<|reference_end|> | arxiv | @article{bodirsky2008a,
title={A Fast Algorithm and Datalog Inexpressibility for Temporal Reasoning},
author={Manuel Bodirsky and Jan Kara},
journal={arXiv preprint arXiv:0805.1473},
year={2008},
archivePrefix={arXiv},
eprint={0805.1473},
primaryClass={cs.AI cs.LO}
} | bodirsky2008a |
arxiv-3707 | 0805.1480 | On-line Learning of an Unlearnable True Teacher through Mobile Ensemble Teachers | <|reference_start|>On-line Learning of an Unlearnable True Teacher through Mobile Ensemble Teachers: On-line learning of a hierarchical learning model is studied by a method from statistical mechanics. In our model a student of a simple perceptron learns from not a true teacher directly, but ensemble teachers who learn from the true teacher with a perceptron learning rule. Since the true teacher and the ensemble teachers are expressed as non-monotonic perceptron and simple ones, respectively, the ensemble teachers go around the unlearnable true teacher with the distance between them fixed in an asymptotic steady state. The generalization performance of the student is shown to exceed that of the ensemble teachers in a transient state, as was shown in similar ensemble-teachers models. Further, it is found that moving the ensemble teachers even in the steady state, in contrast to the fixed ensemble teachers, is efficient for the performance of the student.<|reference_end|> | arxiv | @article{hirama2008on-line,
title={On-line Learning of an Unlearnable True Teacher through Mobile Ensemble
Teachers},
author={Takeshi Hirama and Koji Hukushima},
journal={arXiv preprint arXiv:0805.1480},
year={2008},
doi={10.1143/JPSJ.77.094801},
archivePrefix={arXiv},
eprint={0805.1480},
primaryClass={cond-mat.dis-nn cs.LG}
} | hirama2008on-line |
arxiv-3708 | 0805.1485 | Distributed MIMO Systems with Oblivious Antennas | <|reference_start|>Distributed MIMO Systems with Oblivious Antennas: A scenario in which a single source communicates with a single destination via a distributed MIMO transceiver is considered. The source operates each of the transmit antennas via finite-capacity links, and likewise the destination is connected to the receiving antennas through capacity-constrained channels. Targeting a nomadic communication scenario, in which the distributed MIMO transceiver is designed to serve different standards or services, transmitters and receivers are assumed to be oblivious to the encoding functions shared by source and destination. Adopting a Gaussian symmetric interference network as the channel model (as for regularly placed transmitters and receivers), achievable rates are investigated and compared with an upper bound. It is concluded that in certain asymptotic and non-asymptotic regimes obliviousness of transmitters and receivers does not cause any loss of optimality.<|reference_end|> | arxiv | @article{simeone2008distributed,
title={Distributed MIMO Systems with Oblivious Antennas},
author={Osvaldo Simeone, Oren Somekh, H. Vincent Poor, and Shlomo Shamai
(Shitz)},
journal={arXiv preprint arXiv:0805.1485},
year={2008},
doi={10.1109/ISIT.2008.4595119},
archivePrefix={arXiv},
eprint={0805.1485},
primaryClass={cs.IT math.IT}
} | simeone2008distributed |
arxiv-3709 | 0805.1487 | A Time Efficient Indexing Scheme for Complex Spatiotemporal Retrieval | <|reference_start|>A Time Efficient Indexing Scheme for Complex Spatiotemporal Retrieval: The paper is concerned with the time efficient processing of spatiotemporal predicates, i.e. spatial predicates associated with an exact temporal constraint. A set of such predicates forms a buffer query or a Spatio-temporal Pattern (STP) Query with time. In the more general case of an STP query, the temporal dimension is introduced via the relative order of the spatial predicates (STP queries with order). Therefore, the efficient processing of a spatiotemporal predicate is crucial for the efficient implementation of more complex queries of practical interest. We propose an extension of a known approach, suitable for processing spatial predicates, which has been used for the efficient manipulation of STP queries with order. The extended method is supported by efficient indexing structures. We also provide experimental results that show the efficiency of the technique.<|reference_end|> | arxiv | @article{george2008a,
title={A Time Efficient Indexing Scheme for Complex Spatiotemporal Retrieval},
author={Lagogiannis George, Lorentzos Nikos, Sioutas Spyros, Theodoridis
Evaggelos},
journal={arXiv preprint arXiv:0805.1487},
year={2008},
archivePrefix={arXiv},
eprint={0805.1487},
primaryClass={cs.DB cs.DS}
} | george2008a |
arxiv-3710 | 0805.1489 | Modeling and verifying a broad array of network properties | <|reference_start|>Modeling and verifying a broad array of network properties: Motivated by widely observed examples in nature, society and software, where groups of already related nodes arrive together and attach to an existing network, we consider network growth via sequential attachment of linked node groups, or graphlets. We analyze the simplest case, attachment of the three node V-graphlet, where, with probability alpha, we attach a peripheral node of the graphlet, and with probability (1-alpha), we attach the central node. Our analytical results and simulations show that tuning alpha produces a wide range in degree distribution and degree assortativity, achieving assortativity values that capture a diverse set of many real-world systems. We introduce a fifteen-dimensional attribute vector derived from seven well-known network properties, which enables comprehensive comparison between any two networks. Principal Component Analysis (PCA) of this attribute vector space shows a significantly larger coverage potential of real-world network properties by a simple extension of the above model when compared against a classic model of network growth.<|reference_end|> | arxiv | @article{filkov2008modeling,
title={Modeling and verifying a broad array of network properties},
author={Vladimir Filkov, Zachary M. Saul, Soumen Roy, Raissa M. D'Souza and
Premkumar T. Devanbu},
journal={Europhysics Letters, 86 (2009) 28003},
year={2008},
doi={10.1209/0295-5075/86/28003},
archivePrefix={arXiv},
eprint={0805.1489},
primaryClass={cond-mat.stat-mech cs.SE q-bio.QM stat.AP}
} | filkov2008modeling |
arxiv-3711 | 0805.1567 | Transport in networks with multiple sources and sinks | <|reference_start|>Transport in networks with multiple sources and sinks: We investigate the electrical current and flow (number of parallel paths) between two sets of n sources and n sinks in complex networks. We derive analytical formulas for the average current and flow as a function of n. We show that for small n, increasing n improves the total transport in the network, while for large n bottlenecks begin to form. For the case of flow, this leads to an optimal n* above which the transport is less efficient. For current, the typical decrease in the length of the connecting paths for large n compensates for the effect of the bottlenecks. We also derive an expression for the average flow as a function of n under the common limitation that transport takes place between specific pairs of sources and sinks.<|reference_end|> | arxiv | @article{carmi2008transport,
title={Transport in networks with multiple sources and sinks},
author={Shai Carmi, Zhenhua Wu, Shlomo Havlin, H. Eugene Stanley},
journal={Europhys. Lett. 84, 28005 (2008)},
year={2008},
doi={10.1209/0295-5075/84/28005},
archivePrefix={arXiv},
eprint={0805.1567},
primaryClass={cs.DM cond-mat.dis-nn}
} | carmi2008transport |
arxiv-3712 | 0805.1593 | On the Probability Distribution of Superimposed Random Codes | <|reference_start|>On the Probability Distribution of Superimposed Random Codes: A systematic study of the probability distribution of superimposed random codes is presented through the use of generating functions. Special attention is paid to the cases of either uniformly distributed but not necessarily independent or non uniform but independent bit structures. Recommendations for optimal coding strategies are derived.<|reference_end|> | arxiv | @article{günther2008on,
title={On the Probability Distribution of Superimposed Random Codes},
author={Bernd G\"unther},
journal={IEEE Trans. Inf. Theory, 54(7):3206--3210, 2008},
year={2008},
doi={10.1109/TIT.2008.924658},
archivePrefix={arXiv},
eprint={0805.1593},
primaryClass={cs.DB cs.DM cs.IT math.IT}
} | günther2008on |
arxiv-3713 | 0805.1598 | A Simple In-Place Algorithm for In-Shuffle | <|reference_start|>A Simple In-Place Algorithm for In-Shuffle: The paper presents a simple, linear time, in-place algorithm for performing a 2-way in-shuffle which can be used with little modification for certain other k-way shuffles.<|reference_end|> | arxiv | @article{jain2008a,
title={A Simple In-Place Algorithm for In-Shuffle},
author={Peiyush Jain},
journal={arXiv preprint arXiv:0805.1598},
year={2008},
archivePrefix={arXiv},
eprint={0805.1598},
primaryClass={cs.DS}
} | jain2008a |
arxiv-3714 | 0805.1661 | NAPX: A Polynomial Time Approximation Scheme for the Noah's Ark Problem | <|reference_start|>NAPX: A Polynomial Time Approximation Scheme for the Noah's Ark Problem: The Noah's Ark Problem (NAP) is an NP-Hard optimization problem with relevance to ecological conservation management. It asks to maximize the phylogenetic diversity (PD) of a set of taxa given a fixed budget, where each taxon is associated with a cost of conservation and a probability of extinction. NAP has received renewed interest with the rise in availability of genetic sequence data, allowing PD to be used as a practical measure of biodiversity. However, only simplified instances of the problem, where one or more parameters are fixed as constants, have as of yet been addressed in the literature. We present NAPX, the first algorithm for the general version of NAP that returns a $1 - \epsilon$ approximation of the optimal solution. It runs in $O(\frac{n B^2 h^2 \log^2n}{\log^2(1 - \epsilon)})$ time where $n$ is the number of species, and $B$ is the total budget and $h$ is the height of the input tree. We also provide improved bounds for its expected running time.<|reference_end|> | arxiv | @article{hickey2008napx:,
title={NAPX: A Polynomial Time Approximation Scheme for the Noah's Ark Problem},
author={G. Hickey, P. Carmi, A. Maheshwari, N. Zeh},
journal={arXiv preprint arXiv:0805.1661},
year={2008},
archivePrefix={arXiv},
eprint={0805.1661},
primaryClass={cs.DS}
} | hickey2008napx: |
arxiv-3715 | 0805.1662 | Eliminating Trapping Sets in Low-Density Parity Check Codes by using Tanner Graph Covers | <|reference_start|>Eliminating Trapping Sets in Low-Density Parity Check Codes by using Tanner Graph Covers: We discuss error floor asymptotics and present a method for improving the performance of low-density parity check (LDPC) codes in the high SNR (error floor) region. The method is based on Tanner graph covers that do not have trapping sets from the original code. The advantages of the method are that it is universal, as it can be applied to any LDPC code/channel/decoding algorithm and it improves performance at the expense of increasing the code length, without losing the code regularity, without changing the decoding algorithm, and, under certain conditions, without lowering the code rate. The proposed method can be modified to construct convolutional LDPC codes also. The method is illustrated by modifying Tanner, MacKay and Margulis codes to improve performance on the binary symmetric channel (BSC) under the Gallager B decoding algorithm. Decoding results on AWGN channel are also presented to illustrate that optimizing codes for one channel/decoding algorithm can lead to performance improvement on other channels.<|reference_end|> | arxiv | @article{vasic2008eliminating,
title={Eliminating Trapping Sets in Low-Density Parity Check Codes by using
Tanner Graph Covers},
author={Milos Ivkovic, Shashi Kiran Chilappagari, and Bane Vasic},
journal={arXiv preprint arXiv:0805.1662},
year={2008},
archivePrefix={arXiv},
eprint={0805.1662},
primaryClass={cs.IT math.IT}
} | vasic2008eliminating |
arxiv-3716 | 0805.1696 | Grammatical Evolution with Restarts for Fast Fractal Generation | <|reference_start|>Grammatical Evolution with Restarts for Fast Fractal Generation: In a previous work, the authors proposed a Grammatical Evolution algorithm to automatically generate Lindenmayer Systems which represent fractal curves with a pre-determined fractal dimension. This paper gives strong statistical evidence that the probability distributions of the execution time of that algorithm exhibits a heavy tail with an hyperbolic probability decay for long executions, which explains the erratic performance of different executions of the algorithm. Three different restart strategies have been incorporated in the algorithm to mitigate the problems associated to heavy tail distributions: the first assumes full knowledge of the execution time probability distribution, the second and third assume no knowledge. These strategies exploit the fact that the probability of finding a solution in short executions is non-negligible and yield a severe reduction, both in the expected execution time (up to one order of magnitude) and in its variance, which is reduced from an infinite to a finite value.<|reference_end|> | arxiv | @article{cebrian2008grammatical,
title={Grammatical Evolution with Restarts for Fast Fractal Generation},
author={Manuel Cebrian, Manuel Alfonseca and Alfonso Ortega},
journal={arXiv preprint arXiv:0805.1696},
year={2008},
archivePrefix={arXiv},
eprint={0805.1696},
primaryClass={cs.NE cs.SC}
} | cebrian2008grammatical |
arxiv-3717 | 0805.1715 | Isotropy, entropy, and energy scaling | <|reference_start|>Isotropy, entropy, and energy scaling: Two principles explain emergence. First, in the Receipt's reference frame, Deg(S) = 4/3 Deg(R), where Supply S is an isotropic radiative energy source, Receipt R receives S's energy, and Deg is a system's degrees of freedom based on its mean path length. S's 1/3 more degrees of freedom relative to R enables R's growth and increasing complexity. Second, rho(R) = Deg(R) times rho(r), where rho(R) represents the collective rate of R and rho(r) represents the rate of an individual in R: as Deg(R) increases due to the first principle, the multiplier effect of networking in R increases. A universe like ours with isotropic energy distribution, in which both principles are operative, is therefore predisposed to exhibit emergence, and, for reasons shown, a ubiquitous role for the natural logarithm.<|reference_end|> | arxiv | @article{shour2008isotropy,
title={Isotropy, entropy, and energy scaling},
author={Robert Shour},
journal={arXiv preprint arXiv:0805.1715},
year={2008},
archivePrefix={arXiv},
eprint={0805.1715},
primaryClass={cs.IT math.IT nlin.AO}
} | shour2008isotropy |
arxiv-3718 | 0805.1727 | Swarm-Based Spatial Sorting | <|reference_start|>Swarm-Based Spatial Sorting: Purpose: To present an algorithm for spatially sorting objects into an annular structure. Design/Methodology/Approach: A swarm-based model that requires only stochastic agent behaviour coupled with a pheromone-inspired "attraction-repulsion" mechanism. Findings: The algorithm consistently generates high-quality annular structures, and is particularly powerful in situations where the initial configuration of objects is similar to those observed in nature. Research limitations/implications: Experimental evidence supports previous theoretical arguments about the nature and mechanism of spatial sorting by insects. Practical implications: The algorithm may find applications in distributed robotics. Originality/value: The model offers a powerful minimal algorithmic framework, and also sheds further light on the nature of attraction-repulsion algorithms and underlying natural processes.<|reference_end|> | arxiv | @article{amos2008swarm-based,
title={Swarm-Based Spatial Sorting},
author={Martyn Amos and Oliver Don},
journal={arXiv preprint arXiv:0805.1727},
year={2008},
archivePrefix={arXiv},
eprint={0805.1727},
primaryClass={cs.AI cs.MA}
} | amos2008swarm-based |
arxiv-3719 | 0805.1740 | Detecting Errors in Spreadsheets | <|reference_start|>Detecting Errors in Spreadsheets: The paper presents two complementary strategies for identifying errors in spreadsheet programs. The strategies presented are grounded on the assumption that spreadsheets are software, albeit of a different nature than conventional procedural software. Correspondingly, strategies for identifying errors have to take into account the inherent properties of spreadsheets as much as they have to recognize that the conceptual models of 'spreadsheet programmers' differ from the conceptual models of conventional programmers. Nevertheless, nobody can and will write a spreadsheet, without having such a conceptual model in mind, be it of numeric nature or be it of geometrical nature focused on some layout.<|reference_end|> | arxiv | @article{ayalew2008detecting,
title={Detecting Errors in Spreadsheets},
author={Yirsaw Ayalew, Markus Clermont, Roland T. Mittermeir},
journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2000 51-63},
year={2008},
archivePrefix={arXiv},
eprint={0805.1740},
primaryClass={cs.SE}
} | ayalew2008detecting |
arxiv-3720 | 0805.1741 | A Spreadsheet Auditing Tool Evaluated in an Industrial Context | <|reference_start|>A Spreadsheet Auditing Tool Evaluated in an Industrial Context: Amongst the large number of write-and-throw-away spreadsheets developed for one-time use there is a rather neglected proportion of spreadsheets that are huge, periodically used, and submitted to regular update-cycles like any conventionally evolving valuable legacy application software. However, due to the very nature of spreadsheets, their evolution is particularly tricky and therefore error-prone. In our strive to develop tools and methodologies to improve spreadsheet quality, we analysed consolidation spreadsheets of an internationally operating company for the errors they contain. The paper presents the results of the field audit, involving 78 spreadsheets with 60,446 non-empty cells. As a by-product, the study performed was also to validate our analysis tools in an industrial context. The evaluated auditing tool offers the auditor a new view on the formula structure of the spreadsheet by grouping similar formulas into equivalence classes. Our auditing approach defines three similarity criteria between formulae, namely copy, logical and structural equivalence. To improve the visualization of large spreadsheets, equivalences and data dependencies are displayed in separated windows that are interlinked with the spreadsheet. The auditing approach helps to find irregularities in the geometrical pattern of similar formulas.<|reference_end|> | arxiv | @article{clermont2008a,
title={A Spreadsheet Auditing Tool Evaluated in an Industrial Context},
author={Markus Clermont, Christian Hanin, Roland T. Mittermeir},
journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2002 35-47
ISBN 1 86166 182 7},
year={2008},
archivePrefix={arXiv},
eprint={0805.1741},
primaryClass={cs.HC}
} | clermont2008a |
arxiv-3721 | 0805.1759 | Algorithmic Methods for Sponsored Search Advertising | <|reference_start|>Algorithmic Methods for Sponsored Search Advertising: Modern commercial Internet search engines display advertisements along side the search results in response to user queries. Such sponsored search relies on market mechanisms to elicit prices for these advertisements, making use of an auction among advertisers who bid in order to have their ads shown for specific keywords. We present an overview of the current systems for such auctions and also describe the underlying game-theoretic aspects. The game involves three parties--advertisers, the search engine, and search users--and we present example research directions that emphasize the role of each. The algorithms for bidding and pricing in these games use techniques from three mathematical areas: mechanism design, optimization, and statistical estimation. Finally, we present some challenges in sponsored search advertising.<|reference_end|> | arxiv | @article{feldman2008algorithmic,
title={Algorithmic Methods for Sponsored Search Advertising},
author={Jon Feldman, S. Muthukrishnan},
journal={arXiv preprint arXiv:0805.1759},
year={2008},
archivePrefix={arXiv},
eprint={0805.1759},
primaryClass={cs.GT}
} | feldman2008algorithmic |
arxiv-3722 | 0805.1765 | Efficiently Testing Sparse GF(2) Polynomials | <|reference_start|>Efficiently Testing Sparse GF(2) Polynomials: We give the first algorithm that is both query-efficient and time-efficient for testing whether an unknown function $f: \{0,1\}^n \to \{0,1\}$ is an $s$-sparse GF(2) polynomial versus $\eps$-far from every such polynomial. Our algorithm makes $\poly(s,1/\eps)$ black-box queries to $f$ and runs in time $n \cdot \poly(s,1/\eps)$. The only previous algorithm for this testing problem \cite{DLM+:07} used poly$(s,1/\eps)$ queries, but had running time exponential in $s$ and super-polynomial in $1/\eps$. Our approach significantly extends the ``testing by implicit learning'' methodology of \cite{DLM+:07}. The learning component of that earlier work was a brute-force exhaustive search over a concept class to find a hypothesis consistent with a sample of random examples. In this work, the learning component is a sophisticated exact learning algorithm for sparse GF(2) polynomials due to Schapire and Sellie \cite{SchapireSellie:96}. A crucial element of this work, which enables us to simulate the membership queries required by \cite{SchapireSellie:96}, is an analysis establishing new properties of how sparse GF(2) polynomials simplify under certain restrictions of ``low-influence'' sets of variables.<|reference_end|> | arxiv | @article{diakonikolas2008efficiently,
title={Efficiently Testing Sparse GF(2) Polynomials},
author={Ilias Diakonikolas, Homin K. Lee, Kevin Matulef, Rocco A. Servedio,
Andrew Wan},
journal={arXiv preprint arXiv:0805.1765},
year={2008},
archivePrefix={arXiv},
eprint={0805.1765},
primaryClass={cs.CC}
} | diakonikolas2008efficiently |
arxiv-3723 | 0805.1785 | Distributed Self Management for Distributed Security Systems | <|reference_start|>Distributed Self Management for Distributed Security Systems: Distributed system as e.g. artificial immune systems, complex adaptive systems, or multi-agent systems are widely used in Computer Science, e.g. for network security, optimisations, or simulations. In these systems, small entities move through the network and perform certain tasks. At some time, the entities move to another place and require therefore information where to move is most profitable. Common used systems do not provide any information or use a centralised approach where a center delegates the entities. This article discusses whether small information about the neighbours enhances the performance of the overall system or not. Therefore, two information-protocols are introduced and analysed. In addition, the protocols are implemented and tested using the artificial immune system SANA that protects a network against intrusions.<|reference_end|> | arxiv | @article{hilker2008distributed,
title={Distributed Self Management for Distributed Security Systems},
author={Michael Hilker},
journal={Proceedings of the 2nd International Conference on Bio-Inspired
Computing: Theories and Applications (BIC-TA 2007), September 2007,
Zhengzhou, China},
year={2008},
archivePrefix={arXiv},
eprint={0805.1785},
primaryClass={cs.MA cs.AI}
} | hilker2008distributed |
arxiv-3724 | 0805.1786 | Next Challenges in Bringing Artificial Immune Systems to Production in Network Security | <|reference_start|>Next Challenges in Bringing Artificial Immune Systems to Production in Network Security: The human immune system protects the human body against various pathogens like e.g. biological viruses and bacteria. Artificial immune systems reuse the architecture, organization, and workflows of the human immune system for various problems in computer science. In the network security, the artificial immune system is used to secure a network and its nodes against intrusions like viruses, worms, and trojans. However, these approaches are far away from production where they are academic proof-of-concept implementations or use only a small part to protect against a certain intrusion. This article discusses the required steps to bring artificial immune systems into production in the network security domain. It furthermore figures out the challenges and provides the description and results of the prototype of an artificial immune system, which is SANA called.<|reference_end|> | arxiv | @article{hilker2008next,
title={Next Challenges in Bringing Artificial Immune Systems to Production in
Network Security},
author={Michael Hilker},
journal={arXiv preprint arXiv:0805.1786},
year={2008},
archivePrefix={arXiv},
eprint={0805.1786},
primaryClass={cs.MA cs.AI}
} | hilker2008next |
arxiv-3725 | 0805.1787 | A Network Protection Framework through Artificial Immunity | <|reference_start|>A Network Protection Framework through Artificial Immunity: Current network protection systems use a collection of intelligent components - e.g. classifiers or rule-based firewall systems - to detect intrusions and anomalies and to secure a network against viruses, worms, or trojans. However, these network systems rely on individuality and support an architecture with little collaborative work among the protection components. They give little administration support for maintenance, but offer a large number of individual single points of failure - an ideal situation for network attacks to succeed. In this work, we discuss the required features, the performance, and the problems of a distributed protection system called {\it SANA}. It consists of a cooperative architecture motivated by the human immune system, in which the components correspond to artificial immune cells that are connected for their collaborative work. SANA promises better protection against intruders than commonly known protection systems through adaptive self-management, while using resources efficiently thanks to an intelligent reduction of redundancies. We introduce a library of several novel and commonly used protection components and evaluate the performance of SANA with a proof-of-concept implementation.<|reference_end|> | arxiv | @article{hilker2008a,
title={A Network Protection Framework through Artificial Immunity},
author={Michael Hilker and Christoph Schommer},
journal={arXiv preprint arXiv:0805.1787},
year={2008},
archivePrefix={arXiv},
eprint={0805.1787},
primaryClass={cs.MA cs.CR}
} | hilker2008a |
arxiv-3726 | 0805.1788 | Pedestrian Flow at Bottlenecks - Validation and Calibration of Vissim's Social Force Model of Pedestrian Traffic and its Empirical Foundations | <|reference_start|>Pedestrian Flow at Bottlenecks - Validation and Calibration of Vissim's Social Force Model of Pedestrian Traffic and its Empirical Foundations: In this contribution, first results of experiments on pedestrian flow through bottlenecks are presented and then compared to simulation results obtained with the Social Force Model in the Vissim simulation framework. Concerning the experiments, it is argued that the basic dependence between flow and bottleneck width is not a step function but is linear, modified by the effect of a psychological phenomenon. The simulation results likewise show a linear dependence, and the parameters can be calibrated such that the absolute values for flow and time fit the range of experimental results.<|reference_end|> | arxiv | @article{kretz2008pedestrian,
title={Pedestrian Flow at Bottlenecks - Validation and Calibration of Vissim's
Social Force Model of Pedestrian Traffic and its Empirical Foundations},
author={Tobias Kretz, Stefan Hengst, Peter Vortisch},
journal={arXiv preprint arXiv:0805.1788},
year={2008},
archivePrefix={arXiv},
eprint={0805.1788},
primaryClass={cs.MA physics.soc-ph}
} | kretz2008pedestrian |
arxiv-3727 | 0805.1798 | (Mechanical) Reasoning on Infinite Extensive Games | <|reference_start|>(Mechanical) Reasoning on Infinite Extensive Games: In order to better understand the reasoning involved in analyzing infinite games in extensive form, we performed experiments in the proof assistant Coq, which are reported here.<|reference_end|> | arxiv | @article{lescanne2008(mechanical),
title={(Mechanical) Reasoning on Infinite Extensive Games},
author={Pierre Lescanne (LIP)},
journal={arXiv preprint arXiv:0805.1798},
year={2008},
number={LIP (UMR5668) RR2008-16},
archivePrefix={arXiv},
eprint={0805.1798},
primaryClass={cs.GT cs.LO}
} | lescanne2008(mechanical) |
arxiv-3728 | 0805.1806 | Tuplix Calculus Specifications of Financial Transfer Networks | <|reference_start|>Tuplix Calculus Specifications of Financial Transfer Networks: We study the application of Tuplix Calculus in modular financial budget design. We formalize organizational structure using financial transfer networks. We consider the notion of flux of money over a network, and a way to enforce the matching of influx and outflux for parts of a network. We exploit so-called signed attribute notation to make internal streams visible through encapsulations. Finally, we propose a Tuplix Calculus construct for the definition of data functions.<|reference_end|> | arxiv | @article{bergstra2008tuplix,
title={Tuplix Calculus Specifications of Financial Transfer Networks},
author={J.A. Bergstra, S. Nolst Trenite, M.B. van der Zwaag},
journal={arXiv preprint arXiv:0805.1806},
year={2008},
number={PRG0807},
archivePrefix={arXiv},
eprint={0805.1806},
primaryClass={cs.CE cs.LO}
} | bergstra2008tuplix |
arxiv-3729 | 0805.1827 | Parallel Pricing Algorithms for Multi--Dimensional Bermudan/American Options using Monte Carlo methods | <|reference_start|>Parallel Pricing Algorithms for Multi--Dimensional Bermudan/American Options using Monte Carlo methods: In this paper we present two parallel Monte Carlo based algorithms for pricing multi--dimensional Bermudan/American options. The first approach relies on computation of the optimal exercise boundary, while the second relies on classification of continuation and exercise values. We also evaluate the performance of both algorithms in a desktop grid environment. We show the effectiveness of the proposed approaches in a heterogeneous computing environment, and identify scalability constraints due to the algorithmic structure.<|reference_end|> | arxiv | @article{bossy2008parallel,
title={Parallel Pricing Algorithms for Multi--Dimensional Bermudan/American
Options using Monte Carlo methods},
author={Mireille Bossy (INRIA Sophia Antipolis / INRIA Lorraine / IECN),
Fran\c{c}oise Baude (INRIA Sophia Antipolis), Viet Dung Doan (INRIA Sophia
Antipolis), Abhijeet Gaikwad (INRIA Sophia Antipolis), Ian Stokes-Rees (INRIA
Sophia Antipolis)},
journal={N° RR-6530 (2008)},
year={2008},
doi={10.1016/j.matcom.2010.08.005},
number={RR-6530},
archivePrefix={arXiv},
eprint={0805.1827},
primaryClass={cs.DC cs.CE}
} | bossy2008parallel |
arxiv-3730 | 0805.1844 | Practical recipes for the model order reduction, dynamical simulation, and compressive sampling of large-scale open quantum systems | <|reference_start|>Practical recipes for the model order reduction, dynamical simulation, and compressive sampling of large-scale open quantum systems: This article presents numerical recipes for simulating high-temperature and non-equilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto a state-space manifold having reduced dimensionality and possessing a Kahler potential of multi-linear form. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low-dimensionality Kahler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candes-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given, and methods for quantum state optimization by Dantzig selection are given.<|reference_end|> | arxiv | @article{sidles2008practical,
title={Practical recipes for the model order reduction, dynamical simulation,
and compressive sampling of large-scale open quantum systems},
author={John A. Sidles, Joseph L. Garbini, Lee E. Harrell, Alfred O. Hero,
Jonathan P. Jacky, Joseph R. Malcomb, Anthony G. Norman, Austin M. Williamson},
journal={arXiv preprint arXiv:0805.1844},
year={2008},
doi={10.1088/1367-2630/11/6/065002},
archivePrefix={arXiv},
eprint={0805.1844},
primaryClass={quant-ph cs.IT math.IT math.NA}
} | sidles2008practical |
arxiv-3731 | 0805.1854 | A New Algorithm for Interactive Structural Image Segmentation | <|reference_start|>A New Algorithm for Interactive Structural Image Segmentation: This paper proposes a novel algorithm for the problem of structural image segmentation through an interactive model-based approach. Interaction is expressed in the model creation, which is done according to user traces drawn over a given input image. Both model and input are then represented by means of attributed relational graphs derived on the fly. Appearance features are taken into account as object attributes and structural properties are expressed as relational attributes. To cope with possible topological differences between both graphs, a new structure called the deformation graph is introduced. The segmentation process corresponds to finding a labelling of the input graph that minimizes the deformations introduced in the model when it is updated with input information. This approach has been shown to be faster than other segmentation methods, with competitive output quality. Therefore, the method solves the problem of multiple label segmentation in an efficient way. Encouraging results on both natural and target-specific color images, as well as examples showing the reusability of the model, are presented and discussed.<|reference_end|> | arxiv | @article{noma2008a,
title={A New Algorithm for Interactive Structural Image Segmentation},
author={Alexandre Noma, Ana B. V. Graciano, Luis Augusto Consularo, Roberto M.
Cesar-Jr, Isabelle Bloch},
journal={arXiv preprint arXiv:0805.1854},
year={2008},
archivePrefix={arXiv},
eprint={0805.1854},
primaryClass={cs.CV}
} | noma2008a |
arxiv-3732 | 0805.1857 | The Gaussian Many-Help-One Distributed Source Coding Problem | <|reference_start|>The Gaussian Many-Help-One Distributed Source Coding Problem: Jointly Gaussian memoryless sources are observed at N distinct terminals. The goal is to efficiently encode the observations in a distributed fashion so as to enable reconstruction of any one of the observations, say the first one, at the decoder subject to a quadratic fidelity criterion. Our main result is a precise characterization of the rate-distortion region when the covariance matrix of the sources satisfies a "tree-structure" condition. In this situation, a natural analog-digital separation scheme optimally trades off the distributed quantization rate tuples and the distortion in the reconstruction: each encoder consists of a point-to-point Gaussian vector quantizer followed by a Slepian-Wolf binning encoder. We also provide a partial converse that suggests that the tree structure condition is fundamental.<|reference_end|> | arxiv | @article{tavildar2008the,
title={The Gaussian Many-Help-One Distributed Source Coding Problem},
author={Saurabha Tavildar, Pramod Viswanath, and Aaron B. Wagner},
journal={arXiv preprint arXiv:0805.1857},
year={2008},
archivePrefix={arXiv},
eprint={0805.1857},
primaryClass={cs.IT math.IT}
} | tavildar2008the |
arxiv-3733 | 0805.1877 | Perfect tag identification protocol in RFID networks | <|reference_start|>Perfect tag identification protocol in RFID networks: Radio Frequency IDentification (RFID) systems are becoming more and more popular in the field of ubiquitous computing, in particular for object identification. An RFID system is composed of one or more readers and a number of tags. One of the main issues in an RFID network is the fast and reliable identification of all tags in the reader range. The reader issues some queries, and tags properly answer. Then, the reader must identify the tags from such answers. This is crucial for most applications. Since the transmission medium is shared, the typical problem to be faced is a MAC-like one, i.e. to avoid or limit the number of tag transmission collisions. We propose a protocol which, under some assumptions about transmission techniques, always achieves 100% performance. It is based on a proper recursive splitting of the concurrent tag sets, until all tags have been identified. The other approaches in the literature achieve at most about 42% performance on average. The counterpart is more sophisticated hardware to be deployed in the manufacture of low-cost tags.<|reference_end|> | arxiv | @article{bonuccelli2008perfect,
title={Perfect tag identification protocol in RFID networks},
author={Maurizio A. Bonuccelli, Francesca Lonetti, Francesca Martelli},
journal={arXiv preprint arXiv:0805.1877},
year={2008},
archivePrefix={arXiv},
eprint={0805.1877},
primaryClass={cs.NI}
} | bonuccelli2008perfect |
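The "recursive splitting of the concurrent tag sets" in the abstract above is not detailed further; the sketch below shows the classic query-tree flavour of such a protocol (an illustrative assumption, not the authors' exact scheme): a reader query carries an ID prefix, every matching tag answers, and a collision splits the contending set by extending the prefix with 0 and 1.

```python
def query_tree_identify(tags, prefix=""):
    """Recursively identify all tag IDs (bit strings) matching `prefix`.

    Models an idealised reader: 0 responders is an idle slot, 1 responder
    is a successful identification, and >= 2 responders is a collision
    that triggers a split into prefix+'0' and prefix+'1'.
    """
    responders = [t for t in tags if t.startswith(prefix)]
    if not responders:
        return []                      # idle slot: nobody answers
    if len(responders) == 1:
        return responders              # singleton: tag identified
    # collision: split the contending set and recurse on both halves
    return (query_tree_identify(tags, prefix + "0") +
            query_tree_identify(tags, prefix + "1"))

tags = ["0010", "0111", "1100", "1101"]
print(sorted(query_tree_identify(tags)))
```

Each collision doubles the number of outstanding queries, so the reader effectively walks a binary tree whose leaves are the identified tags.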
arxiv-3734 | 0805.1886 | Platform-Independent Firewall Policy Representation | <|reference_start|>Platform-Independent Firewall Policy Representation: In this paper we will discuss the design of an abstract firewall model along with a platform-independent policy definition language. We will also discuss the main design challenges and solutions to these challenges, as well as examine several differences in policy semantics between vendors and how they could be mapped to our platform-independent language. We will also touch upon a processing model, describing the mechanism by which an abstract policy could be compiled into a concrete firewall policy syntax. We will briefly discuss some future research directions, such as policy optimization and validation.<|reference_end|> | arxiv | @article{zaliva2008platform-independent,
title={Platform-Independent Firewall Policy Representation},
author={Vadim Zaliva},
journal={arXiv preprint arXiv:0805.1886},
year={2008},
archivePrefix={arXiv},
eprint={0805.1886},
primaryClass={cs.CR}
} | zaliva2008platform-independent |
arxiv-3735 | 0805.1968 | Heavy-Tailed Limits for Medium Size Jobs and Comparison Scheduling | <|reference_start|>Heavy-Tailed Limits for Medium Size Jobs and Comparison Scheduling: We study the conditional sojourn time distributions of processor sharing (PS), foreground background processor sharing (FBPS) and shortest remaining processing time first (SRPT) scheduling disciplines on an event where the job size of a customer arriving in stationarity is smaller than exactly k>=0 out of the preceding m>=k arrivals. Then, conditioning on the preceding event, the sojourn time distribution of this newly arriving customer behaves asymptotically the same as if the customer were served in isolation with a server of rate (1-\rho)/(k+1) for PS/FBPS, and (1-\rho) for SRPT, respectively, where \rho is the traffic intensity. Hence, the introduced notion of conditional limits allows us to distinguish the asymptotic performance of the studied schedulers by showing that SRPT exhibits considerably better asymptotic behavior for relatively smaller jobs than PS/FBPS. Inspired by the preceding results, we propose an approximation to the SRPT discipline based on a novel adaptive job grouping mechanism that uses relative size comparison of a newly arriving job to the preceding m arrivals. Specifically, if the newly arriving job is smaller than k and larger than m-k of the previous m jobs, it is routed into class k. Then, the classes of smaller jobs are served with higher priorities using the static priority scheduling. The good performance of this mechanism, even for a small number of classes m+1, is demonstrated using the asymptotic queueing analysis under the heavy-tailed job requirements. We also discuss refinements of the comparison grouping mechanism that improve the accuracy of job classification at the expense of a small additional complexity.<|reference_end|> | arxiv | @article{jelenkovic2008heavy-tailed,
title={Heavy-Tailed Limits for Medium Size Jobs and Comparison Scheduling},
author={Predrag R. Jelenkovic, Xiaozhu Kang, Jian Tan},
journal={arXiv preprint arXiv:0805.1968},
year={2008},
archivePrefix={arXiv},
eprint={0805.1968},
primaryClass={cs.PF cs.NI}
} | jelenkovic2008heavy-tailed |
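The grouping rule quoted in the abstract above ("if the newly arriving job is smaller than k and larger than m-k of the previous m jobs, it is routed into class k") admits a direct sketch. The sliding-window bookkeeping below is an assumed reading of that rule, not the authors' implementation:

```python
from collections import deque

def job_class(size, window):
    """Class of a new job = number of the preceding m jobs it is smaller than.
    A higher class index means a relatively smaller job, which the static
    priority scheduler then serves at higher priority."""
    return sum(1 for s in window if size < s)

m = 4
window = deque(maxlen=m)            # the m most recent job sizes
routed = []
for size in [10, 3, 7, 50, 2, 8]:
    if len(window) == m:            # classify once m predecessors exist
        routed.append((size, job_class(size, window)))
    window.append(size)

# size 2 is smaller than all of {10, 3, 7, 50} -> class 4;
# size 8 is smaller than 50 only (10 has left the window) -> class 1
print(routed)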
arxiv-3736 | 0805.1974 | Lower Bound for the Communication Complexity of the Russian Cards Problem | <|reference_start|>Lower Bound for the Communication Complexity of the Russian Cards Problem: In this paper it is shown that no public announcement scheme that can be modeled in Dynamic Epistemic Logic (DEL) can solve the Russian Cards Problem (RCP) in one announcement. Since DEL is a general model for any public announcement scheme, we conclude that there exists no single-announcement solution to the RCP. The proof demonstrates the utility of DEL in proving lower bounds for communication protocols. It is also shown that a general version of the RCP has no two-announcement solution when the adversary has a sufficiently large number of cards.<|reference_end|> | arxiv | @article{cyriac2008lower,
title={Lower Bound for the Communication Complexity of the Russian Cards
Problem},
author={Aiswarya Cyriac, K. Murali Krishnan},
journal={arXiv preprint arXiv:0805.1974},
year={2008},
archivePrefix={arXiv},
eprint={0805.1974},
primaryClass={cs.LO}
} | cyriac2008lower |
arxiv-3737 | 0805.1981 | P&P protocol: local coordination of mobile sensors for self-deployment | <|reference_start|>P&P protocol: local coordination of mobile sensors for self-deployment: The use of mobile sensors is of great relevance for a number of strategic applications devoted to monitoring critical areas where sensors cannot be deployed manually. In these networks, each sensor adapts its position on the basis of a local evaluation of the coverage efficiency, thus permitting an autonomous deployment. Several algorithms have been proposed to deploy mobile sensors over the area of interest. The applicability of these approaches largely depends on a proper formalization of rigorous rules to coordinate sensor movements, solve local conflicts and manage possible failures of communications and devices. In this paper we introduce P&P, a communication protocol that permits a correct and efficient coordination of sensor movements in agreement with the PUSH&PULL algorithm. We deeply investigate and solve the problems that may occur when coordinating asynchronous local decisions in the presence of an unreliable transmission medium and possibly faulty devices, as in the typical working scenario of mobile sensor networks. Simulation results show the performance of our protocol under a range of operative settings, including conflict situations, irregularly shaped target areas, and node failures.<|reference_end|> | arxiv | @article{bartolini2008p&p,
title={P&P protocol: local coordination of mobile sensors for self-deployment},
author={N. Bartolini, A. Massini, S. Silvestri},
journal={arXiv preprint arXiv:0805.1981},
year={2008},
archivePrefix={arXiv},
eprint={0805.1981},
primaryClass={cs.NI cs.DC}
} | bartolini2008p&p |
arxiv-3738 | 0805.2015 | Algorithms and Bounds for Rollout Sampling Approximate Policy Iteration | <|reference_start|>Algorithms and Bounds for Rollout Sampling Approximate Policy Iteration: Several approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a supervised learning problem, have been proposed recently. Finding good policies with such methods requires not only an appropriate classifier, but also reliable examples of best actions, covering the state space sufficiently. Up to this time, little work has been done on appropriate covering schemes and on methods for reducing the sample complexity of such methods, especially in continuous state spaces. This paper focuses on the simplest possible covering scheme (a discretized grid over the state space) and performs a sample-complexity comparison between the simplest (and previously commonly used) rollout sampling allocation strategy, which allocates samples equally at each state under consideration, and an almost as simple method, which allocates samples only as needed and requires significantly fewer samples.<|reference_end|> | arxiv | @article{dimitrakakis2008algorithms,
title={Algorithms and Bounds for Rollout Sampling Approximate Policy Iteration},
author={Christos Dimitrakakis and Michail G. Lagoudakis},
journal={arXiv preprint arXiv:0805.2015},
year={2008},
number={IAS-UVA-08-03},
archivePrefix={arXiv},
eprint={0805.2015},
primaryClass={stat.ML cs.LO math.ST stat.TH}
} | dimitrakakis2008algorithms |
arxiv-3739 | 0805.2027 | Rollout Sampling Approximate Policy Iteration | <|reference_start|>Rollout Sampling Approximate Policy Iteration: Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions which focus on policy representation using classifiers and address policy learning as a supervised learning problem. This paper proposes variants of an improved policy iteration scheme which addresses the core sampling problem in evaluating a policy through simulation as a multi-armed bandit machine. The resulting algorithm offers comparable performance to the previous algorithm achieved, however, with significantly less computational effort. An order of magnitude improvement is demonstrated experimentally in two standard reinforcement learning domains: inverted pendulum and mountain-car.<|reference_end|> | arxiv | @article{dimitrakakis2008rollout,
title={Rollout Sampling Approximate Policy Iteration},
author={Christos Dimitrakakis and Michail G. Lagoudakis},
journal={arXiv preprint arXiv:0805.2027},
year={2008},
doi={10.1007/s10994-008-5069-3},
archivePrefix={arXiv},
eprint={0805.2027},
primaryClass={cs.LG cs.AI cs.CC}
} | dimitrakakis2008rollout |
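The "allocates samples only as needed" idea above is in the spirit of bandit algorithms. Below is a hedged sketch of choosing the best action at a single state via rollouts: a generic successive-elimination loop under a Hoeffding bound, not the paper's algorithm, with returns assumed to lie in [0, 1].

```python
import math, random

def best_action_by_elimination(rollout, actions, delta=0.05, budget=2000):
    """Pick the empirically best action at one state, spending rollouts only
    on actions not yet eliminated by a Hoeffding confidence bound.
    `rollout(a)` returns a noisy return in [0, 1]."""
    alive = list(actions)
    stats = {a: [0.0, 0] for a in actions}           # [sum of returns, count]
    for t in range(budget):
        if len(alive) == 1:
            break
        for a in alive:                              # one rollout per live action
            r = rollout(a)
            stats[a][0] += r
            stats[a][1] += 1
        means = {a: stats[a][0] / stats[a][1] for a in alive}
        n = stats[alive[0]][1]                       # all live actions share n
        rad = math.sqrt(math.log(2 * len(actions) * (t + 1) / delta) / (2 * n))
        best = max(means.values())
        alive = [a for a in alive if means[a] + 2 * rad >= best]
    return max(alive, key=lambda a: stats[a][0] / stats[a][1])

random.seed(0)
true_value = {"left": 0.3, "stay": 0.5, "right": 0.8}
noisy = lambda a: min(1.0, max(0.0, true_value[a] + random.uniform(-0.2, 0.2)))
print(best_action_by_elimination(noisy, list(true_value)))
```

Actions whose upper confidence bound falls below the empirical best stop receiving rollouts, which is where the savings over equal per-action allocation come from.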
arxiv-3740 | 0805.2045 | Semantic Analysis of Tag Similarity Measures in Collaborative Tagging Systems | <|reference_start|>Semantic Analysis of Tag Similarity Measures in Collaborative Tagging Systems: Social bookmarking systems allow users to organise collections of resources on the Web in a collaborative fashion. The increasing popularity of these systems as well as first insights into their emergent semantics have made them relevant to disciplines like knowledge extraction and ontology learning. The problem of devising methods to measure the semantic relatedness between tags and characterizing it semantically is still largely open. Here we analyze three measures of tag relatedness: tag co-occurrence, cosine similarity of co-occurrence distributions, and FolkRank, an adaptation of the PageRank algorithm to folksonomies. Each measure is computed on tags from a large-scale dataset crawled from the social bookmarking system del.icio.us. To provide a semantic grounding of our findings, a connection to WordNet (a semantic lexicon for the English language) is established by mapping tags into synonym sets of WordNet, and applying there well-known metrics of semantic similarity. Our results clearly expose different characteristics of the selected measures of relatedness, making them applicable to different subtasks of knowledge extraction such as synonym detection or discovery of concept hierarchies.<|reference_end|> | arxiv | @article{cattuto2008semantic,
title={Semantic Analysis of Tag Similarity Measures in Collaborative Tagging
Systems},
author={Ciro Cattuto, Dominik Benz, Andreas Hotho, Gerd Stumme},
journal={arXiv preprint arXiv:0805.2045},
year={2008},
archivePrefix={arXiv},
eprint={0805.2045},
primaryClass={cs.DL cs.IR}
} | cattuto2008semantic |
arxiv-3741 | 0805.2063 | Replication via Invalidating the Applicability of the Fixed Point Theorem | <|reference_start|>Replication via Invalidating the Applicability of the Fixed Point Theorem: We present a construction of a certain infinite complete partial order (CPO) that differs from the standard construction used in Scott's denotational semantics. In addition, we construct several other infinite CPO's. For some of those, we apply the usual Fixed Point Theorem (FPT) to yield a fixed point for every continuous function $\mu:2\to 2$ (where 2 denotes the set $\{0,1\}$), while for the other CPO's we cannot invoke that theorem to yield such fixed points. Every element of each of these CPO's is a binary string in the monotypic form, and we show that invalidating the applicability of the FPT to the CPO that Scott constructed yields the concept of replication.<|reference_end|> | arxiv | @article{ito2008replication,
title={Replication via Invalidating the Applicability of the Fixed Point
Theorem},
author={Genta Ito},
journal={arXiv preprint arXiv:0805.2063},
year={2008},
archivePrefix={arXiv},
eprint={0805.2063},
primaryClass={cs.LO}
} | ito2008replication |
arxiv-3742 | 0805.2068 | Fork Sequential Consistency is Blocking | <|reference_start|>Fork Sequential Consistency is Blocking: We consider an untrusted server storing shared data on behalf of clients. We show that no storage access protocol can on the one hand preserve sequential consistency and wait-freedom when the server is correct, and on the other hand always preserve fork sequential consistency.<|reference_end|> | arxiv | @article{cachin2008fork,
title={Fork Sequential Consistency is Blocking},
author={Christian Cachin, Idit Keidar, Alexander Shraer},
journal={arXiv preprint arXiv:0805.2068},
year={2008},
number={CCIT 697},
archivePrefix={arXiv},
eprint={0805.2068},
primaryClass={cs.DC}
} | cachin2008fork |
arxiv-3743 | 0805.2081 | Least change in the Determinant or Permanent of a matrix under perturbation of a single element: continuous and discrete cases | <|reference_start|>Least change in the Determinant or Permanent of a matrix under perturbation of a single element: continuous and discrete cases: We formulate the problem of finding the probability that the determinant of a matrix undergoes the least change upon perturbation of one of its elements, provided that most or all of the elements of the matrix are chosen at random and that the randomly chosen elements have a fixed probability of being non-zero. Also, we show that the procedure for finding the probability that the determinant undergoes the least change depends on whether the random variables for the matrix elements are continuous or discrete.<|reference_end|> | arxiv | @article{ito2008least,
title={Least change in the Determinant or Permanent of a matrix under
perturbation of a single element: continuous and discrete cases},
author={Genta Ito},
journal={arXiv preprint arXiv:0805.2081},
year={2008},
archivePrefix={arXiv},
eprint={0805.2081},
primaryClass={cs.DM cs.CC}
} | ito2008least |
arxiv-3744 | 0805.2083 | Approximate formulation of the probability that the Determinant or Permanent of a matrix undergoes the least change under perturbation of a single element | <|reference_start|>Approximate formulation of the probability that the Determinant or Permanent of a matrix undergoes the least change under perturbation of a single element: In an earlier paper, we discussed the probability that the determinant of a matrix undergoes the least change upon perturbation of one of its elements, provided that most or all of the elements of the matrix are chosen at random and that the randomly chosen elements have a fixed probability of being non-zero. In this paper, we derive approximate formulas for that probability by assuming that the terms in the permanent of a matrix are independent of one another, and we apply that assumption to several classes of matrices. In the course of deriving those formulas, we identified several integer sequences that are not listed on Sloane's Web site.<|reference_end|> | arxiv | @article{ito2008approximate,
title={Approximate formulation of the probability that the Determinant or
Permanent of a matrix undergoes the least change under perturbation of a
single element},
author={Genta Ito},
journal={arXiv preprint arXiv:0805.2083},
year={2008},
archivePrefix={arXiv},
eprint={0805.2083},
primaryClass={cs.DM cs.CC}
} | ito2008approximate |
arxiv-3745 | 0805.2105 | On Emergence of Dominating Cliques in Random Graphs | <|reference_start|>On Emergence of Dominating Cliques in Random Graphs: The emergence of dominating cliques in the Erd\"os-R\'enyi random graph model $\mathbb{G}(n,p)$ is investigated in this paper. It is shown that this phenomenon exhibits a phase transition. Namely, we argue that, given a constant probability $p$, an $n$-node random graph $G$ from $\mathbb{G}(n,p)$, and $r = c \log_{1/p} n$ with $1 \leq c \leq 2$, it holds that: (1) if $p > 1/2$ then an $r$-node clique is dominating in $G$ almost surely and, (2) if $p \leq (3 - \sqrt{5})/2$ then an $r$-node clique is not dominating in $G$ almost surely. The remaining range of the probability $p$ is examined more closely. A detailed study shows that this question is answered by examining the sub-logarithmic growth of $r$ with $n$.<|reference_end|> | arxiv | @article{nehez2008on,
title={On Emergence of Dominating Cliques in Random Graphs},
author={Martin Nehez, Daniel Olejar, Michal Demetrian},
journal={arXiv preprint arXiv:0805.2105},
year={2008},
archivePrefix={arXiv},
eprint={0805.2105},
primaryClass={math.CO cs.IT math.IT}
} | nehez2008on |
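The two asymptotic regimes and the clique size r = c log_{1/p} n from the abstract above can be tabulated with a throwaway helper; the constants are taken verbatim from the abstract, and the intermediate range is left open there as well.

```python
import math

P_LOW = (3 - math.sqrt(5)) / 2          # ~0.3820: at or below this, not dominating

def clique_size(n, p, c):
    """r = c * log_{1/p} n, the clique size considered in the paper."""
    return c * math.log(n) / math.log(1.0 / p)

def regime(p):
    """Classify a constant edge probability p according to the two theorems."""
    if p > 0.5:
        return "dominating a.s."
    if p <= P_LOW:
        return "not dominating a.s."
    return "open/intermediate range"

print(regime(0.6), regime(0.3), regime(0.45))
print(round(clique_size(n=10**6, p=0.6, c=1.5), 2))
```

For example, with n = 10^6, p = 0.6 and c = 1.5 the clique under consideration has only about 40 nodes, which is what makes the dominating-clique phenomenon striking.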
arxiv-3746 | 0805.2135 | Communication Lower Bounds Using Dual Polynomials | <|reference_start|>Communication Lower Bounds Using Dual Polynomials: Representations of Boolean functions by real polynomials play an important role in complexity theory. Typically, one is interested in the least degree of a polynomial p(x_1,...,x_n) that approximates or sign-represents a given Boolean function f(x_1,...,x_n). This article surveys a new and growing body of work in communication complexity that centers around the dual objects, i.e., polynomials that certify the difficulty of approximating or sign-representing a given function. We provide a unified guide to the following results, complete with all the key proofs: (1) Sherstov's Degree/Discrepancy Theorem, which translates lower bounds on the threshold degree of a Boolean function into upper bounds on the discrepancy of a related function; (2) Two different methods for proving lower bounds on bounded-error communication based on the approximate degree: Sherstov's pattern matrix method and Shi and Zhu's block composition method; (3) Extension of the pattern matrix method to the multiparty model, obtained by Lee and Shraibman and by Chattopadhyay and Ada, and the resulting improved lower bounds for DISJOINTNESS; (4) David and Pitassi's separation of NP and BPP in multiparty communication complexity for k=(1-eps)log n players.<|reference_end|> | arxiv | @article{sherstov2008communication,
title={Communication Lower Bounds Using Dual Polynomials},
author={Alexander A. Sherstov},
journal={arXiv preprint arXiv:0805.2135},
year={2008},
archivePrefix={arXiv},
eprint={0805.2135},
primaryClass={cs.CC}
} | sherstov2008communication |
arxiv-3747 | 0805.2170 | Independence of P vs NP in regards to oracle relativizations | <|reference_start|>Independence of P vs NP in regards to oracle relativizations: This is the third article in a series of four articles dealing with the P vs. NP question. The purpose of this work is to demonstrate that the methods used in the first two articles of this series are not affected by oracle relativizations. Furthermore, the solution to the P vs. NP problem is actually independent of oracle relativizations.<|reference_end|> | arxiv | @article{meek2008independence,
title={Independence of P vs. NP in regards to oracle relativizations},
author={Jerrald Meek},
journal={arXiv preprint arXiv:0805.2170},
year={2008},
archivePrefix={arXiv},
eprint={0805.2170},
primaryClass={cs.CC}
} | meek2008independence |
arxiv-3748 | 0805.2179 | Mnesors for databases | <|reference_start|>Mnesors for databases: We add commutativity to axioms defining mnesors and substitute a bitrop for the lattice. We show that it can be applied to relational database querying: set union, intersection and selection are redefined only from the mnesor addition and the granular multiplication. Union-compatibility is not required.<|reference_end|> | arxiv | @article{champenois2008mnesors,
title={Mnesors for databases},
author={Gilles Champenois},
journal={arXiv preprint arXiv:0805.2179},
year={2008},
archivePrefix={arXiv},
eprint={0805.2179},
primaryClass={cs.LO}
} | champenois2008mnesors |
arxiv-3749 | 0805.2185 | Path Diversity over Packet Switched Networks: Performance Analysis and Rate Allocation | <|reference_start|>Path Diversity over Packet Switched Networks: Performance Analysis and Rate Allocation: Path diversity works by setting up multiple parallel connections between the end points using the topological path redundancy of the network. In this paper, \textit{Forward Error Correction} (FEC) is applied across multiple independent paths to enhance the end-to-end reliability. Network paths are modeled as erasure Gilbert-Elliot channels. It is known that over any erasure channel, \textit{Maximum Distance Separable} (MDS) codes achieve the minimum probability of irrecoverable loss among all block codes of the same size. Based on the adopted model for the error behavior, we prove that the probability of irrecoverable loss for MDS codes decays exponentially for an asymptotically large number of paths. Then, optimal rate allocation problem is solved for the asymptotic case where the number of paths is large. Moreover, it is shown that in such asymptotically optimal rate allocation, each path is assigned a positive rate \textit{iff} its quality is above a certain threshold. The quality of a path is defined as the percentage of the time it spends in the bad state. Finally, using dynamic programming, a heuristic suboptimal algorithm with polynomial runtime is proposed for rate allocation over a finite number of paths. This algorithm converges to the asymptotically optimal rate allocation when the number of paths is large. The simulation results show that the proposed algorithm approximates the optimal rate allocation (found by exhaustive search) very closely for practical number of paths, and provides significant performance improvement compared to the alternative schemes of rate allocation.<|reference_end|> | arxiv | @article{fashandi2008path,
title={Path Diversity over Packet Switched Networks: Performance Analysis and
Rate Allocation},
author={Shervan Fashandi, Shahab Oveis Gharan and Amir K. Khandani},
journal={arXiv preprint arXiv:0805.2185},
year={2008},
number={Technical Report UW-E&CE#2008-09},
archivePrefix={arXiv},
eprint={0805.2185},
primaryClass={cs.NI cs.IT math.IT}
} | fashandi2008path |
arxiv-3750 | 0805.2189 | Visual Checking of Spreadsheets | <|reference_start|>Visual Checking of Spreadsheets: The difference between surface and deep structures of a spreadsheet is a major cause of difficulty in checking spreadsheets. After a brief survey of current methods of checking (or debugging) spreadsheets, new visual methods of showing the deep structures are presented. Illustrations are given on how these visual methods can be employed in various interactive local and global debugging strategies.<|reference_end|> | arxiv | @article{chen2008visual,
title={Visual Checking of Spreadsheets},
author={Ying Chen, Hock Chuan Chan},
journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2000, pp. 75-85,
ISBN 1-86166-158-4},
year={2008},
archivePrefix={arXiv},
eprint={0805.2189},
primaryClass={cs.HC}
} | chen2008visual |
arxiv-3751 | 0805.2199 | Constraint Complexity of Realizations of Linear Codes on Arbitrary Graphs | <|reference_start|>Constraint Complexity of Realizations of Linear Codes on Arbitrary Graphs: A graphical realization of a linear code C consists of an assignment of the coordinates of C to the vertices of a graph, along with a specification of linear state spaces and linear ``local constraint'' codes to be associated with the edges and vertices, respectively, of the graph. The $\kappa$-complexity of a graphical realization is defined to be the largest dimension of any of its local constraint codes. $\kappa$-complexity is a reasonable measure of the computational complexity of a sum-product decoding algorithm specified by a graphical realization. The main focus of this paper is on the following problem: given a linear code C and a graph G, how small can the $\kappa$-complexity of a realization of C on G be? As useful tools for attacking this problem, we introduce the Vertex-Cut Bound, and the notion of ``vc-treewidth'' for a graph, which is closely related to the well-known graph-theoretic notion of treewidth. Using these tools, we derive tight lower bounds on the $\kappa$-complexity of any realization of C on G. Our bounds enable us to conclude that good error-correcting codes can have low-complexity realizations only on graphs with large vc-treewidth. Along the way, we also prove the interesting result that the ratio of the $\kappa$-complexity of the best conventional trellis realization of a length-n code C to the $\kappa$-complexity of the best cycle-free realization of C grows at most logarithmically with codelength n. Such a logarithmic growth rate is, in fact, achievable.<|reference_end|> | arxiv | @article{kashyap2008constraint,
title={Constraint Complexity of Realizations of Linear Codes on Arbitrary
Graphs},
author={Navin Kashyap},
journal={arXiv preprint arXiv:0805.2199},
year={2008},
doi={10.1109/TIT.2009.2030492},
archivePrefix={arXiv},
eprint={0805.2199},
primaryClass={cs.DM cs.IT math.IT}
} | kashyap2008constraint |
arxiv-3752 | 0805.2286 | Secure Network Coding Against the Contamination and Eavesdropping Adversaries | <|reference_start|>Secure Network Coding Against the Contamination and Eavesdropping Adversaries: In this paper, we propose an algorithm that targets contamination and eavesdropping adversaries. We consider the case when the number of independent packets available to the eavesdropper is less than the multicast capacity of the network. By means of our algorithm every node can verify the integrity of the received packets easily and an eavesdropper is unable to get any meaningful information about the source. We call it practical security if an eavesdropper is unable to get any meaningful information about the source. We show that, by giving up a small amount of overall capacity, our algorithm achieves the practically secure condition with probability one. Furthermore, the communication overhead of our algorithm is negligible compared with previous works, since the transmission of the hash values and the code coefficients are both avoided.<|reference_end|> | arxiv | @article{zhou2008secure,
title={Secure Network Coding Against the Contamination and Eavesdropping
Adversaries},
author={Yejun Zhou, Hui Li and Jianfeng Ma},
journal={arXiv preprint arXiv:0805.2286},
year={2008},
archivePrefix={arXiv},
eprint={0805.2286},
primaryClass={cs.CR cs.NI}
} | zhou2008secure |
arxiv-3753 | 0805.2303 | Graph Algorithms for Improving Type-Logical Proof Search | <|reference_start|>Graph Algorithms for Improving Type-Logical Proof Search: Proof nets are a graph theoretical representation of proofs in various fragments of type-logical grammar. In spite of this basis in graph theory, there has been relatively little attention to the use of graph theoretic algorithms for type-logical proof search. In this paper we will look at several ways in which standard graph theoretic algorithms can be used to restrict the search space. In particular, we will provide an O(n^4) algorithm for selecting an optimal axiom link at any stage in the proof search as well as an O(kn^3) algorithm for selecting the k best proof candidates.<|reference_end|> | arxiv | @article{moot2008graph,
title={Graph Algorithms for Improving Type-Logical Proof Search},
author={Richard Moot (LaBRI, Inria Futurs)},
journal={In Categorial Grammars - An Efficient Tool for Natural Language
Processing, Montpellier, France, June 2004},
year={2008},
archivePrefix={arXiv},
eprint={0805.2303},
primaryClass={cs.CL}
} | moot2008graph |
arxiv-3754 | 0805.2308 | Toward Fuzzy block theory | <|reference_start|>Toward Fuzzy block theory: This study surveys the fundamentals of fuzzy block theory and its application to the assessment of stability in underground openings. By incorporating fuzzy concepts into key block theory in two ways, the fundamentals of fuzzy block theory are presented. In the indirect combination, by coupling an adaptive Neuro-Fuzzy Inference System (NFIS) with classic block theory, we can extract possible damaged parts around a tunnel. In the direct solution, some principles of block theory are rewritten by means of fuzzy facets theory.<|reference_end|> | arxiv | @article{owladeghaffari2008toward,
title={Toward Fuzzy block theory},
author={H.Owladeghaffari},
journal={arXiv preprint arXiv:0805.2308},
year={2008},
archivePrefix={arXiv},
eprint={0805.2308},
primaryClass={cs.AI}
} | owladeghaffari2008toward |
arxiv-3755 | 0805.2311 | Aplicacion de la descomposicion racional univariada a monstrous moonshine (in Spanish) | <|reference_start|>Aplicacion de la descomposicion racional univariada a monstrous moonshine (in Spanish): This paper shows how to use Computational Algebra techniques, namely the decomposition of rational functions in one variable, to explore a certain set of modular functions, called replicable functions, that arise in Monstrous Moonshine. In particular, we have computed all the rational relations with coefficients in Z between pairs of replicable functions.<|reference_end|> | arxiv | @article{mckay2008aplicacion,
title={Aplicacion de la descomposicion racional univariada a monstrous
moonshine (in Spanish)},
author={John McKay and David Sevilla},
journal={Proceedings of the 2004 Encuentro de Algebra Computacional y
Aplicaciones (EACA), p. 289-294. ISBN 84-688-6988-04},
year={2008},
archivePrefix={arXiv},
eprint={0805.2311},
primaryClass={math.NT cs.SC}
} | mckay2008aplicacion |
arxiv-3756 | 0805.2324 | A multilateral filtering method applied to airplane runway image | <|reference_start|>A multilateral filtering method applied to airplane runway image: Considering the features of airport runway image filtering, an improved bilateral filtering method is proposed which can remove noise while preserving edges. First, a steerable filter decomposition is used to calculate the sub-band parameters of four orientations, and the texture feature matrix is then obtained from the sub-band local median energy. The texture-similarity, spatial-proximity and color-similarity functions are used to filter the image. The effect of the weighting function parameters is also qualitatively analyzed. Comparisons with the standard bilateral filter and simulation results on real airport runway images show that multilateral filtering is more effective than standard bilateral filtering.<|reference_end|> | arxiv | @article{yu2008a,
title={A multilateral filtering method applied to airplane runway image},
author={Zhang Yu, Shi Zhong-ke, Wang Run-quan},
journal={arXiv preprint arXiv:0805.2324},
year={2008},
archivePrefix={arXiv},
eprint={0805.2324},
primaryClass={cs.CV}
} | yu2008a |
arxiv-3757 | 0805.2331 | Computing the fixing group of a rational function | <|reference_start|>Computing the fixing group of a rational function: Let G=Aut_K (K(x)) be the Galois group of the transcendental degree one pure field extension K(x)/K. In this paper we describe polynomial time algorithms for computing the field Fix(H) fixed by a subgroup H < G and for computing the fixing group G_f of a rational function f in K(x).<|reference_end|> | arxiv | @article{gutierrez2008computing,
title={Computing the fixing group of a rational function},
author={Jaime Gutierrez, Rosario Rubio, David Sevilla},
journal={Proceedings of the 5th International workshop on Computer Algebra
in Scientific Computing (CASC), p. 159-164, Institut fuer Informatik,
Technische Universitaet Muenchen, 2002. ISBN 3-9808546-0-4},
year={2008},
archivePrefix={arXiv},
eprint={0805.2331},
primaryClass={cs.SC math.AC}
} | gutierrez2008computing |
arxiv-3758 | 0805.2338 | Unirational fields of transcendence degree one and functional decomposition | <|reference_start|>Unirational fields of transcendence degree one and functional decomposition: In this paper we present an algorithm to compute all unirational fields of transcendence degree one containing a given finite set of multivariate rational functions. In particular, we provide an algorithm to decompose a multivariate rational function f of the form f=g(h), where g is a univariate rational function and h a multivariate one.<|reference_end|> | arxiv | @article{gutierrez2008unirational,
title={Unirational fields of transcendence degree one and functional
decomposition},
author={Jaime Gutierrez, Rosario Rubio, David Sevilla},
journal={Proceedings of the 2001 International Symposium on Symbolic and
Algebraic Computation (ISSAC), p. 167-175, ACM, New York, 2001. ISBN
1-58113-417-7},
year={2008},
archivePrefix={arXiv},
eprint={0805.2338},
primaryClass={cs.SC math.AC}
} | gutierrez2008unirational |
arxiv-3759 | 0805.2362 | An optimization problem on the sphere | <|reference_start|>An optimization problem on the sphere: We prove existence and uniqueness of the minimizer for the average geodesic distance to the points of a geodesically convex set on the sphere. This implies a corresponding existence and uniqueness result for an optimal algorithm for halfspace learning, when data and target functions are drawn from the uniform distribution.<|reference_end|> | arxiv | @article{maurer2008an,
title={An optimization problem on the sphere},
author={Andreas Maurer},
journal={arXiv preprint arXiv:0805.2362},
year={2008},
archivePrefix={arXiv},
eprint={0805.2362},
primaryClass={cs.LG cs.CG}
} | maurer2008an |
arxiv-3760 | 0805.2368 | A Kernel Method for the Two-Sample Problem | <|reference_start|>A Kernel Method for the Two-Sample Problem: We propose a framework for analyzing and comparing distributions, allowing us to design statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS). We present two tests based on large deviation bounds for the test statistic, while a third is based on the asymptotic distribution of this statistic. The test statistic can be computed in quadratic time, although efficient linear time approximations are available. Several classical metrics on distributions are recovered when the function space used to compute the difference in expectations is allowed to be more general (e.g. a Banach space). We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.<|reference_end|> | arxiv | @article{gretton2008a,
title={A Kernel Method for the Two-Sample Problem},
author={Arthur Gretton, Karsten Borgwardt, Malte J. Rasch, Bernhard Scholkopf,
Alexander J. Smola},
journal={arXiv preprint arXiv:0805.2368},
year={2008},
archivePrefix={arXiv},
eprint={0805.2368},
primaryClass={cs.LG cs.AI}
} | gretton2008a |
arxiv-3761 | 0805.2379 | Linear code-based vector quantization for independent random variables | <|reference_start|>Linear code-based vector quantization for independent random variables: In this paper we analyze the rate-distortion function R(D) achievable using linear codes over GF(q), where q is a prime number.<|reference_end|> | arxiv | @article{kudryashov2008linear,
title={Linear code-based vector quantization for independent random variables},
author={Boris Kudryashov and Kirill Yurkov},
journal={arXiv preprint arXiv:0805.2379},
year={2008},
archivePrefix={arXiv},
eprint={0805.2379},
primaryClass={cs.IT math.IT}
} | kudryashov2008linear |
arxiv-3762 | 0805.2421 | Malicious Bayesian Congestion Games | <|reference_start|>Malicious Bayesian Congestion Games: In this paper, we introduce malicious Bayesian congestion games as an extension to congestion games where players might act in a malicious way. In such a game each player has two types. Either the player is a rational player seeking to minimize her own delay, or - with a certain probability - the player is malicious in which case her only goal is to disturb the other players as much as possible. We show that such games do in general not possess a Bayesian Nash equilibrium in pure strategies (i.e. a pure Bayesian Nash equilibrium). Moreover, given a game, we show that it is NP-complete to decide whether it admits a pure Bayesian Nash equilibrium. This result even holds when resource latency functions are linear, each player is malicious with the same probability, and all strategy sets consist of singleton sets. For a slightly more restricted class of malicious Bayesian congestion games, we provide easy checkable properties that are necessary and sufficient for the existence of a pure Bayesian Nash equilibrium. In the second part of the paper we study the impact of the malicious types on the overall performance of the system (i.e. the social cost). To measure this impact, we use the Price of Malice. We provide (tight) bounds on the Price of Malice for an interesting class of malicious Bayesian congestion games. Moreover, we show that for certain congestion games the advent of malicious types can also be beneficial to the system in the sense that the social cost of the worst case equilibrium decreases. We provide a tight bound on the maximum factor by which this happens.<|reference_end|> | arxiv | @article{gairing2008malicious,
title={Malicious Bayesian Congestion Games},
author={Martin Gairing},
journal={arXiv preprint arXiv:0805.2421},
year={2008},
archivePrefix={arXiv},
eprint={0805.2421},
primaryClass={cs.GT cs.CC}
} | gairing2008malicious |
arxiv-3763 | 0805.2422 | Transceiver Pair Designs for Multiple Access Channels under Fixed Sum Mutual Information using MMSE Decision Feedback Detection | <|reference_start|>Transceiver Pair Designs for Multiple Access Channels under Fixed Sum Mutual Information using MMSE Decision Feedback Detection: In this paper, we consider the joint design of the transceivers for a multiple access Multiple Input and Multiple Output (MIMO) system having Inter-Symbol Interference (ISI) channels. The system we consider is equipped with the Minimum Mean Square Error (MMSE) Decision-Feedback (DF) detector. Traditionally, transmitter designs for this system have been based on constraints of either the transmission power or the signal-to-interference-and-noise ratio (SINR) for each user. Here, we explore a novel perspective and examine a transceiver design which is based on a fixed sum mutual information constraint and minimizes the arithmetic average of mean square error of MMSE-decision feedback detection. For this optimization problem, a closed-form solution is obtained and is achieved if and only if the averaged sum mutual information is uniformly distributed over each active subchannel. Meanwhile, the mutual information of the currently detected user is uniformly distributed over each individual symbol within the block signal of the user, assuming all the previous user signals have been perfectly detected.<|reference_end|> | arxiv | @article{jiang2008transceiver,
title={Transceiver Pair Designs for Multiple Access Channels under Fixed Sum
Mutual Information using MMSE Decision Feedback Detection},
author={Wenwen Jiang, Jian-Kang Zhang and Kon Max Wong},
journal={arXiv preprint arXiv:0805.2422},
year={2008},
archivePrefix={arXiv},
eprint={0805.2422},
primaryClass={cs.IT math.IT}
} | jiang2008transceiver |
arxiv-3764 | 0805.2423 | Green Codes: Energy-Efficient Short-Range Communication | <|reference_start|>Green Codes: Energy-Efficient Short-Range Communication: A green code attempts to minimize the total energy per-bit required to communicate across a noisy channel. The classical information-theoretic approach neglects the energy expended in processing the data at the encoder and the decoder and only minimizes the energy required for transmissions. Since there is no cost associated with using more degrees of freedom, the traditionally optimal strategy is to communicate at rate zero. In this work, we use our recently proposed model for the power consumed by iterative message passing. Using generalized sphere-packing bounds on the decoding power, we find lower bounds on the total energy consumed in the transmissions and the decoding, allowing for freedom in the choice of the rate. We show that contrary to the classical intuition, the rate for green codes is bounded away from zero for any given error probability. In fact, as the desired bit-error probability goes to zero, the optimizing rate for our bounds converges to 1.<|reference_end|> | arxiv | @article{grover2008green,
title={Green Codes: Energy-Efficient Short-Range Communication},
author={Pulkit Grover and Anant Sahai},
journal={arXiv preprint arXiv:0805.2423},
year={2008},
archivePrefix={arXiv},
eprint={0805.2423},
primaryClass={cs.IT math.IT}
} | grover2008green |
arxiv-3765 | 0805.2427 | On Trapping Sets and Guaranteed Error Correction Capability of LDPC Codes and GLDPC Codes | <|reference_start|>On Trapping Sets and Guaranteed Error Correction Capability of LDPC Codes and GLDPC Codes: The relation between the girth and the guaranteed error correction capability of $\gamma$-left regular LDPC codes when decoded using the bit flipping (serial and parallel) algorithms is investigated. A lower bound on the size of variable node sets which expand by a factor of at least $3 \gamma/4$ is found based on the Moore bound. An upper bound on the guaranteed error correction capability is established by studying the sizes of smallest possible trapping sets. The results are extended to generalized LDPC codes. It is shown that generalized LDPC codes can correct a linear fraction of errors under the parallel bit flipping algorithm when the underlying Tanner graph is a good expander. It is also shown that the bound cannot be improved when $\gamma$ is even by studying a class of trapping sets. A lower bound on the size of variable node sets which have the required expansion is established.<|reference_end|> | arxiv | @article{chilappagari2008on,
title={On Trapping Sets and Guaranteed Error Correction Capability of LDPC
Codes and GLDPC Codes},
author={Shashi Kiran Chilappagari, Dung Viet Nguyen, Bane Vasic, Michael W.
Marcellin},
journal={arXiv preprint arXiv:0805.2427},
year={2008},
doi={10.1109/TIT.2010.2040962},
archivePrefix={arXiv},
eprint={0805.2427},
primaryClass={cs.IT math.IT}
} | chilappagari2008on |
arxiv-3766 | 0805.2438 | Certified Exact Transcendental Real Number Computation in Coq | <|reference_start|>Certified Exact Transcendental Real Number Computation in Coq: Reasoning about real number expressions in a proof assistant is challenging. Several problems in theorem proving can be solved by using exact real number computation. I have implemented a library for reasoning and computing with complete metric spaces in the Coq proof assistant and used this library to build a constructive real number implementation including elementary real number functions and proofs of correctness. Using this library, I have created a tactic that automatically proves strict inequalities over closed elementary real number expressions by computation.<|reference_end|> | arxiv | @article{o'connor2008certified,
title={Certified Exact Transcendental Real Number Computation in Coq},
author={Russell O'Connor},
journal={O. Ait Mohamed, C. Munoz, and S. Tahar (Eds.): TPHOLs 2008, LNCS
5170, pp. 246-261, 2008},
year={2008},
doi={10.1007/978-3-540-71067-7_21},
archivePrefix={arXiv},
eprint={0805.2438},
primaryClass={cs.LO cs.MS cs.NA}
} | o'connor2008certified |
arxiv-3767 | 0805.2440 | Analysis of hydrocyclone performance based on information granulation theory | <|reference_start|>Analysis of hydrocyclone performance based on information granulation theory: This paper describes the application of information granulation theory to the analysis of hydrocyclone performance. Using a combination of a Self-Organizing Map (SOM) and a Neuro-Fuzzy Inference System (NFIS), crisp and fuzzy granules are obtained (briefly called SONFIS). Balancing of crisp granules and sub-fuzzy granules within non-fuzzy information (initial granulation) is carried out in an open-close iteration. Using two criteria, "simplicity of rules" and "adaptive threshold error level", stability of the algorithm is guaranteed. Validation of the proposed method on the hydrocyclone data set is presented.<|reference_end|> | arxiv | @article{owladeghaffari2008analysis,
title={Analysis of hydrocyclone performance based on information granulation
theory},
author={Hamed Owladeghaffari, Majid Ejtemaei, Mehdi Irannajad},
journal={arXiv preprint arXiv:0805.2440},
year={2008},
archivePrefix={arXiv},
eprint={0805.2440},
primaryClass={cs.AI}
} | owladeghaffari2008analysis |
arxiv-3768 | 0805.2537 | A toolkit for a generative lexicon | <|reference_start|>A toolkit for a generative lexicon: In this paper we describe the design of a software toolkit for the construction, maintenance and collaborative use of a Generative Lexicon. To ease its portability and widespread use, the tool was built with free and open source products. Finally, we tested the toolkit and showed that it filters the appropriate form of anaphoric reference to the modifier in endocentric compounds.<|reference_end|> | arxiv | @article{henry2008a,
title={A toolkit for a generative lexicon},
author={Patrick Henry, Christian Bassac (LaBRI)},
journal={In Fourth International Workshop on Generative Approaches to the
Lexicon, Paris, France (2007)},
year={2008},
archivePrefix={arXiv},
eprint={0805.2537},
primaryClass={cs.CL}
} | henry2008a |
arxiv-3769 | 0805.2620 | Algorithms for B\"uchi Games | <|reference_start|>Algorithms for B\"uchi Games: The classical algorithm for solving B\"uchi games requires time $O(n\cdot m)$ for game graphs with $n$ states and $m$ edges. For game graphs with constant outdegree, the best known algorithm has running time $O(n^2/\log n)$. We present two new algorithms for B\"uchi games. First, we give an algorithm that performs at most $O(m)$ more work than the classical algorithm, but runs in time $O(n)$ on infinitely many graphs of constant outdegree on which the classical algorithm requires time $O(n^2)$. Second, we give an algorithm with running time $O(n\cdot m\cdot\log\delta(n)/\log n)$, where $1\le\delta(n)\le n$ is the outdegree of the game graph. Note that this algorithm performs asymptotically better than the classical algorithm if $\delta(n)=O(\log n)$.<|reference_end|> | arxiv | @article{chatterjee2008algorithms,
title={Algorithms for B\"uchi Games},
author={Krishnendu Chatterjee, Thomas A. Henzinger and Nir Piterman},
journal={arXiv preprint arXiv:0805.2620},
year={2008},
archivePrefix={arXiv},
eprint={0805.2620},
primaryClass={cs.GT cs.LO}
} | chatterjee2008algorithms |
arxiv-3770 | 0805.2622 | Stochastic Limit-Average Games are in EXPTIME | <|reference_start|>Stochastic Limit-Average Games are in EXPTIME: The value of a finite-state two-player zero-sum stochastic game with limit-average payoff can be approximated to within $\epsilon$ in time exponential in a polynomial in the size of the game times a polynomial in $\log\frac{1}{\epsilon}$, for all $\epsilon>0$.<|reference_end|> | arxiv | @article{chatterjee2008stochastic,
title={Stochastic Limit-Average Games are in EXPTIME},
author={Krishnendu Chatterjee, Rupak Majumdar and Thomas A. Henzinger},
journal={arXiv preprint arXiv:0805.2622},
year={2008},
archivePrefix={arXiv},
eprint={0805.2622},
primaryClass={cs.GT}
} | chatterjee2008stochastic |
arxiv-3771 | 0805.2627 | Fast Monte Carlo Estimation of Timing Yield: Importance Sampling with Stochastic Logical Effort (ISLE) | <|reference_start|>Fast Monte Carlo Estimation of Timing Yield: Importance Sampling with Stochastic Logical Effort (ISLE): In the nano era in integrated circuit fabrication technologies, the performance variability due to statistical process and circuit parameter variations is becoming more and more significant. Considerable effort has been expended in the EDA community during the past several years in trying to cope with the so-called statistical timing problem. Most of this effort has been aimed at generalizing the static timing analyzers to the statistical case. In this paper, we take a pragmatic approach in pursuit of making the Monte Carlo method for timing yield estimation practically feasible. The Monte Carlo method is widely used as a golden reference in assessing the accuracy of other timing yield estimation techniques. However, it is generally believed that it cannot be used in practice for estimating timing yield as it requires too many costly full circuit simulations for acceptable accuracy. In this paper, we present a novel approach to constructing an improved Monte Carlo estimator for timing yield which provides the same accuracy as the standard Monte Carlo estimator, but at a cost of much fewer full circuit simulations. This improved estimator is based on a novel combination of a variance reduction technique, importance sampling, and a stochastic generalization of the logical effort formalism for cheap but approximate delay estimation. The results we present demonstrate that our improved yield estimator achieves the same accuracy as the standard Monte Carlo estimator at a cost reduction reaching several orders of magnitude.<|reference_end|> | arxiv | @article{bayrakci2008fast,
title={Fast Monte Carlo Estimation of Timing Yield: Importance Sampling with
Stochastic Logical Effort (ISLE)},
author={Alp Arslan Bayrakci, Alper Demir, Serdar Tasiran},
journal={arXiv preprint arXiv:0805.2627},
year={2008},
archivePrefix={arXiv},
eprint={0805.2627},
primaryClass={cs.OH}
} | bayrakci2008fast |
arxiv-3772 | 0805.2629 | On Full Diversity Space-Time Block Codes with Partial Interference Cancellation Group Decoding | <|reference_start|>On Full Diversity Space-Time Block Codes with Partial Interference Cancellation Group Decoding: In this paper, we propose a partial interference cancellation (PIC) group decoding for linear dispersive space-time block codes (STBC) and a design criterion for the codes to achieve full diversity when the PIC group decoding is used at the receiver. A PIC group decoding decodes the symbols embedded in an STBC by dividing them into several groups and decoding each group separately after a linear PIC operation is implemented. It can be viewed as an intermediate decoding between the maximum likelihood (ML) receiver that decodes all the embedded symbols together, i.e., all the embedded symbols are in a single group, and the zero-forcing (ZF) receiver that decodes all the embedded symbols separately and independently, i.e., each group has and only has one embedded symbol, after the ZF operation is implemented. Our proposed design criterion for the PIC group decoding to achieve full diversity is an intermediate condition between the loosest ML full rank criterion of codewords and the strongest ZF linear independence condition of the column vectors in the equivalent channel matrix. We also propose asymptotically optimal (AO) group decoding algorithm, which is an intermediate decoding between the MMSE decoding algorithm and the ML decoding algorithm. The design criterion for the PIC group decoding applies to the AO group decoding algorithm. It is well-known that the symbol rate for a full rank linear STBC can be full, i.e., n_t for n_t transmit antennas. It has been recently shown that its rate is upper bounded by 1 if a code achieves full diversity with a linear receiver. The intermediate criterion proposed in this paper provides the possibility for codes of rates between n_t and 1 that achieve full diversity with a PIC group decoding. 
This therefore provides a complexity-performance-rate tradeoff.<|reference_end|> | arxiv | @article{guo2008on,
title={On Full Diversity Space-Time Block Codes with Partial Interference
Cancellation Group Decoding},
author={Xiaoyong Guo and Xiang-Gen Xia},
journal={arXiv preprint arXiv:0805.2629},
year={2008},
archivePrefix={arXiv},
eprint={0805.2629},
primaryClass={cs.IT math.IT}
} | guo2008on |
arxiv-3773 | 0805.2630 | Sequential Design of Experiments via Linear Programming | <|reference_start|>Sequential Design of Experiments via Linear Programming: The celebrated multi-armed bandit problem in decision theory models the basic trade-off between exploration, or learning about the state of a system, and exploitation, or utilizing the system. In this paper we study the variant of the multi-armed bandit problem where the exploration phase involves costly experiments and occurs before the exploitation phase; and where each play of an arm during the exploration phase updates a prior belief about the arm. The problem of finding an inexpensive exploration strategy to optimize a certain exploitation objective is NP-Hard even when a single play reveals all information about an arm, and all exploration steps cost the same. We provide the first polynomial time constant-factor approximation algorithm for this class of problems. We show that this framework also generalizes several problems of interest studied in the context of data acquisition in sensor networks. Our analysis also extends to switching and setup costs, and to concave utility objectives. Our solution approach is via a novel linear program rounding technique based on stochastic packing. In addition to yielding exploration policies whose performance is within a small constant factor of the adaptive optimal policy, a nice feature of this approach is that the resulting policies explore the arms sequentially without revisiting any arm. Sequentiality is a well-studied concept in decision theory, and is very desirable in domains where multiple explorations can be conducted in parallel, for instance, in the sensor network context.<|reference_end|> | arxiv | @article{guha2008sequential,
title={Sequential Design of Experiments via Linear Programming},
author={Sudipto Guha and Kamesh Munagala},
journal={arXiv preprint arXiv:0805.2630},
year={2008},
archivePrefix={arXiv},
eprint={0805.2630},
primaryClass={cs.DS}
} | guha2008sequential |
arxiv-3774 | 0805.2641 | On the Capacity of the Diamond Half-Duplex Relay Channel | <|reference_start|>On the Capacity of the Diamond Half-Duplex Relay Channel: We consider a diamond-shaped dual-hop communication system consisting of a source, two parallel half-duplex relays and a destination. In a single antenna configuration, it has been previously shown that a two-phase node-scheduling algorithm, along with the decode-and-forward strategy, can achieve the capacity of the diamond channel for certain symmetric channel gains [1]. In this paper, we obtain a more general condition for the optimality of the scheme in terms of power resources and channel gains. In particular, it is proved that if the product of the capacities of the simultaneously active links is the same in both transmission phases, the scheme achieves the capacity of the channel.<|reference_end|> | arxiv | @article{bagheri2008on,
title={On the Capacity of the Diamond Half-Duplex Relay Channel},
author={Hossein Bagheri, Abolfazl S. Motahari, and Amir K. Khandani},
journal={arXiv preprint arXiv:0805.2641},
year={2008},
archivePrefix={arXiv},
eprint={0805.2641},
primaryClass={cs.IT math.IT}
} | bagheri2008on |
arxiv-3775 | 0805.2646 | Small Approximate Pareto Sets for Bi-objective Shortest Paths and Other Problems | <|reference_start|>Small Approximate Pareto Sets for Bi-objective Shortest Paths and Other Problems: We investigate the problem of computing a minimum set of solutions that approximates within a specified accuracy $\epsilon$ the Pareto curve of a multiobjective optimization problem. We show that for a broad class of bi-objective problems (containing many important widely studied problems such as shortest paths, spanning tree, and many others), we can compute in polynomial time an $\epsilon$-Pareto set that contains at most twice as many solutions as the minimum such set. Furthermore we show that the factor of 2 is tight for these problems, i.e., it is NP-hard to do better. We present upper and lower bounds for three or more objectives, as well as for the dual problem of computing a specified number $k$ of solutions which provide a good approximation to the Pareto curve.<|reference_end|> | arxiv | @article{diakonikolas2008small,
title={Small Approximate Pareto Sets for Bi-objective Shortest Paths and Other
Problems},
author={Ilias Diakonikolas, Mihalis Yannakakis},
journal={arXiv preprint arXiv:0805.2646},
year={2008},
archivePrefix={arXiv},
eprint={0805.2646},
primaryClass={cs.DS}
} | diakonikolas2008small |
arxiv-3776 | 0805.2671 | Finger Indexed Sets: New Approaches | <|reference_start|>Finger Indexed Sets: New Approaches: In the particular case where insertions/deletions occur at the tail of a given set S of $n$ one-dimensional elements, we present a simpler and more concrete algorithm than that presented in [Anderson, 2007], achieving the same (but amortized) upper bound of $O(\sqrt{\log d/\log\log d})$ for finger searching queries, where $d$ is the number of sorted keys between the finger element and the target element we are looking for. Furthermore, in the general case, where insertions/deletions may occur anywhere, we present a new randomized algorithm achieving the same expected time bounds. Even though the new solutions achieve the optimal bounds only in the amortized or expected case, their simplicity is of great importance due to the practical merits we gain.<|reference_end|> | arxiv | @article{sioutas2008finger,
title={Finger Indexed Sets: New Approaches},
author={Spyros Sioutas},
journal={arXiv preprint arXiv:0805.2671},
year={2008},
archivePrefix={arXiv},
eprint={0805.2671},
primaryClass={cs.DS cs.DB}
} | sioutas2008finger |
arxiv-3777 | 0805.2675 | MAPEL: Achieving Global Optimality for a Non-convex Wireless Power Control Problem | <|reference_start|>MAPEL: Achieving Global Optimality for a Non-convex Wireless Power Control Problem: Achieving weighted throughput maximization (WTM) through power control has been a long-standing open problem in interference-limited wireless networks. The complicated coupling between the mutual interferences of links gives rise to a non-convex optimization problem. Previous work has considered the WTM problem in the high signal to interference-and-noise ratio (SINR) regime, where the problem can be approximated and transformed into a convex optimization problem through proper change of variables. In the general SINR regime, however, the approximation and transformation approach does not work. This paper proposes an algorithm, MAPEL, which globally converges to a global optimal solution of the WTM problem in the general SINR regime. The MAPEL algorithm is designed based on three key observations of the WTM problem: (1) the objective function is monotonically increasing in SINR, (2) the objective function can be transformed into a product of exponentiated linear fraction functions, and (3) the feasible set of the equivalent transformed problem is always normal although not necessarily convex. The MAPEL algorithm finds the desired optimal power control solution by constructing a series of polyblocks that approximate the feasible SINR region with increasing precision. Furthermore, by tuning the approximation factor in MAPEL, we could engineer a desirable tradeoff between optimality and convergence time. MAPEL provides an important benchmark for performance evaluation of other heuristic algorithms targeting the same problem. With the help of MAPEL, we evaluate the performance of several such algorithms through extensive simulations.<|reference_end|> | arxiv | @article{qian2008mapel:,
title={MAPEL: Achieving Global Optimality for a Non-convex Wireless Power
Control Problem},
author={Liping Qian, Ying Jun Zhang, Jianwei Huang},
journal={arXiv preprint arXiv:0805.2675},
year={2008},
archivePrefix={arXiv},
eprint={0805.2675},
primaryClass={cs.NI cs.NA}
} | qian2008mapel: |
arxiv-3778 | 0805.2681 | Canonical polygon Queries on the plane: a New Approach | <|reference_start|>Canonical polygon Queries on the plane: a New Approach: The polygon retrieval problem on points is the problem of preprocessing a set of $n$ points on the plane, so that given a polygon query, the subset of points lying inside it can be reported efficiently. It is of great interest in areas such as Computer Graphics, CAD applications, Spatial Databases and GIS development tasks. In this paper we study the problem of canonical $k$-vertex polygon queries on the plane. A canonical $k$-vertex polygon query always meets the following specific property: a point retrieval query can be transformed into a linear number (with respect to the number of vertices) of point retrievals for orthogonal objects such as rectangles and triangles (throughout this work we call a triangle orthogonal iff two of its edges are axis-parallel). We present two new algorithms for this problem. The first one requires $O(n\log^2{n})$ space and $O(k\frac{\log^3 n}{\log\log n}+A)$ query time. A simple modification scheme on the first algorithm leads us to a second solution, which consumes $O(n^2)$ space and $O(k \frac{\log n}{\log\log n}+A)$ query time, where $A$ denotes the size of the answer and $k$ is the number of vertices. The best previous solution for the general polygon retrieval problem uses $O(n^2)$ space and answers a query in $O(k\log{n}+A)$ time, where $k$ is the number of vertices. It is also very complicated and difficult to implement in a standard imperative programming language such as C or C++.<|reference_end|> | arxiv | @article{sioutas2008canonical,
title={Canonical polygon Queries on the plane: a New Approach},
author={Spyros Sioutas, Dimitrios Sofotassios, Kostas Tsichlas, Dimitrios
Sotiropoulos, Panayiotis Vlamos},
journal={arXiv preprint arXiv:0805.2681},
year={2008},
archivePrefix={arXiv},
eprint={0805.2681},
primaryClass={cs.CG cs.DS}
} | sioutas2008canonical |
arxiv-3779 | 0805.2684 | Assessing Random Dynamical Network Architectures for Nanoelectronics | <|reference_start|>Assessing Random Dynamical Network Architectures for Nanoelectronics: Independent of the technology, it is generally expected that future nanoscale devices will be built from vast numbers of densely arranged devices that exhibit high failure rates. Other than that, there is little consensus on what type of technology and computing architecture holds the most promise to go far beyond today's top-down engineered silicon devices. Cellular automata (CA) have been proposed in the past as a possible class of alternative architectures to the von Neumann computing architecture, which is not generally well suited for future parallel and fine-grained nanoscale electronics. While the top-down engineered semi-conducting technology favors regular and locally interconnected structures, future bottom-up self-assembled devices tend to have irregular structures because of the current lack of precise control over these processes. In this paper, we will assess random dynamical networks, namely Random Boolean Networks (RBNs) and Random Threshold Networks (RTNs), as alternative computing architectures and models for future information processing devices. We will illustrate that--from a theoretical perspective--they offer superior properties over classical CA-based architectures, such as inherent robustness as the system scales up, more efficient information processing capabilities, and manufacturing benefits for bottom-up designed devices, which motivates this investigation. We will present recent results on the dynamic behavior and robustness of such random dynamical networks while also including manufacturing issues in the assessment.<|reference_end|> | arxiv | @article{teuscher2008assessing,
title={Assessing Random Dynamical Network Architectures for Nanoelectronics},
author={Christof Teuscher, Natali Gulbahce, Thimo Rohlf},
journal={arXiv preprint arXiv:0805.2684},
year={2008},
doi={10.1109/NANOARCH.2008.4585787},
number={LA-UR-08-2190},
archivePrefix={arXiv},
eprint={0805.2684},
primaryClass={cs.AR cond-mat.dis-nn nlin.CG}
} | teuscher2008assessing |
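The synchronous Random Boolean Network update assessed in the abstract above can be sketched in a few lines. This is a minimal illustration only; the node count, connectivity K, and random seed are arbitrary choices and are not taken from the paper's experiments.

```python
# Minimal sketch of a Random Boolean Network (RBN): each node has K randomly
# chosen input nodes and a random Boolean function (truth table), and all
# nodes update synchronously.  Parameters here are illustrative only.
import random
from itertools import product

def random_boolean_network(n, k, rng):
    """Draw k random input wires and a random truth table for each of n nodes."""
    inputs = [tuple(rng.randrange(n) for _ in range(k)) for _ in range(n)]
    tables = [{bits: rng.randrange(2) for bits in product((0, 1), repeat=k)}
              for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """One synchronous update: every node reads its inputs simultaneously."""
    return tuple(tables[i][tuple(state[j] for j in inputs[i])]
                 for i in range(len(state)))
```

Iterating `step` from a random initial state traces the network's trajectory into an attractor, which is the kind of dynamic behavior studied in such robustness assessments.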
arxiv-3780 | 0805.2690 | Increasing Linear Dynamic Range of Commercial Digital Photocamera Used in Imaging Systems with Optical Coding | <|reference_start|>Increasing Linear Dynamic Range of Commercial Digital Photocamera Used in Imaging Systems with Optical Coding: Methods of increasing the linear optical dynamic range of a commercial photocamera for optical-digital imaging systems are described. Use of such methods makes it possible to use commercial photocameras for optical measurements. Experimental results are reported.<|reference_end|> | arxiv | @article{konnik2008increasing,
title={Increasing Linear Dynamic Range of Commercial Digital Photocamera Used
in Imaging Systems with Optical Coding},
author={M.V. Konnik, E.A. Manykin, S.N. Starikov},
journal={arXiv preprint arXiv:0805.2690},
year={2008},
archivePrefix={arXiv},
eprint={0805.2690},
primaryClass={cs.CV}
} | konnik2008increasing |
arxiv-3781 | 0805.2691 | Equivalent characterizations of partial randomness for a recursively enumerable real | <|reference_start|>Equivalent characterizations of partial randomness for a recursively enumerable real: A real number \alpha is called recursively enumerable if there exists a computable, increasing sequence of rational numbers which converges to \alpha. The randomness of a recursively enumerable real \alpha can be characterized in various ways using each of the following notions: program-size complexity, Martin-L\"{o}f test, Chaitin's \Omega number, the domination and \Omega-likeness of \alpha, the universality of a computable, increasing sequence of rational numbers which converges to \alpha, and universal probability. In this paper, we generalize these characterizations of randomness over the notion of partial randomness by parameterizing each of the notions above by a real number T\in(0,1]. We thus present several equivalent characterizations of partial randomness for a recursively enumerable real number.<|reference_end|> | arxiv | @article{tadaki2008equivalent,
title={Equivalent characterizations of partial randomness for a recursively
enumerable real},
author={Kohtaro Tadaki},
journal={arXiv preprint arXiv:0805.2691},
year={2008},
archivePrefix={arXiv},
eprint={0805.2691},
primaryClass={cs.IT cs.CC math.IT math.LO}
} | tadaki2008equivalent |
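The definition opening the abstract above — a real is recursively enumerable if some computable, increasing sequence of rationals converges to it — can be made concrete with a small example. This is an illustration of the definition only (the real chosen here, e, is in fact computable, which is stronger than recursively enumerable); the function name is ours.

```python
# Illustrative witness for the definition of a recursively enumerable real:
# the partial sums of sum_{k>=0} 1/k! form a computable, strictly increasing
# sequence of rational numbers converging to e.
from fractions import Fraction

def partial_sums(n_terms):
    """Return the first n_terms partial sums of sum_{k>=0} 1/k! as exact rationals."""
    s, term, out = Fraction(0), Fraction(1), []
    for k in range(n_terms):
        s += term          # add 1/k!
        out.append(s)
        term /= k + 1      # next factorial reciprocal
    return out
```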
arxiv-3782 | 0805.2701 | An authentication scheme based on the twisted conjugacy problem | <|reference_start|>An authentication scheme based on the twisted conjugacy problem: The conjugacy search problem in a group $G$ is the problem of recovering an $x \in G$ from given $g \in G$ and $h=x^{-1}gx$. The alleged computational hardness of this problem in some groups was used in several recently suggested public key exchange protocols, including the one due to Anshel, Anshel, and Goldfeld, and the one due to Ko, Lee et al. Sibert, Dehornoy, and Girault used this problem in their authentication scheme, which was inspired by the Fiat-Shamir scheme involving repeating several times a three-pass challenge-response step. In this paper, we offer an authentication scheme whose security is based on the apparent hardness of the twisted conjugacy search problem, which is: given a pair of endomorphisms (i.e., homomorphisms of the group into itself) \phi, \psi of a group G and a pair of elements w, t \in G, find an element s \in G such that t = \psi(s^{-1}) w \phi(s) provided at least one such s exists. This problem appears to be very non-trivial even for free groups. We offer here another platform, namely, the semigroup of all 2x2 matrices over truncated one-variable polynomials over F_2, the field of two elements, with transposition used instead of inversion in the equality above.<|reference_end|> | arxiv | @article{shpilrain2008an,
title={An authentication scheme based on the twisted conjugacy problem},
author={Vladimir Shpilrain and Alexander Ushakov},
journal={ACNS 2008, Lecture Notes Comp. Sc. 5037 (2008), 366-372},
year={2008},
archivePrefix={arXiv},
eprint={0805.2701},
primaryClass={math.GR cs.CR}
} | shpilrain2008an |
arxiv-3783 | 0805.2705 | Three-dimensional Random Voronoi Tessellations: From Cubic Crystal Lattices to Poisson Point Processes | <|reference_start|>Three-dimensional Random Voronoi Tessellations: From Cubic Crystal Lattices to Poisson Point Processes: We perturb the SC, BCC, and FCC crystal structures with a spatial Gaussian noise whose adimensional strength is controlled by the parameter a, and analyze the topological and metrical properties of the resulting Voronoi Tessellations (VT). The topological properties of the VT of the SC and FCC crystals are unstable with respect to the introduction of noise, because the corresponding polyhedra are geometrically degenerate, whereas the tessellation of the BCC crystal is topologically stable even against noise of small but finite intensity. For weak noise, the mean area of the perturbed BCC and FCC crystals' VT increases quadratically with a. In the case of perturbed SC crystals, there is an optimal amount of noise that minimizes the mean area of the cells. Already for a moderate noise (a>0.5), the properties of the three perturbed VT are indistinguishable, and for intense noise (a>2), results converge to the Poisson-VT limit. Notably, 2-parameter gamma distributions are an excellent model for the empirical distributions of all considered properties. The VT of the perturbed BCC and FCC structures are local maxima for the isoperimetric quotient, which measures the degree of sphericity of the cells, among space-filling VT. In the BCC case, this suggests a weaker form of the recently disproved Kelvin conjecture. Due to the fluctuations of the shape of the cells, anomalous scalings with exponents >3/2 are observed between the areas and the volumes of the cells, and, except for the FCC case, also for a->0. In the Poisson-VT limit, the exponent is about 1.67. As the number of faces is positively correlated with the sphericity of the cells, the anomalous scaling is heavily reduced when we perform power-law fits separately on cells with a specific number of faces.<|reference_end|> | arxiv | @article{lucarini2008three-dimensional,
title={Three-dimensional Random Voronoi Tessellations: From Cubic Crystal
Lattices to Poisson Point Processes},
author={Valerio Lucarini},
journal={J. Stat. Phys., 134, 185-206 (2009)},
year={2008},
doi={10.1007/s10955-008-9668-y},
archivePrefix={arXiv},
eprint={0805.2705},
primaryClass={cond-mat.stat-mech cond-mat.dis-nn cond-mat.other cs.CG math-ph math.MP nlin.PS physics.data-an}
} | lucarini2008three-dimensional |
arxiv-3784 | 0805.2739 | Innovation in Scholarly Communication: Vision and Projects from High-Energy Physics | <|reference_start|>Innovation in Scholarly Communication: Vision and Projects from High-Energy Physics: Having always been at the forefront of information management and open access, High-Energy Physics (HEP) proves to be an ideal test-bed for innovations in scholarly communication including new information and communication technologies. Three selected topics of scholarly communication in High-Energy Physics are presented here: A new open access business model, SCOAP3, a world-wide sponsoring consortium for peer-reviewed HEP literature; the design, development and deployment of an e-infrastructure for information management; and the emerging debate on long-term preservation, re-use and (open) access to HEP data.<|reference_end|> | arxiv | @article{heuer2008innovation,
title={Innovation in Scholarly Communication: Vision and Projects from
High-Energy Physics},
author={Rolf-Dieter Heuer, Annette Holtkamp, Salvatore Mele},
journal={Inform.Serv.Use 28:83-96,2008},
year={2008},
doi={10.3233/ISU-2008-0570},
archivePrefix={arXiv},
eprint={0805.2739},
primaryClass={cs.DL}
} | heuer2008innovation |
arxiv-3785 | 0805.2749 | State and history in operating systems | <|reference_start|>State and history in operating systems: A method of using recursive functions to describe state change is applied to process switching in UNIX-like operating systems.<|reference_end|> | arxiv | @article{yodaiken2008state,
title={State and history in operating systems},
author={Victor Yodaiken},
journal={arXiv preprint arXiv:0805.2749},
year={2008},
archivePrefix={arXiv},
eprint={0805.2749},
primaryClass={cs.SE cs.DC}
} | yodaiken2008state |
arxiv-3786 | 0805.2752 | The Margitron: A Generalised Perceptron with Margin | <|reference_start|>The Margitron: A Generalised Perceptron with Margin: We identify the classical Perceptron algorithm with margin as a member of a broader family of large margin classifiers which we collectively call the Margitron. The Margitron, despite sharing the same update rule with the Perceptron, is shown in an incremental setting to converge in a finite number of updates to solutions possessing any desirable fraction of the maximum margin. Experiments comparing the Margitron with decomposition SVMs on tasks involving linear kernels and 2-norm soft margin are also reported.<|reference_end|> | arxiv | @article{panagiotakopoulos2008the,
title={The Margitron: A Generalised Perceptron with Margin},
author={Constantinos Panagiotakopoulos and Petroula Tsampouka},
journal={arXiv preprint arXiv:0805.2752},
year={2008},
archivePrefix={arXiv},
eprint={0805.2752},
primaryClass={cs.LG}
} | panagiotakopoulos2008the |
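Since the Margitron shares its update rule with the classical Perceptron with margin, that shared rule can be sketched as follows. This is a minimal illustration of the classical algorithm, not the authors' implementation: the fixed margin threshold, learning rate, and epoch cap are arbitrary assumptions.

```python
# Sketch of the classical Perceptron with margin: an update is triggered
# whenever a point's functional margin y * (w . x) falls below a threshold,
# not only when the point is misclassified.

def perceptron_with_margin(samples, margin=1.0, eta=1.0, epochs=100):
    """samples: list of (x, y) with x a tuple of floats and y in {-1, +1}."""
    dim = len(samples[0][0])
    w = [0.0] * dim
    for _ in range(epochs):
        updated = False
        for x, y in samples:
            score = sum(wi * xi for wi, xi in zip(w, x))
            if y * score <= margin:          # margin violation -> update
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                updated = True
        if not updated:                       # every point clears the margin
            break
    return w
```

On linearly separable data the loop terminates after a finite number of updates with a separator whose functional margin exceeds the threshold on every training point.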
arxiv-3787 | 0805.2775 | Sample Selection Bias Correction Theory | <|reference_start|>Sample Selection Bias Correction Theory: This paper presents a theoretical analysis of sample selection bias correction. The sample bias correction technique commonly used in machine learning consists of reweighting the cost of an error on each training point of a biased sample to more closely reflect the unbiased distribution. This relies on weights derived by various estimation techniques based on finite samples. We analyze the effect of an error in that estimation on the accuracy of the hypothesis returned by the learning algorithm for two estimation techniques: a cluster-based estimation technique and kernel mean matching. We also report the results of sample bias correction experiments with several data sets using these techniques. Our analysis is based on the novel concept of distributional stability which generalizes the existing concept of point-based stability. Much of our work and proof techniques can be used to analyze other importance weighting techniques and their effect on accuracy when using a distributionally stable algorithm.<|reference_end|> | arxiv | @article{cortes2008sample,
title={Sample Selection Bias Correction Theory},
author={Corinna Cortes, Mehryar Mohri, Michael Riley, Afshin Rostamizadeh},
journal={arXiv preprint arXiv:0805.2775},
year={2008},
archivePrefix={arXiv},
eprint={0805.2775},
primaryClass={cs.LG}
} | cortes2008sample |
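The reweighting idea analyzed in the abstract above can be illustrated with a toy frequency-ratio version of the cluster-based estimation technique. This is a simplified sketch under our own assumptions (known population cluster counts, discrete clusters), not the paper's estimator.

```python
# Toy sketch of sample selection bias correction by reweighting: each training
# point is weighted by (population frequency of its cluster) / (sample
# frequency of its cluster), so the weighted sample mimics the unbiased
# distribution over clusters.
from collections import Counter

def bias_correction_weights(sample, population_counts):
    """sample: list of cluster labels of the biased training points.
    population_counts: dict mapping cluster label -> unbiased count."""
    sample_counts = Counter(sample)
    n = len(sample)
    pop_total = sum(population_counts.values())
    weights = {}
    for c, cnt in sample_counts.items():
        p_pop = population_counts[c] / pop_total   # unbiased cluster frequency
        p_samp = cnt / n                           # biased cluster frequency
        weights[c] = p_pop / p_samp
    return [weights[c] for c in sample]
```

Plugging these weights into a weighted training loss is the correction step whose sensitivity to estimation error the paper analyzes.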
arxiv-3788 | 0805.2785 | Proof Search Specifications of Bisimulation and Modal Logics for the pi-Calculus | <|reference_start|>Proof Search Specifications of Bisimulation and Modal Logics for the pi-Calculus: We specify the operational semantics and bisimulation relations for the finite pi-calculus within a logic that contains the nabla quantifier for encoding generic judgments and definitions for encoding fixed points. Since we restrict to the finite case, the ability of the logic to unfold fixed points allows this logic to be complete for both the inductive nature of operational semantics and the coinductive nature of bisimulation. The nabla quantifier helps with the delicate issues surrounding the scope of variables within pi-calculus expressions and their executions (proofs). We illustrate several merits of the logical specifications permitted by this logic: they are natural and declarative; they contain no side-conditions concerning names of variables while maintaining a completely formal treatment of such variables; differences between late and open bisimulation relations arise from familiar logic distinctions; the interplay between the three quantifiers (for all, exists, and nabla) and their scopes can explain the differences between early and late bisimulation and between various modal operators based on bound input and output actions; and proof search involving the application of inference rules, unification, and backtracking can provide complete proof systems for one-step transitions, bisimulation, and satisfaction in modal logic. We also illustrate how one can encode the pi-calculus with replications, in an extended logic with induction and co-induction.<|reference_end|> | arxiv | @article{tiu2008proof,
title={Proof Search Specifications of Bisimulation and Modal Logics for the
pi-Calculus},
author={Alwen Tiu and Dale Miller},
journal={arXiv preprint arXiv:0805.2785},
year={2008},
archivePrefix={arXiv},
eprint={0805.2785},
primaryClass={cs.LO}
} | tiu2008proof |
arxiv-3789 | 0805.2797 | Young's axiomatization of the Shapley value - a new proof | <|reference_start|>Young's axiomatization of the Shapley value - a new proof: We consider Young (1985)'s characterization of the Shapley value, and give a new proof of this axiomatization. Moreover, as applications of the new proof, we show that Young (1985)'s axiomatization of the Shapley value works on various well-known subclasses of TU games.<|reference_end|> | arxiv | @article{pinter2008young's,
title={Young's axiomatization of the Shapley value - a new proof},
author={M. Pinter},
journal={arXiv preprint arXiv:0805.2797},
year={2008},
archivePrefix={arXiv},
eprint={0805.2797},
primaryClass={cs.GT}
} | pinter2008young's |
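For reference, the Shapley value characterized by the axiomatization above can be computed directly as a player's average marginal contribution over all orderings of the players. This brute-force sketch (our illustration, practical only for small games) uses exact rational arithmetic.

```python
# Brute-force Shapley value of a TU game: average each player's marginal
# contribution v(S u {p}) - v(S) over all n! player orderings.
from itertools import permutations
from fractions import Fraction

def shapley(players, v):
    """players: list of hashable player ids; v: maps frozenset -> number."""
    total = {p: Fraction(0) for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            total[p] += Fraction(v(frozenset(coalition)) - before)
    n_fact = 1
    for k in range(2, len(players) + 1):
        n_fact *= k
    return {p: total[p] / n_fact for p in players}
```

On the unanimity game of players 1 and 2, for example, symmetry and efficiency force the value (1/2, 1/2, 0), which the enumeration reproduces.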
arxiv-3790 | 0805.2812 | Codeword-Independent Performance of Nonbinary Linear Codes Under Linear-Programming and Sum-Product Decoding | <|reference_start|>Codeword-Independent Performance of Nonbinary Linear Codes Under Linear-Programming and Sum-Product Decoding: A coded modulation system is considered in which nonbinary coded symbols are mapped directly to nonbinary modulation signals. It is proved that if the modulator-channel combination satisfies a particular symmetry condition, the codeword error rate performance is independent of the transmitted codeword. It is shown that this result holds for both linear-programming decoders and sum-product decoders. In particular, this provides a natural modulation mapping for nonbinary codes mapped to PSK constellations for transmission over memoryless channels such as AWGN channels or flat fading channels with AWGN.<|reference_end|> | arxiv | @article{flanagan2008codeword-independent,
title={Codeword-Independent Performance of Nonbinary Linear Codes Under
Linear-Programming and Sum-Product Decoding},
author={Mark F. Flanagan},
journal={arXiv preprint arXiv:0805.2812},
year={2008},
doi={10.1109/ISIT.2008.4595238},
archivePrefix={arXiv},
eprint={0805.2812},
primaryClass={cs.IT math.IT}
} | flanagan2008codeword-independent |
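The symmetry condition behind codeword-independent performance can be illustrated with a direct symbol-to-PSK mapping of the kind the abstract above describes. The specific mapping k -> e^{2*pi*i*k/q} is our illustrative choice of the natural q-ary-to-q-PSK map: adding a constant c (mod q) to every coded symbol rotates all signal points by the same fixed angle, so the transmitted signal geometry does not depend on which codeword was sent.

```python
# Sketch: map q-ary symbols directly onto a q-PSK constellation and check the
# rotational symmetry that underlies codeword independence.
import cmath

def psk_point(symbol, q):
    """Map a symbol in {0, ..., q-1} to a unit-energy q-PSK signal point."""
    return cmath.exp(2j * cmath.pi * symbol / q)
```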
arxiv-3791 | 0805.2854 | Network QoS Management in Cyber-Physical Systems | <|reference_start|>Network QoS Management in Cyber-Physical Systems: Technical advances in ubiquitous sensing, embedded computing, and wireless communication are leading to a new generation of engineered systems called cyber-physical systems (CPS). CPS promises to transform the way we interact with the physical world just as the Internet transformed how we interact with one another. Before this vision becomes a reality, however, a large number of challenges have to be addressed. Network quality of service (QoS) management in this new realm is among those issues that deserve extensive research efforts. It is envisioned that wireless sensor/actuator networks (WSANs) will play an essential role in CPS. This paper examines the main characteristics of WSANs and the requirements of QoS provisioning in the context of cyber-physical computing. Several research topics and challenges are identified. As a sample solution, a feedback scheduling framework is proposed to tackle some of the identified challenges. A simple example is also presented that illustrates the effectiveness of the proposed solution.<|reference_end|> | arxiv | @article{xia2008network,
title={Network QoS Management in Cyber-Physical Systems},
author={Feng Xia, Longhua Ma, Jinxiang Dong and Youxian Sun},
journal={arXiv preprint arXiv:0805.2854},
year={2008},
archivePrefix={arXiv},
eprint={0805.2854},
primaryClass={cs.NI cs.DC}
} | xia2008network |
arxiv-3792 | 0805.2855 | LCSH, SKOS and Linked Data | <|reference_start|>LCSH, SKOS and Linked Data: A technique for converting Library of Congress Subject Headings MARCXML to Simple Knowledge Organization System (SKOS) RDF is described. Strengths of the SKOS vocabulary are highlighted, as well as possible points for extension, and the integration of other semantic web vocabularies such as Dublin Core. An application for making the vocabulary available as linked-data on the Web is also described.<|reference_end|> | arxiv | @article{summers2008lcsh,,
title={LCSH, SKOS and Linked Data},
author={Ed Summers, Antoine Isaac, Clay Redding, Dan Krech},
journal={Web Semantics: Science, Services and Agents on the World Wide Web,
Volume 20, May 2013, Pages 35-49, ISSN 1570-8268},
year={2008},
doi={10.1016/j.websem.2013.05.001},
archivePrefix={arXiv},
eprint={0805.2855},
primaryClass={cs.DL cs.IR}
} | summers2008lcsh, |
arxiv-3793 | 0805.2864 | Fusion d'images: application au contr\^ole de la distribution des biopsies prostatiques | <|reference_start|>Fusion d'images: application au contr\^ole de la distribution des biopsies prostatiques: This paper is about the application of a 3D ultrasound data fusion technique to the 3D reconstruction of prostate biopsies in a reference volume. The method is introduced and its evaluation on a series of data coming from 15 patients is described.<|reference_end|> | arxiv | @article{mozer2008fusion,
title={Fusion d'images: application au contr\^ole de la distribution des
biopsies prostatiques},
author={Pierre Mozer (TIMC), Michael Baumann (TIMC), G. Chevreau (TIMC),
Jocelyne Troccaz (TIMC)},
journal={Progr\`es en Urologie 18, 1 (2008) F15-F18},
year={2008},
archivePrefix={arXiv},
eprint={0805.2864},
primaryClass={cs.OH}
} | mozer2008fusion |
arxiv-3794 | 0805.2891 | Learning Low-Density Separators | <|reference_start|>Learning Low-Density Separators: We define a novel, basic, unsupervised learning problem - learning the lowest density homogeneous hyperplane separator of an unknown probability distribution. This task is relevant to several problems in machine learning, such as semi-supervised learning and clustering stability. We investigate the question of existence of a universally consistent algorithm for this problem. We propose two natural learning paradigms and prove that, on input unlabeled random samples generated by any member of a rich family of distributions, they are guaranteed to converge to the optimal separator for that distribution. We complement this result by showing that no learning algorithm for our task can achieve uniform learning rates (that are independent of the data generating distribution).<|reference_end|> | arxiv | @article{ben-david2008learning,
title={Learning Low-Density Separators},
author={Shai Ben-David, Tyler Lu, David Pal, Miroslava Sotakova},
journal={arXiv preprint arXiv:0805.2891},
year={2008},
archivePrefix={arXiv},
eprint={0805.2891},
primaryClass={cs.LG cs.AI}
} | ben-david2008learning |
arxiv-3795 | 0805.2938 | Steganography of VoIP Streams | <|reference_start|>Steganography of VoIP Streams: The paper concerns available steganographic techniques that can be used for creating covert channels for VoIP (Voice over Internet Protocol) streams. Apart from characterizing existing steganographic methods we provide new insights by presenting two new techniques. The first one is a network steganography solution which exploits free/unused protocol fields; it is known for the IP, UDP and TCP protocols but has never been applied to RTP (Real-Time Transport Protocol) and RTCP (Real-Time Control Protocol), which are characteristic of VoIP. The second method, called LACK (Lost Audio Packets Steganography), provides a hybrid storage-timing covert channel by utilizing delayed audio packets. The results of the experiment, which was performed to estimate the total amount of data that can be covertly transferred during a typical VoIP conversation phase, regardless of steganalysis, are also included in this paper.<|reference_end|> | arxiv | @article{mazurczyk2008steganography,
title={Steganography of VoIP Streams},
author={Wojciech Mazurczyk and Krzysztof Szczypiorski},
journal={arXiv preprint arXiv:0805.2938},
year={2008},
archivePrefix={arXiv},
eprint={0805.2938},
primaryClass={cs.MM cs.CR}
} | mazurczyk2008steganography |
arxiv-3796 | 0805.2949 | Performability Aspects of the Atlas Vo; Using Lmbench Suite | <|reference_start|>Performability Aspects of the Atlas Vo; Using Lmbench Suite: The ATLAS Virtual Organization is the grid's largest Virtual Organization and is currently in full production stage. We make the case that a user working within that VO will face a wide spectrum of different systems, whose heterogeneity is enough to count as "orders of magnitude" according to a number of metrics, including integer/float operations, memory throughput (STREAM) and communication latencies. Furthermore, the spread of performance does not appear to follow any known distribution pattern, as demonstrated in graphs produced from May 2007 measurements. This implies that the current practice, whether the "all-WNs-are-equal" assumption or the alternative SPEC-based rating used by LCG/EGEE, is an oversimplification that is inappropriate and expensive from an operational point of view; therefore, new techniques are needed for optimal grid resource allocation.<|reference_end|> | arxiv | @article{georgatos2008performability,
title={Performability Aspects of the Atlas Vo; Using Lmbench Suite},
author={Fotis Georgatos, John Kouvakis, John Kouretis},
journal={arXiv preprint arXiv:0805.2949},
year={2008},
archivePrefix={arXiv},
eprint={0805.2949},
primaryClass={cs.PF cs.CE cs.DC}
} | georgatos2008performability |
arxiv-3797 | 0805.2995 | Lossless Compression with Security Constraints | <|reference_start|>Lossless Compression with Security Constraints: Secure distributed data compression in the presence of an eavesdropper is explored. Two correlated sources that need to be reliably transmitted to a legitimate receiver are available at separate encoders. Noise-free, limited rate links from the encoders to the legitimate receiver, one of which can also be perfectly observed by the eavesdropper, are considered. The eavesdropper also has its own correlated observation. Inner and outer bounds on the achievable compression-equivocation rate region are given. Several different scenarios involving the side information at the transmitters as well as multiple receivers/eavesdroppers are also considered.<|reference_end|> | arxiv | @article{gunduz2008lossless,
title={Lossless Compression with Security Constraints},
author={Deniz Gunduz, Elza Erkip, H. Vincent Poor},
journal={arXiv preprint arXiv:0805.2995},
year={2008},
doi={10.1109/ISIT.2008.4594958},
archivePrefix={arXiv},
eprint={0805.2995},
primaryClass={cs.IT math.IT}
} | gunduz2008lossless |
arxiv-3798 | 0805.2996 | Lossy Source Transmission over the Relay Channel | <|reference_start|>Lossy Source Transmission over the Relay Channel: Lossy transmission over a relay channel in which the relay has access to correlated side information is considered. First, a joint source-channel decode-and-forward scheme is proposed for general discrete memoryless sources and channels. Then the Gaussian relay channel where the source and the side information are jointly Gaussian is analyzed. For this Gaussian model, several new source-channel cooperation schemes are introduced and analyzed in terms of the squared-error distortion at the destination. A comparison of the proposed upper bounds with the cut-set lower bound is given, and it is seen that joint source-channel cooperation improves the reconstruction quality significantly. Moreover, the performance of the joint code is close to the lower bound on distortion for a wide range of source and channel parameters.<|reference_end|> | arxiv | @article{gunduz2008lossy,
title={Lossy Source Transmission over the Relay Channel},
author={Deniz Gunduz, Elza Erkip, Andrea J. Goldsmith, H. Vincent Poor},
journal={arXiv preprint arXiv:0805.2996},
year={2008},
doi={10.1109/ISIT.2008.4595359},
archivePrefix={arXiv},
eprint={0805.2996},
primaryClass={cs.IT math.IT}
} | gunduz2008lossy |
arxiv-3799 | 0805.3005 | High-dimensional subset recovery in noise: Sparsified measurements without loss of statistical efficiency | <|reference_start|>High-dimensional subset recovery in noise: Sparsified measurements without loss of statistical efficiency: We consider the problem of estimating the support of a vector $\beta^* \in \mathbb{R}^{p}$ based on observations contaminated by noise. A significant body of work has studied behavior of $\ell_1$-relaxations when applied to measurement matrices drawn from standard dense ensembles (e.g., Gaussian, Bernoulli). In this paper, we analyze \emph{sparsified} measurement ensembles, and consider the trade-off between measurement sparsity, as measured by the fraction $\gamma$ of non-zero entries, and the statistical efficiency, as measured by the minimal number of observations $n$ required for exact support recovery with probability converging to one. Our main result is to prove that it is possible to let $\gamma \to 0$ at some rate, yielding measurement matrices with a vanishing fraction of non-zeros per row while retaining the same statistical efficiency as dense ensembles. A variety of simulation results confirm the sharpness of our theoretical predictions.<|reference_end|> | arxiv | @article{omidiran2008high-dimensional,
title={High-dimensional subset recovery in noise: Sparsified measurements
without loss of statistical efficiency},
author={Dapo Omidiran, Martin J. Wainwright},
journal={arXiv preprint arXiv:0805.3005},
year={2008},
archivePrefix={arXiv},
eprint={0805.3005},
primaryClass={stat.ML cs.IT math.IT}
} | omidiran2008high-dimensional |
arxiv-3800 | 0805.3058 | A New Structural Property of SAT | <|reference_start|>A New Structural Property of SAT: We review a minimal set of notions from our previous paper on structural properties of SAT (arXiv:0802.1790) that allow us to define and discuss the "complete internal independence" of a decision problem. This property is strictly stronger than the independence property called "strong internal independence" in the cited paper. We show that SAT exhibits this property. We argue that this form of independence is the strongest possible for a decision problem. Relying upon this maximally strong form of internal independence, we reformulate in stricter terms the informal remarks on the possible exponentiality of SAT that concluded our previous paper. The net result of that reformulation is a hint of a proof that SAT is exponential. We conjecture that a complete proof of that proposition can be obtained by strictly following the line of the given hint.<|reference_end|> | arxiv | @article{di zenzo2008a,
title={A New Structural Property of SAT},
author={Silvano Di Zenzo},
journal={arXiv preprint arXiv:0805.3058},
year={2008},
archivePrefix={arXiv},
eprint={0805.3058},
primaryClass={cs.CC}
} | di zenzo2008a |