corpus_id: stringlengths 7-12
paper_id: stringlengths 9-16
title: stringlengths 1-261
abstract: stringlengths 70-4.02k
source: stringclasses, 1 value
bibtex: stringlengths 208-20.9k
citation_key: stringlengths 6-100
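Each record that follows carries these seven string fields. As a minimal sketch of how such rows might be read back, assuming the records have been exported to a JSON-Lines file (the file name arxiv_records.jsonl, the validation step, and the marker stripping are illustrative assumptions, not part of the dataset card itself):

```python
import json

# The seven column names come from the schema above; the JSON-Lines file name
# below is a hypothetical export path used purely for illustration.
FIELDS = ["corpus_id", "paper_id", "title", "abstract",
          "source", "bibtex", "citation_key"]

with open("arxiv_records.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # Every row is expected to carry all seven string columns.
        missing = [k for k in FIELDS if k not in record]
        if missing:
            raise ValueError(f"record {record.get('corpus_id')} lacks {missing}")
        # The abstract field wraps its text in <|reference_start|> ... <|reference_end|>.
        body = record["abstract"].split("<|reference_start|>", 1)[-1]
        body = body.split("<|reference_end|>", 1)[0]
        print(record["corpus_id"], record["paper_id"], record["title"])
        print(body[:200])
        break  # inspect only the first record
```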
arxiv-2101
0712.2449
Reducing semantic complexity in distributed Digital Libraries: treatment of term vagueness and document re-ranking
<|reference_start|>Reducing semantic complexity in distributed Digital Libraries: treatment of term vagueness and document re-ranking: The purpose of the paper is to propose models to reduce the semantic complexity in heterogeneous DLs. The aim is to introduce value-added services (treatment of term vagueness and document re-ranking) that gain a certain quality in DLs if they are combined with heterogeneity components established in the project "Competence Center Modeling and Treatment of Semantic Heterogeneity". Empirical observations show that freely formulated user terms and terms from controlled vocabularies are often not the same or match just by coincidence. Therefore, a value-added service will be developed which rephrases the natural language searcher terms into suggestions from the controlled vocabulary, the Search Term Recommender (STR). Two methods, which are derived from scientometrics and network analysis, will be implemented with the objective to re-rank result sets by the following structural properties: the ranking of the results by core journals (so-called Bradfordizing) and ranking by centrality of authors in co-authorship networks.<|reference_end|>
arxiv
@article{mayr2007reducing, title={Reducing semantic complexity in distributed Digital Libraries: treatment of term vagueness and document re-ranking}, author={Philipp Mayr, Peter Mutschke, Vivien Petras}, journal={arXiv preprint arXiv:0712.2449}, year={2007}, doi={10.1108/00242530810865484}, archivePrefix={arXiv}, eprint={0712.2449}, primaryClass={cs.DL} }
mayr2007reducing
arxiv-2102
0712.2467
Rethinking Information Theory for Mobile Ad Hoc Networks
<|reference_start|>Rethinking Information Theory for Mobile Ad Hoc Networks: The subject of this paper is the long-standing open problem of developing a general capacity theory for wireless networks, particularly a theory capable of describing the fundamental performance limits of mobile ad hoc networks (MANETs). A MANET is a peer-to-peer network with no pre-existing infrastructure. MANETs are the most general wireless networks, with single-hop, relay, interference, mesh, and star networks comprising special cases. The lack of a MANET capacity theory has stunted the development and commercialization of many types of wireless networks, including emergency, military, sensor, and community mesh networks. Information theory, which has been vital for links and centralized networks, has not been successfully applied to decentralized wireless networks. Even if this was accomplished, for such a theory to truly characterize the limits of deployed MANETs it must overcome three key roadblocks. First, most current capacity results rely on the allowance of unbounded delay and reliability. Second, spatial and timescale decompositions have not yet been developed for optimally modeling the spatial and temporal dynamics of wireless networks. Third, a useful network capacity theory must integrate rather than ignore the important role of overhead messaging and feedback. This paper describes some of the shifts in thinking that may be needed to overcome these roadblocks and develop a more general theory that we refer to as non-equilibrium information theory.<|reference_end|>
arxiv
@article{andrews2007rethinking, title={Rethinking Information Theory for Mobile Ad Hoc Networks}, author={Jeff Andrews, Nihar Jindal, Martin Haenggi, Randy Berry, Syed Jafar, Dongning Guo, Sanjay Shakkottai, Robert Heath, Michael Neely, Steven Weber, Aylin Yener}, journal={arXiv preprint arXiv:0712.2467}, year={2007}, doi={10.1109/MCOM.2008.4689214}, archivePrefix={arXiv}, eprint={0712.2467}, primaryClass={cs.IT math.IT} }
andrews2007rethinking
arxiv-2103
0712.2469
Directed Percolation in Wireless Networks with Interference and Noise
<|reference_start|>Directed Percolation in Wireless Networks with Interference and Noise: Previous studies of connectivity in wireless networks have focused on undirected geometric graphs. More sophisticated models such as the Signal-to-Interference-and-Noise-Ratio (SINR) model, however, usually lead to directed graphs. In this paper, we study percolation processes in wireless networks modelled by directed SINR graphs. We first investigate interference-free networks, where we define four types of phase transitions and show that they take place at the same time. By coupling the directed SINR graph with two other undirected SINR graphs, we further obtain analytical upper and lower bounds on the critical density. Then, we show that with interference, percolation in directed SINR graphs depends not only on the density but also on the inverse system processing gain. We also provide bounds on the critical value of the inverse system processing gain.<|reference_end|>
arxiv
@article{kong2007directed, title={Directed Percolation in Wireless Networks with Interference and Noise}, author={Zhenning Kong and Edmund M. Yeh}, journal={arXiv preprint arXiv:0712.2469}, year={2007}, archivePrefix={arXiv}, eprint={0712.2469}, primaryClass={cs.IT cs.NI math.IT math.PR} }
kong2007directed
arxiv-2104
0712.2497
A New Theoretic Foundation for Cross-Layer Optimization
<|reference_start|>A New Theoretic Foundation for Cross-Layer Optimization: Cross-layer optimization solutions have been proposed in recent years to improve the performance of network users operating in a time-varying, error-prone wireless environment. However, these solutions often rely on ad-hoc optimization approaches, which ignore the different environmental dynamics experienced at various layers by a user and violate the layered network architecture of the protocol stack by requiring layers to provide access to their internal protocol parameters to other layers. This paper presents a new theoretic foundation for cross-layer optimization, which allows each layer to make autonomous decisions individually, while maximizing the utility of the wireless user by optimally determining what information needs to be exchanged among layers. Hence, this cross-layer framework does not change the current layered architecture. Specifically, because the wireless user interacts with the environment at various layers of the protocol stack, the cross-layer optimization problem is formulated as a layered Markov decision process (MDP) in which each layer adapts its own protocol parameters and exchanges information (messages) with other layers in order to cooperatively maximize the performance of the wireless user. The message exchange mechanism for determining the optimal cross-layer transmission strategies has been designed for both off-line optimization and on-line dynamic adaptation. We also show that many existing cross-layer optimization algorithms can be formulated as simplified, sub-optimal, versions of our layered MDP framework.<|reference_end|>
arxiv
@article{fu2007a, title={A New Theoretic Foundation for Cross-Layer Optimization}, author={Fangwen Fu and Mihaela van der Schaar}, journal={arXiv preprint arXiv:0712.2497}, year={2007}, archivePrefix={arXiv}, eprint={0712.2497}, primaryClass={cs.NI cs.LG} }
fu2007a
arxiv-2105
0712.2501
Cell mapping description for digital control system with quantization effect
<|reference_start|>Cell mapping description for digital control system with quantization effect: The quantization problem in digital control systems has attracted more and more attention in recent years. Normally, a quantized variable is regarded as a perturbed copy of the unquantized variable in research on the quantization effect, but this model has shown many obvious disadvantages in control system analysis and design. In this paper, we give a new model for quantization based on the 'cell mapping' concept. This cell model can clearly describe the global dynamics of a quantized digital system. Some important characteristics of control systems, such as controllability, are then analyzed with this model. A finite-precision control design method based on the cell concept is also presented.<|reference_end|>
arxiv
@article{liang2007cell, title={Cell mapping description for digital control system with quantization effect}, author={Wang Liang, Wang Bing-wen, Guo Yi-Ping}, journal={arXiv preprint arXiv:0712.2501}, year={2007}, archivePrefix={arXiv}, eprint={0712.2501}, primaryClass={cs.OH} }
liang2007cell
arxiv-2106
0712.2552
The PBD-Closure of Constant-Composition Codes
<|reference_start|>The PBD-Closure of Constant-Composition Codes: We show an interesting PBD-closure result for the set of lengths of constant-composition codes whose distance and size meet certain conditions. A consequence of this PBD-closure result is that the size of optimal constant-composition codes can be determined for infinite families of parameter sets from just a single example of an optimal code. As an application, the sizes of several infinite families of optimal constant-composition codes are derived. In particular, the problem of determining the size of optimal constant-composition codes having distance four and weight three is solved for all lengths sufficiently large. This problem was previously unresolved for odd lengths, except for lengths seven and eleven.<|reference_end|>
arxiv
@article{chee2007the, title={The PBD-Closure of Constant-Composition Codes}, author={Yeow Meng Chee, Alan C. H. Ling, San Ling, and Hao Shen}, journal={IEEE Transactions on Information Theory, vol. 53, No. 8, August 2007, pp. 2685-2692}, year={2007}, doi={10.1109/TIT.2007.901175}, archivePrefix={arXiv}, eprint={0712.2552}, primaryClass={cs.IT cs.DM math.CO math.IT} }
chee2007the
arxiv-2107
0712.2553
Constructions for Difference Triangle Sets
<|reference_start|>Constructions for Difference Triangle Sets: Difference triangle sets are useful in many practical problems of information transmission. This correspondence studies combinatorial and computational constructions for difference triangle sets having small scopes. Our algorithms have been used to produce difference triangle sets whose scopes are the best currently known.<|reference_end|>
arxiv
@article{chee2007constructions, title={Constructions for Difference Triangle Sets}, author={Yeow Meng Chee and Charles J. Colbourn}, journal={IEEE Transactions on Information Theory, vol. 43, No. 4, July 1997, pp. 1346-1349}, year={2007}, archivePrefix={arXiv}, eprint={0712.2553}, primaryClass={cs.IT cs.DM math.CO math.IT} }
chee2007constructions
arxiv-2108
0712.2559
Cycle time of stochastic max-plus linear systems
<|reference_start|>Cycle time of stochastic max-plus linear systems: We analyze the asymptotic behavior of sequences of random variables defined by an initial condition, a stationary and ergodic sequence of random matrices, and an induction formula involving multiplication in the so-called max-plus algebra. This type of recursive sequence is frequently used in applied probability, as it models many systems such as queueing networks, train and computer networks, and production systems. We give a necessary condition for the recursive sequences to satisfy a strong law of large numbers, which proves to be sufficient when the matrices are i.i.d. Moreover, we construct a new example, in which the sequence of matrices is strongly mixing and that condition is satisfied, but the recursive sequence does not converge almost surely.<|reference_end|>
arxiv
@article{merlet2007cycle, title={Cycle time of stochastic max-plus linear systems}, author={Glenn Merlet (LIAFA)}, journal={Electronic Journal of Probability 13 (2008) (2008) Paper 12, 322-340}, year={2007}, archivePrefix={arXiv}, eprint={0712.2559}, primaryClass={math.PR cs.DM} }
merlet2007cycle
arxiv-2109
0712.2567
On Lower Bound for W(K_2n)
<|reference_start|>On Lower Bound for W(K_2n): The lower bound W(K_{2n})>=3n-2 is proved for the greatest possible number of colors in an interval edge coloring of the complete graph K_{2n}.<|reference_end|>
arxiv
@article{kamalian2007on, title={On Lower Bound for W(K_{2n})}, author={Rafael R. Kamalian and Petros A. Petrosyan}, journal={Mathematical Problems of Computer Science 23, 2004, 127--129}, year={2007}, archivePrefix={arXiv}, eprint={0712.2567}, primaryClass={cs.DM} }
kamalian2007on
arxiv-2110
0712.2577
Is the injectivity of the global function of a cellular automaton in the hyperbolic plane undecidable?
<|reference_start|>Is the injectivity of the global function of a cellular automaton in the hyperbolic plane undecidable?: In this paper, we look at the following question. We consider cellular automata in the hyperbolic plane and we consider the global function defined on all possible configurations. Is the injectivity of this function undecidable? The problem was answered positively in the case of the Euclidean plane by Jarkko Kari, in 1994. In the present paper, we give a partial answer: when the configurations are restricted to a certain condition, the problem is undecidable.<|reference_end|>
arxiv
@article{maurice2007is, title={Is the injectivity of the global function of a cellular automaton in the hyperbolic plane undecidable?}, author={Margenstern Maurice}, journal={arXiv preprint arXiv:0712.2577}, year={2007}, archivePrefix={arXiv}, eprint={0712.2577}, primaryClass={cs.DM cs.LO} }
maurice2007is
arxiv-2111
0712.2579
On the Information of the Second Moments Between Random Variables Using Mutually Unbiased Bases
<|reference_start|>On the Information of the Second Moments Between Random Variables Using Mutually Unbiased Bases: The notion of mutually unbiased bases (MUB) was first introduced by Ivanovic to reconstruct density matrices\cite{Ivanovic}. How to use MUB to analyze, process, and utilize the information of the second moments between random variables is studied in this paper. In the first part, the mathematical foundation will be built. It will be shown that the spectra of MUB carry complete information about the correlation matrices of finite discrete signals, and that they have nice properties. Roughly speaking, it will be shown that each spectrum from MUB plays an equal role for finite discrete signals, and that the effect between any two spectra can be treated as a global constant shift. These properties will be used to find some important and natural characterizations of random vectors and random discrete operators/filters. For a technical reason, it will be shown that any MUB spectrum can be computed as fast as the Fourier spectrum when the length of the signal is a prime number. In the second part, some applications will be presented. First of all, a protocol for increasing the number of users in a basic digital communication model will be studied, which brings some deep insights into how to encode information into the second moments between random variables. Secondly, the application to signal analysis will be studied. It is suggested that complete "MUB" spectra analysis works well in any case, and one can simply choose the spectra of interest for the analysis. For instance, single Fourier spectrum analysis can also be applied in the nonstationary case. Finally, the application of MUB to dimensionality reduction will be considered, for the case when the prior knowledge of the data isn't reliable.<|reference_end|>
arxiv
@article{yao2007on, title={On the Information of the Second Moments Between Random Variables Using Mutually Unbiased Bases}, author={Hongyi Yao}, journal={arXiv preprint arXiv:0712.2579}, year={2007}, archivePrefix={arXiv}, eprint={0712.2579}, primaryClass={cs.IT math.IT} }
yao2007on
arxiv-2112
0712.2585
Interval Edge Colourings of Complete Graphs and n-cubes
<|reference_start|>Interval Edge Colourings of Complete Graphs and n-cubes: For complete graphs and n-cubes, bounds are found for the possible number of colours in an interval edge colouring.<|reference_end|>
arxiv
@article{petrosyan2007interval, title={Interval Edge Colourings of Complete Graphs and n-cubes}, author={Petros A. Petrosyan}, journal={Mathematical Problems of Computer Science 25, 2006, 5--8}, year={2007}, archivePrefix={arXiv}, eprint={0712.2585}, primaryClass={cs.DM} }
petrosyan2007interval
arxiv-2113
0712.2587
Maximum-Likelihood Priority-First Search Decodable Codes for Combined Channel Estimation and Error Protection
<|reference_start|>Maximum-Likelihood Priority-First Search Decodable Codes for Combined Channel Estimation and Error Protection: Codes that combine channel estimation and error protection have received general attention recently and are considered a promising methodology for compensating the multi-path fading effect. It has been shown by simulations that such code designs can considerably improve the system performance over the conventional design with separate channel estimation and error protection modules under the same code rate. Nevertheless, the major obstacle that prevents these codes from being used in practice is that the existing codes are mostly found by computer search, and hence exhibit no good structure for efficient decoding. Hence, the time-consuming exhaustive search becomes the only decoding choice, and the decoding complexity increases dramatically with the codeword length. In this paper, by optimizing the signal-to-noise ratio, we found a systematic construction for codes for combined channel estimation and error protection, and confirmed by simulations that their performance is equivalent to that of the computer-searched codes. Moreover, the structural codes that we construct by rules can now be maximum-likelihood decoded in terms of a newly derived recursive metric for use with the priority-first search decoding algorithm. Thus, the decoding complexity is reduced significantly when compared with that of the exhaustive decoder. The extension of the code design to fast-fading channels is also presented. Simulations conclude that our constructed extension code is robust in performance even if the coherence period is shorter than the codeword length.<|reference_end|>
arxiv
@article{wu2007maximum-likelihood, title={Maximum-Likelihood Priority-First Search Decodable Codes for Combined Channel Estimation and Error Protection}, author={Chia-Lung Wu, Po-Ning Chen, Yunghsiang S. Han, Ming-Hsin Kuo}, journal={arXiv preprint arXiv:0712.2587}, year={2007}, archivePrefix={arXiv}, eprint={0712.2587}, primaryClass={cs.IT math.IT} }
wu2007maximum-likelihood
arxiv-2114
0712.2591
A Typical Model Audit Approach: Spreadsheet Audit Methodologies in the City of London
<|reference_start|>A Typical Model Audit Approach: Spreadsheet Audit Methodologies in the City of London: Spreadsheet audit and review procedures are an essential part of almost all City of London financial transactions. Structured processes are used to discover errors in large financial spreadsheets underpinning major transactions of all types. Serious errors are routinely found and are fed back to model development teams generally under conditions of extreme time urgency. Corrected models form the essence of the completed transaction and firms undertaking model audit and review expose themselves to significant financial liability in the event of any remaining significant error. It is noteworthy that in the United Kingdom, the management of spreadsheet error is almost unheard of outside of the City of London despite the commercial ubiquity of the spreadsheet.<|reference_end|>
arxiv
@article{croll2007a, title={A Typical Model Audit Approach: Spreadsheet Audit Methodologies in the City of London}, author={Grenville J. Croll}, journal={IFIP, Integrity and Internal Control in Information Systems, Vol 124, pp. 213-219, Kluwer, 2003}, year={2007}, archivePrefix={arXiv}, eprint={0712.2591}, primaryClass={cs.SE cs.CY} }
croll2007a
arxiv-2115
0712.2592
Strongly consistent nonparametric forecasting and regression for stationary ergodic sequences
<|reference_start|>Strongly consistent nonparametric forecasting and regression for stationary ergodic sequences: Let $\{(X_i,Y_i)\}$ be a stationary ergodic time series with $(X,Y)$ values in the product space $\R^d\bigotimes \R .$ This study offers what is believed to be the first strongly consistent (with respect to pointwise, least-squares, and uniform distance) algorithm for inferring $m(x)=E[Y_0|X_0=x]$ under the presumption that $m(x)$ is uniformly Lipschitz continuous. Auto-regression, or forecasting, is an important special case, and as such our work extends the literature of nonparametric, nonlinear forecasting by circumventing customary mixing assumptions. The work is motivated by a time series model in stochastic finance and by perspectives of its contribution to the issues of universal time series estimation.<|reference_end|>
arxiv
@article{yakowitz2007strongly, title={Strongly consistent nonparametric forecasting and regression for stationary ergodic sequences}, author={S. Yakowitz, L. Gyorfi, J. Kieffer, G. Morvai}, journal={J. Multivariate Anal. 71 (1999), no. 1, 24--41}, year={2007}, archivePrefix={arXiv}, eprint={0712.2592}, primaryClass={math.PR cs.IT math.IT} }
yakowitz2007strongly
arxiv-2116
0712.2594
Stop That Subversive Spreadsheet!
<|reference_start|>Stop That Subversive Spreadsheet!: This paper documents the formation of the European Spreadsheet Risks Interest Group (EuSpRIG www.eusprig.org) and outlines some of the research undertaken and reported upon by interested parties in EuSpRIG publications<|reference_end|>
arxiv
@article{chadwick2007stop, title={Stop That Subversive Spreadsheet!}, author={David Chadwick}, journal={IFIP, Integrity and Internal Control in Information Systems, Vol 24, pp. 205-211, Kluwer, 2003}, year={2007}, archivePrefix={arXiv}, eprint={0712.2594}, primaryClass={cs.GL} }
chadwick2007stop
arxiv-2117
0712.2595
Distinguishing Short Quantum Computations
<|reference_start|>Distinguishing Short Quantum Computations: Distinguishing logarithmic depth quantum circuits on mixed states is shown to be complete for QIP, the class of problems having quantum interactive proof systems. Circuits in this model can represent arbitrary quantum processes, and thus this result has implications for the verification of implementations of quantum algorithms. The distinguishability problem is also complete for QIP on constant depth circuits containing the unbounded fan-out gate. These results are shown by reducing a QIP-complete problem to a logarithmic depth version of itself using a parallelization technique.<|reference_end|>
arxiv
@article{rosgen2007distinguishing, title={Distinguishing Short Quantum Computations}, author={Bill Rosgen}, journal={arXiv preprint arXiv:0712.2595}, year={2007}, doi={10.4230/LIPIcs.STACS.2008.1322}, archivePrefix={arXiv}, eprint={0712.2595}, primaryClass={quant-ph cs.CC} }
rosgen2007distinguishing
arxiv-2118
0712.2605
Some A Priori Torah Decryption Principles
<|reference_start|>Some A Priori Torah Decryption Principles: The author proposes, a priori, a simple set of principles that can be developed into a range of algorithms by which means the Torah might be decoded. It is assumed that the Torah is some form of transposition cipher with the unusual property that the plain text of the Torah may also be the cipher text of one or more other documents written in Biblical Hebrew. The decryption principles are based upon the use of Equidistant Letter Sequences (ELS) and the notions of Message Length, Dimensionality, Euclidean Dimension, Topology, Read Direction, Skip Distance and offset. The principles can be applied recursively and define numerous large subsets of the 304,807! theoretically possible permutations of the characters of the Torah.<|reference_end|>
arxiv
@article{croll2007some, title={Some A Priori Torah Decryption Principles}, author={Grenville J. Croll}, journal={Proc. ANPA Cambridge, UK, 2007}, year={2007}, archivePrefix={arXiv}, eprint={0712.2605}, primaryClass={cs.CR} }
croll2007some
arxiv-2119
0712.2606
Algorithmic Permutation of part of the Torah
<|reference_start|>Algorithmic Permutation of part of the Torah: A small part of the Torah is arranged into a two dimensional array. The characters are then permuted using a simple recursive deterministic algorithm. The various permutations are then passed through three stochastic filters and one deterministic filter to identify the permutations which most closely approximate readable Biblical Hebrew. Of the 15 Billion sequences available at the second level of recursion, 800 pass the a priori thresholds set for each filter. The resulting "Biblical Hebrew" text is available for inspection and the generation of further material continues.<|reference_end|>
arxiv
@article{croll2007algorithmic, title={Algorithmic Permutation of part of the Torah}, author={Grenville J. Croll}, journal={Proc. ANPA 27, Wesley College, Cambridge, UK, September 2005}, year={2007}, archivePrefix={arXiv}, eprint={0712.2606}, primaryClass={cs.CR} }
croll2007algorithmic
arxiv-2120
0712.2619
A New Lower Bound for A(17,6,6)
<|reference_start|>A New Lower Bound for A(17,6,6): We construct a record-breaking binary code of length 17, minimal distance 6, constant weight 6, and containing 113 codewords.<|reference_end|>
arxiv
@article{chee2007a, title={A New Lower Bound for A(17,6,6)}, author={Yeow Meng Chee}, journal={Ars Combinatoria, Vol. 83, pp. 361-363, 2007}, year={2007}, archivePrefix={arXiv}, eprint={0712.2619}, primaryClass={cs.IT cs.DM math.CO math.IT} }
chee2007a
arxiv-2121
0712.2629
Approximation Algorithms for the Highway Problem under the Coupon Model
<|reference_start|>Approximation Algorithms for the Highway Problem under the Coupon Model: When a store sells items to customers, the store wishes to determine the prices of the items to maximize its profit. Intuitively, if the store sells the items with low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store. So it would be hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy those items, and also assume that each item i \in V has the production cost d_i and each customer e_j \in E has the valuation v_j on the bundle e_j \subseteq V of items. When the store sells an item i \in V at the price r_i, the profit for the item i is p_i=r_i-d_i. The goal of the store is to decide the price of each item to maximize its total profit. In most of the previous works, the item pricing problem was considered under the assumption that p_i \geq 0 for each i \in V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of loss-leader, and showed that the seller can get more total profit in the case that p_i < 0 is allowed than in the case that p_i < 0 is not allowed. In this paper, we consider the line and the cycle highway problem, and show approximation algorithms for the line and/or cycle highway problem for which the smallest valuation is s and the largest valuation is \ell or all valuations are identical.<|reference_end|>
arxiv
@article{hamane2007approximation, title={Approximation Algorithms for the Highway Problem under the Coupon Model}, author={Ryoso Hamane, Toshiya Itoh, and Kouhei Tomita}, journal={IEICE Trans. on Fundamentals, E92-A(8), pp.1779-1786, 2009}, year={2007}, doi={10.1587/transfun.E92.A.1779}, archivePrefix={arXiv}, eprint={0712.2629}, primaryClass={cs.DS} }
hamane2007approximation
arxiv-2122
0712.2630
Evolving XSLT stylesheets
<|reference_start|>Evolving XSLT stylesheets: This paper introduces a procedure based on genetic programming to evolve XSLT programs (usually called stylesheets or logicsheets). XSLT is a general purpose, document-oriented functional language, generally used to transform XML documents (or, in general, solve any problem that can be coded as an XML document). The proposed solution uses a tree representation for the stylesheets as well as diverse specific operators in order to obtain, in the studied cases and a reasonable time, a XSLT stylesheet that performs the transformation. Several types of representation have been compared, resulting in different performance and degree of success.<|reference_end|>
arxiv
@article{zorzano2007evolving, title={Evolving XSLT stylesheets}, author={Nestor Zorzano, Daniel Merino, J.L.J. Laredo, J.P. Sevilla, Pablo Garcia, J.J. Merelo}, journal={arXiv preprint arXiv:0712.2630}, year={2007}, archivePrefix={arXiv}, eprint={0712.2630}, primaryClass={cs.NE cs.PL} }
zorzano2007evolving
arxiv-2123
0712.2638
Towards Persistence-Based Reconstruction in Euclidean Spaces
<|reference_start|>Towards Persistence-Based Reconstruction in Euclidean Spaces: Manifold reconstruction has been extensively studied for the last decade or so, especially in two and three dimensions. Recently, significant improvements were made in higher dimensions, leading to new methods to reconstruct large classes of compact subsets of Euclidean space $\R^d$. However, the complexities of these methods scale up exponentially with d, which makes them impractical in medium or high dimensions, even for handling low-dimensional submanifolds. In this paper, we introduce a novel approach that stands in-between classical reconstruction and topological estimation, and whose complexity scales up with the intrinsic dimension of the data. Specifically, when the data points are sufficiently densely sampled from a smooth $m$-submanifold of $\R^d$, our method retrieves the homology of the submanifold in time at most $c(m)n^5$, where $n$ is the size of the input and $c(m)$ is a constant depending solely on $m$. It can also provably well handle a wide range of compact subsets of $\R^d$, though with worse complexities. Along the way to proving the correctness of our algorithm, we obtain new results on \v{C}ech, Rips, and witness complex filtrations in Euclidean spaces.<|reference_end|>
arxiv
@article{chazal2007towards, title={Towards Persistence-Based Reconstruction in Euclidean Spaces}, author={Frédéric Chazal (INRIA Sophia Antipolis), Steve Oudot (INRIA Sophia Antipolis)}, journal={arXiv preprint arXiv:0712.2638}, year={2007}, archivePrefix={arXiv}, eprint={0712.2638}, primaryClass={cs.CG math.AT} }
chazal2007towards
arxiv-2124
0712.2640
Optimal Memoryless Encoding for Low Power Off-Chip Data Buses
<|reference_start|>Optimal Memoryless Encoding for Low Power Off-Chip Data Buses: Off-chip buses account for a significant portion of the total system power consumed in embedded systems. Bus encoding schemes have been proposed to minimize power dissipation, but none has been demonstrated to be optimal with respect to any measure. In this paper, we give the first provably optimal and explicit (polynomial-time constructible) families of memoryless codes for minimizing bit transitions in off-chip buses. Our results imply that having access to a clock does not make a memoryless encoding scheme that minimizes bit transitions more powerful.<|reference_end|>
arxiv
@article{chee2007optimal, title={Optimal Memoryless Encoding for Low Power Off-Chip Data Buses}, author={Yeow Meng Chee, Charles J. Colbourn, and Alan C. H. Ling}, journal={arXiv preprint arXiv:0712.2640}, year={2007}, doi={10.1145/1233501.1233575}, archivePrefix={arXiv}, eprint={0712.2640}, primaryClass={cs.AR cs.DM cs.IT math.IT} }
chee2007optimal
arxiv-2125
0712.2643
Changing Levels of Description in a Fluid Flow Simulation
<|reference_start|>Changing Levels of Description in a Fluid Flow Simulation: We describe here our perception of complex systems, and how we feel the different layers of description are an important part of a correct complex system simulation. We describe a rough categorization of models into rule-based and law-based, and how these categories handle levels of description or scales. We then describe our fluid flow simulation, which combines different finenesses of grain in a mixed approach drawing on both categories. This simulation is built keeping in mind a later use inside a more general aquatic ecosystem.<|reference_end|>
arxiv
@article{tranouez2007changing, title={Changing Levels of Description in a Fluid Flow Simulation}, author={Pierrick Tranouez (LITIS), Cyrille Bertelle (LITIS), Damien Olivier (LITIS)}, journal={Emergent Properties in Natural and Artificial Dynamical Systems, Springer (Ed.) (2006) 87-99}, year={2007}, archivePrefix={arXiv}, eprint={0712.2643}, primaryClass={physics.flu-dyn cs.CE} }
tranouez2007changing
arxiv-2126
0712.2644
Automata-based Adaptive Behavior for Economical Modelling Using Game Theory
<|reference_start|>Automata-based Adaptive Behavior for Economical Modelling Using Game Theory: In this chapter, we deal with some specific domains of application of game theory, one of the major classes of models in the new approaches to modelling in the economic domain. For that, we use genetic automata, which allow us to build adaptive strategies for the players. We explain how the proposed automata-based formalism - a matrix representation of automata with multiplicities - allows us to define a semi-distance between strategy behaviors. With these tools, we are able to generate an automatic process to compute emergent systems of entities whose behaviors are represented by these genetic automata.<|reference_end|>
arxiv
@article{ghnemat2007automata-based, title={Automata-based Adaptive Behavior for Economical Modelling Using Game Theory}, author={Rawan Ghnemat (LITIS), Saleh Oqeili (IT), Cyrille Bertelle (LITIS), Gérard Henry Edmond Duchamp (LIPN)}, journal={Emergent Properties in Natural and Artificial Dynamical Systems, Springer (Ed.) (2006) 171-183}, year={2007}, archivePrefix={arXiv}, eprint={0712.2644}, primaryClass={cs.GT cs.CC} }
ghnemat2007automata-based
arxiv-2127
0712.2661
Algorithms for Generating Convex Sets in Acyclic Digraphs
<|reference_start|>Algorithms for Generating Convex Sets in Acyclic Digraphs: A set $X$ of vertices of an acyclic digraph $D$ is convex if $X\neq \emptyset$ and there is no directed path between vertices of $X$ which contains a vertex not in $X$. A set $X$ is connected if $X\neq \emptyset$ and the underlying undirected graph of the subgraph of $D$ induced by $X$ is connected. Connected convex sets and convex sets of acyclic digraphs are of interest in the area of modern embedded processor technology. We construct an algorithm $\cal A$ for enumeration of all connected convex sets of an acyclic digraph $D$ of order $n$. The time complexity of $\cal A$ is $O(n\cdot cc(D))$, where $cc(D)$ is the number of connected convex sets in $D$. We also give an optimal algorithm for enumeration of all (not just connected) convex sets of an acyclic digraph $D$ of order $n$. In computational experiments we demonstrate that our algorithms outperform the best algorithms in the literature. Using the same approach as for $\cal A$, we design an algorithm for generating all connected sets of a connected undirected graph $G$. The complexity of the algorithm is $O(n\cdot c(G)),$ where $n$ is the order of $G$ and $c(G)$ is the number of connected sets of $G.$ The previously reported algorithm for connected set enumeration is of running time $O(mn\cdot c(G))$, where $m$ is the number of edges in $G.$<|reference_end|>
arxiv
@article{balister2007algorithms, title={Algorithms for Generating Convex Sets in Acyclic Digraphs}, author={P. Balister, S. Gerke, G. Gutin, A. Johnstone, J. Reddington, E. Scott, A. Soleimanfallah, A. Yeo}, journal={arXiv preprint arXiv:0712.2661}, year={2007}, archivePrefix={arXiv}, eprint={0712.2661}, primaryClass={cs.DM cs.DS} }
balister2007algorithms
arxiv-2128
0712.2671
On the equations of the moving curve ideal of a rational algebraic plane curve
<|reference_start|>On the equations of the moving curve ideal of a rational algebraic plane curve: Given a parametrization of a rational plane algebraic curve C, some explicit adjoint pencils on C are described in terms of determinants. Moreover, some generators of the Rees algebra associated to this parametrization are presented. The main ingredient developed in this paper is a detailed study of the elimination ideal of two homogeneous polynomials in two homogeneous variables that form a regular sequence.<|reference_end|>
arxiv
@article{busé2007on, title={On the equations of the moving curve ideal of a rational algebraic plane curve}, author={Laurent Busé (INRIA Sophia Antipolis)}, journal={arXiv preprint arXiv:0712.2671}, year={2007}, archivePrefix={arXiv}, eprint={0712.2671}, primaryClass={math.AG cs.SC math.AC} }
busé2007on
arxiv-2129
0712.2678
Convex sets in acyclic digraphs
<|reference_start|>Convex sets in acyclic digraphs: A non-empty set $X$ of vertices of an acyclic digraph is called connected if the underlying undirected graph induced by $X$ is connected and it is called convex if no two vertices of $X$ are connected by a directed path in which some vertices are not in $X$. The set of convex sets (connected convex sets) of an acyclic digraph $D$ is denoted by $\sco(D)$ ($\scc(D)$) and its size by $\co(D)$ ($\cc(D)$). Gutin, Johnstone, Reddington, Scott, Soleimanfallah, and Yeo (Proc. ACiD'07) conjectured that the sum of the sizes of all (connected) convex sets in $D$ equals $\Theta(n \cdot \co(D))$ ($\Theta(n \cdot \cc(D))$) where $n$ is the order of $D$. In this paper we exhibit a family of connected acyclic digraphs with $\sum_{C\in \sco(D)}|C| = o(n\cdot \co(D))$ and $\sum_{C\in \scc(D)}|C| = o(n\cdot \cc(D))$. We also show that the number of connected convex sets of order $k$ in any connected acyclic digraph of order $n$ is at least $n-k+1$. This is a strengthening of a theorem by Gutin and Yeo.<|reference_end|>
arxiv
@article{balister2007convex, title={Convex sets in acyclic digraphs}, author={P. Balister, S. Gerke, G. Gutin}, journal={arXiv preprint arXiv:0712.2678}, year={2007}, archivePrefix={arXiv}, eprint={0712.2678}, primaryClass={cs.DM} }
balister2007convex
arxiv-2130
0712.2682
An Approximation Ratio for Biclustering
<|reference_start|>An Approximation Ratio for Biclustering: The problem of biclustering consists of the simultaneous clustering of rows and columns of a matrix such that each of the submatrices induced by a pair of row and column clusters is as uniform as possible. In this paper we approximate the optimal biclustering by applying one-way clustering algorithms independently on the rows and on the columns of the input matrix. We show that such a solution yields a worst-case approximation ratio of 1+sqrt(2) under L1-norm for 0-1 valued matrices, and of 2 under L2-norm for real valued matrices.<|reference_end|>
arxiv
@article{puolamäki2007an, title={An Approximation Ratio for Biclustering}, author={Kai Puolamäki, Sami Hanhijärvi, Gemma C. Garriga}, journal={Information Processing Letters 108 (2008) 45-49}, year={2007}, doi={10.1016/j.ipl.2008.03.013}, number={Publications in Computer and Information Science E13}, archivePrefix={arXiv}, eprint={0712.2682}, primaryClass={cs.DS stat.ML} }
puolamäki2007an
arxiv-2131
0712.2684
An Economic Model of Coupled Exponential Maps
<|reference_start|>An Economic Model of Coupled Exponential Maps: In this work, an ensemble of economic interacting agents is considered. The agents are arranged in a linear array where only local couplings are allowed. The deterministic dynamics of each agent is given by a map. This map is expressed by two factors. The first one is a linear term that models the expansion of the agent's economy and that is controlled by the {\it growth capacity parameter}. The second one is an inhibition exponential term that is regulated by the {\it local environmental pressure}. Depending on the parameter setting, the system can display Pareto or Boltzmann-Gibbs behavior in the asymptotic dynamical regime. The regions of parameter space where the system exhibits one of these two statistical behaviors are delimited. Other properties of the system, such as the mean wealth, the standard deviation and the Gini coefficient, are also calculated.<|reference_end|>
arxiv
@article{lopez-ruiz2007an, title={An Economic Model of Coupled Exponential Maps}, author={R. Lopez-Ruiz, J. Gonzalez-Estevez, M.G. Cosenza, and J.R. Sanchez}, journal={arXiv preprint arXiv:0712.2684}, year={2007}, archivePrefix={arXiv}, eprint={0712.2684}, primaryClass={q-fin.GN cs.MA nlin.AO physics.soc-ph} }
lopez-ruiz2007an
arxiv-2132
0712.2737
Experiments with a Convex Polyhedral Analysis Tool for Logic Programs
<|reference_start|>Experiments with a Convex Polyhedral Analysis Tool for Logic Programs: Convex polyhedral abstractions of logic programs have been found very useful in deriving numeric relationships between program arguments in order to prove program properties and in other areas such as termination and complexity analysis. We present a tool for constructing polyhedral analyses of (constraint) logic programs. The aim of the tool is to make available, with a convenient interface, state-of-the-art techniques for polyhedral analysis such as delayed widening, narrowing, "widening up-to", and enhanced automatic selection of widening points. The tool is accessible on the web, permits user programs to be uploaded and analysed, and is integrated with related program transformations such as size abstractions and query-answer transformation. We then report some experiments using the tool, showing how it can be conveniently used to analyse transition systems arising from models of embedded systems, and an emulator for a PIC microcontroller which is used for example in wearable computing systems. We discuss issues including scalability, tradeoffs of precision and computation time, and other program transformations that can enhance the results of analysis.<|reference_end|>
arxiv
@article{henriksen2007experiments, title={Experiments with a Convex Polyhedral Analysis Tool for Logic Programs}, author={Kim Henriksen, Gourinath Banda, John Gallagher}, journal={arXiv preprint arXiv:0712.2737}, year={2007}, archivePrefix={arXiv}, eprint={0712.2737}, primaryClass={cs.PL cs.SE} }
henriksen2007experiments
arxiv-2133
0712.2773
Middleware-based Database Replication: The Gaps between Theory and Practice
<|reference_start|>Middleware-based Database Replication: The Gaps between Theory and Practice: The need for high availability and performance in data management systems has been fueling a long running interest in database replication from both academia and industry. However, academic groups often attack replication problems in isolation, overlooking the need for completeness in their solutions, while commercial teams take a holistic approach that often misses opportunities for fundamental innovation. This has created over time a gap between academic research and industrial practice. This paper aims to characterize the gap along three axes: performance, availability, and administration. We build on our own experience developing and deploying replication systems in commercial and academic settings, as well as on a large body of prior related work. We sift through representative examples from the last decade of open-source, academic, and commercial database replication systems and combine this material with case studies from real systems deployed at Fortune 500 customers. We propose two agendas, one for academic research and one for industrial R&D, which we believe can bridge the gap within 5-10 years. This way, we hope to both motivate and help researchers in making the theory and practice of middleware-based database replication more relevant to each other.<|reference_end|>
arxiv
@article{cecchet2007middleware-based, title={Middleware-based Database Replication: The Gaps between Theory and Practice}, author={Emmanuel Cecchet, George Candea, Anastasia Ailamaki}, journal={arXiv preprint arXiv:0712.2773}, year={2007}, number={EPFL technical report DSLAB-REPORT-2007-001}, archivePrefix={arXiv}, eprint={0712.2773}, primaryClass={cs.DB cs.DC cs.PF} }
cecchet2007middleware-based
arxiv-2134
0712.2789
Trading in Risk Dimensions (TRD)
<|reference_start|>Trading in Risk Dimensions (TRD): Previous work, mostly published, developed two-shell recursive trading systems. An inner-shell of Canonical Momenta Indicators (CMI) is adaptively fit to incoming market data. A parameterized trading-rule outer-shell uses the global optimization code Adaptive Simulated Annealing (ASA) to fit the trading system to historical data. A simple fitting algorithm, usually not requiring ASA, is used for the inner-shell fit. An additional risk-management middle-shell has been added to create a three-shell recursive optimization/sampling/fitting algorithm. Portfolio-level distributions of copula-transformed multivariate distributions (with constituent markets possessing different marginal distributions in returns space) are generated by Monte Carlo samplings. ASA is used to importance-sample weightings of these markets. The core code, Trading in Risk Dimensions (TRD), processes Training and Testing trading systems on historical data, and consistently interacts with RealTime trading platforms at minute resolutions, but this scale can be modified. This approach transforms constituent probability distributions into a common space where it makes sense to develop correlations to further develop probability distributions and risk/uncertainty analyses of the full portfolio. ASA is used for importance-sampling these distributions and for optimizing system parameters.<|reference_end|>
arxiv
@article{ingber2007trading, title={Trading in Risk Dimensions (TRD)}, author={Lester Ingber}, journal={arXiv preprint arXiv:0712.2789}, year={2007}, archivePrefix={arXiv}, eprint={0712.2789}, primaryClass={cs.CE cs.NA} }
ingber2007trading
arxiv-2135
0712.2857
Single-Exclusion Number and the Stopping Redundancy of MDS Codes
<|reference_start|>Single-Exclusion Number and the Stopping Redundancy of MDS Codes: For a linear block code C, its stopping redundancy is defined as the smallest number of check nodes in a Tanner graph for C, such that there exist no stopping sets of size smaller than the minimum distance of C. Schwartz and Vardy conjectured that the stopping redundancy of an MDS code should only depend on its length and minimum distance. We define the (n,t)-single-exclusion number, S(n,t), as the smallest number of t-subsets of an n-set, such that for each i-subset of the n-set, i=1,...,t+1, there exists a t-subset that contains all but one element of the i-subset. New upper bounds on the single-exclusion number are obtained via probabilistic methods, recurrent inequalities, as well as explicit constructions. The new bounds are used to better understand the stopping redundancy of MDS codes. In particular, it is shown that for [n,k=n-d+1,d] MDS codes, as n goes to infinity, the stopping redundancy is asymptotic to S(n,d-2), if d=o(\sqrt{n}), or if k=o(\sqrt{n}) and k goes to infinity, thus giving partial confirmation of the Schwartz-Vardy conjecture in the asymptotic sense.<|reference_end|>
arxiv
@article{han2007single-exclusion, title={Single-Exclusion Number and the Stopping Redundancy of MDS Codes}, author={Junsheng Han, Paul H. Siegel and Ron M. Roth}, journal={arXiv preprint arXiv:0712.2857}, year={2007}, doi={10.1109/TIT.2009.2025578}, archivePrefix={arXiv}, eprint={0712.2857}, primaryClass={cs.IT cs.DM math.CO math.IT} }
han2007single-exclusion
arxiv-2136
0712.2869
Density estimation in linear time
<|reference_start|>Density estimation in linear time: We consider the problem of choosing a density estimate from a set of distributions F, minimizing the L1-distance to an unknown distribution (Devroye, Lugosi 2001). Devroye and Lugosi analyze two algorithms for the problem: Scheffe tournament winner and minimum distance estimate. The Scheffe tournament estimate requires fewer computations than the minimum distance estimate, but has strictly weaker guarantees than the latter. We focus on the computational aspect of density estimation. We present two algorithms, both with the same guarantee as the minimum distance estimate. The first one, a modification of the minimum distance estimate, uses the same number (quadratic in |F|) of computations as the Scheffe tournament. The second one, called ``efficient minimum loss-weight estimate,'' uses only a linear number of computations, assuming that F is preprocessed. We also give examples showing that the guarantees of the algorithms cannot be improved and explore randomized algorithms for density estimation.<|reference_end|>
arxiv
@article{mahalanabis2007density, title={Density estimation in linear time}, author={Satyaki Mahalanabis, Daniel Stefankovic}, journal={arXiv preprint arXiv:0712.2869}, year={2007}, archivePrefix={arXiv}, eprint={0712.2869}, primaryClass={cs.LG} }
mahalanabis2007density
arxiv-2137
0712.2870
The source coding game with a cheating switcher
<|reference_start|>The source coding game with a cheating switcher: Motivated by the lossy compression of an active-vision video stream, we consider the problem of finding the rate-distortion function of an arbitrarily varying source (AVS) composed of a finite number of subsources with known distributions. Berger's paper `The Source Coding Game', \emph{IEEE Trans. Inform. Theory}, 1971, solves this problem under the condition that the adversary is allowed only strictly causal access to the subsource realizations. We consider the case when the adversary has access to the subsource realizations non-causally. Using the type-covering lemma, this new rate-distortion function is determined to be the maximum of the IID rate-distortion function over a set of source distributions attainable by the adversary. We then extend the results to allow for partial or noisy observations of subsource realizations. We further explore the model by attempting to find the rate-distortion function when the adversary is actually helpful. Finally, a bound is developed on the uniform continuity of the IID rate-distortion function for finite-alphabet sources. The bound is used to give a sufficient number of distributions that need to be sampled to compute the rate-distortion function of an AVS to within a certain accuracy. The bound is also used to give a rate of convergence for the estimate of the rate-distortion function for an unknown IID finite-alphabet source.<|reference_end|>
arxiv
@article{palaiyanur2007the, title={The source coding game with a cheating switcher}, author={Hari Palaiyanur, Cheng Chang and Anant Sahai}, journal={arXiv preprint arXiv:0712.2870}, year={2007}, number={EECS-2007-155}, archivePrefix={arXiv}, eprint={0712.2870}, primaryClass={cs.IT cs.CV math.IT} }
palaiyanur2007the
arxiv-2138
0712.2872
Low SNR Capacity of Noncoherent Fading Channels
<|reference_start|>Low SNR Capacity of Noncoherent Fading Channels: Discrete-time Rayleigh fading single-input single-output (SISO) and multiple-input multiple-output (MIMO) channels are considered, with no channel state information at the transmitter or the receiver. The fading is assumed to be stationary and correlated in time, but independent from antenna to antenna. Peak-power and average-power constraints are imposed on the transmit antennas. For MIMO channels, these constraints are either imposed on the sum over antennas, or on each individual antenna. For SISO channels and MIMO channels with sum power constraints, the asymptotic capacity as the peak signal-to-noise ratio tends to zero is identified; for MIMO channels with individual power constraints, this asymptotic capacity is obtained for a class of channels called transmit separable channels. The results for MIMO channels with individual power constraints are carried over to SISO channels with delay spread (i.e. frequency selective fading).<|reference_end|>
arxiv
@article{sethuraman2007low, title={Low SNR Capacity of Noncoherent Fading Channels}, author={Vignesh Sethuraman, Ligong Wang, Bruce Hajek and Amos Lapidoth}, journal={arXiv preprint arXiv:0712.2872}, year={2007}, doi={10.1109/TIT.2009.2012995}, archivePrefix={arXiv}, eprint={0712.2872}, primaryClass={cs.IT math.IT} }
sethuraman2007low
arxiv-2139
0712.2923
A Class of LULU Operators on Multi-Dimensional Arrays
<|reference_start|>A Class of LULU Operators on Multi-Dimensional Arrays: The LULU operators for sequences are extended to multi-dimensional arrays via the morphological concept of connection in a way which preserves their essential properties, e.g. they are separators and form a four element fully ordered semi-group. The power of the operators is demonstrated by deriving a total variation preserving discrete pulse decomposition of images.<|reference_end|>
arxiv
@article{anguelov2007a, title={A Class of LULU Operators on Multi-Dimensional Arrays}, author={Roumen Anguelov, Inger Plaskitt}, journal={arXiv preprint arXiv:0712.2923}, year={2007}, archivePrefix={arXiv}, eprint={0712.2923}, primaryClass={cs.CV} }
anguelov2007a
arxiv-2140
0712.2943
Software (Re-)Engineering with PSF
<|reference_start|>Software (Re-)Engineering with PSF: This paper investigates the usefulness of PSF in software engineering and reengineering. PSF is based on ACP (Algebra of Communicating Processes) and as some architectural description languages are based on process algebra, we investigate whether PSF can be used at the software architecture level, but we also use PSF at lower abstract levels. As a case study we reengineer the compiler from the Toolkit of PSF.<|reference_end|>
arxiv
@article{diertens2007software, title={Software (Re-)Engineering with PSF}, author={Bob Diertens}, journal={arXiv preprint arXiv:0712.2943}, year={2007}, number={PRG0505}, archivePrefix={arXiv}, eprint={0712.2943}, primaryClass={cs.SE} }
diertens2007software
arxiv-2141
0712.2952
Partial Conway and iteration semirings
<|reference_start|>Partial Conway and iteration semirings: A Conway semiring is a semiring $S$ equipped with a unary operation $^*:S \to S$, always called 'star', satisfying the sum star and product star identities. It is known that these identities imply a Kleene type theorem. Some computationally important semirings, such as $N$ or $N^{\rat}\llangle \Sigma^* \rrangle$ of rational power series of words on $\Sigma$ with coefficients in $N$, cannot have a total star operation satisfying the Conway identities. We introduce here partial Conway semirings, which are semirings $S$ which have a star operation defined only on an ideal of $S$; when the arguments are appropriate, the operation satisfies the above identities. We develop the general theory of partial Conway semirings and prove a Kleene theorem for this generalization.<|reference_end|>
arxiv
@article{bloom2007partial, title={Partial Conway and iteration semirings}, author={S. L. Bloom, Z. Esik, W. Kuich}, journal={arXiv preprint arXiv:0712.2952}, year={2007}, archivePrefix={arXiv}, eprint={0712.2952}, primaryClass={cs.DM cs.LO} }
bloom2007partial
arxiv-2142
0712.2958
Power-Aware Real-Time Scheduling upon Identical Multiprocessor Platforms
<|reference_start|>Power-Aware Real-Time Scheduling upon Identical Multiprocessor Platforms: In this paper, we address the power-aware scheduling of sporadic constrained-deadline hard real-time tasks using dynamic voltage scaling upon multiprocessor platforms. We propose two distinct algorithms. Our first algorithm is an off-line speed determination mechanism which provides an identical speed for each processor. That speed guarantees that all deadlines are met if the jobs are scheduled using EDF. The second algorithm is an on-line and adaptive speed adjustment mechanism which reduces the energy consumption while the system is running.<|reference_end|>
arxiv
@article{nélis2007power-aware, title={Power-Aware Real-Time Scheduling upon Identical Multiprocessor Platforms}, author={Vincent Nélis, Joël Goossens, Nicolas Navet, Raymond Devillers and Dragomir Milojevic}, journal={arXiv preprint arXiv:0712.2958}, year={2007}, archivePrefix={arXiv}, eprint={0712.2958}, primaryClass={cs.OS} }
nélis2007power-aware
arxiv-2143
0712.2959
Joint Source-Channel Coding Revisited: Information-Spectrum Approach
<|reference_start|>Joint Source-Channel Coding Revisited: Information-Spectrum Approach: Given a general source with countably infinite source alphabet and a general channel with arbitrary abstract channel input/channel output alphabets, we study the joint source-channel coding problem from the information-spectrum point of view. First, we generalize Feinstein's lemma (direct part) and Verdu-Han's lemma (converse part) so as to be applicable to the general joint source-channel coding problem. Based on these lemmas, we establish a sufficient condition as well as a necessary condition for the source to be reliably transmissible over the channel with asymptotically vanishing probability of error. It is shown that our sufficient condition is equivalent to the sufficient condition derived by Vembu, Verdu and Steinberg, whereas our necessary condition is shown to be stronger than or equivalent to the necessary condition derived by them. It turns out, as a direct consequence, that the separation principle, in a relevantly generalized sense, holds for a wide class of sources and channels, as was shown in a quite different manner by Vembu, Verdu and Steinberg. It should also be remarked that a nice duality is found between our necessary and sufficient conditions, whereas we cannot fully enjoy such a duality between the necessary condition and the sufficient condition of Vembu, Verdu and Steinberg. In addition, we demonstrate a sufficient condition as well as a necessary condition for the epsilon-transmissibility. Finally, the separation theorem of the traditional standard form is shown to hold for the class of sources and channels that satisfy the semi-strong converse property.<|reference_end|>
arxiv
@article{han2007joint, title={Joint Source-Channel Coding Revisited: Information-Spectrum Approach}, author={Te Sun Han}, journal={arXiv preprint arXiv:0712.2959}, year={2007}, archivePrefix={arXiv}, eprint={0712.2959}, primaryClass={cs.IT math.IT} }
han2007joint
arxiv-2144
0712.3037
Comments on "Improved Efficient Remote User Authentication Schemes"
<|reference_start|>Comments on "Improved Efficient Remote User Authentication Schemes": Recently, Tian et al. presented an article in which they discussed some security weaknesses of Yoon et al.'s scheme and subsequently proposed two ``improved'' schemes. In this paper, we show that Tian et al.'s schemes are insecure and more vulnerable than Yoon et al.'s scheme.<|reference_end|>
arxiv
@article{das2007comments, title={Comments on "Improved Efficient Remote User Authentication Schemes"}, author={Manik Lal Das}, journal={International Journal of Network Security, Vol. 6, No. 3, pp. 282-284, 2008}, year={2007}, archivePrefix={arXiv}, eprint={0712.3037}, primaryClass={cs.CR} }
das2007comments
arxiv-2145
0712.3084
Proxy Signature Scheme with Effective Revocation Using Bilinear Pairings
<|reference_start|>Proxy Signature Scheme with Effective Revocation Using Bilinear Pairings: We present a proxy signature scheme using bilinear pairings that provides effective proxy revocation. The scheme uses a binding-blinding technique to avoid secure channel requirements in the key issuance stage. With this technique, the signer receives a partial private key from a trusted authority and unblinds it to get his private key, which in turn overcomes the key escrow problem, a constraint in most of the pairing-based proxy signature schemes. The scheme fulfills the necessary security requirements of proxy signature and resists other possible threats.<|reference_end|>
arxiv
@article{das2007proxy, title={Proxy Signature Scheme with Effective Revocation Using Bilinear Pairings}, author={Manik Lal Das, Ashutosh Saxena, Deepak B Phatak}, journal={International Journal of Network Security, Vol. 4, No.3, pp.312-317, 2007}, year={2007}, archivePrefix={arXiv}, eprint={0712.3084}, primaryClass={cs.CR} }
das2007proxy
arxiv-2146
0712.3088
Clones and Genoids in Lambda Calculus and First Order Logic
<|reference_start|>Clones and Genoids in Lambda Calculus and First Order Logic: A genoid is a category of two objects such that one is the product of itself with the other. A genoid may be viewed as an abstract substitution algebra. It is a remarkable fact that such a simple concept can be applied to present a unified algebraic approach to lambda calculus and first order logic.<|reference_end|>
arxiv
@article{luo2007clones, title={Clones and Genoids in Lambda Calculus and First Order Logic}, author={Zhaohua Luo}, journal={arXiv preprint arXiv:0712.3088}, year={2007}, archivePrefix={arXiv}, eprint={0712.3088}, primaryClass={cs.LO cs.PL} }
luo2007clones
arxiv-2147
0712.3113
Optimizing Queries in a Logic-based Information Integration System
<|reference_start|>Optimizing Queries in a Logic-based Information Integration System: The SINTAGMA information integration system is an infrastructure for accessing several different information sources together. Besides providing a uniform interface to the information sources (databases, web services, web sites, RDF resources, XML files), semantic integration is also needed. Semantic integration is carried out by providing a high-level model and the mappings to the models of the sources. When a query of the high-level model is executed, it is transformed into a low-level query plan, which is a piece of Prolog code that answers the high-level query. This transformation is done in two phases. First, the Query Planner produces a plan as a logic formula expressing the low-level query. Next, the Query Optimizer transforms this formula into executable Prolog code and optimizes it according to structural and statistical information about the information sources. This article discusses the main ideas of the optimization algorithm and its implementation.<|reference_end|>
arxiv
@article{békés2007optimizing, title={Optimizing Queries in a Logic-based Information Integration System}, author={Andr\'as Gyorgy B\'ek\'es, P\'eter Szeredi}, journal={arXiv preprint arXiv:0712.3113}, year={2007}, archivePrefix={arXiv}, eprint={0712.3113}, primaryClass={cs.PL cs.SE} }
békés2007optimizing
arxiv-2148
0712.3115
Software (Re-)Engineering with PSF II: from architecture to implementation
<|reference_start|>Software (Re-)Engineering with PSF II: from architecture to implementation: This paper presents ongoing research on the application of PSF in the field of software engineering and reengineering. We build a new implementation for the simulator of the PSF Toolkit starting from the specification in PSF of the architecture of a simple simulator and extend it with features to obtain the architecture of a full simulator. We apply refining and constraining techniques on the specification of the architecture to obtain a specification at a level low enough to build an implementation from.<|reference_end|>
arxiv
@article{diertens2007software, title={Software (Re-)Engineering with PSF II: from architecture to implementation}, author={Bob Diertens}, journal={arXiv preprint arXiv:0712.3115}, year={2007}, number={prg0609}, archivePrefix={arXiv}, eprint={0712.3115}, primaryClass={cs.SE} }
diertens2007software
arxiv-2149
0712.3116
Proceedings of the 17th Workshop on Logic-based methods in Programming Environments (WLPE 2007)
<|reference_start|>Proceedings of the 17th Workshop on Logic-based methods in Programming Environments (WLPE 2007): This volume contains the papers presented at WLPE 2007: the 17th Workshop on Logic-based Methods in Programming Environments on 13th September, 2007 in Porto, Portugal. It was held as a satellite workshop of ICLP 2007, the 23rd International Conference on Logic Programming.<|reference_end|>
arxiv
@article{hill2007proceedings, title={Proceedings of the 17th Workshop on Logic-based methods in Programming Environments (WLPE 2007)}, author={Patricia Hill, Wim Vanhoof}, journal={arXiv preprint arXiv:0712.3116}, year={2007}, archivePrefix={arXiv}, eprint={0712.3116}, primaryClass={cs.PL cs.SE} }
hill2007proceedings
arxiv-2150
0712.3128
Software (Re-)Engineering with PSF III: an IDE for PSF
<|reference_start|>Software (Re-)Engineering with PSF III: an IDE for PSF: We describe the design of an integrated development environment (IDE) for PSF. In the software engineering process we used process algebra in the form of PSF for the specification of the architecture of the IDE. This specification is refined to a PSF specification of the IDE system as a ToolBus application, by applying vertical and horizontal implementation techniques. We implemented the various tools as specified and connected them with a ToolBus script extracted from the system specification.<|reference_end|>
arxiv
@article{diertens2007software, title={Software (Re-)Engineering with PSF III: an IDE for PSF}, author={Bob Diertens}, journal={arXiv preprint arXiv:0712.3128}, year={2007}, number={prg0708}, archivePrefix={arXiv}, eprint={0712.3128}, primaryClass={cs.SE} }
diertens2007software
arxiv-2151
0712.3137
Phase transition and computational complexity in a stochastic prime number generator
<|reference_start|>Phase transition and computational complexity in a stochastic prime number generator: We introduce a prime number generator in the form of a stochastic algorithm. The character of such an algorithm gives rise to a continuous phase transition which distinguishes a phase where the algorithm is able to reduce the whole system of numbers into primes and a phase where the system reaches a frozen state with low prime density. In this paper we first aim to give a broad characterization of this phase transition, both in terms of analytical and numerical analysis. Critical exponents are calculated, and data collapse is provided. Further on we redefine the model as a search problem, fitting it into the framework of computational complexity theory. We suggest that the system belongs to the class NP. The computational cost is maximal around the threshold, as common in many algorithmic phase transitions, revealing the presence of an easy-hard-easy pattern. We finally relate the nature of the phase transition to an average-case classification of the problem.<|reference_end|>
arxiv
@article{lacasa2007phase, title={Phase transition and computational complexity in a stochastic prime number generator}, author={Lucas Lacasa, Bartolo Luque, Octavio Miramontes}, journal={New Journal of Physics 10 (2008) 023009}, year={2007}, doi={10.1088/1367-2630/10/2/023009}, archivePrefix={arXiv}, eprint={0712.3137}, primaryClass={cs.CC physics.comp-ph} }
lacasa2007phase
arxiv-2152
0712.3146
Dynamic Logic of Common Knowledge in a Proof Assistant
<|reference_start|>Dynamic Logic of Common Knowledge in a Proof Assistant: Common Knowledge Logic is meant to describe situations of the real world where a group of agents is involved. These agents share knowledge and make strong statements on the knowledge of the other agents (the so-called \emph{common knowledge}). But as we know, the real world changes and overall information on what is known about the world changes as well. The changes are described by dynamic logic. To describe knowledge changes, dynamic logic should be combined with logic of common knowledge. In this paper we describe experiments which we have made on the integration, in a unique framework, of common knowledge logic and dynamic logic in the proof assistant Coq. This results in a set of fully checked proofs for readable statements. We describe the framework and how a proof can be conducted.<|reference_end|>
arxiv
@article{lescanne2007dynamic, title={Dynamic Logic of Common Knowledge in a Proof Assistant}, author={Pierre Lescanne (LIP), J\'er\^ome Puiss\'egur (LIP)}, journal={arXiv preprint arXiv:0712.3146}, year={2007}, archivePrefix={arXiv}, eprint={0712.3146}, primaryClass={cs.GT} }
lescanne2007dynamic
arxiv-2153
0712.3147
Common knowledge logic in a higher order proof assistant?
<|reference_start|>Common knowledge logic in a higher order proof assistant?: This paper presents experiments on common knowledge logic, conducted with the help of the proof assistant Coq. The main feature of common knowledge logic is the eponymous modality that says that a group of agents shares knowledge about a certain proposition in an inductive way. This modality is specified by using a fixpoint approach. Furthermore, from these experiments, we discuss and compare the structure of theorems that can be proved in specific theories that use common knowledge logic. These structures manifest the interplay between the theory (as implemented in the proof assistant Coq) and the metatheory.<|reference_end|>
arxiv
@article{lescanne2007common, title={Common knowledge logic in a higher order proof assistant?}, author={Pierre Lescanne (LIP)}, journal={arXiv preprint arXiv:0712.3147}, year={2007}, archivePrefix={arXiv}, eprint={0712.3147}, primaryClass={cs.AI cs.LO} }
lescanne2007common
arxiv-2154
0712.3150
Interval Colourings of Some Regular Graphs
<|reference_start|>Interval Colourings of Some Regular Graphs: A lower bound is obtained for the greatest possible number of colors in interval colourings of some regular graphs.<|reference_end|>
arxiv
@article{kamalian2007interval, title={Interval Colourings of Some Regular Graphs}, author={Rafael R. Kamalian and Petros A. Petrosyan}, journal={Mathematical Problems of Computer Science 25, 2006, 53--56}, year={2007}, archivePrefix={arXiv}, eprint={0712.3150}, primaryClass={cs.DM} }
kamalian2007interval
arxiv-2155
0712.3155
On Interval Colorings of Complete k-partite Graphs K_n^k
<|reference_start|>On Interval Colorings of Complete k-partite Graphs K_n^k: Problems of existence, construction and estimation of parameters of interval colorings of complete k-partite graphs K_{n}^{k} are investigated.<|reference_end|>
arxiv
@article{kamalian2007on, title={On Interval Colorings of Complete k-partite Graphs K_{n}^{k}}, author={Rafael R. Kamalian and Petros A. Petrosyan}, journal={Mathematical Problems of Computer Science 26, 2006, 28--32}, year={2007}, archivePrefix={arXiv}, eprint={0712.3155}, primaryClass={cs.DM} }
kamalian2007on
arxiv-2156
0712.3203
Solving Medium-Density Subset Sum Problems in Expected Polynomial Time: An Enumeration Approach
<|reference_start|>Solving Medium-Density Subset Sum Problems in Expected Polynomial Time: An Enumeration Approach: The subset sum problem (SSP) can be briefly stated as: given a target integer $E$ and a set $A$ containing $n$ positive integers $a_j$, find a subset of $A$ summing to $E$. The \textit{density} $d$ of an SSP instance is defined by the ratio of $n$ to $m$, where $m$ is the logarithm of the largest integer within $A$. Based on the structural and statistical properties of subset sums, we present an improved enumeration scheme for SSP, and implement it as a complete and exact algorithm (EnumPlus). The algorithm always equivalently reduces an instance to be low-density, and then solves it by enumeration. Through this approach, we show the possibility of designing a single algorithm that can efficiently solve instances of arbitrary density in a uniform way. Furthermore, our algorithm has a considerable performance advantage over previous algorithms. Firstly, it extends the density scope in which SSP can be solved in expected polynomial time. Specifically, it solves SSP in expected $O(n\log{n})$ time when density $d \geq c\cdot \sqrt{n}/\log{n}$, while the previous best density scope is $d \geq c\cdot n/(\log{n})^{2}$. In addition, the overall expected time and space requirements in the average case are proven to be $O(n^5\log n)$ and $O(n^5)$ respectively. Secondly, in the worst case, it slightly improves the previous best time complexity of exact algorithms for SSP. Specifically, the worst-case time complexity of our algorithm is proved to be $O((n-6)2^{n/2}+n)$, while the previous best result is $O(n2^{n/2})$.<|reference_end|>
arxiv
@article{wan2007solving, title={Solving Medium-Density Subset Sum Problems in Expected Polynomial Time: An Enumeration Approach}, author={Changlin Wan, Zhongzhi Shi}, journal={Changlin Wan, Zhongzhi Shi: Solving Medium-Density Subset Sum Problems in Expected Polynomial Time: An Enumeration Approach. FAW 2008: 300-310}, year={2007}, doi={10.1007/978-3-540-69311-6_31}, archivePrefix={arXiv}, eprint={0712.3203}, primaryClass={cs.DS cs.CC cs.CR} }
wan2007solving
arxiv-2157
0712.3215
L'accessibilit\'e des E-services aux personnes non-voyantes : difficult\'es d'usage et recommandations
<|reference_start|>L'accessibilit\'e des E-services aux personnes non-voyantes : difficult\'es d'usage et recommandations: While taking disabled people into account in the design of technologies has become an important social and political issue (in particular with the recent law on equal rights for all citizens, March 2004), this paper aims at evaluating the level of accessibility of two E-service sites through usability tests and at proposing a set of recommendations in order to increase usability for the largest number of people.<|reference_end|>
arxiv
@article{sandoz-guermond2007l'accessibilit\'e, title={L'accessibilit\'e des E-services aux personnes non-voyantes : difficult\'es d'usage et recommandations}, author={Fran\c{c}oise Sandoz-Guermond (LIESP), Marc-Eric Bobiller-Chaumon (GRePS)}, journal={Dans International Conference Proceedings of IHM'2006 - IIHM : Interaction Homme Machine, Montr\'eal : Canada (2006)}, year={2007}, archivePrefix={arXiv}, eprint={0712.3215}, primaryClass={cs.HC} }
sandoz-guermond2007l'accessibilit\'e
arxiv-2158
0712.3220
What is Community Informatics (and Why Does It Matter)?
<|reference_start|>What is Community Informatics (and Why Does It Matter)?: Community Informatics (CI) is the application of information and communications technologies (ICTs) to enable community processes and the achievement of community objectives. CI goes beyond the "Digital Divide" to making ICT access usable and useful to excluded populations and communities for local economic development, social justice, and political empowerment. CI approaches ICTs from a "community" perspective and develops strategies and techniques for managing their use by communities both virtual and physical including the variety of Community Networking applications. CI assumes that both communities have characteristics, requirements, and opportunities that require different strategies for ICT intervention and development from individual access and use. Also, CI addresses ICT use in Developing Countries as well as among the poor, the marginalized, the elderly, or those living in remote locations in Developed Countries. CI is of interest both to ICT practitioners and academic researchers and addresses the connections between the policy and pragmatic issues arising from the tens of thousands of Community Networks, Community Technology Centres, Telecentres, Community Communications Centres, and Telecottages globally along with the rapidly emerging field of electronically based virtual "communities". Michael Gurstein, Ph.D. is Executive Director of the Centre for Community Informatics Research, Development and Training (Vancouver BC), a Director of The Information Society Institute, Cape Peninsula University of Technology, Cape Town South Africa; and Research Professor in the School of Computer and Information Systems at the New Jersey Institute of Technology, Newark.<|reference_end|>
arxiv
@article{gurstein2007what, title={What is Community Informatics (and Why Does It Matter)?}, author={Michael Gurstein}, journal={"Publishing studies" book series, edited by Giandomenico Sica, ISSN 1973-6061 (Printed edition), ISSN 1973-6053 (Electronic edition)}, year={2007}, archivePrefix={arXiv}, eprint={0712.3220}, primaryClass={cs.CY} }
gurstein2007what
arxiv-2159
0712.3277
On the Capacity and Energy Efficiency of Training-Based Transmissions over Fading Channels
<|reference_start|>On the Capacity and Energy Efficiency of Training-Based Transmissions over Fading Channels: In this paper, the capacity and energy efficiency of training-based communication schemes employed for transmission over a-priori unknown Rayleigh block fading channels are studied. In these schemes, periodically transmitted training symbols are used at the receiver to obtain the minimum mean-square-error (MMSE) estimate of the channel fading coefficients. Initially, the case in which the product of the estimate error and transmitted signal is assumed to be Gaussian noise is considered. In this case, it is shown that bit energy requirements grow without bound as the signal-to-noise ratio (SNR) goes to zero, and the minimum bit energy is achieved at a nonzero SNR value below which one should not operate. The effect of the block length on both the minimum bit energy and the SNR value at which the minimum is achieved is investigated. Flash training and transmission schemes are analyzed and shown to improve the energy efficiency in the low-SNR regime. In the second part of the paper, the capacity and energy efficiency of training-based schemes are investigated when the channel input is subject to peak power constraints. The capacity-achieving input structure is characterized and the magnitude distribution of the optimal input is shown to be discrete with a finite number of mass points. The capacity, bit energy requirements, and optimal resource allocation strategies are obtained through numerical analysis. The bit energy is again shown to grow without bound as SNR decreases to zero due to the presence of peakedness constraints. The improvements in energy efficiency when on-off keying with fixed peak power and vanishing duty cycle is employed are studied. Comparisons of the performances of training-based and noncoherent transmission schemes are provided.<|reference_end|>
arxiv
@article{gursoy2007on, title={On the Capacity and Energy Efficiency of Training-Based Transmissions over Fading Channels}, author={Mustafa Cenk Gursoy}, journal={arXiv preprint arXiv:0712.3277}, year={2007}, archivePrefix={arXiv}, eprint={0712.3277}, primaryClass={cs.IT math.IT} }
gursoy2007on
arxiv-2160
0712.3286
Error Rate Analysis for Peaky Signaling over Fading Channels
<|reference_start|>Error Rate Analysis for Peaky Signaling over Fading Channels: In this paper, the performance of signaling strategies with high peak-to-average power ratio is analyzed over both coherent and noncoherent fading channels. Two modulation schemes, namely on-off phase-shift keying (OOPSK) and on-off frequency-shift keying (OOFSK), are considered. Initially, uncoded systems are analyzed. For OOPSK and OOFSK, the optimal detector structures are identified and analytical expressions for the error probabilities are obtained for arbitrary constellation sizes. Numerical techniques are employed to compute the error probabilities. It is concluded that increasing the peakedness of the signals results in reduced error rates for a given power level and hence equivalently improves the energy efficiency for fixed error probabilities. The coded performance is also studied by analyzing the random coding error exponents achieved by OOPSK and OOFSK signaling.<|reference_end|>
arxiv
@article{gursoy2007error, title={Error Rate Analysis for Peaky Signaling over Fading Channels}, author={Mustafa Cenk Gursoy}, journal={arXiv preprint arXiv:0712.3286}, year={2007}, archivePrefix={arXiv}, eprint={0712.3286}, primaryClass={cs.IT math.IT} }
gursoy2007error
arxiv-2161
0712.3298
CLAIRLIB Documentation v1.03
<|reference_start|>CLAIRLIB Documentation v103: The Clair library is intended to simplify a number of generic tasks in Natural Language Processing (NLP), Information Retrieval (IR), and Network Analysis. Its architecture also allows for external software to be plugged in with very little effort. Functionality native to Clairlib includes Tokenization, Summarization, LexRank, Biased LexRank, Document Clustering, Document Indexing, PageRank, Biased PageRank, Web Graph Analysis, Network Generation, Power Law Distribution Analysis, Network Analysis (clustering coefficient, degree distribution plotting, average shortest path, diameter, triangles, shortest path matrices, connected components), Cosine Similarity, Random Walks on Graphs, Statistics (distributions, tests), Tf, Idf, Community Finding.<|reference_end|>
arxiv
@article{radev2007clairlib, title={CLAIRLIB Documentation v1.03}, author={Dragomir Radev, Mark Hodges, Anthony Fader, Mark Joseph, Joshua Gerrish, Mark Schaller, Jonathan dePeri, Bryan Gibson}, journal={arXiv preprint arXiv:0712.3298}, year={2007}, number={CSE-TR-536-07}, archivePrefix={arXiv}, eprint={0712.3298}, primaryClass={cs.IR cs.CL} }
radev2007clairlib
arxiv-2162
0712.3299
Computer- and robot-assisted urological surgery
<|reference_start|>Computer- and robot-assisted urological surgery: The author reviews the computer and robotic tools available to urologists to help in diagnosis and technical procedures. The first part concerns the contribution of robotics and presents several systems at various stages of development (laboratory prototypes, systems under validation or marketed systems). The second part describes image fusion tools and navigation systems currently under development or evaluation. Several studies on computerized simulation of urological procedures are also presented.<|reference_end|>
arxiv
@article{troccaz2007computer-, title={Computer- and robot-assisted urological surgery}, author={Jocelyne Troccaz (TIMC)}, journal={Progr\`es en urologie : journal de l'Association fran\c{c}aise d'urologie et de la Soci\'et\'e fran\c{c}aise d'urologie 16, 2 (2006) 112-20}, year={2007}, archivePrefix={arXiv}, eprint={0712.3299}, primaryClass={cs.OH cs.RO} }
troccaz2007computer-
arxiv-2163
0712.3327
The capacity of a class of 3-receiver broadcast channels with degraded message sets
<|reference_start|>The capacity of a class of 3-receiver broadcast channels with degraded message sets: Korner and Marton established the capacity region for the 2-receiver broadcast channel with degraded message sets. Recent results and conjectures suggest that a straightforward extension of the Korner-Marton region to more than 2 receivers is optimal. This paper shows that this is not the case. We establish the capacity region for a class of 3-receiver broadcast channels with 2 degraded message sets and show that it can be strictly larger than the straightforward extension of the Korner-Marton region. The key new idea is indirect decoding, whereby a receiver who cannot directly decode a cloud center finds it indirectly by decoding satellite codewords. This idea is then used to establish new inner and outer bounds on the capacity region of the general 3-receiver broadcast channel with 2 and 3 degraded message sets. We show that these bounds are tight for some nontrivial cases. The results suggest that the capacity of the 3-receiver broadcast channel with degraded message sets is at least as hard to find as the capacity of the general 2-receiver broadcast channel with common and private messages.<|reference_end|>
arxiv
@article{nair2007the, title={The capacity of a class of 3-receiver broadcast channels with degraded message sets}, author={Chandra Nair and Abbas El Gamal}, journal={arXiv preprint arXiv:0712.3327}, year={2007}, archivePrefix={arXiv}, eprint={0712.3327}, primaryClass={cs.IT math.IT} }
nair2007the
arxiv-2164
0712.3329
Universal Intelligence: A Definition of Machine Intelligence
<|reference_start|>Universal Intelligence: A Definition of Machine Intelligence: A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.<|reference_end|>
arxiv
@article{legg2007universal, title={Universal Intelligence: A Definition of Machine Intelligence}, author={Shane Legg and Marcus Hutter}, journal={Minds & Machines, 17:4 (2007) pages 391-444}, year={2007}, number={IDSIA-10-07}, archivePrefix={arXiv}, eprint={0712.3329}, primaryClass={cs.AI} }
legg2007universal
arxiv-2165
0712.3331
How to Complete a Doubling Metric
<|reference_start|>How to Complete a Doubling Metric: In recent years, considerable advances have been made in the study of properties of metric spaces in terms of their doubling dimension. This line of research has not only enhanced our understanding of finite metrics, but has also resulted in many algorithmic applications. However, we still do not understand the interaction between various graph-theoretic (topological) properties of graphs, and the doubling (geometric) properties of the shortest-path metrics induced by them. For instance, the following natural question suggests itself: \emph{given a finite doubling metric $(V,d)$, is there always an \emph{unweighted} graph $(V',E')$ with $V\subseteq V'$ such that the shortest path metric $d'$ on $V'$ is still doubling, and which agrees with $d$ on $V$?} This is often useful, given that unweighted graphs are often easier to reason about. We show that for any metric space $(V,d)$, there is an \emph{unweighted} graph $(V',E')$ with shortest-path metric $d':V'\times V' \to \mathbb{R}_{\geq 0}$ such that -- for all $x,y \in V$, the distances $d(x,y) \leq d'(x,y) \leq (1+\epsilon) \cdot d(x,y)$, and -- the doubling dimension for $d'$ is not much more than that of $d$, where this change depends only on $\epsilon$ and not on the size of the graph. We show a similar result when both $(V,d)$ and $(V',E')$ are restricted to be trees: this gives a simpler proof that doubling trees embed into constant dimensional Euclidean space with constant distortion. We also show that our results are tight in terms of the tradeoff between distortion and dimension blowup.<|reference_end|>
arxiv
@article{gupta2007how, title={How to Complete a Doubling Metric}, author={Anupam Gupta and Kunal Talwar}, journal={arXiv preprint arXiv:0712.3331}, year={2007}, archivePrefix={arXiv}, eprint={0712.3331}, primaryClass={cs.DM cs.CG} }
gupta2007how
arxiv-2166
0712.3333
On the approximability of the vertex cover and related problems
<|reference_start|>On the approximability of the vertex cover and related problems: In this paper we show that the problem of identifying an edge $(i,j)$ in a graph $G$ such that there exists an optimal vertex cover $S$ of $G$ containing exactly one of the nodes $i$ and $j$ is NP-hard. Such an edge is called a weak edge. We then develop a polynomial time approximation algorithm for the vertex cover problem with performance guarantee $2-\frac{1}{1+\sigma}$, where $\sigma$ is an upper bound on a measure related to a weak edge of a graph. Further, we discuss a new relaxation of the vertex cover problem which is used in our approximation algorithm to obtain smaller values of $\sigma$. We also obtain linear programming representations of the vertex cover problem for special graphs. Our results provide new insights into the approximability of the vertex cover problem - a long standing open problem.<|reference_end|>
arxiv
@article{han2007on, title={On the approximability of the vertex cover and related problems}, author={Qiaoming Han and Abraham P. Punnen}, journal={arXiv preprint arXiv:0712.3333}, year={2007}, archivePrefix={arXiv}, eprint={0712.3333}, primaryClass={cs.DS cs.DM} }
han2007on
arxiv-2167
0712.3335
A polynomial time $\frac 3 2$ -approximation algorithm for the vertex cover problem on a class of graphs
<|reference_start|>A polynomial time $\frac 3 2$ -approximation algorithm for the vertex cover problem on a class of graphs: We develop a polynomial time 3/2-approximation algorithm to solve the vertex cover problem on a class of graphs satisfying a property called ``active edge hypothesis''. The algorithm also guarantees an optimal solution on specially structured graphs. Further, we give an extended algorithm which guarantees a vertex cover $S_1$ on an arbitrary graph such that $|S_1|\leq {3/2} |S^*|+\xi$ where $S^*$ is an optimal vertex cover and $\xi$ is an error bound identified by the algorithm. We obtained $\xi = 0$ for all the test problems we have considered which include specially constructed instances that were expected to be hard. So far we could not construct a graph that gives $\xi \not= 0$.<|reference_end|>
arxiv
@article{han2007a, title={A polynomial time $\frac 3 2$ -approximation algorithm for the vertex cover problem on a class of graphs}, author={Qiaoming Han, Abraham P. Punnen, and Yinyu Ye}, journal={arXiv preprint arXiv:0712.3335}, year={2007}, archivePrefix={arXiv}, eprint={0712.3335}, primaryClass={cs.DS cs.DM} }
han2007a
arxiv-2168
0712.3348
On Exponential Time Lower Bound of Knapsack under Backtracking
<|reference_start|>On Exponential Time Lower Bound of Knapsack under Backtracking: M. Alekhnovich et al. have recently proposed a model of algorithms, called the BT model, which generalizes both the priority model of Borodin, Nielsen and Rackoff, and a simple dynamic programming model by Woeginger. The BT model can be further divided into three kinds: fixed, adaptive and fully adaptive. They have proved exponential time lower bounds for exact and approximation algorithms under the adaptive BT model for the Knapsack problem. Their exact lower bound is $\Omega(2^{0.5n}/\sqrt{n})$; in this paper, we slightly improve the exact lower bound to about $\Omega(2^{0.69n}/\sqrt{n})$ by the same technique, with related parameters optimized.<|reference_end|>
arxiv
@article{li2007on, title={On Exponential Time Lower Bound of Knapsack under Backtracking}, author={Xin Li, Tian Liu}, journal={arXiv preprint arXiv:0712.3348}, year={2007}, archivePrefix={arXiv}, eprint={0712.3348}, primaryClass={cs.CC} }
li2007on
arxiv-2169
0712.3360
Compressed Text Indexes: From Theory to Practice!
<|reference_start|>Compressed Text Indexes:From Theory to Practice!: A compressed full-text self-index represents a text in a compressed form and still answers queries efficiently. This technology represents a breakthrough over the text indexing techniques of the previous decade, whose indexes required several times the size of the text. Although it is relatively new, this technology has matured up to a point where theoretical research is giving way to practical developments. Nonetheless this requires significant programming skills, a deep engineering effort, and a strong algorithmic background to dig into the research results. To date only isolated implementations and focused comparisons of compressed indexes have been reported, and they missed a common API, which prevented their re-use or deployment within other applications. The goal of this paper is to fill this gap. First, we present the existing implementations of compressed indexes from a practitioner's point of view. Second, we introduce the Pizza&Chili site, which offers tuned implementations and a standardized API for the most successful compressed full-text self-indexes, together with effective testbeds and scripts for their automatic validation and test. Third, we show the results of our extensive experiments on these codes with the aim of demonstrating the practical relevance of this novel and exciting technology.<|reference_end|>
arxiv
@article{ferragina2007compressed, title={Compressed Text Indexes: From Theory to Practice!}, author={Paolo Ferragina (1), Rodrigo Gonzalez (2), Gonzalo Navarro (2), Rossano Venturini (2) ((1) Dept. of Computer Science, University of Pisa, (2) Dept. of Computer Science, University of Chile)}, journal={arXiv preprint arXiv:0712.3360}, year={2007}, archivePrefix={arXiv}, eprint={0712.3360}, primaryClass={cs.DS} }
ferragina2007compressed
arxiv-2170
0712.3380
Extending the Overlap Graph for Gene Assembly in Ciliates
<|reference_start|>Extending the Overlap Graph for Gene Assembly in Ciliates: Gene assembly is an intricate biological process that has been studied formally and modeled through string and graph rewriting systems. Recently, a restriction of the general (intramolecular) model, called simple gene assembly, has been introduced. This restriction has subsequently been defined as a string rewriting system. We show that by extending the notion of overlap graph it is possible to define a graph rewriting system for two of the three types of rules that make up simple gene assembly. It turns out that this graph rewriting system is less involved than its corresponding string rewriting system. Finally, we give characterizations of the `power' of both types of graph rewriting rules. Because of the equivalence of these string and graph rewriting systems, the given characterizations can be carried over to the string rewriting system.<|reference_end|>
arxiv
@article{brijder2007extending, title={Extending the Overlap Graph for Gene Assembly in Ciliates}, author={Robert Brijder, Hendrik Jan Hoogeboom}, journal={arXiv preprint arXiv:0712.3380}, year={2007}, number={LIACS Technical Report 2007-05}, archivePrefix={arXiv}, eprint={0712.3380}, primaryClass={cs.LO} }
brijder2007extending
arxiv-2171
0712.3389
RZBENCH: Performance evaluation of current HPC architectures using low-level and application benchmarks
<|reference_start|>RZBENCH: Performance evaluation of current HPC architectures using low-level and application benchmarks: RZBENCH is a benchmark suite that was specifically developed to reflect the requirements of scientific supercomputer users at the University of Erlangen-Nuremberg (FAU). It comprises a number of application and low-level codes under a common build infrastructure that fosters maintainability and expandability. This paper reviews the structure of the suite and briefly introduces the most relevant benchmarks. In addition, some widely known standard benchmark codes are reviewed in order to emphasize the need for a critical review of often-cited performance results. Benchmark data is presented for the HLRB-II at LRZ Munich and a local InfiniBand Woodcrest cluster as well as two uncommon system architectures: A bandwidth-optimized InfiniBand cluster based on single socket nodes ("Port Townsend") and an early version of Sun's highly threaded T2 architecture ("Niagara 2").<|reference_end|>
arxiv
@article{hager2007rzbench:, title={RZBENCH: Performance evaluation of current HPC architectures using low-level and application benchmarks}, author={Georg Hager, Holger Stengel, Thomas Zeiser, Gerhard Wellein}, journal={arXiv preprint arXiv:0712.3389}, year={2007}, archivePrefix={arXiv}, eprint={0712.3389}, primaryClass={cs.DC cs.PF} }
hager2007rzbench:
arxiv-2172
0712.3402
Graph kernels between point clouds
<|reference_start|>Graph kernels between point clouds: Point clouds are sets of points in two or three dimensions. Most kernel methods for learning on sets of points have not yet dealt with the specific geometrical invariances and practical constraints associated with point clouds in computer vision and graphics. In this paper, we present extensions of graph kernels for point clouds, which make it possible to use kernel methods for such objects as shapes, line drawings, or any three-dimensional point clouds. In order to design rich and numerically efficient kernels with as few free parameters as possible, we use kernels between covariance matrices and their factorizations on graphical models. We derive polynomial time dynamic programming recursions and present applications to recognition of handwritten digits and Chinese characters from few training examples.<|reference_end|>
arxiv
@article{bach2007graph, title={Graph kernels between point clouds}, author={Francis Bach (WILLOW Project - Inria/Ens)}, journal={arXiv preprint arXiv:0712.3402}, year={2007}, archivePrefix={arXiv}, eprint={0712.3402}, primaryClass={cs.LG} }
bach2007graph
arxiv-2173
0712.3423
Tuplix Calculus
<|reference_start|>Tuplix Calculus: We introduce a calculus for tuplices, which are expressions that generalize matrices and vectors. Tuplices have an underlying data type for quantities that are taken from a zero-totalized field. We start with the core tuplix calculus CTC for entries and tests, which are combined using conjunctive composition. We define a standard model and prove that CTC is relatively complete with respect to it. The core calculus is extended with operators for choice, information hiding, scalar multiplication, clearing and encapsulation. We provide two examples of applications; one on incremental financial budgeting, and one on modular financial budget design.<|reference_end|>
arxiv
@article{bergstra2007tuplix, title={Tuplix Calculus}, author={J.A. Bergstra, A. Ponse, M.B. van der Zwaag}, journal={Scientific Annals of Computer Science, 18:35--61, 2008}, year={2007}, number={PRG0713}, archivePrefix={arXiv}, eprint={0712.3423}, primaryClass={cs.LO cs.CE} }
bergstra2007tuplix
arxiv-2174
0712.3433
AccelKey Selection Method for Mobile Devices
<|reference_start|>AccelKey Selection Method for Mobile Devices: Portable Electronic Devices usually utilize a small screen with limited viewing area and a keyboard with a limited number of keys. This makes it difficult to perform quick searches in data arrays containing more than a dozen items, such as an address book or song list. In this article we present a new data selection method which allows the user to quickly select an entry from a list using a 4-way navigation device such as a joystick, trackball or 4-way key pad. This method allows for quick navigation using just one hand, without looking at the screen.<|reference_end|>
arxiv
@article{zaliva2007accelkey, title={AccelKey Selection Method for Mobile Devices}, author={Vadim Zaliva}, journal={arXiv preprint arXiv:0712.3433}, year={2007}, archivePrefix={arXiv}, eprint={0712.3433}, primaryClass={cs.HC} }
zaliva2007accelkey
arxiv-2175
0712.3501
The Impact of Hard-Decision Detection on the Energy Efficiency of Phase and Frequency Modulation
<|reference_start|>The Impact of Hard-Decision Detection on the Energy Efficiency of Phase and Frequency Modulation: The central design challenge in next generation wireless systems is to have these systems operate at high bandwidths and provide high data rates while being cognizant of the energy consumption levels especially in mobile applications. Since communicating at very high data rates prohibits obtaining high bit resolutions from the analog-to-digital (A/D) converters, analysis of the energy efficiency under the assumption of hard-decision detection is called for to accurately predict the performance levels. In this paper, transmission over the additive white Gaussian noise (AWGN) channel, and coherent and noncoherent fading channels is considered, and the impact of hard-decision detection on the energy efficiency of phase and frequency modulations is investigated. Energy efficiency is analyzed by studying the capacity of these modulation schemes and the energy required to send one bit of information reliably in the low signal-to-noise ratio (SNR) regime. The capacity of hard-decision-detected phase and frequency modulations is characterized at low SNR levels through closed-form expressions for the first and second derivatives of the capacity at zero SNR. Subsequently, bit energy requirements in the low-SNR regime are identified. The increases in the bit energy incurred by hard-decision detection and channel fading are quantified. Moreover, practical design guidelines for the selection of the constellation size are drawn from the analysis of the spectral efficiency--bit energy tradeoff.<|reference_end|>
arxiv
@article{gursoy2007the, title={The Impact of Hard-Decision Detection on the Energy Efficiency of Phase and Frequency Modulation}, author={Mustafa Cenk Gursoy}, journal={arXiv preprint arXiv:0712.3501}, year={2007}, doi={10.1109/TWC.2009.080998}, archivePrefix={arXiv}, eprint={0712.3501}, primaryClass={cs.IT math.IT} }
gursoy2007the
arxiv-2176
0712.3568
A Partition-Based Relaxation For Steiner Trees
<|reference_start|>A Partition-Based Relaxation For Steiner Trees: The Steiner tree problem is a classical NP-hard optimization problem with a wide range of practical applications. In an instance of this problem, we are given an undirected graph G=(V,E), a set of terminals R, and non-negative costs c_e for all edges e in E. Any tree that contains all terminals is called a Steiner tree; the goal is to find a minimum-cost Steiner tree. The nodes V \ R are called Steiner nodes. The best approximation algorithm known for the Steiner tree problem is due to Robins and Zelikovsky (SIAM J. Discrete Math, 2005); their greedy algorithm achieves a performance guarantee of 1+(ln 3)/2 ~ 1.55. The best known linear programming (LP)-based algorithm, on the other hand, is due to Goemans and Bertsimas (Math. Programming, 1993) and achieves an approximation ratio of 2-2/|R|. In this paper we establish a link between greedy and LP-based approaches by showing that Robins and Zelikovsky's algorithm has a natural primal-dual interpretation with respect to a novel partition-based linear programming relaxation. We also exhibit surprising connections between the new formulation and existing LPs and we show that the new LP is stronger than the bidirected cut formulation. An instance is b-quasi-bipartite if each connected component of G \ R has at most b vertices. We show that Robins' and Zelikovsky's algorithm has an approximation ratio better than 1+(ln 3)/2 for such instances, and we prove that the integrality gap of our LP is between 8/7 and (2b+1)/(b+1).<|reference_end|>
arxiv
@article{konemann2007a, title={A Partition-Based Relaxation For Steiner Trees}, author={Jochen Konemann, David Pritchard, Kunlun Tan}, journal={arXiv preprint arXiv:0712.3568}, year={2007}, archivePrefix={arXiv}, eprint={0712.3568}, primaryClass={cs.DS} }
konemann2007a
arxiv-2177
0712.3576
Protocols For Half-Duplex Multiple Relay Networks
<|reference_start|>Protocols For Half-Duplex Multiple Relay Networks: In this paper we present several strategies for multiple relay networks which are constrained by a half-duplex operation, i.e., each node either transmits or receives on a particular resource. Using the discrete memoryless multiple relay channel we present achievable rates for a multilevel partial decode-and-forward approach which generalizes previous results presented by Kramer and Khojastepour et al. Furthermore, we derive a compress-and-forward approach using a regular encoding scheme which simplifies the encoding and decoding scheme and improves the achievable rates in general. Finally, we give achievable rates for a mixed strategy used in a four-terminal network with alternately transmitting relay nodes.<|reference_end|>
arxiv
@article{rost2007protocols, title={Protocols For Half-Duplex Multiple Relay Networks}, author={P. Rost and G. Fettweis}, journal={arXiv preprint arXiv:0712.3576}, year={2007}, archivePrefix={arXiv}, eprint={0712.3576}, primaryClass={cs.IT math.IT} }
rost2007protocols
arxiv-2178
0712.3587
Pattern Recognition System Design with Linear Encoding for Discrete Patterns
<|reference_start|>Pattern Recognition System Design with Linear Encoding for Discrete Patterns: In this paper, designs and analyses of compressive recognition systems are discussed, and also a method of establishing a dual connection between designs of good communication codes and designs of recognition systems is presented. Pattern recognition systems based on compressed patterns and compressed sensor measurements can be designed using low-density matrices. We examine truncation encoding where a subset of the patterns and measurements are stored perfectly while the rest is discarded. We also examine the use of LDPC parity check matrices for compressing measurements and patterns. We show how more general ensembles of good linear codes can be used as the basis for pattern recognition system design, yielding system design strategies for more general noise models.<|reference_end|>
arxiv
@article{lai2007pattern, title={Pattern Recognition System Design with Linear Encoding for Discrete Patterns}, author={Po-Hsiang Lai and Joseph A. O'Sullivan}, journal={arXiv preprint arXiv:0712.3587}, year={2007}, archivePrefix={arXiv}, eprint={0712.3587}, primaryClass={cs.IT cs.CV math.IT} }
lai2007pattern
arxiv-2179
0712.3617
A Unified Framework for Pricing Credit and Equity Derivatives
<|reference_start|>A Unified Framework for Pricing Credit and Equity Derivatives: We propose a model which can be jointly calibrated to the corporate bond term structure and equity option volatility surface of the same company. Our purpose is to obtain explicit bond and equity option pricing formulas that can be calibrated to find a risk neutral model that matches a set of observed market prices. This risk neutral model can then be used to price more exotic, illiquid or over-the-counter derivatives. We observe that the model-implied credit default swap (CDS) spread matches the market CDS spread and that our model produces a very desirable CDS spread term structure. This observation is worth noting since, without calibrating any parameter to the CDS spread data, it is matched by the CDS spread that our model generates using the available information from the equity options and corporate bond markets. We also observe that our model matches the equity option implied volatility surface well since we properly account for the default risk premium in the implied volatility surface. We demonstrate the importance of accounting for the default risk and stochastic interest rate in equity option pricing by comparing our results to Fouque, Papanicolaou, Sircar and Solna (2003), which only accounts for stochastic volatility.<|reference_end|>
arxiv
@article{bayraktar2007a, title={A Unified Framework for Pricing Credit and Equity Derivatives}, author={Erhan Bayraktar, Bo Yang}, journal={arXiv preprint arXiv:0712.3617}, year={2007}, archivePrefix={arXiv}, eprint={0712.3617}, primaryClass={cs.CE} }
bayraktar2007a
arxiv-2180
0712.3641
Controlling Delay-induced Hopf bifurcation in Internet congestion control system
<|reference_start|>Controlling Delay-induced Hopf bifurcation in Internet congestion control system: This paper focuses on Hopf bifurcation control in a dual model of Internet congestion control algorithms which is modeled as a delay differential equation (DDE). By choosing communication delay as a bifurcation parameter, it has been demonstrated that the system loses stability and a Hopf bifurcation occurs when communication delay passes through a critical value. Therefore, a time-delayed feedback control method is applied to the system for delaying the onset of undesirable Hopf bifurcation. Theoretical analysis and numerical simulations confirm that the delayed feedback controller is efficient in controlling Hopf bifurcation in Internet congestion control system. Moreover, the direction of the Hopf bifurcation and the stability of the bifurcating periodic solutions are determined by applying the center manifold theorem and the normal form theory.<|reference_end|>
arxiv
@article{ding2007controlling, title={Controlling Delay-induced Hopf bifurcation in Internet congestion control system}, author={Dawei Ding, Jie Zhu, Xiaoshu Luo, Yuliang Liu}, journal={arXiv preprint arXiv:0712.3641}, year={2007}, archivePrefix={arXiv}, eprint={0712.3641}, primaryClass={cs.NI} }
ding2007controlling
arxiv-2181
0712.3654
Improving the Performance of PieceWise Linear Separation Incremental Algorithms for Practical Hardware Implementations
<|reference_start|>Improving the Performance of PieceWise Linear Separation Incremental Algorithms for Practical Hardware Implementations: In this paper we shall review the common problems associated with Piecewise Linear Separation incremental algorithms. This kind of neural model yields poor performance when dealing with some classification problems, due to the evolving schemes used to construct the resulting networks. To avoid this undesirable behavior we shall propose a modification criterion. It is based upon the definition of a function which provides information about the quality of the network growth process during the learning phase. This function is evaluated periodically as the network structure evolves and, as we shall show through exhaustive benchmarks, makes it possible to considerably improve the performance (measured in terms of network complexity and generalization capabilities) offered by the networks generated by these incremental models.<|reference_end|>
arxiv
@article{de lara2007improving, title={Improving the Performance of PieceWise Linear Separation Incremental Algorithms for Practical Hardware Implementations}, author={Alejandro Chinea Manrique De Lara, Juan Manuel Moreno Arostegui, Jordi Madrenas, Joan Cabestany}, journal={Biological and Artificial Computation: From Neuroscience to Technology, J.Mira, R.Moreno-Diaz, J.Cabestany (eds.), pp. 607-616, Springer-Verlag, 1997}, year={2007}, archivePrefix={arXiv}, eprint={0712.3654}, primaryClass={cs.NE cs.AI cs.LG} }
de lara2007improving
arxiv-2182
0712.3705
Framework and Resources for Natural Language Parser Evaluation
<|reference_start|>Framework and Resources for Natural Language Parser Evaluation: Because of the wide variety of contemporary practices used in the automatic syntactic parsing of natural languages, it has become necessary to analyze and evaluate the strengths and weaknesses of different approaches. This research is all the more necessary because there are currently no genre- and domain-independent parsers that are able to analyze unrestricted text with 100% preciseness (I use this term to refer to the correctness of analyses assigned by a parser). All these factors create a need for methods and resources that can be used to evaluate and compare parsing systems. This research describes: (1) A theoretical analysis of current achievements in parsing and parser evaluation. (2) A framework (called FEPa) that can be used to carry out practical parser evaluations and comparisons. (3) A set of new evaluation resources: FiEval is a Finnish treebank under construction, and MGTS and RobSet are parser evaluation resources in English. (4) The results of experiments in which the developed evaluation framework and the two resources for English were used for evaluating a set of selected parsers.<|reference_end|>
arxiv
@article{kakkonen2007framework, title={Framework and Resources for Natural Language Parser Evaluation}, author={Tuomo Kakkonen}, journal={arXiv preprint arXiv:0712.3705}, year={2007}, number={University of Joensuu, Computer Science Dissertations 19}, archivePrefix={arXiv}, eprint={0712.3705}, primaryClass={cs.CL} }
kakkonen2007framework
arxiv-2183
0712.3757
$m$-Sequences of Different Lengths with Four-Valued Cross Correlation
<|reference_start|>$m$-Sequences of Different Lengths with Four-Valued Cross Correlation: Considered is the distribution of the cross correlation between $m$-sequences of length $2^m-1$, where $m$ is even, and $m$-sequences of shorter length $2^{m/2}-1$. The infinite family of pairs of $m$-sequences with four-valued cross correlation is constructed and the complete correlation distribution of this family is determined.<|reference_end|>
arxiv
@article{helleseth2007$m$-sequences, title={$m$-Sequences of Different Lengths with Four-Valued Cross Correlation}, author={Tor Helleseth and Alexander Kholosha and Aina Johanssen}, journal={arXiv preprint arXiv:0712.3757}, year={2007}, archivePrefix={arXiv}, eprint={0712.3757}, primaryClass={cs.DM cs.CR} }
helleseth2007$m$-sequences
arxiv-2184
0712.3807
Improved Collaborative Filtering Algorithm via Information Transformation
<|reference_start|>Improved Collaborative Filtering Algorithm via Information Transformation: In this paper, we propose a spreading activation approach for collaborative filtering (SA-CF). By using the opinion spreading process, the similarity between any users can be obtained. The algorithm has remarkably higher accuracy than the standard collaborative filtering (CF) using Pearson correlation. Furthermore, we introduce a free parameter $\beta$ to regulate the contributions of objects to user-user correlations. The numerical results indicate that decreasing the influence of popular objects can further improve the algorithmic accuracy and personality. We argue that a better algorithm should simultaneously require less computation and generate higher accuracy. Accordingly, we further propose an algorithm involving only the top-$N$ similar neighbors for each target user, which has both less computational complexity and higher algorithmic accuracy.<|reference_end|>
arxiv
@article{liu2007improved, title={Improved Collaborative Filtering Algorithm via Information Transformation}, author={Jian-Guo Liu, Bing-Hong Wang, Qiang Guo}, journal={Int. J. Mod. Phys. C 20(2), 285-293 (2009)}, year={2007}, doi={10.1142/S0129183109013613}, archivePrefix={arXiv}, eprint={0712.3807}, primaryClass={cs.LG cs.CY} }
liu2007improved
arxiv-2185
0712.3823
Multidimensional reconciliation for continuous-variable quantum key distribution
<|reference_start|>Multidimensional reconciliation for continuous-variable quantum key distribution: We propose a method for extracting an errorless secret key in a continuous-variable quantum key distribution protocol, which is based on Gaussian modulation of coherent states and homodyne detection. The crucial feature is an eight-dimensional reconciliation method, based on the algebraic properties of octonions. Since the protocol does not use any postselection, it can be proven secure against arbitrary collective attacks, by using well-established theorems on the optimality of Gaussian attacks. By using this new coding scheme with an appropriate signal to noise ratio, the distance for secure continuous-variable quantum key distribution can be significantly extended.<|reference_end|>
arxiv
@article{leverrier2007multidimensional, title={Multidimensional reconciliation for continuous-variable quantum key distribution}, author={Anthony Leverrier, Romain All\'eaume, Joseph Boutros, Gilles Z\'emor, Philippe Grangier}, journal={Phys. Rev. A 77, 042325 (2008)}, year={2007}, doi={10.1103/PhysRevA.77.042325}, archivePrefix={arXiv}, eprint={0712.3823}, primaryClass={quant-ph cs.IT math.IT} }
leverrier2007multidimensional
arxiv-2186
0712.3825
Tests of Machine Intelligence
<|reference_start|>Tests of Machine Intelligence: Although the definition and measurement of intelligence is clearly of fundamental importance to the field of artificial intelligence, no general survey of definitions and tests of machine intelligence exists. Indeed few researchers are even aware of alternatives to the Turing test and its many derivatives. In this paper we fill this gap by providing a short survey of the many tests of machine intelligence that have been proposed.<|reference_end|>
arxiv
@article{legg2007tests, title={Tests of Machine Intelligence}, author={Shane Legg and Marcus Hutter}, journal={50 Years of Artificial Intelligence (2007) pages 232-242}, year={2007}, number={IDSIA-11-07}, archivePrefix={arXiv}, eprint={0712.3825}, primaryClass={cs.AI} }
legg2007tests
arxiv-2187
0712.3829
Quantum Property Testing of Group Solvability
<|reference_start|>Quantum Property Testing of Group Solvability: Testing efficiently whether a finite set with a binary operation over it, given as an oracle, is a group is a well-known open problem in the field of property testing. Recently, Friedl, Ivanyos and Santha have made a significant step in the direction of solving this problem by showing that it is possible to test efficiently whether the input is an Abelian group or is far, with respect to some distance, from any Abelian group. In this paper, we go a step further and construct an efficient quantum algorithm that tests whether the input is a solvable group, or is far from any solvable group. More precisely, the number of queries used by our algorithm is polylogarithmic in the size of the set.<|reference_end|>
arxiv
@article{inui2007quantum, title={Quantum Property Testing of Group Solvability}, author={Yoshifumi Inui and Francois Le Gall}, journal={Algorithmica 59(1): 35-47 (2011)}, year={2007}, doi={10.1007/s00453-009-9338-8}, archivePrefix={arXiv}, eprint={0712.3829}, primaryClass={quant-ph cs.DS} }
inui2007quantum
arxiv-2188
0712.3830
TCHR: a framework for tabled CLP
<|reference_start|>TCHR: a framework for tabled CLP: Tabled Constraint Logic Programming is a powerful execution mechanism for dealing with Constraint Logic Programming without worrying about fixpoint computation. Various applications, e.g. in the fields of program analysis and model checking, have been proposed. Unfortunately, a high-level system for developing new applications is lacking, and programmers are forced to resort to complicated ad hoc solutions. This paper presents TCHR, a high-level framework for tabled Constraint Logic Programming. It integrates, in a light-weight manner, Constraint Handling Rules (CHR), a high-level language for constraint solvers, with tabled Logic Programming. The framework is easily instantiated with new application-specific constraint domains. Various high-level operations can be instantiated to control performance. In particular, we propose a novel, generalized technique for compacting answer sets.<|reference_end|>
arxiv
@article{schrijvers2007tchr:, title={TCHR: a framework for tabled CLP}, author={Tom Schrijvers, Bart Demoen, David S. Warren}, journal={arXiv preprint arXiv:0712.3830}, year={2007}, archivePrefix={arXiv}, eprint={0712.3830}, primaryClass={cs.PL} }
schrijvers2007tchr:
arxiv-2189
0712.3831
Hopf bifurcation analysis in a dual model of Internet congestion control algorithm with communication delay
<|reference_start|>Hopf bifurcation analysis in a dual model of Internet congestion control algorithm with communication delay: This paper focuses on the delay-induced Hopf bifurcation in a dual model of Internet congestion control algorithms, which can be modeled as a time-delay system described by a first-order delay differential equation (DDE). By choosing the communication delay as the bifurcation parameter, we demonstrate that the system loses its stability and a Hopf bifurcation occurs when the communication delay passes through a critical value. Moreover, the bifurcating periodic solution of the system is calculated by means of perturbation methods. The discussion of the stability of the periodic solutions involves the computation of Floquet exponents by considering the corresponding Poincaré-Lindstedt series expansion. Finally, numerical simulations are provided to verify the theoretical analysis.<|reference_end|>
arxiv
@article{ding2007hopf, title={Hopf bifurcation analysis in a dual model of Internet congestion control algorithm with communication delay}, author={Dawei Ding, Jie Zhu, Xiaoshu Luo, Yuliang Liu}, journal={arXiv preprint arXiv:0712.3831}, year={2007}, archivePrefix={arXiv}, eprint={0712.3831}, primaryClass={cs.NI} }
ding2007hopf
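The abstract above does not reproduce the dual congestion-control model itself, so the sketch below uses a stand-in: a fixed-step Euler integrator for a scalar delay differential equation, demonstrated on the delayed logistic equation, whose equilibrium also loses stability through a delay-induced Hopf bifurcation (at r*tau = pi/2). The model, step size, and horizon are illustrative assumptions, not the paper's system.

def simulate_dde(rhs, tau, y0, dt=0.001, T=80.0):
    """Fixed-step Euler scheme for y'(t) = rhs(y(t), y(t - tau)) with constant history y0 on [-tau, 0]."""
    n_delay = int(round(tau / dt))
    ys = [y0] * (n_delay + 1)                       # history buffer covering [-tau, 0]
    for _ in range(int(T / dt)):
        y, y_lag = ys[-1], ys[-1 - n_delay]
        ys.append(y + dt * rhs(y, y_lag))
    return ys

r = 1.0                                             # Hopf threshold of this toy model: tau = pi/(2 r) ~ 1.571
for tau in (1.2, 2.0):
    ys = simulate_dde(lambda y, y_lag: r * y * (1.0 - y_lag), tau, y0=0.5)
    tail = ys[-20000:]                              # last 20 time units, after transients have decayed
    print(f"tau = {tau}: amplitude of the tail ~ {max(tail) - min(tail):.4f}")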
arxiv-2190
0712.3858
Bottleneck flows in networks
<|reference_start|>Bottleneck flows in networks: The bottleneck network flow problem (BNFP) is a generalization of several well-studied bottleneck problems such as the bottleneck transportation problem (BTP), the bottleneck assignment problem (BAP), the bottleneck path problem (BPP), and so on. In this paper we provide a review of important results on this topic and its various special cases. We observe that the BNFP can be solved as a sequence of $O(\log n)$ maximum flow problems. However, special augmenting-path-based algorithms for the maximum flow problem can be modified to obtain algorithms for the BNFP with the property that these variations and the corresponding maximum flow algorithms have identical worst-case time complexity. On unit capacity networks we show that the BNFP can be solved in $O(\min\{m(n\log n)^{2/3}, m^{3/2}\sqrt{\log n}\})$ time. This improves the best available algorithm by a factor of $\sqrt{\log n}$. On unit capacity simple graphs, we show that the BNFP can be solved in $O(m \sqrt{n \log n})$ time. As a consequence, we have an $O(m \sqrt{n \log n})$ algorithm for the BTP with unit arc capacities.<|reference_end|>
arxiv
@article{punnen2007bottleneck, title={Bottleneck flows in networks}, author={Abraham P. Punnen and Ruonan Zhang}, journal={arXiv preprint arXiv:0712.3858}, year={2007}, archivePrefix={arXiv}, eprint={0712.3858}, primaryClass={cs.DS} }
punnen2007bottleneck
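A hedged sketch of the reduction mentioned in the record above: the bottleneck value can be found with a binary search over the sorted distinct arc weights, each step answering a max-flow feasibility question on the sub-network of arcs below the threshold. The graph, weights, capacities, and demanded flow value are invented for illustration; networkx is assumed to be available.

import networkx as nx

def bottleneck_threshold(G, s, t, demand):
    """Smallest arc weight c such that the arcs with weight <= c already carry a flow of value `demand`.
    Each binary-search step is one maximum-flow feasibility check, mirroring the O(log n) reduction."""
    weights = sorted({d["weight"] for _, _, d in G.edges(data=True)})
    lo, hi, best = 0, len(weights) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        H = nx.DiGraph()
        H.add_edges_from((u, v, d) for u, v, d in G.edges(data=True) if d["weight"] <= weights[mid])
        feasible = (H.has_node(s) and H.has_node(t)
                    and nx.maximum_flow_value(H, s, t, capacity="capacity") >= demand)
        if feasible:
            best, hi = weights[mid], mid - 1
        else:
            lo = mid + 1
    return best

G = nx.DiGraph()
G.add_edge("s", "a", capacity=4, weight=2)
G.add_edge("s", "b", capacity=3, weight=5)
G.add_edge("a", "t", capacity=4, weight=3)
G.add_edge("b", "t", capacity=3, weight=1)
print(bottleneck_threshold(G, "s", "t", demand=5))   # -> 5: the weight-5 arc cannot be avoided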
arxiv-2191
0712.3870
Substitute Valuations: Generation and Structure
<|reference_start|>Substitute Valuations: Generation and Structure: Substitute valuations (in some contexts called gross substitute valuations) are prominent in combinatorial auction theory. An algorithm is given in this paper for generating a substitute valuation through Monte Carlo simulation. In addition, the geometry of the set of all substitute valuations for a fixed number of goods $K$ is investigated. The set consists of a union of polyhedra, and the maximal polyhedra are identified for $K=4$. It is shown that the maximum dimension of the maximal polyhedra increases with $K$ nearly as fast as $2^K$. Consequently, under broad conditions, if a combinatorial algorithm can present an arbitrary substitute valuation given a list of input numbers, the list must grow nearly as fast as $2^K$.<|reference_end|>
arxiv
@article{hajek2007substitute, title={Substitute Valuations: Generation and Structure}, author={Bruce Hajek}, journal={arXiv preprint arXiv:0712.3870}, year={2007}, doi={10.1016/j.peva.2008.07.001}, archivePrefix={arXiv}, eprint={0712.3870}, primaryClass={cs.GT cs.PF} }
hajek2007substitute
arxiv-2192
0712.3876
Explicit Non-Adaptive Combinatorial Group Testing Schemes
<|reference_start|>Explicit Non-Adaptive Combinatorial Group Testing Schemes: Group testing is a long-studied problem in combinatorics: a small set of $r$ ill people should be identified out of a whole population of $n$ people by using only queries (tests) of the form "Does set $X$ contain an ill person?". In this paper we provide an explicit construction of a testing scheme which is better (smaller) than any known explicit construction. This scheme has $\Theta(\min\{r^2 \ln n, n\})$ tests, which is as many as the best non-explicit schemes have. In our construction we use a fact that may be of value in its own right: linear error-correcting codes with parameters $[m,k,\delta m]_q$ meeting the Gilbert-Varshamov bound may be constructed quite efficiently, in $\Theta(q^k m)$ time.<|reference_end|>
arxiv
@article{porat2007explicit, title={Explicit Non-Adaptive Combinatorial Group Testing Schemes}, author={Ely Porat and Amir Rothschild}, journal={arXiv preprint arXiv:0712.3876}, year={2007}, archivePrefix={arXiv}, eprint={0712.3876}, primaryClass={cs.DS} }
porat2007explicit
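A small Python sketch of the non-adaptive setting described in the record above, using a random pooling matrix in place of the paper's explicit construction and the standard "appears in a negative test, hence healthy" decoding rule. The number of tests, the population size, and the random-matrix density are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
n, r, tests = 200, 3, 60                          # population size, number of ill people, pooled tests

# Random pooling matrix; the paper's point is an *explicit* construction of comparable size.
M = rng.random((tests, n)) < 1.0 / (r + 1)        # each person joins each test independently

ill = rng.choice(n, size=r, replace=False)
truth = np.zeros(n, dtype=bool)
truth[ill] = True
outcomes = (M & truth).any(axis=1)                # a test is positive iff it contains an ill person

# Decoding: anyone who shows up in at least one negative test must be healthy.
declared_ill = ~M[~outcomes].any(axis=0)          # may contain a few false positives for a random matrix
print("true ill:", sorted(ill.tolist()), " declared ill:", sorted(np.flatnonzero(declared_ill).tolist()))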
arxiv-2193
0712.3896
Tighter and Stable Bounds for Marcum Q-Function
<|reference_start|>Tighter and Stable Bounds for Marcum Q-Function: This paper proposes new bounds for the Marcum Q-function, which prove extremely tight and outperform all bounds previously proposed in the literature. Moreover, the proposed bounds are accurate and stable for both large and small values of the parameters of the Marcum Q-function, where the previously introduced bounds are poor and even useless under some conditions. The new bounds are derived from refined approximations of the zeroth-order modified Bessel function over the integration region of the Marcum Q-function. They should be useful since they remain tight whether the parameters are large or small.<|reference_end|>
arxiv
@article{wang2007tighter, title={Tighter and Stable Bounds for Marcum Q-Function}, author={Jiangping Wang}, journal={arXiv preprint arXiv:0712.3896}, year={2007}, archivePrefix={arXiv}, eprint={0712.3896}, primaryClass={cs.IT math.IT} }
wang2007tighter
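The sketch below does not reproduce the paper's bounds (their exact form is in the paper); it only cross-checks two standard ways of evaluating the Marcum Q-function that such bounds are measured against: the noncentral chi-square tail identity and direct quadrature of the defining integral for the first-order case, written with the exponentially scaled Bessel function so it stays stable for large arguments. SciPy is assumed to be available.

import numpy as np
from scipy import integrate, special, stats

def marcum_q_ncx2(M, a, b):
    """Q_M(a, b) via the noncentral chi-square tail: Q_M(a, b) = P[X > b^2], X ~ ncx2(2M, a^2)."""
    return stats.ncx2.sf(b**2, df=2 * M, nc=a**2)

def marcum_q1_quad(a, b):
    """Q_1(a, b) by quadrature of its defining integral; uses ive(0, a x) = I_0(a x) exp(-a x)
    so the integrand x * exp(-(x - a)^2 / 2) * ive(0, a x) never overflows."""
    integrand = lambda x: x * np.exp(-(x - a) ** 2 / 2.0) * special.ive(0, a * x)
    val, _ = integrate.quad(integrand, b, np.inf)
    return val

for a, b in [(1.0, 2.0), (3.0, 1.0), (5.0, 7.0)]:
    print(a, b, marcum_q_ncx2(1, a, b), marcum_q1_quad(a, b))   # the two evaluations should agree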
arxiv-2194
0712.3916
Discrete logarithms in curves over finite fields
<|reference_start|>Discrete logarithms in curves over finite fields: A survey on algorithms for computing discrete logarithms in Jacobians of curves over finite fields.<|reference_end|>
arxiv
@article{enge2007discrete, title={Discrete logarithms in curves over finite fields}, author={Andreas Enge (INRIA Futurs)}, journal={arXiv preprint arXiv:0712.3916}, year={2007}, archivePrefix={arXiv}, eprint={0712.3916}, primaryClass={cs.CR cs.DM math.AG} }
enge2007discrete
arxiv-2195
0712.3925
QIS-XML: A metadata specification for Quantum Information Science
<|reference_start|>QIS-XML: A metadata specification for Quantum Information Science: While Quantum Information Science (QIS) is still in its infancy, the ability of quantum-based hardware or computers to communicate and integrate with their classical counterparts will be a major requirement for their success. Little attention, however, has been paid to this aspect of QIS. To manage and exchange information between systems, today's classical Information Technology (IT) commonly uses the eXtensible Markup Language (XML) and its related tools. XML is composed of numerous specifications related to various fields of expertise. No such global specification, however, has been defined for quantum computers. QIS-XML is a proposed XML metadata specification for the description of fundamental components of QIS (gates & circuits) and a platform for the development of a hardware-independent, low-level pseudo-code for quantum algorithms. This paper lays out the general characteristics of the QIS-XML specification and outlines practical applications through prototype use cases.<|reference_end|>
arxiv
@article{heus2007qis-xml:, title={QIS-XML: A metadata specification for Quantum Information Science}, author={Pascal Heus, Richard Gomez}, journal={arXiv preprint arXiv:0712.3925}, year={2007}, archivePrefix={arXiv}, eprint={0712.3925}, primaryClass={cs.SE cs.DB quant-ph} }
heus2007qis-xml:
arxiv-2196
0712.3936
Lagrangian Relaxation and Partial Cover
<|reference_start|>Lagrangian Relaxation and Partial Cover: Lagrangian relaxation has been used extensively in the design of approximation algorithms. This paper studies its strengths and limitations when applied to Partial Cover.<|reference_end|>
arxiv
@article{mestre2007lagrangian, title={Lagrangian Relaxation and Partial Cover}, author={Julián Mestre}, journal={arXiv preprint arXiv:0712.3936}, year={2007}, archivePrefix={arXiv}, eprint={0712.3936}, primaryClass={cs.DS cs.DM} }
mestre2007lagrangian
arxiv-2197
0712.3964
Cryptanalysis of an Image Encryption Scheme Based on a Compound Chaotic Sequence
<|reference_start|>Cryptanalysis of an Image Encryption Scheme Based on a Compound Chaotic Sequence: Recently, an image encryption scheme based on a compound chaotic sequence was proposed. In this paper, the security of the scheme is studied and the following problems are found: (1) a differential chosen-plaintext attack can break the scheme with only three chosen plain-images; (2) there is a number of weak keys and some equivalent keys for encryption; (3) the scheme is not sensitive to the changes of plain-images; and (4) the compound chaotic sequence does not work as a good random number resource.<|reference_end|>
arxiv
@article{li2007cryptanalysis, title={Cryptanalysis of an Image Encryption Scheme Based on a Compound Chaotic Sequence}, author={Chengqing Li, Shujun Li, Guanrong Chen and Wolfgang A. Halang}, journal={arXiv preprint arXiv:0712.3964}, year={2007}, doi={10.1016/j.imavis.2008.09.004}, archivePrefix={arXiv}, eprint={0712.3964}, primaryClass={cs.CR cs.MM} }
li2007cryptanalysis
arxiv-2198
0712.3973
GUIDE: Unifying Evolutionary Engines through a Graphical User Interface
<|reference_start|>GUIDE: Unifying Evolutionary Engines through a Graphical User Interface: Many kinds of Evolutionary Algorithms (EAs) have been described in the literature over the last 30 years. However, though most of them share a common structure, no existing software package allows the user to actually shift from one model to another by simply changing a few parameters, e.g. in a single window of a Graphical User Interface. This paper presents GUIDE, a Graphical User Interface for DREAM Experiments that, among other user-friendly features, unifies all kinds of EAs into a single panel, as far as evolution parameters are concerned. Such a window can be used either to ask for one of the well-known ready-to-use algorithms, or to very easily explore new combinations that have not yet been studied. Another advantage of grouping all necessary elements to describe virtually all kinds of EAs is that it creates a fantastic pedagogical tool to teach EAs to students and newcomers to the field.<|reference_end|>
arxiv
@article{collet2007guide:, title={GUIDE: Unifying Evolutionary Engines through a Graphical User Interface}, author={Pierre Collet (LIL), Marc Schoenauer (INRIA Rocquencourt)}, journal={Dans Evolution Artificielle 2936 (2003) 203-215}, year={2007}, archivePrefix={arXiv}, eprint={0712.3973}, primaryClass={cs.NE} }
collet2007guide:
arxiv-2199
0712.3980
Distributed Slicing in Dynamic Systems
<|reference_start|>Distributed Slicing in Dynamic Systems: Peer-to-peer (P2P) systems are moving from application-specific architectures to a generic service-oriented design philosophy. This raises interesting problems in connection with providing useful P2P middleware services capable of dealing with resource assignment and management in a large-scale, heterogeneous and unreliable environment. The slicing service has been proposed to allow for an automatic partitioning of P2P networks into groups (slices) that represent a controllable amount of some resource and that are also relatively homogeneous with respect to that resource. In this paper we propose two gossip-based algorithms to solve the distributed slicing problem. The first algorithm speeds up an existing algorithm that sorts a set of uniform random numbers. The second algorithm statistically approximates the rank of nodes in the ordering. The scalability, efficiency and resilience to dynamics of both algorithms rely on their gossip-based models. These algorithms are proved viable theoretically and experimentally.<|reference_end|>
arxiv
@article{fernandez2007distributed, title={Distributed Slicing in Dynamic Systems}, author={Antonio Fernandez (LADyR), Vincent Gramoli (INRIA Futurs, IRISA), Ernesto Jimenez (EUI), Anne-Marie Kermarrec (IRISA), Michel Raynal (IRISA)}, journal={Dans The 27th International Conference on Distributed Computing Systems (ICDCS'07) (2007) 66}, year={2007}, number={ICDCS07}, archivePrefix={arXiv}, eprint={0712.3980}, primaryClass={cs.DC} }
fernandez2007distributed
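A toy simulation of the gossip-sorting idea sketched in the record above: every node draws a uniform random number, and repeated pairwise exchanges swap those numbers whenever their order disagrees with the order of the nodes' attribute values; a node then reads its slice directly off its (now approximately rank-ordered) random number. Full pairwise mixing, the attribute distribution, and the fixed number of exchanges are simplifying assumptions; the real protocols gossip over an overlay and handle churn.

import random

random.seed(0)
N, SLICES, ROUNDS = 100, 4, 20000

attribute = [random.random() for _ in range(N)]     # the resource each node owns (e.g. bandwidth)
token = [random.random() for _ in range(N)]         # uniform random numbers to be gossip-sorted

for _ in range(ROUNDS):                             # one "gossip exchange": two nodes compare and maybe swap
    i, j = random.sample(range(N), 2)
    if (attribute[i] - attribute[j]) * (token[i] - token[j]) < 0:
        token[i], token[j] = token[j], token[i]     # swap tokens so their order matches the attribute order

# A node estimates its slice directly from its token, i.e. its approximate normalised rank.
estimated = [min(int(token[i] * SLICES), SLICES - 1) for i in range(N)]
true_rank = sorted(range(N), key=lambda i: attribute[i])
exact = [0] * N
for r, i in enumerate(true_rank):
    exact[i] = r * SLICES // N
print("fraction of nodes in the correct slice:",
      sum(e == x for e, x in zip(estimated, exact)) / N)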
arxiv-2200
0712.4011
Asymptotic Mutual Information Statistics of Separately-Correlated Rician Fading MIMO Channels
<|reference_start|>Asymptotic Mutual Information Statistics of Separately-Correlated Rician Fading MIMO Channels: Precise characterization of the mutual information of MIMO systems is required to assess the throughput of wireless communication channels in the presence of Rician fading and spatial correlation. Here, we present an asymptotic approach that allows the distribution of the mutual information to be approximated by a Gaussian distribution in order to provide both the average achievable rate and the outage probability. More precisely, the mean and variance of the mutual information of the separately-correlated Rician fading MIMO channel are derived when the number of transmit and receive antennas grows asymptotically large and their ratio approaches a finite constant. The derivation is based on the replica method, an asymptotic technique widely used in theoretical physics and, more recently, in the performance analysis of communication (CDMA and MIMO) systems. The replica method allows very difficult system cases to be analyzed in a comparatively simple way, though some authors have pointed out that its assumptions are not always rigorous. Being aware of this, we underline the key assumptions made in this setting, which are quite similar to those made in the technical literature using the replica method for asymptotic analyses. As far as the convergence of the mutual information to the Gaussian distribution is concerned, it is shown to hold under some mild technical conditions, which are tantamount to assuming that the spatial correlation structure has no asymptotically dominant eigenmodes. The accuracy of the asymptotic approach is assessed by providing a sizeable number of numerical results. It is shown that the approximation is very accurate in a wide variety of system settings, even when the number of transmit and receive antennas is as small as a few units.<|reference_end|>
arxiv
@article{taricco2007asymptotic, title={Asymptotic Mutual Information Statistics of Separately-Correlated Rician Fading MIMO Channels}, author={Giorgio Taricco}, journal={arXiv preprint arXiv:0712.4011}, year={2007}, doi={10.1109/TIT.2008.926415}, archivePrefix={arXiv}, eprint={0712.4011}, primaryClass={cs.IT math.IT} }
taricco2007asymptotic
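A Monte Carlo sketch related to the record above: it draws separately-correlated Rician channels, computes the mutual information log2 det(I + (SNR/nt) H H^H), and reports the empirical mean, standard deviation, and skewness as a crude check of how Gaussian the distribution looks. The correlation matrices, Rician K-factor, antenna counts, and SNR are illustrative choices, and the moments are estimated by simulation rather than by the paper's replica-based formulas.

import numpy as np

rng = np.random.default_rng(0)
nt, nr, snr, K, trials = 4, 4, 10.0, 3.0, 20000

# Illustrative exponential correlation matrices at both ends and a rank-one deterministic (LOS) mean.
def exp_corr(n, rho):
    return rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

T, R = exp_corr(nt, 0.5), exp_corr(nr, 0.3)
Tsq, Rsq = np.linalg.cholesky(T), np.linalg.cholesky(R)
H_bar = np.ones((nr, nt), dtype=complex)                # toy line-of-sight component

mi = np.empty(trials)
for k in range(trials):
    Hw = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    H = np.sqrt(K / (K + 1)) * H_bar + np.sqrt(1 / (K + 1)) * (Rsq @ Hw @ Tsq.conj().T)
    mi[k] = np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T).real)

print(f"mean {mi.mean():.3f} bit/s/Hz, std {mi.std():.3f}, "
      f"skewness {((mi - mi.mean())**3).mean() / mi.std()**3:.3f} (near 0 if approximately Gaussian)")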