Record fields: corpus_id (string, 7-12 chars); paper_id (string, 9-16 chars); title (string, 1-261 chars); abstract (string, 70-4.02k chars); source (string, 1 distinct value); bibtex (string, 208-20.9k chars); citation_key (string, 6-100 chars).
arxiv-3801
0805.3059
Fuzzy Feedback Scheduling of Resource-Constrained Embedded Control Systems
The quality of control (QoC) of a resource-constrained embedded control system may be jeopardized in dynamic environments with variable workload. This gives rise to an increasing demand for the co-design of control and scheduling. To deal with uncertainties in resource availability, a fuzzy feedback scheduling (FFS) scheme is proposed in this paper. Within the framework of feedback scheduling, the sampling periods of control loops are dynamically adjusted using the fuzzy control technique. The feedback scheduler provides QoC guarantees in dynamic environments by maintaining the CPU utilization at a desired level. The framework and design methodology of the proposed FFS scheme are described in detail. A simplified mobile robot target tracking system is investigated as a case study to demonstrate the effectiveness of the proposed FFS scheme. The scheme is independent of task execution times, robust to measurement noise, and easy to implement, while incurring only a small overhead.
arxiv
@article{xia2008fuzzy, title={Fuzzy Feedback Scheduling of Resource-Constrained Embedded Control Systems}, author={Feng Xia, Youxian Sun, Yu-Chu Tian, Moses Tade, Jinxiang Dong}, journal={arXiv preprint arXiv:0805.3059}, year={2008}, archivePrefix={arXiv}, eprint={0805.3059}, primaryClass={cs.OH} }
xia2008fuzzy
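The feedback-scheduling loop this abstract describes (measure CPU utilization, then adjust the sampling periods of the control loops to hold utilization at a setpoint) can be sketched in a few lines. The paper's scheduler is fuzzy; the sketch below substitutes a plain proportional adjustment for it, and all task parameters are hypothetical.

```python
# Minimal feedback-scheduling sketch. The paper uses a fuzzy controller;
# here a proportional rule stands in for it. Task parameters are hypothetical.

tasks = [
    {"C": 2.0, "T": 10.0},   # C: execution time (ms), T: sampling period (ms)
    {"C": 3.0, "T": 15.0},
    {"C": 1.0, "T": 20.0},
]
U_SP = 0.7                   # desired CPU utilization setpoint
T_MIN, T_MAX = 5.0, 100.0    # period bounds dictated by control performance

def utilization(tasks):
    return sum(t["C"] / t["T"] for t in tasks)

def feedback_scheduler(tasks, gain=0.5):
    """One scheduler invocation: rescale all periods toward the setpoint."""
    u = utilization(tasks)
    scale = 1.0 + gain * (u - U_SP) / U_SP   # u > U_SP lengthens the periods
    for t in tasks:
        t["T"] = min(max(t["T"] * scale, T_MIN), T_MAX)

for _ in range(20):          # iterate until utilization settles near U_SP
    feedback_scheduler(tasks)
print(round(utilization(tasks), 3))          # ~0.7
```

In a real system the utilization would come from a runtime measurement rather than from the C values, which is what makes this kind of scheme independent of knowing task execution times.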
arxiv-3802
0805.3062
Neural Feedback Scheduling of Real-Time Control Tasks
Many embedded real-time control systems suffer from resource constraints and dynamic workload variations. Although optimal feedback scheduling schemes are in principle capable of maximizing the overall control performance of multitasking control systems, most of them induce excessively large computational overheads associated with the mathematical optimization routines involved and hence are not directly applicable to practical systems. To optimize the overall control performance while minimizing the overhead of feedback scheduling, this paper proposes an efficient feedback scheduling scheme based on feedforward neural networks. Using the optimal solutions obtained offline by mathematical optimization methods, a back-propagation (BP) neural network is designed to adapt online the sampling periods of concurrent control tasks with respect to changes in computing resource availability. Numerical simulation results show that the proposed scheme can reduce the computational overhead significantly while delivering almost the same overall control performance as compared to optimal feedback scheduling.
arxiv
@article{xia2008neural, title={Neural Feedback Scheduling of Real-Time Control Tasks}, author={Feng Xia, Yu-Chu Tian, Youxian Sun, Jinxiang Dong}, journal={arXiv preprint arXiv:0805.3062}, year={2008}, archivePrefix={arXiv}, eprint={0805.3062}, primaryClass={cs.OH} }
xia2008neural
arxiv-3803
0805.3082
Weakly Convergent Nonparametric Forecasting of Stationary Time Series
The conditional distribution of the next outcome given the infinite past of a stationary process can be inferred from finite but growing segments of the past. Several schemes are known for constructing pointwise consistent estimates, but they all demand prohibitive amounts of input data. In this paper we consider real-valued time series and construct conditional distribution estimates that make much more efficient use of the input data. The estimates are consistent in a weak sense, and the question whether they are pointwise consistent is still open. For finite-alphabet processes one may rely on a universal data compression scheme like the Lempel-Ziv algorithm to construct conditional probability mass function estimates that are consistent in expected information divergence. Consistency in this strong sense cannot be attained in a universal sense for all stationary processes with values in an infinite alphabet, but weak consistency can. Some applications of the estimates to on-line forecasting, regression and classification are discussed.
arxiv
@article{morvai2008weakly, title={Weakly Convergent Nonparametric Forecasting of Stationary Time Series}, author={G. Morvai, S. Yakowitz and P. Algoet}, journal={IEEE Transactions on Information Theory Vol. 43, pp. 483-498, 1997}, year={2008}, doi={10.1109/18.556107}, archivePrefix={arXiv}, eprint={0805.3082}, primaryClass={math.ST cs.IT math.IT stat.TH} }
morvai2008weakly
arxiv-3804
0805.3091
A simple randomized algorithm for sequential prediction of ergodic time series
We present a simple randomized procedure for the prediction of a binary sequence. The algorithm uses ideas from recent developments of the theory of the prediction of individual sequences. We show that if the sequence is a realization of a stationary and ergodic random process then the average number of mistakes converges, almost surely, to that of the optimum, given by the Bayes predictor. The desirable finite-sample properties of the predictor are illustrated by its performance for Markov processes. In such cases the predictor exhibits near optimal behavior even without knowing the order of the Markov process. Prediction with side information is also considered.
arxiv
@article{györfi2008a, title={A simple randomized algorithm for sequential prediction of ergodic time series}, author={L. Gy\"orfi, G. Lugosi and G. Morvai}, journal={IEEE Trans. Inform. Theory 45 (1999), no. 7, 2642--2650}, year={2008}, archivePrefix={arXiv}, eprint={0805.3091}, primaryClass={math.ST cs.IT math.IT stat.TH} }
györfi2008a
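In the spirit of this abstract, here is a small illustrative predictor: a randomized guess drawn from an exponentially weighted mixture of Markov experts of increasing order. It is a sketch of the general idea, not the authors' exact procedure, and every parameter choice in it is hypothetical.

```python
import math, random
from collections import defaultdict

class MarkovExpert:
    """P(next bit = 1) from Laplace-smoothed counts on a fixed-order context."""
    def __init__(self, order):
        self.order = order
        self.counts = defaultdict(lambda: [1, 1])
    def _ctx(self, history):
        return tuple(history[-self.order:]) if self.order else ()
    def prob_one(self, history):
        c0, c1 = self.counts[self._ctx(history)]
        return c1 / (c0 + c1)
    def update(self, history, bit):
        self.counts[self._ctx(history)][bit] += 1

def predict_sequence(bits, max_order=3, eta=2.0):
    """Return the fraction of mistakes of the randomized mixture predictor."""
    experts = [MarkovExpert(k) for k in range(max_order + 1)]
    log_w = [0.0] * len(experts)
    history, mistakes = [], 0
    for bit in bits:
        m = max(log_w)
        ws = [math.exp(lw - m) for lw in log_w]            # stable weights
        p1 = sum(w * e.prob_one(history) for w, e in zip(ws, experts)) / sum(ws)
        guess = 1 if random.random() < p1 else 0           # randomized guess
        mistakes += (guess != bit)
        for i, e in enumerate(experts):                    # exponential update
            p = e.prob_one(history)
            log_w[i] += eta * math.log(p if bit else 1 - p)
            e.update(history, bit)
        history.append(bit)
    return mistakes / len(bits)

# A noisy order-1 Markov source; the mixture tracks the right-order expert
# without being told the order, as in the abstract.
random.seed(0)
seq, state = [], 0
for _ in range(5000):
    state = state if random.random() < 0.8 else 1 - state
    seq.append(state)
print(round(predict_sequence(seq), 3))   # close to the Bayes rate of 0.2
```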
arxiv-3805
0805.3118
Full Diversity Blind Signal Designs for Unique Identification of Frequency Selective Channels
In this paper, we develop two kinds of novel closed-form decompositions on phase shift keying (PSK) constellations by exploiting linear congruence equation theory: the one for factorizing a $pq$-PSK constellation into a product of a $p$-PSK constellation and a $q$-PSK constellation, and the other for decomposing a specific complex number into a difference of a $p$-PSK constellation and a $q$-PSK constellation. With this, we propose a simple signal design technique to blindly and uniquely identify frequency selective channels with zero-padded block transmission under noise-free environments by only using the first two block received signal vectors. Furthermore, a closed-form solution to determine the transmitted signals and the channel coefficients is obtained. In the Gaussian noise and Rayleigh fading environment, we prove that the newly proposed signaling scheme enables non-coherent full diversity for the Generalized Likelihood Ratio Test (GLRT) receiver.
arxiv
@article{zhang2008full, title={Full Diversity Blind Signal Designs for Unique Identification of Frequency Selective Channels}, author={Jian-Kang Zhang and Chau Yuen}, journal={arXiv preprint arXiv:0805.3118}, year={2008}, archivePrefix={arXiv}, eprint={0805.3118}, primaryClass={cs.IT math.IT} }
zhang2008full
arxiv-3806
0805.3126
Cognitive Architecture for Direction of Attention Founded on Subliminal Memory Searches, Pseudorandom and Nonstop
By way of explaining how a brain works logically, human associative memory is modeled with logical and memory neurons, corresponding to standard digital circuits. The resulting cognitive architecture incorporates basic psychological elements such as short term and long term memory. Novel to the architecture are memory searches using cues chosen pseudorandomly from short term memory. Recalls alternated with sensory images, many tens per second, are analyzed subliminally as an ongoing process, to determine a direction of attention in short term memory.
arxiv
@article{burger2008cognitive, title={Cognitive Architecture for Direction of Attention Founded on Subliminal Memory Searches, Pseudorandom and Nonstop}, author={J. R. Burger}, journal={arXiv preprint arXiv:0805.3126}, year={2008}, archivePrefix={arXiv}, eprint={0805.3126}, primaryClass={cs.AI cs.NE} }
burger2008cognitive
arxiv-3807
0805.3155
Marketing in Random Networks
Viral marketing takes advantage of preexisting social networks among customers to achieve large changes in behaviour. Models of influence spread have been studied in a number of domains, including the effect of "word of mouth" in the promotion of new products or the diffusion of technologies. A social network can be represented by a graph where the nodes are individuals and the edges indicate a form of social relationship. The flow of influence through this network can be thought of as an increasing process of active nodes: as individuals become aware of new technologies, they have the potential to pass them on to their neighbours. The goal of marketing is to trigger a large cascade of adoptions. In this paper, we develop a mathematical model that allows us to analyze the dynamics of the cascading sequence of nodes switching to the new technology. To this end we describe continuous-time and discrete-time models and analyse the proportion of nodes that adopt the new technology over time.
arxiv
@article{amini2008marketing, title={Marketing in Random Networks}, author={Hamed Amini, Moez Draief and Marc Lelarge}, journal={arXiv preprint arXiv:0805.3155}, year={2008}, archivePrefix={arXiv}, eprint={0805.3155}, primaryClass={cs.GT} }
amini2008marketing
arxiv-3808
0805.3164
To Code or Not To Code in Multi-Hop Relay Channels
Multi-hop relay channels use multiple relay stages, each with multiple relay nodes, to facilitate communication between a source and destination. Previously, distributed space-time coding was used to maximize diversity gain. Assuming a low-rate feedback link from the destination to each relay stage and the source, this paper proposes end-to-end antenna selection strategies as an alternative to distributed space-time coding. One-way (where only the source has data for the destination) and two-way (where the destination also has data for the source) multi-hop relay channels are considered with both full-duplex and half-duplex relay nodes. End-to-end antenna selection strategies are designed and proven to achieve maximum diversity gain by using a single antenna path (using a single antenna of the source, each relay stage and the destination) with the maximum signal-to-noise ratio at the destination. For the half-duplex case, two single antenna paths with the two best signal-to-noise ratios are used in alternate time slots to overcome the rate loss with half-duplex nodes, at a small diversity gain penalty. Finally, to answer the question of whether to code (distributed space-time code) or not (the proposed end-to-end antenna selection strategy) in a multi-hop relay channel, the end-to-end antenna selection strategies and distributed space-time coding are compared with respect to several important performance metrics.
arxiv
@article{vaze2008to, title={To Code or Not To Code in Multi-Hop Relay Channels}, author={Rahul Vaze and Robert W. Heath Jr}, journal={arXiv preprint arXiv:0805.3164}, year={2008}, archivePrefix={arXiv}, eprint={0805.3164}, primaryClass={cs.IT math.IT} }
vaze2008to
arxiv-3809
0805.3196
Coupling Component Systems towards Systems of Systems
Systems of systems (SoS) are a hot topic in our "fully connected global world". Our aim is not to provide another definition of what SoS are, but rather to focus on the adequacy of reusing standard system architecting techniques within this approach in order to improve performance, fault detection and safety issues in large-scale coupled systems that definitely qualify as SoS, whatever the definition is. A key issue will be to secure the availability of the services provided by the SoS despite the evolution of the various systems composing the SoS. We will also tackle contracting issues and responsibility transfers, as they should be addressed to ensure the expected behavior of the SoS whilst the various independently contracted systems evolve asynchronously.
arxiv
@article{autran2008coupling, title={Coupling Component Systems towards Systems of Systems}, author={Fr\'ed\'eric Autran (CRAN), Jean-Philippe Auzelle (CRAN), Denise Cattan (CRAN), Jean-Luc Garnier (CRAN), Dominique Luzeaux (DGA/CTA/DT/GIP), Fr\'ed\'erique Mayer (ERPI), Marc Peyrichon (CRAN), Jean-Ren\'e Ruault (DGA/CTA/DT/GIP)}, journal={In CD-ROM - 18th Annual International Symposium of INCOSE, 6th Biennial European System Engineering Conference, INCOSE 2008, Utrecht: Netherlands (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0805.3196}, primaryClass={cs.DC} }
autran2008coupling
arxiv-3810
0805.3200
On Tightness of Mutual Dependence Upperbound for Secret-key Capacity of Multiple Terminals
Csiszár and Narayan [3] defined the notion of secret key capacity for multiple terminals, characterized it as a linear program with Slepian-Wolf constraints of the related source coding problem of communication for omniscience, and upper bounded it by an information divergence expression from the joint to the product distribution of the private observations. This paper proves that the bound is tight for the important case when all users are active, using the polymatroidal structure [6] underlying the source coding problem. When some users are not active, the bound may not be tight. This paper gives a counter-example in which 3 out of the 6 terminals are active.
arxiv
@article{chan2008on, title={On Tightness of Mutual Dependence Upperbound for Secret-key Capacity of Multiple Terminals}, author={Chung Chan}, journal={arXiv preprint arXiv:0805.3200}, year={2008}, archivePrefix={arXiv}, eprint={0805.3200}, primaryClass={cs.IT cs.CR math.CO math.IT math.PR} }
chan2008on
arxiv-3811
0805.3206
Sociological Inequality and the Second Law
There are two fair ways to distribute particles in boxes. The first way is to divide the particles equally between the boxes. The second way, which is calculated here, is to score the particles fairly between the boxes. The obtained power law distribution function yields an uneven distribution of particles in boxes. It is shown that the obtained distribution fits sociological phenomena well, such as the distribution of votes in polls, the distribution of wealth, and Benford's law.
arxiv
@article{kafri2008sociological, title={Sociological Inequality and the Second Law}, author={Oded Kafri}, journal={arXiv preprint arXiv:0805.3206}, year={2008}, archivePrefix={arXiv}, eprint={0805.3206}, primaryClass={cs.IT math.IT} }
kafri2008sociological
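For reference, the Benford's law mentioned at the end of this abstract assigns first digit $d$ the probability $\log_{10}(1+1/d)$; a few lines reproduce the reference frequencies (this illustrates Benford's law itself, not the paper's derivation).

```python
import math

# Benford's law: P(first digit = d) = log10(1 + 1/d)
for d in range(1, 10):
    print(d, round(math.log10(1 + 1 / d), 3))
# 1 0.301, 2 0.176, 3 0.125, ..., 9 0.046
```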
arxiv-3812
0805.3217
Statistical region-based active contours with exponential family observations
In this paper, we focus on statistical region-based active contour models where image features (e.g. intensity) are random variables whose distribution belongs to some parametric family (e.g. exponential) rather than confining ourselves to the special Gaussian case. Using shape derivation tools, our effort focuses on constructing a general expression for the derivative of the energy (with respect to a domain) and on deriving the corresponding evolution speed. A general result is stated within the framework of the multi-parameter exponential family. More particularly, when using Maximum Likelihood estimators, the evolution speed has a closed-form expression that depends simply on the probability density function, while complicating additive terms appear when using other estimators, e.g. the method of moments. Experimental results on both synthesized and real images demonstrate the applicability of our approach.
arxiv
@article{lecellier2008statistical, title={Statistical region-based active contours with exponential family observations}, author={Fran\c{c}ois Lecellier, St\'ephanie Jehan-Besson, Jalal Fadili, Gilles Aubert, Marinette Revenu}, journal={arXiv preprint arXiv:0805.3217}, year={2008}, archivePrefix={arXiv}, eprint={0805.3217}, primaryClass={cs.CV} }
lecellier2008statistical
arxiv-3813
0805.3218
Region-based active contour with noise and shape priors
In this paper, we propose to formally combine noise and shape priors in region-based active contours. On the one hand, we use the general framework of the exponential family as a prior model for noise. On the other hand, translation and scale invariant Legendre moments are considered to incorporate the shape prior (e.g. fidelity to a reference shape). The combination of the two prior terms in the active contour functional yields the final evolution equation whose evolution speed is rigorously derived using shape derivative tools. Experimental results on both synthetic images and real-life cardiac echography data clearly demonstrate the robustness to initialization and noise, the flexibility, and the large potential applicability of our segmentation algorithm.
arxiv
@article{lecellier2008region-based, title={Region-based active contour with noise and shape priors}, author={Fran\c{c}ois Lecellier, St\'ephanie Jehan-Besson, Jalal Fadili, Gilles Aubert, Marinette Revenu, Eric Saloux}, journal={arXiv preprint arXiv:0805.3218}, year={2008}, archivePrefix={arXiv}, eprint={0805.3218}, primaryClass={cs.CV} }
lecellier2008region-based
arxiv-3814
0805.3237
Integrating Job Parallelism in Real-Time Scheduling Theory
We investigate the global scheduling of sporadic, implicit-deadline, real-time task systems on multiprocessor platforms. We provide a task model which integrates job parallelism. We prove that the time-complexity of the feasibility problem for these systems is linear in the number of (sporadic) tasks for a fixed number of processors. We propose a scheduling algorithm that is theoretically optimal (i.e., neglecting preemptions and migrations). Moreover, we provide an exact feasibility utilization bound. Lastly, we propose a technique to limit the number of migrations and preemptions.
arxiv
@article{collette2008integrating, title={Integrating Job Parallelism in Real-Time Scheduling Theory}, author={S. Collette and L. Cucu and J. Goossens}, journal={arXiv preprint arXiv:0805.3237}, year={2008}, archivePrefix={arXiv}, eprint={0805.3237}, primaryClass={cs.OS} }
collette2008integrating
arxiv-3815
0805.3256
Model Checking Event-B by Encoding into Alloy
As systems become ever more complex, verification becomes more mainstream. Event-B and Alloy are two formal specification languages based on fairly different methodologies. While Event-B uses theorem provers to prove that invariants hold for a given specification, Alloy uses a SAT-based model finder. In some settings, Event-B invariants may not be proved automatically, and so the often difficult step of interactive proof is required. One solution to this problem is to validate invariants with model checking. This work studies the encoding of Event-B machines and contexts into Alloy in order to perform temporal model checking with Alloy's SAT-based engine.
arxiv
@article{matos2008model, title={Model Checking Event-B by Encoding into Alloy}, author={Paulo J. Matos, Joao Marques-Silva}, journal={arXiv preprint arXiv:0805.3256}, year={2008}, archivePrefix={arXiv}, eprint={0805.3256}, primaryClass={cs.LO} }
matos2008model
arxiv-3816
0805.3261
k-Hyperarc Consistency for Soft Constraints over Divisible Residuated Lattices
We investigate the applicability of divisible residuated lattices (DRLs) as a general evaluation framework for soft constraint satisfaction problems (soft CSPs). DRLs are in fact natural candidates for this role, since they form the algebraic semantics of a large family of substructural and fuzzy logics. We present the following results. (i) We show that DRLs subsume important valuation structures for soft constraints, such as commutative idempotent semirings and fair valuation structures, in the sense that the last two are members of certain subvarieties of DRLs (namely, Heyting algebras and BL-algebras respectively). (ii) In the spirit of previous work by J. Larrosa and T. Schiex [2004], and S. Bistarelli and F. Gadducci [2006], we describe a polynomial-time algorithm that enforces k-hyperarc consistency on soft CSPs evaluated over DRLs. Since, in general, DRLs are neither idempotent nor totally ordered, this algorithm amounts to a generalization of the available algorithms that enforce k-hyperarc consistency.
arxiv
@article{bova2008k-hyperarc, title={k-Hyperarc Consistency for Soft Constraints over Divisible Residuated Lattices}, author={Simone Bova}, journal={arXiv preprint arXiv:0805.3261}, year={2008}, archivePrefix={arXiv}, eprint={0805.3261}, primaryClass={cs.LO} }
bova2008k-hyperarc
arxiv-3817
0805.3267
Compressing Binary Decision Diagrams
The paper introduces a new technique for compressing Binary Decision Diagrams in those cases where random access is not required. Using this technique, compression and decompression can be done in linear time in the size of the BDD, and compression will in many cases reduce the size of the BDD to 1-2 bits per node. Empirical results for our compression technique are presented, including comparisons with previously introduced techniques, showing that the new technique dominates on all tested instances.
arxiv
@article{hansen2008compressing, title={Compressing Binary Decision Diagrams}, author={Esben Rune Hansen, S. Srinivasa Rao, Peter Tiedemann}, journal={arXiv preprint arXiv:0805.3267}, year={2008}, archivePrefix={arXiv}, eprint={0805.3267}, primaryClass={cs.AI cs.DC} }
hansen2008compressing
arxiv-3818
0805.3339
Tri de la table de faits et compression des index bitmaps avec alignement sur les mots
Bitmap indexes are frequently used to index multidimensional data. They rely mostly on sequential input/output. Bitmaps can be compressed to reduce input/output costs and minimize CPU usage. The most efficient compression techniques are based on run-length encoding (RLE), such as Word-Aligned Hybrid (WAH) compression. This type of compression accelerates logical operations (AND, OR) over the bitmaps. However, run-length encoding is sensitive to the order of the facts. Thus, we propose to sort the fact tables. We review lexicographic, Gray-code, and block-wise sorting. We found that a lexicographic sort improves compression--sometimes generating indexes twice as small--and makes indexes several times faster. While sorting takes time, this is partially offset by the fact that it is faster to index a sorted table. Column order is significant: it is generally preferable to put the columns having more distinct values at the beginning. A block-wise sort is much less efficient than a full sort. Moreover, we found that Gray-code sorting is not better than lexicographic sorting when using word-aligned compression.
arxiv
@article{aouiche2008tri, title={Tri de la table de faits et compression des index bitmaps avec alignement sur les mots}, author={Kamel Aouiche, Daniel Lemire, Owen Kaser}, journal={arXiv preprint arXiv:0805.3339}, year={2008}, archivePrefix={arXiv}, eprint={0805.3339}, primaryClass={cs.DB} }
aouiche2008tri
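The abstract's central effect, that lexicographically sorting the fact table shortens runs and therefore helps any RLE-based scheme such as WAH, is easy to check on a toy table. The sketch below counts bitmap runs before and after sorting; the table contents are hypothetical.

```python
import random
from itertools import groupby

random.seed(1)
# Hypothetical fact table: three attribute columns with 2, 4 and 8 values.
rows = [(random.randrange(2), random.randrange(4), random.randrange(8))
        for _ in range(10_000)]

def total_runs(table):
    """Sum of run counts over all per-(column, value) bitmaps.
    Fewer runs means better RLE/WAH compression."""
    n_runs = 0
    for col in range(len(table[0])):
        for val in set(r[col] for r in table):
            bitmap = [r[col] == val for r in table]
            n_runs += sum(1 for _ in groupby(bitmap))
    return n_runs

print("unsorted:", total_runs(rows))
print("sorted:  ", total_runs(sorted(rows)))   # far fewer runs after sorting
```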
arxiv-3819
0805.3366
Computational Representation of Linguistic Structures using Domain-Specific Languages
We describe a modular system for generating sentences from formal definitions of underlying linguistic structures using domain-specific languages. The system uses Java in general, Prolog for lexical entries and custom domain-specific languages based on Functional Grammar and Functional Discourse Grammar notation, implemented using the ANTLR parser generator. We show how linguistic and technological parts can be brought together in a natural language processing system and how domain-specific languages can be used as a tool for consistent formal notation in linguistic description.
arxiv
@article{steeg2008computational, title={Computational Representation of Linguistic Structures using Domain-Specific Languages}, author={Fabian Steeg, Christoph Benden, Paul O. Samuelsdorff}, journal={arXiv preprint arXiv:0805.3366}, year={2008}, archivePrefix={arXiv}, eprint={0805.3366}, primaryClass={cs.CL} }
steeg2008computational
arxiv-3820
0805.3390
Design of Attitude Stability System for Prolate Dual-spin Satellite in Its Inclined Elliptical Orbit
In general, most communication satellites were designed to be operated in geostationary orbit, and many of them were designed in a prolate dual-spin configuration. As prolate dual-spin vehicles, they have to be stabilized against internal energy dissipation effects. Several countries located in the southern hemisphere have shown interest in using communication satellites. Because of those countries' southern latitudes, the idea emerged of operating the communication satellite (given its prolate dual-spin configuration) in an inclined elliptical orbit. This work focuses on designing an attitude stability system for a prolate dual-spin satellite under the perturbed gravity field resulting from the inclination of its elliptical orbit. DANDE (De-spin Active Nutation Damping Electronics) provides the primary stabilization method for the satellite in its orbit. A classical control approach is used for the iteration of the DANDE parameters. The control performance is evaluated based on time-response analysis.
arxiv
@article{muliadi2008design, title={Design of Attitude Stability System for Prolate Dual-spin Satellite in Its Inclined Elliptical Orbit}, author={J. Muliadi, S.D. Jenie, A. Budiyono}, journal={Symposium on Aerospace Science and Technology, Jakarta, 2005}, year={2008}, archivePrefix={arXiv}, eprint={0805.3390}, primaryClass={cs.RO} }
muliadi2008design
arxiv-3821
0805.3406
Performance Analysis of Signal Detection using Quantized Received Signals of Linear Vector Channel
A performance analysis of optimal signal detection using quantized received signals of a linear vector channel, which is an extension of code-division multiple-access (CDMA) or multiple-input multiple-output (MIMO) channels, in the large-system limit is presented in this paper. Here the dimensions of the channel input and output are both taken to infinity while their ratio remains fixed. An optimal detector is one that uses the true channel model, the true distribution of input signals, and perfect knowledge about the quantization. Applying the replica method developed in statistical mechanics, we show that, in the case of a noiseless channel, the optimal detector has perfect detection ability under certain conditions, and that for a noisy channel its detection ability decreases monotonically as the quantization step size increases.
arxiv
@article{nakamura2008performance, title={Performance Analysis of Signal Detection using Quantized Received Signals of Linear Vector Channel}, author={Kazutaka Nakamura}, journal={arXiv preprint arXiv:0805.3406}, year={2008}, archivePrefix={arXiv}, eprint={0805.3406}, primaryClass={cs.IT math.IT} }
nakamura2008performance
arxiv-3822
0805.3410
Exploring a type-theoretic approach to accessibility constraint modelling
The type-theoretic modelling of DRT that [degroote06] proposed features continuations for the management of the context in which a clause has to be interpreted. This approach, while keeping the standard definitions of quantifier scope, translates the rules of the accessibility constraints of discourse referents into the semantic recipes. In this paper, we deal with additional rules for these accessibility constraints: in particular, the case of discourse referents introduced by proper nouns, which negation does not block, and the case of rhetorical relations that structure discourses. We show how this continuation-based approach applies to those accessibility constraints and how we can consider the parallel management of various principles.
arxiv
@article{pogodalla2008exploring, title={Exploring a type-theoretic approach to accessibility constraint modelling}, author={Sylvain Pogodalla (INRIA Lorraine - LORIA)}, journal={In Journ\'ees S\'emantiques et Mod\'elisation (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0805.3410}, primaryClass={cs.CL} }
pogodalla2008exploring
arxiv-3823
0805.3462
Enriched MU-Calculi Module Checking
The model checking problem for open systems has been intensively studied in the literature, for both finite-state (module checking) and infinite-state (pushdown module checking) systems, with respect to CTL and CTL*. In this paper, we further investigate this problem with respect to the \mu-calculus enriched with nominals and graded modalities (hybrid graded \mu-calculus), in both the finite-state and infinite-state settings. Using an automata-theoretic approach, we show that hybrid graded \mu-calculus module checking is solvable in exponential time, while hybrid graded \mu-calculus pushdown module checking is solvable in double-exponential time. These results are also tight since they match the known lower bounds for CTL. We also investigate the module checking problem with respect to the hybrid graded \mu-calculus enriched with inverse programs (fully enriched \mu-calculus): by showing a reduction from the domino problem, we show its undecidability. We conclude with a short overview of the model checking problem for the fully enriched \mu-calculus and the fragments obtained by dropping at least one of the additional constructs.
arxiv
@article{ferrante2008enriched, title={Enriched MU-Calculi Module Checking}, author={Alessandro Ferrante, Aniello Murano and Mimmo Parente}, journal={Logical Methods in Computer Science, Volume 4, Issue 3 (July 29, 2008) lmcs:829}, year={2008}, doi={10.2168/LMCS-4(3:1)2008}, archivePrefix={arXiv}, eprint={0805.3462}, primaryClass={cs.LO} }
ferrante2008enriched
arxiv-3824
0805.3484
A MacWilliams Identity for Convolutional Codes: The General Case
A MacWilliams Identity for convolutional codes will be established. It makes use of the weight adjacency matrices of the code and its dual, based on state space realizations (the controller canonical form) of the codes in question. The MacWilliams Identity applies to various notions of duality appearing in the literature on convolutional coding theory.
arxiv
@article{gluesing-luerssen2008a, title={A MacWilliams Identity for Convolutional Codes: The General Case}, author={Heide Gluesing-Luerssen, Gert Schneider}, journal={arXiv preprint arXiv:0805.3484}, year={2008}, archivePrefix={arXiv}, eprint={0805.3484}, primaryClass={cs.IT math.IT math.OC} }
gluesing-luerssen2008a
arxiv-3825
0805.3518
Logic programming with social features
In everyday life it happens that a person has to reason about what other people think and how they behave, in order to achieve his goals. In other words, an individual may be required to adapt his behaviour by reasoning about the others' mental state. In this paper we focus on a knowledge representation language derived from logic programming which both supports the representation of mental states of individual communities and provides each with the capability of reasoning about others' mental states and acting accordingly. The proposed semantics is shown to be translatable into stable model semantics of logic programs with aggregates.
arxiv
@article{buccafurri2008logic, title={Logic programming with social features}, author={Francesco Buccafurri and Gianluca Caminiti}, journal={arXiv preprint arXiv:0805.3518}, year={2008}, archivePrefix={arXiv}, eprint={0805.3518}, primaryClass={cs.AI} }
buccafurri2008logic
arxiv-3826
0805.3521
Towards applied theories based on computability logic
Computability logic (CL) (see http://www.cis.upenn.edu/~giorgi/cl.html) is a recently launched program for redeveloping logic as a formal theory of computability, as opposed to the formal theory of truth that logic has more traditionally been. Formulas in it represent computational problems, "truth" means existence of an algorithmic solution, and proofs encode such solutions. Within the line of research devoted to finding axiomatizations for ever more expressive fragments of CL, the present paper introduces a new deductive system CL12 and proves its soundness and completeness with respect to the semantics of CL. Conservatively extending classical predicate calculus and offering considerable additional expressive and deductive power, CL12 presents a reasonable, computationally meaningful, constructive alternative to classical logic as a basis for applied theories. To obtain a model example of such theories, this paper rebuilds the traditional, classical-logic-based Peano arithmetic into a computability-logic-based counterpart. Among the purposes of the present contribution is to provide a starting point for what, as the author wishes to hope, might become a new line of research with a potential of interesting findings -- an exploration of the presumably quite unusual metatheory of CL-based arithmetic and other CL-based applied systems.
arxiv
@article{japaridze2008towards, title={Towards applied theories based on computability logic}, author={Giorgi Japaridze}, journal={Journal of Symbolic Logic 75 (2010), pp. 565-601}, year={2008}, doi={10.2178/jsl/1268917495}, archivePrefix={arXiv}, eprint={0805.3521}, primaryClass={cs.LO cs.AI math.LO math.NT} }
japaridze2008towards
arxiv-3827
0805.3528
Coding Theory and Projective Spaces
The projective space of order $n$ over a finite field $\mathbb{F}_q$ is the set of all subspaces of the vector space $\mathbb{F}_q^{n}$. In this work, we consider error-correcting codes in the projective space, focusing mainly on constant dimension codes. We start with the different representations of subspaces in the projective space. These representations involve matrices in reduced row echelon form, associated binary vectors, and Ferrers diagrams. Based on these representations, we provide a new formula for the computation of the distance between any two subspaces in the projective space. We examine lifted maximum rank distance (MRD) codes, which are nearly optimal constant dimension codes. We prove that a lifted MRD code can be represented in such a way that it forms a block design known as a transversal design. The incidence matrix of the transversal design derived from a lifted MRD code can be viewed as a parity-check matrix of a linear code in the Hamming space. We find the properties of these codes, which can also be viewed as LDPC codes. We present new bounds and constructions for constant dimension codes. First, we present a multilevel construction for constant dimension codes, which can be viewed as a generalization of the lifted MRD code construction. This construction is based on a new type of rank-metric codes, called Ferrers diagram rank-metric codes. Then we derive upper bounds on the size of constant dimension codes which contain the lifted MRD code, and provide a construction for two families of codes that attain these upper bounds. We generalize the well-known concept of a punctured code for a code in the projective space to obtain large codes which are not of constant dimension. We present efficient enumerative encoding and decoding techniques for the Grassmannian. Finally we describe a search method for constant dimension lexicodes.
arxiv
@article{silberstein2008coding, title={Coding Theory and Projective Spaces}, author={Natalia Silberstein}, journal={arXiv preprint arXiv:0805.3528}, year={2008}, archivePrefix={arXiv}, eprint={0805.3528}, primaryClass={cs.IT math.IT} }
silberstein2008coding
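A worked detail behind this line of work: the subspace distance used for codes in the projective space is $d_S(U,V)=\dim U+\dim V-2\dim(U\cap V)$, and since $\dim(U\cap V)=\dim U+\dim V-\dim(U+V)$ it can be computed from three ranks. A small GF(2) sketch with hypothetical generator matrices:

```python
def rank_gf2(vectors):
    """Rank of GF(2) row vectors, each packed into a Python int."""
    basis = {}                       # leading-bit position -> basis vector
    rank = 0
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead in basis:
                v ^= basis[lead]     # eliminate the leading bit
            else:
                basis[lead] = v
                rank += 1
                break
    return rank

def subspace_distance(U, V):
    """d_S(U, V) = 2*dim(U + V) - dim(U) - dim(V)."""
    return 2 * rank_gf2(U + V) - rank_gf2(U) - rank_gf2(V)

# Two 2-dimensional subspaces of F_2^4 with a 1-dimensional intersection.
U = [0b1000, 0b0100]                 # span{e1, e2}
V = [0b1000, 0b0010]                 # span{e1, e3}
print(subspace_distance(U, V))       # 2 + 2 - 2*1 = 2
```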
arxiv-3828
0805.3537
Public Discourse in the Web Does Not Exhibit Group Polarization
We performed a massive study of the dynamics of group deliberation among several websites containing millions of opinions on topics ranging from books to media. Contrary to the common phenomenon of group polarization observed offline, we measured a strong tendency towards moderate views in the course of time. This phenomenon possibly operates through a self-selection bias whereby previous comments and ratings elicit contrarian views that soften the previous opinions.
arxiv
@article{wu2008public, title={Public Discourse in the Web Does Not Exhibit Group Polarization}, author={Fang Wu and Bernardo A. Huberman}, journal={arXiv preprint arXiv:0805.3537}, year={2008}, archivePrefix={arXiv}, eprint={0805.3537}, primaryClass={cs.CY} }
wu2008public
arxiv-3829
0805.3538
Covert Channels in SIP for VoIP signalling
In this paper, we evaluate available steganographic techniques for SIP (Session Initiation Protocol) that can be used for creating covert channels during the signalling phase of a VoIP (Voice over IP) call. Apart from characterizing existing steganographic methods, we provide new insights by introducing new techniques. We also estimate the amount of data that can be transferred in signalling messages during a typical IP telephony call.
arxiv
@article{mazurczyk2008covert, title={Covert Channels in SIP for VoIP signalling}, author={Wojciech Mazurczyk and Krzysztof Szczypiorski}, journal={arXiv preprint arXiv:0805.3538}, year={2008}, doi={10.1007/978-3-540-69403-8_9}, archivePrefix={arXiv}, eprint={0805.3538}, primaryClass={cs.MM cs.CR} }
mazurczyk2008covert
arxiv-3830
0805.3569
Joint Cooperation and Multi-Hopping Increase the Capacity of Wireless Networks
The problem of communication among nodes in an \emph{extended network} is considered, where radio power decay and interference are limiting factors. It has been shown previously that, with simple multi-hopping, the achievable total communication rate in such a network is at most $\Theta(\sqrt{N})$. In this work, we study the benefit of node cooperation in conjunction with multi-hopping on the network capacity. We propose a multi-phase communication scheme, combining distributed MIMO transmission with multi-hop forwarding among clusters of nodes. We derive the network throughput of this communication scheme and determine the optimal cluster size. This provides a constructive lower bound on the network capacity. We first show that in \textit{regular networks} a rate of $\omega(N^{2/3})$ can be achieved with transmission power scaling of $\Theta(N^{\alpha/6-1/3})$, where $\alpha>2$ is the signal path-loss exponent. We further extend this result to \textit{random networks}, where we show that a rate of $\omega(N^{2/3}(\log N)^{(2-\alpha)/6})$ can be achieved with transmission power scaling of $\Theta(N^{\alpha/6-1/3}(\log N)^{-(\alpha-2)^2/6})$ in a random network with unit node density. In particular, as $\alpha$ approaches 2, only constant transmission power is required. Finally, we study a random network with density $\lambda=\Omega(\log N)$ and show that a rate of $\omega((\lambda N)^{2/3})$ is achieved and the required power scales as $\Theta(N^{\alpha/6-1/3}/\lambda^{\alpha/3-2/3})$.
arxiv
@article{vakil2008joint, title={Joint Cooperation and Multi-Hopping Increase the Capacity of Wireless Networks}, author={Sam Vakil and Ben Liang}, journal={arXiv preprint arXiv:0805.3569}, year={2008}, archivePrefix={arXiv}, eprint={0805.3569}, primaryClass={cs.NI cs.IT math.IT} }
vakil2008joint
arxiv-3831
0805.3605
Success Exponent of Wiretapper: A Tradeoff between Secrecy and Reliability
Equivocation rate has been widely used as an information-theoretic measure of security since Shannon [10]. It simplifies problems by removing the effect of atypical behavior from the system. In [9], however, Merhav and Arikan considered the alternative of using the guessing exponent to analyze Shannon's cipher system. Because the guessing exponent captures the atypical behavior, the strongest expressible notion of secrecy requires the more stringent condition that the size of the key, rather than its entropy rate, be equal to the size of the message. The relationship between equivocation and guessing exponent is also investigated in [6][7], but it is unclear which is the better measure, and whether there is a unifying measure of security. Instead of using the equivocation rate or the guessing exponent, we study the wiretap channel of [2] using the success exponent, defined as the exponent of the probability that a wiretapper successfully learns the secret after making an exponential number of guesses to a sequential verifier that gives a yes/no answer to each guess. By extending the coding scheme in [2][5] and the converse proof in [4] with the new Overlap Lemma 5.2, we obtain a tradeoff between secrecy and reliability expressed in terms of lower bounds on the error and success exponents of authorized and unauthorized decoding, respectively, of the transmitted messages. From this, we obtain an inner bound to the region of strongly achievable public, private and guessing rate triples for which the exponents are strictly positive. The closure of this region is equivalent to the closure of the region in Theorem 1 of [2] when we treat the equivocation rate as the guessing rate. However, it is unclear if the inner bound is tight.
arxiv
@article{chan2008success, title={Success Exponent of Wiretapper: A Tradeoff between Secrecy and Reliability}, author={Chung Chan}, journal={arXiv preprint arXiv:0805.3605}, year={2008}, archivePrefix={arXiv}, eprint={0805.3605}, primaryClass={cs.IT cs.CR math.IT math.PR} }
chan2008success
arxiv-3832
0805.3638
Sequential detection of Markov targets with trajectory estimation
The problem of detection and possible estimation of a signal generated by a dynamic system when a variable number of noisy measurements can be taken is here considered. Assuming a Markov evolution of the system (in particular, the pair signal-observation forms a hidden Markov model), a sequential procedure is proposed, wherein the detection part is a sequential probability ratio test (SPRT) and the estimation part relies upon a maximum-a-posteriori (MAP) criterion, gated by the detection stage (the parameter to be estimated is the trajectory of the state evolution of the system itself). A thorough analysis of the asymptotic behaviour of the test in this new scenario is given, and sufficient conditions for its asymptotic optimality are stated, i.e. for almost sure minimization of the stopping time and for (first-order) minimization of any moment of its distribution. An application to radar surveillance problems is also examined.
arxiv
@article{grossi2008sequential, title={Sequential detection of Markov targets with trajectory estimation}, author={Emanuele Grossi, Marco Lops}, journal={IEEE Transactions on Information Theory, vol. 54, no. 9, September 2008}, year={2008}, doi={10.1109/TIT.2008.928261}, archivePrefix={arXiv}, eprint={0805.3638}, primaryClass={cs.IT math.IT} }
grossi2008sequential
arxiv-3833
0805.3643
Overlay Cognitive Radio in Wireless Mesh Networks
In this paper we apply the concept of overlay cognitive radio to the communication between nodes in a wireless mesh network. Based on the overlay cognitive radio model, it is possible to have two concurrent transmissions in a given interference region, where usually only one communication takes place at a given time. We analyze the cases of wireless mesh networks with regular and random topologies. Numerical results show that considerable network capacity gains can be achieved.
arxiv
@article{pereira2008overlay, title={Overlay Cognitive Radio in Wireless Mesh Networks}, author={Ricardo Carvalho Pereira, Richard Demo Souza, Marcelo Eduardo Pellenz}, journal={arXiv preprint arXiv:0805.3643}, year={2008}, archivePrefix={arXiv}, eprint={0805.3643}, primaryClass={cs.IT cs.NI math.IT} }
pereira2008overlay
arxiv-3834
0805.3742
Algorithmic problems in twisted groups of Lie type
This thesis contains a collection of algorithms for working with the twisted groups of Lie type known as Suzuki groups, and small and large Ree groups. The two main problems under consideration are constructive recognition and constructive membership testing. We also consider problems of generating and conjugating Sylow and maximal subgroups. The algorithms are motivated by, and form a part of, the Matrix Group Recognition Project. Obtaining both theoretically and practically efficient algorithms has been a central goal. The algorithms have been developed with, and implemented in, the computer algebra system MAGMA.
arxiv
@article{bäärnhielm2008algorithmic, title={Algorithmic problems in twisted groups of Lie type}, author={Henrik B\"a\"arnhielm}, journal={arXiv preprint arXiv:0805.3742}, year={2008}, archivePrefix={arXiv}, eprint={0805.3742}, primaryClass={math.GR cs.DS} }
bäärnhielm2008algorithmic
arxiv-3835
0805.3747
Constructing Folksonomies from User-specified Relations on Flickr
Many social Web sites allow users to publish content and annotate it with descriptive metadata. In addition to flat tags, some social Web sites have recently begun to allow users to organize their content and metadata hierarchically. The social photosharing site Flickr, for example, allows users to group related photos in sets, and related sets in collections. The social bookmarking site Del.icio.us similarly lets users group related tags into bundles. Although the sites themselves don't impose any constraints on how these hierarchies are used, individuals generally use them to capture relationships between concepts, most commonly the broader/narrower relations. Collective annotation of content with hierarchical relations may lead to an emergent classification system, called a folksonomy. While some researchers have explored using tags as evidence for learning folksonomies, we believe that the hierarchical relations described above offer a high-quality source of evidence for this task. We propose a simple approach to aggregate the shallow hierarchies created by many distinct Flickr users into a common folksonomy. Our approach uses statistics to determine if a particular relation should be retained or discarded. The relations are then woven together into larger hierarchies. Although we have not carried out a detailed quantitative evaluation of the approach, it looks very promising since it generates very reasonable, non-trivial hierarchies.
arxiv
@article{plangprasopchok2008constructing, title={Constructing Folksonomies from User-specified Relations on Flickr}, author={Anon Plangprasopchok and Kristina Lerman}, journal={arXiv preprint arXiv:0805.3747}, year={2008}, archivePrefix={arXiv}, eprint={0805.3747}, primaryClass={cs.AI} }
plangprasopchok2008constructing
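The aggregation step this abstract describes, pooling many users' shallow broader/narrower relations and keeping only the statistically supported ones, can be sketched in a few lines. The relations and the support threshold below are hypothetical; the paper's actual statistics may differ.

```python
from collections import Counter, defaultdict

# Hypothetical (broader, narrower) pairs harvested from many users'
# collection/set hierarchies.
user_relations = [
    ("animal", "dog"), ("animal", "dog"), ("animal", "dog"),
    ("animal", "cat"), ("animal", "cat"),
    ("dog", "puppy"), ("dog", "puppy"),
    ("dog", "animal"),            # noisy, inverted relation from one user
]

MIN_SUPPORT = 2                   # keep a relation only if >= 2 users assert it

support = Counter(user_relations)
folksonomy = defaultdict(list)
for (broader, narrower), n in support.items():
    # Keep the better-supported direction when both occur.
    if n >= MIN_SUPPORT and n > support.get((narrower, broader), 0):
        folksonomy[broader].append(narrower)

print(dict(folksonomy))  # {'animal': ['dog', 'cat'], 'dog': ['puppy']}
```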
arxiv-3836
0805.3799
The Structure of Narrative: the Case of Film Scripts
We analyze the style and structure of story narrative using the case of film scripts. The practical importance of this is noted, especially the need to have support tools for television movie writing. We use the Casablanca film script, and scripts from six episodes of CSI (Crime Scene Investigation). For analysis of style and structure, we quantify various central perspectives discussed in McKee's book, "Story: Substance, Structure, Style, and the Principles of Screenwriting". Film scripts offer a useful point of departure for exploration of the analysis of more general narratives. Our methodology, using Correspondence Analysis and hierarchical clustering, is innovative in a range of areas that we discuss. In particular, this work is groundbreaking in taking the qualitative analysis of McKee and grounding it in a quantitative and algorithmic framework.
arxiv
@article{murtagh2008the, title={The Structure of Narrative: the Case of Film Scripts}, author={Fionn Murtagh, Adam Ganz and Stewart McKie}, journal={Pattern Recognition, 42 (2), 302-312, 2009}, year={2008}, doi={10.1016/j.patcog.2008.05.026}, archivePrefix={arXiv}, eprint={0805.3799}, primaryClass={cs.AI} }
murtagh2008the
arxiv-3837
0805.3800
An Evolutionary-Based Approach to Learning Multiple Decision Models from Underrepresented Data
The use of multiple Decision Models (DMs) makes it possible to enhance the accuracy of decisions and at the same time allows users to evaluate the confidence in decision making. In this paper we explore the ability of multiple DMs to learn from a small amount of verified data. This becomes important when data samples are difficult to collect and verify. We propose an evolutionary-based approach to solving this problem. The proposed technique is examined on a few clinical problems represented by a small amount of data.
arxiv
@article{schetinin2008an, title={An Evolutionary-Based Approach to Learning Multiple Decision Models from Underrepresented Data}, author={Vitaly Schetinin, Dayou Li, Carsten Maple}, journal={arXiv preprint arXiv:0805.3800}, year={2008}, archivePrefix={arXiv}, eprint={0805.3800}, primaryClass={cs.AI cs.NE} }
schetinin2008an
arxiv-3838
0805.3802
Feature Selection for Bayesian Evaluation of Trauma Death Risk
In the last year more than 70,000 people have been brought to UK hospitals with serious injuries. Each time, a clinician has to urgently take a patient through a screening procedure to make a reliable decision on the trauma treatment. Typically, such a procedure comprises around 20 tests; however, the condition of a trauma patient remains very difficult to test properly. What happens if these tests are interpreted ambiguously and the information about the severity of the injury is misleading? A mistake in a decision can be fatal: using a mild treatment can put a patient at risk of dying from posttraumatic shock, while overtreatment can also cause death. How can we reduce the risk of death caused by unreliable decisions? It has been shown that probabilistic reasoning, based on the Bayesian methodology of averaging over decision models, allows clinicians to evaluate the uncertainty in decision making. Based on this methodology, in this paper we aim at selecting the most important screening tests while keeping high performance. We assume that probabilistic reasoning within the Bayesian methodology allows us to discover new relationships between the screening tests and uncertainty in decisions. In practice, selection of the most informative tests can also reduce the cost of a screening procedure in trauma care centers. In our experiments we use the UK Trauma data to compare the efficiency of the proposed technique in terms of performance. We also compare the uncertainty in decisions in terms of entropy.
arxiv
@article{jakaite2008feature, title={Feature Selection for Bayesian Evaluation of Trauma Death Risk}, author={L. Jakaite and V. Schetinin}, journal={arXiv preprint arXiv:0805.3802}, year={2008}, archivePrefix={arXiv}, eprint={0805.3802}, primaryClass={cs.AI} }
jakaite2008feature
arxiv-3839
0805.3824
On Metrics for Error Correction in Network Coding
The problem of error correction in both coherent and noncoherent network coding is considered under an adversarial model. For coherent network coding, where knowledge of the network topology and network code is assumed at the source and destination nodes, the error correction capability of an (outer) code is succinctly described by the rank metric; as a consequence, it is shown that universal network error correcting codes achieving the Singleton bound can be easily constructed and efficiently decoded. For noncoherent network coding, where knowledge of the network topology and network code is not assumed, the error correction capability of a (subspace) code is given exactly by a new metric, called the injection metric, which is closely related to, but different from, the subspace metric of K\"otter and Kschischang. In particular, in the case of a non-constant-dimension code, the decoder associated with the injection metric is shown to correct more errors than a minimum-subspace-distance decoder. All of these results are based on a general approach to adversarial error correction, which could be useful for other adversarial channels beyond network coding.
arxiv
@article{silva2008on, title={On Metrics for Error Correction in Network Coding}, author={Danilo Silva and Frank R. Kschischang}, journal={IEEE Transactions on Information Theory, vol. 55, no. 12, pp. 5479-5490, Dec. 2009}, year={2008}, doi={10.1109/TIT.2009.2032817}, archivePrefix={arXiv}, eprint={0805.3824}, primaryClass={cs.IT math.IT} }
silva2008on
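For reference, our reading of the injection distance introduced in this paper, alongside the K\"otter-Kschischang subspace distance $d_S$ it is compared with:

```latex
d_I(\mathcal{U},\mathcal{V})
  = \max\{\dim\mathcal{U},\,\dim\mathcal{V}\} - \dim(\mathcal{U}\cap\mathcal{V})
  = \tfrac{1}{2}\,d_S(\mathcal{U},\mathcal{V})
    + \tfrac{1}{2}\,\bigl|\dim\mathcal{U}-\dim\mathcal{V}\bigr|,
\qquad
d_S(\mathcal{U},\mathcal{V})
  = \dim\mathcal{U} + \dim\mathcal{V} - 2\dim(\mathcal{U}\cap\mathcal{V}).
```

For a constant-dimension code the dimension terms cancel and $d_I=\tfrac{1}{2}d_S$, so the two decoders coincide; this is consistent with the abstract's remark that the injection-metric decoder only corrects more errors for non-constant-dimension codes.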
arxiv-3840
0805.3858
A New Type of Cipher: DICING_csb
In this paper, we propose a new type of cipher named DICING_csb, which is derived from our previous stream cipher DICING. It applies a stream of subkeys and the encryption form of a block cipher, so it may be viewed as a combination of a stream cipher and a block cipher. Hence, the new type of cipher is fast like a stream cipher and needs no MAC.
arxiv
@article{li2008a, title={A New Type of Cipher: DICING_csb}, author={An-Ping Li}, journal={arXiv preprint arXiv:0805.3858}, year={2008}, archivePrefix={arXiv}, eprint={0805.3858}, primaryClass={cs.CR} }
li2008a
arxiv-3841
0805.3897
SPARK00: A Benchmark Package for the Compiler Evaluation of Irregular/Sparse Codes
We propose a set of benchmarks that specifically targets a major cause of performance degradation in high performance computing platforms: irregular access patterns. These benchmarks are meant to be used to assess the performance of optimizing compilers on codes with a varying degree of irregular access. The irregularity caused by the use of pointers and indirection arrays is a major challenge for optimizing compilers. Codes containing such patterns are notoriously hard to optimize, but they have a huge impact on the performance of modern architectures, which are under-utilized when encountering irregular memory accesses. In this paper, a set of benchmarks is described that explicitly measures the performance of kernels containing a variety of different access patterns found in real world applications. By offering a varying degree of complexity, we provide a platform for measuring the effectiveness of transformations. The difference in complexity stems from a difference in traversal patterns, the use of multiple indirections and control flow statements. The kernels used cover a variety of different access patterns, namely pointer traversals, indirection arrays, dynamic loop bounds and run-time dependent if-conditions. The kernels are small enough to be fully understood, which makes this benchmark set very suitable for the evaluation of restructuring transformations.
arxiv
@article{van der spek2008spark00:, title={SPARK00: A Benchmark Package for the Compiler Evaluation of Irregular/Sparse Codes}, author={H.L.A. van der Spek, E.M. Bakker, H.A.G. Wijshoff}, journal={arXiv preprint arXiv:0805.3897}, year={2008}, archivePrefix={arXiv}, eprint={0805.3897}, primaryClass={cs.PF} }
van der spek2008spark00:
arxiv-3842
0805.3901
Properly Coloured Cycles and Paths: Results and Open Problems
<|reference_start|>Properly Coloured Cycles and Paths: Results and Open Problems: In this paper, we consider a number of results and seven conjectures on properly edge-coloured (PC) paths and cycles in edge-coloured multigraphs. We overview some known results and prove new ones. In particular, we consider a family of transformations of an edge-coloured multigraph $G$ into an ordinary graph that allow us to check the existence of PC cycles and PC $(s,t)$-paths in $G$ and, if they exist, to find the shortest ones among them. We raise the problem of finding the optimal transformation and consider a possible solution to the problem.<|reference_end|>
arxiv
@article{gutin2008properly, title={Properly Coloured Cycles and Paths: Results and Open Problems}, author={Gregory Gutin and Eun Jung Kim}, journal={arXiv preprint arXiv:0805.3901}, year={2008}, archivePrefix={arXiv}, eprint={0805.3901}, primaryClass={cs.DM cs.DS} }
gutin2008properly
arxiv-3843
0805.3935
Fusion for Evaluation of Image Classification in Uncertain Environments
<|reference_start|>Fusion for Evaluation of Image Classification in Uncertain Environments: We present in this article a new evaluation method for classification and segmentation of textured images in uncertain environments. In uncertain environments, real classes and boundaries are known with only a partial certainty given by the experts. In many published papers, only classification or only segmentation is considered and evaluated. Here, we propose to take into account both the classification and segmentation results according to the certainty given by the experts. We present the results of this method on a fusion of classifiers of sonar images for seabed characterization.<|reference_end|>
arxiv
@article{martin2008fusion, title={Fusion for Evaluation of Image Classification in Uncertain Environments}, author={Arnaud Martin (E3I2)}, journal={Dans Proceeding of the 9th International Conference on Information Fusion - Information Fusion, Florence : Italie (2006)}, year={2008}, archivePrefix={arXiv}, eprint={0805.3935}, primaryClass={cs.AI} }
martin2008fusion
arxiv-3844
0805.3939
Decision Support with Belief Functions Theory for Seabed Characterization
<|reference_start|>Decision Support with Belief Functions Theory for Seabed Characterization: Seabed characterization from sonar images is a very hard task because of the nature of the data produced and the unknown environment, even for a human expert. In this work we propose an original approach to combining binary classifiers arising from different kinds of strategies, such as one-versus-one or one-versus-rest, usually used in SVM classification. The decision functions coming from these binary classifiers are interpreted in terms of belief functions in order to combine them with one of the numerous operators of the belief functions theory. Moreover, this interpretation of the decision functions allows us to propose a decision process that also takes into account the rejected observations too far removed from the learning data, and the imprecise decisions given in unions of classes. This new approach is illustrated and evaluated with an SVM in order to classify the different kinds of sediment in sonar images.<|reference_end|>
arxiv
@article{martin2008decision, title={Decision Support with Belief Functions Theory for Seabed Characterization}, author={Arnaud Martin (E3I2), Isabelle Quidu (E3I2)}, journal={Dans Proceeding of the 11th International Conference on Information Fusion - International Conference on Information Fusion, Cologne : Allemagne (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0805.3939}, primaryClass={cs.AI cs.IT math.IT} }
martin2008decision
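A minimal sketch of one of the "numerous operators of the belief functions theory" mentioned in the abstract above, Dempster's rule of combination. The two-class seabed frame and the mass values are illustrative assumptions, not data from the paper.

# Dempster's rule of combination for two basic belief assignments (BBAs).
# Focal elements are frozensets over a small frame of discernment.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BBAs (dict: frozenset -> mass) by Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Illustrative frame: two seabed sediment classes (hypothetical labels).
SAND, ROCK = frozenset({"sand"}), frozenset({"rock"})
BOTH = SAND | ROCK
m_svm1 = {SAND: 0.6, ROCK: 0.1, BOTH: 0.3}   # from one binary classifier
m_svm2 = {SAND: 0.5, ROCK: 0.3, BOTH: 0.2}   # from another
print(dempster_combine(m_svm1, m_svm2))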
arxiv-3845
0805.3964
DimReduction - Interactive Graphic Environment for Dimensionality Reduction
<|reference_start|>DimReduction - Interactive Graphic Environment for Dimensionality Reduction: Feature selection is a pattern recognition approach to choose important variables according to some criteria in order to distinguish or explain certain phenomena. There are many genomic and proteomic applications which rely on feature selection to answer questions such as: selecting signature genes which are informative about some biological state, e.g. normal tissues and several types of cancer; or defining a network of prediction or inference among elements such as genes, proteins, external stimuli and other elements of interest. In these applications, a recurrent problem is the lack of samples to perform an adequate estimate of the joint probabilities between element states. A myriad of feature selection algorithms and criterion functions have been proposed, although it is difficult to point to the best solution in general. The intent of this work is to provide an open-source multiplatform graphical environment to apply, test and compare many feature selection approaches suitable for bioinformatics problems.<|reference_end|>
arxiv
@article{lopes2008dimreduction, title={DimReduction - Interactive Graphic Environment for Dimensionality Reduction}, author={Fabricio Martins Lopes, David Correa Martins-Jr and Roberto M. Cesar-Jr}, journal={BMC Bioinformatics 2008, 9:451}, year={2008}, doi={10.1186/1471-2105-9-451}, archivePrefix={arXiv}, eprint={0805.3964}, primaryClass={cs.CV} }
lopes2008dimreduction
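Since the abstract above describes an environment for applying and comparing feature selection approaches, here is a minimal sketch of one generic wrapper method of that kind, sequential forward selection. The criterion function and the synthetic data are illustrative assumptions, not the tool's own algorithms.

# Sequential forward selection: greedily add the feature that most improves
# a criterion. Criterion and data below are toy stand-ins.
import numpy as np

def criterion(X, y, subset):
    """Toy criterion: negative squared residual of a least-squares fit."""
    A = X[:, subset]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return -np.sum((y - A @ coef) ** 2)

def forward_select(X, y, k):
    chosen = []
    while len(chosen) < k:
        best = max((f for f in range(X.shape[1]) if f not in chosen),
                   key=lambda f: criterion(X, y, chosen + [f]))
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = X[:, 2] - 2.0 * X[:, 7] + 0.1 * rng.normal(size=50)
print(forward_select(X, y, 2))   # expected to pick features 7 and 2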
arxiv-3846
0805.3972
Intuitive visualization of the intelligence for the run-down of terrorist wire-pullers
<|reference_start|>Intuitive visualization of the intelligence for the run-down of terrorist wire-pullers: The investigation of a terrorist attack is a time-critical task. The investigators have a limited time window to diagnose the organizational background of the terrorists, to run down and arrest the wire-pullers, and to take action to prevent or eradicate the terrorist attack. An intuitive interface to visualize the intelligence data set stimulates the investigators' experience and knowledge, and aids them in decision-making toward an immediately effective action. This paper presents a computational method to analyze the intelligence data set on the collective actions of the perpetrators of the attack, and to visualize it in the form of a social network diagram which predicts the positions where the wire-pullers conceal themselves.<|reference_end|>
arxiv
@article{maeno2008intuitive, title={Intuitive visualization of the intelligence for the run-down of terrorist wire-pullers}, author={Yoshiharu Maeno, and Yukio Ohsawa}, journal={arXiv preprint arXiv:0805.3972}, year={2008}, archivePrefix={arXiv}, eprint={0805.3972}, primaryClass={cs.AI} }
maeno2008intuitive
arxiv-3847
0805.4007
Every hierarchy of beliefs is a type
<|reference_start|>Every hierarchy of beliefs is a type: When modeling game situations of incomplete information one usually considers the players' hierarchies of beliefs, a source of all sorts of complications. Hars\'anyi's (1967-68) idea, henceforth referred to as the "Hars\'anyi program", is that hierarchies of beliefs can be replaced by "types". The types constitute the "type space". In the purely measurable framework Heifetz and Samet (1998) formalize the concept of type spaces and prove the existence and the uniqueness of a universal type space. Meier (2001) shows that the purely measurable universal type space is complete, i.e., it is a consistent object. With the aim of adding the finishing touch to these results, we will prove in this paper that in the purely measurable framework every hierarchy of beliefs can be represented by a unique element of the complete universal type space.<|reference_end|>
arxiv
@article{pinter2008every, title={Every hierarchy of beliefs is a type}, author={Miklos Pinter}, journal={arXiv preprint arXiv:0805.4007}, year={2008}, archivePrefix={arXiv}, eprint={0805.4007}, primaryClass={cs.GT} }
pinter2008every
arxiv-3848
0805.4020
Network Growth with Feedback
<|reference_start|>Network Growth with Feedback: Existing models of network growth typically have one or two parameters or strategies which are fixed for all times. We introduce a general framework where feedback on the current state of a network is used to dynamically alter the values of such parameters. A specific model is analyzed where limited resources are shared amongst arriving nodes, all vying to connect close to the root. We show that tunable feedback leads to growth of larger, more efficient networks. Exact results show that linear scaling of resources with system size yields crossover to a trivial condensed state, which can be considerably delayed with sublinear scaling.<|reference_end|>
arxiv
@article{d'souza2008network, title={Network Growth with Feedback}, author={Raissa M. D'Souza and Soumen Roy}, journal={Phys. Rev. E 78, 045101(R) (2008) (Rapid Comm.)}, year={2008}, doi={10.1103/PhysRevE.78.045101}, archivePrefix={arXiv}, eprint={0805.4020}, primaryClass={physics.soc-ph cond-mat.stat-mech cs.NI} }
d'souza2008network
arxiv-3849
0805.4023
Robust Joint Source-Channel Coding for Delay-Limited Applications
<|reference_start|>Robust Joint Source-Channel Coding for Delay-Limited Applications: In this paper, we consider the problem of robust joint source-channel coding over an additive white Gaussian noise channel. We propose a new scheme which achieves the optimal slope of the signal-to-distortion ratio (SDR) curve (unlike the previously known coding schemes). Also, we propose a family of robust codes which together maintain a bounded gap with the optimum SDR curve (in terms of dB). To show the importance of this result, we derive some theoretical bounds on the asymptotic performance of delay-limited hybrid digital-analog (HDA) coding schemes. We show that, unlike the delay-unlimited case, for any family of delay-limited HDA codes, the asymptotic performance loss is unbounded (in terms of dB).<|reference_end|>
arxiv
@article{taherzadeh2008robust, title={Robust Joint Source-Channel Coding for Delay-Limited Applications}, author={Mahmoud Taherzadeh and Amir K. Khandani}, journal={arXiv preprint arXiv:0805.4023}, year={2008}, archivePrefix={arXiv}, eprint={0805.4023}, primaryClass={cs.IT math.IT} }
taherzadeh2008robust
arxiv-3850
0805.4029
Event Synchronization by Lightweight Message Passing
<|reference_start|>Event Synchronization by Lightweight Message Passing: Concurrent ML's events and event combinators facilitate modular concurrent programming with first-class synchronization abstractions. A standard implementation of these abstractions relies on fairly complex manipulations of first-class continuations in the underlying language. In this paper, we present a lightweight implementation of these abstractions in Concurrent Haskell, a language that already provides first-order message passing. At the heart of our implementation is a new distributed synchronization protocol. In contrast with several previous translations of event abstractions in concurrent languages, we remain faithful to the standard semantics for events and event combinators; for example, we retain the symmetry of $\mathtt{choose}$ for expressing selective communication.<|reference_end|>
arxiv
@article{chaudhuri2008event, title={Event Synchronization by Lightweight Message Passing}, author={Avik Chaudhuri}, journal={arXiv preprint arXiv:0805.4029}, year={2008}, archivePrefix={arXiv}, eprint={0805.4029}, primaryClass={cs.PL cs.DC} }
chaudhuri2008event
arxiv-3851
0805.4049
An NP-hardness Result on the Monoid Frobenius Problem
<|reference_start|>An NP-hardness Result on the Monoid Frobenius Problem: The following problem is NP-hard: given a regular expression $E$, decide if $E^*$ is not co-finite.<|reference_end|>
arxiv
@article{xu2008an, title={An NP-hardness Result on the Monoid Frobenius Problem}, author={Zhi Xu, J. Shallit}, journal={arXiv preprint arXiv:0805.4049}, year={2008}, archivePrefix={arXiv}, eprint={0805.4049}, primaryClass={cs.DM cs.CC} }
xu2008an
arxiv-3852
0805.4053
Source Coding for a Simple Network with Receiver Side Information
<|reference_start|>Source Coding for a Simple Network with Receiver Side Information: We consider the problem of source coding with receiver side information for the simple network proposed by R. Gray and A. Wyner in 1974. In this network, a transmitter must reliably transport the output of two correlated information sources to two receivers using three noiseless channels: a public channel which connects the transmitter to both receivers, and two private channels which connect the transmitter directly to each receiver. We extend Gray and Wyner's original problem by permitting side information to be present at each receiver. We derive inner and outer bounds for the achievable rate region and, for three special cases, we show that the outer bound is tight.<|reference_end|>
arxiv
@article{timo2008source, title={Source Coding for a Simple Network with Receiver Side Information}, author={R. Timo, A. Grant, T. Chan and G. Kramer}, journal={arXiv preprint arXiv:0805.4053}, year={2008}, archivePrefix={arXiv}, eprint={0805.4053}, primaryClass={cs.IT math.IT} }
timo2008source
arxiv-3853
0805.4059
Menger's Paths with Minimum Mergings
<|reference_start|>Menger's Paths with Minimum Mergings: For an acyclic directed graph with multiple sources and multiple sinks, we prove that one can choose Menger's paths between the sources and the sinks such that the number of mergings between these paths is upper bounded by a constant depending only on the min-cuts between the sources and the sinks, regardless of the size and topology of the graph. We also give bounds on the minimum number of mergings between these paths, and discuss how it depends on the min-cuts.<|reference_end|>
arxiv
@article{han2008menger's, title={Menger's Paths with Minimum Mergings}, author={Guangyue Han}, journal={arXiv preprint arXiv:0805.4059}, year={2008}, archivePrefix={arXiv}, eprint={0805.4059}, primaryClass={cs.IT math.CO math.IT} }
han2008menger's
arxiv-3854
0805.4060
Sparse power-efficient topologies for wireless ad hoc sensor networks
<|reference_start|>Sparse power-efficient topologies for wireless ad hoc sensor networks: We study the problem of power-efficient routing for multihop wireless ad hoc sensor networks. The guiding insight of our work is that unlike an ad hoc wireless network, a wireless ad hoc sensor network does not require full connectivity among the nodes. As long as the sensing region is well covered by connected nodes, the network can perform its task. We consider two kinds of geometric random graphs as base interconnection structures: unit disk graphs $\UDG(2,\lambda)$ and $k$-nearest-neighbor graphs $\NN(2,k)$ built on points generated by a Poisson point process of density $\lambda$ in $\RR^2$. We provide subgraph constructions for these two models $\US(2,\lambda)$ and $\NS(2,k)$ and show that there are values $\lambda_s$ and $k_s$ above which these constructions have the following good properties: (i) they are sparse; (ii) they are power-efficient in the sense that the graph distance is no more than a constant times the Euclidean distance between any pair of points; (iii) they cover the space well; (iv) the subgraphs can be set up easily using local information at each node. We also describe a simple local algorithm for routing packets on these subgraphs. Our constructions also give new upper bounds for the critical values of the parameters $\lambda$ and $k$ for the models $\UDG(2,\lambda)$ and $\NN(2,k)$.<|reference_end|>
arxiv
@article{bagchi2008sparse, title={Sparse power-efficient topologies for wireless ad hoc sensor networks}, author={Amitabha Bagchi}, journal={arXiv preprint arXiv:0805.4060}, year={2008}, archivePrefix={arXiv}, eprint={0805.4060}, primaryClass={cs.NI math.PR} }
bagchi2008sparse
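A minimal sketch of the $\NN(2,k)$ base model from the abstract above: points generated by a Poisson point process in a square window, each joined to its k nearest neighbours. This is only the base interconnection structure, not the paper's power-efficient subgraph construction $\NS(2,k)$; the window size and parameters are illustrative.

# NN(2,k) base graph: Poisson process of density lam in a side x side
# window; each node is joined to its k nearest neighbours (undirected).
import numpy as np

def nn_graph(lam=10.0, side=5.0, k=4, seed=1):
    rng = np.random.default_rng(seed)
    n = rng.poisson(lam * side * side)          # Poisson number of nodes
    pts = rng.uniform(0.0, side, size=(n, 2))   # uniform given the count
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    edges = set()
    for i in range(n):
        for j in np.argsort(d[i])[:k]:          # k nearest neighbours of i
            edges.add((min(i, j), max(i, j)))   # store edge undirected
    return pts, edges

pts, edges = nn_graph()
print(len(pts), "nodes,", len(edges), "edges")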
arxiv-3855
0805.4072
Extensional Uniformity for Boolean Circuits
<|reference_start|>Extensional Uniformity for Boolean Circuits: Imposing an extensional uniformity condition on a non-uniform circuit complexity class C means simply intersecting C with a uniform class L. By contrast, the usual intensional uniformity conditions require that a resource-bounded machine be able to exhibit the circuits in the circuit family defining C. We say that (C,L) has the "Uniformity Duality Property" if the extensionally uniform class C \cap L can be captured intensionally by means of adding so-called "L-numerical predicates" to the first-order descriptive complexity apparatus describing the connection language of the circuit family defining C. This paper exhibits positive instances and negative instances of the Uniformity Duality Property.<|reference_end|>
arxiv
@article{mckenzie2008extensional, title={Extensional Uniformity for Boolean Circuits}, author={Pierre McKenzie, Michael Thomas and Heribert Vollmer}, journal={arXiv preprint arXiv:0805.4072}, year={2008}, archivePrefix={arXiv}, eprint={0805.4072}, primaryClass={cs.LO cs.CC} }
mckenzie2008extensional
arxiv-3856
0805.4081
Dynamics of thematic information flows
<|reference_start|>Dynamics of thematic information flows: The dynamics of topical flows of new information are studied in the framework of a logistic model. A condition of topic balance, under which the number of publications on each topic is proportional to the information space and time, is presented. A general time dependence of the intensity of publications on particular topics in the Internet is observed; unlike an exponential model, it exhibits a saturation region. Some limitations of the logistic model are identified, opening the way for further research.<|reference_end|>
arxiv
@article{lande2008dynamics, title={Dynamics of thematic information flows}, author={D.V. Lande, S.M. Braichevskii}, journal={arXiv preprint arXiv:0805.4081}, year={2008}, archivePrefix={arXiv}, eprint={0805.4081}, primaryClass={cs.IT math.IT} }
lande2008dynamics
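A minimal sketch of the logistic dynamics the abstract above invokes: publication intensity grows nearly exponentially at first and then saturates, unlike a pure exponential model. All parameter values are hypothetical.

# Logistic dynamics of the number of publications N(t) on a topic:
# dN/dt = r*N*(1 - N/K). Early growth is near-exponential; unlike a pure
# exponential model, N(t) saturates at the capacity K.
import numpy as np

r, K, N0, dt, steps = 0.5, 1000.0, 5.0, 0.1, 200
N = np.empty(steps)
N[0] = N0
for t in range(1, steps):
    N[t] = N[t - 1] + dt * r * N[t - 1] * (1.0 - N[t - 1] / K)  # Euler step
print(N[::40].round(1))   # rises steeply, then flattens near K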
arxiv-3857
0805.4085
Peculiarities of the Correlation between Local and Global News Popularity of Electronic Mass Media
<|reference_start|>Peculiarities of the Correlation between Local and Global News Popularity of Electronic Mass Media: One approach to the navigation problem in current information flows is to rank documents according to their popularity level. We define local and global news popularity based on the number of similar-in-content documents published within a local or a global time interval, respectively. The mutual behavior of the local and global popularity levels of documents is studied. An algorithm is proposed for detecting documents that became highly popular before new topics appeared.<|reference_end|>
arxiv
@article{lande2008peculiarities, title={Peculiarities of the Correlation between Local and Global News Popularity of Electronic Mass Media}, author={D. V. Lande, S. M. Braichevskii, A. T. Darmokhval, A. A. Snarskii}, journal={arXiv preprint arXiv:0805.4085}, year={2008}, archivePrefix={arXiv}, eprint={0805.4085}, primaryClass={cs.IT math.IT} }
lande2008peculiarities
arxiv-3858
0805.4101
Goal-oriented Dialog as a Collaborative Subordinated Activity involving Collective Acceptance
<|reference_start|>Goal-oriented Dialog as a Collaborative Subordinated Activity involving Collective Acceptance: Modeling dialog as a collaborative activity consists notably in specifying the content of the Conversational Common Ground and the kind of social mental state involved. In previous work (Saget, 2006), we claimed that Collective Acceptance is the proper social attitude for modeling Conversational Common Ground in the particular case of goal-oriented dialog. In this paper, a formalization of Collective Acceptance is presented, elements for integrating this attitude into a rational model of dialog are provided, and finally a model of referential acts as part of a collaborative activity is described. The particular case of reference has been chosen in order to exemplify our claims.<|reference_end|>
arxiv
@article{saget2008goal-oriented, title={Goal-oriented Dialog as a Collaborative Subordinated Activity involving Collective Acceptance}, author={Sylvie Saget (IRISA), Marc Guyomard (IRISA)}, journal={Dans Proceedings of the 10th Workshop on the Semantics and the Pragmatics of Dialogue (Brandial 2006) - 10th Workshop on the Semantics and the Pragmatics of Dialogue (Brandial 2006), Potsdam : Allemagne (2006)}, year={2008}, archivePrefix={arXiv}, eprint={0805.4101}, primaryClass={cs.AI cs.CL} }
saget2008goal-oriented
arxiv-3859
0805.4107
Spiral Walk on Triangular Meshes : Adaptive Replication in Data P2P Networks
<|reference_start|>Spiral Walk on Triangular Meshes : Adaptive Replication in Data P2P Networks: We introduce a decentralized replication strategy for peer-to-peer file exchange based on exhaustive exploration of the neighborhood of any node in the network. The replication scheme lets the replicas evenly populate the network mesh, while regulating the total number of replicas at the same time. This is achieved by self adaptation to entering or leaving of nodes. Exhaustive exploration is achieved by a spiral walk algorithm that generates a number of messages linearly proportional to the number of visited nodes. It requires a dedicated topology (a triangular mesh on a closed surface). We introduce protocols for node connection and departure that maintain the triangular mesh at low computational and bandwidth cost. Search efficiency is increased using a mechanism based on dynamically allocated super peers. We conclude with a discussion on experimental validation results.<|reference_end|>
arxiv
@article{bonnel2008spiral, title={Spiral Walk on Triangular Meshes : Adaptive Replication in Data P2P Networks}, author={Nicolas Bonnel (VALORIA), Gildas M'enier (VALORIA), Pierre-Fran\c{c}ois Marteau (VALORIA)}, journal={arXiv preprint arXiv:0805.4107}, year={2008}, archivePrefix={arXiv}, eprint={0805.4107}, primaryClass={cs.NI cs.DB} }
bonnel2008spiral
arxiv-3860
0805.4112
On the entropy and log-concavity of compound Poisson measures
<|reference_start|>On the entropy and log-concavity of compound Poisson measures: Motivated, in part, by the desire to develop an information-theoretic foundation for compound Poisson approximation limit theorems (analogous to the corresponding developments for the central limit theorem and for simple Poisson approximation), this work examines sufficient conditions under which the compound Poisson distribution has maximal entropy within a natural class of probability measures on the nonnegative integers. We show that the natural analog of the Poisson maximum entropy property remains valid if the measures under consideration are log-concave, but that it fails in general. A parallel maximum entropy result is established for the family of compound binomial measures. The proofs are largely based on ideas related to the semigroup approach introduced in recent work by Johnson for the Poisson family. Sufficient conditions are given for compound distributions to be log-concave, and specific examples are presented illustrating all the above results.<|reference_end|>
arxiv
@article{johnson2008on, title={On the entropy and log-concavity of compound Poisson measures}, author={Oliver Johnson, Ioannis Kontoyiannis and Mokshay Madiman}, journal={arXiv preprint arXiv:0805.4112}, year={2008}, number={Superceded by arXiv:0912.0581}, archivePrefix={arXiv}, eprint={0805.4112}, primaryClass={cs.IT math.IT math.PR} }
johnson2008on
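For orientation, the standard compound Poisson construction behind the abstract above, stated here as textbook background; the paper's precise class of measures on the nonnegative integers may carry additional conditions. A compound Poisson measure $\mathrm{CP}(\lambda,Q)$ is the law of $S=\sum_{i=1}^{N}X_i$, where $N\sim\mathrm{Po}(\lambda)$ is independent of the i.i.d. sequence $X_i\sim Q$, so that for each integer $k\ge 0$,
$$\mathrm{CP}(\lambda,Q)(k)=\sum_{j=0}^{\infty}e^{-\lambda}\frac{\lambda^{j}}{j!}\,Q^{*j}(k),$$
where $Q^{*j}$ denotes the $j$-fold convolution of $Q$.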
arxiv-3861
0805.4134
Design and Implementation Aspects of a novel Java P2P Simulator with GUI
<|reference_start|>Design and Implementation Aspects of a novel Java P2P Simulator with GUI: Peer-to-peer networks consist of thousands or millions of nodes that might join and leave arbitrarily. The evaluation of new protocols in real environments is many times practically impossible, especially at the design and testing stages. The purpose of this paper is to describe the implementation aspects of a new Java-based P2P simulator that has been developed to support scalability in the evaluation of such dynamic P2P environments. Evolving the functionality presented by previous solutions, we provide a friendly graphical user interface through which the high-level theoretic researcher/designer of a P2P system can easily construct an overlay with the desired number of nodes and evaluate its operations using a number of key distributions. Furthermore, the simulator has a built-in ability to produce statistics about the distributed structure. Emphasis was given to the parametric configuration of the simulator. As a result the developed tool can be utilized in the simulation and evaluation procedures of a variety of different protocols, with only a few changes in the Java code.<|reference_end|>
arxiv
@article{chrissikopoulos2008design, title={Design and Implementation Aspects of a novel Java P2P Simulator with GUI}, author={V. Chrissikopoulos, G. Papaloukopoulos, E. Sakkopoulos, S. Sioutas}, journal={arXiv preprint arXiv:0805.4134}, year={2008}, archivePrefix={arXiv}, eprint={0805.4134}, primaryClass={cs.NI cs.DB cs.DC} }
chrissikopoulos2008design
arxiv-3862
0805.4147
Succinct Geometric Indexes Supporting Point Location Queries
<|reference_start|>Succinct Geometric Indexes Supporting Point Location Queries: We propose to design data structures called succinct geometric indexes of negligible space (more precisely, o(n) bits) that, by taking advantage of the n points in the data set permuted and stored elsewhere as a sequence, support geometric queries in optimal time. Our first and main result is a succinct geometric index that can answer point location queries, a fundamental problem in computational geometry, on planar triangulations in O(lg n) time. We also design three variants of this index. The first supports point location using $\lg n + 2\sqrt{\lg n} + O(\lg^{1/4} n)$ point-line comparisons. The second supports point location in o(lg n) time when the coordinates are integers bounded by U. The last variant can answer point location in O(H+1) expected time, where H is the entropy of the query distribution. These results match the query efficiency of previous point location structures that use O(n) words or O(n lg n) bits, while saving drastic amounts of space. We then generalize our succinct geometric index to planar subdivisions, and design indexes for other types of queries. Finally, we apply our techniques to design the first implicit data structures that support point location in $O(\lg^2 n)$ time.<|reference_end|>
arxiv
@article{bose2008succinct, title={Succinct Geometric Indexes Supporting Point Location Queries}, author={Prosenjit Bose, Eric Y. Chen, Meng He, Anil Maheshwari, Pat Morin}, journal={arXiv preprint arXiv:0805.4147}, year={2008}, archivePrefix={arXiv}, eprint={0805.4147}, primaryClass={cs.CG cs.DS} }
bose2008succinct
arxiv-3863
0805.4167
Environment Assumptions for Synthesis
<|reference_start|>Environment Assumptions for Synthesis: The synthesis problem asks to construct a reactive finite-state system from an $\omega$-regular specification. Initial specifications are often unrealizable, which means that there is no system that implements the specification. A common reason for unrealizability is that assumptions on the environment of the system are incomplete. We study the problem of correcting an unrealizable specification $\phi$ by computing an environment assumption $\psi$ such that the new specification $\psi\to\phi$ is realizable. Our aim is to construct an assumption $\psi$ that constrains only the environment and is as weak as possible. We present a two-step algorithm for computing assumptions. The algorithm operates on the game graph that is used to answer the realizability question. First, we compute a safety assumption that removes a minimal set of environment edges from the graph. Second, we compute a liveness assumption that puts fairness conditions on some of the remaining environment edges. We show that the problem of finding a minimal set of fair edges is computationally hard, and we use probabilistic games to compute a locally minimal fairness assumption.<|reference_end|>
arxiv
@article{chatterjee2008environment, title={Environment Assumptions for Synthesis}, author={Krishnendu Chatterjee, Thomas A. Henzinger and Barbara Jobstmann}, journal={arXiv preprint arXiv:0805.4167}, year={2008}, archivePrefix={arXiv}, eprint={0805.4167}, primaryClass={cs.GT cs.LO} }
chatterjee2008environment
arxiv-3864
0805.4211
Managing Critical Spreadsheets in a Compliant Environment
<|reference_start|>Managing Critical Spreadsheets in a Compliant Environment: The use of uncontrolled financial spreadsheets can expose organizations to unacceptable business and compliance risks, including errors in the financial reporting process, spreadsheet misuse and fraud, or even significant operational errors. These risks have been well documented and thoroughly researched. With the advent of regulatory mandates such as SOX 404 and FDICIA in the U.S., and MiFID, Basel II and Combined Code in the UK and Europe, leading tax and audit firms are now recommending that organizations automate their internal controls over critical spreadsheets and other end-user computing applications, including Microsoft Access databases. At a minimum, auditors mandate version control, change control and access control for operational spreadsheets, with more advanced controls for critical financial spreadsheets. This paper summarises the key issues regarding the establishment and maintenance of control of Business Critical spreadsheets.<|reference_end|>
arxiv
@article{saadat2008managing, title={Managing Critical Spreadsheets in a Compliant Environment}, author={Soheil Saadat}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2007 21-24 ISBN 978-905617-58-6}, year={2008}, archivePrefix={arXiv}, eprint={0805.4211}, primaryClass={cs.SE cs.HC} }
saadat2008managing
arxiv-3865
0805.4218
A Structured Methodology for Spreadsheet Modelling
<|reference_start|>A Structured Methodology for Spreadsheet Modelling: In this paper, we discuss the problem of the software engineering of a class of business spreadsheet models. A methodology for structured software development is proposed, which is based on structured analysis of data, represented as Jackson diagrams. It is shown that this analysis allows a straightforward modularisation, and that individual modules may be represented with indentation in the block-structured form of structured programs. The benefits of structured format are discussed, in terms of comprehensibility, ease of maintenance, and reduction in errors. The capability of the methodology to provide a modular overview in the model is described, and examples are given. The potential for a reverse-engineering tool, to transform existing spreadsheet models is discussed.<|reference_end|>
arxiv
@article{knight2008a, title={A Structured Methodology for Spreadsheet Modelling}, author={Brian Knight, David Chadwick, Kamalesen Rajalingham}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2000 43-50 ISBN:1 86166 158 4}, year={2008}, archivePrefix={arXiv}, eprint={0805.4218}, primaryClass={cs.SE} }
knight2008a
arxiv-3866
0805.4219
Building Financial Accuracy into Spreadsheets
<|reference_start|>Building Financial Accuracy into Spreadsheets: Students learning how to apply spreadsheets to accounting problems are not always well served by the built-in financial functions. Problems can arise because of differences between UK and US practice, through anomalies in the functions themselves, and because the promptings of 'Wizards' engender an attitude of filling in the blanks on the screen and hoping for the best. Some examples of these problems are described, and suggestions are presented for ways of improving the situation. Principally, it is suggested that spreadsheet prompts and 'Help' screens should offer integrated guidance, covering some aspects of financial practice as well as matters of spreadsheet technique.<|reference_end|>
arxiv
@article{hawker2008building, title={Building Financial Accuracy into Spreadsheets}, author={Andrew Hawker}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2000 35-40 ISBN:1 86166 158 4}, year={2008}, archivePrefix={arXiv}, eprint={0805.4219}, primaryClass={cs.SE} }
hawker2008building
arxiv-3867
0805.4224
Classification of Spreadsheet Errors
<|reference_start|>Classification of Spreadsheet Errors: This paper describes a framework for a systematic classification of spreadsheet errors. This classification or taxonomy of errors is aimed at facilitating analysis and comprehension of the different types of spreadsheet errors. The taxonomy is an outcome of an investigation of the widespread problem of spreadsheet errors and an analysis of specific types of these errors. This paper contains a description of the various elements and categories of the classification and is supported by appropriate examples.<|reference_end|>
arxiv
@article{rajalingham2008classification, title={Classification of Spreadsheet Errors}, author={Kamalasen Rajalingham, David R. Chadwick, Brian Knight}, journal={arXiv preprint arXiv:0805.4224}, year={2008}, archivePrefix={arXiv}, eprint={0805.4224}, primaryClass={cs.SE} }
rajalingham2008classification
arxiv-3868
0805.4236
Risk Assessment For Spreadsheet Developments: Choosing Which Models to Audit
<|reference_start|>Risk Assessment For Spreadsheet Developments: Choosing Which Models to Audit: Errors in spreadsheet applications and models are alarmingly common (some authorities, with justification, cite spreadsheets containing errors as the norm rather than the exception). Faced with this body of evidence, the auditor can be faced with a huge task - the temptation may be to launch code inspections for every spreadsheet in an organisation. This can be very expensive and time-consuming. This paper describes risk assessment based on the "SpACE" audit methodology used by H M Customs & Excise's tax inspectors. This allows the auditor to target resources on the spreadsheets posing the highest risk of error, and justify the deployment of those resources to managers and clients. Since the opposite of audit risk is audit assurance, the paper also offers an overview of some elements of good practice in the use of spreadsheets in business.<|reference_end|>
arxiv
@article{butler2008risk, title={Risk Assessment For Spreadsheet Developments: Choosing Which Models to Audit}, author={Raymond J. Butler}, journal={Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2000 65-74 ISBN:1 86166 158 4}, year={2008}, archivePrefix={arXiv}, eprint={0805.4236}, primaryClass={cs.SE cs.CY} }
butler2008risk
arxiv-3869
0805.4247
Neural network learning of optimal Kalman prediction and control
<|reference_start|>Neural network learning of optimal Kalman prediction and control: Although there are many neural network (NN) algorithms for prediction and for control, and although methods for optimal estimation (including filtering and prediction) and for optimal control in linear systems were provided by Kalman in 1960 (with nonlinear extensions since then), there has been, to my knowledge, no NN algorithm that learns either Kalman prediction or Kalman control (apart from the special case of stationary control). Here we show how optimal Kalman prediction and control (KPC), as well as system identification, can be learned and executed by a recurrent neural network composed of linear-response nodes, using as input only a stream of noisy measurement data. The requirements of KPC appear to impose significant constraints on the allowed NN circuitry and signal flows. The NN architecture implied by these constraints bears certain resemblances to the local-circuit architecture of mammalian cerebral cortex. We discuss these resemblances, as well as caveats that limit our current ability to draw inferences for biological function. It has been suggested that the local cortical circuit (LCC) architecture may perform core functions (as yet unknown) that underlie sensory, motor, and other cortical processing. It is reasonable to conjecture that such functions may include prediction, the estimation or inference of missing or noisy sensory data, and the goal-driven generation of control signals. The resemblances found between the KPC NN architecture and that of the LCC are consistent with this conjecture.<|reference_end|>
arxiv
@article{linsker2008neural, title={Neural network learning of optimal Kalman prediction and control}, author={Ralph Linsker}, journal={arXiv preprint arXiv:0805.4247}, year={2008}, number={IBM Research Report RC24390}, archivePrefix={arXiv}, eprint={0805.4247}, primaryClass={cs.NE} }
linsker2008neural
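For reference, the textbook Kalman filter recursion that the recurrent network described above learns to perform; the constant-velocity system matrices are illustrative, and this is the classical algorithm, not the paper's neural implementation.

# Textbook Kalman filter recursion (predict + update) for a linear system
# x' = A x + w, z = C x + v; the computation the paper's recurrent network
# is trained to emulate. System matrices below are illustrative.
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with measurement z
    S = C @ P_pred @ C.T + R                     # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

A = np.array([[1.0, 1.0], [0.0, 1.0]])           # constant-velocity model
C = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
for z in [0.9, 2.1, 2.9, 4.2]:                   # noisy position readings
    x, P = kalman_step(x, P, np.array([z]), A, C, Q, R)
print(x)                                         # estimated position, velocity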
arxiv-3870
0805.4248
On the Capacity of Wireless Multicast Networks
<|reference_start|>On the Capacity of Wireless Multicast Networks: The problem of maximizing the average rate in a multicast network subject to a coverage constraint (minimum quality of service) is studied. Assuming the channel state information is available only at the receiver side and single antenna nodes, the highest expected rate achievable by a random user in the network, called expected typical rate, is derived in two scenarios: hard coverage constraint and soft coverage constraint. In the first case, the coverage is expressed in terms of the outage probability, while in the second case, the expected rate should satisfy certain minimum requirement. It is shown that the optimum solution in both cases (achieving the highest expected typical rate for given coverage requirements) is achieved by an infinite layer superposition code for which the optimum power allocation among the different layers is derived. For the MISO case, a suboptimal coding scheme is proposed, which is shown to be asymptotically optimal, when the number of transmit antennas grows at least logarithmically with the number of users in the network.<|reference_end|>
arxiv
@article{mirghaderi2008on, title={On the Capacity of Wireless Multicast Networks}, author={Seyed Reza Mirghaderi, Alireza Bayesteh, and Amir K. Khandani}, journal={arXiv preprint arXiv:0805.4248}, year={2008}, number={UW Technical Report#25-2006}, archivePrefix={arXiv}, eprint={0805.4248}, primaryClass={cs.IT math.IT} }
mirghaderi2008on
arxiv-3871
0805.4249
Coalition Games with Cooperative Transmission: A Cure for the Curse of Boundary Nodes in Selfish Packet-Forwarding Wireless Networks
<|reference_start|>Coalition Games with Cooperative Transmission: A Cure for the Curse of Boundary Nodes in Selfish Packet-Forwarding Wireless Networks: In wireless packet-forwarding networks with selfish nodes, application of a repeated game can induce the nodes to forward each others' packets, so that the network performance can be improved. However, the nodes on the boundary of such networks cannot benefit from this strategy, as the other nodes do not depend on them. This problem is sometimes known as {\em the curse of the boundary nodes}. To overcome this problem, an approach based on coalition games is proposed, in which the boundary nodes can use cooperative transmission to help the backbone nodes in the middle of the network. In return, the backbone nodes are willing to forward the boundary nodes' packets. Here, the concept of core is used to study the stability of the coalitions in such games. Then three types of fairness are investigated, namely, min-max fairness using nucleolus, average fairness using the Shapley function, and a newly proposed market fairness. Based on the specific problem addressed in this paper, market fairness is a new fairness concept involving fairness between multiple backbone nodes and multiple boundary nodes. Finally, a protocol is designed using both repeated games and coalition games. Simulation results show how boundary nodes and backbone nodes form coalitions according to different fairness criteria. The proposed protocol can improve the network connectivity by about 50%, compared with pure repeated game schemes.<|reference_end|>
arxiv
@article{han2008coalition, title={Coalition Games with Cooperative Transmission: A Cure for the Curse of Boundary Nodes in Selfish Packet-Forwarding Wireless Networks}, author={Zhu Han and Vincent Poor}, journal={arXiv preprint arXiv:0805.4249}, year={2008}, archivePrefix={arXiv}, eprint={0805.4249}, primaryClass={cs.IT cs.GT math.IT} }
han2008coalition
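A minimal sketch of the Shapley function underlying the "average fairness" criterion in the abstract above, computed by averaging marginal contributions over all player orderings; the characteristic function is a toy stand-in for the paper's coalition value.

# Shapley value by averaging marginal contributions over all orderings.
from itertools import permutations

def shapley(players, v):
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        seen = frozenset()
        for p in order:
            phi[p] += v(seen | {p}) - v(seen)    # marginal contribution of p
            seen = seen | {p}
    return {p: phi[p] / len(orders) for p in players}

# Toy game: one backbone node 'B' and two boundary nodes; a coalition is
# worth 1 per boundary node, but only if the backbone node participates.
players = ["B", "b1", "b2"]
v = lambda S: (len(S) - 1) if "B" in S else 0
print(shapley(players, v))   # backbone gets half; boundary nodes split rest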
arxiv-3872
0805.4290
From Data Topology to a Modular Classifier
<|reference_start|>From Data Topology to a Modular Classifier: This article describes an approach to designing a distributed and modular neural classifier. This approach introduces a new hierarchical clustering that enables one to determine reliable regions in the representation space by exploiting supervised information. A multilayer perceptron is then associated with each of these detected clusters and charged with recognizing elements of the associated cluster while rejecting all others. The obtained global classifier is comprised of a set of cooperating neural networks and completed by a K-nearest neighbor classifier charged with treating elements rejected by all the neural networks. Experimental results for the handwritten digit recognition problem and comparison with neural and statistical nonmodular classifiers are given.<|reference_end|>
arxiv
@article{ennaji2008from, title={From Data Topology to a Modular Classifier}, author={Abdel Ennaji (LITIS), Arnaud Ribert (LITIS), Yves Lecourtier (LITIS)}, journal={International Journal On Document Analysis and Recognition 6, 1 (2003) 1-9}, year={2008}, doi={10.1007/s10032-002-0095-3}, archivePrefix={arXiv}, eprint={0805.4290}, primaryClass={cs.LG} }
ennaji2008from
arxiv-3873
0805.4300
Balanced Families of Perfect Hash Functions and Their Applications
<|reference_start|>Balanced Families of Perfect Hash Functions and Their Applications: The construction of perfect hash functions is a well-studied topic. In this paper, this concept is generalized with the following definition. We say that a family of functions from $[n]$ to $[k]$ is a $\delta$-balanced $(n,k)$-family of perfect hash functions if for every $S \subseteq [n]$, $|S|=k$, the number of functions that are 1-1 on $S$ is between $T/\delta$ and $\delta T$ for some constant $T>0$. The standard definition of a family of perfect hash functions requires that there be at least one function that is 1-1 on $S$, for each $S$ of size $k$. In the new notion of balanced families, we require the number of 1-1 functions to be almost the same (taking $\delta$ to be close to 1) for every such $S$. Our main result is that for any constant $\delta > 1$, a $\delta$-balanced $(n,k)$-family of perfect hash functions of size $2^{O(k \log \log k)} \log n$ can be constructed in time $2^{O(k \log \log k)} n \log n$. Using the technique of color-coding we can apply our explicit constructions to devise approximation algorithms for various counting problems in graphs. In particular, we exhibit a deterministic polynomial time algorithm for approximating both the number of simple paths of length $k$ and the number of simple cycles of size $k$ for any $k \leq O(\frac{\log n}{\log \log \log n})$ in a graph with $n$ vertices. The approximation is up to any fixed desirable relative error.<|reference_end|>
arxiv
@article{alon2008balanced, title={Balanced Families of Perfect Hash Functions and Their Applications}, author={Noga Alon and Shai Gutner}, journal={Proc. of 34th ICALP (2007), 435-446}, year={2008}, archivePrefix={arXiv}, eprint={0805.4300}, primaryClass={cs.DS cs.DM} }
alon2008balanced
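A sketch of the color-coding estimator that the balanced families above derandomize: color the vertices randomly with k colors, count colorful paths on k vertices exactly by dynamic programming over color subsets, and rescale by k^k/k!, the inverse probability that a fixed k-vertex path is colorful. Random colorings are used here, whereas the paper replaces them with an explicit balanced family; the graph is illustrative.

# Color-coding estimator for the number of simple paths on k vertices.
import random
from math import factorial

def colorful_paths(adj, color, k):
    """Exact count of paths whose k vertices all receive distinct colors."""
    n = len(adj)
    cnt = [[0] * n for _ in range(1 << k)]       # cnt[S][v]: colorful paths
    for v in range(n):                           # with color set S ending at v
        cnt[1 << color[v]][v] = 1
    for S in range(1 << k):
        for v in range(n):
            if not cnt[S][v] or not (S >> color[v]) & 1:
                continue
            for u in adj[v]:
                if not (S >> color[u]) & 1:      # extend with an unused color
                    cnt[S | (1 << color[u])][u] += cnt[S][v]
    full = (1 << k) - 1
    return sum(cnt[full]) // 2                   # each path counted from both ends

def estimate_paths(adj, k, trials=2000, seed=0):
    rng = random.Random(seed)
    scale = k ** k / factorial(k)                # 1 / P(a fixed path is colorful)
    total = 0
    for _ in range(trials):
        color = [rng.randrange(k) for _ in adj]
        total += colorful_paths(adj, color, k)
    return total * scale / trials

adj = [[1], [0, 2], [1, 3], [2, 4], [3]]         # a path on 5 vertices
print(estimate_paths(adj, 3))                    # ~3 paths on 3 vertices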
arxiv-3874
0805.4323
Network Connection Games with Disconnected Equilibria
<|reference_start|>Network Connection Games with Disconnected Equilibria: In this paper we extend a popular non-cooperative network creation game (NCG) to allow for disconnected equilibrium networks. There are n players, each is a vertex in a graph, and a strategy is a subset of players to build edges to. For each edge a player must pay a cost \alpha, and the individual cost for a player represents a trade-off between edge costs and shortest path lengths to all other players. We extend the model to a penalized game (PCG), for which we reduce the penalty counted towards the individual cost for a pair of disconnected players to a finite value \beta. Our analysis concentrates on existence, structure, and cost of disconnected Nash and strong equilibria. Although the PCG is not a potential game, pure Nash equilibria always and pure strong equilibria very often exist. We provide tight conditions under which disconnected Nash (strong) equilibria can evolve. Components of these equilibria must be Nash (strong) equilibria of a smaller NCG. However, in contrast to the NCG, for almost all parameter values no tree is a stable component. Finally, we present a detailed characterization of the price of anarchy that reveals cases in which the price of anarchy is \Theta(n) and thus several orders of magnitude larger than in the NCG. Perhaps surprisingly, the strong price of anarchy increases to at most 4. This indicates that global communication and coordination can be extremely valuable to overcome socially inferior topologies in distributed selfish network design.<|reference_end|>
arxiv
@article{brandes2008network, title={Network Connection Games with Disconnected Equilibria}, author={Ulrik Brandes, Martin Hoefer, Bobo Nick}, journal={arXiv preprint arXiv:0805.4323}, year={2008}, archivePrefix={arXiv}, eprint={0805.4323}, primaryClass={cs.GT} }
brandes2008network
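A minimal sketch of the individual cost in the penalized game (PCG) as described above: alpha per edge a player buys, plus the shortest-path distance to every reachable player, plus the finite penalty beta for each disconnected pair; the strategy profile is a toy example.

# Individual cost of player i in the PCG, computed with BFS distances.
from collections import deque

def player_cost(i, bought, n, alpha, beta):
    adj = [set() for _ in range(n)]
    for u, targets in bought.items():
        for v in targets:                 # an edge, once bought, is undirected
            adj[u].add(v); adj[v].add(u)
    dist = {i: 0}
    q = deque([i])
    while q:                              # BFS shortest paths from i
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    usage = sum(dist.values())
    penalty = beta * (n - len(dist))      # finite beta per unreachable player
    return alpha * len(bought.get(i, ())) + usage + penalty

# Toy profile on 5 players: {0,1,2} form a path, {3,4} a separate pair.
bought = {0: [1], 1: [2], 3: [4]}
print(player_cost(0, bought, n=5, alpha=1.0, beta=4.0))  # 1 + (1+2) + 2*4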
arxiv-3875
0805.4338
Quantization of Prior Probabilities for Hypothesis Testing
<|reference_start|>Quantization of Prior Probabilities for Hypothesis Testing: Bayesian hypothesis testing is investigated when the prior probabilities of the hypotheses, taken as a random vector, are quantized. Nearest neighbor and centroid conditions are derived using mean Bayes risk error as a distortion measure for quantization. A high-resolution approximation to the distortion-rate function is also obtained. Human decision making in segregated populations is studied assuming Bayesian hypothesis testing with quantized priors.<|reference_end|>
arxiv
@article{varshney2008quantization, title={Quantization of Prior Probabilities for Hypothesis Testing}, author={Kush R. Varshney and Lav R. Varshney}, journal={IEEE Transactions on Signal Processing, vol. 56, no. 10, October 2008, p. 4553-4562}, year={2008}, doi={10.1109/TSP.2008.928164}, archivePrefix={arXiv}, eprint={0805.4338}, primaryClass={cs.IT math.IT math.ST stat.TH} }
varshney2008quantization
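A sketch of a Lloyd-type design of a quantizer for the prior probability, alternating the nearest neighbor and centroid conditions under the mean Bayes risk error distortion discussed above. The binary Gaussian hypothesis test, uniform costs, the prior grid, and the search standing in for the exact centroid condition are all illustrative assumptions.

# Lloyd-type design of a K-level quantizer for the prior p = P(H1),
# with Bayes risk error as distortion. Test: H0 ~ N(0,1), H1 ~ N(mu,1).
import numpy as np
from math import erf, sqrt, log

mu = 2.0
Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))  # standard normal CDF

def bayes_risk(p, a):
    """Risk under true prior p when the LRT threshold is tuned for prior a."""
    tau = mu / 2.0 + log((1.0 - a) / a) / mu      # likelihood ratio threshold
    pf, pm = 1.0 - Phi(tau), Phi(tau - mu)        # false alarm and miss probs
    return (1.0 - p) * pf + p * pm

def bre(p, a):
    """Bayes risk error: excess risk from acting on quantized prior a (>= 0)."""
    return bayes_risk(p, a) - bayes_risk(p, p)

samples = np.linspace(0.01, 0.99, 200)            # grid standing in for priors
grid = np.linspace(0.01, 0.99, 99)                # candidate codepoints
code = np.array([0.2, 0.5, 0.8])                  # K = 3 initial levels
for _ in range(10):                               # Lloyd iterations
    D = np.array([[bre(p, a) for a in code] for p in samples])
    cell = D.argmin(axis=1)                       # nearest neighbor condition
    for j in range(len(code)):                    # centroid condition (search)
        pj = samples[cell == j]
        if len(pj):
            code[j] = min(grid, key=lambda a: sum(bre(p, a) for p in pj))
print(code.round(3))                              # quantized prior levels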
arxiv-3876
0805.4369
A semantic space for modeling children's semantic memory
<|reference_start|>A semantic space for modeling children's semantic memory: The goal of this paper is to present a model of children's semantic memory, which is based on a corpus reproducing the kinds of texts children are exposed to. After reviewing the literature on the development of semantic memory, a preliminary French corpus of 3.2 million words is described. Similarities in the resulting semantic space are compared to human data on four tests: association norms, vocabulary test, semantic judgments and memory tasks. A second corpus is described, which is composed of subcorpora corresponding to various ages. This stratified corpus is intended as a basis for developmental studies. Finally, two applications of these models of semantic memory are presented: the first one aims at tracing the development of semantic similarities paragraph by paragraph; the second one describes an implementation of a model of text comprehension derived from the Construction-integration model (Kintsch, 1988, 1998) and based on such models of semantic memory.<|reference_end|>
arxiv
@article{denhière2008a, title={A semantic space for modeling children's semantic memory}, author={Guy Denhi`ere (LPC), Beno^it Lemaire (TIMC), C'edrick Bellissens, Sandra Jhean}, journal={The handbook of Latent Semantic Analysis, Lawrence Erlbaum Associates (Ed.) (2007) 143-165}, year={2008}, archivePrefix={arXiv}, eprint={0805.4369}, primaryClass={cs.CL} }
denhière2008a
arxiv-3877
0805.4374
Capacity Bounds for Broadcast Channels with Confidential Messages
<|reference_start|>Capacity Bounds for Broadcast Channels with Confidential Messages: In this paper, we study capacity bounds for discrete memoryless broadcast channels with confidential messages. Two private messages as well as a common message are transmitted; the common message is to be decoded by both receivers, while each private message is only for its intended receiver. In addition, each private message is to be kept secret from the unintended receiver where secrecy is measured by equivocation. We propose both inner and outer bounds to the rate equivocation region for broadcast channels with confidential messages. The proposed inner bound generalizes Csisz\'{a}r and K\"{o}rner's rate equivocation region for broadcast channels with a single confidential message, Liu {\em et al}'s achievable rate region for broadcast channels with perfect secrecy, Marton's and Gel'fand and Pinsker's achievable rate region for general broadcast channels. Our proposed outer bounds, together with the inner bound, help establish the rate equivocation region of several classes of discrete memoryless broadcast channels with confidential messages, including less noisy, deterministic, and semi-deterministic channels. Furthermore, specializing to the general broadcast channel by removing the confidentiality constraint, our proposed outer bounds reduce to new capacity outer bounds for the discrete memoryless broadcast channel.<|reference_end|>
arxiv
@article{xu2008capacity, title={Capacity Bounds for Broadcast Channels with Confidential Messages}, author={Jin Xu, Yi Cao, and Biao Chen}, journal={arXiv preprint arXiv:0805.4374}, year={2008}, doi={10.1109/TIT.2009.2027500}, archivePrefix={arXiv}, eprint={0805.4374}, primaryClass={cs.IT math.IT} }
xu2008capacity
arxiv-3878
0805.4394
Confidentiality, Integrity and High Availability with Open Source IT green
<|reference_start|>Confidentiality, Integrity and High Availability with Open Source IT green: This paper presents the elements that form the structure of a secure data network using stable and mature technologies that meet the requirement of being open source. The open-source principle may seem to conflict with the wish to keep maximum control over the data, but there is already evidence that open source does not hide the famous backdoors that are possible in closed-source systems. We base this work on experience gained in a real environment, using paravirtualization to illustrate a situation that is increasingly critical and now real in most companies: the virtualization of servers.<|reference_end|>
arxiv
@article{guimaraes2008confidentiality,, title={Confidentiality, Integrity and High Availability with Open Source IT green}, author={Luciana Guimaraes}, journal={arXiv preprint arXiv:0805.4394}, year={2008}, archivePrefix={arXiv}, eprint={0805.4394}, primaryClass={cs.CR cs.CY} }
guimaraes2008confidentiality,
arxiv-3879
0805.4425
Low-Complexity Structured Precoding for Spatially Correlated MIMO Channels
<|reference_start|>Low-Complexity Structured Precoding for Spatially Correlated MIMO Channels: The focus of this paper is on spatial precoding in correlated multi-antenna channels, where the number of independent data-streams is adapted to trade-off the data-rate with the transmitter complexity. Towards the goal of a low-complexity implementation, a structured precoder is proposed, where the precoder matrix evolves fairly slowly at a rate comparable with the statistical evolution of the channel. Here, the eigenvectors of the precoder matrix correspond to the dominant eigenvectors of the transmit covariance matrix, whereas the power allocation across the modes is fixed, known at both the ends, and is of low-complexity. A particular case of the proposed scheme (semiunitary precoding), where the spatial modes are excited with equal power, is shown to be near-optimal in matched channels. A matched channel is one where the dominant eigenvalues of the transmit covariance matrix are well-conditioned and their number equals the number of independent data-streams, and the receive covariance matrix is also well-conditioned. In mismatched channels, where the above conditions are not met, it is shown that the loss in performance with semiunitary precoding when compared with a perfect channel information benchmark is substantial. This loss needs to be mitigated via limited feedback techniques that provide partial channel information to the transmitter. More importantly, we develop matching metrics that capture the degree of matching of a channel to the precoder structure continuously, and allow ordering two matrix channels in terms of their mutual information or error probability performance.<|reference_end|>
arxiv
@article{raghavan2008low-complexity, title={Low-Complexity Structured Precoding for Spatially Correlated MIMO Channels}, author={Vasanthan Raghavan, Akbar Sayeed, Venu Veeravalli}, journal={arXiv preprint arXiv:0805.4425}, year={2008}, archivePrefix={arXiv}, eprint={0805.4425}, primaryClass={cs.IT math.IT} }
raghavan2008low-complexity
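A minimal sketch of the semiunitary special case described above: the precoder columns are the m dominant eigenvectors of the transmit covariance matrix, each excited with equal power P/m. The covariance matrix and dimensions are illustrative; this shows the precoder structure only, not a link-level simulation.

# Semiunitary precoder: m dominant eigenvectors of R_tx, equal power P/m.
import numpy as np

def semiunitary_precoder(R_tx, m, P=1.0):
    w, V = np.linalg.eigh(R_tx)                  # eigenvalues in ascending order
    U = V[:, np.argsort(w)[::-1][:m]]            # m dominant eigenvectors
    return np.sqrt(P / m) * U                    # Nt x m, equal power per mode

Nt, m = 4, 2
A = np.random.default_rng(0).normal(size=(Nt, Nt))
R_tx = A @ A.T                                   # toy transmit covariance
F = semiunitary_precoder(R_tx, m)
print(np.round(F.T @ F, 6))                      # (P/m) * identity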
arxiv-3880
0805.4440
Optimal Coding for the Erasure Channel with Arbitrary Alphabet Size
<|reference_start|>Optimal Coding for the Erasure Channel with Arbitrary Alphabet Size: An erasure channel with a fixed alphabet size $q$, where $q \gg 1$, is studied. It is proved that over any erasure channel (with or without memory), Maximum Distance Separable (MDS) codes achieve the minimum probability of error (assuming maximum likelihood decoding). Assuming a memoryless erasure channel, the error exponent of MDS codes is compared with that of random codes and linear random codes. It is shown that the envelopes of all these exponents are identical for rates above the critical rate. Noting the optimality of MDS codes, it is concluded that both random codes and linear random codes are exponentially optimal, whether the block size is larger or smaller than the alphabet size.<|reference_end|>
arxiv
@article{fashandi2008optimal, title={Optimal Coding for the Erasure Channel with Arbitrary Alphabet Size}, author={Shervan Fashandi, Shahab Oveis Gharan and Amir K. Khandani}, journal={arXiv preprint arXiv:0805.4440}, year={2008}, archivePrefix={arXiv}, eprint={0805.4440}, primaryClass={cs.IT math.IT} }
fashandi2008optimal
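To illustrate why MDS codes are natural for the erasure channel studied above: an (n, k) MDS code recovers the message from any k unerased symbols. The sketch below uses a real-valued Vandermonde generator (any k of its rows are invertible) purely for illustration; practical MDS codes such as Reed-Solomon work over finite fields.

# Erasure recovery with an MDS-like code: any k surviving symbols suffice.
import numpy as np

n, k = 7, 3
rng = np.random.default_rng(2)
points = np.arange(1, n + 1, dtype=float)
G = np.vander(points, k, increasing=True)        # n x k; any k rows invertible
msg = rng.normal(size=k)
codeword = G @ msg                               # n coded symbols
survivors = [0, 4, 6]                            # only k symbols escape erasure
recovered = np.linalg.solve(G[survivors], codeword[survivors])
print(np.allclose(recovered, msg))               # True: erasures corrected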
arxiv-3881
0805.4471
Exact Matrix Completion via Convex Optimization
<|reference_start|>Exact Matrix Completion via Convex Optimization: We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m >= C n^{1.2} r log n for some positive numerical constant C, then with very high probability, most n by n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.<|reference_end|>
arxiv
@article{candes2008exact, title={Exact Matrix Completion via Convex Optimization}, author={Emmanuel J. Candes and Benjamin Recht}, journal={arXiv preprint arXiv:0805.4471}, year={2008}, archivePrefix={arXiv}, eprint={0805.4471}, primaryClass={cs.IT math.IT} }
candes2008exact
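A minimal sketch of the convex program from the abstract above: find the matrix of minimum nuclear norm that agrees with the observed entries. It relies on the cvxpy modeling package; the problem size, rank, and sampling pattern are illustrative, and recovery is only guaranteed under the paper's conditions relating m, n and r.

# Nuclear norm minimization for matrix completion (needs cvxpy installed).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, r = 20, 2
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))  # rank-r ground truth
mask = (rng.random((n, n)) < 0.5).astype(float)        # sampled entry pattern

X = cp.Variable((n, n))
prob = cp.Problem(cp.Minimize(cp.normNuc(X)),
                  [cp.multiply(mask, X) == mask * M])  # match observed entries
prob.solve()
print(np.abs(X.value - M).max())                       # near zero on success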
arxiv-3882
0805.4502
Golden Space-Time Block Coded Modulation
<|reference_start|>Golden Space-Time Block Coded Modulation: In this paper we present a block coded modulation scheme for a 2 x 2 MIMO system over slow fading channels, where the inner code is the Golden Code. The scheme is based on a set partitioning of the Golden Code using two-sided ideals whose norm is a power of two. In this case, a lower bound for the minimum determinant is given by the minimum Hamming distance. The description of the ring structure of the quotients suggests further optimization in order to improve the overall distribution of determinants. Performance simulations show that the GC-RS schemes achieve a significant gain over the uncoded Golden Code.<|reference_end|>
arxiv
@article{luzzi2008golden, title={Golden Space-Time Block Coded Modulation}, author={L. Luzzi, G. Rekaya-Ben Othman, J.-C. Belfiore, E. Viterbo}, journal={arXiv preprint arXiv:0805.4502}, year={2008}, archivePrefix={arXiv}, eprint={0805.4502}, primaryClass={cs.IT math.IT} }
luzzi2008golden
arxiv-3883
0805.4508
Modeling Loosely Annotated Images with Imagined Annotations
<|reference_start|>Modeling Loosely Annotated Images with Imagined Annotations: In this paper, we present an approach to learning latent semantic analysis models from loosely annotated images for automatic image annotation and indexing. The given annotation in training images is loose for two reasons: (1) ambiguous correspondences between visual features and annotated keywords; (2) incomplete lists of annotated keywords. The second reason motivates us to enrich the incomplete annotation in a simple way before learning topic models. In particular, some imagined keywords are poured into the incomplete annotation by measuring similarity between keywords. Then, both given and imagined annotations are used to learn probabilistic topic models for automatically annotating new images. We conduct experiments on a typical Corel dataset of images and loose annotations, and compare the proposed method with state-of-the-art discrete annotation methods (which use a set of discrete blobs to represent an image). The proposed method improves word-driven probabilistic Latent Semantic Analysis (PLSA-words) to a performance comparable with the best discrete annotation method, while a merit of PLSA-words is retained, i.e., a wider semantic range.<|reference_end|>
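The enrichment step can be pictured as follows: any vocabulary keyword sufficiently similar to one of the given keywords is added as an "imagined" annotation before the topic models are learned. A toy sketch, where the similarity table, threshold and vocabulary are all illustrative stand-ins for a real keyword-similarity measure:

def enrich_annotation(given, vocabulary, sim, threshold=0.6):
    """Add 'imagined' keywords: any vocabulary word whose maximum
    similarity to a given keyword exceeds the threshold."""
    imagined = {w for w in vocabulary
                if w not in given
                and max(sim(w, g) for g in given) > threshold}
    return set(given) | imagined

# Toy similarity table standing in for a real keyword-similarity measure.
SIM = {("sea", "water"): 0.8, ("sea", "sky"): 0.4, ("boat", "ship"): 0.9}
def sim(a, b):
    return 1.0 if a == b else SIM.get((a, b), SIM.get((b, a), 0.0))

# 'water' and 'ship' are imagined (similar enough to 'sea' and 'boat').
print(enrich_annotation({"sea", "boat"}, ["water", "sky", "ship"], sim))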
arxiv
@article{tang2008modeling, title={Modeling Loosely Annotated Images with Imagined Annotations}, author={Hong Tang, Nozha Boujemma, Yunhao Chen}, journal={arXiv preprint arXiv:0805.4508}, year={2008}, archivePrefix={arXiv}, eprint={0805.4508}, primaryClass={cs.IR cs.AI} }
tang2008modeling
arxiv-3884
0805.4521
Textual Entailment Recognizing by Theorem Proving Approach
<|reference_start|>Textual Entailment Recognizing by Theorem Proving Approach: In this paper we present two original methods for recognizing textual inference. The first is a modified resolution method in which some linguistic considerations are introduced into the unification of two atoms. The approach is made possible by recent methods for transforming texts into logic formulas. The second is based on semantic relations in text, as represented in WordNet. Some similarities between the two methods are pointed out.<|reference_end|>
arxiv
@article{tatar2008textual, title={Textual Entailment Recognizing by Theorem Proving Approach}, author={Doina Tatar and Militon Frentiu}, journal={Studia Univ. Babes-Bolyai, Informatica, Vol. LI, Number 2, 2006}, year={2008}, archivePrefix={arXiv}, eprint={0805.4521}, primaryClass={cs.CL} }
tatar2008textual
arxiv-3885
0805.4543
Determination of the basis of the space of all root functionals of a system of polynomial equations and of the basis of its ideal by the operation of the extension of bounded root functionals
<|reference_start|>Determination of the basis of the space of all root functionals of a system of polynomial equations and of the basis of its ideal by the operation of the extension of bounded root functionals: An algorithm is proposed that finds a basis of the ideal and a basis of the space of all root functionals by using the extension operation for bounded root functionals, in the case where the number of polynomials equals the number of variables and the ideal of the polynomials is known to be 0-dimensional. The asymptotic complexity of this algorithm is d^{O(n)} operations, where n is the number of polynomials and of variables, and d is the maximal degree of the polynomials. The extension operation is connected with the multivariate Bezoutian construction.<|reference_end|>
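For contrast, the classical route to a basis of a 0-dimensional ideal is a Groebner basis computation, which is likewise exponential in the number of variables in the worst case. A sketch with sympy on an illustrative system (this is the standard alternative, not the paper's Bezoutian-based extension operation):

from sympy import symbols, groebner

x, y = symbols("x y")
polys = [x**2 + y**2 - 1, x - y]       # 0-dimensional: finitely many roots
G = groebner(polys, x, y, order="lex")
print(G)                               # a lexicographic basis of the ideal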
arxiv
@article{seifullin2008determination, title={Determination of the basis of the space of all root functionals of a system of polynomial equations and of the basis of its ideal by the operation of the extension of bounded root functionals}, author={Timur R. Seifullin}, journal={Dopov. Nats. Akad. Nauk Ukr. Mat. Prirodozn. Tekh. Nauki 2003, no. 8, 29--36. MR2046291 (2005a:13055)}, year={2008}, archivePrefix={arXiv}, eprint={0805.4543}, primaryClass={math.AG cs.SC math.AC} }
seifullin2008determination
arxiv-3886
0805.4560
Rock mechanics modeling based on soft granulation theory
<|reference_start|>Rock mechanics modeling based on soft granulation theory: This paper describes the application of information granulation theory to the design of rock engineering flowcharts. First, an overall flowchart based on information granulation theory is presented. Information granulation theory, in crisp (non-fuzzy) or fuzzy form, can take engineering experience (especially fuzzy, incomplete or superfluous information) and engineering judgment into account at each step of the design procedure, provided suitable modeling instruments are employed. In this spirit, and as an extension of soft modeling instruments, crisp and fuzzy granules are obtained from monitored data sets using three combinations of Self-Organizing Maps (SOM), Neuro-Fuzzy Inference Systems (NFIS) and Rough Set Theory (RST). The core of our algorithms is the balancing of crisp (rough, non-fuzzy) granules and fuzzy sub-granules within the non-fuzzy information (the initial granulation) over open-close iterations. Using different criteria for this balancing, the best granules (information pockets) are obtained. The proposed methods are validated on a data set of in-situ permeability measurements in rock masses at the Shivashan dam, Iran.<|reference_end|>
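Of the three soft instruments, the Self-Organizing Map is the simplest to sketch: a grid of prototype vectors is pulled toward the data, and the trained prototypes can serve as candidate crisp granules. A minimal numpy implementation of the standard SOM update rule, with grid size, learning rate and radius as illustrative assumptions:

import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr=0.5, radius=2.0):
    """Plain Self-Organizing Map: each sample pulls the best-matching
    prototype and its grid neighbours toward itself."""
    rng = np.random.default_rng(0)
    h, w = grid
    W = rng.standard_normal((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        decay = np.exp(-t / epochs)
        for x in data:
            d = np.linalg.norm(W - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1)
                       / (2 * (radius * decay) ** 2))       # neighbourhood weights
            W += (lr * decay) * g[..., None] * (x - W)
    return W.reshape(-1, data.shape[1])    # prototypes = candidate granules

data = np.random.default_rng(1).random((200, 3))
print(train_som(data).shape)  # (25, 3)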
arxiv
@article{owladeghaffari2008rock, title={Rock mechanics modeling based on soft granulation theory}, author={H.Owladeghaffari}, journal={arXiv preprint arXiv:0805.4560}, year={2008}, doi={10.1016/j.ijrmms.2008.09.001}, archivePrefix={arXiv}, eprint={0805.4560}, primaryClass={cs.AI} }
owladeghaffari2008rock
arxiv-3887
0805.4583
Channels that Heat Up
<|reference_start|>Channels that Heat Up: This work considers an additive noise channel where the time-k noise variance is a weighted sum of the channel input powers prior to time k. This channel is motivated by point-to-point communication between two terminals that are embedded in the same chip. Transmission heats up the entire chip and hence increases the thermal noise at the receiver. The capacity of this channel (both with and without feedback) is studied at low transmit powers and at high transmit powers. At low transmit powers, the slope of the capacity-vs-power curve at zero is computed and it is shown that the heating-up effect is beneficial. At high transmit powers, conditions are determined under which the capacity is bounded, i.e., under which the capacity does not grow to infinity as the allowed average power tends to infinity.<|reference_end|>
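A small simulation makes the channel model concrete: the time-k noise variance equals a baseline plus a weighted sum of the input powers prior to time k. The geometric weight profile and all parameters below are illustrative assumptions, not taken from the paper:

import numpy as np

def heat_up_channel(x, sigma2=1.0, alpha=0.01, decay=0.9, seed=0):
    """Additive-noise channel whose time-k noise variance is sigma2
    plus a geometrically weighted sum of past input powers."""
    rng = np.random.default_rng(seed)
    y = np.empty_like(x, dtype=float)
    heat = 0.0
    for k, xk in enumerate(x):
        var_k = sigma2 + alpha * heat
        y[k] = xk + rng.normal(scale=np.sqrt(var_k))
        heat = decay * heat + xk**2     # past powers feed future noise
    return y

x = np.sqrt(10.0) * np.sign(np.random.default_rng(1).standard_normal(1000))
print(np.var(heat_up_channel(x) - x))  # noise variance, inflated by heating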
arxiv
@article{koch2008channels, title={Channels that Heat Up}, author={Tobias Koch, Amos Lapidoth, Paul P. Sotiriadis}, journal={arXiv preprint arXiv:0805.4583}, year={2008}, doi={10.1109/TIT.2009.2023753}, archivePrefix={arXiv}, eprint={0805.4583}, primaryClass={cs.IT math.IT} }
koch2008channels
arxiv-3888
0805.4606
Community Detection using a Measure of Global Influence
<|reference_start|>Community Detection using a Measure of Global Influence: The growing popularity of online social networks has provided researchers with access to large amounts of social network data. This, coupled with ever-increasing computation speed, storage capacity and data mining capabilities, has led to a renewal of interest in automatic community detection methods. Surprisingly, there is no universally accepted definition of community. One frequently used definition states that communities ``have more and/or better-connected `internal edges' connecting members of the set than `cut edges' connecting the set to the rest of the world'' [Leskovec et al. 2008]. This definition inspired the modularity-maximization class of community detection algorithms, which look for regions of the network that have a higher than expected density of edges within them. We introduce an alternative definition which states that a community is composed of individuals who have more influence on others within the community than on those outside of it. We present a mathematical formulation of influence, define an influence-based modularity metric, and show how to use it to partition the network into communities. We evaluated our approach on the standard data sets used in the literature, and found that it often outperforms the edge-based modularity algorithm.<|reference_end|>
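The edge-based baseline that the authors compare against can be computed directly with networkx; the influence-based modularity proposed in the paper replaces edge counts with an influence measure and is not reproduced here:

import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()   # a standard benchmark network
parts = community.greedy_modularity_communities(G)
print(len(parts), community.modularity(G, parts))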
arxiv
@article{ghosh2008community, title={Community Detection using a Measure of Global Influence}, author={Rumi Ghosh and Kristina Lerman}, journal={arXiv preprint arXiv:0805.4606}, year={2008}, archivePrefix={arXiv}, eprint={0805.4606}, primaryClass={cs.CY} }
ghosh2008community
arxiv-3889
0805.4620
Uplink Macro Diversity of Limited Backhaul Cellular Network
<|reference_start|>Uplink Macro Diversity of Limited Backhaul Cellular Network: In this work, new achievable rates are derived for the uplink channel of a cellular network with joint multicell processing, where, unlike previous results, the ideal backhaul network has finite capacity per cell. Namely, the cell sites are linked to the central joint processor via lossless links with finite capacity. The cellular network is abstracted by symmetric models, which render analytical treatment plausible. For this idealistic family of models, achievable rates are presented for cell sites that use compress-and-forward schemes combined with local decoding, for both Gaussian and fading channels. The rates are given in closed form for the classical Wyner model and the soft-handover model. These rates are then demonstrated to be rather close to the optimal unlimited-backhaul joint-processing rates, already for modest backhaul capacities, supporting the potential gain offered by the joint multicell processing approach. Particular attention is also given to the low-SNR characterization of these rates, through which the effect of the limited backhaul network is explicitly revealed. In addition, the rate at which the backhaul capacity should scale in order to maintain the original high-SNR characterization of an unlimited-backhaul system is found.<|reference_end|>
arxiv
@article{sanderovich2008uplink, title={Uplink Macro Diversity of Limited Backhaul Cellular Network}, author={Amichai Sanderovich, Oren Somekh, H. Vincent Poor and Shlomo Shamai (Shitz)}, journal={arXiv preprint arXiv:0805.4620}, year={2008}, archivePrefix={arXiv}, eprint={0805.4620}, primaryClass={cs.IT math.IT} }
sanderovich2008uplink
arxiv-3890
0805.4648
On White-box Cryptography and Obfuscation
<|reference_start|>On White-box Cryptography and Obfuscation: We study the relationship between obfuscation and white-box cryptography. We capture the requirements of any white-box primitive using a \emph{White-Box Property (WBP)} and give some negative/positive results. Loosely speaking, the WBP is defined for some scheme and a security notion (we call the pair a \emph{specification}), and implies that w.r.t. the specification, an obfuscation does not leak any ``useful'' information, even though it may leak some ``useless'' non-black-box information. Our main result is a negative one - for most interesting programs, an obfuscation (under \emph{any} definition) cannot satisfy the WBP for every specification in which the program may be present. To do this, we define a \emph{Universal White-Box Property (UWBP)}, which if satisfied, would imply that under \emph{whatever} specification we conceive, the WBP is satisfied. We then show that for every non-approximately-learnable family, there exist (contrived) specifications for which the WBP (and thus, the UWBP) fails. On the positive side, we show that there exists an obfuscator for a non-approximately-learnable family that achieves the WBP for a certain specification. Furthermore, there exists an obfuscator for a non-learnable (but approximately-learnable) family that achieves the UWBP. Our results can also be viewed as formalizing the distinction between ``useful'' and ``useless'' non-black-box information.<|reference_end|>
arxiv
@article{saxena2008on, title={On White-box Cryptography and Obfuscation}, author={Amitabh Saxena and Brecht Wyseur}, journal={arXiv preprint arXiv:0805.4648}, year={2008}, archivePrefix={arXiv}, eprint={0805.4648}, primaryClass={cs.CR} }
saxena2008on
arxiv-3891
0805.4665
On Secure Distributed Implementations of Dynamic Access Control
<|reference_start|>On Secure Distributed Implementations of Dynamic Access Control: Distributed implementations of access control abound in distributed storage protocols. While such implementations are often accompanied by informal justifications of their correctness, our formal analysis reveals that their correctness can be tricky. In particular, we discover several subtleties in a standard protocol based on capabilities, that can break security under a simple specification of access control. At the same time, we show a sensible refinement of the specification for which a secure implementation of access control is possible. Our models and proofs are formalized in the applied pi calculus, following some new techniques that may be of independent interest. Finally, we indicate how our principles can be applied to securely distribute other state machines.<|reference_end|>
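To make the setting concrete, a common capability pattern has the server MAC a (resource, rights) pair and verify the token before honoring a request; the paper shows that naive variants of such schemes can break under dynamic access control. The sketch below is only the static pattern, with all names and keys illustrative:

import hmac, hashlib

KEY = b"server-secret"  # known only to the storage server

def mint_capability(resource: str, rights: str) -> bytes:
    msg = f"{resource}|{rights}".encode()
    return hmac.new(KEY, msg, hashlib.sha256).digest()

def check_capability(resource: str, rights: str, cap: bytes) -> bool:
    # Constant-time comparison to avoid timing leaks.
    return hmac.compare_digest(cap, mint_capability(resource, rights))

cap = mint_capability("/file42", "read")
print(check_capability("/file42", "read", cap))    # True
print(check_capability("/file42", "write", cap))   # False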
arxiv
@article{chaudhuri2008on, title={On Secure Distributed Implementations of Dynamic Access Control}, author={Avik Chaudhuri}, journal={arXiv preprint arXiv:0805.4665}, year={2008}, archivePrefix={arXiv}, eprint={0805.4665}, primaryClass={cs.CR cs.DC} }
chaudhuri2008on
arxiv-3892
0805.4680
Telex: Principled System Support for Write-Sharing in Collaborative Applications
<|reference_start|>Telex: Principled System Support for Write-Sharing in Collaborative Applications: The Telex system is designed for sharing mutable data in a distributed environment, particularly for collaborative applications. Users operate on their local, persistent replica of shared documents; they can work disconnected and suffer no network latency. The Telex approach to detect and correct conflicts is application independent, based on an action-constraint graph (ACG) that summarises the concurrency semantics of applications. The ACG is stored efficiently in a multilog structure that eliminates contention and is optimised for locality. Telex supports multiple applications and multi-document updates. The Telex system clearly separates system logic (which includes replication, views, undo, security, consistency, conflicts, and commitment) from application logic. An example application is a shared calendar for managing multi-user meetings; the system detects meeting conflicts and resolves them consistently.<|reference_end|>
arxiv
@article{benmouffok2008telex:, title={Telex: Principled System Support for Write-Sharing in Collaborative Applications}, author={Lamia Benmouffok (INRIA Rocquencourt, LIP6), Jean-Michel Busca (INRIA Rocquencourt, LIP6), Joan Manuel Marqu\`es (LIP6, UOC), Marc Shapiro (INRIA Rocquencourt, LIP6), Pierre Sutra (INRIA Rocquencourt, LIP6), Georgios Tsoukalas (NTUA)}, journal={arXiv preprint arXiv:0805.4680}, year={2008}, number={RR-6546}, archivePrefix={arXiv}, eprint={0805.4680}, primaryClass={cs.OS cs.DC} }
benmouffok2008telex:
arxiv-3893
0805.4718
Report on article The Travelling Salesman Problem: A Linear Programming Formulation
<|reference_start|>Report on article The Travelling Salesman Problem: A Linear Programming Formulation: This article describes a counterexample prepared in order to prove that the linear programming formulation of the TSP proposed in [arXiv:0803.4354] is incorrect (the argument also applies to the QAP formulation in [arXiv:0802.4307]). The article addresses not only the model itself, but also whether the proposed model could be extended so as to become correct.<|reference_end|>
arxiv
@article{hofman2008report, title={Report on article The Travelling Salesman Problem: A Linear Programming Formulation}, author={Radoslaw Hofman}, journal={arXiv preprint arXiv:0805.4718}, year={2008}, archivePrefix={arXiv}, eprint={0805.4718}, primaryClass={cs.CC cs.DM} }
hofman2008report
arxiv-3894
0805.4722
La fiabilit\'e des informations sur le web
<|reference_start|>La fiabilit\'e des informations sur le web: Online IR tools have to take into account new phenomena linked to the appearance of blogs, wikis and other collaborative publications. Among these collaborative sites, Wikipedia represents a crucial source of information. However, the quality of this information has recently been questioned. A better knowledge of the contributors' behaviors should help users navigate through information whose quality may vary from one source to another. In order to explore this idea, we present an analysis of the role of different types of contributors in the control of the publication of conflictual articles.<|reference_end|>
arxiv
@article{jacquemin2008la, title={La fiabilit\'e des informations sur le web}, author={Bernard Jacquemin (LIMSI), Aur\'elien Lauf (LIMSI), C\'eline Poudat (LTCI), Martine Hurault-Plantet (LIMSI), Nicolas Auray (LTCI)}, journal={Dans Actes de la Conf\'erence en Recherche d'Information et Applications CORIA 2008 - Conf\'erence en Recherche d'Information et Applications 2008, Tr\'egastel : France (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0805.4722}, primaryClass={cs.IR cs.CL cs.CY} }
jacquemin2008la
arxiv-3895
0805.4745
Pattern-based Model-to-Model Transformation: Long Version
<|reference_start|>Pattern-based Model-to-Model Transformation: Long Version: We present a new, high-level approach for the specification of model-to-model transformations based on declarative patterns. These are (atomic or composite) constraints on triple graphs declaring the allowed or forbidden relationships between source and target models. In this way, a transformation is defined by specifying a set of triple graph constraints that should be satisfied by the result of the transformation. The description of the transformation is then compiled into lower-level operational mechanisms to perform forward or backward transformations, as well as to establish mappings between two existent models. In this paper we study one of such mechanisms based on the generation of operational triple graph grammar rules. Moreover, we exploit deduction techniques at the specification level to generate more specialized constraints (preserving the specification semantics) reflecting pattern dependencies, from which additional rules can be derived. This is an extended version of the paper submitted to ICGT'08, with additional definitions and proofs.<|reference_end|>
arxiv
@article{de lara2008pattern-based, title={Pattern-based Model-to-Model Transformation: Long Version}, author={Juan de Lara and Esther Guerra}, journal={arXiv preprint arXiv:0805.4745}, year={2008}, archivePrefix={arXiv}, eprint={0805.4745}, primaryClass={cs.SE cs.DM cs.LO} }
de lara2008pattern-based
arxiv-3896
0805.4748
New Construction of 2-Generator Quasi-Twisted Codes
<|reference_start|>New Construction of 2-Generator Quasi-Twisted Codes: Quasi-twisted (QT) codes are a generalization of quasi-cyclic (QC) codes. Based on consta-cyclic simplex codes, a new explicit construction of a family of 2-generator quasi-twisted (QT) two-weight codes is presented. It is also shown that many codes in the family meet the Griesmer bound and therefore are length-optimal. New distance-optimal binary QC [195, 8, 96], [210, 8, 104] and [240, 8, 120] codes, and good ternary QC [208, 6, 135] and [221, 6, 144] codes are also obtained by the construction.<|reference_end|>
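The Griesmer bound invoked here states that a q-ary linear [n, k, d] code must satisfy n >= sum_{i=0}^{k-1} ceil(d / q^i); a code meeting it with equality is length-optimal. A small checker, applied to two of the ternary codes mentioned in the abstract:

from math import ceil

def griesmer_bound(k: int, d: int, q: int) -> int:
    """Minimum possible length n of a q-ary linear [n, k, d] code."""
    return sum(ceil(d / q**i) for i in range(k))

# Compare actual length against the bound for the ternary QC codes above.
for (n, k, d, q) in [(208, 6, 135, 3), (221, 6, 144, 3)]:
    print(f"[{n}, {k}, {d}]_{q}: Griesmer bound n >= {griesmer_bound(k, d, q)}")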
arxiv
@article{chen2008new, title={New Construction of 2-Generator Quasi-Twisted Codes}, author={Eric Z. Chen}, journal={arXiv preprint arXiv:0805.4748}, year={2008}, archivePrefix={arXiv}, eprint={0805.4748}, primaryClass={cs.IT math.IT} }
chen2008new
arxiv-3897
0805.4754
Managing conflicts between users in Wikipedia
<|reference_start|>Managing conflicts between users in Wikipedia: Wikipedia is nowadays a widely used encyclopedia, and one of the most visible sites on the Internet. Its strong principle of collaborative work and free editing sometimes generates disputes due to disagreements between users. In this article we study how the Wikipedian community resolves conflicts and which roles Wikipedians choose in this process. We observed user behavior both in the article talk pages and in the Arbitration Committee pages specifically dedicated to serious disputes. We first set up a typology of users according to their involvement in conflicts and their publishing and management activity in the encyclopedia. We then used those user types to describe users' behavior in contributing to articles that are tagged by the Wikipedian community as being in conflict with the official guidelines of Wikipedia, or, conversely, as being featured articles.<|reference_end|>
arxiv
@article{jacquemin2008managing, title={Managing conflicts between users in Wikipedia}, author={Bernard Jacquemin (LIMSI), Aur\'elien Lauf (LIMSI), C\'eline Poudat (LTCI), Martine Hurault-Plantet (LIMSI), Nicolas Auray (LTCI)}, journal={Dans BIS 2008 Workshop proceedings - 11th Conference on Business Information Systems (BIS 2008), Social Aspects of the Web Workshop (SAW 2008), Innsbruck : Autriche (2008)}, year={2008}, archivePrefix={arXiv}, eprint={0805.4754}, primaryClass={cs.IR cs.CL cs.CY cs.HC} }
jacquemin2008managing
arxiv-3898
0806.0036
On the Design of Universal LDPC Codes
<|reference_start|>On the Design of Universal LDPC Codes: Low-density parity-check (LDPC) coding for a multitude of equal-capacity channels is studied. First, based on numerous observations, a conjecture is stated: when the belief propagation decoder converges on a set of equal-capacity channels, it also converges on any convex combination of those channels. Then, it is proved that when the stability condition is satisfied for a number of channels, it is also satisfied for any channel in their convex hull. For the purpose of code design, a method is proposed which can decompose every symmetric channel with capacity C into a set of identical-capacity basis channels. We expect codes that work on the basis channels to be suitable for any channel with capacity C. Such codes are found, and in comparison with existing LDPC codes that are designed for specific channels, our codes obtain considerable coding gains when used across a multitude of channels.<|reference_end|>
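On the binary erasure channel, for instance, convergence of belief propagation is tracked by the density evolution recursion x_{t+1} = eps * lambda(1 - rho(1 - x_t)), and iterating it yields the decoding threshold of a degree distribution pair. A sketch for the standard (3,6)-regular ensemble (the ensemble choice is illustrative, not from the paper):

def bec_converges(eps, lam, rho, iters=2000, tol=1e-12):
    """Density evolution on the BEC: True if the erasure fraction
    driven by x <- eps * lam(1 - rho(1 - x)) dies out."""
    x = eps
    for _ in range(iters):
        x = eps * lam(1 - rho(1 - x))
        if x < tol:
            return True
    return False

lam = lambda x: x**2   # (3,6)-regular ensemble: lambda(x) = x^2
rho = lambda x: x**5   # rho(x) = x^5
# Bisect for the threshold eps* (about 0.4294 for this ensemble).
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if bec_converges(mid, lam, rho) else (lo, mid)
print(lo)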
arxiv
@article{sanaei2008on, title={On the Design of Universal LDPC Codes}, author={Ali Sanaei, Mahdi Ramezani, and Masoud Ardakani}, journal={arXiv preprint arXiv:0806.0036}, year={2008}, doi={10.1109/ISIT.2008.4595097}, archivePrefix={arXiv}, eprint={0806.0036}, primaryClass={cs.IT math.IT} }
sanaei2008on
arxiv-3899
0806.0043
Dejean's conjecture holds for n >= 30
<|reference_start|>Dejean's conjecture holds for n >= 30: We extend Carpi's results by showing that Dejean's conjecture holds for n >= 30.<|reference_end|>
arxiv
@article{currie2008dejean's, title={Dejean's conjecture holds for n >= 30}, author={James Currie and Narad Rampersad}, journal={arXiv preprint arXiv:0806.0043}, year={2008}, archivePrefix={arXiv}, eprint={0806.0043}, primaryClass={math.CO cs.FL} }
currie2008dejean's
arxiv-3900
0806.0075
An Experimental Investigation of XML Compression Tools
<|reference_start|>An Experimental Investigation of XML Compression Tools: This paper presents an extensive experimental study of the state of the art in XML compression tools. The study reports the behavior of nine XML compressors using a large corpus of XML documents which covers the different natures and scales of XML documents. In addition to assessing and comparing the performance characteristics of the evaluated XML compression tools, the study tries to assess the effectiveness and practicality of using these tools in the real world. Finally, we provide some guidelines and recommendations which are useful for helping developers and users make an effective decision when selecting the most suitable XML compression tool for their needs.<|reference_end|>
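The flavor of such a benchmark is easy to reproduce for general-purpose compressors using only the Python standard library; the XML-conscious tools evaluated in the study require their own binaries, and the input file name below is a placeholder:

import bz2, gzip, lzma, time

def benchmark(data: bytes):
    """Compression ratio and time for three general-purpose codecs."""
    for name, codec in [("gzip", gzip), ("bz2", bz2), ("lzma", lzma)]:
        t0 = time.perf_counter()
        out = codec.compress(data)
        dt = time.perf_counter() - t0
        print(f"{name:5s}  ratio={len(data)/len(out):6.2f}  time={dt:.3f}s")

with open("corpus.xml", "rb") as f:   # placeholder XML document
    benchmark(f.read())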
arxiv
@article{sakr2008an, title={An Experimental Investigation of XML Compression Tools}, author={Sherif Sakr}, journal={arXiv preprint arXiv:0806.0075}, year={2008}, archivePrefix={arXiv}, eprint={0806.0075}, primaryClass={cs.DB} }
sakr2008an