Columns: corpus_id, paper_id, title, abstract, source, bibtex, citation_key
arxiv-901
0708.0271
Capacity Region of the Finite-State Multiple Access Channel with and without Feedback
<|reference_start|>Capacity Region of the Finite-State Multiple Access Channel with and without Feedback: The capacity region of the Finite-State Multiple Access Channel (FS-MAC) with feedback that may be an arbitrary time-invariant function of the channel output samples is considered. We characterize both an inner and an outer bound for this region, using Massey's directed information. These bounds are shown to coincide, and hence yield the capacity region, for FS-MACs where the state process is stationary and ergodic and not affected by the inputs. Though `multi-letter' in general, our results yield explicit conclusions when applied to specific scenarios of interest. E.g., our results allow us to: - Identify a large class of FS-MACs, including the additive mod-2 noise MAC where the noise may have memory, for which feedback does not enlarge the capacity region. - Deduce that, for a general FS-MAC with states that are not affected by the input, if the capacity (region) without feedback is zero, then so is the capacity (region) with feedback. - Deduce that, for a MAC that can be decomposed into a `multiplexer' concatenated with a point-to-point channel (with, without, or with partial feedback), the capacity region is given by $\sum_{m} R_m \leq C$, where C is the capacity of the point-to-point channel and m indexes the encoders. Moreover, we show that for this family of channels source-channel coding separation holds.<|reference_end|>
arxiv
@article{permuter2007capacity, title={Capacity Region of the Finite-State Multiple Access Channel with and without Feedback}, author={Haim Permuter and Tsachy Weissman}, journal={arXiv preprint arXiv:0708.0271}, year={2007}, archivePrefix={arXiv}, eprint={0708.0271}, primaryClass={cs.IT math.IT} }
permuter2007capacity
arxiv-902
0708.0353
The Local Fractal Properties of the Financial Time Series on the Polish Stock Exchange Market
<|reference_start|>The Local Fractal Properties of the Financial Time Series on the Polish Stock Exchange Market: We investigate the local fractal properties of the financial time series based on the evolution of the Warsaw Stock Exchange Index (WIG), connected with the largest developing financial market in Europe. Calculating the local Hurst exponent for the WIG time series, we find an interesting dependence between the behavior of the local fractal properties of the WIG time series and the appearance of crashes on the financial market.<|reference_end|>
arxiv
@article{grech2007the, title={The Local Fractal Properties of the Financial Time Series on the Polish Stock Exchange Market}, author={D. Grech and G. Pamu{\l}a (University of Wroclaw, ITP)}, journal={arXiv preprint arXiv:0708.0353}, year={2007}, archivePrefix={arXiv}, eprint={0708.0353}, primaryClass={q-fin.ST cs.CE physics.data-an} }
grech2007the
arxiv-903
0708.0361
Why the relational data model can be considered as a formal basis for group operations in object-oriented systems
<|reference_start|>Why the relational data model can be considered as a formal basis for group operations in object-oriented systems: The relational data model defines a specification of a type "relation". However, its simplicity does not mean that a system implementing this model must operate with structures of the same simplicity. We consider two principles that allow one to create a system combining the object-oriented paradigm (OOP) and the relational data model (RDM) in one framework. The first principle -- "complex data in encapsulated domains" -- is well known from The Third Manifesto by Date and Darwen. The second principle -- "data complexity in names" -- is the basis for a system in which data are described as complex objects and uniquely represented as a set of relations. The names of these relations and of their attributes are combinations of names entered in the specifications of the complex objects. Below, we consider the main properties of such a system.<|reference_end|>
arxiv
@article{grigoriev2007why, title={Why the relational data model can be considered as a formal basis for group operations in object-oriented systems}, author={Evgeniy Grigoriev}, journal={arXiv preprint arXiv:0708.0361}, year={2007}, archivePrefix={arXiv}, eprint={0708.0361}, primaryClass={cs.DB} }
grigoriev2007why
arxiv-904
0708.0386
Diversity of MIMO Multihop Relay Channels
<|reference_start|>Diversity of MIMO Multihop Relay Channels: We consider slow fading relay channels with a single multi-antenna source-destination terminal pair. The source signal arrives at the destination via N hops through N-1 layers of relays. We analyze the diversity of such channels with fixed network size at high SNR. In the clustered case where the relays within the same layer can have full cooperation, the cooperative decode-and-forward (DF) scheme is shown to be optimal in terms of the diversity-multiplexing tradeoff (DMT). The upper bound on the DMT, the cut-set bound, is attained. In the non-clustered case, we show that the naive amplify-and-forward (AF) scheme has the maximum multiplexing gain of the channel but is suboptimal in diversity, as compared to the cut-set bound. To improve the diversity, space-time relay processing is introduced through the parallel partition of the multihop channel. The idea is to let the source signal go through K different "AF paths" in the multihop channel. This parallel AF scheme creates a parallel channel in the time domain and has the maximum diversity if the partition is properly designed. Since this scheme does not achieve the maximum multiplexing gain in general, we propose a flip-and-forward (FF) scheme that is built from the parallel AF scheme. It is shown that the FF scheme achieves both the maximum diversity and multiplexing gains in a distributed multihop channel of arbitrary size. In order to realize the DMT promised by the relaying strategies, approximately universal coding schemes are also proposed.<|reference_end|>
arxiv
@article{yang2007diversity, title={Diversity of MIMO Multihop Relay Channels}, author={Sheng Yang and Jean-Claude Belfiore}, journal={arXiv preprint arXiv:0708.0386}, year={2007}, archivePrefix={arXiv}, eprint={0708.0386}, primaryClass={cs.IT math.IT} }
yang2007diversity
arxiv-905
0708.0495
Virtual Manufacturing : Tools for improving Design and Production
<|reference_start|>Virtual Manufacturing : Tools for improving Design and Production: The research area "Virtual Manufacturing" can be defined as an integrated manufacturing environment which can enhance one or several levels of decision and control in manufacturing process. Several domains can be addressed: Product and Process Design, Process and Production Planning, Machine Tool, Robot and Manufacturing System. As automation technologies such as CAD/CAM have substantially shortened the time required to design products, Virtual Manufacturing will have a similar effect on the manufacturing phase thanks to the modelling, simulation and optimisation of the product and the processes involved in its fabrication.<|reference_end|>
arxiv
@article{dépincé2007virtual, title={Virtual Manufacturing : Tools for improving Design and Production}, author={Philippe D\'epinc\'e (IRCCyN) and Damien Chablat (IRCCyN) and Peer-Oliver Woelk (IFW)}, journal={In International Design Seminar - CIRP International Design Seminar, Cairo, Egypt (2004)}, year={2007}, archivePrefix={arXiv}, eprint={0708.0495}, primaryClass={cs.RO physics.class-ph} }
dépincé2007virtual
arxiv-906
0708.0505
A preliminary analysis on metaheuristics methods applied to the Haplotype Inference Problem
<|reference_start|>A preliminary analysis on metaheuristics methods applied to the Haplotype Inference Problem: Haplotype Inference is a challenging problem in bioinformatics that consists in inferring the basic genetic constitution of diploid organisms on the basis of their genotype. This information allows researchers to perform association studies for the genetic variants involved in diseases and the individual responses to therapeutic agents. A notable approach to the problem is to encode it as a combinatorial problem (under certain hypotheses, such as the pure parsimony criterion) and to solve it using off-the-shelf combinatorial optimization techniques. The main methods applied to Haplotype Inference are either simple greedy heuristics or exact methods (Integer Linear Programming, Semidefinite Programming, SAT encoding) that, at present, are adequate only for moderate size instances. We believe that metaheuristic and hybrid approaches could provide better scalability. Moreover, metaheuristics can be very easily combined with problem-specific heuristics and can also be integrated with tree-based search techniques, thus providing a promising framework for hybrid systems in which a good trade-off between effectiveness and efficiency can be reached. In this paper we illustrate a feasibility study of the approach and discuss some relevant design issues, such as modeling and the design of approximate solvers that combine constructive heuristics, local search-based improvement strategies and learning mechanisms. Besides the relevance of the Haplotype Inference problem itself, this preliminary analysis is also an interesting case study because the formulation of the problem poses some challenges in modeling and hybrid metaheuristic solver design that can be generalized to other problems.<|reference_end|>
arxiv
@article{digaspero2007a, title={A preliminary analysis on metaheuristics methods applied to the Haplotype Inference Problem}, author={Luca Di Gaspero and Andrea Roli}, journal={arXiv preprint arXiv:0708.0505}, year={2007}, number={DEIS-LIA-006-07}, archivePrefix={arXiv}, eprint={0708.0505}, primaryClass={cs.AI cs.CE cs.DM q-bio.QM} }
digaspero2007a
arxiv-907
0708.0522
Quasi-stationary distributions as centrality measures of reducible graphs
<|reference_start|>Quasi-stationary distributions as centrality measures of reducible graphs: A random walk can be used as a centrality measure of a directed graph. However, if the graph is reducible the random walk will be absorbed in some subset of nodes and will never visit the rest of the graph. In Google PageRank the problem was solved by the introduction of uniform random jumps with some probability. Up to the present, there is no clear criterion for the choice of this parameter. We propose to use a parameter-free centrality measure based on the notion of quasi-stationary distribution. Specifically, we suggest four quasi-stationary based centrality measures, analyze them, and conclude that they produce approximately the same ranking. The new centrality measures can be applied in spam detection to detect ``link farms'' and in image search to find photo albums.<|reference_end|>
arxiv
@article{avrachenkov2007quasi-stationary, title={Quasi-stationary distributions as centrality measures of reducible graphs}, author={Konstantin Avrachenkov (INRIA Sophia Antipolis) and Vivek Borkar and Danil Nemirovsky (INRIA Sophia Antipolis)}, journal={arXiv preprint arXiv:0708.0522}, year={2007}, archivePrefix={arXiv}, eprint={0708.0522}, primaryClass={cs.NI} }
avrachenkov2007quasi-stationary
arxiv-908
0708.0580
Efficient Divide-and-Conquer Implementations Of Symmetric FSAs
<|reference_start|>Efficient Divide-and-Conquer Implementations Of Symmetric FSAs: A deterministic finite-state automaton (FSA) is an abstract sequential machine that reads the symbols comprising an input word one at a time. An FSA is symmetric if its output is independent of the order in which the input symbols are read, i.e., if the output is invariant under permutations of the input. We show how to convert a symmetric FSA A into an automaton-like divide-and-conquer process whose intermediate results are no larger than the size of A's memory. In comparison, a similar result for general FSAs has long been known via functional composition, but entails an exponential increase in memory size. The new result has applications to parallel processing and symmetric FSA networks.<|reference_end|>
arxiv
@article{pritchard2007efficient, title={Efficient Divide-and-Conquer Implementations Of Symmetric FSAs}, author={David Pritchard}, journal={Journal of Cellular Automata 5(6) (special issue for Automata 2007, H. Fuks & A. T. Lawniczak, eds), pages 481-490, 2010}, year={2007}, archivePrefix={arXiv}, eprint={0708.0580}, primaryClass={cs.FL cs.DM} }
pritchard2007efficient
arxiv-909
0708.0598
An Application of Chromatic Prototypes
<|reference_start|>An Application of Chromatic Prototypes: This paper has been withdrawn.<|reference_end|>
arxiv
@article{mccool2007an, title={An Application of Chromatic Prototypes}, author={Matthew McCool}, journal={arXiv preprint arXiv:0708.0598}, year={2007}, archivePrefix={arXiv}, eprint={0708.0598}, primaryClass={cs.HC cs.MM} }
mccool2007an
arxiv-910
0708.0600
Complementary algorithms for graphs and percolation
<|reference_start|>Complementary algorithms for graphs and percolation: A pair of complementary algorithms are presented. One of the pair is a fast method for connecting graphs with an edge. The other is a fast method for removing edges from a graph. Both algorithms employ the same tree based graph representation and so, in concert, can arbitrarily modify any graph. Since the clusters of a percolation model may be described as simple connected graphs, an efficient Monte Carlo scheme can be constructed that uses the algorithms to sweep the occupation probability back and forth between two turning points. This approach concentrates computational sampling time within a region of interest. A high precision value of pc = 0.59274603(9) was thus obtained, by Mersenne twister, for the two dimensional square site percolation threshold.<|reference_end|>
arxiv
@article{lee2007complementary, title={Complementary algorithms for graphs and percolation}, author={Michael J. Lee}, journal={arXiv preprint arXiv:0708.0600}, year={2007}, doi={10.1103/PhysRevE.76.027702}, archivePrefix={arXiv}, eprint={0708.0600}, primaryClass={cs.DS} }
lee2007complementary
arxiv-911
0708.0603
Public Cluster : parallel machine with multi-block approach
<|reference_start|>Public Cluster : parallel machine with multi-block approach: We introduce a new approach to enable an open and public parallel machine that is accessible to multiple users with multiple jobs, belonging to different blocks, running at the same time. The concept is required especially for parallel machines dedicated to public use, as implemented at the LIPI Public Cluster. We have deployed the simplest technique of running multiple daemons of the parallel processing engine with different configuration files specified for each user assigned to access the system, and have also developed an integrated system to fully control and monitor the whole system over the web. A brief performance analysis is also given for the Message Passing Interface (MPI) engine. It is shown that the proposed approach is quite reliable and affects the overall performance only slightly.<|reference_end|>
arxiv
@article{akbar2007public, title={Public Cluster : parallel machine with multi-block approach}, author={Z. Akbar and Slamet and B. I. Ajinagoro and G.I. Ohara and I. Firmansyah and B. Hermanto and L.T. Handoko}, journal={arXiv preprint arXiv:0708.0603}, year={2007}, number={FISIKALIPI-07002}, archivePrefix={arXiv}, eprint={0708.0603}, primaryClass={cs.DC cs.CY} }
akbar2007public
arxiv-912
0708.0604
Introducing OPTO : Portal for Optical Communities in Indonesia
<|reference_start|>Introducing OPTO : Portal for Optical Communities in Indonesia: On January 1, 2005 we launched the "OPTO" Portal, a website dedicated to the optical communities in Indonesia. The address of this portal is http://www.opto.lipi.go.id; it is managed on a self-supporting basis and not for commercial purposes. Our aims in launching this portal are to use the Internet to increase the communities' scientific activity; to provide an online reference in the Indonesian language for optics-based science and technology subjects; and to pioneer the communities' online activities with real impacts and benefits for our society. We describe in this paper the features of the portal that can be utilized by all individuals or members of optical communities to store and share information and to build networks or partnerships. We realize that the portal is still not popular and that most of our aims have not yet been reached. This conference should be a good place for all of us to collaborate in properly utilizing the portal for the benefit of the optical communities in Indonesia and our society at large.<|reference_end|>
arxiv
@article{waluyo2007introducing, title={Introducing OPTO : Portal for Optical Communities in Indonesia}, author={T.B. Waluyo and L.T. Handoko}, journal={arXiv preprint arXiv:0708.0604}, year={2007}, number={FISIKALIPI-07006}, archivePrefix={arXiv}, eprint={0708.0604}, primaryClass={cs.CY} }
waluyo2007introducing
arxiv-913
0708.0605
Open and Free Cluster for Public
<|reference_start|>Open and Free Cluster for Public: We introduce the LIPI Public Cluster, the first parallel machine facility fully open to the public, free of charge, in Indonesia and the surrounding countries. In this paper, we focus on explaining our new concept of an open cluster, and how to realize and manage it to meet users' needs. We show that after two years of trial running and several upgrades, the Public Cluster performs well and is able to fulfil all requirements as expected.<|reference_end|>
arxiv
@article{akbar2007open, title={Open and Free Cluster for Public}, author={Z. Akbar and Slamet and B. I. Ajinagoro and G.I. Ohara and I. Firmansyah and B. Hermanto and L.T. Handoko}, journal={arXiv preprint arXiv:0708.0605}, year={2007}, number={FISIKALIPI-07003}, archivePrefix={arXiv}, eprint={0708.0605}, primaryClass={cs.DC cs.CY} }
akbar2007open
arxiv-914
0708.0607
Real-time control and monitoring system for LIPI's Public Cluster
<|reference_start|>Real-time control and monitoring system for LIPI's Public Cluster: We have developed a monitoring and control system for LIPI's Public Cluster. The system consists of microcontrollers and fully web-based user interfaces for daily operation. It is argued that, due to its special nature, the cluster requires a fully dedicated, self-developed control and monitoring system. We discuss the implementation using the parallel port and a dedicated microcontroller for this purpose. We also show that integrating such systems enables an autonomous control system based on real-time monitoring, for instance an autonomous power supply control based on the actual temperature, etc.<|reference_end|>
arxiv
@article{firmansyah2007real-time, title={Real-time control and monitoring system for LIPI's Public Cluster}, author={I. Firmansyah and B. Hermanto and Hadiyanto and L.T. Handoko}, journal={arXiv preprint arXiv:0708.0607}, year={2007}, number={FISIKALIPI-07004}, archivePrefix={arXiv}, eprint={0708.0607}, primaryClass={cs.DC cs.RO} }
firmansyah2007real-time
arxiv-915
0708.0608
Resource Allocation in Public Cluster with Extended Optimization Algorithm
<|reference_start|>Resource Allocation in Public Cluster with Extended Optimization Algorithm: We introduce an optimization algorithm for resource allocation in the LIPI Public Cluster to optimize its usage according to incoming requests from users. The tool is an extended and modified genetic algorithm developed to match the specific nature of a public cluster. We present a detailed analysis of the optimization and compare the results with the exact calculation. We show that it would be very useful and could realize an automatic decision-making system for public clusters.<|reference_end|>
arxiv
@article{akbar2007resource, title={Resource Allocation in Public Cluster with Extended Optimization Algorithm}, author={Z. Akbar and L.T. Handoko}, journal={arXiv preprint arXiv:0708.0608}, year={2007}, number={FISIKALIPI-07005}, archivePrefix={arXiv}, eprint={0708.0608}, primaryClass={cs.DC} }
akbar2007resource
arxiv-916
0708.0624
ADS-Directory Services for Mobile Ad-Hoc Networks Based on an Information Market Model
<|reference_start|>ADS-Directory Services for Mobile Ad-Hoc Networks Based on an Information Market Model: Ubiquitous computing based on small mobile devices using wireless communication links is becoming very attractive. The computational power and storage capacities provided allow the execution of sophisticated applications. Due to the fact that sharing of information is a central problem for distributed applications, the development of self organizing middleware services providing high level interfaces for information managing is essential. ADS is a directory service for mobile ad-hoc networks dealing with local and nearby information as well as providing access to distant information. The approach discussed throughout this paper is based upon the concept of information markets.<|reference_end|>
arxiv
@article{hutter2007ads-directory, title={ADS-Directory Services for Mobile Ad-Hoc Networks Based on an Information Market Model}, author={Christian Hutter and Matthias R. Brust and Steffen Rothkugel}, journal={arXiv preprint arXiv:0708.0624}, year={2007}, archivePrefix={arXiv}, eprint={0708.0624}, primaryClass={cs.NI cs.DC} }
hutter2007ads-directory
arxiv-917
0708.0627
ADS as Information Management Service in an M-Learning Environment
<|reference_start|>ADS as Information Management Service in an M-Learning Environment: Leveraging the potential power of even small handheld devices able to communicate wirelessly requires dedicated support. In particular, collaborative applications need sophisticated assistance in terms of querying and exchanging different kinds of data. Using a concrete example from the domain of mobile learning, the general need for information dissemination is motivated. Subsequently, and driven by infrastructural conditions, realization strategies of an appropriate middleware service are discussed.<|reference_end|>
arxiv
@article{brust2007ads, title={ADS as Information Management Service in an M-Learning Environment}, author={Matthias R. Brust and Daniel Goergen and Christian Hutter and Steffen Rothkugel}, journal={arXiv preprint arXiv:0708.0627}, year={2007}, archivePrefix={arXiv}, eprint={0708.0627}, primaryClass={cs.NI cs.DC} }
brust2007ads
arxiv-918
0708.0648
Auction-Based Distributed Resource Allocation for Cooperation Transmission in Wireless Networks
<|reference_start|>Auction-Based Distributed Resource Allocation for Cooperation Transmission in Wireless Networks: Cooperative transmission can greatly improve communication system performance by taking advantage of the broadcast nature of wireless channels. Most previous work on resource allocation for cooperation transmission is based on centralized control. In this paper, we propose two share auction mechanisms, the SNR auction and the power auction, to distributively coordinate the resource allocation among users. We prove the existence, uniqueness and effectiveness of the auction results. In particular, the SNR auction leads to a fair resource allocation among users, and the power auction achieves a solution that is close to the efficient allocation.<|reference_end|>
arxiv
@article{huang2007auction-based, title={Auction-Based Distributed Resource Allocation for Cooperation Transmission in Wireless Networks}, author={Jianwei Huang and Zhu Han and Mung Chiang and H. Vincent Poor}, journal={arXiv preprint arXiv:0708.0648}, year={2007}, doi={10.1109/GLOCOM.2007.912}, archivePrefix={arXiv}, eprint={0708.0648}, primaryClass={cs.IT math.IT} }
huang2007auction-based
arxiv-919
0708.0654
Structure or Noise?
<|reference_start|>Structure or Noise?: We show how rate-distortion theory provides a mechanism for automated theory building by naturally distinguishing between regularity and randomness. We start from the simple principle that model variables should, as much as possible, render the future and past conditionally independent. From this, we construct an objective function for model making whose extrema embody the trade-off between a model's structural complexity and its predictive power. The solutions correspond to a hierarchy of models that, at each level of complexity, achieve optimal predictive power at minimal cost. In the limit of maximal prediction the resulting optimal model identifies a process's intrinsic organization by extracting the underlying causal states. In this limit, the model's complexity is given by the statistical complexity, which is known to be minimal for achieving maximum prediction. Examples show how theory building can profit from analyzing a process's causal compressibility, which is reflected in the optimal models' rate-distortion curve--the process's characteristic for optimally balancing structure and noise at different levels of representation.<|reference_end|>
arxiv
@article{still2007structure, title={Structure or Noise?}, author={Susanne Still and James P. Crutchfield}, journal={arXiv preprint arXiv:0708.0654}, year={2007}, archivePrefix={arXiv}, eprint={0708.0654}, primaryClass={physics.data-an cond-mat.stat-mech cs.IT cs.LG math-ph math.IT math.MP math.ST nlin.CD stat.TH} }
still2007structure
arxiv-920
0708.0660
Network synchronizability analysis: the theory of subgraphs and complementary graphs
<|reference_start|>Network synchronizability analysis: the theory of subgraphs and complementary graphs: In this paper, subgraphs and complementary graphs are used to analyze the network synchronizability. Some sharp and attainable bounds are provided for the eigenratio of the network structural matrix, which characterizes the network synchronizability, especially when the network's corresponding graph has cycles, chains, bipartite graphs or product graphs as its subgraphs.<|reference_end|>
arxiv
@article{duan2007network, title={Network synchronizability analysis: the theory of subgraphs and complementary graphs}, author={Zhisheng Duan and Chao Liu and Guanrong Chen}, journal={arXiv preprint arXiv:0708.0660}, year={2007}, doi={10.1016/j.physd.2007.12.003}, archivePrefix={arXiv}, eprint={0708.0660}, primaryClass={cs.NI cs.GR} }
duan2007network
arxiv-921
0708.0694
Reconstruction of Protein-Protein Interaction Pathways by Mining Subject-Verb-Objects Intermediates
<|reference_start|>Reconstruction of Protein-Protein Interaction Pathways by Mining Subject-Verb-Objects Intermediates: The exponential increase in the publication rate of new articles is limiting researchers' access to relevant literature. This has prompted the use of text mining tools to extract key biological information. Previous studies have reported extensive modification of existing generic text processors to process biological text. However, this requirement for modification has not been examined. In this study, we have constructed Muscorian, using MontyLingua, a generic text processor. It uses a previously proposed two-layered generalization-specialization paradigm in which text is generically processed to a suitable intermediate format before domain-specific data extraction techniques are applied at the specialization layer. Evaluation using a corpus and experts indicated 86-90% precision and approximately 30% recall in extracting protein-protein interactions, which is comparable to previous studies using either specialized biological text processing tools or modified existing tools. Our study also demonstrated the flexibility of the two-layered generalization-specialization paradigm by using the same generalization layer for two specialized information extraction tasks.<|reference_end|>
arxiv
@article{ling2007reconstruction, title={Reconstruction of Protein-Protein Interaction Pathways by Mining Subject-Verb-Objects Intermediates}, author={Maurice HT Ling and Christophe Lefevre and Kevin R. Nicholas and Feng Lin}, journal={Ling, Maurice HT, Lefevre, Christophe, Nicholas, Kevin R, Lin, Feng. 2007. In J.C. Ragapakse, B. Schmidt, and G. Volkert (Eds.), PRIB 2007. Lecture Notes in Bioinformatics 4774: 286-299. Springer-Verlag.}, year={2007}, archivePrefix={arXiv}, eprint={0708.0694}, primaryClass={cs.IR cs.CL cs.DL} }
ling2007reconstruction
arxiv-922
0708.0712
Virtual Environments for Training: From Individual Learning to Collaboration with Humanoids
<|reference_start|>Virtual Environments for Training: From Individual Learning to Collaboration with Humanoids: The next generation of virtual environments for training is oriented towards collaborative aspects. Therefore, we have decided to enhance our platform for virtual training environments, adding collaboration opportunities and integrating humanoids. In this paper we put forward a model of humanoid that suits both virtual humans and representations of real users, according to collaborative training activities. We suggest adaptations to the scenario model of our platform making it possible to write collaborative procedures. We introduce a mechanism of action selection made up of a global repartition and an individual choice. These models are currently being integrated and validated in GVT, a virtual training tool for the maintenance of military equipment, developed in collaboration with the French company NEXTER-Group.<|reference_end|>
arxiv
@article{gerbaud2007virtual, title={Virtual Environments for Training: From Individual Learning to Collaboration with Humanoids}, author={St\'ephanie Gerbaud (IRISA) and Nicolas Mollet (IRISA) and Bruno Arnaldi (IRISA)}, journal={In Edutainment (2007)}, year={2007}, archivePrefix={arXiv}, eprint={0708.0712}, primaryClass={cs.GR} }
gerbaud2007virtual
arxiv-923
0708.0713
Edit and verify
<|reference_start|>Edit and verify: Automated theorem provers are used in extended static checking, where they are the performance bottleneck. Extended static checkers are typically run after incremental changes to the code. We propose to exploit this usage pattern to improve performance. We present two approaches for doing so, and a full solution.<|reference_end|>
arxiv
@article{grigore2007edit, title={Edit and verify}, author={Radu Grigore and Micha{\l} Moskal}, journal={arXiv preprint arXiv:0708.0713}, year={2007}, archivePrefix={arXiv}, eprint={0708.0713}, primaryClass={cs.LO} }
grigore2007edit
arxiv-924
0708.0741
Characterising Web Site Link Structure
<|reference_start|>Characterising Web Site Link Structure: The topological structures of the Internet and the Web have received considerable attention. However, there has been little research on the topological properties of individual web sites. In this paper, we consider whether web sites (as opposed to the entire Web) exhibit structural similarities. To do so, we exhaustively crawled 18 web sites as diverse as governmental departments, commercial companies and university departments in different countries. These web sites consisted of as few as a few thousand pages to millions of pages. Statistical analysis of these 18 sites revealed that the internal link structures of the web sites are significantly different when measured with first- and second-order topological properties, i.e. properties based on the connectivity of an individual node or a pair of nodes. However, examination of a third-order topological property that considers the connectivity between three nodes forming a triangle revealed a strong correspondence across web sites, suggestive of an invariant. Comparison with the Web, the AS Internet, and a citation network showed that this third-order property is not shared across other types of networks. Nor is the property exhibited in generative network models such as that of Barabasi and Albert.<|reference_end|>
arxiv
@article{zhou2007characterising, title={Characterising Web Site Link Structure}, author={Shi Zhou and Ingemar Cox and Vaclav Petricek}, journal={arXiv preprint arXiv:0708.0741}, year={2007}, doi={10.1109/WSE.2007.4380247}, archivePrefix={arXiv}, eprint={0708.0741}, primaryClass={cs.IR} }
zhou2007characterising
arxiv-925
0708.0805
Cooperative Beamforming for Wireless Ad Hoc Networks
<|reference_start|>Cooperative Beamforming for Wireless Ad Hoc Networks: Via collaborative beamforming, nodes in a wireless network are able to transmit a common message over long distances in an energy efficient fashion. However, the process of making available the same message to all collaborating nodes introduces delays. In this paper, a MAC-PHY cross-layer scheme is proposed that enables collaborative beamforming at significantly reduced collaboration overhead. It consists of two phases. In the first phase, nodes transmit locally in a random access time-slotted fashion. Simultaneous transmissions from multiple source nodes are viewed as linear mixtures of all transmitted packets. In the second phase, a set of collaborating nodes, acting as a distributed antenna system, beamform the received analog waveform to one or more faraway destinations. This step requires multiplication of the received analog waveform by a complex weight, which is independently computed by each cooperating node, and which allows packets bound to the same destination to add coherently at the destination node. Assuming that each node has access to location information, the proposed scheme can achieve high throughput, which in certain cases exceeds one. An analysis of the symbol error probability corresponding to the proposed scheme is provided.<|reference_end|>
arxiv
@article{dong2007cooperative, title={Cooperative Beamforming for Wireless Ad Hoc Networks}, author={Lun Dong and Athina P. Petropulu and H. Vincent Poor}, journal={arXiv preprint arXiv:0708.0805}, year={2007}, doi={10.1109/GLOCOM.2007.560}, archivePrefix={arXiv}, eprint={0708.0805}, primaryClass={cs.IT math.IT} }
dong2007cooperative
arxiv-926
0708.0846
Cooperative game theory and the Gaussian interference channel
<|reference_start|>Cooperative game theory and the Gaussian interference channel: In this paper we discuss the use of cooperative game theory for analyzing interference channels. We extend our previous work, to games with N players as well as frequency selective channels and joint TDM/FDM strategies. We show that the Nash bargaining solution can be computed using convex optimization techniques. We also show that the same results are applicable to interference channels where only statistical knowledge of the channel is available. Moreover, for the special case of a two-player $2\times K$ frequency selective channel (with K frequency bins) we provide an $O(K \log_2 K)$ complexity algorithm for computing the Nash bargaining solution under mask constraint and using joint FDM/TDM strategies. Simulation results are also provided.<|reference_end|>
arxiv
@article{leshem2007cooperative, title={Cooperative game theory and the Gaussian interference channel}, author={Amir Leshem and Ephi Zehavi}, journal={arXiv preprint arXiv:0708.0846}, year={2007}, archivePrefix={arXiv}, eprint={0708.0846}, primaryClass={cs.IT cs.GT math.IT} }
leshem2007cooperative
arxiv-927
0708.0850
Relations between random coding exponents and the statistical physics of random codes
<|reference_start|>Relations between random coding exponents and the statistical physics of random codes: The partition function pertaining to finite--temperature decoding of a (typical) randomly chosen code is known to have three types of behavior, corresponding to three phases in the plane of rate vs. temperature: the {\it ferromagnetic phase}, corresponding to correct decoding, the {\it paramagnetic phase}, of complete disorder, which is dominated by exponentially many incorrect codewords, and the {\it glassy phase} (or the condensed phase), where the system is frozen at minimum energy and dominated by subexponentially many incorrect codewords. We show that the statistical physics associated with the two latter phases are intimately related to random coding exponents. In particular, the exponent associated with the probability of correct decoding at rates above capacity is directly related to the free energy in the glassy phase, and the exponent associated with probability of error (the error exponent) at rates below capacity, is strongly related to the free energy in the paramagnetic phase. In fact, we derive alternative expressions of these exponents in terms of the corresponding free energies, and make an attempt to obtain some insights from these expressions. Finally, as a side result, we also compare the phase diagram associated with a simple finite-temperature universal decoder for discrete memoryless channels, to that of the finite--temperature decoder that is aware of the channel statistics.<|reference_end|>
arxiv
@article{merhav2007relations, title={Relations between random coding exponents and the statistical physics of random codes}, author={Neri Merhav}, journal={arXiv preprint arXiv:0708.0850}, year={2007}, archivePrefix={arXiv}, eprint={0708.0850}, primaryClass={cs.IT math.IT} }
merhav2007relations
arxiv-928
0708.0877
A Portal Analysis for the Design of a Collaborative Research Environment for Students and Supervisors (CRESS) within the CSCR Domain
<|reference_start|>A Portal Analysis for the Design of a Collaborative Research Environment for Students and Supervisors (CRESS) within the CSCR Domain: In a previous paper the CSCR domain was defined. Here this is taken to the next stage where we consider the design of a particular Collaborative Research Environment to support Students and Supervisors (CRESS). Following the CSCR structure, a preliminary design for CRESS has been established and a portal framework analysis is undertaken in order to determine the most appropriate set of tools for its implementation.<|reference_end|>
arxiv
@article{hinze-hoare2007a, title={A Portal Analysis for the Design of a Collaborative Research Environment for Students and Supervisors (CRESS) within the CSCR Domain}, author={V. Hinze-Hoare}, journal={arXiv preprint arXiv:0708.0877}, year={2007}, archivePrefix={arXiv}, eprint={0708.0877}, primaryClass={cs.HC} }
hinze-hoare2007a
arxiv-929
0708.0905
Permutation Decoding and the Stopping Redundancy Hierarchy of Cyclic and Extended Cyclic Codes
<|reference_start|>Permutation Decoding and the Stopping Redundancy Hierarchy of Cyclic and Extended Cyclic Codes: We introduce the notion of the stopping redundancy hierarchy of a linear block code as a measure of the trade-off between performance and complexity of iterative decoding for the binary erasure channel. We derive lower and upper bounds for the stopping redundancy hierarchy via Lovasz's Local Lemma and Bonferroni-type inequalities, and specialize them for codes with cyclic parity-check matrices. Based on the observed properties of parity-check matrices with good stopping redundancy characteristics, we develop a novel decoding technique, termed automorphism group decoding, that combines iterative message passing and permutation decoding. We also present bounds on the smallest number of permutations of an automorphism group decoder needed to correct any set of erasures up to a prescribed size. Simulation results demonstrate that for a large number of algebraic codes, the performance of the new decoding method is close to that of maximum likelihood decoding.<|reference_end|>
arxiv
@article{hehn2007permutation, title={Permutation Decoding and the Stopping Redundancy Hierarchy of Cyclic and Extended Cyclic Codes}, author={Thorsten Hehn and Olgica Milenkovic and Stefan Laendner and Johannes B. Huber}, journal={arXiv preprint arXiv:0708.0905}, year={2007}, doi={10.1109/TIT.2008.2006456}, archivePrefix={arXiv}, eprint={0708.0905}, primaryClass={cs.IT math.IT} }
hehn2007permutation
arxiv-930
0708.0909
On the Self-stabilization of Mobile Robots in Graphs
<|reference_start|>On the Self-stabilization of Mobile Robots in Graphs: Self-stabilization is a versatile technique to withstand any transient fault in a distributed system. Mobile robots (or agents) are one of the emerging trends in distributed computing as they mimic autonomous biologic entities. The contribution of this paper is threefold. First, we present a new model for studying mobile entities in networks subject to transient faults. Our model differs from the classical robot model because robots have constraints about the paths they are allowed to follow, and from the classical agent model because the number of agents remains fixed throughout the execution of the protocol. Second, in this model, we study the possibility of designing self-stabilizing algorithms when those algorithms are run by mobile robots (or agents) evolving on a graph. We concentrate on the core building blocks of robot and agents problems: naming and leader election. Not surprisingly, when no constraints are given on the network graph topology and local execution model, both problems are impossible to solve. Finally, using minimal hypothesis with respect to impossibility results, we provide deterministic and probabilistic solutions to both problems, and show equivalence of these problems by an algorithmic reduction mechanism.<|reference_end|>
arxiv
@article{blin2007on, title={On the Self-stabilization of Mobile Robots in Graphs}, author={L\'elia Blin (IBISC) and Maria Gradinariu Potop-Butucaru (INRIA Rocquencourt, LIP6) and S\'ebastien Tixeuil (INRIA Futurs, LRI)}, journal={arXiv preprint arXiv:0708.0909}, year={2007}, archivePrefix={arXiv}, eprint={0708.0909}, primaryClass={cs.DS cs.DC} }
blin2007on
arxiv-931
0708.0927
Modeling Visual Information Processing in Brain: A Computer Vision Point of View and Approach
<|reference_start|>Modeling Visual Information Processing in Brain: A Computer Vision Point of View and Approach: We live in the Information Age, and information has become a critically important component of our life. The success of the Internet made huge amounts of it easily available and accessible to everyone. To keep the flow of this information manageable, means for its faultless circulation and effective handling have become urgently required. Considerable research efforts are dedicated today to address this necessity, but they are seriously hampered by the lack of a common agreement about "What is information?" In particular, what is "visual information" - human's primary input from the surrounding world. The problem is further aggravated by a long-lasting stance borrowed from the biological vision research that assumes human-like information processing as an enigmatic mix of perceptual and cognitive vision faculties. I am trying to find a remedy for this bizarre situation. Relying on a new definition of "information", which can be derived from Kolmogorov's complexity theory and Chaitin's notion of algorithmic information, I propose a unifying framework for visual information processing, which explicitly accounts for the perceptual and cognitive image processing peculiarities. I believe that this framework will be useful to overcome the difficulties that are impeding our attempts to develop the right model of human-like intelligent image processing.<|reference_end|>
arxiv
@article{diamant2007modeling, title={Modeling Visual Information Processing in Brain: A Computer Vision Point of View and Approach}, author={Emanuel Diamant}, journal={Signal Processing: Image Communication, vol. 22, issue 6, pp. 583-590, July 2007}, year={2007}, archivePrefix={arXiv}, eprint={0708.0927}, primaryClass={cs.AI cs.CV} }
diamant2007modeling
arxiv-932
0708.0964
Nodally 3-connected planar graphs and convex combination mappings
<|reference_start|>Nodally 3-connected planar graphs and convex combination mappings: A convex combination mapping of a planar graph is a plane mapping in which the external vertices are mapped to the corners of a convex polygon and every internal vertex is a proper weighted average of its neighbours. If a planar graph is nodally 3-connected or triangulated then every such mapping is an embedding (Tutte, Floater). We give a simple characterisation of nodally 3-connected planar graphs, and generalise the above result to any planar graph which admits any convex embedding.<|reference_end|>
arxiv
@article{dunlaing2007nodally, title={Nodally 3-connected planar graphs and convex combination mappings}, author={Colm O Dunlaing}, journal={arXiv preprint arXiv:0708.0964}, year={2007}, number={TCDMATH 06-16}, archivePrefix={arXiv}, eprint={0708.0964}, primaryClass={cs.CG} }
dunlaing2007nodally
arxiv-933
0708.0975
Near Optimal Broadcast with Network Coding in Large Sensor Networks
<|reference_start|>Near Optimal Broadcast with Network Coding in Large Sensor Networks: We study efficient broadcasting for wireless sensor networks, with network coding. We address this issue for homogeneous sensor networks in the plane. Our results are based on a simple principle (IREN/IRON), which sets the same rate on most of the nodes (wireless links) of the network. With this rate selection, we give a value of the maximum achievable broadcast rate of the source: our central result is a proof of the value of the min-cut for such networks, viewed as hypergraphs. Our metric for efficiency is the number of transmissions necessary to transmit one packet from the source to every destination: we show that IREN/IRON achieves near optimality for large networks; that is, asymptotically, nearly every transmission brings new information from the source to the receiver. As a consequence, network coding asymptotically outperforms any scheme that does not use network coding.<|reference_end|>
arxiv
@article{adjih2007near, title={Near Optimal Broadcast with Network Coding in Large Sensor Networks}, author={C\'edric Adjih (INRIA Rocquencourt) and Song Yean Cho (INRIA Rocquencourt) and Philippe Jacquet (INRIA Rocquencourt)}, journal={arXiv preprint arXiv:0708.0975}, year={2007}, archivePrefix={arXiv}, eprint={0708.0975}, primaryClass={cs.NI} }
adjih2007near
arxiv-934
0708.0977
From symmetry break to Poisson point process in 2D Voronoi tessellations: the generic nature of hexagons
<|reference_start|>From symmetry break to Poisson point process in 2D Voronoi tessellations: the generic nature of hexagons: We bridge the properties of the regular square and honeycomb Voronoi tessellations of the plane to those of the Poisson-Voronoi case, thus analyzing in a common framework symmetry-break processes and the approach to uniformly random distributions of tessellation-generating points. We consider ensemble simulations of tessellations generated by points whose regular positions are perturbed through a Gaussian noise controlled by the parameter alpha. We study the number of sides, the area, and the perimeter of the Voronoi cells. For alpha>0, hexagons are the most common class of cells, and 2-parameter gamma distributions describe well the statistics of the geometrical characteristics. The symmetry break due to noise destroys the square tessellation, whereas the honeycomb hexagonal tessellation is very stable and all Voronoi cells are hexagons for small but finite noise with alpha<0.1. For a moderate amount of Gaussian noise, memory of the specific unperturbed tessellation is lost, because the statistics of the two perturbed tessellations are indistinguishable. When alpha>2, results converge to those of Poisson-Voronoi tessellations. The properties of n-sided cells change with alpha until the Poisson-Voronoi limit is reached for alpha>2. The Desch law for perimeters is confirmed not to be valid and a square root dependence on n is established. The ensemble mean of the cell area and perimeter restricted to the hexagonal cells coincides with the full ensemble mean; this might imply that the number of sides acts as a thermodynamic state variable fluctuating about n=6; this reinforces the idea that hexagons, beyond their ubiquitous numerical prominence, can be taken as generic polygons in 2D Voronoi tessellations.<|reference_end|>
arxiv
@article{lucarini2007from, title={From symmetry break to Poisson point process in 2D Voronoi tessellations: the generic nature of hexagons}, author={Valerio Lucarini}, journal={J. Stat. Phys., 130, 1047-1062 (2008)}, year={2007}, doi={10.1007/s10955-007-9475-x}, archivePrefix={arXiv}, eprint={0708.0977}, primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.CG math-ph math.MP physics.data-an} }
lucarini2007from
arxiv-935
0708.1037
A Formulation of the Channel Capacity of Multiple-Access Channel
<|reference_start|>A Formulation of the Channel Capacity of Multiple-Access Channel: The necessary and sufficient condition of the channel capacity is rigorously formulated for the N-user discrete memoryless multiple-access channel (MAC). The essence of the formulation is to invoke an {\em elementary} MAC where sizes of input alphabets are not greater than the size of output alphabet. The main objective is to demonstrate that the channel capacity of an MAC is achieved by an elementary MAC included in the original MAC. The proof is quite straightforward by the very definition of the elementary MAC. Moreover it is proved that the Kuhn-Tucker conditions of the elementary MAC are strictly sufficient and obviously necessary for the channel capacity. The latter proof requires several steps: for the elementary MAC, every solution of the Kuhn-Tucker conditions reveals itself as a local maximum on the domain of all possible input probability distributions, and then it achieves the channel capacity. As a result, in respect of the channel capacity, the MAC in general can be regarded as an aggregate of a finite number of elementary MAC's.<|reference_end|>
arxiv
@article{watanabe2007a, title={A Formulation of the Channel Capacity of Multiple-Access Channel}, author={Yoichiro Watanabe and Koichi Kamoi}, journal={arXiv preprint arXiv:0708.1037}, year={2007}, number={Proc. 2002 IEEE Int'l Sym. on Information Theory, Lausanne, p.308, 2002}, archivePrefix={arXiv}, eprint={0708.1037}, primaryClass={cs.IT math.IT} }
watanabe2007a
arxiv-936
0708.1049
An Interval Analysis Based Study for the Design and the Comparison of 3-DOF Parallel Kinematic Machines
<|reference_start|>An Interval Analysis Based Study for the Design and the Comparison of 3-DOF Parallel Kinematic Machines: This paper addresses an interval analysis based study that is applied to the design and the comparison of 3-DOF parallel kinematic machines. Two design criteria are used, (i) a regular workspace shape and, (ii) a kinetostatic performance index that needs to be as homogeneous as possible throughout the workspace. The interval analysis based method takes these two criteria into account: on the basis of prescribed kinetostatic performances, the workspace is analysed to find out the largest regular dextrous workspace enclosed in the Cartesian workspace. An algorithm describing this method is introduced. Two 3-DOF translational parallel mechanisms designed for machining applications are compared using this method. The first machine features three fixed linear joints which are mounted orthogonally and the second one features three linear joints which are mounted in parallel. In both cases, the mobile platform moves in the Cartesian x-y-z space with fixed orientation.<|reference_end|>
arxiv
@article{chablat2007an, title={An Interval Analysis Based Study for the Design and the Comparison of 3-DOF Parallel Kinematic Machines}, author={Damien Chablat (IRCCyN) and Philippe Wenger (IRCCyN) and F\'elix Majou (IRCCyN) and Jean-Pierre Merlet (INRIA Sophia-Antipolis)}, journal={International Journal of Robotics Research 23, 6 (2004) 615-624}, year={2007}, archivePrefix={arXiv}, eprint={0708.1049}, primaryClass={cs.RO} }
chablat2007an
arxiv-937
0708.1078
Nearly MDS expander codes with reduced alphabet size
<|reference_start|>Nearly MDS expander codes with reduced alphabet size: Recently, Roth and Skachek proposed two methods for constructing nearly maximum-distance separable (MDS) expander codes. We show that through the simple modification of using mixed-alphabet codes derived from MDS codes as constituent codes in their code designs, one can obtain nearly MDS codes of significantly smaller alphabet size, albeit at the expense of a (very slight) reduction in code rate.<|reference_end|>
arxiv
@article{armand2007nearly, title={Nearly MDS expander codes with reduced alphabet size}, author={Marc A. Armand and Jianwen Zhang}, journal={arXiv preprint arXiv:0708.1078}, year={2007}, archivePrefix={arXiv}, eprint={0708.1078}, primaryClass={cs.IT math.IT} }
armand2007nearly
arxiv-938
0708.1116
A variant of the Recoil Growth algorithm to generate multi-polymer systems
<|reference_start|>A variant of the Recoil Growth algorithm to generate multi-polymer systems: The Recoil Growth algorithm, proposed in 1999 by Consta et al., is one of the most efficient algorithms available in the literature to sample from a multi-polymer system. Such problems are closely related to the generation of self-avoiding paths. In this paper, we study a variant of the original Recoil Growth algorithm, where we constrain the generation of a new polymer to take place on a specific class of graphs. This makes it possible to strike a fine trade-off between computational cost and success rate. We moreover give a simple proof for a lower bound on the irreducibility of this new algorithm, which applies to the original algorithm as well.<|reference_end|>
arxiv
@article{simatos2007a, title={A variant of the Recoil Growth algorithm to generate multi-polymer systems}, author={Florian Simatos}, journal={arXiv preprint arXiv:0708.1116}, year={2007}, archivePrefix={arXiv}, eprint={0708.1116}, primaryClass={cs.CE cond-mat.stat-mech} }
simatos2007a
arxiv-939
0708.1150
A Practical Ontology for the Large-Scale Modeling of Scholarly Artifacts and their Usage
<|reference_start|>A Practical Ontology for the Large-Scale Modeling of Scholarly Artifacts and their Usage: The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.<|reference_end|>
arxiv
@article{rodriguez2007a, title={A Practical Ontology for the Large-Scale Modeling of Scholarly Artifacts and their Usage}, author={Marko A. Rodriguez and Johan Bollen and Herbert Van de Sompel}, journal={Proceedings of the IEEE/ACM Joint Conference on Digital Libraries (JCDL'07), pp. 278-287, 2007}, year={2007}, doi={10.1145/1255175.1255229}, archivePrefix={arXiv}, eprint={0708.1150}, primaryClass={cs.DL cs.AI} }
rodriguez2007a
arxiv-940
0708.1179
Diversity-Multiplexing Tradeoff of Asynchronous Cooperative Diversity in Wireless Networks
<|reference_start|>Diversity-Multiplexing Tradeoff of Asynchronous Cooperative Diversity in Wireless Networks: Synchronization of relay nodes is an important and critical issue in exploiting cooperative diversity in wireless networks. In this paper, two asynchronous cooperative diversity schemes are proposed, namely, distributed delay diversity and asynchronous space-time coded cooperative diversity schemes. In terms of the overall diversity-multiplexing (DM) tradeoff function, we show that the proposed independent coding based distributed delay diversity and asynchronous space-time coded cooperative diversity schemes achieve the same performance as the synchronous space-time coded approach, which requires an accurate symbol-level timing synchronization to ensure that signals arriving at the destination from different relay nodes are perfectly synchronized. This demonstrates that the diversity order is maintained even in the presence of asynchronism between relay nodes. Moreover, when all relay nodes succeed in decoding the source information, the asynchronous space-time coded approach is capable of achieving better DM-tradeoff than synchronous schemes and performs equivalently to transmitting information through a parallel fading channel as far as the DM-tradeoff is concerned. Our results suggest the benefits of fully exploiting the space-time degrees of freedom in multiple antenna systems by employing asynchronous space-time codes even in a frequency flat fading channel. In addition, it is shown that asynchronous space-time coded systems are able to achieve higher mutual information than synchronous space-time coded systems for any finite signal-to-noise-ratio (SNR) when properly selected baseband waveforms are employed.<|reference_end|>
arxiv
@article{wei2007diversity-multiplexing, title={Diversity-Multiplexing Tradeoff of Asynchronous Cooperative Diversity in Wireless Networks}, author={Shuangqing Wei}, journal={arXiv preprint arXiv:0708.1179}, year={2007}, archivePrefix={arXiv}, eprint={0708.1179}, primaryClass={cs.IT math.IT} }
wei2007diversity-multiplexing
arxiv-941
0708.1211
A Deterministic Sub-linear Time Sparse Fourier Algorithm via Non-adaptive Compressed Sensing Methods
<|reference_start|>A Deterministic Sub-linear Time Sparse Fourier Algorithm via Non-adaptive Compressed Sensing Methods: We study the problem of estimating the best B term Fourier representation for a given frequency-sparse signal (i.e., vector) $\textbf{A}$ of length $N \gg B$. More explicitly, we investigate how to deterministically identify B of the largest magnitude frequencies of $\hat{\textbf{A}}$, and estimate their coefficients, in polynomial$(B,\log N)$ time. Randomized sub-linear time algorithms which have a small (controllable) probability of failure for each processed signal exist for solving this problem. However, for failure intolerant applications such as those involving mission-critical hardware designed to process many signals over a long lifetime, deterministic algorithms with no probability of failure are highly desirable. In this paper we build on the deterministic Compressed Sensing results of Cormode and Muthukrishnan (CM) \cite{CMDetCS3,CMDetCS1,CMDetCS2} in order to develop the first known deterministic sub-linear time sparse Fourier Transform algorithm suitable for failure intolerant applications. Furthermore, in the process of developing our new Fourier algorithm, we present a simplified deterministic Compressed Sensing algorithm which improves on CM's algebraic compressibility results while simultaneously maintaining their results concerning exponential decay.<|reference_end|>
arxiv
@article{iwen2007a, title={A Deterministic Sub-linear Time Sparse Fourier Algorithm via Non-adaptive Compressed Sensing Methods}, author={M. A. Iwen}, journal={arXiv preprint arXiv:0708.1211}, year={2007}, archivePrefix={arXiv}, eprint={0708.1211}, primaryClass={cs.DM cs.NA} }
iwen2007a
arxiv-942
0708.1242
Cost-minimising strategies for data labelling : optimal stopping and active learning
<|reference_start|>Cost-minimising strategies for data labelling : optimal stopping and active learning: Supervised learning deals with the inference of a distribution over an output or label space $\mathcal{Y}$ conditioned on points in an observation space $\mathcal{X}$, given a training dataset $D$ of pairs in $\mathcal{X} \times \mathcal{Y}$. However, in a lot of applications of interest, acquisition of large amounts of observations is easy, while the process of generating labels is time-consuming or costly. One way to deal with this problem is {\em active} learning, where points to be labelled are selected with the aim of creating a model with better performance than that of a model trained on an equal number of randomly sampled points. In this paper, we instead propose to deal with the labelling cost directly: The learning goal is defined as the minimisation of a cost which is a function of the expected model performance and the total cost of the labels used. This allows the development of general strategies and specific algorithms for (a) optimal stopping, where the expected cost dictates whether label acquisition should continue, and (b) empirical evaluation, where the cost is used as a performance metric for a given combination of inference, stopping and sampling methods. Though the main focus of the paper is optimal stopping, we also aim to provide the background for further developments and discussion in the related field of active learning.<|reference_end|>
arxiv
@article{dimitrakakis2007cost-minimising, title={Cost-minimising strategies for data labelling : optimal stopping and active learning}, author={Christos Dimitrakakis and Christian Savu-Krohn}, journal={arXiv preprint arXiv:0708.1242}, year={2007}, archivePrefix={arXiv}, eprint={0708.1242}, primaryClass={cs.LG} }
dimitrakakis2007cost-minimising
arxiv-943
0708.1343
A Matrix Ring Description for Cyclic Convolutional Codes
<|reference_start|>A Matrix Ring Description for Cyclic Convolutional Codes: In this paper, we study convolutional codes with a specific cyclic structure. By definition, these codes are left ideals in a certain skew polynomial ring. Using that the skew polynomial ring is isomorphic to a matrix ring we can describe the algebraic parameters of the codes in a more accessible way. We show that the existence of such codes with given algebraic parameters can be reduced to the solvability of a modified rook problem. It is our strong belief that the rook problem is always solvable, and we present solutions in particular cases.<|reference_end|>
arxiv
@article{gluesing-luerssen2007a, title={A Matrix Ring Description for Cyclic Convolutional Codes}, author={Heide Gluesing-Luerssen and Fai-Lung Tsang}, journal={arXiv preprint arXiv:0708.1343}, year={2007}, archivePrefix={arXiv}, eprint={0708.1343}, primaryClass={cs.IT math.IT math.RA} }
gluesing-luerssen2007a
arxiv-944
0708.1362
Physical limits of inference
<|reference_start|>Physical limits of inference: I show that physical devices that perform observation, prediction, or recollection share an underlying mathematical structure. I call devices with that structure "inference devices". I present a set of existence and impossibility results concerning inference devices. These results hold independent of the precise physical laws governing our universe. In a limited sense, the impossibility results establish that Laplace was wrong to claim that even in a classical, non-chaotic universe the future can be unerringly predicted, given sufficient knowledge of the present. Alternatively, these impossibility results can be viewed as a non-quantum mechanical "uncertainty principle". Next I explore the close connections between the mathematics of inference devices and of Turing Machines. In particular, the impossibility results for inference devices are similar to the Halting theorem for TM's. Furthermore, one can define an analog of Universal TM's (UTM's) for inference devices. I call those analogs "strong inference devices". I use strong inference devices to define the "inference complexity" of an inference task, which is the analog of the Kolmogorov complexity of computing a string. However no universe can contain more than one strong inference device. So whereas the Kolmogorov complexity of a string is arbitrary up to specification of the UTM, there is no such arbitrariness in the inference complexity of an inference task. I end by discussing the philosophical implications of these results, e.g., for whether the universe "is" a computer.<|reference_end|>
arxiv
@article{wolpert2007physical, title={Physical limits of inference}, author={David H. Wolpert}, journal={Physica D 237:1257-1281, 2008}, year={2007}, doi={10.1016/j.physd.2008.03.040}, archivePrefix={arXiv}, eprint={0708.1362}, primaryClass={cond-mat.stat-mech cs.CC cs.IT gr-qc math.IT} }
wolpert2007physical
arxiv-945
0708.1411
Achievable Outage Rates with Improved Decoding of Bicm Multiband Ofdm Under Channel Estimation Errors
<|reference_start|>Achievable Outage Rates with Improved Decoding of Bicm Multiband Ofdm Under Channel Estimation Errors: We consider the decoding of bit interleaved coded modulation (BICM) applied to multiband OFDM for practical scenarios where only a noisy (possibly very bad) estimate of the channel is available at the receiver. First, a decoding metric based on the channel a posteriori probability density, conditioned on the channel estimate, is derived and used for decoding BICM multiband OFDM. Then, we characterize the limits of reliable information rates in terms of the maximal achievable outage rates associated with the proposed metric. We also compare our results with the outage rates of a system using a theoretical decoder. Our results are useful for designing a communication system where a prescribed quality of service (QoS), in terms of achievable target rates with small error probability, must be satisfied even in the presence of imperfect channel estimation. Numerical results over both realistic UWB and theoretical Rayleigh fading channels show that the proposed method provides significant gain in terms of BER and outage rates compared to the classical mismatched detector, without introducing any additional complexity.<|reference_end|>
arxiv
@article{sadough2007achievable, title={Achievable Outage Rates with Improved Decoding of Bicm Multiband Ofdm Under Channel Estimation Errors}, author={Sajad Sadough (LSS), Pablo Piantanida (LSS), Pierre Duhamel (LSS)}, journal={In 40th Asilomar Conference on Signals, Systems, and Computers, Monterey, United States (2007)}, year={2007}, archivePrefix={arXiv}, eprint={0708.1411}, primaryClass={cs.NI} }
sadough2007achievable
arxiv-946
0708.1413
On Optimal Turbo Decoding of Wideband MIMO-OFDM Systems Under Imperfect Channel State Information
<|reference_start|>On Optimal Turbo Decoding of Wideband MIMO-OFDM Systems Under Imperfect Channel State Information: We consider the decoding of bit interleaved coded modulation (BICM) applied to both multiband and MIMO OFDM systems for typical scenarios where only a noisy (possibly very bad) estimate of the channel is provided by sending a limited number of pilot symbols. First, by using a Bayesian framework involving the channel a posteriori density, we adopt a practical decoding metric that is robust to the presence of channel estimation errors. Then this metric is used in the demapping part of BICM multiband and MIMO OFDM receivers. We also compare our results with the performance of a mismatched decoder that replaces the channel by its estimate in the decoding metric. Numerical results over both realistic UWB and theoretical Rayleigh fading channels show that the proposed method provides significant gain in terms of bit error rate compared to the classical mismatched detector, without introducing any additional complexity.<|reference_end|>
arxiv
@article{sadough2007on, title={On Optimal Turbo Decoding of Wideband MIMO-OFDM Systems Under Imperfect Channel State Information}, author={Sajad Sadough (LSS), Pierre Duhamel (LSS)}, journal={COST2100 Meeting, Lisbon : Portugal (2007)}, year={2007}, archivePrefix={arXiv}, eprint={0708.1413}, primaryClass={cs.NI} }
sadough2007on
arxiv-947
0708.1414
Wavelet Based Semi-blind Channel Estimation For Multiband OFDM
<|reference_start|>Wavelet Based Semi-blind Channel Estimation For Multiband OFDM: This paper introduces an expectation-maximization (EM) algorithm within a wavelet domain Bayesian framework for semi-blind channel estimation of multiband OFDM based UWB communications. A prior distribution is chosen for the wavelet coefficients of the unknown channel impulse response in order to model a sparseness property of the wavelet representation. This prior yields, in maximum a posteriori estimation, a thresholding rule within the EM algorithm. We particularly focus on reducing the number of estimated parameters by iteratively discarding ``insignificant'' wavelet coefficients from the estimation process. Simulation results using UWB channels issued from both models and measurements show that under sparsity conditions, the proposed algorithm outperforms pilot based channel estimation in terms of mean square error and bit error rate and enhances the estimation accuracy with less computational complexity than traditional semi-blind methods.<|reference_end|>
arxiv
@article{sadough2007wavelet, title={Wavelet Based Semi-blind Channel Estimation For Multiband OFDM}, author={Sajad Sadough (LSS), Mahieddine Ichir (LSS), Emmanuel Jaffrot, Pierre Duhamel (LSS)}, journal={In European Wireless, Paris, France (2007)}, year={2007}, archivePrefix={arXiv}, eprint={0708.1414}, primaryClass={cs.NI} }
sadough2007wavelet
arxiv-948
0708.1416
MIMO-OFDM Optimal Decoding and Achievable Information Rates Under Imperfect Channel Estimation
<|reference_start|>MIMO-OFDM Optimal Decoding and Achievable Information Rates Under Imperfect Channel Estimation: Optimal decoding of bit interleaved coded modulation (BICM) MIMO-OFDM where an imperfect channel estimate is available at the receiver is investigated. First, by using a Bayesian approach involving the channel a posteriori density, we derive a practical decoding metric for general memoryless channels that is robust to the presence of channel estimation errors. Then, we evaluate the outage rates achieved by a decoder that uses our proposed metric. The performance of the proposed decoder is compared to the classical mismatched decoder and a theoretical decoder defined as the best decoder in the presence of imperfect channel estimation. Numerical results over Rayleigh block fading MIMO-OFDM channels show that the proposed decoder outperforms mismatched decoding in terms of bit error rate and outage capacity without introducing any additional complexity.<|reference_end|>
arxiv
@article{sadough2007mimo-ofdm, title={MIMO-OFDM Optimal Decoding and Achievable Information Rates Under Imperfect Channel Estimation}, author={Sajad Sadough (LSS), Pablo Piantanida (LSS), Pierre Duhamel (LSS)}, journal={In The VIII IEEE Workshop on Signal Processing Advances in Wireless Communications, Finland (2007)}, year={2007}, archivePrefix={arXiv}, eprint={0708.1416}, primaryClass={cs.NI} }
sadough2007mimo-ofdm
arxiv-949
0708.1480
Valid formulas, games and network protocols
<|reference_start|>Valid formulas, games and network protocols: We describe a remarkable relation between the notion of valid formula of predicate logic and the specification of network protocols. We give several examples such as the acknowledgement of one packet or of a sequence of packets. We show how to specify the composition of protocols.<|reference_end|>
arxiv
@article{krivine2007valid, title={Valid formulas, games and network protocols}, author={Jean-Louis Krivine (PPS), Yves Legrandg\'erard (PPS)}, journal={arXiv preprint arXiv:0708.1480}, year={2007}, archivePrefix={arXiv}, eprint={0708.1480}, primaryClass={cs.LO} }
krivine2007valid
arxiv-950
0708.1491
On perfect, amicable, and sociable chains
<|reference_start|>On perfect, amicable, and sociable chains: Let $x = (x_0,...,x_{n-1})$ be an $n$-chain, i.e., an $n$-tuple of non-negative integers $< n$. Consider the operator $s: x \mapsto x' = (x'_0,...,x'_{n-1})$, where $x'_j$ represents the number of $j$'s appearing among the components of $x$. An $n$-chain $x$ is said to be perfect if $s(x) = x$. For example, $(2,1,2,0,0)$ is a perfect 5-chain. Analogously to the theory of perfect, amicable, and sociable numbers, one can define from the operator $s$ the concepts of amicable pair and sociable group of chains. In this paper we give an exhaustive list of all the perfect, amicable, and sociable chains.<|reference_end|>
arxiv
@article{marichal2007on, title={On perfect, amicable, and sociable chains}, author={Jean-Luc Marichal}, journal={arXiv preprint arXiv:0708.1491}, year={2007}, archivePrefix={arXiv}, eprint={0708.1491}, primaryClass={math.CO cs.DM math.NT} }
marichal2007on
arxiv-951
0708.1496
A Light-Based Device for Solving the Hamiltonian Path Problem
<|reference_start|>A Light-Based Device for Solving the Hamiltonian Path Problem: In this paper we suggest the use of light for performing useful computations. Namely, we propose a special device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time.<|reference_end|>
arxiv
@article{oltean2007a, title={A Light-Based Device for Solving the Hamiltonian Path Problem}, author={Mihai Oltean}, journal={LNCS 4135, Unconventional Computation conference, pp. 217-227, 2006}, year={2007}, doi={10.1007/11839132}, archivePrefix={arXiv}, eprint={0708.1496}, primaryClass={cs.AR cs.DC} }
oltean2007a
arxiv-952
0708.1503
Defensive forecasting for optimal prediction with expert advice
<|reference_start|>Defensive forecasting for optimal prediction with expert advice: The method of defensive forecasting is applied to the problem of prediction with expert advice for binary outcomes. It turns out that defensive forecasting is not only competitive with the Aggregating Algorithm but also handles the case of "second-guessing" experts, whose advice depends on the learner's prediction; this paper assumes that the dependence on the learner's prediction is continuous.<|reference_end|>
arxiv
@article{vovk2007defensive, title={Defensive forecasting for optimal prediction with expert advice}, author={Vladimir Vovk}, journal={arXiv preprint arXiv:0708.1503}, year={2007}, archivePrefix={arXiv}, eprint={0708.1503}, primaryClass={cs.LG} }
vovk2007defensive
arxiv-953
0708.1512
Solving the Hamiltonian path problem with a light-based computer
<|reference_start|>Solving the Hamiltonian path problem with a light-based computer: In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time.<|reference_end|>
arxiv
@article{oltean2007solving, title={Solving the Hamiltonian path problem with a light-based computer}, author={Mihai Oltean}, journal={Natural Computing, Springer, Vol 6, 2007}, year={2007}, doi={10.1007/s11047-007-9042-z}, archivePrefix={arXiv}, eprint={0708.1512}, primaryClass={cs.AR cs.DC} }
oltean2007solving
arxiv-954
0708.1527
A Data-Parallel Version of Aleph
<|reference_start|>A Data-Parallel Version of Aleph: This is to present work on modifying the Aleph ILP system so that it evaluates the hypothesised clauses in parallel by distributing the data-set among the nodes of a parallel or distributed machine. The paper briefly discusses MPI, the interface used to access message-passing libraries for parallel computers and clusters. It then proceeds to describe an extension of YAP Prolog with an MPI interface and an implementation of data-parallel clause evaluation for Aleph through this interface. The paper concludes by testing the data-parallel Aleph on artificially constructed data-sets.<|reference_end|>
arxiv
@article{konstantopoulos2007a, title={A Data-Parallel Version of Aleph}, author={Stasinos Konstantopoulos}, journal={arXiv preprint arXiv:0708.1527}, year={2007}, archivePrefix={arXiv}, eprint={0708.1527}, primaryClass={cs.AI cs.DC} }
konstantopoulos2007a
arxiv-955
0708.1529
Resolution over Linear Equations and Multilinear Proofs
<|reference_start|>Resolution over Linear Equations and Multilinear Proofs: We develop and study the complexity of propositional proof systems of varying strength extending resolution by allowing it to operate with disjunctions of linear equations instead of clauses. We demonstrate polynomial-size refutations for hard tautologies like the pigeonhole principle, Tseitin graph tautologies and the clique-coloring tautologies in these proof systems. Using the (monotone) interpolation by a communication game technique we establish an exponential-size lower bound on refutations in a certain, considerably strong, fragment of resolution over linear equations, as well as a general polynomial upper bound on (non-monotone) interpolants in this fragment. We then apply these results to extend and improve previous results on multilinear proofs (over fields of characteristic 0), as studied in [RazTzameret06]. Specifically, we show the following: 1. Proofs operating with depth-3 multilinear formulas polynomially simulate a certain, considerably strong, fragment of resolution over linear equations. 2. Proofs operating with depth-3 multilinear formulas admit polynomial-size refutations of the pigeonhole principle and Tseitin graph tautologies. The former improve over a previous result that established small multilinear proofs only for the \emph{functional} pigeonhole principle. The latter are different than previous proofs, and apply to multilinear proofs of Tseitin mod p graph tautologies over any field of characteristic 0. We conclude by connecting resolution over linear equations with extensions of the cutting planes proof system.<|reference_end|>
arxiv
@article{raz2007resolution, title={Resolution over Linear Equations and Multilinear Proofs}, author={Ran Raz, Iddo Tzameret}, journal={Annals of Pure and Applied Logic , 155(3):194-224, 2008;}, year={2007}, doi={10.1016/j.apal.2008.04.001}, archivePrefix={arXiv}, eprint={0708.1529}, primaryClass={cs.CC cs.LO} }
raz2007resolution
arxiv-956
0708.1558
Construction of a 3-Dimensional MDS code
<|reference_start|>Construction of a 3-Dimensional MDS code: In this paper, we describe a procedure for constructing $q$--ary $[N,3,N-2]$--MDS codes, of length $N\leq q+1$ (for $q$ odd) or $N\leq q+2$ (for $q$ even), using a set of non--degenerate Hermitian forms in $PG(2,q^2)$.<|reference_end|>
arxiv
@article{aguglia2007construction, title={Construction of a 3-Dimensional MDS code}, author={A. Aguglia, L. Giuzzi}, journal={Contributions to Discrete Mathematics 3 no. 1: 39-46 (2007)}, year={2007}, archivePrefix={arXiv}, eprint={0708.1558}, primaryClass={cs.IT math.IT} }
aguglia2007construction
arxiv-957
0708.1564
Learning Phonotactics Using ILP
<|reference_start|>Learning Phonotactics Using ILP: This paper describes experiments on learning Dutch phonotactic rules using Inductive Logic Programming, a machine learning discipline based on inductive logical operators. Two different ways of approaching the problem are experimented with, and compared against each other as well as with related work on the task. The results show a direct correspondence between the quality and informedness of the background knowledge and the constructed theory, demonstrating the ability of ILP to take good advantage of the prior domain knowledge available. Further research is outlined.<|reference_end|>
arxiv
@article{konstantopoulos2007learning, title={Learning Phonotactics Using ILP}, author={Stasinos Konstantopoulos}, journal={Special Issue of the WEB-SLS Journal: The Language Sections of the ESSLLI-01 Student Session. 2002}, year={2007}, archivePrefix={arXiv}, eprint={0708.1564}, primaryClass={cs.CL} }
konstantopoulos2007learning
arxiv-958
0708.1579
Homogeneous temporal activity patterns in a large online communication space
<|reference_start|>Homogeneous temporal activity patterns in a large online communication space: The many-to-many social communication activity on the popular technology-news website Slashdot has been studied. We have concentrated on the dynamics of message production without considering semantic relations and have found regular temporal patterns in the reaction time of the community to a news-post as well as in single user behavior. The statistics of these activities follow log-normal distributions. Daily and weekly oscillatory cycles, which cause slight variations of this simple behavior, are identified. A superposition of two log-normal distributions can account for these variations. The findings are remarkable since the distribution of the number of comments per user, which is also analyzed, indicates a great amount of heterogeneity in the community. The reader may find it surprising that only a few parameters allow a detailed description, or even prediction, of social many-to-many information exchange in this kind of popular public spaces.<|reference_end|>
arxiv
@article{kaltenbrunner2007homogeneous, title={Homogeneous temporal activity patterns in a large online communication space}, author={Andreas Kaltenbrunner, Vicen\c{c} G\'omez, Ayman Moghnieh, Rodrigo Meza, Josep Blat, Vicente L\'opez}, journal={arXiv preprint arXiv:0708.1579}, year={2007}, archivePrefix={arXiv}, eprint={0708.1579}, primaryClass={cs.NI} }
kaltenbrunner2007homogeneous
arxiv-959
0708.1580
Optimal Causal Inference: Estimating Stored Information and Approximating Causal Architecture
<|reference_start|>Optimal Causal Inference: Estimating Stored Information and Approximating Causal Architecture: We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate distortion theory to use causal shielding---a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that, in the limit in which a model complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of underlying causal states can be found by optimal causal estimation. A previously derived model complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid over-fitting.<|reference_end|>
arxiv
@article{still2007optimal, title={Optimal Causal Inference: Estimating Stored Information and Approximating Causal Architecture}, author={Susanne Still, James P. Crutchfield, Christopher J. Ellison}, journal={arXiv preprint arXiv:0708.1580}, year={2007}, archivePrefix={arXiv}, eprint={0708.1580}, primaryClass={cs.IT cond-mat.stat-mech cs.LG math.IT math.ST stat.TH} }
still2007optimal
arxiv-960
0708.1593
Updating Probabilities with Data and Moments
<|reference_start|>Updating Probabilities with Data and Moments: We use the method of Maximum (relative) Entropy to process information in the form of observed data and moment constraints. The generic "canonical" form of the posterior distribution for the problem of simultaneous updating with data and moments is obtained. We discuss the general problem of non-commuting constraints, when they should be processed sequentially and when simultaneously. As an illustration, the multinomial example of die tosses is solved in detail for two superficially similar but actually very different problems.<|reference_end|>
arxiv
@article{giffin2007updating, title={Updating Probabilities with Data and Moments}, author={Adom Giffin and Ariel Caticha}, journal={arXiv preprint arXiv:0708.1593}, year={2007}, doi={10.1063/1.2821302}, archivePrefix={arXiv}, eprint={0708.1593}, primaryClass={physics.data-an cs.IT math.IT math.ST physics.comp-ph physics.pop-ph stat.AP stat.CO stat.ME stat.TH} }
giffin2007updating
arxiv-961
0708.1624
Designing a Collaborative Research Environment for Students and their Supervisors (CRESS)
<|reference_start|>Designing a Collaborative Research Environment for Students and their Supervisors (CRESS): In a previous paper the CSCR domain was defined. Here this is taken to the next stage where the design of a particular Collaborative Research Environment to support Students and Supervisors (CRESS) is considered. Following the CSCR structure this paper deals with an analysis of 13 collaborative working environments to determine a preliminary design for CRESS in order to discover the most appropriate set of tools for its implementation.<|reference_end|>
arxiv
@article{hinze-hoare2007designing, title={Designing a Collaborative Research Environment for Students and their Supervisors (CRESS)}, author={V. Hinze-Hoare}, journal={arXiv preprint arXiv:0708.1624}, year={2007}, archivePrefix={arXiv}, eprint={0708.1624}, primaryClass={cs.HC} }
hinze-hoare2007designing
arxiv-962
0708.1723
Hybrid Branching-Time Logics
<|reference_start|>Hybrid Branching-Time Logics: Hybrid branching-time logics are introduced as extensions of CTL-like logics with state variables and the downarrow-binder. Following recent work in the linear framework, only logics with a single variable are considered. The expressive power and the complexity of satisfiability of the resulting logics is investigated. As main result, the satisfiability problem for the hybrid versions of several branching-time logics is proved to be 2EXPTIME-complete. These branching-time logics range from strict fragments of CTL to extensions of CTL that can talk about the past and express fairness-properties. The complexity gap relative to CTL is explained by a corresponding succinctness result. To prove the upper bound, the automata-theoretic approach to branching-time logics is extended to hybrid logics, showing that non-emptiness of alternating one-pebble Buchi tree automata is 2EXPTIME-complete.<|reference_end|>
arxiv
@article{weber2007hybrid, title={Hybrid Branching-Time Logics}, author={Volker Weber}, journal={arXiv preprint arXiv:0708.1723}, year={2007}, archivePrefix={arXiv}, eprint={0708.1723}, primaryClass={cs.LO cs.CC} }
weber2007hybrid
arxiv-963
0708.1725
Design: One, but in different forms
<|reference_start|>Design: One, but in different forms: This overview paper defends an augmented cognitively oriented generic-design hypothesis: there are both significant similarities between the design activities implemented in different situations and crucial differences between these and other cognitive activities; yet, characteristics of a design situation (related to the design process, the designers, and the artefact) introduce specificities in the corresponding cognitive activities and structures that are used, and in the resulting designs. We thus augment the classical generic-design hypothesis with that of different forms of designing. We review the data available in the cognitive design research literature and propose a series of candidates underlying such forms of design, outlining a number of directions requiring further elaboration.<|reference_end|>
arxiv
@article{visser2007design:, title={Design: One, but in different forms}, author={Willemien Visser (LTCI)}, journal={Design Studies 30, 3 (2009) 187-223}, year={2007}, doi={10.1016/j.destud.2008.11.004}, archivePrefix={arXiv}, eprint={0708.1725}, primaryClass={cs.HC} }
visser2007design:
arxiv-964
0708.1768
Cryptanalysis of shifted conjugacy authentication protocol
<|reference_start|>Cryptanalysis of shifted conjugacy authentication protocol: In this paper we present the first practical attack on the shifted conjugacy-based authentication protocol proposed by P. Dehornoy. We discuss the weaknesses of that primitive and propose ways to improve the protocol.<|reference_end|>
arxiv
@article{longrigg2007cryptanalysis, title={Cryptanalysis of shifted conjugacy authentication protocol}, author={Jonathan Longrigg and Alexander Ushakov}, journal={arXiv preprint arXiv:0708.1768}, year={2007}, archivePrefix={arXiv}, eprint={0708.1768}, primaryClass={math.GR cs.CR} }
longrigg2007cryptanalysis
arxiv-965
0708.1818
Computational Simulation and 3D Virtual Reality Engineering Tools for Dynamical Modeling and Imaging of Composite Nanomaterials
<|reference_start|>Computational Simulation and 3D Virtual Reality Engineering Tools for Dynamical Modeling and Imaging of Composite Nanomaterials: An adventure at engineering design and modeling is possible with a Virtual Reality Environment (VRE) that uses multiple computer-generated media to let a user experience situations that are temporally and spatially prohibiting. In this paper, an approach to developing some advanced architecture and modeling tools is presented to allow multiple frameworks to work together while being shielded from the application program. This architecture is being developed in a framework of workbench interactive tools for next generation nanoparticle-reinforced damping/dynamic systems. Through the use of the system, an engineer/programmer can respectively concentrate on tailoring an engineering design concept of a novel system and the application software design while using existing databases/software outputs.<|reference_end|>
arxiv
@article{bochkareva2007computational, title={Computational Simulation and 3D Virtual Reality Engineering Tools for Dynamical Modeling and Imaging of Composite Nanomaterials}, author={L.-V. Bochkareva, M.-V. Kireitseu, G. R. Tomlinson, H. Altenbach, V. Kompis, D. Hui}, journal={In European Nano Systems Workshop - ENS 2005, Paris, France (2005)}, year={2007}, archivePrefix={arXiv}, eprint={0708.1818}, primaryClass={cs.CE cond-mat.other} }
bochkareva2007computational
arxiv-966
0708.1859
Multiple-Description Coding by Dithered Delta-Sigma Quantization
<|reference_start|>Multiple-Description Coding by Dithered Delta-Sigma Quantization: We address the connection between the multiple-description (MD) problem and Delta-Sigma quantization. The inherent redundancy due to oversampling in Delta-Sigma quantization, and the simple linear-additive noise model resulting from dithered lattice quantization, allow us to construct a symmetric and time-invariant MD coding scheme. We show that the use of a noise shaping filter makes it possible to trade off central distortion for side distortion. Asymptotically as the dimension of the lattice vector quantizer and order of the noise shaping filter approach infinity, the entropy rate of the dithered Delta-Sigma quantization scheme approaches the symmetric two-channel MD rate-distortion function for a memoryless Gaussian source and MSE fidelity criterion, at any side-to-central distortion ratio and any resolution. In the optimal scheme, the infinite-order noise shaping filter must be minimum phase and have a piece-wise flat power spectrum with a single jump discontinuity. An important advantage of the proposed design is that it is symmetric in rate and distortion by construction, so the coding rates of the descriptions are identical and there is therefore no need for source splitting.<|reference_end|>
arxiv
@article{ostergaard2007multiple-description, title={Multiple-Description Coding by Dithered Delta-Sigma Quantization}, author={Jan Ostergaard and Ram Zamir}, journal={arXiv preprint arXiv:0708.1859}, year={2007}, archivePrefix={arXiv}, eprint={0708.1859}, primaryClass={cs.IT math.IT} }
ostergaard2007multiple-description
arxiv-967
0708.1877
A nearly tight memory-redundancy trade-off for one-pass compression
<|reference_start|>A nearly tight memory-redundancy trade-off for one-pass compression: Let $s$ be a string of length $n$ over an alphabet of constant size $\sigma$ and let $c$ and $\epsilon$ be constants with $1 \geq c \geq 0$ and $\epsilon > 0$. Using $O(n)$ time, $O(n^c)$ bits of memory and one pass we can always encode $s$ in $n H_k(s) + O(\sigma^k n^{1 - c + \epsilon})$ bits for all integers $k \geq 0$ simultaneously. On the other hand, even with unlimited time, using $O(n^c)$ bits of memory and one pass we cannot always encode $s$ in $O(n H_k(s) + \sigma^k n^{1 - c - \epsilon})$ bits for, e.g., $k = \lceil (c + \epsilon / 2) \log_\sigma n \rceil$.<|reference_end|>
arxiv
@article{gagie2007a, title={A nearly tight memory-redundancy trade-off for one-pass compression}, author={Travis Gagie}, journal={arXiv preprint arXiv:0708.1877}, year={2007}, archivePrefix={arXiv}, eprint={0708.1877}, primaryClass={cs.IT math.IT} }
gagie2007a
arxiv-968
0708.1903
On Edge-Disjoint Pairs Of Matchings
<|reference_start|>On Edge-Disjoint Pairs Of Matchings: For a graph G, consider the pairs of edge-disjoint matchings whose union consists of as many edges as possible. Let H be the largest matching among such pairs. Let M be a maximum matching of G. We show that 5/4 is a tight upper bound for |M|/|H|.<|reference_end|>
arxiv
@article{mkrtchyan2007on, title={On Edge-Disjoint Pairs Of Matchings}, author={V. V. Mkrtchyan, V. L. Musoyan, A. V. Tserunyan}, journal={Discrete Mathematics, 2008, Vol 308/23 pp 5823-5828}, year={2007}, doi={10.1016/j.disc.2007.09.061}, archivePrefix={arXiv}, eprint={0708.1903}, primaryClass={cs.DM} }
mkrtchyan2007on
arxiv-969
0708.1909
Lower Bounds for the Complexity of the Voronoi Diagram of Polygonal Curves under the Discrete Frechet Distance
<|reference_start|>Lower Bounds for the Complexity of the Voronoi Diagram of Polygonal Curves under the Discrete Frechet Distance: We give lower bounds for the combinatorial complexity of the Voronoi diagram of polygonal curves under the discrete Frechet distance. We show that the Voronoi diagram of n curves in R^d with k vertices each, has complexity Omega(n^{dk}) for dimension d=1,2 and Omega(n^{d(k-1)+2}) for d>2.<|reference_end|>
arxiv
@article{buchin2007lower, title={Lower Bounds for the Complexity of the Voronoi Diagram of Polygonal Curves under the Discrete Frechet Distance}, author={Kevin Buchin and Maike Buchin}, journal={arXiv preprint arXiv:0708.1909}, year={2007}, archivePrefix={arXiv}, eprint={0708.1909}, primaryClass={cs.CG cs.CC} }
buchin2007lower
arxiv-970
0708.1962
Exact Cover with light
<|reference_start|>Exact Cover with light: We suggest a new optical solution for solving the YES/NO version of the Exact Cover problem by using the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then to pick the correct one. In our case the device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible covers (exact or not) of the given set. For selecting the correct solution we assign to each item, from the set to be covered, a special integer number. These numbers will actually represent delays induced to light when it passes through arcs. The solution is represented as a subray arriving at a certain moment in the destination node. This will tell us if an exact cover does exist or not.<|reference_end|>
arxiv
@article{oltean2007exact, title={Exact Cover with light}, author={Mihai Oltean, Oana Muntean}, journal={New Generation Computing, Springer-Verlag, Vol. 26, Issue 4, pp. 327-344, 2008}, year={2007}, doi={10.1007/s00354-008-0049-5}, archivePrefix={arXiv}, eprint={0708.1962}, primaryClass={cs.AR cs.DC} }
oltean2007exact
arxiv-971
0708.1964
Solving the subset-sum problem with a light-based device
<|reference_start|>Solving the subset-sum problem with a light-based device: We propose a special computational device which uses light rays for solving the subset-sum problem. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When the light is passing through an arc it is delayed by the amount of time indicated by the number placed in that arc. At the destination node we will check if there is a ray whose total delay is equal to the target value of the subset-sum problem (plus some constants).<|reference_end|>
arxiv
@article{oltean2007solving, title={Solving the subset-sum problem with a light-based device}, author={Mihai Oltean, Oana Muntean}, journal={Natural Computing, Springer-Verlag, Vol 8, Issue 2, pp. 321-331, 2009}, year={2007}, doi={10.1007/s11047-007-9059-3}, archivePrefix={arXiv}, eprint={0708.1964}, primaryClass={cs.AR cs.AI cs.DC} }
oltean2007solving
arxiv-972
0708.2021
Who is the best connected EC researcher? Centrality analysis of the complex network of authors in evolutionary computation
<|reference_start|>Who is the best connected EC researcher? Centrality analysis of the complex network of authors in evolutionary computation: Co-authorship graphs (that is, the graph of authors linked by co-authorship of papers) are complex networks, which express the dynamics of a complex system. Their study has only recently started to draw interest from the EC community, the first paper dealing with it having been published two years ago. In this paper we will study the co-authorship network of EC at a microscopic level. Our objective is to ascertain which are the most relevant nodes (i.e. authors) in it. For this purpose, we examine several metrics defined in the complex-network literature, and analyze them both in isolation and combined within a Pareto-dominance approach. The result of our analysis indicates that there are some well-known researchers that appear systematically in top rankings. This also provides some hints on the social behavior of our community.<|reference_end|>
arxiv
@article{merelo2007who, title={Who is the best connected EC researcher? Centrality analysis of the complex network of authors in evolutionary computation}, author={Juan J. Merelo and Carlos Cotta}, journal={arXiv preprint arXiv:0708.2021}, year={2007}, archivePrefix={arXiv}, eprint={0708.2021}, primaryClass={cs.CY cs.NE} }
merelo2007who
arxiv-973
0708.2023
Nonantagonistic noisy duels of discrete type with an arbitrary number of actions
<|reference_start|>Nonantagonistic noisy duels of discrete type with an arbitrary number of actions: We study a nonzero-sum game of two players which is a generalization of the antagonistic noisy duel of discrete type. The game is considered from the point of view of various criterions of optimality. We prove existence of epsilon-equilibrium situations and show that the epsilon-equilibrium strategies that we have found are epsilon-maxmin. Conditions under which the equilibrium plays are Pareto-optimal are given. Keywords: noisy duel, payoff function, strategy, equilibrium situation, Pareto optimality, the value of a game.<|reference_end|>
arxiv
@article{positselskaya2007nonantagonistic, title={Nonantagonistic noisy duels of discrete type with an arbitrary number of actions}, author={Lyubov N. Positselskaya}, journal={arXiv preprint arXiv:0708.2023}, year={2007}, archivePrefix={arXiv}, eprint={0708.2023}, primaryClass={math.OC cs.GT math.PR} }
positselskaya2007nonantagonistic
arxiv-974
0708.2026
Derivative of BICM Mutual Information
<|reference_start|>Derivative of BICM Mutual Information: In this letter we determine the derivative of the mutual information corresponding to bit-interleaved coded modulation systems. The derivative follows as a linear combination of minimum-mean-squared error functions of coded modulation sets. The result finds applications to the analysis of communications systems in the wideband regime and to the design of power allocation over parallel channels.<|reference_end|>
arxiv
@article{fabregas2007derivative, title={Derivative of BICM Mutual Information}, author={Albert Guillen I Fabregas and Alfonso Martinez}, journal={arXiv preprint arXiv:0708.2026}, year={2007}, archivePrefix={arXiv}, eprint={0708.2026}, primaryClass={cs.IT math.IT} }
fabregas2007derivative
arxiv-975
0708.2076
Repairing Inconsistent XML Write-Access Control Policies
<|reference_start|>Repairing Inconsistent XML Write-Access Control Policies: XML access control policies involving updates may contain security flaws, here called inconsistencies, in which a forbidden operation may be simulated by performing a sequence of allowed operations. This paper investigates the problem of deciding whether a policy is consistent, and if not, how its inconsistencies can be repaired. We consider policies expressed in terms of annotated DTDs defining which operations are allowed or denied for the XML trees that are instances of the DTD. We show that consistency is decidable in PTIME for such policies and that consistent partial policies can be extended to unique "least-privilege" consistent total policies. We also consider repair problems based on deleting privileges to restore consistency, show that finding minimal repairs is NP-complete, and give heuristics for finding repairs.<|reference_end|>
arxiv
@article{bravo2007repairing, title={Repairing Inconsistent XML Write-Access Control Policies}, author={Loreto Bravo, James Cheney and Irini Fundulaki}, journal={arXiv preprint arXiv:0708.2076}, year={2007}, archivePrefix={arXiv}, eprint={0708.2076}, primaryClass={cs.DB} }
bravo2007repairing
arxiv-976
0708.2078
Obstructions to Genericity in Study of Parametric Problems in Control Theory
<|reference_start|>Obstructions to Genericity in Study of Parametric Problems in Control Theory: We investigate systems of equations involving parameters, from the point of view of both control theory and computer algebra. The equations might involve linear operators such as partial (q-)differentiation, (q-)shift, (q-)difference as well as more complicated ones, which act trivially on the parameters. Such a system can be identified algebraically with a certain left module over a non-commutative algebra, where the operators commute with the parameters. We develop, implement and use in practice the algorithm for revealing all the expressions in the parameters for which e.g. homological properties of a system differ from the generic properties. We use Groebner bases and Groebner basics in rings of solvable type as main tools. In particular, we demonstrate an optimized algorithm for computing the left inverse of a matrix over a ring of solvable type. We illustrate the article with interesting examples. In particular, we provide a complete solution to the "two pendula, mounted on a cart" problem from the classical book of Polderman and Willems, including the case where the friction at the joints is essential. To the best of our knowledge, the latter example has not been solved before in a complete way.<|reference_end|>
arxiv
@article{levandovskyy2007obstructions, title={Obstructions to Genericity in Study of Parametric Problems in Control Theory}, author={Viktor Levandovskyy and Eva Zerz}, journal={arXiv preprint arXiv:0708.2078}, year={2007}, archivePrefix={arXiv}, eprint={0708.2078}, primaryClass={math.OC cs.SC math.RA} }
levandovskyy2007obstructions
arxiv-977
0708.2084
Empirical entropy in context
<|reference_start|>Empirical entropy in context: We trace the history of empirical entropy, touching briefly on its relation to Markov processes, normal numbers, Shannon entropy, the Chomsky hierarchy, Kolmogorov complexity, Ziv-Lempel compression, de Bruijn sequences and stochastic complexity.<|reference_end|>
arxiv
@article{gagie2007empirical, title={Empirical entropy in context}, author={Travis Gagie}, journal={arXiv preprint arXiv:0708.2084}, year={2007}, archivePrefix={arXiv}, eprint={0708.2084}, primaryClass={cs.IT math.IT} }
gagie2007empirical
arxiv-978
0708.2105
Attribute Estimation and Testing Quasi-Symmetry
<|reference_start|>Attribute Estimation and Testing Quasi-Symmetry: A Boolean function is symmetric if it is invariant under all permutations of its arguments; it is quasi-symmetric if it is symmetric with respect to the arguments on which it actually depends. We present a test that accepts every quasi-symmetric function and, except with an error probability at most delta>0, rejects every function that differs from every quasi-symmetric function on at least a fraction epsilon>0 of the inputs. For a function of n arguments, the test probes the function at O((n/epsilon)\log(n/delta)) inputs. Our quasi-symmetry test acquires information concerning the arguments on which the function actually depends. To do this, it employs a generalization of the property testing paradigm that we call attribute estimation. Like property testing, attribute estimation uses random sampling to obtain results that have only "one-sided" errors and that are close to accurate with high probability.<|reference_end|>
arxiv
@article{majewski2007attribute, title={Attribute Estimation and Testing Quasi-Symmetry}, author={Krzysztof Majewski and Nicholas Pippenger}, journal={arXiv preprint arXiv:0708.2105}, year={2007}, archivePrefix={arXiv}, eprint={0708.2105}, primaryClass={cs.CC} }
majewski2007attribute
arxiv-979
0708.2173
Provenance as Dependency Analysis
<|reference_start|>Provenance as Dependency Analysis: Provenance is information recording the source, derivation, or history of some information. Provenance tracking has been studied in a variety of settings; however, although many design points have been explored, the mathematical or semantic foundations of data provenance have received comparatively little attention. In this paper, we argue that dependency analysis techniques familiar from program analysis and program slicing provide a formal foundation for forms of provenance that are intended to show how (part of) the output of a query depends on (parts of) its input. We introduce a semantic characterization of such dependency provenance, show that this form of provenance is not computable, and provide dynamic and static approximation techniques.<|reference_end|>
arxiv
@article{cheney2007provenance, title={Provenance as Dependency Analysis}, author={James Cheney, Amal Ahmed, and Umut Acar}, journal={arXiv preprint arXiv:0708.2173}, year={2007}, archivePrefix={arXiv}, eprint={0708.2173}, primaryClass={cs.DB cs.PL} }
cheney2007provenance
arxiv-980
0708.2213
Moderate Growth Time Series for Dynamic Combinatorics Modelisation
<|reference_start|>Moderate Growth Time Series for Dynamic Combinatorics Modelisation: Here, we present a family of time series with a simple growth constraint. This family can be the basis of a model to apply to emerging computation in business and micro-economy, where global functions can be expressed from local rules. We make explicit a double statistic on these series, which allows us to establish a one-to-one correspondence with three other ballot-like structures.<|reference_end|>
arxiv
@article{jaff2007moderate, title={Moderate Growth Time Series for Dynamic Combinatorics Modelisation}, author={Lua\"i Jaff (LIPN), G\'erard H.E. Duchamp (LIPN), Hatem Hadj Kacem, Cyrille Bertelle (LITIS)}, journal={ECELM-2, Tirgu-Mures, Romania (2006)}, year={2007}, archivePrefix={arXiv}, eprint={0708.2213}, primaryClass={cs.SC cs.MA math.CO} }
jaff2007moderate
arxiv-981
0708.2230
Collection analysis for Horn clause programs
<|reference_start|>Collection analysis for Horn clause programs: We consider approximating data structures with collections of the items that they contain. For example, lists, binary trees, tuples, etc., can be approximated by sets or multisets of the items within them. Such approximations can be used to provide partial correctness properties of logic programs. For example, one might wish to specify that whenever the atom $sort(t,s)$ is proved then the two lists $t$ and $s$ contain the same multiset of items (that is, $s$ is a permutation of $t$). If sorting removes duplicates, then one would like to infer that the sets of items underlying $t$ and $s$ are the same. Such results could be useful to have if they can be determined statically and automatically. We present a scheme by which such collection analysis can be structured and automated. Central to this scheme is the use of linear logic as a computational logic underlying the logic of Horn clauses.<|reference_end|>
arxiv
@article{miller2007collection, title={Collection analysis for Horn clause programs}, author={Dale Miller (INRIA Futurs)}, journal={In International ACM SIGPLAN Conference on Principles and Practice of Declarative Programming (2006)}, year={2007}, archivePrefix={arXiv}, eprint={0708.2230}, primaryClass={cs.LO} }
miller2007collection
arxiv-982
0708.2252
Focusing and Polarization in Intuitionistic Logic
<|reference_start|>Focusing and Polarization in Intuitionistic Logic: A focused proof system provides a normal form to cut-free proofs that structures the application of invertible and non-invertible inference rules. The focused proof system of Andreoli for linear logic has been applied to both the proof search and the proof normalization approaches to computation. Various proof systems in the literature exhibit characteristics of focusing to one degree or another. We present a new, focused proof system for intuitionistic logic, called LJF, and show how other proof systems can be mapped into the new system by inserting logical connectives that prematurely stop focusing. We also use LJF to design a focused proof system for classical logic. Our approach to the design and analysis of these systems is based on the completeness of focusing in linear logic and on the notion of polarity that appears in Girard's LC and LU proof systems.<|reference_end|>
arxiv
@article{liang2007focusing, title={Focusing and Polarization in Intuitionistic Logic}, author={Chuck Liang, Dale Miller (INRIA Futurs)}, journal={In Computer Science Logic (2007)}, year={2007}, archivePrefix={arXiv}, eprint={0708.2252}, primaryClass={cs.LO} }
liang2007focusing
arxiv-983
0708.2255
A Language for Generic Programming in the Large
<|reference_start|>A Language for Generic Programming in the Large: Generic programming is an effective methodology for developing reusable software libraries. Many programming languages provide generics and have features for describing interfaces, but none completely support the idioms used in generic programming. To address this need we developed the language G. The central feature of G is the concept, a mechanism for organizing constraints on generics that is inspired by the needs of modern C++ libraries. G provides modular type checking and separate compilation (even of generics). These characteristics support modular software development, especially the smooth integration of independently developed components. In this article we present the rationale for the design of G and demonstrate the expressiveness of G with two case studies: porting the Standard Template Library and the Boost Graph Library from C++ to G. The design of G shares much in common with the concept extension proposed for the next C++ Standard (the authors participated in its design) but there are important differences described in this article.<|reference_end|>
arxiv
@article{siek2007a, title={A Language for Generic Programming in the Large}, author={Jeremy G. Siek and Andrew Lumsdaine}, journal={arXiv preprint arXiv:0708.2255}, year={2007}, archivePrefix={arXiv}, eprint={0708.2255}, primaryClass={cs.PL cs.SE} }
siek2007a
arxiv-984
0708.2266
The study of a new gerrymandering methodology
<|reference_start|>The study of a new gerrymandering methodology: This paper aims to obtain a simple dividing diagram of the congressional districts, where the only constraint is that each district should contain the same population if possible. In order to solve this problem, we introduce three different standards of the "simple" shape. The first standard is that the final shape of the congressional districts should be the simplest figure, and we apply a modified "shortest split line algorithm" in which only the factor of equal population is considered. The second standard is that the gerrymandering should preserve the integrity of the current administrative areas for the convenience of management. Thus we combine the factor of the administrative area with the first standard and generate an improved model, resulting in a new diagram in which the perimeters of the districts run along the boundaries of some current counties. Moreover, the gerrymandering should consider the geographic features; the third standard is introduced to describe this situation. Finally, it can be proved that the difference between the supporting ratio of a certain party in each district and the average supporting ratio of that party in the whole state approximately obeys a chi-square distribution. Consequently, we can obtain an archetypal formula to check whether the gerrymandering we propose is fair.<|reference_end|>
arxiv
@article{kai2007the, title={The study of a new gerrymandering methodology}, author={Pan Kai, Tan Yue and Jiang Sheng}, journal={arXiv preprint arXiv:0708.2266}, year={2007}, archivePrefix={arXiv}, eprint={0708.2266}, primaryClass={cs.CY} }
kai2007the
arxiv-985
0708.2270
Capacity of the Degraded Half-Duplex Relay Channel
<|reference_start|>Capacity of the Degraded Half-Duplex Relay Channel: A discrete memoryless half-duplex relay channel is constructed from a broadcast channel from the source to the relay and destination and a multiple access channel from the source and relay to the destination. When the relay listens, the channel operates in the broadcast mode. The channel switches to the multiple access mode when the relay transmits. If the broadcast component channel is physically degraded, the half-duplex relay channel will also be referred to as physically degraded. The capacity of this degraded half-duplex relay channel is examined. It is shown that the block Markov coding suggested in the seminal paper by Cover and El Gamal can be modified to achieve capacity for the degraded half-duplex relay channel. In the code construction, the listen-transmit schedule of the relay is made to depend on the message to be sent and hence the schedule carries information itself. If the schedule is restricted to be deterministic, it is shown that the capacity can be achieved by a simple management of information flows across the broadcast and multiple access component channels.<|reference_end|>
arxiv
@article{vijayakumaran2007capacity, title={Capacity of the Degraded Half-Duplex Relay Channel}, author={Saravanan Vijayakumaran, Tan F. Wong and Tat M. Lok}, journal={arXiv preprint arXiv:0708.2270}, year={2007}, archivePrefix={arXiv}, eprint={0708.2270}, primaryClass={cs.IT math.IT} }
vijayakumaran2007capacity
arxiv-986
0708.2273
Opportunism in Multiuser Relay Channels: Scheduling, Routing and Spectrum Reuse
<|reference_start|>Opportunism in Multiuser Relay Channels: Scheduling, Routing and Spectrum Reuse: In order to understand the key merits of multiuser diversity techniques in relay-assisted cellular multihop networks, this paper analyzes the spectral efficiency of opportunistic (i.e., channel-aware) scheduling algorithms over a fading multiuser relay channel with $K$ users in the asymptotic regime of large (but finite) number of users. Using tools from extreme-value theory, we characterize the limiting distribution of spectral efficiency focusing on Type I convergence and utilize it in investigating the large system behavior of the multiuser relay channel as a function of the number of users and physical channel signal-to-noise ratios (SNRs). Our analysis results in very accurate formulas in the large (but finite) $K$ regime, provides insights on the potential performance enhancements from multihop routing and spectrum reuse policies in the presence of multiuser diversity gains from opportunistic scheduling and helps to identify the regimes and conditions in which relay-assisted multiuser communication provides a clear advantage over direct multiuser communication.<|reference_end|>
arxiv
@article{oyman2007opportunism, title={Opportunism in Multiuser Relay Channels: Scheduling, Routing and Spectrum Reuse}, author={Ozgur Oyman}, journal={IEEE International Symposium on Information Theory (ISIT), Nice, France, June 2007}, year={2007}, doi={10.1109/ISIT.2007.4557240}, archivePrefix={arXiv}, eprint={0708.2273}, primaryClass={cs.IT math.IT} }
oyman2007opportunism
arxiv-987
0708.2303
Compositional Semantics Grounded in Commonsense Metaphysics
<|reference_start|>Compositional Semantics Grounded in Commonsense Metaphysics: We argue for a compositional semantics grounded in a strongly typed ontology that reflects our commonsense view of the world and the way we talk about it in ordinary language. Assuming the existence of such a structure, we show that the semantics of various natural language phenomena may become nearly trivial.<|reference_end|>
arxiv
@article{saba2007compositional, title={Compositional Semantics Grounded in Commonsense Metaphysics}, author={Walid S. Saba}, journal={arXiv preprint arXiv:0708.2303}, year={2007}, archivePrefix={arXiv}, eprint={0708.2303}, primaryClass={cs.AI cs.CL} }
saba2007compositional
arxiv-988
0708.2309
On Compact Routing for the Internet
<|reference_start|>On Compact Routing for the Internet: While there exist compact routing schemes designed for grids, trees, and Internet-like topologies that offer routing tables of sizes that scale logarithmically with the network size, we demonstrate in this paper that in view of recent results in compact routing research, such logarithmic scaling on Internet-like topologies is fundamentally impossible in the presence of topology dynamics or topology-independent (flat) addressing. We use analytic arguments to show that the number of routing control messages per topology change cannot scale better than linearly on Internet-like topologies. We also employ simulations to confirm that logarithmic routing table size scaling gets broken by topology-independent addressing, a cornerstone of popular locator-identifier split proposals aiming at improving routing scaling in the presence of network topology dynamics or host mobility. These pessimistic findings lead us to the conclusion that a fundamental re-examination of assumptions behind routing models and abstractions is needed in order to find a routing architecture that would be able to scale "indefinitely."<|reference_end|>
arxiv
@article{krioukov2007on, title={On Compact Routing for the Internet}, author={Dmitri Krioukov, kc claffy, Kevin Fall, Arthur Brady}, journal={ACM SIGCOMM Computer Communication Review (CCR), v.37, n.3, p.41-52, 2007}, year={2007}, doi={10.1145/1273445.1273450}, archivePrefix={arXiv}, eprint={0708.2309}, primaryClass={cs.NI} }
krioukov2007on
arxiv-989
0708.2310
Benefiting from Disorder: Source Coding for Unordered Data
<|reference_start|>Benefiting from Disorder: Source Coding for Unordered Data: The order of letters is not always relevant in a communication task. This paper discusses the implications of order irrelevance on source coding, presenting results in several major branches of source coding theory: lossless coding, universal lossless coding, rate-distortion, high-rate quantization, and universal lossy coding. The main conclusions demonstrate that there is a significant rate savings when order is irrelevant. In particular, lossless coding of n letters from a finite alphabet requires Theta(log n) bits and universal lossless coding requires n + o(n) bits for many countable alphabet sources. However, there are no universal schemes that can drive a strong redundancy measure to zero. Results for lossy coding include distribution-free expressions for the rate savings from order irrelevance in various high-rate quantization schemes. Rate-distortion bounds are given, and it is shown that the analogue of the Shannon lower bound is loose at all finite rates.<|reference_end|>
arxiv
@article{varshney2007benefiting, title={Benefiting from Disorder: Source Coding for Unordered Data}, author={Lav R. Varshney and Vivek K. Goyal}, journal={arXiv preprint arXiv:0708.2310}, year={2007}, archivePrefix={arXiv}, eprint={0708.2310}, primaryClass={cs.IT math.IT} }
varshney2007benefiting
arxiv-990
0708.2319
On Semimeasures Predicting Martin-Loef Random Sequences
<|reference_start|>On Semimeasures Predicting Martin-Loef Random Sequences: Solomonoff's central result on induction is that the posterior of a universal semimeasure M converges rapidly and with probability 1 to the true sequence generating posterior mu, if the latter is computable. Hence, M is eligible as a universal sequence predictor in case of unknown mu. Despite some nearby results and proofs in the literature, the stronger result of convergence for all (Martin-Loef) random sequences remained open. Such a convergence result would be particularly interesting and natural, since randomness can be defined in terms of M itself. We show that there are universal semimeasures M which do not converge for all random sequences, i.e. we give a partial negative answer to the open problem. We also provide a positive answer for some non-universal semimeasures. We define the incomputable measure D as a mixture over all computable measures and the enumerable semimeasure W as a mixture over all enumerable nearly-measures. We show that W converges to D and D to mu on all random sequences. The Hellinger distance measuring closeness of two distributions plays a central role.<|reference_end|>
arxiv
@article{hutter2007on, title={On Semimeasures Predicting Martin-Loef Random Sequences}, author={Marcus Hutter and Andrej Muchnik}, journal={Theoretical Computer Science, 382 (2007) 247-261}, year={2007}, archivePrefix={arXiv}, eprint={0708.2319}, primaryClass={cs.IT cs.LG math.IT math.PR} }
hutter2007on
arxiv-991
0708.2336
Unsatisfiable Linear k-CNFs Exist, for every k
<|reference_start|>Unsatisfiable Linear k-CNFs Exist, for every k: We call a CNF formula linear if any two clauses have at most one variable in common. Let Linear k-SAT be the problem of deciding whether a given linear k-CNF formula is satisfiable. Here, a k-CNF formula is a CNF formula in which every clause has size exactly k. It was known that for k >= 3, Linear k-SAT is NP-complete if and only if an unsatisfiable linear k-CNF formula exists, and that they do exist for k >= 4. We prove that unsatisfiable linear k-CNF formulas exist for every k. Let f(k) be the minimum number of clauses in an unsatisfiable linear k-CNF formula. We show that f(k) is Omega(k2^k) and O(4^k*k^4), i.e., minimum size unsatisfiable linear k-CNF formulas are significantly larger than minimum size unsatisfiable k-CNF formulas. Finally, we prove that, surprisingly, linear k-CNF formulas do not allow for a larger fraction of clauses to be satisfied than general k-CNF formulas.<|reference_end|>
arxiv
@article{scheder2007unsatisfiable, title={Unsatisfiable Linear k-CNFs Exist, for every k}, author={Dominik Scheder}, journal={arXiv preprint arXiv:0708.2336}, year={2007}, archivePrefix={arXiv}, eprint={0708.2336}, primaryClass={cs.DM cs.CC cs.LO} }
scheder2007unsatisfiable
arxiv-992
0708.2351
Randomized algorithm for the k-server problem on decomposable spaces
<|reference_start|>Randomized algorithm for the k-server problem on decomposable spaces: We study the randomized k-server problem on metric spaces consisting of widely separated subspaces. We give a method which extends existing algorithms to larger spaces with the growth rate of the competitive quotients being at most O(log k). This method yields o(k)-competitive algorithms solving the randomized k-server problem, for some special underlying metric spaces, e.g. HSTs of "small" height (but unbounded degree). HSTs are important tools for probabilistic approximation of metric spaces.<|reference_end|>
arxiv
@article{nagy-györgy2007randomized, title={Randomized algorithm for the k-server problem on decomposable spaces}, author={Judit Nagy-Gy\"orgy}, journal={arXiv preprint arXiv:0708.2351}, year={2007}, archivePrefix={arXiv}, eprint={0708.2351}, primaryClass={cs.DS cs.DM} }
nagy-györgy2007randomized
arxiv-993
0708.2353
Continuous and randomized defensive forecasting: unified view
<|reference_start|>Continuous and randomized defensive forecasting: unified view: Defensive forecasting is a method of transforming laws of probability (stated in game-theoretic terms as strategies for Sceptic) into forecasting algorithms. There are two known varieties of defensive forecasting: "continuous", in which Sceptic's moves are assumed to depend on the forecasts in a (semi)continuous manner and which produces deterministic forecasts, and "randomized", in which the dependence of Sceptic's moves on the forecasts is arbitrary and Forecaster's moves are allowed to be randomized. This note shows that the randomized variety can be obtained from the continuous variety by smearing Sceptic's moves to make them continuous.<|reference_end|>
arxiv
@article{vovk2007continuous, title={Continuous and randomized defensive forecasting: unified view}, author={Vladimir Vovk}, journal={arXiv preprint arXiv:0708.2353}, year={2007}, archivePrefix={arXiv}, eprint={0708.2353}, primaryClass={cs.LG} }
vovk2007continuous
arxiv-994
0708.2363
On a constructive characterization of a class of trees related to pairs of disjoint matchings
<|reference_start|>On a constructive characterization of a class of trees related to pairs of disjoint matchings: For a graph consider the pairs of disjoint matchings whose union contains as many edges as possible, and define a parameter $\alpha$ which equals the cardinality of the largest matching in those pairs. Also, define $\beta$ to be the cardinality of a maximum matching of the graph. We give a constructive characterization of trees which satisfy the $\alpha=\beta$ equality. The proof of our main theorem is based on a new decomposition algorithm obtained for trees.<|reference_end|>
arxiv
@article{kamalian2007on, title={On a constructive characterization of a class of trees related to pairs of disjoint matchings}, author={R. R. Kamalian, V. V. Mkrtchyan}, journal={arXiv preprint arXiv:0708.2363}, year={2007}, archivePrefix={arXiv}, eprint={0708.2363}, primaryClass={cs.DM} }
kamalian2007on
arxiv-995
0708.2395
Key Agreement and Authentication Schemes Using Non-Commutative Semigroups
<|reference_start|>Key Agreement and Authentication Schemes Using Non-Commutative Semigroups: We give a new two-pass authentication scheme, which is a generalisation of an authentication scheme of Sibert-Dehornoy-Girault based on the Diffie-Hellman conjugacy problem. Compared to the above scheme, for some parameters it is more efficient with respect to multiplications. We sketch a proof that our authentication scheme is secure. We also give new key agreement protocols.<|reference_end|>
arxiv
@article{chowdhury2007key, title={Key Agreement and Authentication Schemes Using Non-Commutative Semigroups}, author={M. M. Chowdhury}, journal={arXiv preprint arXiv:0708.2395}, year={2007}, archivePrefix={arXiv}, eprint={0708.2395}, primaryClass={cs.CR} }
chowdhury2007key
arxiv-996
0708.2397
On the AAGL Protocol
<|reference_start|>On the AAGL Protocol: Recently the AAGL (Anshel-Anshel-Goldfeld-Lemieux) protocol has been proposed, which can be used for RFID tags. We give algorithms for the problem (which we call the MSCSPv) on which the security of the AAGL protocol is based. Hence we give various attacks, for general parameters, on the recently proposed AAGL protocol. One of our attacks is a deterministic algorithm whose space complexity and time complexity are both at least exponential in the worst case. In a better case, using a probabilistic algorithm, the time complexity can be O(|XSS(ui')|^L5 * n^(1+e)) and the space complexity can be O(|XSS(ui')|^L6), where the element ui' is part of a public key, n is the index of the braid group, XSS is a summit-type set and e is a constant in a limit. The above shows that the AAGL protocol is potentially not significantly more secure than key agreement protocols based on the conjugacy problem, such as the AAG (Anshel-Anshel-Goldfeld) protocol, because both protocols can be broken with complexities that do not significantly differ. We think our attacks can be improved.<|reference_end|>
arxiv
@article{chowdhury2007on, title={On the AAGL Protocol}, author={M. M. Chowdhury}, journal={arXiv preprint arXiv:0708.2397}, year={2007}, archivePrefix={arXiv}, eprint={0708.2397}, primaryClass={cs.CR} }
chowdhury2007on
arxiv-997
0708.2414
User Participation in Social Media: Digg Study
<|reference_start|>User Participation in Social Media: Digg Study: The social news aggregator Digg allows users to submit and moderate stories by voting on (digging) them. As is true of most social sites, user participation on Digg is non-uniformly distributed, with a few users contributing a disproportionate fraction of content. We studied user participation on Digg to see whether it is motivated by competition, fueled by user ranking, or by social factors, such as community acceptance. For our study we collected activity data of the top users weekly over the course of a year. We computed the number of stories users submitted, dugg, or commented on weekly. We report a spike in user activity in September 2006, followed by a gradual decline, which seems unaffected by the elimination of user ranking. The spike can be explained by a controversy that broke out at the beginning of September 2006. We believe that the lasting acrimony that this incident created led to a decline in top user participation on Digg.<|reference_end|>
arxiv
@article{lerman2007user, title={User Participation in Social Media: Digg Study}, author={Kristina Lerman}, journal={arXiv preprint arXiv:0708.2414}, year={2007}, doi={10.1109/WI-IATW.2007.68}, archivePrefix={arXiv}, eprint={0708.2414}, primaryClass={cs.CY} }
lerman2007user
arxiv-998
0708.2432
A structure from motion inequality
<|reference_start|>A structure from motion inequality: We state an elementary inequality for the structure from motion problem with m cameras and n points. This structure from motion inequality relates the space dimension, the camera parameter dimension, the number of cameras, the number of points, and global symmetry properties, and provides a rigorous criterion for when reconstruction is not possible with probability 1. Mathematically, the inequality is based on the Frobenius theorem, which is a geometric incarnation of the fundamental theorem of linear algebra. The paper also provides a general mathematical formalism for the structure from motion problem. It includes the situation where the points can move while the camera takes the pictures.<|reference_end|>
arxiv
@article{knill2007a, title={A structure from motion inequality}, author={Oliver Knill and Jose Ramirez-Herran}, journal={arXiv preprint arXiv:0708.2432}, year={2007}, archivePrefix={arXiv}, eprint={0708.2432}, primaryClass={cs.CV cs.AI} }
knill2007a
arxiv-999
0708.2438
On Ullman's theorem in computer vision
<|reference_start|>On Ullman's theorem in computer vision: Both in the plane and in space, we invert the nonlinear Ullman transformation for 3 points and 3 orthographic cameras. While Ullman's theorem assures a unique reconstruction modulo a reflection for 3 cameras and 4 points, we find a locally unique reconstruction for 3 cameras and 3 points. Explicit reconstruction formulas allow one to decide whether picture data of three cameras seeing three points can be realized as a point-camera configuration.<|reference_end|>
arxiv
@article{knill2007on, title={On Ullman's theorem in computer vision}, author={Oliver Knill and Jose Ramirez-Herran}, journal={arXiv preprint arXiv:0708.2438}, year={2007}, archivePrefix={arXiv}, eprint={0708.2438}, primaryClass={cs.CV cs.AI} }
knill2007on
arxiv-1000
0708.2442
Space and camera path reconstruction for omni-directional vision
<|reference_start|>Space and camera path reconstruction for omni-directional vision: In this paper, we address the inverse problem of reconstructing a scene as well as the camera motion from the image sequence taken by an omni-directional camera. Our structure from motion results give sharp conditions under which the reconstruction is unique. For example, if there are three points in general position and three omni-directional cameras in general position, a unique reconstruction is possible up to a similarity. We then look at the reconstruction problem with m cameras and n points, where n and m can be large and the over-determined system is solved by least square methods. The reconstruction is robust and generalizes to the case of a dynamic environment where landmarks can move during the movie capture. Possible applications of the result are computer assisted scene reconstruction, 3D scanning, autonomous robot navigation, medical tomography and city reconstructions.<|reference_end|>
arxiv
@article{knill2007space, title={Space and camera path reconstruction for omni-directional vision}, author={Oliver Knill and Jose Ramirez-Herran}, journal={arXiv preprint arXiv:0708.2442}, year={2007}, archivePrefix={arXiv}, eprint={0708.2442}, primaryClass={cs.CV cs.AI} }
knill2007space