corpus_id: string (length 7-12)
paper_id: string (length 9-16)
title: string (length 1-261)
abstract: string (length 70-4.02k)
source: string (1 distinct value)
bibtex: string (length 208-20.9k)
citation_key: string (length 6-100)
arxiv-673301
cs/0509029
Quickest detection of a minimum of disorder times
<|reference_start|>Quickest detection of a minimum of disorder times: A multi-source quickest detection problem is considered. Assume there are two independent Poisson processes $X^{1}$ and $X^{2}$ with disorder times $\theta_{1}$ and $\theta_{2}$, respectively; that is, the intensities of $X^1$ and $X^2$ change at random unobservable times $\theta_1$ and $\theta_2$, respectively. $\theta_1$ and $\theta_2$ are independent of each other and are exponentially distributed. Define $\theta \triangleq \theta_1 \wedge \theta_2=\min\{\theta_{1},\theta_{2}\}$. For any stopping time $\tau$ that is measurable with respect to the filtration generated by the observations, define a penalty function of the form \[ R_{\tau}=\mathbb{P}(\tau<\theta)+c \mathbb{E}[(\tau-\theta)^{+}], \] where $c>0$ and $(\tau-\theta)^{+}$ is the positive part of $\tau-\theta$. It is of interest to find a stopping time $\tau$ that minimizes the above performance index. Since both observations $X^{1}$ and $X^{2}$ reveal information about the disorder time $\theta$, even this simple problem is more involved than solving the disorder problems for $X^{1}$ and $X^{2}$ separately. This problem is formulated in terms of a three-dimensional sufficient statistic, and the corresponding optimal stopping problem is examined. A two-dimensional optimal stopping problem, whose optimal stopping time turns out to coincide with the optimal stopping time of the original problem for some range of parameters, is also solved. The value function of this problem serves as a tight upper bound for the original problem's value function. The two solutions are characterized by iterating suitable functional operators.<|reference_end|>
arxiv
@article{bayraktar2005quickest, title={Quickest detection of a minimum of disorder times}, author={Erhan Bayraktar and H. Vincent Poor}, journal={arXiv preprint arXiv:cs/0509029}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509029}, primaryClass={cs.CE cs.IT math.IT} }
bayraktar2005quickest
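As a quick sanity baseline for the penalty function above (our addition, not from the paper), consider stopping at a deterministic time $t$, ignoring the observations entirely. Since $\theta_1$ and $\theta_2$ are independent exponentials with rates $\lambda_1$ and $\lambda_2$, the minimum $\theta$ is exponential with rate $\lambda = \lambda_1 + \lambda_2$, so

\[ R_t = e^{-\lambda t} + c\Big(t - \frac{1-e^{-\lambda t}}{\lambda}\Big), \qquad \frac{dR_t}{dt} = c - (\lambda + c)\,e^{-\lambda t}, \]

which is minimized at $t^* = \frac{1}{\lambda}\ln\frac{\lambda + c}{c}$. Any rule that uses the observations, in particular the optimal rule the paper characterizes via the three-dimensional sufficient statistic, can only improve on $R_{t^*}$.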
arxiv-673302
cs/0509030
Multi-Proxy Multi-Signcryption Scheme from Pairings
<|reference_start|>Multi-Proxy Multi-Signcryption Scheme from Pairings: The first multi-proxy multi-signcryption scheme from pairings, which efficiently combines a multi-proxy multi-signature scheme with signcryption, is proposed. Its security is analyzed in detail. In our scheme, a proxy signcrypter group can be authorized as a proxy agent by the cooperation of all members in the original signcrypter group. The proxy signcryptions can then be generated by the cooperation of all the signcrypters in the authorized proxy signcrypter group on behalf of the original signcrypter group. The correctness and the security of this scheme are proved.<|reference_end|>
arxiv
@article{jun-bao2005multi-proxy, title={Multi-Proxy Multi-Signcryption Scheme from Pairings}, author={Liu Jun-Bao and Xiao Guo-Zhen}, journal={arXiv preprint arXiv:cs/0509030}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509030}, primaryClass={cs.CR} }
jun-bao2005multi-proxy
arxiv-673303
cs/0509031
On the Worst-case Performance of the Sum-of-Squares Algorithm for Bin Packing
<|reference_start|>On the Worst-case Performance of the Sum-of-Squares Algorithm for Bin Packing: The Sum of Squares algorithm for bin packing was defined in [2] and studied in great detail in [1], where it was proved that its worst-case performance ratio is at most 3. In this note, we improve the asymptotic worst-case bound to 2.7777...<|reference_end|>
arxiv
@article{csirik2005on, title={On the Worst-case Performance of the Sum-of-Squares Algorithm for Bin Packing}, author={Janos Csirik and David S. Johnson and Claire Kenyon}, journal={arXiv preprint arXiv:cs/0509031}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509031}, primaryClass={cs.DS} }
csirik2005on
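To make the Sum-of-Squares rule concrete, here is a small illustrative Python sketch (names and the smoke test are ours, not the authors' code): each item goes wherever it minimizes the sum over gaps g of N(g)^2, where N(g) counts open bins with remaining capacity exactly g.

from collections import Counter

def ss_pack(items, B):
    """Pack integer item sizes into bins of capacity B. SS rule: put each
    item where it minimizes sum_g N(g)^2, with N(g) = number of open bins
    whose remaining gap is exactly g (full bins drop out of the sum)."""
    gaps = Counter()              # N(g) for gaps 1..B-1
    bins = []                     # remaining capacity per opened bin

    def score(old_gap, s):
        c = Counter(gaps)
        if old_gap < B:           # old_gap == B encodes "open a new bin"
            c[old_gap] -= 1
        if old_gap - s > 0:
            c[old_gap - s] += 1
        return sum(n * n for n in c.values())

    for s in items:
        options = [g for g, n in gaps.items() if n > 0 and g >= s] + [B]
        best = min(options, key=lambda g: score(g, s))
        if best == B:
            bins.append(B - s)
        else:
            gaps[best] -= 1
            bins[bins.index(best)] = best - s
        if best - s > 0:
            gaps[best - s] += 1

    return len(bins)

print(ss_pack([4, 3, 3, 2, 2, 2], B=6))   # -> 3 bins (total size 16, B = 6)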
arxiv-673304
cs/0509032
A Simple Model to Generate Hard Satisfiable Instances
<|reference_start|>A Simple Model to Generate Hard Satisfiable Instances: In this paper, we try to further demonstrate that the models of random CSP instances proposed by [Xu and Li, 2000; 2003] are of theoretical and practical interest. Indeed, these models, called RB and RD, present several nice features. First, it is quite easy to generate random instances of any arity since no particular structure has to be integrated, or property enforced, in such instances. Then, the existence of an asymptotic phase transition can be guaranteed while applying a limited restriction on domain size and on constraint tightness. In that case, a threshold point can be precisely located and all instances are guaranteed to be hard at the threshold, i.e., to have an exponential tree-resolution complexity. Next, a formal analysis shows that it is possible to generate forced satisfiable instances whose hardness is similar to that of unforced satisfiable ones. This analysis is supported by representative results taken from the intensive experimentation that we have carried out, using complete and incomplete search methods.<|reference_end|>
arxiv
@article{xu2005a, title={A Simple Model to Generate Hard Satisfiable Instances}, author={Ke Xu and Frederic Boussemart and Fred Hemery and Christophe Lecoutre}, journal={arXiv preprint arXiv:cs/0509032}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509032}, primaryClass={cs.AI cond-mat.stat-mech cs.CC} }
xu2005a
arxiv-673305
cs/0509033
K-Histograms: An Efficient Clustering Algorithm for Categorical Dataset
<|reference_start|>K-Histograms: An Efficient Clustering Algorithm for Categorical Dataset: Clustering categorical data is an integral part of data mining and has attracted much attention recently. In this paper, we present k-histograms, a new efficient algorithm for clustering categorical data. The k-histograms algorithm extends the k-means algorithm to the categorical domain by replacing the means of clusters with histograms, and dynamically updates the histograms during the clustering process. Experimental results on real datasets show that the k-histograms algorithm produces better clustering results than the k-modes algorithm, the work most closely related to ours.<|reference_end|>
arxiv
@article{he2005k-histograms:, title={K-Histograms: An Efficient Clustering Algorithm for Categorical Dataset}, author={Zengyou He and Xiaofei Xu and Shengchun Deng and Bin Dong}, journal={arXiv preprint arXiv:cs/0509033}, year={2005}, number={Tr-2003-08}, archivePrefix={arXiv}, eprint={cs/0509033}, primaryClass={cs.AI} }
he2005k-histograms:
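A minimal sketch of the k-histograms idea as we read the abstract (cluster representatives are per-attribute frequency histograms; the frequency-based dissimilarity and all names are our assumptions, not the authors' implementation):

import random
from collections import Counter

def dissim(hists, row, size):
    """Sum over attributes of 1 - relative frequency of the row's value."""
    return sum(1.0 - hists[j][v] / size for j, v in enumerate(row))

def k_histograms(data, k, iters=10, seed=0):
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in data]        # random initial partition
    m = len(data[0])
    for _ in range(iters):
        hists = [[Counter() for _ in range(m)] for _ in range(k)]
        sizes = [0] * k
        for row, c in zip(data, labels):             # rebuild cluster histograms
            sizes[c] += 1
            for j, v in enumerate(row):
                hists[c][j][v] += 1
        new = [min(range(k), key=lambda c: dissim(hists[c], row, max(sizes[c], 1)))
               for row in data]
        if new == labels:                            # converged
            break
        labels = new
    return labels

toy = [("red", "s"), ("red", "m"), ("blue", "l"), ("blue", "m")]
print(k_histograms(toy, k=2))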
arxiv-673306
cs/0509034
N-free extensions of posets. Note on a theorem of P.A. Grillet
<|reference_start|>N-free extensions of posets. Note on a theorem of P.A. Grillet: Let $S_{N}(P)$ be the poset obtained by adding a dummy vertex on each diagonal edge of the $N$'s of a finite poset $P$. We show that $S_{N}(S_{N}(P))$ is $N$-free. It follows that this poset is the smallest $N$-free barycentric subdivision of the diagram of $P$, the poset whose existence was proved by P.A. Grillet. This is also the poset obtained by the algorithm starting with $P_0:=P$ and consisting at step $m$ of adding a dummy vertex on a diagonal edge of some $N$ in $P_m$, which proves that the result of this algorithm does not depend upon the particular choice of the diagonal edge chosen at each step. These results are linked to the drawing of posets.<|reference_end|>
arxiv
@article{pouzet2005n-free, title={N-free extensions of posets. Note on a theorem of P.A. Grillet}, author={Maurice Pouzet (ICJ) and Nejib Zaguia (SITE)}, journal={arXiv preprint arXiv:cs/0509034}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509034}, primaryClass={cs.DM} }
pouzet2005n-free
arxiv-673307
cs/0509035
Cryptanalysis of an MPEG-Video Encryption Scheme Based on Secret Huffman Tables
<|reference_start|>Cryptanalysis of an MPEG-Video Encryption Scheme Based on Secret Huffman Tables: This paper studies the security of a recently proposed MPEG-video encryption scheme based on secret Huffman tables. Our cryptanalysis shows that: 1) the key space of the encryption scheme is not sufficiently large against the divide-and-conquer (DAC) attack and the known-plaintext attack; 2) it is possible to decrypt a cipher-video with a partially-known key, thus dramatically reducing the complexity of the DAC brute-force attack in some cases; 3) its security against the chosen-plaintext attack is very weak. Some experimental results are included to support the cryptanalytic results, with a brief discussion of how to improve this MPEG-video encryption scheme.<|reference_end|>
arxiv
@article{li2005cryptanalysis, title={Cryptanalysis of an MPEG-Video Encryption Scheme Based on Secret Huffman Tables}, author={Shujun Li and Guanrong Chen and Albert Cheung and Kwok-Tung Lo}, journal={Advances in Image and Video Technology - Third Pacific Rim Symposium, PSIVT 2009, Tokyo, Japan, January 13-16, 2009. Proceedings, Lecture Notes in Computer Science, vol. 5414, pp. 898-909, 2009}, year={2005}, doi={10.1007/978-3-540-92957-4_78}, archivePrefix={arXiv}, eprint={cs/0509035}, primaryClass={cs.MM cs.CR} }
li2005cryptanalysis
arxiv-673308
cs/0509036
Security Problems with Improper Implementations of Improved FEA-M
<|reference_start|>Security Problems with Improper Implementations of Improved FEA-M: This paper reports security problems with improper implementations of an improved version of FEA-M (fast encryption algorithm for multimedia). It is found that an implementation-dependent differential chosen-plaintext attack, or its chosen-ciphertext counterpart, can reveal the secret key of the cryptosystem if the involved (pseudo-)random process can be tampered with (for example, through a public time service). The implementation-dependent differential attack is very efficient in complexity and needs only $O(n^2)$ chosen plaintext or ciphertext bits. In addition, this paper also points out a minor security problem with the selection of the session key. In real implementations of the cryptosystem, these security problems should be carefully avoided, or the cryptosystem has to be further enhanced to work under such weak implementations.<|reference_end|>
arxiv
@article{li2005security, title={Security Problems with Improper Implementations of Improved FEA-M}, author={Shujun Li and Kwok-Tung Lo}, journal={Journal of Systems and Software, vol. 80, no. 5, pp. 791-794, 2007}, year={2005}, doi={10.1016/j.jss.2006.05.002}, archivePrefix={arXiv}, eprint={cs/0509036}, primaryClass={cs.CR cs.MM} }
li2005security
arxiv-673309
cs/0509037
Friends for Free: Self-Organizing Artificial Social Networks for Trust and Cooperation
<|reference_start|>Friends for Free: Self-Organizing Artificial Social Networks for Trust and Cooperation: By harvesting friendship networks from e-mail contacts or instant message "buddy lists", Peer-to-Peer (P2P) applications can improve performance in low trust environments such as the Internet. However, natural social networks are not always suitable, reliable or available. We propose an algorithm (SLACER) that allows peer nodes to create and manage their own friendship networks. We evaluate performance using a canonical test application, requiring cooperation between peers for socially optimal outcomes. The Artificial Social Networks (ASN) produced are connected, cooperative and robust - possessing many of the desirable properties of human friendship networks, such as trust between friends (directly linked peers) and short paths linking everyone via a chain of friends. In addition to new application possibilities, SLACER could supply ASN to P2P applications that currently depend on human social networks, thus transforming them into fully autonomous, self-managing systems.<|reference_end|>
arxiv
@article{hales2005friends, title={Friends for Free: Self-Organizing Artificial Social Networks for Trust and Cooperation}, author={David Hales and Stefano Arteconi}, journal={arXiv preprint arXiv:cs/0509037}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509037}, primaryClass={cs.MA} }
hales2005friends
arxiv-673310
cs/0509038
Algorithms for Max Hamming Exact Satisfiability
<|reference_start|>Algorithms for Max Hamming Exact Satisfiability: Here we study Max Hamming XSAT, i.e., the problem of finding two XSAT models at maximum Hamming distance. By using a recent XSAT solver as an auxiliary function, an O(1.911^n) time algorithm can be constructed, where n is the number of variables. This upper time bound can be further improved to O(1.8348^n) by introducing a new kind of branching, more directly suited for finding models at maximum Hamming distance. The techniques presented here are likely to be of practical use as well as of theoretical value, proving that there are non-trivial algorithms for maximum Hamming distance problems.<|reference_end|>
arxiv
@article{dahllof2005algorithms, title={Algorithms for Max Hamming Exact Satisfiability}, author={Vilhelm Dahllof}, journal={arXiv preprint arXiv:cs/0509038}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509038}, primaryClass={cs.DS} }
dahllof2005algorithms
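For very small formulas, Max Hamming XSAT can be stated executably by brute force; the reference sketch below is ours and exponential in n, with nothing of the paper's O(1.8348^n) branching, but it pins down the objects involved: XSAT models are assignments giving each clause exactly one true literal.

from itertools import product, combinations

def xsat_models(clauses, n):
    """Yield 0/1 tuples under which every clause has exactly one true literal.
    Literals are signed integers: +i means x_i, -i means not x_i."""
    for bits in product((0, 1), repeat=n):
        if all(sum((lit > 0) == bool(bits[abs(lit) - 1]) for lit in cl) == 1
               for cl in clauses):
            yield bits

def max_hamming_xsat(clauses, n):
    models = list(xsat_models(clauses, n))
    return max(((sum(a != b for a, b in zip(u, v)), u, v)
                for u, v in combinations(models, 2)), default=None)

clauses = [(1, 2, 3), (3, -4)]   # exactly one of x1,x2,x3; exactly one of x3, not-x4
print(max_hamming_xsat(clauses, 4))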
arxiv-673311
cs/0509039
Coding for the feedback Gel'fand-Pinsker channel and the feedforward Wyner-Ziv source
<|reference_start|>Coding for the feedback Gel'fand-Pinsker channel and the feedforward Wyner-Ziv source: We consider both channel coding and source coding, with perfect past feedback/feedforward, in the presence of side information. It is first observed that feedback does not increase the capacity of the Gel'fand-Pinsker channel, nor does feedforward improve the achievable rate-distortion performance in the Wyner-Ziv problem. We then focus on the Gaussian case, showing that, as in the absence of side information, feedback/feedforward allows one to efficiently attain the respective performance limits. In particular, we derive schemes via variations on that of Schalkwijk and Kailath. These variants, which are as simple as their origins and require no binning, are shown to achieve, respectively, the capacity of Costa's channel, and the Wyner-Ziv rate distortion function. Finally, we consider the finite-alphabet setting and derive schemes for both the channel and the source coding problems that attain the fundamental limits, using variations on schemes of Ahlswede and Ooi and Wornell, and of Martinian and Wornell, respectively.<|reference_end|>
arxiv
@article{merhav2005coding, title={Coding for the feedback Gel'fand-Pinsker channel and the feedforward Wyner-Ziv source}, author={Neri Merhav and Tsachy Weissman}, journal={arXiv preprint arXiv:cs/0509039}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509039}, primaryClass={cs.IT math.IT} }
merhav2005coding
arxiv-673312
cs/0509040
Authoring case based training by document data extraction
<|reference_start|>Authoring case based training by document data extraction: In this paper, we propose a scalable approach to modeling based upon word processing documents, and we describe the tool Phoenix, which provides the technical infrastructure. For our training environment d3web.Train, we developed a tool to extract case knowledge from existing documents, usually dismissal records, extending Phoenix to d3web.CaseImporter. Independent authors used this tool to develop training systems, observing a significant decrease in settling-in time and in the time necessary for developing a case.<|reference_end|>
arxiv
@article{betz2005authoring, title={Authoring case based training by document data extraction}, author={Christian Betz and Alexander Hoernlein and Frank Puppe}, journal={arXiv preprint arXiv:cs/0509040}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509040}, primaryClass={cs.AI cs.IR} }
betz2005authoring
arxiv-673313
cs/0509041
Efficient Reconciliation of Correlated Continuous Random Variables using LDPC Codes
<|reference_start|>Efficient Reconciliation of Correlated Continuous Random Variables using LDPC Codes: This paper investigates an efficient and practical information reconciliation method in the case where two parties have access to correlated continuous random variables. We show that reconciliation is a special case of channel coding and that existing coded modulation techniques can be adapted for reconciliation. We describe an explicit reconciliation method based on LDPC codes in the case of correlated Gaussian variables. We believe that the proposed method can improve the efficiency of quantum key distribution protocols based on continuous-spectrum quantum states.<|reference_end|>
arxiv
@article{bloch2005efficient, title={Efficient Reconciliation of Correlated Continuous Random Variables using LDPC Codes}, author={Matthieu Bloch and Andrew Thangaraj and Steven W. McLaughlin}, journal={arXiv preprint arXiv:cs/0509041}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509041}, primaryClass={cs.IT math.IT} }
bloch2005efficient
arxiv-673314
cs/0509042
Computing over the Reals: Foundations for Scientific Computing
<|reference_start|>Computing over the Reals: Foundations for Scientific Computing: We give a detailed treatment of the ``bit-model'' of computability and complexity of real functions and subsets of R^n, and argue that this is a good way to formalize many problems of scientific computation. In the introduction we also discuss the alternative Blum-Shub-Smale model. In the final section we discuss the issue of whether physical systems could defeat the Church-Turing Thesis.<|reference_end|>
arxiv
@article{braverman2005computing, title={Computing over the Reals: Foundations for Scientific Computing}, author={Mark Braverman and Stephen Cook}, journal={arXiv preprint arXiv:cs/0509042}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509042}, primaryClass={cs.CC cs.LO} }
braverman2005computing
arxiv-673315
cs/0509043
Optimal Power Control for Multiuser CDMA Channels
<|reference_start|>Optimal Power Control for Multiuser CDMA Channels: In this paper, we define the power region as the set of power allocations for K users such that everybody meets a minimum signal-to-interference ratio (SIR). The SIR is modeled in a multiuser CDMA system with fixed linear receiver and signature sequences. We show that the power region is convex in linear and logarithmic scale. It furthermore has a componentwise minimal element. Power constraints are included by the intersection with the set of all viable power adjustments. In this framework, we aim at minimizing the total expended power by minimizing a componentwise monotone functional. If the feasible power region is nonempty, the minimum is attained. Otherwise, as a solution to balance conflicting interests, we suggest the projection of the minimum point in the power region onto the set of viable power settings. Finally, with an appropriate utility function, the problem of minimizing the total expended power can be seen as finding the Nash bargaining solution, which sheds light on power assignment from a game theoretic point of view. Convexity and componentwise monotonicity are essential prerequisites for this result.<|reference_end|>
arxiv
@article{feiten2005optimal, title={Optimal Power Control for Multiuser CDMA Channels}, author={Anke Feiten and Rudolf Mathar}, journal={arXiv preprint arXiv:cs/0509043}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509043}, primaryClass={cs.IT math.IT} }
feiten2005optimal
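The componentwise minimal element of the power region mentioned above is, in the standard interference-function framework, the fixed point of the classical power-control iteration; the following sketch (our illustration with invented gains and targets, not the paper's method) finds it numerically when the targets are feasible.

import numpy as np

def min_power(g, h, noise, gamma, iters=500):
    """Iterate p_k <- gamma_k * I_k(p) / g_k, where I_k is interference plus
    noise; for feasible targets this converges to the componentwise minimal
    power vector meeting every SIR constraint (Yates-style standard iteration)."""
    p = np.ones(len(g))
    for _ in range(iters):
        p = gamma * (h @ p + noise) / g      # h has zero diagonal (no self-interference)
    return p

g = np.array([1.0, 0.8, 0.9])                # direct-link gains (invented)
h = np.array([[0.0, 0.1, 0.2],               # cross gains h[k, j], zero diagonal
              [0.1, 0.0, 0.1],
              [0.2, 0.1, 0.0]])
noise = np.full(3, 0.1)
gamma = np.full(3, 2.0)                      # target SIRs
p = min_power(g, h, noise, gamma)
print(p, g * p / (h @ p + noise))            # achieved SIRs hit the targets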
arxiv-673316
cs/0509044
Accumulate-Repeat-Accumulate Codes: Systematic Codes Achieving the Binary Erasure Channel Capacity with Bounded Complexity
<|reference_start|>Accumulate-Repeat-Accumulate Codes: Systematic Codes Achieving the Binary Erasure Channel Capacity with Bounded Complexity: The paper introduces ensembles of accumulate-repeat-accumulate (ARA) codes which asymptotically achieve capacity on the binary erasure channel (BEC) with {\em bounded complexity} per information bit. It also introduces symmetry properties which play a central role in the construction of capacity-achieving ensembles for the BEC. The results here improve on the tradeoff between performance and complexity provided by the first capacity-achieving ensembles of irregular repeat-accumulate (IRA) codes with bounded complexity per information bit; these IRA ensembles were previously constructed by Pfister, Sason and Urbanke. The superiority of ARA codes with moderate to large block length is exemplified by computer simulations which compare their performance with those of previously reported capacity-achieving ensembles of LDPC and IRA codes. The ARA codes also have the advantage of being systematic.<|reference_end|>
arxiv
@article{pfister2005accumulate-repeat-accumulate, title={Accumulate-Repeat-Accumulate Codes: Systematic Codes Achieving the Binary Erasure Channel Capacity with Bounded Complexity}, author={Henry D. Pfister and Igal Sason}, journal={arXiv preprint arXiv:cs/0509044}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509044}, primaryClass={cs.IT math.IT} }
pfister2005accumulate-repeat-accumulate
arxiv-673317
cs/0509045
On Hats and other Covers
<|reference_start|>On Hats and other Covers: We study a game puzzle that has enjoyed recent popularity among mathematicians, computer scientists, coding theorists and even the mass press. In the game, $n$ players are fitted with randomly assigned colored hats. Individual players can see their teammates' hat colors, but not their own. Based on this information, and without any further communication, each player must attempt to guess his hat color, or pass. The team wins if there is at least one correct guess, and no incorrect ones. The goal is to devise guessing strategies that maximize the team winning probability. We show that for the case of two hat colors, and for any value of $n$, playing strategies are equivalent to binary covering codes of radius one. This link, in particular with Hamming codes, had been observed for values of $n$ of the form $2^m-1$. We extend the analysis to games with hats of $q$ colors, $q\geq 2$, where 1-coverings are not sufficient to characterize the best strategies. Instead, we introduce the more appropriate notion of a {\em strong covering}, and show efficient constructions of these coverings, which achieve winning probabilities approaching unity. Finally, we briefly discuss results on variants of the problem, including arbitrary input distributions, randomized playing strategies, and symmetric strategies.<|reference_end|>
arxiv
@article{lenstra2005on, title={On Hats and other Covers}, author={Hendrik W. Lenstra and Gadiel Seroussi}, journal={arXiv preprint arXiv:cs/0509045}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509045}, primaryClass={cs.IT math.IT} }
lenstra2005on
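For n = 3 and two colors, the covering-code connection above can be checked exhaustively; this sketch (ours) encodes the classical rule: a player who sees two equal hats guesses the opposite color, otherwise passes.

from itertools import product

def team_wins(coloring):
    """At least one correct guess and no wrong guess wins the round."""
    correct = wrong = 0
    for i in range(3):
        a, b = (coloring[j] for j in range(3) if j != i)
        if a == b:                      # two equal hats seen: guess the opposite
            if 1 - a == coloring[i]:
                correct += 1
            else:
                wrong += 1
        # otherwise the player passes
    return correct >= 1 and wrong == 0

wins = sum(team_wins(c) for c in product((0, 1), repeat=3))
print(wins, "of 8 colorings")           # 6 of 8: winning probability 3/4

The strategy loses exactly on the two monochromatic colorings, i.e., on the codewords of the radius-one covering code {000, 111}; for q > 2 colors the paper's strong coverings replace this construction.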
arxiv-673318
cs/0509046
On the number of t-ary trees with a given path length
<|reference_start|>On the number of t-ary trees with a given path length: We show that the number of $t$-ary trees with path length equal to $p$ is $\exp\bigl(h(t^{-1})\,t\log t\,\frac{p}{\log p}(1+o(1))\bigr)$, where $h(x)=-x\log x-(1-x)\log(1-x)$ is the binary entropy function. Besides its intrinsic combinatorial interest, the question recently arose in the context of information theory, where the number of $t$-ary trees with path length $p$ estimates the number of universal types, or, equivalently, the number of different possible Lempel-Ziv'78 dictionaries for sequences of length $p$ over an alphabet of size $t$.<|reference_end|>
arxiv
@article{seroussi2005on, title={On the number of t-ary trees with a given path length}, author={Gadiel Seroussi}, journal={Algorithmica, Vol. 46, No. 3, pp. 557--565, 2006}, year={2005}, doi={10.1007/s00453-006-0122-8}, archivePrefix={arXiv}, eprint={cs/0509046}, primaryClass={cs.DM cs.IT math.IT} }
seroussi2005on
arxiv-673319
cs/0509047
Secure multiplex coding to attain the channel capacity in wiretap channels
<|reference_start|>Secure multiplex coding to attain the channel capacity in wiretap channels: It is known that a message can be transmitted safely against any wiretapper via a noisy channel without a secret key if the coding rate is less than the so-called secrecy capacity $C_S$, which is usually smaller than the channel capacity $C$. In order to remove the loss $C - C_S$, we propose a multiplex coding scheme with multiple independent messages. In this paper, it is shown that the proposed multiplex coding scheme can attain the channel capacity as the total rate of the multiple messages, with perfect secrecy for each message. The coding theorem is proved by extending Hayashi's proof, in which the coding of the channel resolvability is applied to wiretap channels.<|reference_end|>
arxiv
@article{kobayashi2005secure, title={Secure multiplex coding to attain the channel capacity in wiretap channels}, author={Daisuke Kobayashi and Hirosuke Yamamoto and Tomohiro Ogawa}, journal={arXiv preprint arXiv:cs/0509047}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509047}, primaryClass={cs.IT cs.CR math.IT} }
kobayashi2005secure
arxiv-673320
cs/0509048
Capacity of Complexity-Constrained Noise-Free CDMA
<|reference_start|>Capacity of Complexity-Constrained Noise-Free CDMA: An interference-limited noise-free CDMA downlink channel operating under a complexity constraint on the receiver is introduced. According to this paradigm, detected bits, obtained by performing hard decisions directly on the channel's matched filter output, must be the same as the transmitted binary inputs. This channel setting, allowing the use of the simplest receiver scheme, seems to be worthless, making reliable communication at any rate impossible. We prove, by adopting notions from statistical mechanics, that in the large-system limit such a complexity-constrained CDMA channel gives rise to a non-trivial Shannon-theoretic capacity, rigorously analyzed and corroborated using finite-size channel simulations.<|reference_end|>
arxiv
@article{shental2005capacity, title={Capacity of Complexity-Constrained Noise-Free CDMA}, author={Ori Shental and Ido Kanter and Anthony J. Weiss}, journal={arXiv preprint arXiv:cs/0509048}, year={2005}, doi={10.1109/LCOMM.2006.1576553}, archivePrefix={arXiv}, eprint={cs/0509048}, primaryClass={cs.IT math.IT} }
shental2005capacity
arxiv-673321
cs/0509049
On the Achievable Information Rates of CDMA Downlink with Trivial Receivers
<|reference_start|>On the Achievable Information Rates of CDMA Downlink with Trivial Receivers: A noisy CDMA downlink channel operating under a strict complexity constraint on the receiver is introduced. According to this constraint, detected bits, obtained by performing hard decisions directly on the channel's matched filter output, must be the same as the transmitted binary inputs. This channel setting, allowing the use of the simplest receiver scheme, seems to be worthless, making reliable communication at any rate impossible. However, recently this communication paradigm was shown to yield valuable information rates in the case of a noiseless channel. This finding calls for the investigation of this attractive complexity-constrained transmission scheme for the more practical noisy channel case. By adopting the statistical mechanics notion of metastable states of the renowned Hopfield model, it is proved that under a bounded noise assumption such a complexity-constrained CDMA channel gives rise to a non-trivial Shannon-theoretic capacity, rigorously analyzed and corroborated using finite-size channel simulations. For unbounded noise the channel's outage capacity is addressed and specifically described for the popular additive white Gaussian noise case.<|reference_end|>
arxiv
@article{shental2005on, title={On the Achievable Information Rates of CDMA Downlink with Trivial Receivers}, author={Ori Shental and Ido Kanter and Anthony J. Weiss}, journal={arXiv preprint arXiv:cs/0509049}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509049}, primaryClass={cs.IT math.IT} }
shental2005on
arxiv-673322
cs/0509050
Effect of door delay on aircraft evacuation time
<|reference_start|>Effect of door delay on aircraft evacuation time: The recent commercial launch of twin-deck Very Large Transport Aircraft (VLTA) such as the Airbus A380 has raised questions concerning the speed at which they may be evacuated. The abnormal height of emergency exits on the upper deck has led to speculation that emotional factors such as fear may lead to door delay, and thus play a significant role in increasing overall evacuation time. Full-scale evacuation tests are financially expensive and potentially hazardous, and systematic studies of the evacuation of VLTA are rare. Here we present a computationally cheap agent-based framework for the general simulation of aircraft evacuation, and apply it to the particular case of the Airbus A380. In particular, we investigate the effect of door delay, and conclude that even a moderate average delay can lead to evacuation times that exceed the maximum for safety certification. The model suggests practical ways to minimise evacuation time, as well as providing a general framework for the simulation of evacuation.<|reference_end|>
arxiv
@article{amos2005effect, title={Effect of door delay on aircraft evacuation time}, author={Martyn Amos and Andrew Wood}, journal={arXiv preprint arXiv:cs/0509050}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509050}, primaryClass={cs.MA} }
amos2005effect
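A toy queueing Monte Carlo (entirely ours, far simpler than the authors' agent-based model, with invented passenger counts, flow rate and delay distribution) illustrates why even moderate door delays matter against the 90-second certification limit:

import random

def evac_time(n_exits=16, pax_per_exit=54, flow=1.0, mean_delay=5.0, rng=random):
    """Evacuation ends when the slowest exit clears its queue: each exit's time
    is a random door delay plus the time to drain its share of passengers."""
    return max(rng.expovariate(1.0 / mean_delay) + pax_per_exit / flow
               for _ in range(n_exits))

def p_exceed(limit=90.0, trials=10_000, **kw):
    return sum(evac_time(**kw) > limit for _ in range(trials)) / trials

random.seed(1)
for d in (2.0, 5.0, 10.0):
    print(f"mean door delay {d:4.1f}s -> P(time > 90s) = {p_exceed(mean_delay=d):.3f}")

Even with the queues themselves draining in under a minute, the tail of the slowest of 16 door delays quickly pushes a non-trivial fraction of trials past the limit as the mean delay grows.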
arxiv-673323
cs/0509052
Club Formation by Rational Sharing : Content, Viability and Community Structure
<|reference_start|>Club Formation by Rational Sharing : Content, Viability and Community Structure: A sharing community prospers when participation and contribution are both high. We suggest that the two, while related decisions every peer makes, should be given separate rational bases. Considered as such, a basic issue is the viability of club formation, which necessitates the modelling of two major sources of heterogeneity, namely, peers and shared content. This viability perspective clearly explains why rational peers contribute (or free-ride when they don't) and how their collective action determines viability as well as the size of the club formed. It also exposes another fundamental limitation to club formation apart from free-riding: the community structure, in terms of the relation between peers' interest (demand) and sharing (supply).<|reference_end|>
arxiv
@article{ng2005club, title={Club Formation by Rational Sharing : Content, Viability and Community Structure}, author={W.-Y. Ng and D.M. Chiu and W.K. Lin}, journal={arXiv preprint arXiv:cs/0509052}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509052}, primaryClass={cs.NI cond-mat.stat-mech physics.soc-ph} }
ng2005club
arxiv-673324
cs/0509053
Underwater Hacker Missile Wars: A Cryptography and Engineering Contest
<|reference_start|>Underwater Hacker Missile Wars: A Cryptography and Engineering Contest: For a recent student conference, the authors developed a day-long design problem and competition suitable for engineering, mathematics and science undergraduates. The competition included a cryptography problem, for which a workshop was run during the conference. This paper describes the competition, focusing on the cryptography problem and the workshop. Notes from the workshop and code for the computer programs are made available via the Internet. The results of a personal self-evaluation (PSE) are described.<|reference_end|>
arxiv
@article{holden2005underwater, title={Underwater Hacker Missile Wars: A Cryptography and Engineering Contest}, author={Joshua Holden and Richard Layton and Laurence Merkle and Tina Hudson}, journal={Cryptologia, 30:69--77, 2006}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509053}, primaryClass={cs.CR cs.CE} }
holden2005underwater
arxiv-673325
cs/0509054
Grid Vertex-Unfolding Orthogonal Polyhedra
<|reference_start|>Grid Vertex-Unfolding Orthogonal Polyhedra: An edge-unfolding of a polyhedron is produced by cutting along edges and flattening the faces to a *net*, a connected planar piece with no overlaps. A *grid unfolding* allows additional cuts along grid edges induced by coordinate planes passing through every vertex. A vertex-unfolding permits faces in the net to be connected at single vertices, not necessarily along edges. We show that any orthogonal polyhedron of genus zero has a grid vertex-unfolding. (There are orthogonal polyhedra that cannot be vertex-unfolded, so some type of "gridding" of the faces is necessary.) For any orthogonal polyhedron P with n vertices, we describe an algorithm that vertex-unfolds P in O(n^2) time. En route to explaining this algorithm, we present a simpler vertex-unfolding algorithm that requires a 3 x 1 refinement of the vertex grid.<|reference_end|>
arxiv
@article{damian2005grid, title={Grid Vertex-Unfolding Orthogonal Polyhedra}, author={Mirela Damian and Robin Flatland and Joseph O'Rourke}, journal={arXiv preprint arXiv:cs/0509054}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509054}, primaryClass={cs.CG cs.DM} }
damian2005grid
arxiv-673326
cs/0509055
Learning Optimal Augmented Bayes Networks
<|reference_start|>Learning Optimal Augmented Bayes Networks: Naive Bayes is a simple Bayesian classifier with strong independence assumptions among the attributes. This classifier, despite its strong independence assumptions, often performs well in practice. It is believed that relaxing the independence assumptions of a naive Bayes classifier may improve the classification accuracy of the resulting structure. While finding an optimal unconstrained Bayesian Network (for almost any reasonable scoring measure) is an NP-hard problem, it is possible to learn in polynomial time optimal networks obeying various structural restrictions. Several authors have examined the possibilities of adding augmenting arcs between attributes of a Naive Bayes classifier. Friedman, Geiger and Goldszmidt define the TAN structure in which the augmenting arcs form a tree on the attributes, and present a polynomial time algorithm that learns an optimal TAN with respect to MDL score. Keogh and Pazzani define Augmented Bayes Networks in which the augmenting arcs form a forest on the attributes (a collection of trees, hence a relaxation of the structural restriction of TAN), and present heuristic search methods for learning good, though not optimal, augmenting arc sets. The authors, however, evaluate the learned structure only in terms of observed misclassification error and not against a scoring metric, such as MDL. In this paper, we present a simple, polynomial time greedy algorithm for learning an optimal Augmented Bayes Network with respect to MDL score.<|reference_end|>
arxiv
@article{hamine2005learning, title={Learning Optimal Augmented Bayes Networks}, author={Vikas Hamine and Paul Helman}, journal={arXiv preprint arXiv:cs/0509055}, year={2005}, number={TR-CS-2004-11}, archivePrefix={arXiv}, eprint={cs/0509055}, primaryClass={cs.LG} }
hamine2005learning
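The following sketch shows the tree-flavored construction this line of work builds on: Chow-Liu style learning of augmenting arcs via a maximum spanning tree over conditional mutual information, with a crude fixed penalty standing in for the MDL adjustment that, in the paper's setting, turns the tree into a forest. It is our illustration, not the authors' algorithm.

import numpy as np
from itertools import combinations

def cond_mutual_info(x, y, cls):
    """Estimate I(X; Y | C) in nats from integer-coded columns."""
    total = 0.0
    for c in np.unique(cls):
        sel = cls == c
        pc = sel.mean()
        xs, ys = x[sel], y[sel]
        for a in np.unique(xs):
            for b in np.unique(ys):
                pxy = np.mean((xs == a) & (ys == b))
                if pxy > 0:
                    px, py = np.mean(xs == a), np.mean(ys == b)
                    total += pc * pxy * np.log(pxy / (px * py))
    return total

def augmenting_arcs(data, cls, penalty=0.0):
    """Kruskal over pairwise CMI weights; with penalty > 0 only sufficiently
    informative arcs survive, yielding a forest instead of a tree."""
    m = data.shape[1]
    parent = list(range(m))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    weights = [(cond_mutual_info(data[:, i], data[:, j], cls) - penalty, i, j)
               for i, j in combinations(range(m), 2)]
    arcs = []
    for w, i, j in sorted(weights, reverse=True):
        if w <= 0:
            break
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            arcs.append((i, j, round(w, 4)))
    return arcs

rng = np.random.default_rng(0)
cls = rng.integers(0, 2, 500)
a0 = rng.integers(0, 2, 500)
a1 = np.where(rng.random(500) < 0.9, a0, 1 - a0)   # strongly coupled to a0
a2 = rng.integers(0, 2, 500)                       # independent noise attribute
print(augmenting_arcs(np.column_stack([a0, a1, a2]), cls, penalty=0.01))
# expected: the coupled pair (0, 1) survives the penalty; the noise attribute does not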
arxiv-673327
cs/0509056
Pairing-based identification schemes
<|reference_start|>Pairing-based identification schemes: We propose four different identification schemes that make use of bilinear pairings, and prove their security under certain computational assumptions. Each of the schemes is more efficient and/or more secure than any known pairing-based identification scheme.<|reference_end|>
arxiv
@article{freeman2005pairing-based, title={Pairing-based identification schemes}, author={David Freeman}, journal={arXiv preprint arXiv:cs/0509056}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509056}, primaryClass={cs.CR} }
freeman2005pairing-based
arxiv-673328
cs/0509057
Language embeddings that preserve staging and safety
<|reference_start|>Language embeddings that preserve staging and safety: We study embeddings of programming languages into one another that preserve what reductions take place at compile-time, i.e., staging. A certain condition -- what we call a `Turing complete kernel' -- is sufficient for a language to be stage-universal in the sense that any language may be embedded in it while preserving staging. A similar line of reasoning yields the notion of safety-preserving embeddings, and a useful characterization of safety-universality. Languages universal with respect to staging and safety are good candidates for realizing domain-specific embedded languages (DSELs) and `active libraries' that provide domain-specific optimizations and safety checks.<|reference_end|>
arxiv
@article{veldhuizen2005language, title={Language embeddings that preserve staging and safety}, author={Todd L. Veldhuizen}, journal={arXiv preprint arXiv:cs/0509057}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509057}, primaryClass={cs.PL} }
veldhuizen2005language
arxiv-673329
cs/0509058
Interactive Unawareness Revisited
<|reference_start|>Interactive Unawareness Revisited: We analyze a model of interactive unawareness introduced by Heifetz, Meier and Schipper (HMS). We consider two axiomatizations for their model, which capture different notions of validity. These axiomatizations allow us to compare the HMS approach to both the standard (S5) epistemic logic and two other approaches to unawareness: that of Fagin and Halpern and that of Modica and Rustichini. We show that the differences between the HMS approach and the others are mainly due to the notion of validity used and the fact that the HMS approach is based on a 3-valued propositional logic.<|reference_end|>
arxiv
@article{halpern2005interactive, title={Interactive Unawareness Revisited}, author={Joseph Y. Halpern and Leandro C. Rego}, journal={arXiv preprint arXiv:cs/0509058}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509058}, primaryClass={cs.AI cs.LO} }
halpern2005interactive
arxiv-673330
cs/0509059
On an authentication scheme based on the Root Problem in the braid group
<|reference_start|>On an authentication scheme based on the Root Problem in the braid group: Lal and Chaturvedi proposed two authentication schemes based on the difficulty of the Root Problem in the braid group. We point out that the first scheme is not really as secure as the Root Problem, and describe an efficient way to crack it. The attack works for any group.<|reference_end|>
arxiv
@article{tsaban2005on, title={On an authentication scheme based on the Root Problem in the braid group}, author={Boaz Tsaban}, journal={arXiv preprint arXiv:cs/0509059}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509059}, primaryClass={cs.CR} }
tsaban2005on
arxiv-673331
cs/0509060
Cluster Computing and the Power of Edge Recognition
<|reference_start|>Cluster Computing and the Power of Edge Recognition: We study the robustness--the invariance under definition changes--of the cluster class CL#P [HHKW05]. This class contains each #P function that is computed by a balanced Turing machine whose accepting paths always form a cluster with respect to some length-respecting total order with efficient adjacency checks. The definition of CL#P is heavily influenced by the defining paper's focus on (global) orders. In contrast, we define a cluster class, CLU#P, to capture what seems to us a more natural model of cluster computing. We prove that the naturalness is costless: CL#P = CLU#P. Then we exploit the more natural, flexible features of CLU#P to prove new robustness results for CL#P and to expand what is known about the closure properties of CL#P. The complexity of recognizing edges--of an ordered collection of computation paths or of a cluster of accepting computation paths--is central to this study. Most particularly, our proofs exploit the power of unique discovery of edges--the ability of nondeterministic functions to, in certain settings, discover on exactly one (in some cases, on at most one) computation path a critical piece of information regarding edges of orderings or clusters.<|reference_end|>
arxiv
@article{hemaspaandra2005cluster, title={Cluster Computing and the Power of Edge Recognition}, author={Lane A. Hemaspaandra and Christopher M. Homan and Sven Kosub}, journal={arXiv preprint arXiv:cs/0509060}, year={2005}, number={URCS-TR-2005-878}, archivePrefix={arXiv}, eprint={cs/0509060}, primaryClass={cs.CC cs.DM} }
hemaspaandra2005cluster
arxiv-673332
cs/0509061
Guarantees for the Success Frequency of an Algorithm for Finding Dodgson-Election Winners
<|reference_start|>Guarantees for the Success Frequency of an Algorithm for Finding Dodgson-Election Winners: In the year 1876 the mathematician Charles Dodgson, who wrote fiction under the now more famous name of Lewis Carroll, devised a beautiful voting system that has long fascinated political scientists. However, determining the winner of a Dodgson election is known to be complete for the $\Theta_2^p$ level of the polynomial hierarchy. This implies that unless P=NP no polynomial-time solution to this problem exists, and unless the polynomial hierarchy collapses to NP the problem is not even in NP. Nonetheless, we prove that when the number of voters is much greater than the number of candidates--although the number of voters may still be polynomial in the number of candidates--a simple greedy algorithm very frequently finds the Dodgson winners in such a way that it ``knows'' that it has found them, and furthermore the algorithm never incorrectly declares a nonwinner to be a winner.<|reference_end|>
arxiv
@article{homan2005guarantees, title={Guarantees for the Success Frequency of an Algorithm for Finding Dodgson-Election Winners}, author={Christopher M. Homan and Lane A. Hemaspaandra}, journal={arXiv preprint arXiv:cs/0509061}, year={2005}, number={URCS-TR-2005-881}, archivePrefix={arXiv}, eprint={cs/0509061}, primaryClass={cs.DS cs.MA} }
homan2005guarantees
arxiv-673333
cs/0509062
Capacity-Achieving Codes with Bounded Graphical Complexity on Noisy Channels
<|reference_start|>Capacity-Achieving Codes with Bounded Graphical Complexity on Noisy Channels: We introduce a new family of concatenated codes with an outer low-density parity-check (LDPC) code and an inner low-density generator matrix (LDGM) code, and prove that these codes can achieve capacity under any memoryless binary-input output-symmetric (MBIOS) channel using maximum-likelihood (ML) decoding with bounded graphical complexity, i.e., the number of edges per information bit in their graphical representation is bounded. In particular, we also show that these codes can achieve capacity on the binary erasure channel (BEC) under belief propagation (BP) decoding with bounded decoding complexity per information bit per iteration for all erasure probabilities in (0, 1). By deriving and analyzing the average weight distribution (AWD) and the corresponding asymptotic growth rate of these codes with a rate-1 inner LDGM code, we also show that these codes achieve the Gilbert-Varshamov bound with asymptotically high probability. This result can be attributed to the presence of the inner rate-1 LDGM code, which is demonstrated to help eliminate high weight codewords in the LDPC code while maintaining a vanishingly small amount of low weight codewords.<|reference_end|>
arxiv
@article{hsu2005capacity-achieving, title={Capacity-Achieving Codes with Bounded Graphical Complexity on Noisy Channels}, author={Chun-Hao Hsu and Achilleas Anastasopoulos}, journal={arXiv preprint arXiv:cs/0509062}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509062}, primaryClass={cs.IT math.IT} }
hsu2005capacity-achieving
arxiv-673334
cs/0509063
Order Independence and Rationalizability
<|reference_start|>Order Independence and Rationalizability: Two natural strategy elimination procedures have been studied for strategic games. The first one involves the notion of (strict, weak, etc.) dominance and the second the notion of rationalizability. In the case of dominance, the criterion of order independence allowed us to clarify which notions are robust and under what circumstances. In the case of rationalizability this criterion has not been considered. In this paper we investigate the problem of order independence for rationalizability by focusing on three naturally entailed reduction relations on games. These reduction relations are distinguished by the adopted reference point for the notion of a better response. Additionally, they are parametrized by the adopted system of beliefs. We show that for one reduction relation the outcome of its (possibly transfinite) iterations does not depend on the order of elimination of the strategies. This result does not hold for the other two reduction relations. However, under a natural assumption the iterations of all three reduction relations yield the same outcome. The obtained order independence results apply to the frameworks considered in Bernheim 84 and Pearce 84. For finite games the iterations of all three reduction relations coincide and the order independence holds for three natural systems of beliefs considered in the literature.<|reference_end|>
arxiv
@article{apt2005order, title={Order Independence and Rationalizability}, author={Krzysztof R. Apt}, journal={arXiv preprint arXiv:cs/0509063}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509063}, primaryClass={cs.GT} }
apt2005order
arxiv-673335
cs/0509064
On joint coding for watermarking and encryption
<|reference_start|>On joint coding for watermarking and encryption: In continuation of earlier works where the problem of joint information embedding and lossless compression (of the composite signal) was studied in the absence \cite{MM03} and in the presence \cite{MM04} of attacks, here we consider the additional ingredient of protecting the secrecy of the watermark against an unauthorized party, which has no access to a secret key shared by the legitimate parties. In other words, we study the problem of joint coding for three objectives: information embedding, compression, and encryption. Our main result is a coding theorem that provides a single-letter characterization of the best achievable tradeoffs among the following parameters: the distortion between the composite signal and the covertext, the distortion in reconstructing the watermark by the legitimate receiver, the compressibility of the composite signal (with and without the key), and the equivocation of the watermark, as well as its reconstructed version, given the composite signal. In the attack-free case, if the key is independent of the covertext, this coding theorem gives rise to a {\it threefold} separation principle which asserts that asymptotically, for long block codes, no optimality is lost by first applying a rate-distortion code to the watermark source, then encrypting the compressed codeword, and finally, embedding it into the covertext using the embedding scheme of \cite{MM03}. In the more general case, however, this separation principle is no longer valid, as the key plays an additional role of side information used by the embedding unit.<|reference_end|>
arxiv
@article{merhav2005on, title={On joint coding for watermarking and encryption}, author={Neri Merhav}, journal={arXiv preprint arXiv:cs/0509064}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509064}, primaryClass={cs.IT cs.CR math.IT} }
merhav2005on
arxiv-673336
cs/0509065
On Deciding Deep Holes of Reed-Solomon Codes
<|reference_start|>On Deciding Deep Holes of Reed-Solomon Codes: For generalized Reed-Solomon codes, it has been proved \cite{GuruswamiVa05} that the problem of determining if a received word is a deep hole is co-NP-complete. The reduction relies on the fact that the evaluation set of the code can be exponential in the length of the code -- a property that practical codes do not usually possess. In this paper, we first present a much simpler proof of the same result. We then consider the problem for standard Reed-Solomon codes, i.e., codes whose evaluation set consists of all the nonzero elements in the field. We reduce the problem of identifying deep holes to deciding whether an absolutely irreducible hypersurface over a finite field contains a rational point whose coordinates are pairwise distinct and nonzero. By applying the Schmidt and Cafure-Matera estimates of rational points on algebraic varieties, we prove that the received vector $(f(\alpha))_{\alpha \in \mathbb{F}_q}$ for the Reed-Solomon code $[q,k]_q$, $k < q^{1/7 - \epsilon}$, cannot be a deep hole whenever $f(x)$ is a polynomial of degree $k+d$ for $1\leq d < q^{3/13 -\epsilon}$.<|reference_end|>
arxiv
@article{cheng2005on, title={On Deciding Deep Holes of Reed-Solomon Codes}, author={Qi Cheng and Elizabeth Murray}, journal={arXiv preprint arXiv:cs/0509065}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509065}, primaryClass={cs.IT math.IT} }
cheng2005on
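For a toy code, the deep-hole notion above can be checked by exhaustive search; this sketch (ours, over a tiny prime field) verifies that a word evaluated from a degree-k polynomial sits at distance n-k from the code, which is the covering radius:

from itertools import product

def rs_distance(received, p, k, evals):
    """Distance from `received` to the RS code {(f(a))_a : deg f < k} over F_p."""
    best = len(evals)
    for coeffs in product(range(p), repeat=k):       # enumerate all p^k codewords
        cw = [sum(c * pow(a, i, p) for i, c in enumerate(coeffs)) % p
              for a in evals]
        best = min(best, sum(u != v for u, v in zip(cw, received)))
    return best

p, k = 5, 2
evals = [1, 2, 3, 4]                 # all nonzero elements of F_5 (standard RS)
n = len(evals)
u = [pow(a, k, p) for a in evals]    # word from f(x) = x^k, a classical deep hole
print(rs_distance(u, p, k, evals), "== covering radius", n - k)

Since a degree-2 polynomial agrees with any degree-<2 polynomial in at most 2 of the 4 evaluation points, the distance is at least n-k = 2, and interpolation through any 2 points shows it is exactly 2.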
arxiv-673337
cs/0509066
A Model-driven Approach for Grid Services Engineering
<|reference_start|>A Model-driven Approach for Grid Services Engineering: As a consequence of the hype around Grid computing, such systems have seldom been designed using formal techniques. The complexity and rapidly growing demand around Grid technologies have favoured the use of classical development techniques, resulting in no guidelines or rules and unstructured engineering processes. This paper advocates a formal approach to Grid applications development in an effort to contribute to the rigorous development of Grid software architectures. This approach addresses cross-platform interoperability and quality of service; the model-driven paradigm is applied to a formal architecture-centric engineering method in order to benefit from the formal semantic description power in addition to model-based transformations. Such a novel combined concept promotes the re-use of design models and eases development in Grid computing by providing an adapted development process and ensuring correctness at each design step.<|reference_end|>
arxiv
@article{manset2005a, title={A Model-driven Approach for Grid Services Engineering}, author={David Manset and Richard McClatchey and Flavio Oquendo and Herve Verjus}, journal={arXiv preprint arXiv:cs/0509066}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509066}, primaryClass={cs.SE cs.DC} }
manset2005a
arxiv-673338
cs/0509067
Decomposing Solution Sets of Polynomial Systems: A New Parallel Monodromy Breakup Algorithm
<|reference_start|>Decomposing Solution Sets of Polynomial Systems: A New Parallel Monodromy Breakup Algorithm: We consider the numerical irreducible decomposition of a positive dimensional solution set of a polynomial system into irreducible factors. Path tracking techniques computing loops around singularities connect points on the same irreducible components. The computation of a linear trace for each factor certifies the decomposition. This factorization method exhibits good practical performance on solution sets of relatively high degrees. Using the same concepts of monodromy and linear trace, we present a new monodromy breakup algorithm. It shows better performance than the old method, which requires the construction of permutations of witness points in order to break up the solution set. In contrast, the new algorithm takes a finer approach, allowing us to avoid tracking unnecessary homotopy paths. As we designed the serial algorithm with distributed computing in mind, an additional advantage is that its parallel version can be easily built. Synchronization issues resulted in a performance loss for the straightforward parallel version of the old algorithm. Our parallel implementation of the new approach bypasses these issues, therefore exhibiting better performance, especially on solution sets of larger degree.<|reference_end|>
arxiv
@article{leykin2005decomposing, title={Decomposing Solution Sets of Polynomial Systems: A New Parallel Monodromy Breakup Algorithm}, author={Anton Leykin and Jan Verschelde}, journal={arXiv preprint arXiv:cs/0509067}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509067}, primaryClass={cs.DC cs.NA math.AG} }
leykin2005decomposing
arxiv-673339
cs/0509068
Channel Uncertainty in Ultra Wideband Communication Systems
<|reference_start|>Channel Uncertainty in Ultra Wideband Communication Systems: Wide band systems operating over multipath channels may spread their power over bandwidth if they use duty cycle transmission. Channel uncertainty limits the achievable data rates of power constrained wide band systems; duty cycle transmission reduces the channel uncertainty because the receiver has to estimate the channel only when transmission takes place. The optimal choice of the fraction of time used for transmission depends on the spectral efficiency of the signal modulation. The general principle is demonstrated by comparing the channel conditions that allow different modulations to achieve the capacity in the limit. Direct sequence spread spectrum and pulse position modulation systems with duty cycle achieve the channel capacity, if the increase of the number of channel paths with the bandwidth is not too rapid. The higher spectral efficiency of the spread spectrum modulation lets it achieve the channel capacity in the limit, in environments where pulse position modulation with non-vanishing symbol time cannot be used because of the large number of channel paths.<|reference_end|>
arxiv
@article{porrat2005channel, title={Channel Uncertainty in Ultra Wideband Communication Systems}, author={Dana Porrat and David N. C. Tse and Serban Nacu}, journal={arXiv preprint arXiv:cs/0509068}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509068}, primaryClass={cs.IT math.IT} }
porrat2005channel
arxiv-673340
cs/0509069
Fast and Compact Regular Expression Matching
<|reference_start|>Fast and Compact Regular Expression Matching: We study 4 problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how to improve the space and/or remove a dependency on the alphabet size for each problem using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way.<|reference_end|>
arxiv
@article{bille2005fast, title={Fast and Compact Regular Expression Matching}, author={Philip Bille and Martin Farach-Colton}, journal={arXiv preprint arXiv:cs/0509069}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509069}, primaryClass={cs.DS} }
bille2005fast
arxiv-673341
cs/0509070
A Maple Package for Computing Groebner Bases for Linear Recurrence Relations
<|reference_start|>A Maple Package for Computing Groebner Bases for Linear Recurrence Relations: A Maple package for computing Groebner bases of linear difference ideals is described. The underlying algorithm is based on Janet and Janet-like monomial divisions associated with finite difference operators. The package can be used, for example, for automatic generation of difference schemes for linear partial differential equations and for reduction of multiloop Feynman integrals. These two possible applications are illustrated by simple examples of the Laplace equation and a one-loop scalar integral of propagator type.<|reference_end|>
arxiv
@article{gerdt2005a, title={A Maple Package for Computing Groebner Bases for Linear Recurrence Relations}, author={Vladimir P. Gerdt and Daniel Robertz}, journal={Nucl.Instrum.Meth. A559 (2006) 215-219}, year={2005}, doi={10.1016/j.nima.2005.11.171}, archivePrefix={arXiv}, eprint={cs/0509070}, primaryClass={cs.SC cs.MS} }
gerdt2005a
arxiv-673342
cs/0509071
CP-nets and Nash equilibria
<|reference_start|>CP-nets and Nash equilibria: We relate here two formalisms that are used for different purposes in reasoning about multi-agent systems. One of them is strategic games, which are used to capture the idea that agents interact with each other while pursuing their own interest. The other is CP-nets, which were introduced to express qualitative and conditional preferences of the users and which aim at facilitating the process of preference elicitation. To relate these two formalisms, we introduce a natural, qualitative extension of the notion of a strategic game. We show then that the optimal outcomes of a CP-net are exactly the Nash equilibria of an appropriately defined strategic game in the above sense. This allows us to use the techniques of game theory to search for optimal outcomes of CP-nets and, vice versa, to use techniques developed for CP-nets to search for Nash equilibria of the considered games.<|reference_end|>
arxiv
@article{apt2005cp-nets, title={CP-nets and Nash equilibria}, author={Krzysztof R. Apt and Francesca Rossi and K. Brent Venable}, journal={arXiv preprint arXiv:cs/0509071}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509071}, primaryClass={cs.GT cs.AI} }
apt2005cp-nets
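A brute-force illustration (ours, under a minimal encoding of our own choosing) of the correspondence claimed above: with one variable per agent and a total conditional ranking of its own values, the outcomes from which no agent can unilaterally improve are exactly the Nash equilibria of the induced qualitative game.

from itertools import product

# Agent i controls variable i with domain doms[i]; prefers[i](ctx) returns
# agent i's ranking (best first) of its own values given the other variables.
def nash_outcomes(doms, prefers):
    eq = []
    for outcome in product(*doms):
        stable = True
        for i, pref in enumerate(prefers):
            ctx = outcome[:i] + outcome[i + 1:]
            if pref(ctx)[0] != outcome[i]:     # a better unilateral move exists
                stable = False
                break
        if stable:
            eq.append(outcome)
    return eq

# Example CP-net: A unconditionally prefers a1; B conditionally prefers to match A.
doms = [("a0", "a1"), ("b0", "b1")]
prefA = lambda ctx: ("a1", "a0")
prefB = lambda ctx: ("b1", "b0") if ctx[0] == "a1" else ("b0", "b1")
print(nash_outcomes(doms, [prefA, prefB]))     # [('a1', 'b1')]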
arxiv-673343
cs/0509072
Folksonomy as a Complex Network
<|reference_start|>Folksonomy as a Complex Network: Folksonomy is an emerging technology that works to classify information over the WWW by tagging bookmarks, photos and other web-based content. It is organized by every user, not limited to the authors of the content or to professional editors. This study surveys folksonomy as a complex network. The result indicates that the network composed of the tags of the folksonomy displays both small-world and scale-free properties. However, the statistics show only a local and static slice of the vast body of folksonomy, which is still evolving.<|reference_end|>
arxiv
@article{shen2005folksonomy, title={Folksonomy as a Complex Network}, author={Kaikai Shen, Lide Wu}, journal={arXiv preprint arXiv:cs/0509072}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509072}, primaryClass={cs.IR cs.DL physics.soc-ph} }
shen2005folksonomy
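A minimal sketch (Python, with networkx) of the kind of measurement the record above reports: build a tag co-occurrence network and compute small-world indicators and the degree sequence. The bookmark data is fabricated; a real study would crawl a folksonomy service.

import itertools
import networkx as nx

bookmarks = [  # each bookmark carries the set of tags users assigned to it
    {"python", "programming", "tutorial"},
    {"python", "web", "programming"},
    {"music", "jazz", "web"},
    {"web", "design", "tutorial"},
]

G = nx.Graph()
for tags in bookmarks:
    for u, v in itertools.combinations(sorted(tags), 2):
        G.add_edge(u, v)  # two tags are linked when they co-occur on a bookmark

print("average clustering:", nx.average_clustering(G))
print("average shortest path:", nx.average_shortest_path_length(G))
print("degree sequence:", sorted((d for _, d in G.degree()), reverse=True))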
arxiv-673344
cs/0509073
Distance-Increasing Maps of All Length by Simple Mapping Algorithms
<|reference_start|>Distance-Increasing Maps of All Length by Simple Mapping Algorithms: Distance-increasing maps from binary vectors to permutations, namely DIMs, are useful for the construction of permutation arrays. While a simple mapping algorithm defining DIMs of even length is known, existing DIMs of odd length are either recursively constructed by merging shorter DIMs or defined by much more complicated mapping algorithms. In this paper, DIMs of all lengths defined by simple mapping algorithms are presented.<|reference_end|>
arxiv
@article{lee2005distance-increasing, title={Distance-Increasing Maps of All Length by Simple Mapping Algorithms}, author={Kwankyu Lee}, journal={arXiv preprint arXiv:cs/0509073}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509073}, primaryClass={cs.IT cs.DM math.IT} }
lee2005distance-increasing
arxiv-673345
cs/0509074
Planar Earthmover is not in $L_1$
<|reference_start|>Planar Earthmover is not in $L_1$: We show that any $L_1$ embedding of the transportation cost (a.k.a. Earthmover) metric on probability measures supported on the grid $\{0,1,...,n\}^2\subseteq \R^2$ incurs distortion $\Omega(\sqrt{\log n})$. We also use Fourier analytic techniques to construct a simple $L_1$ embedding of this space which has distortion $O(\log n)$.<|reference_end|>
arxiv
@article{naor2005planar, title={Planar Earthmover is not in $L_1$}, author={Assaf Naor and Gideon Schechtman}, journal={arXiv preprint arXiv:cs/0509074}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509074}, primaryClass={cs.CG math.FA} }
naor2005planar
arxiv-673346
cs/0509075
On the Capacity of Doubly Correlated MIMO Channels
<|reference_start|>On the Capacity of Doubly Correlated MIMO Channels: In this paper, we analyze the capacity of multiple-input multiple-output (MIMO) Rayleigh-fading channels in the presence of spatial fading correlation at both the transmitter and the receiver, assuming that the channel is unknown at the transmitter and perfectly known at the receiver. We first derive the determinant representation for the exact characteristic function of the capacity, which is then used to determine the trace representations for the mean, variance, skewness, kurtosis, and other higher-order statistics (HOS). These results allow us to exactly evaluate two relevant information-theoretic capacity measures--ergodic capacity and outage capacity--and the HOS of the capacity for such a MIMO channel. The analytical framework presented in the paper is valid for arbitrary numbers of antennas and generalizes the previously known results for independent and identically distributed or one-sided correlated MIMO channels to the case when fading correlation exists on both sides. We verify our analytical results by comparing them with Monte Carlo simulations for a correlation model based on realistic channel measurements as well as a classical exponential correlation model.<|reference_end|>
arxiv
@article{shin2005on, title={On the Capacity of Doubly Correlated MIMO Channels}, author={Hyundong Shin, Moe Z. Win, Jae Hong Lee, Marco Chiani}, journal={arXiv preprint arXiv:cs/0509075}, year={2005}, doi={10.1109/TWC.2006.1687741}, archivePrefix={arXiv}, eprint={cs/0509075}, primaryClass={cs.IT math.IT} }
shin2005on
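A Monte Carlo cross-check of the kind the paper uses for verification, sketched in Python: ergodic capacity of a doubly correlated Rayleigh MIMO channel under the Kronecker model H = Rr^{1/2} Hw Rt^{1/2}, with the channel known at the receiver only. The exponential correlation profile, antenna counts and SNR are illustrative choices, not the paper's measured model.

import numpy as np

def exp_corr(n, rho):
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])   # classical exponential model

def ergodic_capacity(nt, nr, snr, rho_t, rho_r, trials=5000, seed=0):
    rng = np.random.default_rng(seed)
    Lt = np.linalg.cholesky(exp_corr(nt, rho_t))        # Rt = Lt Lt^T
    Lr = np.linalg.cholesky(exp_corr(nr, rho_r))        # Rr = Lr Lr^T
    total = 0.0
    for _ in range(trials):
        Hw = (rng.standard_normal((nr, nt))
              + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        H = Lr @ Hw @ Lt.T                              # doubly correlated channel
        M = np.eye(nr) + (snr / nt) * (H @ H.conj().T)
        total += np.log2(np.linalg.det(M).real)
    return total / trials

print(ergodic_capacity(nt=4, nr=4, snr=10.0, rho_t=0.5, rho_r=0.7))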
arxiv-673347
cs/0509076
On Vulnerabilities, Constraints and Assumptions
<|reference_start|>On Vulnerabilities, Constraints and Assumptions: This report presents a taxonomy of vulnerabilities created as a part of an effort to develop a framework for deriving verification and validation strategies to assess software security. This taxonomy is grounded in a theoretical model of computing, which establishes the relationship between vulnerabilities, software applications and the computer system resources. This relationship illustrates that a software application is exploited by violating constraints imposed by computer system resources and assumptions made about their usage. In other words, a vulnerability exists in the software application if it allows violation of these constraints and assumptions. The taxonomy classifies these constraints and assumptions. The model also serves as a basis for the classification scheme the taxonomy uses, in which the computer system resources such as memory, input/output, and cryptographic resources serve as categories and subcategories. Vulnerabilities, which are expressed in the form of constraints and assumptions, are classified according to these categories and subcategories. This taxonomy is novel and distinct from other taxonomies found in the literature.<|reference_end|>
arxiv
@article{bazaz2005on, title={On Vulnerabilities, Constraints and Assumptions}, author={Anil Bazaz and James D. Arthur}, journal={arXiv preprint arXiv:cs/0509076}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509076}, primaryClass={cs.CR} }
bazaz2005on
arxiv-673348
cs/0509077
Capacity Limits of Cognitive Radio with Distributed and Dynamic Spectral Activity
<|reference_start|>Capacity Limits of Cognitive Radio with Distributed and Dynamic Spectral Activity: We investigate the capacity of opportunistic communication in the presence of dynamic and distributed spectral activity, i.e. when the time varying spectral holes sensed by the cognitive transmitter are correlated but not identical to those sensed by the cognitive receiver. Using the information theoretic framework of communication with causal and non-causal side information at the transmitter and/or the receiver, we obtain analytical capacity expressions and the corresponding numerical results. We find that cognitive radio communication is robust to dynamic spectral environments even when the communication occurs in bursts of only 3-5 symbols. The value of handshake overhead is investigated for both lightly loaded and heavily loaded systems. We find that the capacity benefits of overhead information flow from the transmitter to the receiver are negligible, while feedback information overhead in the opposite direction significantly improves capacity.<|reference_end|>
arxiv
@article{jafar2005capacity, title={Capacity Limits of Cognitive Radio with Distributed and Dynamic Spectral Activity}, author={Syed A. Jafar, Sudhir Srinivasa}, journal={IEEE Journal on Selected Areas in Communications Special Issue on Adaptive, Spectrum Agile and Cognitive Wireless Networks, Volume 25, No. 3, April 2007, Pages: 529-537}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509077}, primaryClass={cs.IT math.IT} }
jafar2005capacity
arxiv-673349
cs/0509078
On the Feedback Capacity of Stationary Gaussian Channels
<|reference_start|>On the Feedback Capacity of Stationary Gaussian Channels: The capacity of stationary additive Gaussian noise channels with feedback is characterized as the solution to a variational problem. Toward this end, it is proved that the optimal feedback coding scheme is stationary. When specialized to the first-order autoregressive moving-average noise spectrum, this variational characterization yields a closed-form expression for the feedback capacity. In particular, this result shows that the celebrated Schalkwijk--Kailath coding scheme achieves the feedback capacity for the first-order autoregressive moving-average Gaussian channel, resolving a long-standing open problem studied by Butman, Schalkwijk--Tiernan, Wolfowitz, Ozarow, Ordentlich, Yang--Kavcic--Tatikonda, and others.<|reference_end|>
arxiv
@article{kim2005on, title={On the Feedback Capacity of Stationary Gaussian Channels}, author={Young-Han Kim}, journal={arXiv preprint arXiv:cs/0509078}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509078}, primaryClass={cs.IT math.IT} }
kim2005on
arxiv-673350
cs/0509079
The WSSUS Pulse Design Problem in Multicarrier Transmission
<|reference_start|>The WSSUS Pulse Design Problem in Multicarrier Transmission: Optimal link adaptation to the scattering function of wide-sense stationary uncorrelated scattering (WSSUS) mobile communication channels is still an unsolved problem despite its importance for next-generation system design. In multicarrier transmission such link adaptation is performed by pulse shaping, i.e. by properly adjusting the transmit and receive filters. For example, pulse-shaped Offset-QAM systems have been recently shown to have superior performance over standard cyclic prefix OFDM (while operating at higher spectral efficiency). In this paper we establish a general mathematical framework for joint transmitter and receiver pulse shape optimization for so-called Weyl-Heisenberg or Gabor signaling with respect to the scattering function of the WSSUS channel. In our framework the pulse shape optimization problem is translated to an optimization problem over trace class operators, which in turn is related to fidelity optimization in quantum information processing. By convexity relaxation the problem is shown to be equivalent to a convex-constrained quasi-convex maximization problem, thereby revealing the non-convex nature of the overall WSSUS pulse design problem. We present several iterative algorithms for optimization, providing applicable results even for large-scale problem constellations. We show that with transmitter-side knowledge of the channel statistics a gain of 3-6 dB in SINR can be expected.<|reference_end|>
arxiv
@article{jung2005the, title={The WSSUS Pulse Design Problem in Multicarrier Transmission}, author={Peter Jung, Gerhard Wunder}, journal={IEEE Trans. on Comm., 2007, 55(10), 1918-1928}, year={2005}, doi={10.1109/TCOMM.2007.906427}, archivePrefix={arXiv}, eprint={cs/0509079}, primaryClass={cs.IT math.IT} }
jung2005the
arxiv-673351
cs/0509080
Capacity and Character Expansions: Moment generating function and other exact results for MIMO correlated channels
<|reference_start|>Capacity and Character Expansions: Moment generating function and other exact results for MIMO correlated channels: We apply a promising new method from the field of representations of Lie groups to calculate integrals over unitary groups, which are important for multi-antenna communications. To demonstrate the power and simplicity of this technique, we first re-derive a number of results that have been used recently in the community of wireless information theory, using only a few simple steps. In particular, we derive the joint probability distribution of eigenvalues of the matrix GG*, with G a semicorrelated Gaussian random matrix or a Gaussian random matrix with a non-zero mean (and G* its Hermitian conjugate). These joint probability distribution functions can then be used to calculate the moment generating function of the mutual information for Gaussian channels with multiple antennas on both ends whose channel matrices G follow this probability distribution. We then turn to the previously unsolved problem of calculating the moment generating function of the mutual information of MIMO (multiple input-multiple output) channels, which are correlated at both the receiver and the transmitter. From this moment generating function we obtain the ergodic average of the mutual information and study the outage probability. These methods can be applied to a number of other problems. As a particular example, we examine unitary encoded space-time transmission of MIMO systems and we derive the received signal distribution when the channel matrix is correlated at the transmitter end.<|reference_end|>
arxiv
@article{simon2005capacity, title={Capacity and Character Expansions: Moment generating function and other exact results for MIMO correlated channels}, author={Steven H. Simon, Aris L. Moustakas and Luca Marinelli}, journal={IEEETrans.Info.Theor.52:5336-5351,2006}, year={2005}, doi={10.1109/TIT.2006.885519}, archivePrefix={arXiv}, eprint={cs/0509080}, primaryClass={cs.IT cond-mat.mes-hall cond-mat.stat-mech hep-lat math-ph math.IT math.MP} }
simon2005capacity
arxiv-673352
cs/0509081
Automatic Face Recognition System Based on Local Fourier-Bessel Features
<|reference_start|>Automatic Face Recognition System Based on Local Fourier-Bessel Features: We present an automatic face verification system inspired by known properties of biological systems. In the proposed algorithm the whole image is converted from the spatial to polar frequency domain by a Fourier-Bessel Transform (FBT). The use of the whole image is compared to the case where only face image regions are considered (local analysis). The resulting representations are embedded in a dissimilarity space, where each image is represented by its distance to all the other images, and a Pseudo-Fisher discriminator is built. Verification test results on the FERET database showed that the local-based algorithm outperforms the global-FBT version. The local-FBT algorithm performed on par with state-of-the-art methods under different testing conditions, indicating that the proposed system is highly robust to expression, age, and illumination variations. We also evaluated the performance of the proposed system under strong occlusion conditions and found that it is highly robust to up to 50% face occlusion. Finally, we completely automated the verification system by implementing face and eye detection algorithms. Under this condition, the local approach was only slightly superior to the global approach.<|reference_end|>
arxiv
@article{zana2005automatic, title={Automatic Face Recognition System Based on Local Fourier-Bessel Features}, author={Yossi Zana, Roberto M. Cesar-Jr and Regis de A. Barbosa}, journal={arXiv preprint arXiv:cs/0509081}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509081}, primaryClass={cs.CV} }
zana2005automatic
arxiv-673353
cs/0509082
Face Recognition Based on Polar Frequency Features
<|reference_start|>Face Recognition Based on Polar Frequency Features: A novel biologically motivated face recognition algorithm based on polar frequency is presented. Polar frequency descriptors are extracted from face images by Fourier-Bessel transform (FBT). Next, the Euclidean distance between all images is computed and each image is now represented by its dissimilarity to the other images. A Pseudo-Fisher Linear Discriminant was built on this dissimilarity space. The performance of Discrete Fourier transform (DFT) descriptors, and a combination of both feature types was also evaluated. The algorithms were tested on 40- and 1196-subject face databases (ORL and FERET, respectively). With 5 images per subject in the training and test datasets, error rates on the ORL database were 3.8, 1.25 and 0.2% for the FBT, DFT, and the combined classifier, respectively, as compared to 2.6% achieved by the best previous algorithm. The most informative polar frequency features were concentrated at low-to-medium angular frequencies coupled to low radial frequencies. On the FERET database, where an affine normalization pre-processing was applied, the FBT algorithm outperformed only the PCA in a rank recognition test. However, it achieved performance comparable to state-of-the-art methods when evaluated by verification tests. These results indicate the high informative value of the polar frequency content of face images in relation to recognition and verification tasks, and that the Cartesian frequency content can complement information about the subjects' identity, but possibly only when the images are not pre-normalized. Possible implications for human face recognition are discussed.<|reference_end|>
arxiv
@article{zana2005face, title={Face Recognition Based on Polar Frequency Features}, author={Yossi Zana, Roberto M. Cesar-JR}, journal={arXiv preprint arXiv:cs/0509082}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509082}, primaryClass={cs.CV} }
zana2005face
arxiv-673354
cs/0509083
Face Verification in Polar Frequency Domain: a Biologically Motivated Approach
<|reference_start|>Face Verification in Polar Frequency Domain: a Biologically Motivated Approach: We present a novel local-based face verification system whose components are analogous to those of biological systems. In the proposed system, after global registration and normalization, three eye regions are converted from the spatial to polar frequency domain by a Fourier-Bessel Transform. The resulting representations are embedded in a dissimilarity space, where each image is represented by its distance to all the other images. In this dissimilarity space a Pseudo-Fisher discriminator is built. ROC and equal error rate verification test results on the FERET database showed that the system performed at least as well as state-of-the-art methods and better than a system based on polar Fourier features. The local-based system is especially robust to facial expression and age variations, but sensitive to registration errors.<|reference_end|>
arxiv
@article{zana2005face, title={Face Verification in Polar Frequency Domain: a Biologically Motivated Approach}, author={Yossi Zana, Roberto M. Cesar-Jr, Rogerio S. Feris, Matthew Turk}, journal={arXiv preprint arXiv:cs/0509083}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509083}, primaryClass={cs.CV} }
zana2005face
arxiv-673355
cs/0509084
Representing Digital Assets for Long-Term Preservation using MPEG-21 DID
<|reference_start|>Representing Digital Assets for Long-Term Preservation using MPEG-21 DID: Various efforts aimed at representing digital assets have emerged from several communities over the last few years, including the Metadata Encoding and Transmission Standard (METS), the IMS Content Packaging (IMS-CP) XML Binding and the XML Formatted Data Units (XFDU). The MPEG-21 Digital Item Declaration (MPEG-21 DID) is another approach that can be used for the representation of digital assets in XML. This paper will explore the potential of the MPEG-21 DID in a Digital Preservation context, by looking at the core building blocks of the OAIS Information Model and the way in which they map to the MPEG-21 DID abstract model and the MPEG-21 DIDL XML syntax.<|reference_end|>
arxiv
@article{bekaert2005representing, title={Representing Digital Assets for Long-Term Preservation using MPEG-21 DID}, author={Jeroen Bekaert, Xiaoming Liu, Herbert Van de Sompel}, journal={arXiv preprint arXiv:cs/0509084}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509084}, primaryClass={cs.DL} }
bekaert2005representing
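A hedged sketch (Python) of declaring an asset in MPEG-21 DIDL, the XML syntax the record above builds on: one Item holding one Component with a by-reference Resource. The element names follow the DID abstract model; the namespace URI and the sample values are assumptions to be checked against the standard and the application profile in use.

import xml.etree.ElementTree as ET

DIDL_NS = "urn:mpeg:mpeg21:2002:02-DIDL-NS"   # assumed DIDL namespace URI
ET.register_namespace("didl", DIDL_NS)

didl = ET.Element(f"{{{DIDL_NS}}}DIDL")
item = ET.SubElement(didl, f"{{{DIDL_NS}}}Item")        # the digital asset
comp = ET.SubElement(item, f"{{{DIDL_NS}}}Component")   # one datastream
ET.SubElement(comp, f"{{{DIDL_NS}}}Resource",
              mimeType="application/pdf",
              ref="http://example.org/asset.pdf")       # hypothetical content URL

print(ET.tostring(didl, encoding="unicode"))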
arxiv-673356
cs/0509085
An Improved Lower Bound to the Number of Neighbors Required for the Asymptotic Connectivity of Ad Hoc Networks
<|reference_start|>An Improved Lower Bound to the Number of Neighbors Required for the Asymptotic Connectivity of Ad Hoc Networks: Xue and Kumar have established that the number of neighbors required for connectivity of wireless networks with N uniformly distributed nodes must grow as log(N), and they also established that the actual number required lies between 0.074log(N) and 5.1774log(N). In this short paper, by recognizing that connectivity results for networks where the nodes are distributed according to a Poisson point process can often be applied to the problem for a network with N nodes, we are able to improve the lower bound. In particular, we show that a network with nodes distributed in a unit square according to a 2D Poisson point process of parameter N will be asymptotically disconnected with probability one if the number of neighbors is less than 0.129log(N). Moreover, a similar number of neighbors is not enough for an asymptotically connected network with N nodes distributed uniformly in a unit square, hence improving the lower bound.<|reference_end|>
arxiv
@article{song2005an, title={An Improved Lower Bound to the Number of Neighbors Required for the Asymptotic Connectivity of Ad Hoc Networks}, author={Sanquan Song, Dennis L. Goeckel, Don Towsley}, journal={arXiv preprint arXiv:cs/0509085}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509085}, primaryClass={cs.NI cs.IT math.IT} }
song2005an
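An empirical illustration (Python) of the regime analyzed above: nodes drawn from a 2-D Poisson process in the unit square, each linked to its k nearest neighbors, with the connectivity probability estimated for k = c*log(N) at a few constants c. This is pure simulation; the constants probed are arbitrary, not the paper's bounds.

import math
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def connect_prob(N, c, trials=200, seed=1):
    rng = np.random.default_rng(seed)
    k = max(1, round(c * math.log(N)))
    hits = 0
    for _ in range(trials):
        n = rng.poisson(N)                          # Poisson point process, mean N
        pts = rng.random((n, 2))
        _, nbrs = cKDTree(pts).query(pts, k=k + 1)  # nearest neighbor is the point itself
        rows = np.repeat(np.arange(n), k)
        cols = nbrs[:, 1:].ravel()
        A = csr_matrix((np.ones(rows.size), (rows, cols)), shape=(n, n))
        ncomp, _ = connected_components(A, directed=False)  # undirected union of links
        hits += (ncomp == 1)
    return hits / trials

for c in (0.1, 0.5, 1.0):
    print(c, connect_prob(N=500, c=c))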
arxiv-673357
cs/0509086
Statistical Mechanical Approach to Lossy Data Compression: Theory and Practice
<|reference_start|>Statistical Mechanical Approach to Lossy Data Compression: Theory and Practice: The encoder and decoder for lossy data compression of binary memoryless sources are developed on the basis of a specific type of nonmonotonic perceptron. Statistical mechanical analysis indicates that the potential ability of the perceptron-based code saturates the theoretically achievable limit in most cases, although exactly performing the compression is computationally difficult. To resolve this difficulty, we provide a computationally tractable approximation algorithm using belief propagation (BP), a standard algorithm for probabilistic inference. Introducing several approximations and heuristics, the BP-based algorithm exhibits performance that is close to the achievable limit on a practical time scale in optimal cases.<|reference_end|>
arxiv
@article{hosaka2005statistical, title={Statistical Mechanical Approach to Lossy Data Compression:Theory and Practice}, author={Tadaaki Hosaka, Yoshiyuki Kabashima}, journal={arXiv preprint arXiv:cs/0509086}, year={2005}, doi={10.1016/j.physa.2006.01.013}, archivePrefix={arXiv}, eprint={cs/0509086}, primaryClass={cs.IT math.IT} }
hosaka2005statistical
arxiv-673358
cs/0509087
On Time-Variant Distortions in Multicarrier Transmission with Application to Frequency Offsets and Phase Noise
<|reference_start|>On Time-Variant Distortions in Multicarrier Transmission with Application to Frequency Offsets and Phase Noise: Phase noise and frequency offsets are, due to their time-variant behavior, among the most limiting disturbances in practical OFDM designs, and are therefore intensively studied by many authors. In this paper we present a generalized framework for the prediction of uncoded system performance in the presence of time-variant distortions, including the transmitter and receiver pulse shapes as well as the channel. Therefore, unlike existing studies, our approach can be employed for more general multicarrier schemes. To show the usefulness of our approach, we apply the results to OFDM in the context of frequency offset and Wiener phase noise, yielding improved bounds on the uncoded performance. In particular, we obtain exact formulas for the averaged performance in AWGN and time-invariant multipath channels.<|reference_end|>
arxiv
@article{jung2005on, title={On Time-Variant Distortions in Multicarrier Transmission with Application to Frequency Offsets and Phase Noise}, author={Peter Jung, Gerhard Wunder}, journal={IEEE Transactions on Communications Vol. 53 (9), Sep. 2005, pp. 1561-1570}, year={2005}, doi={10.1109/TCOMM.2005.855010}, archivePrefix={arXiv}, eprint={cs/0509087}, primaryClass={cs.IT math.IT} }
jung2005on
arxiv-673359
cs/0509088
Business intelligence systems and user's parameters: an application to a documents' database
<|reference_start|>Business intelligence systems and user's parameters: an application to a documents' database: This article presents early results of our research work in the area of modeling Business Intelligence systems. The basic idea of this research area is presented first. We then show the necessity of including certain user parameters in the information systems that are used in Business Intelligence systems, in order to obtain better responses from such systems. We identified two main types of attributes that can be missing from a database and showed why they need to be included. A user model based on the cognitive evolution of the user is presented. This model, when used together with a good definition of the information needs of the user (the decision maker), will accelerate the decision-making process.<|reference_end|>
arxiv
@article{afolabi2005business, title={Business intelligence systems and user's parameters: an application to a documents' database}, author={Babajide Afolabi (LORIA), Odile Thiery (LORIA)}, journal={Dans Modelling Others for Observation A workshop of IJCAI 2005}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509088}, primaryClass={cs.DB} }
afolabi2005business
arxiv-673360
cs/0509089
Semantics of UML 2.0 Activity Diagram for Business Modeling by Means of Virtual Machine
<|reference_start|>Semantics of UML 2.0 Activity Diagram for Business Modeling by Means of Virtual Machine: The paper proposes a more formalized definition of UML 2.0 Activity Diagram semantics. A subset of activity diagram constructs relevant for business process modeling is considered. The semantics definition is based on the original token flow methodology, but a more constructive approach is used. The Activity Diagram Virtual Machine is defined by means of a metamodel, with operations defined by a mix of pseudocode and OCL pre- and postconditions. A formal procedure is described which builds the virtual machine for any activity diagram. The relatively complicated original token movement rules in control nodes and edges are combined into paths from one action to another. A new approach is the use of different (push and pull) engines, which move tokens along the paths. Pull engines are used for paths containing join nodes, where the movement of several tokens must be coordinated. The proposed virtual machine approach makes the activity semantics definition more transparent, since the token movement can be easily traced. However, the main benefit of the approach is the possibility to use the defined virtual machine as a basis for a UML activity diagram based workflow or simulation engine.<|reference_end|>
arxiv
@article{vitolins2005semantics, title={Semantics of UML 2.0 Activity Diagram for Business Modeling by Means of Virtual Machine}, author={Valdis Vitolins, Audris Kalnins}, journal={Valdis Vitolins, Audris Kalnins, Proceedings Ninth IEEE International EDOC Enterprise Computing Conference, IEEE, 2005, pp. 181.-192}, year={2005}, doi={10.1109/EDOC.2005.29}, archivePrefix={arXiv}, eprint={cs/0509089}, primaryClass={cs.CE cs.PL} }
vitolins2005semantics
arxiv-673361
cs/0509090
Access Interfaces for Open Archival Information Systems based on the OAI-PMH and the OpenURL Framework for Context-Sensitive Services
<|reference_start|>Access Interfaces for Open Archival Information Systems based on the OAI-PMH and the OpenURL Framework for Context-Sensitive Services: In recent years, a variety of digital repository and archival systems have been developed and adopted. All of these systems aim at hosting a variety of compound digital assets and at providing tools for storing, managing and accessing those assets. This paper will focus on the definition of common and standardized access interfaces that could be deployed across such diverse digital repository and archival systems. The proposed interfaces are based on the two formal specifications that have recently emerged from the Digital Library community: The Open Archive Initiative Protocol for Metadata Harvesting (OAI-PMH) and the NISO OpenURL Framework for Context-Sensitive Services (OpenURL Standard). As will be described, the former allows for the retrieval of batches of XML-based representations of digital assets, while the latter facilitates the retrieval of disseminations of a specific digital asset or of one or more of its constituents. The core properties of the proposed interfaces are explained in terms of the Reference Model for an Open Archival Information System (OAIS).<|reference_end|>
arxiv
@article{bekaert2005access, title={Access Interfaces for Open Archival Information Systems based on the OAI-PMH and the OpenURL Framework for Context-Sensitive Services}, author={Jeroen Bekaert, Herbert Van de Sompel}, journal={arXiv preprint arXiv:cs/0509090}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509090}, primaryClass={cs.DL} }
bekaert2005access
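A small sketch (Python) of the harvesting half of the proposed interfaces: composing an OAI-PMH GetRecord request that asks for an XML dissemination of one asset. The verb, identifier and metadataPrefix parameters are standard OAI-PMH; the repository base URL, the identifier and the "didl" prefix are hypothetical values for illustration.

from urllib.parse import urlencode

base_url = "http://repository.example.org/oai"   # hypothetical repository endpoint
params = {
    "verb": "GetRecord",
    "identifier": "oai:repository.example.org:asset-123",
    "metadataPrefix": "didl",   # XML-based representation of the digital asset
}
print(base_url + "?" + urlencode(params))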
arxiv-673362
cs/0509091
Minimum Cost Homomorphisms to Semicomplete Multipartite Digraphs
<|reference_start|>Minimum Cost Homomorphisms to Semicomplete Multipartite Digraphs: For digraphs $D$ and $H$, a mapping $f: V(D)\to V(H)$ is a {\em homomorphism of $D$ to $H$} if $uv\in A(D)$ implies $f(u)f(v)\in A(H).$ For a fixed directed or undirected graph $H$ and an input graph $D$, the problem of verifying whether there exists a homomorphism of $D$ to $H$ has been studied in a large number of papers. We study an optimization version of this decision problem. Our optimization problem is motivated by a real-world problem in defence logistics and was introduced very recently by the authors and M. Tso. Suppose we are given a pair of digraphs $D,H$ and a positive integral cost $c_i(u)$ for each $u\in V(D)$ and $i\in V(H)$. The cost of a homomorphism $f$ of $D$ to $H$ is $\sum_{u\in V(D)}c_{f(u)}(u)$. Let $H$ be a fixed digraph. The minimum cost homomorphism problem for $H$, MinHOMP($H$), is stated as follows: For input digraph $D$ and costs $c_i(u)$ for each $u\in V(D)$ and $i\in V(H)$, verify whether there is a homomorphism of $D$ to $H$ and, if it does exist, find such a homomorphism of minimum cost. In our previous paper we obtained a dichotomy classification of the time complexity of MinHOMP($H$) for $H$ being a semicomplete digraph. In this paper we extend the classification to semicomplete $k$-partite digraphs, $k\ge 3$, and obtain such a classification for bipartite tournaments.<|reference_end|>
arxiv
@article{gutin2005minimum, title={Minimum Cost Homomorphisms to Semicomplete Multipartite Digraphs}, author={G. Gutin, A. Rafiey, A. Yeo}, journal={arXiv preprint arXiv:cs/0509091}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509091}, primaryClass={cs.DM} }
gutin2005minimum
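A brute-force sketch (Python) of MinHOMP($H$) exactly as defined above: enumerate all mappings of V(D) into V(H), keep the homomorphisms, and minimize the total cost. This exponential baseline is for illustration only; the paper's point is classifying when polynomial-time algorithms exist.

from itertools import product

def min_cost_homomorphism(D_vertices, D_arcs, H_vertices, H_arcs, cost):
    """cost[u][i] = cost of mapping vertex u of D onto vertex i of H."""
    H_arcs = set(H_arcs)
    best, best_f = None, None
    for images in product(H_vertices, repeat=len(D_vertices)):
        f = dict(zip(D_vertices, images))
        if all((f[u], f[v]) in H_arcs for (u, v) in D_arcs):
            c = sum(cost[u][f[u]] for u in D_vertices)
            if best is None or c < best:
                best, best_f = c, f
    return best, best_f

# Toy instance: map the directed path x->y->z into the 2-cycle 0<->1.
print(min_cost_homomorphism(
    ["x", "y", "z"], [("x", "y"), ("y", "z")],
    [0, 1], [(0, 1), (1, 0)],
    {"x": {0: 1, 1: 5}, "y": {0: 2, 1: 2}, "z": {0: 3, 1: 1}}))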
arxiv-673363
cs/0509092
Automatic extraction of paraphrastic phrases from medium size corpora
<|reference_start|>Automatic extraction of paraphrastic phrases from medium size corpora: This paper presents a versatile system intended to acquire paraphrastic phrases from a representative corpus. In order to decrease the time spent on the elaboration of resources for NLP systems (for example Information Extraction, IE hereafter), we suggest using a machine learning system that helps define new templates and associated resources. This knowledge is automatically derived from the text collection, in interaction with a large semantic network.<|reference_end|>
arxiv
@article{poibeau2005automatic, title={Automatic extraction of paraphrastic phrases from medium size corpora}, author={Thierry Poibeau (LIPN)}, journal={Actes de la conf\'{e}rence Computational Linguisitcs (COLING 2004) (2004) 638-644}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509092}, primaryClass={cs.CL cs.AI} }
poibeau2005automatic
arxiv-673364
cs/0509093
On the Outage Capacity of Correlated Multiple-Path MIMO Channels
<|reference_start|>On the Outage Capacity of Correlated Multiple-Path MIMO Channels: The use of multi-antenna arrays in both transmission and reception has been shown to dramatically increase the throughput of wireless communication systems. As a result there has been considerable interest in characterizing the ergodic average of the mutual information for realistic correlated channels. Here, an approach is presented that provides analytic expressions not only for the average, but also for the higher cumulant moments of the distribution of the mutual information for zero-mean Gaussian multiple-input multiple-output (MIMO) channels with the most general multipath covariance matrices when the channel is known at the receiver. These channels include multi-tap delay paths, as well as general channels with covariance matrices that cannot be written as a Kronecker product, such as dual-polarized antenna arrays with general correlations at both transmitter and receiver ends. The mathematical methods are formally valid for large antenna numbers, in which limit it is shown that all cumulant moments of the distribution other than the first two scale to zero. Thus, it is confirmed that the distribution of the mutual information tends to a Gaussian, which enables one to calculate the outage capacity. These results are quite accurate even in the case of a few antennas, which makes this approach applicable to realistic situations.<|reference_end|>
arxiv
@article{moustakas2005on, title={On the Outage Capacity of Correlated Multiple-Path MIMO Channels}, author={Aris L. Moustakas and Steven H. Simon}, journal={arXiv preprint arXiv:cs/0509093}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509093}, primaryClass={cs.IT math.IT} }
moustakas2005on
arxiv-673365
cs/0509094
Telling Great Stories: An NSDL Content and Communications System for Aggregation, Display, and Distribution of News and Features
<|reference_start|>Telling Great Stories: An NSDL Content and Communications System for Aggregation, Display, and Distribution of News and Features: Education digital libraries contain cataloged resources as well as contextual information about innovations in the use of educational technology, exemplar stories about community activities, and news from various user communities that include teachers, students, scholars, and developers. Long-standing library traditions of service, preservation, democratization of knowledge, rich discourse, equal access, and fair use are evident in library communications models that both pull in and push out contextual information from multiple sources integrated with editorial production processes. This paper argues that a dynamic narrative flow [1] is enabled by the effective management of complex content and communications in a decentralized, web-based education digital library that makes publishing objects, such as aggregations of resources or selected parts of objects [4], accessible through a Content and Communications System. Providing services that encourage patrons to reuse, reflect out, and contribute resources back [5] to the Library increases the reach and impact of the National Science Digital Library (NSDL). This system is a model for distributed content development and effective communications for education digital libraries in general.<|reference_end|>
arxiv
@article{morris2005telling, title={Telling Great Stories: An NSDL Content and Communications System for Aggregation, Display, and Distribution of News and Features}, author={Carol Minton Morris}, journal={arXiv preprint arXiv:cs/0509094}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509094}, primaryClass={cs.DL cs.SE} }
morris2005telling
arxiv-673366
cs/0509095
Leveraging Social-Network Infrastructure to Improve Peer-to-Peer Overlay Performance: Results from Orkut
<|reference_start|>Leveraging Social-Network Infrastructure to Improve Peer-to-Peer Overlay Performance: Results from Orkut: Application-level peer-to-peer (P2P) network overlays are an emerging paradigm that facilitates decentralization and flexibility in the scalable deployment of applications such as group communication, content delivery, and data sharing. However, the construction of the overlay graph topology optimized for low latency, low link and node stress and lookup performance is still an open problem. We present the design of an overlay constructed on top of a social network and show that it gives a sizable improvement in lookups, average round-trip delay and scalability compared to other overlay topologies. We build our overlay on top of the topology of a popular real-world social network, namely Orkut. We show Orkut's suitability for our purposes by evaluating the clustering behavior of its graph structure and the socializing pattern of its members.<|reference_end|>
arxiv
@article{anwar2005leveraging, title={Leveraging Social-Network Infrastructure to Improve Peer-to-Peer Overlay Performance: Results from Orkut}, author={Zahid Anwar (1) William Yurcik (2) Vivek Pandey (1) Asim Shankar (1) Indranil Gupta (1) Roy H. Campbell (1) ((1) Department of Computer Science University of Illinois at Urbana-Champaign, (2)(National Center for Supercomputing Applications University of Illinois at Urbana-Champaign USA))}, journal={arXiv preprint arXiv:cs/0509095}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509095}, primaryClass={cs.NI cs.CY} }
anwar2005leveraging
arxiv-673367
cs/0509096
Performance Analysis and Enhancement of Multiband OFDM for UWB Communications
<|reference_start|>Performance Analysis and Enhancement of Multiband OFDM for UWB Communications: In this paper, we analyze the frequency-hopping orthogonal frequency-division multiplexing (OFDM) system known as Multiband OFDM for high-rate wireless personal area networks (WPANs) based on ultra-wideband (UWB) transmission. Besides considering the standard, we also propose and study system performance enhancements through the application of Turbo and Repeat-Accumulate (RA) codes, as well as OFDM bit-loading. Our methodology consists of (a) a study of the channel model developed under IEEE 802.15 for UWB from a frequency-domain perspective suited for OFDM transmission, (b) development and quantification of appropriate information-theoretic performance measures, (c) comparison of these measures with simulation results for the Multiband OFDM standard proposal as well as our proposed extensions, and (d) the consideration of the influence of practical, imperfect channel estimation on the performance. We find that the current Multiband OFDM standard sufficiently exploits the frequency selectivity of the UWB channel, and that the system performs in the vicinity of the channel cutoff rate. Turbo codes and a reduced-complexity clustered bit-loading algorithm improve the system power efficiency by over 6 dB at a data rate of 480 Mbps.<|reference_end|>
arxiv
@article{snow2005performance, title={Performance Analysis and Enhancement of Multiband OFDM for UWB Communications}, author={C. Snow, L. Lampe, R. Schober}, journal={arXiv preprint arXiv:cs/0509096}, year={2005}, doi={10.1109/TWC.2007.05770}, archivePrefix={arXiv}, eprint={cs/0509096}, primaryClass={cs.IT math.IT} }
snow2005performance
arxiv-673368
cs/0509097
Iterative Algebraic Soft-Decision List Decoding of Reed-Solomon Codes
<|reference_start|>Iterative Algebraic Soft-Decision List Decoding of Reed-Solomon Codes: In this paper, we present an iterative soft-decision decoding algorithm for Reed-Solomon codes offering both complexity and performance advantages over previously known decoding algorithms. Our algorithm is a list decoding algorithm which combines two powerful soft decision decoding techniques which were previously regarded in the literature as competitive, namely, the Koetter-Vardy algebraic soft-decision decoding algorithm and belief-propagation based on adaptive parity check matrices, recently proposed by Jiang and Narayanan. Building on the Jiang-Narayanan algorithm, we present a belief-propagation based algorithm with a significant reduction in computational complexity. We introduce the concept of using a belief-propagation based decoder to enhance the soft-input information prior to decoding with an algebraic soft-decision decoder. Our algorithm can also be viewed as an interpolation multiplicity assignment scheme for algebraic soft-decision decoding of Reed-Solomon codes.<|reference_end|>
arxiv
@article{el-khamy2005iterative, title={Iterative Algebraic Soft-Decision List Decoding of Reed-Solomon Codes}, author={Mostafa El-Khamy and Robert J. McEliece}, journal={IEEE Journal on Selected Areas in Communications, Volume 24, Issue 3, March 2006 Page(s):481 - 490}, year={2005}, doi={10.1109/JSAC.2005.862399}, archivePrefix={arXiv}, eprint={cs/0509097}, primaryClass={cs.IT math.IT} }
el-khamy2005iterative
arxiv-673369
cs/0509098
Applications of correlation inequalities to low density graphical codes
<|reference_start|>Applications of correlation inequalities to low density graphical codes: This contribution is based on the contents of a talk delivered at the Next-SigmaPhi conference held in Crete in August 2005. It is addressed to an audience of physicists from diverse horizons and does not assume any background in communications theory. Capacity-approaching error correcting codes for channel communication known as Low Density Parity Check (LDPC) codes have attracted considerable attention from coding theorists in the last decade. Surprisingly strong connections with the theory of diluted spin glasses have been discovered. In this work we elucidate one new connection, namely that a class of correlation inequalities valid for Gaussian spin glasses can be applied to the theoretical analysis of LDPC codes. This allows for a rigorous comparison between the so-called (optimal) maximum a posteriori and the computationally efficient belief propagation decoders. The main ideas of the proofs are explained and we refer to recent works for the lengthier technical details.<|reference_end|>
arxiv
@article{macris2005applications, title={Applications of correlation inequalities to low density graphical codes}, author={Nicolas Macris}, journal={arXiv preprint arXiv:cs/0509098}, year={2005}, doi={10.1140/epjb/e2006-00129-6}, archivePrefix={arXiv}, eprint={cs/0509098}, primaryClass={cs.IT cond-mat.stat-mech math.IT} }
macris2005applications
arxiv-673370
cs/0509099
State-Based Control of Fuzzy Discrete Event Systems
<|reference_start|>State-Based Control of Fuzzy Discrete Event Systems: To effectively represent possibility arising from states and dynamics of a system, fuzzy discrete event systems as a generalization of conventional discrete event systems have been introduced recently. Supervisory control theory based on event feedback has been well established for such systems. Noting that the system state description, from the viewpoint of specification, seems more convenient, we investigate the state-based control of fuzzy discrete event systems in this paper. We first present an approach to finding all fuzzy states that are reachable by controlling the system. After introducing the notion of controllability for fuzzy states, we then provide a necessary and sufficient condition for a set of fuzzy states to be controllable. We also find that event-based control and state-based control are not equivalent and further discuss the relationship between them. Finally, we examine the possibility of driving a fuzzy discrete event system under control from a given initial state to a prescribed set of fuzzy states and then keeping it there indefinitely.<|reference_end|>
arxiv
@article{cao2005state-based, title={State-Based Control of Fuzzy Discrete Event Systems}, author={Yongzhi Cao, Mingsheng Ying, and Guoqing Chen}, journal={IEEE Transactions on Systems, Man, and Cybernetics--Part B: Cybernetics, vol. 37, no. 2, pp. 410-424, April 2007.}, year={2005}, doi={10.1109/TSMCB.2006.883429}, archivePrefix={arXiv}, eprint={cs/0509099}, primaryClass={cs.DM cs.DC} }
cao2005state-based
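A sketch (Python) of the fuzzy-DES machinery underlying the record above, in the common formulation where a fuzzy state is a possibility vector and each event is a fuzzy transition matrix composed by the max-min rule. The matrices are made up; the paper's contribution is the controllability analysis built on top of this model, not the composition itself.

def maxmin_step(state, event):
    """next_state[j] = max_i min(state[i], event[i][j])"""
    return [max(min(state[i], event[i][j]) for i in range(len(state)))
            for j in range(len(event[0]))]

s0 = [1.0, 0.0, 0.0]              # crisp initial state: fully in state 0
e = [[0.2, 0.9, 0.0],             # one event's fuzzy transition matrix
     [0.0, 0.3, 0.8],
     [0.0, 0.0, 1.0]]
s1 = maxmin_step(s0, e)
print(s1)                          # [0.2, 0.9, 0.0]
print(maxmin_step(s1, e))          # [0.2, 0.3, 0.8]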
arxiv-673371
cs/0509100
Distance-2 Edge Coloring is NP-Complete
<|reference_start|>Distance-2 Edge Coloring is NP-Complete: We prove that it is NP-complete to determine whether there exists a distance-2 edge coloring (strong edge coloring) with 5 colors of a bipartite 2-inductive graph with girth 6 and maximum degree 3.<|reference_end|>
arxiv
@article{erickson2005distance-2, title={Distance-2 Edge Coloring is NP-Complete}, author={Jeff Erickson (1), Shripad Thite (1), David P. Bunde (1) ((1) Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL, USA)}, journal={arXiv preprint arXiv:cs/0509100}, year={2005}, archivePrefix={arXiv}, eprint={cs/0509100}, primaryClass={cs.DM cs.CC} }
erickson2005distance-2
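An exponential backtracking checker (Python) for the problem proved NP-complete above: does a graph admit a strong (distance-2) edge coloring with k colors, where edges that share an endpoint, or are joined by a third edge, must get distinct colors? Fine for tiny instances; the hardness result says no general shortcut should be expected.

def strong_edge_colorable(edges, k):
    edges = [tuple(e) for e in edges]
    adj = set(edges) | {(b, a) for a, b in edges}
    def conflict(e, f):
        (a, b), (c, d) = e, f
        return ({a, b} & {c, d}) or any(p in adj for p in
                                        ((a, c), (a, d), (b, c), (b, d)))
    color = {}
    def extend(i):
        if i == len(edges):
            return True
        for col in range(k):
            if all(color[f] != col for f in edges[:i] if conflict(edges[i], f)):
                color[edges[i]] = col
                if extend(i + 1):
                    return True
                del color[edges[i]]
        return False
    return extend(0)

# Every two edges of a 5-cycle are within distance 1, so 5 colors are needed.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(strong_edge_colorable(c5, 4), strong_edge_colorable(c5, 5))  # False True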
arxiv-673372
cs/0510001
Retinal Vessel Segmentation Using the 2-D Morlet Wavelet and Supervised Classification
<|reference_start|>Retinal Vessel Segmentation Using the 2-D Morlet Wavelet and Supervised Classification: We present a method for automated segmentation of the vasculature in retinal images. The method produces segmentations by classifying each image pixel as vessel or non-vessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and continuous two-dimensional Morlet wavelet transform responses taken at multiple scales. The Morlet wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, which yields fast classification while being able to model complex decision surfaces, and compare its performance with that of the linear minimum squared error classifier. The probability distributions are estimated based on a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on the publicly available DRIVE and STARE databases of manually labeled non-mydriatic images. On the DRIVE database, it achieves an area under the receiver operating characteristic (ROC) curve of 0.9598, slightly superior to that presented by the method of Staal et al.<|reference_end|>
arxiv
@article{soares2005retinal, title={Retinal Vessel Segmentation Using the 2-D Morlet Wavelet and Supervised Classification}, author={Jo~ao V. B. Soares, Jorge J. G. Leandro, Roberto M. Cesar Jr., Herbert F. Jelinek, Michael J. Cree}, journal={IEEE Trans Med Imag, Vol. 25, no. 9, pp. 1214- 1222, Sep. 2006.}, year={2005}, doi={10.1109/TMI.2006.879967}, archivePrefix={arXiv}, eprint={cs/0510001}, primaryClass={cs.CV} }
soares2005retinal
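A sketch (Python) of the feature computation the record above describes: responses of a two-dimensional Morlet (Gabor-like) wavelet, taking the maximum modulus over orientations so that vessels of any direction respond. The elongation eps, wave number k0, kernel size and the single scale are illustrative; the paper tunes these and stacks several scales together with the pixel intensity.

import numpy as np
from scipy.signal import fftconvolve

def morlet_kernel(scale, theta, eps=4.0, k0=3.0, size=25):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1] / scale
    xr = x * np.cos(theta) + y * np.sin(theta)           # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-0.5 * (xr ** 2 / eps + yr ** 2))  # elongated Gaussian
    return np.exp(1j * k0 * yr) * envelope               # complex carrier wave

def morlet_response(image, scale, n_orient=12):
    resp = np.zeros(image.shape)
    for theta in np.linspace(0.0, np.pi, n_orient, endpoint=False):
        r = np.abs(fftconvolve(image, morlet_kernel(scale, theta), mode="same"))
        resp = np.maximum(resp, r)                       # max modulus over orientations
    return resp

img = np.zeros((64, 64))
img[30:33, :] = 1.0                                      # toy "vessel": a bright stripe
print(morlet_response(img, scale=2.0).max())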
arxiv-673373
cs/0510002
Optimal Relay Functionality for SNR Maximization in Memoryless Relay Networks
<|reference_start|>Optimal Relay Functionality for SNR Maximization in Memoryless Relay Networks: We explore the SNR-optimal relay functionality in a \emph{memoryless} relay network, i.e. a network where, during each channel use, the signal transmitted by a relay depends only on the last received symbol at that relay. We develop a generalized notion of SNR for the class of memoryless relay functions. The solution to the generalized SNR optimization problem leads to the novel concept of minimum mean square uncorrelated error estimation (MMSUEE). For the elemental case of a single relay, we show that MMSUEE is the SNR-optimal memoryless relay function regardless of the source and relay transmit power, and the modulation scheme. This scheme, that we call estimate and forward (EF), is also shown to be SNR-optimal with PSK modulation in a parallel relay network. We demonstrate that EF performs better than the best of amplify and forward (AF) and demodulate and forward (DF), in both parallel and serial relay networks. We also determine that AF is near-optimal at low transmit power in a parallel network, while DF is near-optimal at high transmit power in a serial network. For hybrid networks that contain both serial and parallel elements, and when robust performance is desired, the advantage of EF over the best of AF and DF is found to be significant. Error probabilities are provided to substantiate the performance gain obtained through SNR optimality. We also show that, for \emph{Gaussian} inputs, AF, DF and EF become identical.<|reference_end|>
arxiv
@article{gomadam2005optimal, title={Optimal Relay Functionality for SNR Maximization in Memoryless Relay Networks}, author={Krishna Srikanth Gomadam, Syed Ali Jafar}, journal={arXiv preprint arXiv:cs/0510002}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510002}, primaryClass={cs.IT math.IT} }
gomadam2005optimal
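A sketch (Python) of the three memoryless relay functions compared above, for BPSK over a Gaussian first hop: AF rescales the raw observation, DF forwards its sign, and EF forwards the conditional-mean estimate, which for equiprobable +/-1 inputs in Gaussian noise is tanh(y/sigma^2). The noise level and the unit power normalization are illustrative.

import numpy as np

rng = np.random.default_rng(0)
sigma2 = 0.5
x = rng.choice([-1.0, 1.0], size=100000)                  # BPSK source symbols
y = x + rng.normal(scale=np.sqrt(sigma2), size=x.size)    # relay's observation

def normalize(z, p=1.0):                                  # enforce relay power p
    return z * np.sqrt(p / np.mean(z ** 2))

af = normalize(y)                    # amplify-and-forward
df = np.sign(y)                      # demodulate-and-forward
ef = normalize(np.tanh(y / sigma2))  # estimate-and-forward (conditional mean)

for name, z in (("AF", af), ("DF", df), ("EF", ef)):
    print(name, "correlation with x:", round(np.corrcoef(x, z)[0, 1], 3))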
arxiv-673374
cs/0510003
Generalized ABBA Space-Time Block Codes
<|reference_start|>Generalized ABBA Space-Time Block Codes: Linear space-time block codes (STBCs) of unitary rate and full diversity, systematically constructed over arbitrary constellations for any number of transmit antennas, are introduced. The codes are obtained by generalizing the existing ABBA STBCs, a.k.a. quasi-orthogonal STBCs (QO-STBCs). Furthermore, a fully orthogonal (symbol-by-symbol) decoder for the new generalized ABBA (GABBA) codes is provided. This remarkably low-complexity decoder relies on partition orthogonality properties of the code structure to decompose the received signal vector into lower-dimension tuples, each dependent only on certain subsets of the transmitted symbols. Orthogonal decodability results from the nested application of this technique, with no matrix inversion or iterative signal processing required. The exact bit-error-rate probability of GABBA codes over generalized fading channels with maximum likelihood (ML) decoding is evaluated analytically and compared against simulation results obtained with the proposed orthogonal decoder. The comparison reveals that the proposed GABBA solution, despite its very low complexity, achieves nearly the same performance of the bound corresponding to the ML-decoded system, especially in systems with large numbers of antennas.<|reference_end|>
arxiv
@article{de abreu2005generalized, title={Generalized ABBA Space-Time Block Codes}, author={Giuseppe Thadeu Freitas de Abreu}, journal={arXiv preprint arXiv:cs/0510003}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510003}, primaryClass={cs.IT math.IT} }
de abreu2005generalized
arxiv-673375
cs/0510004
The Hunting of the Bump: On Maximizing Statistical Discrepancy
<|reference_start|>The Hunting of the Bump: On Maximizing Statistical Discrepancy: Anomaly detection has important applications in biosurveillance and environmental monitoring. When comparing measured data to data drawn from a baseline distribution, merely finding clusters in the measured data may not reveal true anomalies, since these clusters may simply be clusters of the baseline distribution. Hence, a discrepancy function is often used to examine how different measured data is from baseline data within a region. An anomalous region is thus defined to be one with high discrepancy. In this paper, we present algorithms for maximizing statistical discrepancy functions over the space of axis-parallel rectangles. We give provable approximation guarantees, both additive and relative, and our methods apply to any convex discrepancy function. Our algorithms work by connecting statistical discrepancy to combinatorial discrepancy; roughly speaking, we show that in order to maximize a convex discrepancy function over a class of shapes, one needs only maximize a linear discrepancy function over the same set of shapes. We derive general discrepancy functions for data generated from a one-parameter exponential family. This generalizes the widely-used Kulldorff scan statistic for data from a Poisson distribution. We present an algorithm running in $O(\frac{1}{\epsilon} n^2 \log^2 n)$ that computes the maximum discrepancy rectangle to within additive error $\epsilon$, for the Kulldorff scan statistic. Similar results hold for relative error and for discrepancy functions for data coming from Gaussian, Bernoulli, and gamma distributions. Prior to our work, the best known algorithms were exact and ran in time $O(n^4)$.<|reference_end|>
arxiv
@article{agarwal2005the, title={The Hunting of the Bump: On Maximizing Statistical Discrepancy}, author={Deepak Agarwal and Jeff M. Phillips and Suresh Venkatasubramanian}, journal={arXiv preprint arXiv:cs/0510004}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510004}, primaryClass={cs.CG} }
agarwal2005the
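A naive exhaustive scan (Python) of the kind the paper's algorithms improve on: maximize the Kulldorff (Poisson) discrepancy d(m, b) = m*ln(m/b) + (1-m)*ln((1-m)/(1-b)), for m > b, over all axis-parallel rectangles of a grid, using 2-D prefix sums for O(1) range sums. The toy counts and the injected bump are fabricated.

import math
import numpy as np

def kulldorff(m, b):
    if m <= b or m in (0.0, 1.0) or b in (0.0, 1.0):  # simplification at the boundary
        return 0.0
    return m * math.log(m / b) + (1 - m) * math.log((1 - m) / (1 - b))

def max_disc_rectangle(measured, baseline):
    M, B = measured.sum(), baseline.sum()
    Pm = measured.cumsum(0).cumsum(1)
    Pb = baseline.cumsum(0).cumsum(1)
    def rect_sum(P, i0, j0, i1, j1):
        s = P[i1, j1]
        if i0: s -= P[i0 - 1, j1]
        if j0: s -= P[i1, j0 - 1]
        if i0 and j0: s += P[i0 - 1, j0 - 1]
        return s
    best = (0.0, None)
    n, m = measured.shape
    for i0 in range(n):
        for i1 in range(i0, n):
            for j0 in range(m):
                for j1 in range(j0, m):
                    d = kulldorff(rect_sum(Pm, i0, j0, i1, j1) / M,
                                  rect_sum(Pb, i0, j0, i1, j1) / B)
                    if d > best[0]:
                        best = (d, (i0, j0, i1, j1))
    return best

rng = np.random.default_rng(2)
base = np.ones((8, 8))
meas = rng.poisson(1.0, (8, 8)).astype(float)
meas[2:4, 5:7] += 6.0                 # injected anomaly
print(max_disc_rectangle(meas, base))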
arxiv-673376
cs/0510005
Taylor series expansions for the entropy rate of Hidden Markov Processes
<|reference_start|>Taylor series expansions for the entropy rate of Hidden Markov Processes: Finding the entropy rate of Hidden Markov Processes is an active research topic, of both theoretical and practical importance. A recently used approach is studying the asymptotic behavior of the entropy rate in various regimes. In this paper we generalize and prove a previous conjecture relating the entropy rate to entropies of finite systems. Building on our new theorems, we establish series expansions for the entropy rate in two different regimes. We also study the radius of convergence of the two series expansions.<|reference_end|>
arxiv
@article{zuk2005taylor, title={Taylor series expansions for the entropy rate of Hidden Markov Processes}, author={Or Zuk, Eytan Domany, Ido Kanter and Michael Aizenman}, journal={Proceedings 2006 IEEE International Conference on Communications (ICC 2006).}, year={2005}, doi={10.1109/ICC.2006.255039}, archivePrefix={arXiv}, eprint={cs/0510005}, primaryClass={cs.IT cond-mat.stat-mech math.IT} }
zuk2005taylor
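A numerical companion (Python) to the finite-system entropies the paper relates to the entropy rate: for a small binary HMP, compute the block entropies H(Y_1..Y_n) by enumerating all observation sequences with the forward algorithm; the conditional entropies H(Y_n | Y_1..Y_{n-1}) then decrease toward the entropy rate. The transition and emission parameters are arbitrary toy values.

import itertools
import numpy as np

P = np.array([[0.9, 0.1], [0.2, 0.8]])   # hidden-state transition matrix
E = np.array([[0.8, 0.2], [0.3, 0.7]])   # emission probabilities

w, v = np.linalg.eig(P.T)                 # stationary distribution of P
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

def seq_prob(ys):
    alpha = pi * E[:, ys[0]]              # forward algorithm
    for y in ys[1:]:
        alpha = (alpha @ P) * E[:, y]
    return alpha.sum()

def block_entropy(n):
    return -sum(p * np.log2(p)
                for ys in itertools.product((0, 1), repeat=n)
                for p in [seq_prob(ys)] if p > 0)

prev = 0.0
for n in range(1, 11):
    h = block_entropy(n)
    print(n, round(h - prev, 6))          # H(Y_n | Y_1..Y_{n-1}), nonincreasing
    prev = h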
arxiv-673377
cs/0510006
Using the Modified Allan Variance for Accurate Estimation of the Hurst Parameter of Long-Range Dependent Traffic
<|reference_start|>Using the Modified Allan Variance for Accurate Estimation of the Hurst Parameter of Long-Range Dependent Traffic: Internet traffic exhibits self-similarity and long-range dependence (LRD) on various time scales. A well-studied issue is the estimation of statistical parameters characterizing traffic self-similarity and LRD, such as the Hurst parameter H. In this paper, we propose to adapt the Modified Allan Variance (MAVAR), a time-domain quantity originally conceived to discriminate fractional noise in frequency stability measurement, to estimate the Hurst parameter of LRD traffic traces and, more generally, to identify fractional noise components in network traffic. This novel method is validated by comparison to one of the best techniques for analyzing self-similar and LRD traffic: the logscale diagram based on wavelet analysis. Both methods are applied to pseudo-random LRD data series generated with assigned values of H. The superior spectral sensitivity of MAVAR achieves outstanding accuracy in estimating H, even better than the logscale method. The behaviour of MAVAR with the most common deterministic signals that yield nonstationarity in the data under analysis is also studied. Finally, both techniques are applied to a real IP traffic trace, providing a sound example of the usefulness of MAVAR also in traffic characterization, to complement other established techniques such as the logscale method.<|reference_end|>
arxiv
@article{bregni2005using, title={Using the Modified Allan Variance for Accurate Estimation of the Hurst Parameter of Long-Range Dependent Traffic}, author={Stefano Bregni, Luca Primerano}, journal={arXiv preprint arXiv:cs/0510006}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510006}, primaryClass={cs.NI} }
bregni2005using
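A sketch (Python) of the estimator described above: the Modified Allan Variance of a traffic trace, integrated to phase-like data by a cumulative sum, followed by a log-log slope fit. The mapping from the slope mu to the Hurst parameter, H = (mu + 2)/2, assumes the rate series behaves as fractional Gaussian noise and is stated here as the usual convention, not derived.

import numpy as np

def mavar(x, n):
    """Modified Allan variance of phase data x at averaging factor n (tau0 = 1):
    (1 / (2 n^4 (N - 3n + 1))) * sum_j [ sum_{i=j}^{j+n-1} (x[i+2n] - 2x[i+n] + x[i]) ]^2
    """
    N = len(x)
    if N < 3 * n + 1:
        return None
    d = x[2 * n:] - 2 * x[n:-n] + x[:-2 * n]        # second differences at lag n
    c = np.convolve(d, np.ones(n), mode="valid")    # inner sums over n terms
    return np.mean(c ** 2) / (2 * n ** 4)

rng = np.random.default_rng(3)
rate = rng.normal(size=2 ** 14)                     # stand-in traffic rate series
x = np.concatenate(([0.0], np.cumsum(rate)))        # phase-like cumulative data

pairs = []
for n in np.unique(np.logspace(0, 3, 20).astype(int)):
    v = mavar(x, n)
    if v is not None and v > 0:
        pairs.append((n, v))
mu = np.polyfit(np.log10([n for n, _ in pairs]),
                np.log10([v for _, v in pairs]), 1)[0]
print("slope mu = %.2f, H ~ %.2f" % (mu, (mu + 2) / 2))  # white rate noise: H ~ 0.5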
arxiv-673378
cs/0510007
Network Inference from TraceRoute Measurements: Internet Topology `Species'
<|reference_start|>Network Inference from TraceRoute Measurements: Internet Topology `Species': Internet mapping projects generally consist in sampling the network from a limited set of sources by using traceroute probes. This methodology, akin to the merging of spanning trees from the different sources to a set of destinations, leads necessarily to a partial, incomplete map of the Internet. Accordingly, determination of Internet topology characteristics from such sampled maps is in part a problem of statistical inference. Our contribution begins with the observation that the inference of many of the most basic topological quantities -- including network size and degree characteristics -- from traceroute measurements is in fact a version of the so-called `species problem' in statistics. This observation has important implications, since species problems are often quite challenging. We focus here on the most fundamental example of a traceroute internet species: the number of nodes in a network. Specifically, we characterize the difficulty of estimating this quantity through a set of analytical arguments, we use statistical subsampling principles to derive two proposed estimators, and we illustrate the performance of these estimators on networks with various topological characteristics.<|reference_end|>
arxiv
@article{viger2005network, title={Network Inference from TraceRoute Measurements: Internet Topology `Species'}, author={Fabien Viger, Alain Barrat, Luca Dall'Asta, Cun-Hui Zhang, and Eric D. Kolaczyk}, journal={Phys. Rev. E 75 (2007) 056111}, year={2005}, doi={10.1103/PhysRevE.75.056111}, archivePrefix={arXiv}, eprint={cs/0510007}, primaryClass={cs.NI cond-mat.stat-mech math.ST physics.soc-ph stat.TH} }
viger2005network
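A toy illustration (Python) of the species-problem framing above: simulate traceroute-like sampling as the union of shortest paths from a few sources to many destinations in a random graph, then compare the observed node count against a classical species-richness correction (Chao1). Chao1 stands in for the paper's subsampling estimators, which are different; this only conveys the flavor of the inference problem.

from collections import Counter
import networkx as nx
import numpy as np

rng = np.random.default_rng(4)
G = nx.gnm_random_graph(2000, 8000, seed=4)

sources = rng.choice(G.number_of_nodes(), size=5, replace=False)
dests = rng.choice(G.number_of_nodes(), size=200, replace=False)

seen = Counter()
for s in sources:
    paths = nx.single_source_shortest_path(G, int(s))
    for t in dests:
        for v in paths.get(int(t), []):
            seen[v] += 1                     # node observed on a probe path

S_obs = len(seen)
f = Counter(seen.values())                   # f[k]: nodes seen exactly k times
chao1 = S_obs + f[1] ** 2 / (2 * f[2]) if f[2] else S_obs
print("true:", G.number_of_nodes(), "observed:", S_obs, "Chao1:", round(chao1))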
arxiv-673379
cs/0510008
Accurate and robust image superresolution by neural processing of local image representations
<|reference_start|>Accurate and robust image superresolution by neural processing of local image representations: Image superresolution involves the processing of an image sequence to generate a still image with higher resolution. Classical approaches, such as Bayesian MAP methods, require iterative minimization procedures with high computational costs. Recently, the authors proposed a method to tackle this problem, based on the use of a hybrid MLP-PNN architecture. In this paper, we present a novel superresolution method, based on an evolution of this concept, to incorporate the use of local image models. A neural processing stage receives as input the value of model coefficients on local windows. The data dimensionality is first reduced by applying PCA. An MLP, trained on synthetic sequences with various amounts of noise, estimates the high-resolution image data. The effect of varying the dimension of the network input space is examined, showing a complex, structured behavior. Quantitative results are presented showing the accuracy and robustness of the proposed method.<|reference_end|>
arxiv
@article{miravet2005accurate, title={Accurate and robust image superresolution by neural processing of local image representations}, author={Carlos Miravet, Francisco B. Rodriguez}, journal={Lect. Notes Comput. Sc. 3696 (2005) 499-506}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510008}, primaryClass={cs.CV cs.NE} }
miravet2005accurate
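A schematic sketch of the pipeline described above (local windows -> PCA -> MLP), assuming scikit-learn is available. The toy target, window size, PCA dimension, and network shape are illustrative stand-ins, not the paper's local image models or training setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def patches(img, w=5):
    """All w x w low-resolution windows, flattened into rows."""
    H, W = img.shape
    return np.array([img[i:i + w, j:j + w].ravel()
                     for i in range(H - w + 1) for j in range(W - w + 1)])

# Synthetic stand-in data: windows of a random low-res image, with a toy
# per-window target in place of true high-resolution values.
lr = rng.random((40, 40))
X = patches(lr)                       # local window values
y = X.mean(axis=1)                    # toy high-res target per window

pca = PCA(n_components=8).fit(X)      # reduce window dimensionality first
Z = pca.transform(X)

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(Z, y)                         # MLP estimates targets from PCA coefficients
print(mlp.predict(pca.transform(X[:3])))
```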
arxiv-673380
cs/0510009
Tree-Based Construction of LDPC Codes Having Good Pseudocodeword Weights
<|reference_start|>Tree-Based Construction of LDPC Codes Having Good Pseudocodeword Weights: We present a tree-based construction of LDPC codes that have minimum pseudocodeword weight equal to or almost equal to the minimum distance, and perform well with iterative decoding. The construction involves enumerating a $d$-regular tree for a fixed number of layers and employing a connection algorithm based on permutations or mutually orthogonal Latin squares to close the tree. Methods are presented for degrees $d=p^s$ and $d = p^s+1$, for $p$ a prime. One class corresponds to the well-known finite-geometry and finite generalized quadrangle LDPC codes; the other codes presented are new. We also present some bounds on pseudocodeword weight for $p$-ary LDPC codes. Treating these codes as $p$-ary LDPC codes rather than binary LDPC codes improves their rates, minimum distances, and pseudocodeword weights, thereby giving a new importance to the finite geometry LDPC codes where $p > 2$.<|reference_end|>
arxiv
@article{kelley2005tree-based, title={Tree-Based Construction of LDPC Codes Having Good Pseudocodeword Weights}, author={Christine Kelley, Deepak Sridhara, and Joachim Rosenthal}, journal={arXiv preprint arXiv:cs/0510009}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510009}, primaryClass={cs.IT math.IT} }
kelley2005tree-based
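A minimal sketch of the tree-enumeration step only: building a d-regular bipartite tree of variable and check nodes layer by layer. The connection step that closes the tree via permutations or mutually orthogonal Latin squares is the substance of the paper and is omitted here; the node naming is an illustrative choice.

```python
# Enumerate a d-regular tree with alternating variable ("v") and check ("c")
# layers. The root gets d children; every other node has one parent, so it
# gets d - 1 children to keep the tree d-regular.
def enumerate_tree(d, layers):
    edges, nodes = [], [("v", 0)]          # root is a variable node
    frontier, next_id = [("v", 0)], 1
    for depth in range(layers):
        new_frontier = []
        for node in frontier:
            kind = "c" if node[0] == "v" else "v"
            fanout = d if depth == 0 else d - 1
            for _ in range(fanout):
                child = (kind, next_id)
                next_id += 1
                nodes.append(child)
                edges.append((node, child))
                new_frontier.append(child)
        frontier = new_frontier
    return nodes, edges

nodes, edges = enumerate_tree(d=3, layers=4)
print(len(nodes), len(edges))   # node/edge counts of the enumerated tree
```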
arxiv-673381
cs/0510010
On the Expressiveness of the Ambient Logic
<|reference_start|>On the Expressiveness of the Ambient Logic: The Ambient Logic (AL) has been proposed for expressing properties of process mobility in the calculus of Mobile Ambients (MA), and as a basis for query languages on semistructured data. In this paper, we study the expressiveness of AL. We define formulas for capabilities and for communication in MA. We also derive some formulas that capture finiteness of a term, name occurrences and persistence. We study extensions of the calculus involving more complex forms of communication, and we define characteristic formulas for the equivalence induced by the logic on a subcalculus of MA. This subcalculus is defined by imposing an image-finiteness condition on the reducts of an MA process.<|reference_end|>
arxiv
@article{hirschkoff2005on, title={On the Expressiveness of the Ambient Logic}, author={Daniel Hirschkoff (LIP-ENS LYON), Etienne Lozes (LSV), Davide Sangiorgi (DSI BOLOGNA)}, journal={Logical Methods in Computer Science, Volume 2, Issue 2 (March 30, 2006) lmcs:2251}, year={2005}, doi={10.2168/LMCS-2(2:3)2006}, archivePrefix={arXiv}, eprint={cs/0510010}, primaryClass={cs.LO} }
hirschkoff2005on
arxiv-673382
cs/0510011
Diophantus' 20th Problem and Fermat's Last Theorem for n=4: Formalization of Fermat's Proofs in the Coq Proof Assistant
<|reference_start|>Diophantus' 20th Problem and Fermat's Last Theorem for n=4: Formalization of Fermat's Proofs in the Coq Proof Assistant: We present the proof of Diophantus' 20th problem (book VI of Diophantus' Arithmetica), which asks whether there exist right triangles whose sides can be measured by integers and whose area is a square. This problem was solved in the negative by Fermat in the 17th century, who used the "wonderful" method (ipse dixit Fermat) of infinite descent. This method, which is, historically, the first use of induction, consists in producing smaller and smaller non-negative integer solutions assuming that one exists; this naturally leads to a reductio ad absurdum argument because we are bounded by zero. We describe the formalization of this proof, which has been carried out in the Coq proof assistant. Moreover, as a direct and no less historical application, we also provide the proof (by Fermat) of Fermat's last theorem for n=4, as well as the corresponding formalization made in Coq.<|reference_end|>
arxiv
@article{delahaye2005diophantus', title={Diophantus' 20th Problem and Fermat's Last Theorem for n=4: Formalization of Fermat's Proofs in the Coq Proof Assistant}, author={David Delahaye (CEDRIC), Micaela Mayero (LIPN)}, journal={arXiv preprint arXiv:cs/0510011}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510011}, primaryClass={cs.LO cs.SE math.NT} }
delahaye2005diophantus'
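A brute-force sanity check of the two number-theoretic statements behind the formalized proofs (via Fermat's classical reductions): x^4 + y^4 = z^2 has no positive solutions, which implies Fermat's last theorem for n = 4, and x^4 - y^4 = z^2 has none, which settles Diophantus' 20th problem. The search bound is arbitrary, and such a check is of course no substitute for the descent proofs formalized in Coq.

```python
from math import isqrt

# Verify that neither x^4 + y^4 nor x^4 - y^4 is a nonzero perfect square
# for positive integers x, y below an (illustrative) bound B.
B = 150
for x in range(1, B):
    for y in range(1, B):
        for s in (x**4 + y**4, x**4 - y**4):
            if s > 0:
                r = isqrt(s)
                assert r * r != s, (x, y, s)
print(f"verified for all 0 < x, y < {B}")
```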
arxiv-673383
cs/0510012
On relating CTL to Datalog
<|reference_start|>On relating CTL to Datalog: CTL is the dominant temporal specification language in practice, mainly due to the fact that it admits model checking in linear time. Logic programming and the database query language Datalog are often used as an implementation platform for logic languages. In this paper we present the exact relation between CTL and Datalog, and moreover we build on this relation and on known efficient algorithms for CTL to obtain efficient algorithms for fragments of stratified Datalog. The contributions of this paper are: a) We embed CTL into STD, which is a proper fragment of stratified Datalog. Moreover we show that STD expresses exactly CTL -- we prove that by embedding STD into CTL. Both embeddings are linear. b) CTL can also be embedded into fragments of Datalog without negation. We define a fragment of Datalog with the successor built-in predicate, which we call TDS, and we embed CTL into TDS in linear time. We build on the above relations to answer open problems of stratified Datalog. We prove that query evaluation is linear and that the containment and satisfiability problems are both decidable. The results presented in this paper are the first for fragments of stratified Datalog that are more general than those containing only unary EDBs.<|reference_end|>
arxiv
@article{afrati2005on, title={On relating CTL to Datalog}, author={Foto Afrati, Theodore Andronikos, Vassia Pavlaki, Eugenie Foustoucos, Irene Guessarian}, journal={arXiv preprint arXiv:cs/0510012}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510012}, primaryClass={cs.LO} }
afrati2005on
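As a concrete textbook illustration of the CTL-Datalog connection (not the paper's STD or TDS embeddings), CTL's EF modality over a Kripke structure corresponds to a recursive Datalog program, evaluated bottom-up below; the example structure is made up.

```python
# The Datalog program for EF p over edge/1-step transitions:
#     ef_p(X) :- p(X).
#     ef_p(X) :- edge(X, Y), ef_p(Y).
# We compute its least fixpoint by naive bottom-up evaluation.
edge = {("s0", "s1"), ("s1", "s2"), ("s2", "s0"), ("s1", "s3")}
p = {"s3"}                       # states where the atomic proposition holds

ef_p = set(p)                    # base rule
changed = True
while changed:                   # iterate the recursive rule to fixpoint
    changed = False
    for (x, y) in edge:
        if y in ef_p and x not in ef_p:
            ef_p.add(x)
            changed = True
print(sorted(ef_p))              # here every state satisfies EF p
```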
arxiv-673384
cs/0510013
Safe Data Sharing and Data Dissemination on Smart Devices
<|reference_start|>Safe Data Sharing and Data Dissemination on Smart Devices: The erosion of trust placed in traditional database servers, the growing interest in different forms of data dissemination and the concern about protecting children from suspicious Internet content are factors that lead to moving access control from servers to clients. Several encryption schemes can be used to serve this purpose, but all suffer from a static way of sharing data. In a previous paper, we devised smarter client-based access control managers exploiting hardware security elements on client devices. The goal pursued is to be able to evaluate dynamic and personalized access control rules on a ciphered XML input document, with the benefit of dissociating access rights from encryption. In this demonstration, we validate our solution using a real smart card platform and explain how we deal with the constraints usually met on hardware security elements (small memory and low throughput). Finally, we illustrate the generality of the approach and the ease of its deployment through two different applications: a collaborative application and a parental control application on video streams.<|reference_end|>
arxiv
@article{bouganim2005safe, title={Safe Data Sharing and Data Dissemination on Smart Devices}, author={Luc Bouganim (INRIA Rocquencourt), Cosmin Cremarenco (INRIA Rocquencourt), Fran\c{c}ois Dang Ngoc (INRIA Rocquencourt, PRISM - UVSQ), Nicolas Dieu (INRIA Rocquencourt), Philippe Pucheral (INRIA Rocquencourt, PRISM - UVSQ)}, journal={arXiv preprint arXiv:cs/0510013}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510013}, primaryClass={cs.CR cs.DB} }
bouganim2005safe
arxiv-673385
cs/0510014
Computing the Kalman form
<|reference_start|>Computing the Kalman form: We present two algorithms for the computation of the Kalman form of a linear control system. The first one is based on the technique developed by Keller-Gehrig for the computation of the characteristic polynomial. Its cost is a logarithmic number of matrix multiplications. To our knowledge, this improves the best previously known algebraic complexity by an order of magnitude. We then also present a cubic algorithm that is proven to be more efficient in practice.<|reference_end|>
arxiv
@article{pernet2005computing, title={Computing the Kalman form}, author={Cl\'ement Pernet (LMC - IMAG), Aude Rondepierre (LMC - IMAG), Gilles Villard (LIP)}, journal={arXiv preprint arXiv:cs/0510014}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510014}, primaryClass={cs.SC math.OC} }
pernet2005computing
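For readers unfamiliar with the object being computed, here is a numpy sketch of the classical approach via the controllability (Krylov) matrix; it is neither the Keller-Gehrig-based algorithm nor the specific cubic algorithm of the paper, and the example system is arbitrary.

```python
import numpy as np

def kalman_form(A, B, tol=1e-10):
    """Orthogonal change of basis putting (A, B) into Kalman form."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    K = np.hstack(blocks)                 # controllability (Krylov) matrix
    U, s, _ = np.linalg.svd(K)
    r = int(np.sum(s > tol * s[0]))       # dim of the controllable subspace;
    At, Bt = U.T @ A @ U, U.T @ B         # the first r columns of U span it
    return At, Bt, r

A = np.diag([1.0, 2.0, 3.0])
B = np.array([[1.0], [1.0], [0.0]])       # the third mode is not excited
At, Bt, r = kalman_form(A, B)
print(r)                                  # 2
print(np.round(At[r:, :r], 8), np.round(Bt[r:], 8))  # zero blocks of the form
```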
arxiv-673386
cs/0510015
Word sense disambiguation criteria: a systematic study
<|reference_start|>Word sense disambiguation criteria: a systematic study: This article describes the results of a systematic in-depth study of the criteria used for word sense disambiguation. Our study is based on 60 target words: 20 nouns, 20 adjectives and 20 verbs. Our results are not always in line with some practices in the field. For example, we show that omitting non-content words decreases performance and that bigrams yield better results than unigrams.<|reference_end|>
arxiv
@article{audibert2005word, title={Word sense disambiguation criteria: a systematic study}, author={Laurent Audibert (DELIC)}, journal={20th International Conference on Computational Linguistics (COLING-2004) (2004) pp. 910-916}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510015}, primaryClass={cs.CL} }
audibert2005word
arxiv-673387
cs/0510016
From finite-system entropy to entropy rate for a Hidden Markov Process
<|reference_start|>From finite-system entropy to entropy rate for a Hidden Markov Process: A recent result presented the expansion for the entropy rate of a Hidden Markov Process (HMP) as a power series in the noise variable $\epsilon$. The coefficients of the expansion around the noiseless ($\epsilon = 0$) limit were calculated up to 11th order, using a conjecture that relates the entropy rate of an HMP to the entropy of a process of finite length (which is calculated analytically). In this communication we generalize and prove the validity of the conjecture, and discuss the theoretical and practical consequences of our new theorem.<|reference_end|>
arxiv
@article{zuk2005from, title={From finite-system entropy to entropy rate for a Hidden Markov Process}, author={Or Zuk, Eytan Domany, Ido Kanter and Michael Aizenman}, journal={IEEE Signal Processing Letters 13,517 (2006).}, year={2005}, doi={10.1109/LSP.2006.874466}, archivePrefix={arXiv}, eprint={cs/0510016}, primaryClass={cs.IT math-ph math.IT math.MP} }
zuk2005from
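The finite-system entropies in question are easy to compute by enumeration for small n. A sketch for an illustrative binary HMP (a two-state Markov chain observed through a binary symmetric channel with flip probability eps; all parameters made up): the conditional entropies H(X_n | X_1,...,X_{n-1}) = H_n - H_{n-1} decrease toward the entropy rate as n grows.

```python
import itertools
import numpy as np

P = np.array([[0.9, 0.1], [0.2, 0.8]])    # hidden Markov chain (illustrative)
eps = 0.05                                # BSC noise: observation flips w.p. eps
pi = np.array([2 / 3, 1 / 3])             # stationary distribution of P

def seq_prob(obs):
    """P(X_1..X_n = obs) by the forward algorithm."""
    emit = lambda s, o: (1 - eps) if s == o else eps
    alpha = pi * np.array([emit(s, obs[0]) for s in (0, 1)])
    for o in obs[1:]:
        alpha = (alpha @ P) * np.array([emit(s, o) for s in (0, 1)])
    return alpha.sum()

def H(n):
    """Block entropy H(X_1..X_n) in bits, by enumeration (fine for small n)."""
    ps = np.array([seq_prob(obs) for obs in itertools.product((0, 1), repeat=n)])
    return -np.sum(ps * np.log2(ps))

# H(X_n | X_1..X_{n-1}) = H(n) - H(n-1) decreases toward the entropy rate.
for n in range(2, 11):
    print(n, H(n) - H(n - 1))
```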
arxiv-673388
cs/0510017
Partial fillup and search time in LC tries
<|reference_start|>Partial fillup and search time in LC tries: Andersson and Nilsson introduced in 1993 the level-compressed trie (in short: LC trie), in which a full subtree of a node is compressed to a single node whose degree is the size of the subtree. Recent experimental results indicated a 'dramatic improvement' when full subtrees are replaced by partially filled subtrees. In this paper, we provide a theoretical justification of these experimental results showing, among other things, a rather moderate improvement in search time over the original LC tries. For such an analysis, we assume that n strings are generated independently by a binary memoryless source with p denoting the probability of emitting a 1. We first prove that the so-called alpha-fillup level (i.e., the largest level in a trie with an alpha fraction of nodes present at this level) is concentrated on two values with high probability. We give these values explicitly up to O(1), and observe that the value of alpha (strictly between 0 and 1) does not affect the leading term. This result directly yields the typical depth (search time) in alpha-LC tries with p not equal to 1/2, which turns out to be C loglog n for an explicitly given constant C (depending on p but not on alpha). This should be compared with the recently found typical depth in the original LC tries, which is C' loglog n for a larger constant C'. The search time in alpha-LC tries is thus smaller, but of the same order as in the original LC tries.<|reference_end|>
arxiv
@article{janson2005partial, title={Partial fillup and search time in LC tries}, author={Svante Janson and Wojciech Szpankowski}, journal={arXiv preprint arXiv:cs/0510017}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510017}, primaryClass={cs.DS math.PR} }
janson2005partial
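A small empirical companion to the alpha-fillup level defined in the abstract above: generate n strings from a binary memoryless source with P(1) = p and find the largest level l at which at least an alpha fraction of the 2^l possible prefixes occur. The source parameters below are illustrative.

```python
import random

random.seed(1)
n, p, alpha, maxlen = 50000, 0.3, 0.5, 30
strings = [''.join('1' if random.random() < p else '0' for _ in range(maxlen))
           for _ in range(n)]

def fillup_level(strings, alpha, maxlen):
    """Largest level l with at least alpha * 2**l distinct prefixes present."""
    level = 0
    for l in range(1, maxlen + 1):
        present = {s[:l] for s in strings}
        if len(present) >= alpha * 2 ** l:
            level = l
        else:
            break        # the fill ratio only decreases from here in this regime
    return level

# Compare a partially filled cut (alpha < 1) with the full-fillup cut:
print(fillup_level(strings, alpha, maxlen), fillup_level(strings, 1.0, maxlen))
```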
arxiv-673389
cs/0510018
Candidate One-Way Functions and One-Way Permutations Based on Quasigroup String Transformations
<|reference_start|>Candidate One-Way Functions and One-Way Permutations Based on Quasigroup String Transformations: In this paper we propose a definition and construction of a new family of candidate one-way functions ${\cal R}_N:Q^N \to Q^N$, where $Q=\{0,1,...,s-1\}$ is an alphabet with $s$ elements. Special instances of these functions can have the additional property of being permutations (i.e. one-way permutations). These one-way functions have the property that only $n$ bits of input are needed to achieve a security level of $2^n$ computations for inverting them. The construction is based on quasigroup string transformations. Since quasigroups in general do not have algebraic properties such as associativity, commutativity or neutral elements, inverting these functions seems to require exponentially many readings from the lookup table that defines them (a Latin square) in order to check the satisfiability of the initial conditions, thus making them natural candidates for one-way functions.<|reference_end|>
arxiv
@article{gligoroski2005candidate, title={Candidate One-Way Functions and One-Way Permutations Based on Quasigroup String Transformations}, author={Danilo Gligoroski}, journal={arXiv preprint arXiv:cs/0510018}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510018}, primaryClass={cs.CR cs.CC} }
gligoroski2005candidate
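A minimal sketch of a quasigroup string e-transformation, the building block such constructions compose; the full ${\cal R}_N$ design is in the paper. The 4 x 4 Latin square below is an illustrative quasigroup on {0,1,2,3}.

```python
# An illustrative Latin square defining a quasigroup operation Q[a][b] = a * b.
Q = [[2, 1, 0, 3],
     [1, 2, 3, 0],
     [3, 0, 1, 2],
     [0, 3, 2, 1]]

def e_transform(leader, a):
    """b_1 = leader * a_1, b_i = b_{i-1} * a_i, with * the quasigroup op."""
    b, prev = [], leader
    for x in a:
        prev = Q[prev][x]
        b.append(prev)
    return b

msg = [0, 1, 2, 3, 0, 0, 1]
print(e_transform(leader=1, a=msg))
# Each row and column of Q is a permutation, so a single transformation with
# a known leader is invertible; the one-way candidates iterate and combine
# such transformations so that inversion appears to require exponentially
# many lookups in the Latin square, as the abstract explains.
```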
arxiv-673390
cs/0510019
Entropy based Nearest Neighbor Search in High Dimensions
<|reference_start|>Entropy based Nearest Neighbor Search in High Dimensions: In this paper we study the problem of finding the approximate nearest neighbor of a query point in a high dimensional space, focusing on the Euclidean space. Earlier approaches use locality-preserving hash functions (that tend to map nearby points to the same value) to construct several hash tables to ensure that the query point hashes to the same bucket as its nearest neighbor in at least one table. Our approach is different -- we use one (or a few) hash tables and hash several randomly chosen points in the neighborhood of the query point, showing that at least one of them will hash to the bucket containing its nearest neighbor. We show that the number of randomly chosen points in the neighborhood of the query point $q$ required depends on the entropy of the hash value $h(p)$ of a random point $p$ at the same distance from $q$ as its nearest neighbor, given $q$ and the locality preserving hash function $h$ chosen randomly from the hash family. Precisely, we show that if the entropy $I(h(p)|q,h) = M$ and $g$ is a bound on the probability that two far-off points will hash to the same bucket, then we can find the approximate nearest neighbor in $O(n^\rho)$ time and near linear $\tilde O(n)$ space, where $\rho = M/\log(1/g)$. Alternatively, we can build a data structure of size $\tilde O(n^{1/(1-\rho)})$ to answer queries in $\tilde O(d)$ time. By applying this analysis to locality preserving hash functions and adjusting the parameters, we show that the $c$-approximate nearest neighbor can be computed in time $\tilde O(n^\rho)$ and near linear space, where $\rho \approx 2.06/c$ as $c$ becomes large.<|reference_end|>
arxiv
@article{panigrahy2005entropy, title={Entropy based Nearest Neighbor Search in High Dimensions}, author={Rina Panigrahy}, journal={arXiv preprint arXiv:cs/0510019}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510019}, primaryClass={cs.DS} }
panigrahy2005entropy
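A toy sketch of the query strategy described above: a single random-hyperplane LSH table, with queries that also hash several random points sampled near the query point, hoping one lands in the bucket of the true nearest neighbor. The number of bits, perturbation radius, and probe count are illustrative, not the paper's tuned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, bits = 32, 2000, 12
data = rng.normal(size=(n, d))
planes = rng.normal(size=(bits, d))        # random-hyperplane LSH

def h(x):
    return tuple((planes @ x > 0).astype(int))

table = {}
for i, x in enumerate(data):
    table.setdefault(h(x), []).append(i)   # one hash table, no duplication

def query(q, probes=30, radius=0.5):
    cands = set(table.get(h(q), []))
    for _ in range(probes):                # hash perturbed copies of q
        cands |= set(table.get(h(q + radius * rng.normal(size=d)), []))
    if not cands:
        return None
    cands = list(cands)
    dist = np.linalg.norm(data[cands] - q, axis=1)
    return cands[int(np.argmin(dist))]

q = data[7] + 0.1 * rng.normal(size=d)
print(query(q), 7)   # usually recovers index 7 (or a closer point)
```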
arxiv-673391
cs/0510020
Sur le statut référentiel des entités nommées
<|reference_start|>Sur le statut référentiel des entités nommées: We show in this paper that, on the one hand, named entities can be designated using different denominations and, on the other hand, that names denoting named entities are polysemous. The analysis cannot be limited to reference resolution but should take into account naming strategies, which are mainly based on two linguistic operations: synecdoche and metonymy. Lastly, we present a model that explicitly represents the different denominations in discourse, unifying the representation of linguistic knowledge and world knowledge.<|reference_end|>
arxiv
@article{poibeau2005sur, title={Sur le statut r\'{e}f\'{e}rentiel des entit\'{e}s nomm\'{e}es}, author={Thierry Poibeau (LIPN)}, journal={Conf\'{e}rence Traitement Automatique des Langues 2005, France (2005)}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510020}, primaryClass={cs.AI cs.IR} }
poibeau2005sur
arxiv-673392
cs/0510021
A Unified Power Control Algorithm for Multiuser Detectors in Large Systems: Convergence and Performance
<|reference_start|>A Unified Power Control Algorithm for Multiuser Detectors in Large Systems: Convergence and Performance: A unified approach to energy-efficient power control, applicable to a large family of receivers including the matched filter, the decorrelator, the (linear) minimum-mean-square-error detector (MMSE), and the individually and jointly optimal multiuser detectors, has recently been proposed for code-division-multiple-access (CDMA) networks. This unified power control (UPC) algorithm exploits the linear relationship that has been shown to exist between the transmit power and the output signal-to-interference-plus-noise ratio (SIR) in large systems. Based on this principle and by computing the multiuser efficiency, the UPC algorithm updates the users' transmit powers in an iterative way to achieve the desired target SIR. In this paper, the convergence of the UPC algorithm is proved for the matched filter, the decorrelator, and the MMSE detector. In addition, the performance of the algorithm in finite-size systems is studied and compared with that of existing power control schemes. The UPC algorithm is particularly suitable for systems with randomly generated long spreading sequences (i.e., sequences whose period is longer than one symbol duration).<|reference_end|>
arxiv
@article{meshkati2005a, title={A Unified Power Control Algorithm for Multiuser Detectors in Large Systems: Convergence and Performance}, author={Farhad Meshkati, H. Vincent Poor, Stuart C. Schwartz and Dongning Guo}, journal={arXiv preprint arXiv:cs/0510021}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510021}, primaryClass={cs.IT math.IT} }
meshkati2005a
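A sketch of SIR-balancing power control for a matched-filter CDMA uplink, showing the fixed-point-iteration idea behind such schemes; the UPC algorithm of the paper instead updates powers via the multiuser efficiency, uniformly across receivers. Gains, noise, target SIR, and spreading gain below are illustrative.

```python
import numpy as np

K, N = 8, 64                       # users, processing gain
rng = np.random.default_rng(0)
hgain = rng.uniform(0.1, 1.0, K)   # channel gains
sigma2, gamma_t = 1e-2, 5.0        # noise power, target SIR

def sir(p):
    # Matched filter with random spreading: cross-interference scales as 1/N.
    total = hgain * p
    interf = (total.sum() - total) / N
    return total / (sigma2 + interf)

p = np.ones(K)
for _ in range(100):
    p = p * gamma_t / sir(p)       # raise/lower power toward the target SIR
print(np.round(sir(p), 3))         # converges to gamma_t when feasible
```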
arxiv-673393
cs/0510022
Cryptographic Authentication of Navigation Protocols
<|reference_start|>Cryptographic Authentication of Navigation Protocols: We examine the security of existing radio navigation protocols and attempt to define secure, scalable replacements.<|reference_end|>
arxiv
@article{bretheim2005cryptographic, title={Cryptographic Authentication of Navigation Protocols}, author={Sam Bretheim}, journal={arXiv preprint arXiv:cs/0510022}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510022}, primaryClass={cs.CR} }
bretheim2005cryptographic
arxiv-673394
cs/0510023
On the capacity of mobile ad hoc networks with delay constraints
<|reference_start|>On the capacity of mobile ad hoc networks with delay constraints: Previous work on ad hoc network capacity has focused primarily on source-destination throughput requirements for different models and transmission scenarios, with an emphasis on delay tolerant applications. In such problems, network capacity enhancement is achieved as a tradeoff with transmission delay. In this paper, the capacity of ad hoc networks supporting delay sensitive traffic is studied. First, a general framework is proposed for characterizing the interactions between the physical and the network layer in an ad hoc network. Then, CDMA ad hoc networks, in which advanced signal processing techniques such as multiuser detection are relied upon to enhance the user capacity, are analyzed. The network capacity is characterized using a combination of geometric arguments and large scale analysis, for several network scenarios employing matched filters, decorrelators and minimum-mean-square-error receivers. Insight into the network performance for finite systems is also provided by means of simulations. Both analysis and simulations show a significant network capacity gain for ad hoc networks employing multiuser detectors, compared with those using matched filter receivers, as well as very good performance even under tight delay and transmission power requirements.<|reference_end|>
arxiv
@article{comaniciu2005on, title={On the capacity of mobile ad hoc networks with delay constraints}, author={Cristina Comaniciu and H. Vincent Poor}, journal={arXiv preprint arXiv:cs/0510023}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510023}, primaryClass={cs.IT cs.NI math.IT} }
comaniciu2005on
arxiv-673395
cs/0510024
Delta-confluent Drawings
<|reference_start|>Delta-confluent Drawings: We generalize tree-confluent graphs to a broader class of graphs called Delta-confluent graphs. This class of graphs coincides with distance-hereditary graphs, a well-known class of graphs. Some results about the visualization of Delta-confluent graphs are also given.<|reference_end|>
arxiv
@article{eppstein2005delta-confluent, title={Delta-confluent Drawings}, author={David Eppstein, Michael T. Goodrich, and Jeremy Yu Meng}, journal={arXiv preprint arXiv:cs/0510024}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510024}, primaryClass={cs.CG} }
eppstein2005delta-confluent
arxiv-673396
cs/0510025
Practical Semantic Analysis of Web Sites and Documents
<|reference_start|>Practical Semantic Analysis of Web Sites and Documents: As Web sites are now ordinary products, it is necessary to make the notion of the quality of a Web site explicit. The quality of a site may be linked to its ease of access, and also to other criteria such as the fact that the site is up to date and coherent. This last quality is difficult to ensure because sites may be updated very frequently, may have many authors, and may be partially generated, and in this context proof-reading is very difficult. The same piece of information may be found in different occurrences, but also in data or metadata, leading to the need for consistency checking. In this paper we draw a parallel between programs and Web sites. We present some examples of semantic constraints that one would like to specify (constraints between the meaning of categories and sub-categories in a thematic directory, consistency between the organization chart and the rest of the site in an academic site). We briefly present Natural Semantics, a way to specify the semantics of programming languages that inspires our work. We then propose a specification language for semantic constraints in Web sites that, in conjunction with the well-known ``make'' program, makes it possible to generate site verification tools by compiling the specification into Prolog code. We apply our method to a large XML document, the scientific part of our institute's activity report, tracking errors or inconsistencies and also constructing indicators that can be used by the management of the institute.<|reference_end|>
arxiv
@article{despeyroux2005practical, title={Practical Semantic Analysis of Web Sites and Documents}, author={Thierry Despeyroux (INRIA Rocquencourt / INRIA Sophia Antipolis)}, journal={arXiv preprint arXiv:cs/0510025}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510025}, primaryClass={cs.IR} }
despeyroux2005practical
arxiv-673397
cs/0510026
A decision support system for ship identification based on the curvature scale space representation
<|reference_start|>A decision support system for ship identification based on the curvature scale space representation: In this paper, a decision support system for ship identification is presented. The system receives as input a silhouette of the vessel to be identified, previously extracted from a side view of the object. This view could have been acquired with imaging sensors operating in different spectral ranges (CCD, FLIR, image intensifier). The input silhouette is preprocessed and compared to those stored in a database, retrieving a small number of potential matches ranked by their similarity to the target silhouette. This set of potential matches is presented to the system operator, who makes the final ship identification. The system makes use of an evolved version of the Curvature Scale Space (CSS) representation. In the proposed approach, it is curvature extrema, instead of zero crossings, that are tracked during silhouette evolution, hence improving robustness and making it possible to cope successfully with cases where the standard CSS representation is found to be unstable. Also, the use of local curvature was replaced with the more robust concept of lobe concavity, with significant additional gains in performance. Experimental results on actual operational imagery demonstrate the excellent performance and robustness of the developed method.<|reference_end|>
arxiv
@article{de luna2005a, title={A decision support system for ship identification based on the curvature scale space representation}, author={Alvaro Enriquez de Luna, Carlos Miravet, Deitze Otaduy, Carlos Dorronsoro}, journal={arXiv preprint arXiv:cs/0510026}, year={2005}, doi={10.1117/12.630532}, archivePrefix={arXiv}, eprint={cs/0510026}, primaryClass={cs.CV} }
de luna2005a
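A sketch of the curvature scale space machinery underlying such systems: smooth a closed contour with Gaussians of increasing sigma and compute the curvature along it. The paper tracks curvature extrema rather than zero crossings, and further replaces curvature with lobe concavity; the toy contour and scales below are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
x = np.cos(t) * (1 + 0.3 * np.cos(5 * t))      # a toy lobed closed silhouette
y = np.sin(t) * (1 + 0.3 * np.cos(5 * t))

def curvature(x, y, sigma):
    """Curvature of the contour after Gaussian smoothing at scale sigma."""
    xs = gaussian_filter1d(x, sigma, mode="wrap")
    ys = gaussian_filter1d(y, sigma, mode="wrap")
    x1, y1 = np.gradient(xs), np.gradient(ys)
    x2, y2 = np.gradient(x1), np.gradient(y1)
    return (x1 * y2 - y1 * x2) / (x1**2 + y1**2) ** 1.5

for sigma in (1, 4, 16):
    k = curvature(x, y, sigma)
    # Count curvature sign changes (CSS zero crossings) at this scale;
    # they vanish as smoothing convexifies the contour.
    print(sigma, int(np.sum(np.sign(k[:-1]) != np.sign(k[1:]))))
```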
arxiv-673398
cs/0510027
A Market Test for the Positivity of Arrow-Debreu Prices
<|reference_start|>A Market Test for the Positivity of Arrow-Debreu Prices: We derive tractable necessary and sufficient conditions for the absence of buy-and-hold arbitrage opportunities in a perfectly liquid, one-period market. We formulate the positivity of Arrow-Debreu prices as a generalized moment problem to show that this no-arbitrage condition is equivalent to the positive semidefiniteness of matrices formed by the market prices of tradeable securities and their products. We apply this result to a market with multiple assets and basket call options.<|reference_end|>
arxiv
@article{d'aspremont2005a, title={A Market Test for the Positivity of Arrow-Debreu Prices}, author={Alexandre d'Aspremont}, journal={arXiv preprint arXiv:cs/0510027}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510027}, primaryClass={cs.CE} }
d'aspremont2005a
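A toy numerical companion to the test described above: checking positive semidefiniteness of a symmetric matrix assembled from quoted prices via its eigenvalues. The matrix below is an illustrative stand-in, not a construction taken from the paper.

```python
import numpy as np

# Stand-in for a symmetric matrix of security prices and prices of their
# pairwise products (illustrative numbers only).
M = np.array([[1.00, 0.48, 0.52],
              [0.48, 0.30, 0.20],
              [0.52, 0.20, 0.35]])

eigs = np.linalg.eigvalsh(M)
# For the matrices constructed in the paper, PSD is equivalent to the
# absence of buy-and-hold arbitrage; here we just perform the PSD check.
print(eigs, bool(np.all(eigs >= -1e-12)))
```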
arxiv-673399
cs/0510028
Geo-aggregation permits low stretch and routing tables of logarithmical size
<|reference_start|>Geo-aggregation permits low stretch and routing tables of logarithmical size: This article first addresses the applicability of Euclidean models to the domain of Internet routing. Those models are found to be applicable, within limits. Then a simplistic model of routing is constructed for a Euclidean plane densely covered with points (routers). The model guarantees low stretch and routing tables of logarithmic size at any node. The paper concludes with a discussion of the applicability of the model to real-world Internet routing.<|reference_end|>
arxiv
@article{grishchenko2005geo-aggregation, title={Geo-aggregation permits low stretch and routing tables of logarithmical size}, author={Victor S. Grishchenko}, journal={arXiv preprint arXiv:cs/0510028}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510028}, primaryClass={cs.NI} }
grishchenko2005geo-aggregation
arxiv-673400
cs/0510029
Conditionally independent random variables
<|reference_start|>Conditionally independent random variables: In this paper we investigate the notion of conditional independence and prove several information inequalities for conditionally independent random variables.<|reference_end|>
arxiv
@article{makarychev2005conditionally, title={Conditionally independent random variables}, author={Konstantin Makarychev and Yury Makarychev}, journal={arXiv preprint arXiv:cs/0510029}, year={2005}, archivePrefix={arXiv}, eprint={cs/0510029}, primaryClass={cs.IT math.IT} }
makarychev2005conditionally