Fields per record: corpus_id, paper_id, title, abstract, source, bibtex, citation_key
arxiv-669301
cond-mat/0508125
Criticality and Universality in the Unit-Propagation Search Rule
<|reference_start|>Criticality and Universality in the Unit-Propagation Search Rule: The probability Psuccess(alpha, N) that stochastic greedy algorithms successfully solve the random SATisfiability problem is studied as a function of the ratio alpha of constraints per variable and the number N of variables. These algorithms assign variables according to the unit-propagation (UP) rule in the presence of constraints involving a unique variable (1-clauses), and to some heuristic (H) prescription otherwise. In the infinite N limit, Psuccess vanishes at some critical ratio alpha_H which depends on the heuristic H. We show that the critical behaviour is determined by the UP rule only. In the case where only constraints with 2 and 3 variables are present, we give the phase diagram and identify two universality classes: the power law class, where Psuccess[alpha_H (1+epsilon N^{-1/3}), N] ~ A(epsilon)/N^gamma; the stretched exponential class, where Psuccess[alpha_H (1+epsilon N^{-1/3}), N] ~ exp[-N^{1/6} Phi(epsilon)]. Which class is selected depends on the characteristic parameters of the input data. The critical exponent gamma is universal and calculated; the scaling functions A and Phi weakly depend on the heuristic H and are obtained from the solutions of reaction-diffusion equations for 1-clauses. Computation of some non-universal corrections allows us to match numerical results with good precision. The critical behaviour for constraints with >3 variables is given. Our results are interpreted in terms of dynamical graph percolation and we argue that they should apply to more general situations where UP is used.<|reference_end|>
arxiv
@article{deroulers2005criticality, title={Criticality and Universality in the Unit-Propagation Search Rule}, author={Christophe Deroulers (LPTENS) and R\'emi Monasson (LPTENS)}, journal={European Physical Journal B 49 (2006) 339-369}, year={2005}, doi={10.1140/epjb/e2006-00072-6}, number={LPTENS-05/24}, archivePrefix={arXiv}, eprint={cond-mat/0508125}, primaryClass={cond-mat.stat-mech cs.CC} }
deroulers2005criticality
arxiv-669302
cond-mat/0508152
A Universal Scaling Theory for Complexity of Analog Computation
<|reference_start|>A Universal Scaling Theory for Complexity of Analog Computation: We discuss the computational complexity of solving linear programming problems by means of an analog computer. The latter is modeled by a dynamical system which converges to the optimal vertex solution. We analyze various probability ensembles of linear programming problems. For each one of these we obtain numerically the probability distribution functions of certain quantities which measure the complexity. Remarkably, in the asymptotic limit of very large problems, each of these probability distribution functions reduces to a universal scaling function, depending on a single scaling variable and independent of the details of its parent probability ensemble. These functions are reminiscent of the scaling functions familiar in the theory of phase transitions. The results reported here extend analytical and numerical results obtained recently for the Gaussian ensemble.<|reference_end|>
arxiv
@article{avizrats2005a, title={A Universal Scaling Theory for Complexity of Analog Computation}, author={Yaniv S. Avizrats and Joshua Feinberg and Shmuel Fishman}, journal={arXiv preprint arXiv:cond-mat/0508152}, year={2005}, archivePrefix={arXiv}, eprint={cond-mat/0508152}, primaryClass={cond-mat.other cs.CC} }
avizrats2005a
arxiv-669303
cond-mat/0508216
Cluster Variation Method in Statistical Physics and Probabilistic Graphical Models
<|reference_start|>Cluster Variation Method in Statistical Physics and Probabilistic Graphical Models: The cluster variation method (CVM) is a hierarchy of approximate variational techniques for discrete (Ising--like) models in equilibrium statistical mechanics, improving on the mean--field approximation and the Bethe--Peierls approximation, which can be regarded as the lowest level of the CVM. In recent years it has been applied both in statistical physics and to inference and optimization problems formulated in terms of probabilistic graphical models. The foundations of the CVM are briefly reviewed, and the relations with similar techniques are discussed. The main properties of the method are considered, with emphasis on its exactness for particular models and on its asymptotic properties. The problem of the minimization of the variational free energy, which arises in the CVM, is also addressed, and recent results about both provably convergent and message-passing algorithms are discussed.<|reference_end|>
arxiv
@article{pelizzola2005cluster, title={Cluster Variation Method in Statistical Physics and Probabilistic Graphical Models}, author={Alessandro Pelizzola}, journal={J. Phys. A 38, R309 (2005)}, year={2005}, doi={10.1088/0305-4470/38/33/R01}, archivePrefix={arXiv}, eprint={cond-mat/0508216}, primaryClass={cond-mat.stat-mech cs.IT math.IT} }
pelizzola2005cluster
arxiv-669304
cond-mat/0509102
k-core organization of complex networks
<|reference_start|>k-core organization of complex networks: We analytically describe the architecture of randomly damaged uncorrelated networks as a set of successively enclosed substructures -- k-cores. The k-core is the largest subgraph where vertices have at least k interconnections. We find the structure of k-cores, their sizes, and their birth points -- the bootstrap percolation thresholds. We show that in networks with a finite mean number z_2 of the second-nearest neighbors, the emergence of a k-core is a hybrid phase transition. In contrast, if z_2 diverges, the networks contain an infinite sequence of k-cores which are ultra-robust against random damage.<|reference_end|>
arxiv
@article{dorogovtsev2005k-core, title={k-core organization of complex networks}, author={S.N. Dorogovtsev and A.V. Goltsev and J.F.F. Mendes}, journal={Phys. Rev. Lett. 96, 040601 (2006)}, year={2005}, doi={10.1103/PhysRevLett.96.040601}, archivePrefix={arXiv}, eprint={cond-mat/0509102}, primaryClass={cond-mat.stat-mech cs.NI math-ph math.MP physics.soc-ph} }
dorogovtsev2005k-core
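The k-core defined in the abstract above — the largest subgraph in which every vertex keeps at least k neighbors — can be computed by iterative pruning. A minimal sketch (function name and graph representation are illustrative, not from the paper):

```python
from collections import defaultdict

def k_core(edges, k):
    # Build an adjacency map, then repeatedly delete any vertex whose
    # degree has dropped below k; the surviving vertices form the k-core.
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v in adj and len(adj[v]) < k:
                for u in adj[v]:
                    if u in adj:
                        adj[u].discard(v)
                del adj[v]
                changed = True
    return set(adj)
```

On a triangle with one pendant vertex, the 2-core is the triangle and the 3-core is empty, matching the "successively enclosed substructures" picture: each k-core contains the (k+1)-core.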
arxiv-669305
cond-mat/0510064
Brownian Functionals in Physics and Computer Science
<|reference_start|>Brownian Functionals in Physics and Computer Science: This is a brief review on Brownian functionals in one dimension and their various applications, a contribution to the special issue ``The Legacy of Albert Einstein" of Current Science. After a brief description of Einstein's original derivation of the diffusion equation, this article provides a pedagogical introduction to the path integral methods leading to the derivation of the celebrated Feynman-Kac formula. The usefulness of this technique in calculating the statistical properties of Brownian functionals is illustrated with several examples in physics and probability theory, with particular emphasis on applications in computer science. The statistical properties of "first-passage Brownian functionals" and their applications are also discussed.<|reference_end|>
arxiv
@article{majumdar2005brownian, title={Brownian Functionals in Physics and Computer Science}, author={Satya N. Majumdar}, journal={Current Science, vol-89, 2076 (2005).}, year={2005}, archivePrefix={arXiv}, eprint={cond-mat/0510064}, primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.DS} }
majumdar2005brownian
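As a toy illustration of the Brownian functionals reviewed above, the occupation time (the fraction of time a Brownian path spends above the origin) follows Levy's arcsine law, whose density piles probability near 0 and 1 rather than near 1/2. A hedged Monte Carlo sketch, using a discrete Gaussian random walk as a stand-in for Brownian motion (all names illustrative):

```python
import random

def occupation_fraction(n_steps, rng):
    # Fraction of steps a symmetric random walk spends above the origin;
    # for long walks this approximates the Brownian occupation-time functional.
    x, above = 0.0, 0
    for _ in range(n_steps):
        x += rng.gauss(0.0, 1.0)
        if x > 0:
            above += 1
    return above / n_steps

rng = random.Random(0)
fracs = [occupation_fraction(200, rng) for _ in range(2000)]
mean_frac = sum(fracs) / len(fracs)
# By symmetry the mean is 1/2, yet the arcsine density 1/(pi*sqrt(x(1-x)))
# makes extreme fractions far more likely than balanced ones.
tails = sum(1 for f in fracs if f < 0.1 or f > 0.9)
middle = sum(1 for f in fracs if 0.45 <= f <= 0.55)
```

The counterintuitive point the review emphasizes survives even in this crude simulation: paths spending nearly all their time on one side vastly outnumber the "fair" ones.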
arxiv-669306
cond-mat/0510429
Phase Transition in the Aldous-Shields Model of Growing Trees
<|reference_start|>Phase Transition in the Aldous-Shields Model of Growing Trees: We study analytically the late time statistics of the number of particles in a growing tree model introduced by Aldous and Shields. In this model, a cluster grows in continuous time on a binary Cayley tree, starting from the root, by absorbing new particles at the empty perimeter sites at a rate proportional to c^{-l} where c is a positive parameter and l is the distance of the perimeter site from the root. For c=1, this model corresponds to random binary search trees and for c=2 it corresponds to digital search trees in computer science. By introducing a backward Fokker-Planck approach, we calculate the mean and the variance of the number of particles at large times and show that the variance undergoes a `phase transition' at a critical value c=sqrt{2}. While for c>sqrt{2} the variance is proportional to the mean and the distribution is normal, for c<sqrt{2} the variance is anomalously large and the distribution is non-Gaussian due to the appearance of extreme fluctuations. The model is generalized to one where growth occurs on a tree with $m$ branches and, in this more general case, we show that the critical point occurs at c=sqrt{m}.<|reference_end|>
arxiv
@article{dean2005phase, title={Phase Transition in the Aldous-Shields Model of Growing Trees}, author={David S. Dean and Satya N. Majumdar}, journal={J. Stat. Phys 124, 1351 (2006).}, year={2005}, doi={10.1007/s10955-006-9193-9}, archivePrefix={arXiv}, eprint={cond-mat/0510429}, primaryClass={cond-mat.stat-mech cs.DS math.PR} }
dean2005phase
arxiv-669307
cond-mat/0511159
Learning by message-passing in networks of discrete synapses
<|reference_start|>Learning by message-passing in networks of discrete synapses: We show that a message-passing process allows one to store in binary "material" synapses a number of random patterns which almost saturates the information theoretic bounds. We apply the learning algorithm to networks characterized by a wide range of different connection topologies and of size comparable with that of biological systems (e.g. $n\simeq10^{5}-10^{6}$). The algorithm can be turned into an on-line, fault-tolerant learning protocol of potential interest in modeling aspects of synaptic plasticity and in building neuromorphic devices.<|reference_end|>
arxiv
@article{braunstein2005learning, title={Learning by message-passing in networks of discrete synapses}, author={Alfredo Braunstein and Riccardo Zecchina}, journal={Phys. Rev. Lett. 96, 030201 (2006)}, year={2005}, doi={10.1103/PhysRevLett.96.030201}, archivePrefix={arXiv}, eprint={cond-mat/0511159}, primaryClass={cond-mat.dis-nn cs.LG q-bio.NC} }
braunstein2005learning
arxiv-669308
cond-mat/0512017
Combinatorial Information Theory: I. Philosophical Basis of Cross-Entropy and Entropy
<|reference_start|>Combinatorial Information Theory: I Philosophical Basis of Cross-Entropy and Entropy: This study critically analyses the information-theoretic, axiomatic and combinatorial philosophical bases of the entropy and cross-entropy concepts. The combinatorial basis is shown to be the most fundamental (most primitive) of these three bases, since it gives (i) a derivation for the Kullback-Leibler cross-entropy and Shannon entropy functions, as simplified forms of the multinomial distribution subject to the Stirling approximation; (ii) an explanation for the need to maximize entropy (or minimize cross-entropy) to find the most probable realization; and (iii) new, generalized definitions of entropy and cross-entropy - supersets of the Boltzmann principle - applicable to non-multinomial systems. The combinatorial basis is therefore of much broader scope, with far greater power of application, than the information-theoretic and axiomatic bases. The generalized definitions underpin a new discipline of ``{\it combinatorial information theory}'', for the analysis of probabilistic systems of any type. Jaynes' generic formulation of statistical mechanics for multinomial systems is re-examined in light of the combinatorial approach. (abbreviated abstract)<|reference_end|>
arxiv
@article{niven2005combinatorial, title={Combinatorial Information Theory: I. Philosophical Basis of Cross-Entropy and Entropy}, author={Robert K. Niven}, journal={arXiv preprint arXiv:cond-mat/0512017}, year={2005}, archivePrefix={arXiv}, eprint={cond-mat/0512017}, primaryClass={cond-mat.stat-mech cs.IT math-ph math.IT math.MP physics.data-an} }
niven2005combinatorial
arxiv-669309
cond-mat/0601021
Characterizing correlations of flow oscillations at bottlenecks
<|reference_start|>Characterizing correlations of flow oscillations at bottlenecks: "Oscillations" occur in quite different kinds of many-particle-systems when two groups of particles with different directions of motion meet or intersect at a certain spot. We present a model of pedestrian motion that is able to reproduce oscillations with different characteristics. The Wald-Wolfowitz test and Gillis' correlated random walk are shown to hold observables that can be used to characterize different kinds of oscillations.<|reference_end|>
arxiv
@article{kretz2006characterizing, title={Characterizing correlations of flow oscillations at bottlenecks}, author={Tobias Kretz and Marko Woelki and Michael Schreckenberg}, journal={J. Stat. Mech. (2006) P02005}, year={2006}, doi={10.1088/1742-5468/2006/02/P02005}, archivePrefix={arXiv}, eprint={cond-mat/0601021}, primaryClass={cond-mat.stat-mech cs.MA} }
kretz2006characterizing
arxiv-669310
cond-mat/0601487
Loop Calculus in Statistical Physics and Information Science
<|reference_start|>Loop Calculus in Statistical Physics and Information Science: Considering a discrete and finite statistical model of a general position we introduce an exact expression for the partition function in terms of a finite series. The leading term in the series is the Bethe-Peierls (Belief Propagation)-BP contribution, the rest are expressed as loop-contributions on the factor graph and calculated directly using the BP solution. The series unveils a small parameter that often makes the BP approximation so successful. Applications of the loop calculus in statistical physics and information science are discussed.<|reference_end|>
arxiv
@article{chertkov2006loop, title={Loop Calculus in Statistical Physics and Information Science}, author={Michael Chertkov and Vladimir Y. Chernyak}, journal={Phys. Rev. E 73, 065102(R) (2006)}, year={2006}, doi={10.1103/PhysRevE.73.065102}, number={LAUR-06-0443}, archivePrefix={arXiv}, eprint={cond-mat/0601487}, primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.IT math.IT} }
chertkov2006loop
arxiv-669311
cond-mat/0601573
Amorphous packings of hard spheres in large space dimension
<|reference_start|>Amorphous packings of hard spheres in large space dimension: In a recent paper (cond-mat/0506445) we derived an expression for the replicated free energy of a liquid of hard spheres based on the HNC free energy functional. An approximate equation of state for the glass and an estimate of the random close packing density were obtained in d=3. Here we show that the HNC approximation is not needed: the same expression can be obtained from the full diagrammatic expansion of the replicated free energy. Then, we consider the asymptotics of this expression when the space dimension d is very large. In this limit, the entropy of the hard sphere liquid has been computed exactly. Using this solution, we derive asymptotic expressions for the glass transition density and for the random close packing density for hard spheres in large space dimension.<|reference_end|>
arxiv
@article{parisi2006amorphous, title={Amorphous packings of hard spheres in large space dimension}, author={G. Parisi and F. Zamponi}, journal={J. Stat. Mech. (2006) P03017}, year={2006}, doi={10.1088/1742-5468/2006/03/P03017}, archivePrefix={arXiv}, eprint={cond-mat/0601573}, primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.IT math.GM math.IT} }
parisi2006amorphous
arxiv-669312
cond-mat/0602183
Nonlinear parametric model for Granger causality of time series
<|reference_start|>Nonlinear parametric model for Granger causality of time series: We generalize a previously proposed approach for nonlinear Granger causality of time series, based on radial basis function. The proposed model is not constrained to be additive in variables from the two time series and can approximate any function of these variables, still being suitable to evaluate causality. Usefulness of this measure of causality is shown in a physiological example and in the study of the feed-back loop in a model of excitatory and inhibitory neurons.<|reference_end|>
arxiv
@article{marinazzo2006nonlinear, title={Nonlinear parametric model for Granger causality of time series}, author={Daniele Marinazzo and Mario Pellicoro and Sebastiano Stramaglia}, journal={arXiv preprint arXiv:cond-mat/0602183}, year={2006}, doi={10.1103/PhysRevE.73.066216}, archivePrefix={arXiv}, eprint={cond-mat/0602183}, primaryClass={cond-mat.dis-nn cond-mat.stat-mech cs.LG physics.med-ph q-bio.QM} }
marinazzo2006nonlinear
arxiv-669313
cond-mat/0602345
Numerical Modeling of Coexistence, Competition and Collapse of Rotating Spiral Waves in Three-Level Excitable Media with Discrete Active Centers and Absorbing Boundaries
<|reference_start|>Numerical Modeling of Coexistence, Competition and Collapse of Rotating Spiral Waves in Three-Level Excitable Media with Discrete Active Centers and Absorbing Boundaries: Spatio-temporal dynamics of excitable media with discrete three-level active centers (ACs) and absorbing boundaries is studied numerically by means of a deterministic three-level model (see S. D. Makovetskiy and D. N. Makovetskii, on-line preprint cond-mat/0410460), which is a generalization of the Zykov-Mikhailov model (see Sov. Phys. -- Doklady, 1986, Vol.31, No.1, P.51) for the case of two-channel diffusion of excitations. In particular, we revealed some qualitatively new features of coexistence, competition and collapse of rotating spiral waves (RSWs) in three-level excitable media under conditions of strong influence of the second channel of diffusion. Some of these features are caused by an unusual mechanism of RSW evolution when RSW cores get into the surface layer of an active medium (i.e. the layer of ACs residing at the absorbing boundary). Instead of the well-known scenario of RSW collapse, which takes place after collision of an RSW core with an absorbing boundary, we observed complicated transformations of the core leading to nonlinear "reflection" of the RSW from the boundary or even to the birth of several new RSWs in the surface layer. To our knowledge, such nonlinear "reflections" of RSWs and the resulting die-hard vorticity in excitable media with absorbing boundaries were unknown earlier. ACM classes: F.1.1, I.6, J.2; PACS numbers: 05.65.+b, 07.05.Tp, 82.20.Wt<|reference_end|>
arxiv
@article{makovetskiy2006numerical, title={Numerical Modeling of Coexistence, Competition and Collapse of Rotating Spiral Waves in Three-Level Excitable Media with Discrete Active Centers and Absorbing Boundaries}, author={S. D. Makovetskiy}, journal={arXiv preprint arXiv:cond-mat/0602345}, year={2006}, archivePrefix={arXiv}, eprint={cond-mat/0602345}, primaryClass={cond-mat.other cs.NE nlin.CG} }
makovetskiy2006numerical
arxiv-669314
cond-mat/0602351
Synchronization in Network Structures: Entangled Topology as Optimal Architecture for Network Design
<|reference_start|>Synchronization in Network Structures: Entangled Topology as Optimal Architecture for Network Design: In these notes we study synchronizability of dynamical processes defined on complex networks as well as its interplay with network topology. Building from a recent work by Barahona and Pecora [Phys. Rev. Lett. 89, 054101 (2002)], we use a simulated annealing algorithm to construct optimally-synchronizable networks. The resulting structures, known as entangled networks, are characterized by an extremely homogeneous and interwoven topology: degree, distance, and betweenness distributions are all very narrow, with short average distances, large loops, and small modularity. Entangled networks exhibit an excellent (almost optimal) performance with respect to other flow or connectivity properties such as robustness, random walk minimal first-passage times, and good searchability. All this converts entangled networks in a powerful concept with optimal properties in many respects.<|reference_end|>
arxiv
@article{donetti2006synchronization, title={Synchronization in Network Structures: Entangled Topology as Optimal Architecture for Network Design}, author={Luca Donetti and Pablo I. Hurtado and Miguel A. Munoz}, journal={Lecture Notes in Computer Science 3993, 1075 (2006)}, year={2006}, archivePrefix={arXiv}, eprint={cond-mat/0602351}, primaryClass={cond-mat.dis-nn cs.NI} }
donetti2006synchronization
arxiv-669315
cond-mat/0602611
k-core (bootstrap) percolation on complex networks: Critical phenomena and nonlocal effects
<|reference_start|>k-core (bootstrap) percolation on complex networks: Critical phenomena and nonlocal effects: We develop the theory of the k-core (bootstrap) percolation on uncorrelated random networks with arbitrary degree distributions. We show that the k-core percolation is an unusual, hybrid phase transition with a jump emergence of the k-core as at a first order phase transition but also with a critical singularity as at a continuous transition. We describe the properties of the k-core, explain the meaning of the order parameter for the k-core percolation, and reveal the origin of the specific critical phenomena. We demonstrate that a so-called ``corona'' of the k-core plays a crucial role (corona is a subset of vertices in the k-core which have exactly k neighbors in the k-core). It turns out that the k-core percolation threshold is at the same time the percolation threshold of finite corona clusters. The mean separation of vertices in corona clusters plays the role of the correlation length and diverges at the critical point. We show that a random removal of even one vertex from the k-core may result in the collapse of a vast region of the k-core around the removed vertex. The mean size of this region diverges at the critical point. We find an exact mapping of the k-core percolation to a model of cooperative relaxation. This model undergoes critical relaxation with a divergent rate at some critical moment.<|reference_end|>
arxiv
@article{goltsev2006k-core, title={k-core (bootstrap) percolation on complex networks: Critical phenomena and nonlocal effects}, author={A.V. Goltsev and S.N. Dorogovtsev and J.F.F. Mendes}, journal={Phys. Rev. E 73, 056101 (2006)}, year={2006}, doi={10.1103/PhysRevE.73.056101}, archivePrefix={arXiv}, eprint={cond-mat/0602611}, primaryClass={cond-mat.stat-mech cs.NI math-ph math.MP physics.soc-ph} }
goltsev2006k-core
arxiv-669316
cond-mat/0602661
On the high density behavior of Hamming codes with fixed minimum distance
<|reference_start|>On the high density behavior of Hamming codes with fixed minimum distance: We discuss the high density behavior of a system of hard spheres of diameter d on the hypercubic lattice of dimension n, in the limit n -> oo, d -> oo, d/n=delta. The problem is relevant for coding theory. We find a solution to the equations describing the liquid up to very large values of the density, but we show that this solution gives a negative entropy for the liquid phase when the density is large enough. We then conjecture that a phase transition towards a different phase might take place, and we discuss possible scenarios for this transition. Finally we discuss the relation between our results and known rigorous bounds on the maximal density of the system.<|reference_end|>
arxiv
@article{parisi2006on, title={On the high density behavior of Hamming codes with fixed minimum distance}, author={G. Parisi and F. Zamponi}, journal={J. Stat. Phys. 123, 1145 (2006)}, year={2006}, doi={10.1007/s10955-006-9142-7}, archivePrefix={arXiv}, eprint={cond-mat/0602661}, primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.IT math.IT} }
parisi2006on
arxiv-669317
cond-mat/0603189
Loop series for discrete statistical models on graphs
<|reference_start|>Loop series for discrete statistical models on graphs: In this paper we present derivation details, logic, and motivation for the loop calculus introduced in \cite{06CCa}. Generating functions for three inter-related discrete statistical models are each expressed in terms of a finite series. The first term in the series corresponds to the Bethe-Peierls (Belief Propagation)-BP contribution, the other terms are labeled by loops on the factor graph. All loop contributions are simple rational functions of spin correlation functions calculated within the BP approach. We discuss two alternative derivations of the loop series. One approach implements a set of local auxiliary integrations over continuous fields with the BP contribution corresponding to an integrand saddle-point value. The integrals are replaced by sums in the complimentary approach, briefly explained in \cite{06CCa}. A local gauge symmetry transformation that clarifies an important invariant feature of the BP solution, is revealed in both approaches. The partition function remains invariant while individual terms change under the gauge transformation. The requirement for all individual terms to be non-zero only for closed loops in the factor graph (as opposed to paths with loose ends) is equivalent to fixing the first term in the series to be exactly equal to the BP contribution. Further applications of the loop calculus to problems in statistical physics, computer and information sciences are discussed.<|reference_end|>
arxiv
@article{chertkov2006loop, title={Loop series for discrete statistical models on graphs}, author={Michael Chertkov and Vladimir Y. Chernyak}, journal={J. Stat. Mech. (2006) P06009}, year={2006}, doi={10.1088/1742-5468/2006/06/P06009}, number={LAUR-06-1221}, archivePrefix={arXiv}, eprint={cond-mat/0603189}, primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.IT math.IT} }
chertkov2006loop
arxiv-669318
cond-mat/0603350
The number of matchings in random graphs
<|reference_start|>The number of matchings in random graphs: We study matchings on sparse random graphs by means of the cavity method. We first show how the method reproduces several known results about maximum and perfect matchings in regular and Erdos-Renyi random graphs. Our main new result is the computation of the entropy, i.e. the leading order of the logarithm of the number of solutions, of matchings with a given size. We derive both an algorithm to compute this entropy for an arbitrary graph with a girth that diverges in the large size limit, and an analytic result for the entropy in regular and Erdos-Renyi random graph ensembles.<|reference_end|>
arxiv
@article{zdeborová2006the, title={The number of matchings in random graphs}, author={Lenka Zdeborov\'a and Marc M\'ezard}, journal={J. Stat. Mech. (2006) P05003}, year={2006}, doi={10.1088/1742-5468/2006/05/P05003}, archivePrefix={arXiv}, eprint={cond-mat/0603350}, primaryClass={cond-mat.dis-nn cs.CC math.CO} }
zdeborová2006the
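For intuition on the quantity the paper above computes by the cavity method — the number of matchings of a graph — a brute-force count on tiny graphs is easy; for path graphs the counts reproduce the Fibonacci numbers. This sketch is an illustrative baseline only, not the paper's method:

```python
from itertools import combinations

def count_matchings(edges):
    # Count all matchings: edge subsets in which no two edges share a vertex.
    # The empty matching is included, as is conventional.
    total = 0
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            verts = [v for e in subset for v in e]
            if len(verts) == len(set(verts)):
                total += 1
    return total
```

For the path on 5 vertices (4 edges) the count is 8, the sixth Fibonacci number; for a triangle it is 4 (the empty matching plus the three single edges). The entropy studied in the paper is the leading order of the logarithm of such counts for large random graphs.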
arxiv-669319
cond-mat/0603861
Congestion-gradient driven transport on complex networks
<|reference_start|>Congestion-gradient driven transport on complex networks: We present a study of transport on complex networks with routing based on local information. Particles hop from one node of the network to another according to a set of routing rules with different degrees of congestion awareness, ranging from random diffusion to rigid congestion-gradient driven flow. Each node can be either source or destination for particles and all nodes have the same routing capacity, which are features of ad-hoc wireless networks. It is shown that the transport capacity increases when a small amount of congestion awareness is present in the routing rules, and that it then decreases as the routing rules become too rigid when the flow becomes strictly congestion-gradient driven. Therefore, an optimum value of the congestion awareness exists in the routing rules. It is also shown that, in the limit of a large number of nodes, networks using routing based on local information jam at any nonzero load. Finally, we study the correlation between congestion at node level and a betweenness centrality measure.<|reference_end|>
arxiv
@article{danila2006congestion-gradient, title={Congestion-gradient driven transport on complex networks}, author={Bogdan Danila and Yong Yu and Samuel Earl and John A. Marsh and Zoltan Toroczkai and Kevin E. Bassler}, journal={Phys. Rev. E 74, 046114 (2006)}, year={2006}, doi={10.1103/PhysRevE.74.046114}, archivePrefix={arXiv}, eprint={cond-mat/0603861}, primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.NI} }
danila2006congestion-gradient
arxiv-669320
cond-mat/0604267
Survey propagation for the cascading Sourlas code
<|reference_start|>Survey propagation for the cascading Sourlas code: We investigate how insights from statistical physics, namely survey propagation, can improve decoding of a particular class of sparse error correcting codes. We show that a recently proposed algorithm, time averaged belief propagation, is in fact intimately linked to a specific survey propagation for which Parisi's replica symmetry breaking parameter is set to zero, and that the latter is always superior to belief propagation in the high connectivity limit. We briefly look at further improvements available by going to the second level of replica symmetry breaking.<|reference_end|>
arxiv
@article{hatchett2006survey, title={Survey propagation for the cascading Sourlas code}, author={Jonathan PL Hatchett and Yoshiyuki Kabashima}, journal={arXiv preprint arXiv:cond-mat/0604267}, year={2006}, doi={10.1088/0305-4470/39/34/005}, archivePrefix={arXiv}, eprint={cond-mat/0604267}, primaryClass={cond-mat.stat-mech cs.IT math.IT} }
hatchett2006survey
arxiv-669321
cond-mat/0604569
Public-channel cryptography based on mutual chaos pass filters
<|reference_start|>Public-channel cryptography based on mutual chaos pass filters: We study the mutual coupling of chaotic lasers and observe both experimentally and in numeric simulations, that there exists a regime of parameters for which two mutually coupled chaotic lasers establish isochronal synchronization, while a third laser coupled unidirectionally to one of the pair, does not synchronize. We then propose a cryptographic scheme, based on the advantage of mutual-coupling over unidirectional coupling, where all the parameters of the system are public knowledge. We numerically demonstrate that in such a scheme the two communicating lasers can add a message signal (compressed binary message) to the transmitted coupling signal, and recover the message in both directions with high fidelity by using a mutual chaos pass filter procedure. An attacker however, fails to recover an errorless message even if he amplifies the coupling signal.<|reference_end|>
arxiv
@article{klein2006public-channel, title={Public-channel cryptography based on mutual chaos pass filters}, author={Einat Klein and Noam Gross and Evi Kopelowitz and Michael Rosenbluh and Lev Khaykovich and Wolfgang Kinzel and Ido Kanter}, journal={arXiv preprint arXiv:cond-mat/0604569}, year={2006}, doi={10.1103/PhysRevE.74.046201}, archivePrefix={arXiv}, eprint={cond-mat/0604569}, primaryClass={cond-mat.stat-mech cs.CR} }
klein2006public-channel
arxiv-669322
cond-mat/0605190
Message passing for vertex covers
<|reference_start|>Message passing for vertex covers: Constructing a minimal vertex cover of a graph can be seen as a prototype for a combinatorial optimization problem under hard constraints. In this paper, we develop and analyze message passing techniques, namely warning and survey propagation, which serve as efficient heuristic algorithms for solving these computational hard problems. We show also, how previously obtained results on the typical-case behavior of vertex covers of random graphs can be recovered starting from the message passing equations, and how they can be extended.<|reference_end|>
arxiv
@article{weigt2006message, title={Message passing for vertex covers}, author={Martin Weigt and Haijun Zhou}, journal={Phys. Rev. E 74, 046110 (2006)}, year={2006}, doi={10.1103/PhysRevE.74.046110}, archivePrefix={arXiv}, eprint={cond-mat/0605190}, primaryClass={cond-mat.stat-mech cs.DS} }
weigt2006message
arxiv-669323
cond-mat/0605570
Generalized Box-Muller method for generating q-Gaussian random deviates
<|reference_start|>Generalized Box-Muller method for generating q-Gaussian random deviates: Addendum: The generalized Box-M\"uller algorithm provides a methodology for generating q-Gaussian random variates. The parameter $-\infty<q\leq3$ is related to the shape of the tail decay; $q<1$ for compact-support including parabola $(q=0)$; $1<q\leq3$ for heavy-tail including Cauchy $(q=2)$. This addendum clarifies the transformation $q'=((3q-1)/(q+1))$ within the algorithm is due to a difference in the dimensions d of the generalized logarithm and the generalized distribution. The transformation is clarified by the decomposition of $q=1+2\kappa/(1+d\kappa)$, where the shape parameter $-1<\kappa\leq\infty$ quantifies the magnitude of the deformation from exponential. A simpler specification for the generalized Box- M\"uller algorithm is provided using the shape of the tail decay. Original: The q-Gaussian distribution is known to be an attractor of certain correlated systems, and is the distribution which, under appropriate constraints, maximizes the entropy Sq, basis of nonextensive statistical mechanics. This theory is postulated as a natural extension of the standard (Boltzmann-Gibbs) statistical mechanics, and may explain the ubiquitous appearance of heavy-tailed distributions in both natural and man-made systems. The q-Gaussian distribution is also used as a numerical tool, for example as a visiting distribution in Generalized Simulated Annealing. We develop and present a simple, easy to implement numerical method for generating random deviates from a q-Gaussian distribution based upon a generalization of the well known Box-Muller method. Our method is suitable for a larger range of q values, q<3, than has previously appeared in the literature, and can generate deviates from q-Gaussian distributions of arbitrary width and center. MATLAB code showing a straightforward implementation is also included.<|reference_end|>
arxiv
@article{thistleton2006generalized, title={Generalized Box-Muller method for generating q-Gaussian random deviates}, author={William Thistleton, Kenric Nelson, John A. Marsh, and Constantino Tsallis}, journal={arXiv preprint arXiv:cond-mat/0605570}, year={2006}, archivePrefix={arXiv}, eprint={cond-mat/0605570}, primaryClass={cond-mat.stat-mech cs.MS} }
thistleton2006generalized
arxiv-669324
cond-mat/0606125
Microscopic activity patterns in the Naming Game
<|reference_start|>Microscopic activity patterns in the Naming Game: The models of statistical physics used to study collective phenomena in some interdisciplinary contexts, such as social dynamics and opinion spreading, do not consider the effects of the memory on individual decision processes. On the contrary, in the Naming Game, a recently proposed model of Language formation, each agent chooses a particular state, or opinion, by means of a memory-based negotiation process, during which a variable number of states is collected and kept in memory. In this perspective, the statistical features of the number of states collected by the agents becomes a relevant quantity to understand the dynamics of the model, and the influence of topological properties on memory-based models. By means of a master equation approach, we analyze the internal agent dynamics of Naming Game in populations embedded on networks, finding that it strongly depends on very general topological properties of the system (e.g. average and fluctuations of the degree). However, the influence of topological properties on the microscopic individual dynamics is a general phenomenon that should characterize all those social interactions that can be modeled by memory-based negotiation processes.<|reference_end|>
arxiv
@article{dall'asta2006microscopic, title={Microscopic activity patterns in the Naming Game}, author={Luca Dall'Asta, Andrea Baronchelli}, journal={J. Phys. A: Math. Gen. 39 14851-14867 (2006)}, year={2006}, doi={10.1088/0305-4470/39/48/002}, archivePrefix={arXiv}, eprint={cond-mat/0606125}, primaryClass={cond-mat.dis-nn cs.MA physics.soc-ph} }
dall'asta2006microscopic
arxiv-669325
cond-mat/0606128
Simplifying Random Satisfiability Problem by Removing Frustrating Interactions
<|reference_start|>Simplifying Random Satisfiability Problem by Removing Frustrating Interactions: How can we remove some interactions in a constraint satisfaction problem (CSP) such that it still remains satisfiable? In this paper we study a modified survey propagation algorithm that enables us to address this question for a prototypical CSP, i.e. random K-satisfiability problem. The average number of removed interactions is controlled by a tuning parameter in the algorithm. If the original problem is satisfiable then we are able to construct satisfiable subproblems ranging from the original one to a minimal one with minimum possible number of interactions. The minimal satisfiable subproblems will provide directly the solutions of the original problem.<|reference_end|>
arxiv
@article{ramezanpour2006simplifying, title={Simplifying Random Satisfiability Problem by Removing Frustrating Interactions}, author={A. Ramezanpour and S. Moghimi-Araghi}, journal={arXiv preprint arXiv:cond-mat/0606128}, year={2006}, doi={10.1103/PhysRevE.74.041105}, archivePrefix={arXiv}, eprint={cond-mat/0606128}, primaryClass={cond-mat.stat-mech cs.CC} }
ramezanpour2006simplifying
arxiv-669326
cond-mat/0606696
Statistical mechanics of error exponents for error-correcting codes
<|reference_start|>Statistical mechanics of error exponents for error-correcting codes: Error exponents characterize the exponential decay, when increasing message length, of the probability of error of many error-correcting codes. To tackle the long standing problem of computing them exactly, we introduce a general, thermodynamic, formalism that we illustrate with maximum-likelihood decoding of low-density parity-check (LDPC) codes on the binary erasure channel (BEC) and the binary symmetric channel (BSC). In this formalism, we apply the cavity method for large deviations to derive expressions for both the average and typical error exponents, which differ by the procedure used to select the codes from specified ensembles. When decreasing the noise intensity, we find that two phase transitions take place, at two different levels: a glass to ferromagnetic transition in the space of codewords, and a paramagnetic to glass transition in the space of codes.<|reference_end|>
arxiv
@article{mora2006statistical, title={Statistical mechanics of error exponents for error-correcting codes}, author={Thierry Mora and Olivier Rivoire}, journal={Phys. Rev. E 74, 056110 (2006)}, year={2006}, doi={10.1103/PhysRevE.74.056110}, archivePrefix={arXiv}, eprint={cond-mat/0606696}, primaryClass={cond-mat.dis-nn cond-mat.stat-mech cs.IT math.IT} }
mora2006statistical
arxiv-669327
cond-mat/0607017
Optimal routing on complex networks
<|reference_start|>Optimal routing on complex networks: We present a novel heuristic algorithm for routing optimization on complex networks. Previously proposed routing optimization algorithms aim at avoiding or reducing link overload. Our algorithm balances traffic on a network by minimizing the maximum node betweenness with as little path lengthening as possible, thus being useful in cases when networks are jamming due to queuing overload. By using the resulting routing table, a network can sustain significantly higher traffic without jamming than in the case of traditional shortest path routing.<|reference_end|>
arxiv
@article{danila2006optimal, title={Optimal routing on complex networks}, author={Bogdan Danila, Yong Yu, John A. Marsh, and Kevin E. Bassler}, journal={Phys Rev E 74, 046106 (2006)}, year={2006}, doi={10.1103/PhysRevE.74.046106}, archivePrefix={arXiv}, eprint={cond-mat/0607017}, primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.NI} }
danila2006optimal
arxiv-669328
cond-mat/0607290
A rigorous proof of the cavity method for counting matchings
<|reference_start|>A rigorous proof of the cavity method for counting matchings: In this paper we rigorously prove the validity of the cavity method for the problem of counting the number of matchings in graphs with large girth. The cavity method is an important heuristic developed by statistical physicists that has led to the development of faster distributed algorithms for various combinatorial optimization problems. The validity of the approach has been supported mostly by numerical simulations. In this paper we prove the validity of the cavity method for the problem of counting matchings using rigorous techniques. We hope that these rigorous approaches will finally help us establish the validity of the cavity method in general.<|reference_end|>
arxiv
@article{bayati2006a, title={A rigorous proof of the cavity method for counting matchings}, author={Mohsen Bayati, Chandra Nair}, journal={arXiv preprint arXiv:cond-mat/0607290}, year={2006}, archivePrefix={arXiv}, eprint={cond-mat/0607290}, primaryClass={cond-mat.dis-nn cond-mat.stat-mech cs.CC} }
bayati2006a
arxiv-669329
cond-mat/0607454
Encryption of Covert Information into Multiple Statistical Distributions
<|reference_start|>Encryption of Covert Information into Multiple Statistical Distributions: A novel strategy to encrypt covert information (code) via unitary projections into the null spaces of ill-conditioned eigenstructures of multiple host statistical distributions, inferred from incomplete constraints, is presented. The host pdf's are inferred using the maximum entropy principle. The projection of the covert information is dependent upon the pdf's of the host statistical distributions. The security of the encryption/decryption strategy is based on the extreme instability of the encoding process. A self-consistent procedure to derive keys for both symmetric and asymmetric cryptography is presented. The advantages of using a multiple pdf model to achieve encryption of covert information are briefly highlighted. Numerical simulations exemplify the efficacy of the model.<|reference_end|>
arxiv
@article{venkatesan2006encryption, title={Encryption of Covert Information into Multiple Statistical Distributions}, author={R. C. Venkatesan}, journal={arXiv preprint arXiv:cond-mat/0607454}, year={2006}, doi={10.1016/j.physleta.2007.05.117}, archivePrefix={arXiv}, eprint={cond-mat/0607454}, primaryClass={cond-mat.stat-mech cs.CR} }
venkatesan2006encryption
arxiv-669330
cond-mat/0608312
On Cavity Approximations for Graphical Models
<|reference_start|>On Cavity Approximations for Graphical Models: We reformulate the Cavity Approximation (CA), a class of algorithms recently introduced for improving the Bethe approximation estimates of marginals in graphical models. In our new formulation, which allows for the treatment of multivalued variables, a further generalization to factor graphs with arbitrary order of interaction factors is explicitly carried out, and a message passing algorithm that implements the first order correction to the Bethe approximation is described. Furthermore we investigate an implementation of the CA for pairwise interactions. In all cases considered we could confirm that CA[k] with increasing $k$ provides a sequence of approximations of markedly increasing precision. Furthermore in some cases we could also confirm the general expectation that the approximation of order $k$, whose computational complexity is $O(N^{k+1})$ has an error that scales as $1/N^{k+1}$ with the size of the system. We discuss the relation between this approach and some recent developments in the field.<|reference_end|>
arxiv
@article{rizzo2006on, title={On Cavity Approximations for Graphical Models}, author={T. Rizzo, B. Wemmenhove and H.J. Kappen}, journal={arXiv preprint arXiv:cond-mat/0608312}, year={2006}, doi={10.1103/PhysRevE.76.011102}, archivePrefix={arXiv}, eprint={cond-mat/0608312}, primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.IT math.IT} }
rizzo2006on
arxiv-669331
cond-mat/0609098
Synchronization in Weighted Uncorrelated Complex Networks in a Noisy Environment: Optimization and Connections with Transport Efficiency
<|reference_start|>Synchronization in Weighted Uncorrelated Complex Networks in a Noisy Environment: Optimization and Connections with Transport Efficiency: Motivated by synchronization problems in noisy environments, we study the Edwards-Wilkinson process on weighted uncorrelated scale-free networks. We consider a specific form of the weights, where the strength (and the associated cost) of a link is proportional to $(k_{i}k_{j})^{\beta}$ with $k_{i}$ and $k_{j}$ being the degrees of the nodes connected by the link. Subject to the constraint that the total network cost is fixed, we find that in the mean-field approximation on uncorrelated scale-free graphs, synchronization is optimal at $\beta^{*}$$=$-1. Numerical results, based on exact numerical diagonalization of the corresponding network Laplacian, confirm the mean-field results, with small corrections to the optimal value of $\beta^{*}$. Employing our recent connections between the Edwards-Wilkinson process and resistor networks, and some well-known connections between random walks and resistor networks, we also pursue a naturally related problem of optimizing performance in queue-limited communication networks utilizing local weighted routing schemes.<|reference_end|>
arxiv
@article{korniss2006synchronization, title={Synchronization in Weighted Uncorrelated Complex Networks in a Noisy Environment: Optimization and Connections with Transport Efficiency}, author={G. Korniss}, journal={Phys. Rev. E 75, 051121 (2007)}, year={2006}, doi={10.1103/PhysRevE.75.051121}, archivePrefix={arXiv}, eprint={cond-mat/0609098}, primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.NI} }
korniss2006synchronization
arxiv-669332
cond-mat/0609099
Geometrical organization of solutions to random linear Boolean equations
<|reference_start|>Geometrical organization of solutions to random linear Boolean equations: The random XORSAT problem deals with large random linear systems of Boolean variables. The difficulty of such problems is controlled by the ratio of number of equations to number of variables. It is known that in some range of values of this parameter, the space of solutions breaks into many disconnected clusters. Here we study precisely the corresponding geometrical organization. In particular, the distribution of distances between these clusters is computed by the cavity method. This allows us to study the `x-satisfiability' threshold, the critical density of equations where there exist two solutions at a given distance.<|reference_end|>
arxiv
@article{mora2006geometrical, title={Geometrical organization of solutions to random linear Boolean equations}, author={Thierry Mora (LPTMS), Marc M'ezard (LPTMS)}, journal={Journal of Statistical Mechanics: Theory and Experiment (2006) P10007}, year={2006}, doi={10.1088/1742-5468/2006/10/P10007}, archivePrefix={arXiv}, eprint={cond-mat/0609099}, primaryClass={cond-mat.dis-nn cs.CC} }
mora2006geometrical
arxiv-669333
cond-mat/0609584
Random numbers for large scale distributed Monte Carlo simulations
<|reference_start|>Random numbers for large scale distributed Monte Carlo simulations: Monte Carlo simulations are one of the major tools in statistical physics, complex system science, and other fields, and an increasing number of these simulations is run on distributed systems like clusters or grids. This raises the issue of generating random numbers in a parallel, distributed environment. In this contribution we demonstrate that multiple linear recurrences in finite fields are an ideal method to produce high quality pseudorandom numbers in sequential and parallel algorithms. Their known weakness (failure of sampling points in high dimensions) can be overcome by an appropriate delinearization that preserves all desirable properties of the underlying linear sequence.<|reference_end|>
arxiv
@article{bauke2006random, title={Random numbers for large scale distributed Monte Carlo simulations}, author={Heiko Bauke, Stephan Mertens}, journal={Physical Review E, vol. 75, nr. 6 (2007), article 066701}, year={2006}, doi={10.1103/PhysRevE.75.066701}, archivePrefix={arXiv}, eprint={cond-mat/0609584}, primaryClass={cond-mat.other cs.DC} }
bauke2006random
arxiv-669334
cond-mat/0611567
Generalized Statistics Framework for Rate Distortion Theory
<|reference_start|>Generalized Statistics Framework for Rate Distortion Theory: Variational principles for the rate distortion (RD) theory in lossy compression are formulated within the ambit of the generalized nonextensive statistics of Tsallis, for values of the nonextensivity parameter satisfying $ 0 < q < 1 $ and $ q > 1 $. Alternating minimization numerical schemes to evaluate the nonextensive RD function, are derived. Numerical simulations demonstrate the efficacy of generalized statistics RD models.<|reference_end|>
arxiv
@article{venkatesan2006generalized, title={Generalized Statistics Framework for Rate Distortion Theory}, author={R. C. Venkatesan and A. Plastino}, journal={arXiv preprint arXiv:cond-mat/0611567}, year={2006}, archivePrefix={arXiv}, eprint={cond-mat/0611567}, primaryClass={cond-mat.stat-mech cs.IT math.IT} }
venkatesan2006generalized
arxiv-669335
cond-mat/0611717
Non-equilibrium phase transition in negotiation dynamics
<|reference_start|>Non-equilibrium phase transition in negotiation dynamics: We introduce a model of negotiation dynamics whose aim is that of mimicking the mechanisms leading to opinion and convention formation in a population of individuals. The negotiation process, as opposed to ``herding-like'' or ``bounded confidence'' driven processes, is based on a microscopic dynamics where memory and feedback play a central role. Our model displays a non-equilibrium phase transition from an absorbing state in which all agents reach a consensus to an active stationary state characterized either by polarization or fragmentation in clusters of agents with different opinions. We show the existence of at least two different universality classes, one for the case with two possible opinions and one for the case with an unlimited number of opinions. The phase transition is studied analytically and numerically for various topologies of the agents' interaction network. In both cases the universality classes do not seem to depend on the specific interaction topology, the only relevant feature being the total number of different opinions ever present in the system.<|reference_end|>
arxiv
@article{baronchelli2006non-equilibrium, title={Non-equilibrium phase transition in negotiation dynamics}, author={A. Baronchelli, L. Dall'Asta, A. Barrat, V. Loreto}, journal={Phys. Rev. E 76, 051102 (2007)}, year={2006}, doi={10.1103/PhysRevE.76.051102}, archivePrefix={arXiv}, eprint={cond-mat/0611717}, primaryClass={cond-mat.stat-mech cs.MA physics.soc-ph q-bio.PE} }
baronchelli2006non-equilibrium
arxiv-669336
cond-mat/0612365
Gibbs States and the Set of Solutions of Random Constraint Satisfaction Problems
<|reference_start|>Gibbs States and the Set of Solutions of Random Constraint Satisfaction Problems: An instance of a random constraint satisfaction problem defines a random subset S (the set of solutions) of a large product space (the set of assignments). We consider two prototypical problem ensembles (random k-satisfiability and q-coloring of random regular graphs), and study the uniform measure with support on S. As the number of constraints per variable increases, this measure first decomposes into an exponential number of pure states ("clusters"), and subsequently condensates over the largest such states. Above the condensation point, the mass carried by the n largest states follows a Poisson-Dirichlet process. For typical large instances, the two transitions are sharp. We determine for the first time their precise location. Further, we provide a formal definition of each phase transition in terms of different notions of correlation between distinct variables in the problem. The degree of correlation naturally affects the performances of many search/sampling algorithms. Empirical evidence suggests that local Monte Carlo Markov Chain strategies are effective up to the clustering phase transition, and belief propagation up to the condensation point. Finally, refined message passing techniques (such as survey propagation) may beat also this threshold.<|reference_end|>
arxiv
@article{krzakala2006gibbs, title={Gibbs States and the Set of Solutions of Random Constraint Satisfaction Problems}, author={Florent Krzakala, Andrea Montanari, Federico Ricci-Tersenghi, Guilhem Semerjian and Lenka Zdeborova}, journal={Proc. Natl. Acad. Sci. 104, 10318 (2007)}, year={2006}, doi={10.1073/pnas.0703685104}, archivePrefix={arXiv}, eprint={cond-mat/0612365}, primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.CC} }
krzakala2006gibbs
arxiv-669337
cond-mat/0701184
Transport optimization on complex networks
<|reference_start|>Transport optimization on complex networks: We present a comparative study of the application of a recently introduced heuristic algorithm to the optimization of transport on three major types of complex networks. The algorithm balances network traffic iteratively by minimizing the maximum node betweenness with as little path lengthening as possible. We show that by using this optimal routing, a network can sustain significantly higher traffic without jamming than in the case of shortest path routing. A formula is proved that allows quick computation of the average number of hops along the path and of the average travel times once the betweennesses of the nodes are computed. Using this formula, we show that routing optimization preserves the small-world character exhibited by networks under shortest path routing, and that it significantly reduces the average travel time on congested networks with only a negligible increase in the average travel time at low loads. Finally, we study the correlation between the weights of the links in the case of optimal routing and the betweennesses of the nodes connected by them.<|reference_end|>
arxiv
@article{danila2007transport, title={Transport optimization on complex networks}, author={Bogdan Danila, Yong Yu, John A. Marsh, Kevin E. Bassler}, journal={Chaos 17 (2), 026102 (2007)}, year={2007}, doi={10.1063/1.2731718}, archivePrefix={arXiv}, eprint={cond-mat/0701184}, primaryClass={cond-mat.dis-nn cs.NI} }
danila2007transport
arxiv-669338
cond-mat/0701218
Generalized Statistics Framework for Rate Distortion Theory with Bregman Divergences
<|reference_start|>Generalized Statistics Framework for Rate Distortion Theory with Bregman Divergences: A variational principle for the rate distortion (RD) theory with Bregman divergences is formulated within the ambit of the generalized (nonextensive) statistics of Tsallis. The Tsallis-Bregman RD lower bound is established. Alternate minimization schemes for the generalized Bregman RD (GBRD) theory are derived. A computational strategy to implement the GBRD model is presented. The efficacy of the GBRD model is exemplified with the aid of numerical simulations.<|reference_end|>
arxiv
@article{venkatesan2007generalized, title={Generalized Statistics Framework for Rate Distortion Theory with Bregman Divergences}, author={R. C. Venkatesan}, journal={arXiv preprint arXiv:cond-mat/0701218}, year={2007}, archivePrefix={arXiv}, eprint={cond-mat/0701218}, primaryClass={cond-mat.stat-mech cs.IT math.IT} }
venkatesan2007generalized
arxiv-669339
cond-mat/0701319
Statistical Cryptography using a Fisher-Schr\"odinger Model
<|reference_start|>Statistical Cryptography using a Fisher-Schr\"odinger Model: A principled procedure to infer a hierarchy of statistical distributions possessing ill-conditioned eigenstructures, from incomplete constraints, is presented. The inference process of the \textit{pdf}'s employs the Fisher information as the measure of uncertainty, and, utilizes a semi-supervised learning paradigm based on a measurement-response model. The principle underlying the learning paradigm involves providing a quantum mechanical connotation to statistical processes. The inferred \textit{pdf}'s constitute a statistical host that facilitates the encryption/decryption of covert information (code). A systematic strategy to encrypt/decrypt code via unitary projections into the \textit{null spaces} of the ill-conditioned eigenstructures, is presented. Numerical simulations exemplify the efficacy of the model.<|reference_end|>
arxiv
@article{venkatesan2007statistical, title={Statistical Cryptography using a Fisher-Schr\"{o}dinger Model}, author={R. C. Venkatesan}, journal={arXiv preprint arXiv:cond-mat/0701319}, year={2007}, archivePrefix={arXiv}, eprint={cond-mat/0701319}, primaryClass={cond-mat.stat-mech cs.CR} }
venkatesan2007statistical
arxiv-669340
cond-mat/0702421
A Hike in the Phases of the 1-in-3 Satisfiability
<|reference_start|>A Hike in the Phases of the 1-in-3 Satisfiability: We summarise our results for the random $\epsilon$--1-in-3 satisfiability problem, where $\epsilon$ is a probability of negation of the variable. We employ both rigorous and heuristic methods to describe the SAT/UNSAT and Hard/Easy transitions.<|reference_end|>
arxiv
@article{maneva2007a, title={A Hike in the Phases of the 1-in-3 Satisfiability}, author={Elitza Maneva, Talya Meltzer, Jack Raymond, Andrea Sportiello, Lenka Zdeborov'a}, journal={arXiv preprint arXiv:cond-mat/0702421}, year={2007}, archivePrefix={arXiv}, eprint={cond-mat/0702421}, primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.CC} }
maneva2007a
arxiv-669341
cond-mat/0702546
A Landscape Analysis of Constraint Satisfaction Problems
<|reference_start|>A Landscape Analysis of Constraint Satisfaction Problems: We discuss an analysis of Constraint Satisfaction problems, such as Sphere Packing, K-SAT and Graph Coloring, in terms of an effective energy landscape. Several intriguing geometrical properties of the solution space become in this light familiar in terms of the well-studied ones of rugged (glassy) energy landscapes. A `benchmark' algorithm naturally suggested by this construction finds solutions in polynomial time up to a point beyond the `clustering' and in some cases even the `thermodynamic' transitions. This point has a simple geometric meaning and can be in principle determined with standard Statistical Mechanical methods, thus pushing the analytic bound up to which problems are guaranteed to be easy. We illustrate this for the graph three and four-coloring problem. For Packing problems the present discussion allows to better characterize the `J-point', proposed as a systematic definition of Random Close Packing, and to place it in the context of other theories of glasses.<|reference_end|>
arxiv
@article{krzakala2007a, title={A Landscape Analysis of Constraint Satisfaction Problems}, author={Florent Krzakala and Jorge Kurchan}, journal={Phys. Rev. E 76, 021122 (2007)}, year={2007}, doi={10.1103/PhysRevE.76.021122}, archivePrefix={arXiv}, eprint={cond-mat/0702546}, primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.CC nlin.CD} }
krzakala2007a
arxiv-669342
cond-mat/0702613
Finding long cycles in graphs
<|reference_start|>Finding long cycles in graphs: We analyze the problem of discovering long cycles inside a graph. We propose and test two algorithms for this task. The first one is based on recent advances in statistical mechanics and relies on a message passing procedure. The second follows a more standard Monte Carlo Markov Chain strategy. Special attention is devoted to Hamiltonian cycles of (non-regular) random graphs of minimal connectivity equal to three.<|reference_end|>
arxiv
@article{marinari2007finding, title={Finding long cycles in graphs}, author={Enzo Marinari, Guilhem Semerjian and Valery Van Kerrebroeck}, journal={Phys. Rev. E 75, 066708 (2007)}, year={2007}, doi={10.1103/PhysRevE.75.066708}, archivePrefix={arXiv}, eprint={cond-mat/0702613}, primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.CC math.PR} }
marinari2007finding
arxiv-669343
cond-mat/0703351
Error Correction and Digitalization Concepts in Biochemical Computing
<|reference_start|>Error Correction and Digitalization Concepts in Biochemical Computing: We offer a theoretical design of new systems that show promise for digital biochemical computing, including realizations of error correction by utilizing redundancy, as well as signal rectification. The approach includes information processing using encoded DNA sequences, DNAzyme biocatalyzed reactions and the use of DNA-functionalized magnetic nanoparticles. Digital XOR and NAND logic gates and copying (fanout) are designed using the same components.<|reference_end|>
arxiv
@article{fedichkin2007error, title={Error Correction and Digitalization Concepts in Biochemical Computing}, author={L. Fedichkin, E. Katz, V. Privman}, journal={Journal of Computational and Theoretical Nanoscience 5, 36-43 (2008)}, year={2007}, archivePrefix={arXiv}, eprint={cond-mat/0703351}, primaryClass={cond-mat.soft cond-mat.dis-nn cond-mat.mtrl-sci cs.CE q-bio.BM quant-ph} }
fedichkin2007error
arxiv-669344
cond-mat/9703183
Finite size scaling of the bayesian perceptron
<|reference_start|>Finite size scaling of the bayesian perceptron: We study numerically the properties of the bayesian perceptron through a gradient descent on the optimal cost function. The theoretical distribution of stabilities is deduced. It predicts that the optimal generalizer lies close to the boundary of the space of (error-free) solutions. The numerical simulations are in good agreement with the theoretical distribution. The extrapolation of the generalization error to infinite input space size agrees with the theoretical results. Finite size corrections are negative and exhibit two different scaling regimes, depending on the training set size. The variance of the generalization error vanishes for $N \rightarrow \infty$ confirming the property of self-averaging.<|reference_end|>
arxiv
@article{buhot1997finite, title={Finite size scaling of the bayesian perceptron}, author={A. Buhot, J.-M. Torres Moreno and M. B. Gordon}, journal={arXiv preprint arXiv:cond-mat/9703183}, year={1997}, doi={10.1103/PhysRevE.55.7434}, archivePrefix={arXiv}, eprint={cond-mat/9703183}, primaryClass={cond-mat.stat-mech cond-mat.dis-nn cs.AI cs.LG} }
buhot1997finite
arxiv-669345
cond-mat/9703191
A Potts Neuron Approach to Communication Routing
<|reference_start|>A Potts Neuron Approach to Communication Routing: A feedback neural network approach to communication routing problems is developed with emphasis on Multiple Shortest Path problems, with several requests for transmissions between distinct start- and endnodes. The basic ingredients are a set of Potts neurons for each request, with interactions designed to minimize path lengths and to prevent overloading of network arcs. The topological nature of the problem is conveniently handled using a propagator matrix approach. Although the constraints are global, the algorithmic steps are based entirely on local information, facilitating distributed implementations. In the polynomially solvable single-request case the approach reduces to a fuzzy version of the Bellman-Ford algorithm. The approach is evaluated for synthetic problems of varying sizes and load levels, by comparing with exact solutions from a branch-and-bound method. With very few exceptions, the Potts approach gives legal solutions of very high quality. The computational demand scales merely as the product of the numbers of requests, nodes, and arcs.<|reference_end|>
arxiv
@article{häkkinen1997a, title={A Potts Neuron Approach to Communication Routing}, author={J. H"akkinen, M. Lagerholm, C. Peterson, B. S"oderberg (Theoretical Physics, Lund U.)}, journal={Neural Computation 10, 1587-1599 (1998)}, year={1997}, number={LU TP 97-02}, archivePrefix={arXiv}, eprint={cond-mat/9703191}, primaryClass={cond-mat.dis-nn cs.NI hep-lat} }
häkkinen1997a
arxiv-669346
cond-mat/9808130
Finite-difference methods for simulation models incorporating non-conservative forces
<|reference_start|>Finite-difference methods for simulation models incorporating non-conservative forces: We discuss algorithms applicable to the numerical solution of second-order ordinary differential equations by finite-differences. We make particular reference to the solution of the dissipative particle dynamics fluid model, and present extensive results comparing one of the algorithms discussed with the standard method of solution. These results show the successful modeling of phase separation and surface tension in a binary immiscible fluid mixture.<|reference_end|>
arxiv
@article{novik1998finite-difference, title={Finite-difference methods for simulation models incorporating non-conservative forces}, author={Keir E. Novik and Peter V. Coveney}, journal={arXiv preprint arXiv:cond-mat/9808130}, year={1998}, doi={10.1063/1.477413}, archivePrefix={arXiv}, eprint={cond-mat/9808130}, primaryClass={cond-mat.soft cs.NA math.NA physics.chem-ph physics.comp-ph physics.flu-dyn} }
novik1998finite-difference
arxiv-669347
cond-mat/9810144
Relaxation in graph coloring and satisfiability problems
<|reference_start|>Relaxation in graph coloring and satisfiability problems: Using T=0 Monte Carlo simulation, we study the relaxation of graph coloring (K-COL) and satisfiability (K-SAT), two hard problems that have recently been shown to possess a phase transition in solvability as a parameter is varied. A change from exponentially fast to power law relaxation, and a transition to freezing behavior are found. These changes take place for smaller values of the parameter than the solvability transition. Results for the coloring problem for colorable and clustered graphs and for the fraction of persistent spins for satisfiability are also presented.<|reference_end|>
arxiv
@article{svenson1998relaxation, title={Relaxation in graph coloring and satisfiability problems}, author={Pontus Svenson and Mats G. Nordahl}, journal={Phys. Rev. E 59(4) 3983-3999 (1999)}, year={1998}, doi={10.1103/PhysRevE.59.3983}, number={Goteborg ITP 98-15}, archivePrefix={arXiv}, eprint={cond-mat/9810144}, primaryClass={cond-mat.dis-nn cs.AI} }
svenson1998relaxation
arxiv-669348
cond-mat/9810347
An exact representation of the fermion dynamics in terms of Poisson processes and its connection with Monte Carlo algorithms
<|reference_start|>An exact representation of the fermion dynamics in terms of Poisson processes and its connection with Monte Carlo algorithms: We present a simple derivation of a Feynman-Kac type formula to study fermionic systems. In this approach the real time or the imaginary time dynamics is expressed in terms of the evolution of a collection of Poisson processes. A computer implementation of this formula leads to a family of algorithms parametrized by the values of the jump rates of the Poisson processes. From these an optimal algorithm can be chosen which coincides with the Green Function Monte Carlo method in the limit when the latter becomes exact.<|reference_end|>
arxiv
@article{beccaria1998an, title={An exact representation of the fermion dynamics in terms of Poisson processes and its connection with Monte Carlo algorithms}, author={Matteo Beccaria, Carlo Presilla, Gian Fabrizio De Angelis, and Giovanni Jona-Lasinio}, journal={Europhys.Lett. 48 (1999) 243-249}, year={1998}, doi={10.1209/epl/i1999-00472-2}, archivePrefix={arXiv}, eprint={cond-mat/9810347}, primaryClass={cond-mat cs.DS hep-lat math-ph math.MP quant-ph} }
beccaria1998an
arxiv-669349
cond-mat/9812344
Parallelization of a Dynamic Monte Carlo Algorithm: a Partially Rejection-Free Conservative Approach
<|reference_start|>Parallelization of a Dynamic Monte Carlo Algorithm: a Partially Rejection-Free Conservative Approach: We experiment with a massively parallel implementation of an algorithm for simulating the dynamics of metastable decay in kinetic Ising models. The parallel scheme is directly applicable to a wide range of stochastic cellular automata where the discrete events (updates) are Poisson arrivals. For high performance, we utilize a continuous-time, asynchronous parallel version of the n-fold way rejection-free algorithm. Each processing element carries an lxl block of spins, and we employ the fast SHMEM-library routines on the Cray T3E distributed-memory parallel architecture. Different processing elements have different local simulated times. To ensure causality, the algorithm handles the asynchrony in a conservative fashion. Despite relatively low utilization and an intricate relationship between the average time increment and the size of the spin blocks, we find that for sufficiently large l the algorithm outperforms its corresponding parallel Metropolis (non-rejection-free) counterpart. As an example application, we present results for metastable decay in a model ferromagnetic or ferroelectric film, observed with a probe of area smaller than the total system.<|reference_end|>
arxiv
@article{korniss1998parallelization, title={Parallelization of a Dynamic Monte Carlo Algorithm: a Partially Rejection-Free Conservative Approach}, author={G. Korniss, M. A. Novotny, and P. A. Rikvold (Florida State U.)}, journal={J. Comput. Phys. 153, 488 (1999)}, year={1998}, doi={10.1006/jcph.1999.6291}, number={FSU-SCRI-98-131}, archivePrefix={arXiv}, eprint={cond-mat/9812344}, primaryClass={cond-mat.stat-mech cond-mat.mtrl-sci cs.DC physics.comp-ph} }
korniss1998parallelization
arxiv-669350
cond-mat/9902011
Cortical Potential Distributions and Cognitive Information Processing
<|reference_start|>Cortical Potential Distributions and Cognitive Information Processing: The use of cortical field potentials rather than the details of spike trains as the basis for cognitive information processing is proposed. This results in a space of cognitive elements with natural metrics. Sets of spike trains may also be considered to be points in a multidimensional metric space. The closeness of sets of spike trains in such a space implies the closeness of points in the resulting function space of potential distributions.<|reference_end|>
arxiv
@article{tuckwell1999cortical, title={Cortical Potential Distributions and Cognitive Information Processing}, author={Henry C. Tuckwell}, journal={arXiv preprint arXiv:cond-mat/9902011}, year={1999}, number={B3E/99/001}, archivePrefix={arXiv}, eprint={cond-mat/9902011}, primaryClass={cond-mat.dis-nn adap-org cond-mat.stat-mech cs.NE math-ph math.MP nlin.AO physics.bio-ph q-bio} }
tuckwell1999cortical
arxiv-669351
cond-mat/9906017
Algorithmic Complexity in Minority Game
<|reference_start|>Algorithmic Complexity in Minority Game: In this paper we introduce a new approach to the study of the complex behavior of the Minority Game using the tools of algorithmic complexity, physical entropy and information theory. We show that physical complexity and the mutual information function strongly depend on the memory size of the agents and yield more information about the complex features of the stream of binary outcomes of the game than volatility itself.<|reference_end|>
arxiv
@article{corona1999algorithmic, title={Algorithmic Complexity in Minority Game}, author={Ricardo Mansilla Corona}, journal={arXiv preprint arXiv:cond-mat/9906017}, year={1999}, archivePrefix={arXiv}, eprint={cond-mat/9906017}, primaryClass={cond-mat.stat-mech adap-org chao-dyn cs.CC nlin.AO nlin.CD} }
corona1999algorithmic
arxiv-669352
cond-mat/9906206
Ocular dominance patterns in mammalian visual cortex: A wire length minimization approach
<|reference_start|>Ocular dominance patterns in mammalian visual cortex: A wire length minimization approach: We propose a theory for ocular dominance (OD) patterns in mammalian primary visual cortex. This theory is based on the premise that OD pattern is an adaptation to minimize the length of intra-cortical wiring. Thus we can understand the existing OD patterns by solving a wire length minimization problem. We divide all the neurons into two classes: left-eye dominated and right-eye dominated. We find that segregation of neurons into monocular regions reduces wire length if the number of connections with the neurons of the same class differs from that with the other class. The shape of the regions depends on the relative fraction of neurons in the two classes. If the numbers are close we find that the optimal OD pattern consists of interdigitating stripes. If one class is less numerous than the other, the optimal OD pattern consists of patches of the first class neurons in the sea of the other class neurons. We predict the transition from stripes to patches when the fraction of neurons dominated by the ipsilateral eye is about 40%. This prediction agrees with the data in macaque and Cebus monkeys. This theory can be applied to other binary cortical systems.<|reference_end|>
arxiv
@article{chklovskii1999ocular, title={Ocular dominance patterns in mammalian visual cortex: A wire length minimization approach}, author={Dmitri B. Chklovskii and Alexei A. Koulakov}, journal={arXiv preprint arXiv:cond-mat/9906206}, year={1999}, archivePrefix={arXiv}, eprint={cond-mat/9906206}, primaryClass={cond-mat.soft cond-mat.dis-nn cs.NE physics.bio-ph q-bio} }
chklovskii1999ocular
arxiv-669353
cond-mat/9907038
The diameter of the world wide web
<|reference_start|>The diameter of the world wide web: Despite its increasing role in communication, the world wide web remains the least controlled medium: any individual or institution can create websites with unrestricted number of documents and links. While great efforts are made to map and characterize the Internet's infrastructure, little is known about the topology of the web. Here we take a first step to fill this gap: we use local connectivity measurements to construct a topological model of the world wide web, allowing us to explore and characterize its large scale properties.<|reference_end|>
arxiv
@article{albert1999the, title={The diameter of the world wide web}, author={Reka Albert, Hawoong Jeong and Albert-Laszlo Barabasi (University of Notre Dame)}, journal={Nature 401, 130-131 (1999)}, year={1999}, doi={10.1038/43601}, archivePrefix={arXiv}, eprint={cond-mat/9907038}, primaryClass={cond-mat.dis-nn adap-org cond-mat.stat-mech cs.NI math-ph math.MP nlin.AO physics.comp-ph} }
albert1999the
arxiv-669354
cond-mat/9907343
A variational description of the ground state structure in random satisfiability problems
<|reference_start|>A variational description of the ground state structure in random satisfiability problems: A variational approach to finite connectivity spin-glass-like models is developed and applied to describe the structure of optimal solutions in random satisfiability problems. Our variational scheme accurately reproduces the known replica symmetric results and also allows for the inclusion of replica symmetry breaking effects. For the 3-SAT problem, we find two transitions as the ratio $\alpha$ of logical clauses per Boolean variables increases. At the first one $\alpha_s \simeq 3.96$, a non-trivial organization of the solution space in geometrically separated clusters emerges. The multiplicity of these clusters as well as the typical distances between different solutions are calculated. At the second threshold $\alpha_c \simeq 4.48$, satisfying assignments disappear and a finite fraction $B_0 \simeq 0.13$ of variables are overconstrained and take the same values in all optimal (though unsatisfying) assignments. These values have to be compared to $\alpha_c \simeq 4.27, B_0 \simeq 0.4$ obtained from numerical experiments on small instances. Within the present variational approach, the SAT-UNSAT transition naturally appears as a mixture of a first and a second order transition. For the mixed $2+p$-SAT with $p<2/5$, the behavior is as expected much simpler: a unique smooth transition from SAT to UNSAT takes place at $\alpha_c=1/(1-p)$.<|reference_end|>
arxiv
@article{biroli1999a, title={A variational description of the ground state structure in random satisfiability problems}, author={Giulio Biroli, Remi Monasson, and Martin Weigt}, journal={Eur. Phys. J. B 14, 551 (2000)}, year={1999}, doi={10.1007/s100510051065}, number={LPTENS 99/22}, archivePrefix={arXiv}, eprint={cond-mat/9907343}, primaryClass={cond-mat.dis-nn cond-mat.stat-mech cs.CC} }
biroli1999a
arxiv-669355
cond-mat/9909114
From Massively Parallel Algorithms and Fluctuating Time Horizons to Non-equilibrium Surface Growth
<|reference_start|>From Massively Parallel Algorithms and Fluctuating Time Horizons to Non-equilibrium Surface Growth: We study the asymptotic scaling properties of a massively parallel algorithm for discrete-event simulations where the discrete events are Poisson arrivals. The evolution of the simulated time horizon is analogous to a non-equilibrium surface. Monte Carlo simulations and a coarse-grained approximation indicate that the macroscopic landscape in the steady state is governed by the Edwards-Wilkinson Hamiltonian. Since the efficiency of the algorithm corresponds to the density of local minima in the associated surface, our results imply that the algorithm is asymptotically scalable.<|reference_end|>
arxiv
@article{korniss1999from, title={From Massively Parallel Algorithms and Fluctuating Time Horizons to Non-equilibrium Surface Growth}, author={G. Korniss, Z. Toroczkai, M. A. Novotny, and P. A. Rikvold}, journal={Phys. Rev. Lett. 84, 1351 (2000).}, year={1999}, doi={10.1103/PhysRevLett.84.1351}, number={FSU-SCRI-99-58}, archivePrefix={arXiv}, eprint={cond-mat/9909114}, primaryClass={cond-mat.stat-mech cs.DC physics.comp-ph} }
korniss1999from
arxiv-669356
cs/0001001
Von Neumann Quantum Logic vs Classical von Neumann Architecture?
<|reference_start|>Von Neumann Quantum Logic vs Classical von Neumann Architecture?: The name of John von Neumann is common to both quantum mechanics and computer science. Are these really two entirely unconnected areas? The many works devoted to quantum computation and communication are a serious argument for the existence of such a relation, but it is impossible to cover this new and active theme in a short review. This paper describes the structures and models of linear algebra; precisely because of their generality, they admit a universal description of areas as different as quantum mechanics and the theory of Bayesian image analysis, associative memory, neural networks, and fuzzy logic.<|reference_end|>
arxiv
@article{vlasov2000von, title={Von Neumann Quantum Logic vs. Classical von Neumann Architecture?}, author={Alexander Yu. Vlasov (FRC/IRH, St.-Petersburg, Russia)}, journal={arXiv preprint arXiv:cs/0001001}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001001}, primaryClass={cs.OH quant-ph} }
vlasov2000von
arxiv-669357
cs/0001002
Minimum Description Length and Compositionality
<|reference_start|>Minimum Description Length and Compositionality: We present a non-vacuous definition of compositionality. It is based on the idea of combining the minimum description length principle with the original definition of compositionality (that is, that the meaning of the whole is a function of the meaning of the parts). The new definition is intuitive and allows us to distinguish between compositional and non-compositional semantics, and between idiomatic and non-idiomatic expressions. It is not ad hoc, since it does not make any references to non-intrinsic properties of meaning functions (like being a polynomial). Moreover, it allows us to compare different meaning functions with respect to how compositional they are. It bridges linguistic and corpus-based, statistical approaches to natural language understanding.<|reference_end|>
arxiv
@article{zadrozny2000minimum, title={Minimum Description Length and Compositionality}, author={Wlodek Zadrozny}, journal={H.Bunt and R.Muskens(Eds.) "Computing Meaning" Vol.1. Kluwer 1999. pp.113-128}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001002}, primaryClass={cs.CL cs.AI} }
zadrozny2000minimum
arxiv-669358
cs/0001003
Why C++ is not very fit for GUI programming
<|reference_start|>Why C++ is not very fit for GUI programming: With no intent of starting a holy war, this paper lists several annoying C++ birthmarks that the author has come across developing GUI class libraries. C++'s view of classes, instances and hierarchies appears tantalizingly close to GUI concepts of controls, widgets, window classes and subwindows. OO models of C++ and of a window system are however different. C++ was designed to be a "static" language with a lexical name scoping, static type checking and hierarchies defined at compile time. Screen objects on the other hand are inherently dynamic; they usually live well beyond the procedure/block that created them; the hierarchy of widgets is defined to a large extent by layout, visibility and event flow. Many GUI fundamentals such as dynamic and geometric hierarchies of windows and controls, broadcasting and percolation of events are not supported directly by C++ syntax or execution semantics (or supported as "exceptions" -- pun intended). Therefore these features have to be emulated in C++ GUI code. This leads to duplication of a graphical toolkit or a window manager functionality, code bloat, engaging in unsafe practices and forgoing of many strong C++ features (like scoping rules and compile-time type checking). This paper enumerates a few major C++/GUI sores and illustrates them on simple examples.<|reference_end|>
arxiv
@article{kiselyov2000why, title={Why C++ is not very fit for GUI programming}, author={Oleg Kiselyov}, journal={arXiv preprint arXiv:cs/0001003}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001003}, primaryClass={cs.PL} }
kiselyov2000why
arxiv-669359
cs/0001004
Multiplicative Algorithm for Orthogonal Groups and Independent Component Analysis
<|reference_start|>Multiplicative Algorithm for Orthgonal Groups and Independent Component Analysis: The multiplicative Newton-like method developed by the author et al. is extended to the situation where the dynamics is restricted to the orthogonal group. A general framework is constructed without specifying the cost function. Though the restriction to the orthogonal groups makes the problem somewhat complicated, an explicit expression for the amount of individual jumps is obtained. This algorithm is exactly second-order-convergent. The global instability inherent in the Newton method is remedied by a Levenberg-Marquardt-type variation. The method thus constructed can readily be applied to the independent component analysis. Its remarkable performance is illustrated by a numerical simulation.<|reference_end|>
arxiv
@article{akuzawa2000multiplicative, title={Multiplicative Algorithm for Orthogonal Groups and Independent Component Analysis}, author={Toshinao Akuzawa (RIKEN BSI)}, journal={arXiv preprint arXiv:cs/0001004}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001004}, primaryClass={cs.LG} }
akuzawa2000multiplicative
arxiv-669360
cs/0001005
Effect of different packet sizes on RED performance
<|reference_start|>Effect of different packet sizes on RED performance: We consider the adaptation of random early detection (RED) as an active queue management algorithm for TCP traffic in Internet gateways where different maximum transfer units (MTUs) are used. We study the two existing RED variants and point out a weakness in both. The first variant, where the drop probability is independent of the packet size, discriminates against connections with smaller MTUs. The second variant results in a very high Packet Loss Ratio (PLR), and as a consequence low goodput, for connections with higher MTUs. We show that fairness in terms of loss and goodput can be achieved through an appropriate setting of the RED algorithm.<|reference_end|>
arxiv
@article{de cnodder2000effect, title={Effect of different packet sizes on RED performance}, author={Stefaan De Cnodder, Omar Elloumi, Kenny Pauwels}, journal={arXiv preprint arXiv:cs/0001005}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001005}, primaryClass={cs.NI} }
de cnodder2000effect
arxiv-669361
cs/0001006
Compositionality, Synonymy, and the Systematic Representation of Meaning
<|reference_start|>Compositionality, Synonymy, and the Systematic Representation of Meaning: In a recent issue of Linguistics and Philosophy Kasmi and Pelletier (1998) (K&P), and Westerstahl (1998) criticize Zadrozny's (1994) argument that any semantics can be represented compositionally. The argument is based upon Zadrozny's theorem that every meaning function m can be encoded by a function \mu such that (i) for any expression E of a specified language L, m(E) can be recovered from \mu(E), and (ii) \mu is a homomorphism from the syntactic structures of L to interpretations of L. In both cases, the primary motivation for the objections brought against Zadrozny's argument is the view that his encoding of the original meaning function does not properly reflect the synonymy relations posited for the language. In this paper, we argue that these technical criticisms do not go through. In particular, we prove that \mu properly encodes synonymy relations, i.e. if two expressions are synonymous, then their compositional meanings are identical. This corrects some misconceptions about the function \mu, e.g. Janssen (1997). We suggest that the reason that semanticists have been anxious to preserve compositionality as a significant constraint on semantic theory is that it has been mistakenly regarded as a condition that must be satisfied by any theory that sustains a systematic connection between the meaning of an expression and the meanings of its parts. Recent developments in formal and computational semantics show that systematic theories of meanings need not be compositional.<|reference_end|>
arxiv
@article{lappin2000compositionality, title={Compositionality, Synonymy, and the Systematic Representation of Meaning}, author={Shalom Lappin (King's College, London) and Wlodek Zadrozny (IBM T.J. Watson Research Center)}, journal={arXiv preprint arXiv:cs/0001006}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001006}, primaryClass={cs.CL cs.LO} }
lappin2000compositionality
arxiv-669362
cs/0001007
RED behavior with different packet sizes
<|reference_start|>RED behavior with different packet sizes: We consider the adaptation of random early detection (RED) as a buffer management algorithm for TCP traffic in Internet gateways where different maximum transfer units (MTUs) are used. We study the two RED variants described in [4] and point out a weakness in both. The first variant, where the drop probability is independent of the packet size, discriminates against connections with smaller MTUs. The second variant results in a very high packet loss ratio (PLR), and as a consequence low goodput, for connections with higher MTUs. We show that fairness in terms of loss and goodput can be achieved through an appropriate setting of the RED algorithm.<|reference_end|>
arxiv
@article{de cnodder2000red, title={RED behavior with different packet sizes}, author={Stefaan De Cnodder, Omar Elloumi, Kenny Pauwels}, journal={arXiv preprint arXiv:cs/0001007}, year={2000}, doi={10.1109/ISCC.2000.860741}, archivePrefix={arXiv}, eprint={cs/0001007}, primaryClass={cs.NI} }
de cnodder2000red
arxiv-669363
cs/0001008
Predicting the expected behavior of agents that learn about agents: the CLRI framework
<|reference_start|>Predicting the expected behavior of agents that learn about agents: the CLRI framework: We describe a framework and equations used to model and predict the behavior of multi-agent systems (MASs) with learning agents. A difference equation is used for calculating the progression of an agent's error in its decision function, thereby telling us how the agent is expected to fare in the MAS. The equation relies on parameters which capture the agent's learning abilities, such as its change rate, learning rate and retention rate, as well as relevant aspects of the MAS such as the impact that agents have on each other. We validate the framework with experimental results using reinforcement learning agents in a market system, as well as with other experimental results gathered from the AI literature. Finally, we use PAC-theory to show how to calculate bounds on the values of the learning parameters.<|reference_end|>
arxiv
@article{vidal2000predicting, title={Predicting the expected behavior of agents that learn about agents: the CLRI framework}, author={Jose M. Vidal and Edmund H. Durfee}, journal={Autonomous Agents and Multi-Agent Systems Journal, January 2003}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001008}, primaryClass={cs.MA cs.LG} }
vidal2000predicting
arxiv-669364
cs/0001009
Fractal Symbolic Analysis
<|reference_start|>Fractal Symbolic Analysis: Restructuring compilers use dependence analysis to prove that the meaning of a program is not changed by a transformation. A well-known limitation of dependence analysis is that it examines only the memory locations read and written by a statement, and does not assume any particular interpretation for the operations in that statement. Exploiting the semantics of these operations enables a wider set of transformations to be used, and is critical for optimizing important codes such as LU factorization with pivoting. Symbolic execution of programs enables the exploitation of such semantic properties, but it is intractable for all but the simplest programs. In this paper, we propose a new form of symbolic analysis for use in restructuring compilers. Fractal symbolic analysis compares a program and its transformed version by repeatedly simplifying these programs until symbolic analysis becomes tractable, ensuring that equality of simplified programs is sufficient to guarantee equality of the original programs. We present a prototype implementation of fractal symbolic analysis, and show how it can be used to optimize the cache performance of LU factorization with pivoting.<|reference_end|>
arxiv
@article{mateev2000fractal, title={Fractal Symbolic Analysis}, author={Nikolay Mateev, Vijay Menon, Keshav Pingali}, journal={arXiv preprint arXiv:cs/0001009}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001009}, primaryClass={cs.PL} }
mateev2000fractal
arxiv-669365
cs/0001010
A Real World Implementation of Answer Extraction
<|reference_start|>A Real World Implementation of Answer Extraction: In this paper we describe ExtrAns, an answer extraction system. Answer extraction (AE) aims at retrieving those exact passages of a document that directly answer a given user question. AE is more ambitious than information retrieval and information extraction in that the retrieval results are phrases, not entire documents, and in that the queries may be arbitrarily specific. It is less ambitious than full-fledged question answering in that the answers are not generated from a knowledge base but looked up in the text of documents. The current version of ExtrAns is able to parse unedited Unix "man pages", and derive the logical form of their sentences. User queries are also translated into logical forms. A theorem prover then retrieves the relevant phrases, which are presented through selective highlighting in their context.<|reference_end|>
arxiv
@article{molla2000a, title={A Real World Implementation of Answer Extraction}, author={D. Molla, J. Berri, M. Hess}, journal={Proc. of 9th International Conference and Workshop on Database and Expert Systems. Workshop "Natural Language and Information Systems" (NLIS'98). Vienna: 1998}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001010}, primaryClass={cs.CL} }
molla2000a
arxiv-669366
cs/0001011
Agents of Choice: Tools that Facilitate Notice and Choice about Web Site Data Practices
<|reference_start|>Agents of Choice: Tools that Facilitate Notice and Choice about Web Site Data Practices: A variety of tools have been introduced recently that are designed to help people protect their privacy on the Internet. These tools perform many different functions including encrypting and/or anonymizing communications, preventing the use of persistent identifiers such as cookies, automatically fetching and analyzing web site privacy policies, and displaying privacy-related information to users. This paper discusses the set of privacy tools that aim specifically at facilitating notice and choice about Web site data practices. While these tools may also have components that perform other functions such as encryption, or they may be able to work in conjunction with other privacy tools, the primary purpose of these tools is to help make users aware of web site privacy practices and to make it easier for users to make informed choices about when to provide data to web sites. Examples of such tools include the Platform for Privacy Preferences (P3P) and various infomediary services.<|reference_end|>
arxiv
@article{cranor2000agents, title={Agents of Choice: Tools that Facilitate Notice and Choice about Web Site Data Practices}, author={Lorrie Faith Cranor}, journal={Proceedings of the 21st International Conference on Privacy and Personal Data Protection, 13-15 September 1999, Hong Kong SAR, China, p. 19-25}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001011}, primaryClass={cs.CY} }
cranor2000agents
arxiv-669367
cs/0001012
Measures of Distributional Similarity
<|reference_start|>Measures of Distributional Similarity: We study distributional similarity measures for the purpose of improving probability estimation for unseen cooccurrences. Our contributions are three-fold: an empirical comparison of a broad range of measures; a classification of similarity functions based on the information that they incorporate; and the introduction of a novel function that is superior at evaluating potential proxy distributions.<|reference_end|>
arxiv
@article{lee2000measures, title={Measures of Distributional Similarity}, author={Lillian Lee}, journal={37th Annual Meeting of the ACL, 1999, pp. 25-32}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001012}, primaryClass={cs.CL} }
lee2000measures
arxiv-669368
cs/0001013
Query Complexity: Worst-Case Quantum Versus Average-Case Classical
<|reference_start|>Query Complexity: Worst-Case Quantum Versus Average-Case Classical: In this note we investigate the relationship between worst-case quantum query complexity and average-case classical query complexity. Specifically, we show that if a quantum computer can evaluate a total Boolean function f with bounded error using T queries in the worst case, then a deterministic classical computer can evaluate f using O(T^5) queries in the average case, under a uniform distribution of inputs. If f is monotone, we show furthermore that only O(T^3) queries are needed. Previously, Beals et al. (1998) showed that if a quantum computer can evaluate f with bounded error using T queries in the worst case, then a deterministic classical computer can evaluate f using O(T^6) queries in the worst case, or O(T^4) if f is monotone. The optimal bound is conjectured to be O(T^2), but improving on O(T^6) remains an open problem. Relating worst-case quantum complexity to average-case classical complexity may suggest new ways to reduce the polynomial gap in the ordinary worst-case versus worst-case setting.<|reference_end|>
arxiv
@article{aaronson2000query, title={Query Complexity: Worst-Case Quantum Versus Average-Case Classical}, author={Scott Aaronson}, journal={arXiv preprint arXiv:cs/0001013}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001013}, primaryClass={cs.CC quant-ph} }
aaronson2000query
arxiv-669369
cs/0001014
Nondeterministic Quantum Query and Quantum Communication Complexities
<|reference_start|>Nondeterministic Quantum Query and Quantum Communication Complexities: We study nondeterministic quantum algorithms for Boolean functions f. Such algorithms have positive acceptance probability on input x iff f(x)=1. In the setting of query complexity, we show that the nondeterministic quantum complexity of a Boolean function is equal to its ``nondeterministic polynomial'' degree. We also prove a quantum-vs-classical gap of 1 vs n for nondeterministic query complexity for a total function. In the setting of communication complexity, we show that the nondeterministic quantum complexity of a two-party function is equal to the logarithm of the rank of a nondeterministic version of the communication matrix. This implies that the quantum communication complexities of the equality and disjointness functions are n+1 if we do not allow any error probability. We also exhibit a total function in which the nondeterministic quantum communication complexity is exponentially smaller than its classical counterpart.<|reference_end|>
arxiv
@article{dewolf2000nondeterministic, title={Nondeterministic Quantum Query and Quantum Communication Complexities}, author={Ronald de Wolf (CWI Amsterdam)}, journal={SIAM Journal on Computing, 32(3):681-699, 2003}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001014}, primaryClass={cs.CC quant-ph} }
dewolf2000nondeterministic
arxiv-669370
cs/0001015
Multi-Agent Only Knowing
<|reference_start|>Multi-Agent Only Knowing: Levesque introduced a notion of ``only knowing'', with the goal of capturing certain types of nonmonotonic reasoning. Levesque's logic dealt with only the case of a single agent. Recently, both Halpern and Lakemeyer independently attempted to extend Levesque's logic to the multi-agent case. Although there are a number of similarities in their approaches, there are some significant differences. In this paper, we reexamine the notion of only knowing, going back to first principles. In the process, we simplify Levesque's completeness proof, and point out some problems with the earlier definitions. This leads us to reconsider what the properties of only knowing ought to be. We provide an axiom system that captures our desiderata, and show that it has a semantics that corresponds to it. The axiom system has an added feature of interest: it includes a modal operator for satisfiability, and thus provides a complete axiomatization for satisfiability in the logic K45.<|reference_end|>
arxiv
@article{halpern2000multi-agent, title={Multi-Agent Only Knowing}, author={Joseph Y. Halpern and Gerhard Lakemeyer}, journal={arXiv preprint arXiv:cs/0001015}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001015}, primaryClass={cs.AI cs.LO} }
halpern2000multi-agent
arxiv-669371
cs/0001016
Take-home Complexity
<|reference_start|>Take-home Complexity: We discuss the use of projects in first-year graduate complexity theory courses.<|reference_end|>
arxiv
@article{hemaspaandra2000take-home, title={Take-home Complexity}, author={Lane A. Hemaspaandra}, journal={arXiv preprint arXiv:cs/0001016}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001016}, primaryClass={cs.CY cs.GL} }
hemaspaandra2000take-home
arxiv-669372
cs/0001017
Bezier Curves Intersection Using Relief Perspective
<|reference_start|>Bezier Curves Intersection Using Relief Perspective: This paper describes a method for finding the intersection of a class of space rational Bezier curves. The curve/curve intersection problem is among the basic geometric problems, and the aim of this article is to describe a new technique that solves it using relief perspective and Bezier clipping.<|reference_end|>
arxiv
@article{hlusek2000bezier, title={Bezier Curves Intersection Using Relief Perspective}, author={Radoslav Hlusek}, journal={arXiv preprint arXiv:cs/0001017}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001017}, primaryClass={cs.CG cs.GR} }
hlusek2000bezier
arxiv-669373
cs/0001018
Adaptive simulated annealing (ASA): Lessons learned
<|reference_start|>Adaptive simulated annealing (ASA): Lessons learned: Adaptive simulated annealing (ASA) is a global optimization algorithm based on an associated proof that the parameter space can be sampled much more efficiently than by previous simulated annealing algorithms. The author's ASA code has been publicly available for over two years. During this time the author has volunteered to help people via e-mail, and the feedback obtained has been used to further develop the code. Some of the lessons learned, in particular those relevant to other simulated annealing algorithms, are described.<|reference_end|>
arxiv
@article{ingber2000adaptive, title={Adaptive simulated annealing (ASA): Lessons learned}, author={Lester Ingber}, journal={Control and Cybernetics 25 (1996) 33-54}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001018}, primaryClass={cs.MS cs.CE} }
ingber2000adaptive
arxiv-669374
cs/0001019
PushPush is NP-hard in 2D
<|reference_start|>PushPush is NP-hard in 2D: We prove that a particular pushing-blocks puzzle is intractable in 2D, improving an earlier result that established intractability in 3D [OS99]. The puzzle, inspired by the game *PushPush*, consists of unit square blocks on an integer lattice. An agent may push blocks (but never pull them) in attempting to move between given start and goal positions. In the PushPush version, the agent can only push one block at a time, and moreover, each block, when pushed, slides the maximal extent of its free range. We prove this version is NP-hard in 2D by reduction from SAT.<|reference_end|>
arxiv
@article{demaine2000pushpush, title={PushPush is NP-hard in 2D}, author={Erik D. Demaine and Martin L. Demaine and Joseph O'Rourke}, journal={arXiv preprint arXiv:cs/0001019}, year={2000}, number={Smith Technical Report 065}, archivePrefix={arXiv}, eprint={cs/0001019}, primaryClass={cs.CG cs.DM} }
demaine2000pushpush
arxiv-669375
cs/0001020
Exploiting Syntactic Structure for Natural Language Modeling
<|reference_start|>Exploiting Syntactic Structure for Natural Language Modeling: The thesis presents an attempt at using the syntactic structure in natural language for improved language models for speech recognition. The structured language model merges techniques in automatic parsing and language modeling using an original probabilistic parameterization of a shift-reduce parser. A maximum likelihood reestimation procedure belonging to the class of expectation-maximization algorithms is employed for training the model. Experiments on the Wall Street Journal, Switchboard and Broadcast News corpora show improvement in both perplexity and word error rate - word lattice rescoring - over the standard 3-gram language model. The significance of the thesis lies in presenting an original approach to language modeling that uses the hierarchical - syntactic - structure in natural language to improve on current 3-gram modeling techniques for large vocabulary speech recognition.<|reference_end|>
arxiv
@article{chelba2000exploiting, title={Exploiting Syntactic Structure for Natural Language Modeling}, author={Ciprian Chelba (CLSP, The Johns Hopkins University)}, journal={arXiv preprint arXiv:cs/0001020}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001020}, primaryClass={cs.CL} }
chelba2000exploiting
arxiv-669376
cs/0001021
Refinement of a Structured Language Model
<|reference_start|>Refinement of a Structured Language Model: A new language model for speech recognition inspired by linguistic analysis is presented. The model develops hidden hierarchical structure incrementally and uses it to extract meaningful information from the word history - thus enabling the use of extended distance dependencies - in an attempt to complement the locality of currently used n-gram Markov models. The model, its probabilistic parametrization, a reestimation algorithm for the model parameters and a set of experiments meant to evaluate its potential for speech recognition are presented.<|reference_end|>
arxiv
@article{chelba2000refinement, title={Refinement of a Structured Language Model}, author={Ciprian Chelba, Frederick Jelinek (CLSP The Johns Hopkins University)}, journal={Proceedings of the International Conference on Advances in Pattern Recognition, 1998, pp. 275-284, Plymouth, UK}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001021}, primaryClass={cs.CL} }
chelba2000refinement
arxiv-669377
cs/0001022
Recognition Performance of a Structured Language Model
<|reference_start|>Recognition Performance of a Structured Language Model: A new language model for speech recognition inspired by linguistic analysis is presented. The model develops hidden hierarchical structure incrementally and uses it to extract meaningful information from the word history - thus enabling the use of extended distance dependencies - in an attempt to complement the locality of currently used trigram models. The structured language model, its probabilistic parameterization and performance in a two-pass speech recognizer are presented. Experiments on the SWITCHBOARD corpus show an improvement in both perplexity and word error rate over conventional trigram models.<|reference_end|>
arxiv
@article{chelba2000recognition, title={Recognition Performance of a Structured Language Model}, author={Ciprian Chelba, Frederick Jelinek (CLSP The Johns Hopkins University)}, journal={Proceedings of Eurospeech, 1999, pp. 1567-1570, Budapest, Hungary}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001022}, primaryClass={cs.CL} }
chelba2000recognition
arxiv-669378
cs/0001023
Structured Language Modeling for Speech Recognition
<|reference_start|>Structured Language Modeling for Speech Recognition: A new language model for speech recognition is presented. The model develops hidden hierarchical syntactic-like structure incrementally and uses it to extract meaningful information from the word history, thus complementing the locality of currently used trigram models. The structured language model (SLM) and its performance in a two-pass speech recognizer --- lattice decoding --- are presented. Experiments on the WSJ corpus show an improvement in both perplexity (PPL) and word error rate (WER) over conventional trigram models.<|reference_end|>
arxiv
@article{chelba2000structured, title={Structured Language Modeling for Speech Recognition}, author={Ciprian Chelba, Frederick Jelinek (CLSP, The Johns Hopkins University)}, journal={Proceedings of NLDB'99, Klagenfurt, Austria}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001023}, primaryClass={cs.CL} }
chelba2000structured
arxiv-669379
cs/0001024
A Parallel Algorithm for Dilated Contour Extraction from Bilevel Images
<|reference_start|>A Parallel Algorithm for Dilated Contour Extraction from Bilevel Images: We describe a simple but efficient algorithm for the generation of dilated contours from bilevel images. The initial part of the contour extraction is shown to be a good candidate for parallel code generation. The remainder of the algorithm is linear in nature.<|reference_end|>
arxiv
@article{schlei2000a, title={A Parallel Algorithm for Dilated Contour Extraction from Bilevel Images}, author={B. R. Schlei, L. Prasad}, journal={arXiv preprint arXiv:cs/0001024}, year={2000}, number={Los Alamos Preprint LA-UR-00-309}, archivePrefix={arXiv}, eprint={cs/0001024}, primaryClass={cs.CV} }
schlei2000a
arxiv-669380
cs/0001025
Computational Geometry Column 38
<|reference_start|>Computational Geometry Column 38: Recent results on curve reconstruction are described.<|reference_end|>
arxiv
@article{o'rourke2000computational, title={Computational Geometry Column 38}, author={Joseph O'Rourke}, journal={arXiv preprint arXiv:cs/0001025}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001025}, primaryClass={cs.CG cs.CV} }
o'rourke2000computational
arxiv-669381
cs/0001026
A Logic for SDSI's Linked Local Name Spaces
<|reference_start|>A Logic for SDSI's Linked Local Name Spaces: Abadi has introduced a logic to explicate the meaning of local names in SDSI, the Simple Distributed Security Infrastructure proposed by Rivest and Lampson. Abadi's logic does not correspond precisely to SDSI, however; it draws conclusions about local names that do not follow from SDSI's name resolution algorithm. Moreover, its semantics is somewhat unintuitive. This paper presents the Logic of Local Name Containment, which does not suffer from these deficiencies. It has a clear semantics and provides a tight characterization of SDSI name resolution. The semantics is shown to be closely related to that of logic programs, leading to an approach to the efficient implementation of queries concerning local names. A complete axiomatization of the logic is also provided.<|reference_end|>
arxiv
@article{halpern2000a, title={A Logic for SDSI's Linked Local Name Spaces}, author={Joseph Y. Halpern and Ron van der Meyden}, journal={arXiv preprint arXiv:cs/0001026}, year={2000}, archivePrefix={arXiv}, eprint={cs/0001026}, primaryClass={cs.CR cs.LO} }
halpern2000a
arxiv-669382
cs/0001027
Pattern Discovery and Computational Mechanics
<|reference_start|>Pattern Discovery and Computational Mechanics: Computational mechanics is a method for discovering, describing and quantifying patterns, using tools from statistical physics. It constructs optimal, minimal models of stochastic processes and their underlying causal structures. These models tell us about the intrinsic computation embedded within a process---how it stores and transforms information. Here we summarize the mathematics of computational mechanics, especially recent optimality and uniqueness results. We also expound the principles and motivations underlying computational mechanics, emphasizing its connections to the minimum description length principle, PAC theory, and other aspects of machine learning.<|reference_end|>
arxiv
@article{shalizi2000pattern, title={Pattern Discovery and Computational Mechanics}, author={Cosma Rohilla Shalizi and James P. Crutchfield (Santa Fe Institute)}, journal={arXiv preprint arXiv:cs/0001027}, year={2000}, number={SFI 00-01-008}, archivePrefix={arXiv}, eprint={cs/0001027}, primaryClass={cs.LG cs.NE} }
shalizi2000pattern
arxiv-669383
cs/0002001
Computing large and small stable models
<|reference_start|>Computing large and small stable models: In this paper, we focus on the problems of existence and computation of small and large stable models. We show that for every fixed integer k, there is a linear-time algorithm to decide the problem LSM (large stable models problem): does a logic program P have a stable model of size at least |P|-k. In contrast, we show that the problem SSM (small stable models problem) to decide whether a logic program P has a stable model of size at most k is much harder. We present two algorithms for this problem, but their running times are given by polynomials whose order depends on k. We show that the problem SSM is fixed-parameter intractable by demonstrating that it is W[2]-hard. This result implies that it is unlikely that an algorithm exists to compute stable models of size at most k that would run in time O(n^c), where c is a constant independent of k. We also provide an upper bound on the fixed-parameter complexity of the problem SSM by showing that it belongs to the class W[3].<|reference_end|>
arxiv
@article{truszczynski2000computing, title={Computing large and small stable models}, author={Miroslaw Truszczynski}, journal={Theory and Practice of Logic Programming, 2(1), 2002}, year={2000}, archivePrefix={arXiv}, eprint={cs/0002001}, primaryClass={cs.LO cs.AI} }
truszczynski2000computing
arxiv-669384
cs/0002002
Uniform semantic treatment of default and autoepistemic logics
<|reference_start|>Uniform semantic treatment of default and autoepistemic logics: We revisit the issue of connections between two leading formalisms in nonmonotonic reasoning: autoepistemic logic and default logic. For each logic we develop a comprehensive semantic framework based on the notion of a belief pair. The set of all belief pairs together with the so-called knowledge ordering forms a complete lattice. For each logic, we introduce several semantics by means of fixpoints of operators on the lattice of belief pairs. Our results elucidate an underlying isomorphism of the respective semantic constructions. In particular, we show that the interpretation of defaults as modal formulas proposed by Konolige allows us to represent all semantics for default logic in terms of the corresponding semantics for autoepistemic logic. Thus, our results conclusively establish that default logic can indeed be viewed as a fragment of autoepistemic logic. However, as we also demonstrate, the semantics of Moore and Reiter are given by different operators and occupy different locations in their corresponding families of semantics. This result explains the source of the longstanding difficulty of formally relating these two semantics. In the paper, we also discuss approximating skeptical reasoning with autoepistemic and default logics and establish constructive principles behind such approximations.<|reference_end|>
arxiv
@article{denecker2000uniform, title={Uniform semantic treatment of default and autoepistemic logics}, author={Marc Denecker, Victor W. Marek, Miroslaw Truszczynski}, journal={Artificial Intelligence Journal, 143 (2003), pp. 79--122}, year={2000}, archivePrefix={arXiv}, eprint={cs/0002002}, primaryClass={cs.AI} }
denecker2000uniform
arxiv-669385
cs/0002003
On the accuracy and running time of GSAT
<|reference_start|>On the accuracy and running time of GSAT: Randomized algorithms for deciding satisfiability were shown to be effective in solving problems with thousands of variables. However, these algorithms are not complete. That is, they provide no guarantee that a satisfying assignment, if one exists, will be found. Thus, when studying randomized algorithms, there are two important characteristics that need to be considered: the running time and, even more importantly, the accuracy --- a measure of likelihood that a satisfying assignment will be found, provided one exists. In fact, we argue that without a reference to the accuracy, the notion of the running time for randomized algorithms is not well-defined. In this paper, we introduce a formal notion of accuracy. We use it to define a concept of the running time. We use both notions to study the random walk strategy GSAT algorithm. We investigate the dependence of accuracy on properties of input formulas such as clause-to-variable ratio and the number of satisfying assignments. We demonstrate that the running time of GSAT grows exponentially in the number of variables of the input formula for randomly generated 3-CNF formulas and for the formulas encoding 3- and 4-colorability of graphs.<|reference_end|>
arxiv
@article{east2000on, title={On the accuracy and running time of GSAT}, author={Deborah East, Miroslaw Truszczynski}, journal={arXiv preprint arXiv:cs/0002003}, year={2000}, archivePrefix={arXiv}, eprint={cs/0002003}, primaryClass={cs.AI} }
east2000on
arxiv-669386
cs/0002004
Stochastic Model Checking for Multimedia
<|reference_start|>Stochastic Model Checking for Multimedia: Modern distributed systems include a class of applications in which non-functional requirements are important. In particular, these applications include multimedia facilities where real-time constraints are crucial to their correct functioning. In order to specify such systems it is necessary to express that events occur at times given by probability distributions, and stochastic automata have emerged as a useful technique by which such systems can be specified and verified. However, stochastic descriptions are very general, in particular they allow the use of general probability distribution functions, and therefore their verification can be complex. In the last few years, model checking has emerged as a useful verification tool for large systems. In this paper we describe two model checking algorithms for stochastic automata. These algorithms consider how properties written in a simple probabilistic real-time logic can be checked against a given stochastic automaton.<|reference_end|>
arxiv
@article{bryans2000stochastic, title={Stochastic Model Checking for Multimedia}, author={Jeremy Bryans, Howard Bowman and John Derrick}, journal={arXiv preprint arXiv:cs/0002004}, year={2000}, archivePrefix={arXiv}, eprint={cs/0002004}, primaryClass={cs.MM cs.LO} }
bryans2000stochastic
arxiv-669387
cs/0002005
Fully Sequential and Distributed Dynamic Algorithms for Minimum Spanning Trees
<|reference_start|>Fully Sequential and Distributed Dynamic Algorithms for Minimum Spanning Trees: In this paper, we present a fully-dynamic distributed algorithm for maintaining a minimum spanning tree on general graphs with positive real edge weights. The goal of a dynamic MST algorithm is to update the minimum spanning tree efficiently after dynamic changes such as edge weight changes, rather than having to recompute it from scratch each time. The first part of the paper surveys various algorithms available today, both in sequential and distributed environments, for solving the static MST problem. We also present some of the efficient sequential algorithms for computing dynamic MSTs, such as Frederickson's algorithm and Eppstein's sparsification technique. Lastly, we present our new sequential and distributed algorithms for the dynamic MST problem. To our knowledge, this is the first distributed algorithm for computing dynamic MSTs.<|reference_end|>
arxiv
@article{mohapatra2000fully, title={Fully Sequential and Distributed Dynamic Algorithms for Minimum Spanning Trees}, author={Pradosh Kumar Mohapatra}, journal={arXiv preprint arXiv:cs/0002005}, year={2000}, archivePrefix={arXiv}, eprint={cs/0002005}, primaryClass={cs.DC cs.DS} }
mohapatra2000fully
arxiv-669388
cs/0002006
Multiplicative Nonholonomic/Newton-like Algorithm
<|reference_start|>Multiplicative Nonholonomic/Newton-like Algorithm: We construct new algorithms from scratch, which use the fourth order cumulant of stochastic variables for the cost function. The multiplicative updating rule constructed here is natural given the homogeneous nature of the Lie group and has numerous merits for the rigorous treatment of the dynamics. As one consequence, second order convergence is shown. For the cost function, functions invariant under componentwise scaling are chosen. By identifying points which can be transformed to each other by the scaling, we assume that the dynamics takes place in a coset space. In our method, a point can move toward any direction in this coset. Thus, no prewhitening is required.<|reference_end|>
arxiv
@article{akuzawa2000multiplicative, title={Multiplicative Nonholonomic/Newton-like Algorithm}, author={Toshinao Akuzawa and Noboru Murata (RIKEN BSI)}, journal={arXiv preprint arXiv:cs/0002006}, year={2000}, doi={10.1016/S0960-0779(00)00077-1}, archivePrefix={arXiv}, eprint={cs/0002006}, primaryClass={cs.LG} }
akuzawa2000multiplicative
arxiv-669389
cs/0002007
Requirements of Text Processing Lexicons
<|reference_start|>Requirements of Text Processing Lexicons: As text processing systems expand in scope, they will require ever larger lexicons along with a parsing capability for discriminating among many senses of a word. Existing systems do not incorporate such subtleties in meaning for their lexicons. Ordinary dictionaries contain such information, but are largely untapped. When the contents of dictionaries are scrutinized, they reveal many requirements that must be satisfied in representing meaning and in developing semantic parsers. These requirements were identified in research designed to find primitive verb concepts. The requirements are outlined and general procedures for satisfying them through the use of ordinary dictionaries are described, illustrated by building frames for and examining the definitions of "change" and its uses as a hypernym in other definitions.<|reference_end|>
arxiv
@article{litkowski2000requirements, title={Requirements of Text Processing Lexicons}, author={K. Litkowski}, journal={Proceedings of the 18th Annual Meeting of the Association for Computational Linguistics, Philadelphia, PA (1980), pp. 153-4}, year={2000}, archivePrefix={arXiv}, eprint={cs/0002007}, primaryClass={cs.CL} }
litkowski2000requirements
arxiv-669390
cs/0002008
On Automata with Boundary
<|reference_start|>On Automata with Boundary: We present a theory of automata with boundary for designing, modelling and analysing distributed systems. Notions of behaviour, design and simulation appropriate to the theory are defined. The problem of model checking for deadlock detection is discussed, and an algorithm for state space reduction in exhaustive search, based on the theory presented here, is described. Three examples of the application of the theory are given, one in the course of the development of the ideas and two as illustrative examples of the use of the theory.<|reference_end|>
arxiv
@article{gates2000on, title={On Automata with Boundary}, author={R. Gates, P. Katis, N. Sabadini, R.F.C. Walters}, journal={arXiv preprint arXiv:cs/0002008}, year={2000}, number={C/TR00-01}, archivePrefix={arXiv}, eprint={cs/0002008}, primaryClass={cs.DC} }
gates2000on
arxiv-669391
cs/0002009
Syntactic Autonomy: Why There is no Autonomy without Symbols and How Self-Organization Might Evolve Them
<|reference_start|>Syntactic Autonomy: Why There is no Autonomy without Symbols and How Self-Organization Might Evolve Them: Two different types of agency are discussed, based on dynamically coherent and incoherent couplings with an environment respectively. I propose that until a private syntax (syntactic autonomy) is discovered by dynamically coherent agents, there are no significant or interesting types of closure or autonomy. When syntactic autonomy is established, then, because of a process of description-based selected self-organization, open-ended evolution is enabled. At this stage, agents depend, in addition to dynamics, on localized, symbolic memory, thus adding a level of dynamical incoherence to their interaction with the environment. Furthermore, it is the appearance of syntactic autonomy which enables much more interesting types of closures amongst agents which share the same syntax. To investigate how we can study the emergence of syntax from dynamical systems, experiments with cellular automata leading to emergent computation to solve non-trivial tasks are discussed. RNA editing is also mentioned as a process that may have been used to obtain a primordial biological code necessary for open-ended evolution.<|reference_end|>
arxiv
@article{rocha2000syntactic, title={Syntactic Autonomy: Why There is no Autonomy without Symbols and How Self-Organization Might Evolve Them}, author={Luis M. Rocha}, journal={arXiv preprint arXiv:cs/0002009}, year={2000}, archivePrefix={arXiv}, eprint={cs/0002009}, primaryClass={cs.AI} }
rocha2000syntactic
arxiv-669392
cs/0002010
Biologically Motivated Distributed Designs for Adaptive Knowledge Management
<|reference_start|>Biologically Motivated Distributed Designs for Adaptive Knowledge Management: We discuss how distributed designs that draw from biological network metaphors can largely improve the current state of information retrieval and knowledge management of distributed information systems. In particular, two adaptive recommendation systems named TalkMine and @ApWeb are discussed in more detail. TalkMine operates at the semantic level of keywords. It leads different databases to learn new and adapt existing keywords to the categories recognized by its communities of users using distributed algorithms. @ApWeb operates at the structural level of information resources, namely citation or hyperlink structure. It relies on collective behavior to adapt such structure to the expectations of users. TalkMine and @ApWeb are currently being implemented for the research library of the Los Alamos National Laboratory under the Active Recommendation Project. Together they define a biologically motivated information retrieval system, recommending simultaneously at the level of user knowledge categories expressed in keywords, and at the level of individual documents and their associations to other documents. Rather than passive information retrieval, with this system, users obtain an active, evolving interaction with information resources.<|reference_end|>
arxiv
@article{rocha2000biologically, title={Biologically Motivated Distributed Designs for Adaptive Knowledge Management}, author={Luis M. Rocha, Johan Bollen}, journal={arXiv preprint arXiv:cs/0002010}, year={2000}, archivePrefix={arXiv}, eprint={cs/0002010}, primaryClass={cs.IR} }
rocha2000biologically
arxiv-669393
cs/0002011
An Internet Multicast System for the Stock Market
<|reference_start|>An Internet Multicast System for the Stock Market: We are moving toward a distributed, international, twenty-four hour, electronic stock exchange. The exchange will use the global Internet, or internet technology. This system is a natural application of multicast because there are a large number of receivers that should receive the same information simultaneously. The data requirements for the stock exchange are discussed. The current multicast protocols lack the reliability, fairness, and scalability needed in this application. We describe a distributed architecture together with a reliable multicast protocol, a modification of the RMP protocol, that has characteristics appropriate for this application. The architecture is used in three applications: In the first, we construct a unified stock ticker of the transactions that are being conducted on the various physical and electronic exchanges. Our objective is to deliver the same combined ticker reliably and simultaneously to all receivers, anywhere in the world. In the second, we construct a unified sequence of buy and sell offers that are delivered to a single exchange or a collection of exchanges. Our objective is to give all traders the same fair access to an exchange independent of their relative distances to the exchange or the loss characteristics of the international network. In the third, we construct a distributed, electronic trading floor that can replace the current exchanges. This application uses the innovations from the first two applications to combine their fairness attributes.<|reference_end|>
arxiv
@article{maxemchuk2000an, title={An Internet Multicast System for the Stock Market}, author={N. F. Maxemchuk and D. H. Shur}, journal={arXiv preprint arXiv:cs/0002011}, year={2000}, archivePrefix={arXiv}, eprint={cs/0002011}, primaryClass={cs.NI} }
maxemchuk2000an
arxiv-669394
cs/0002012
On The Closest String and Substring Problems
<|reference_start|>On The Closest String and Substring Problems: The problem of finding a center string that is `close' to every given string arises in, and has many applications to, computational biology and coding theory. This problem has two versions: the Closest String problem and the Closest Substring problem. Assume that we are given a set ${\cal S}=\{s_1, s_2, ..., s_n\}$ of strings, each of length $m$. The Closest String problem asks for the smallest $d$ and a string $s$ of length $m$ which is within Hamming distance $d$ of each $s_i\in {\cal S}$. This problem comes from coding theory, where we look for a code not too far away from a given set of codes. The problem is NP-hard. Berman et al give a polynomial time algorithm for constant $d$. For super-logarithmic $d$, Ben-Dor et al give an efficient approximation algorithm using a linear program relaxation technique. The best polynomial time approximation, given by Lanctot et al and Gasieniec et al, has ratio 4/3 for all $d$. The Closest Substring problem looks for a string $t$ which is within Hamming distance $d$ of a substring of each $s_i$. This problem previously had only a $2- \frac{2}{2|\Sigma|+1}$ approximation algorithm (Lanctot et al) and is much more elusive than the Closest String problem, but it has many applications in finding conserved regions, genetic drug target identification, and genetic probes in molecular biology. Whether there are efficient approximation algorithms for both problems has been a major open question in this area. We present two polynomial time approximation algorithms with approximation ratio $1+ \epsilon$ for any small $\epsilon$, settling both questions.<|reference_end|>
arxiv
@article{li2000on, title={On The Closest String and Substring Problems}, author={Ming Li, Bin Ma and Lusheng Wang}, journal={arXiv preprint arXiv:cs/0002012}, year={2000}, archivePrefix={arXiv}, eprint={cs/0002012}, primaryClass={cs.CE cs.CC} }
li2000on
arxiv-669395
cs/0002013
Computing and Comparing Semantics of Programs in Multi-valued Logics
<|reference_start|>Computing and Comparing Semantics of Programs in Multi-valued Logics: The different semantics that can be assigned to a logic program correspond to different assumptions made concerning the atoms whose logical values cannot be inferred from the rules. Thus, the well founded semantics corresponds to the assumption that every such atom is false, while the Kripke-Kleene semantics corresponds to the assumption that every such atom is unknown. In this paper, we propose to unify and extend this assumption-based approach by introducing parameterized semantics for logic programs. The parameter holds the value that one assumes for all atoms whose logical values cannot be inferred from the rules. We work within multi-valued logic with bilattice structure, and we consider the class of logic programs defined by Fitting. Following Fitting's approach, we define a simple operator that allows us to compute the parameterized semantics, and to compare and combine semantics obtained for different values of the parameter. The semantics proposed by Fitting corresponds to the value false. We also show that our approach captures and extends the usual semantics of conventional logic programs thereby unifying their computation.<|reference_end|>
arxiv
@article{loyer2000computing, title={Computing and Comparing Semantics of Programs in Multi-valued Logics}, author={Y. Loyer and N. Spyratos and D. Stamate}, journal={arXiv preprint arXiv:cs/0002013}, year={2000}, archivePrefix={arXiv}, eprint={cs/0002013}, primaryClass={cs.LO cs.DB} }
loyer2000computing
arxiv-669396
cs/0002014
Safe cooperative robot dynamics on graphs
<|reference_start|>Safe cooperative robot dynamics on graphs: This paper initiates the use of vector fields to design, optimize, and implement reactive schedules for safe cooperative robot patterns on planar graphs. We consider Automated Guided Vehicles (AGVs) operating upon a predefined network of pathways. In contrast to the case of locally Euclidean configuration spaces, regularization of collisions is no longer a local procedure, and issues concerning the global topology of configuration spaces must be addressed. The focus of the present inquiry is the achievement of safe, efficient, cooperative patterns in the simplest nontrivial example (a pair of robots on a Y-network) by means of a state-event hierarchical controller.<|reference_end|>
arxiv
@article{ghrist2000safe, title={Safe cooperative robot dynamics on graphs}, author={Robert Ghrist and Daniel Koditschek}, journal={arXiv preprint arXiv:cs/0002014}, year={2000}, archivePrefix={arXiv}, eprint={cs/0002014}, primaryClass={cs.RO cs.AI} }
ghrist2000safe
arxiv-669397
cs/0002015
Genetic Algorithms for Extension Search in Default Logic
<|reference_start|>Genetic Algorithms for Extension Search in Default Logic: A default theory can be characterized by its sets of plausible conclusions, called its extensions. But, due to the theoretical complexity of Default Logic (Sigma_2p-complete), the problem of finding such an extension is very difficult if one wants to deal with non-trivial knowledge bases. Based on the principle of natural selection, Genetic Algorithms have been quite successfully applied to combinatorial problems and seem useful for problems with huge search spaces and when no tractable algorithm is available. The purpose of this paper is to show that techniques derived from Genetic Algorithms can be used in order to build an efficient default reasoning system. After providing a formal description of the components required for an extension search based on Genetic Algorithms principles, we exhibit some experimental results.<|reference_end|>
arxiv
@article{nicolas2000genetic, title={Genetic Algorithms for Extension Search in Default Logic}, author={P. Nicolas and F. Saubion and I. Stephan (University of Angers, France)}, journal={arXiv preprint arXiv:cs/0002015}, year={2000}, archivePrefix={arXiv}, eprint={cs/0002015}, primaryClass={cs.AI cs.LO} }
nicolas2000genetic
arxiv-669398
cs/0002016
SLT-Resolution for the Well-Founded Semantics
<|reference_start|>SLT-Resolution for the Well-Founded Semantics: Global SLS-resolution and SLG-resolution are two representative mechanisms for top-down evaluation of the well-founded semantics of general logic programs. Global SLS-resolution is linear for query evaluation but suffers from infinite loops and redundant computations. In contrast, SLG-resolution resolves infinite loops and redundant computations by means of tabling, but it is not linear. The principal disadvantage of a non-linear approach is that it cannot be implemented using a simple, efficient stack-based memory structure nor can it be easily extended to handle some strictly sequential operators such as cuts in Prolog. In this paper, we present a linear tabling method, called SLT-resolution, for top-down evaluation of the well-founded semantics. SLT-resolution is a substantial extension of SLDNF-resolution with tabling. Its main features include: (1) It resolves infinite loops and redundant computations while preserving the linearity. (2) It is terminating, and sound and complete w.r.t. the well-founded semantics for programs with the bounded-term-size property with non-floundering queries. Its time complexity is comparable with SLG-resolution and polynomial for function-free logic programs. (3) Because of its linearity for query evaluation, SLT-resolution bridges the gap between the well-founded semantics and standard Prolog implementation techniques. It can be implemented by an extension to any existing Prolog abstract machines such as WAM or ATOAM.<|reference_end|>
arxiv
@article{shen2000slt-resolution, title={SLT-Resolution for the Well-Founded Semantics}, author={Yi-Dong Shen and Li-Yan Yuan and Jia-Huai You}, journal={Journal of Automated Reasoning 28(1):53-97, 2002}, year={2000}, archivePrefix={arXiv}, eprint={cs/0002016}, primaryClass={cs.AI cs.PL} }
shen2000slt-resolution
arxiv-669399
cs/0002017
An Usage Measure Based on Psychophysical Relations
<|reference_start|>An Usage Measure Based on Psychophysical Relations: A new word usage measure is proposed. It is based on psychophysical relations and allows one to reveal words by their degree of "importance" for making basic dictionaries of sublanguages.<|reference_end|>
arxiv
@article{kromer2000an, title={An Usage Measure Based on Psychophysical Relations}, author={V. Kromer}, journal={arXiv preprint arXiv:cs/0002017}, year={2000}, archivePrefix={arXiv}, eprint={cs/0002017}, primaryClass={cs.CL} }
kromer2000an
arxiv-669400
cs/0002018
Efficient generation of rotating workforce schedules
<|reference_start|>Efficient generation of rotating workforce schedules: Generating high-quality schedules for a rotating workforce is a critical task in all settings where a certain staffing level must be guaranteed beyond the capacity of single employees, such as for instance in industrial plants, hospitals, or airline companies. Results from ergonomics \cite{BEST91} indicate that rotating workforce schedules have a profound impact on the health and social life of employees as well as on their performance at work. Moreover, rotating workforce schedules must satisfy legal requirements and should also meet the objectives of the employing organization. We describe our solution to this problem. A basic design decision was to aim at quickly obtaining high-quality schedules for realistically sized problems while maintaining human control. The interaction between the decision maker and the algorithm therefore consists of four steps: (1) choosing a set of lengths of work blocks (a work block is a sequence of consecutive days of work shifts), (2) choosing a particular sequence of work and days-off blocks among those that have optimal weekend characteristics, (3) enumerating possible shift sequences for the chosen work blocks subject to shift change constraints and bounds on sequences of shifts, and (4) assignment of shift sequences to work blocks while fulfilling the staffing requirements. The combination of constraint satisfaction and problem-oriented intelligent backtracking algorithms in each of the four steps makes it possible to find good solutions for real-world problems in acceptable time. Computational results from real-world problems and from benchmark examples found in the literature confirm the viability of our approach. The algorithms are now part of a commercial shift scheduling software package.<|reference_end|>
arxiv
@article{musliu2000efficient, title={Efficient generation of rotating workforce schedules}, author={Nysret Musliu, Johannes Gaertner, Wolfgang Slany}, journal={arXiv preprint arXiv:cs/0002018}, year={2000}, number={dbai-tr-2000-35}, archivePrefix={arXiv}, eprint={cs/0002018}, primaryClass={cs.OH} }
musliu2000efficient