corpus_id | paper_id | title | abstract | source | bibtex | citation_key |
---|---|---|---|---|---|---|
arxiv-676501 | cs/9912019 | Quantum Bit Commitment Expansion | <|reference_start|>Quantum Bit Commitment Expansion: The paper was retracted.<|reference_end|> | arxiv | @article{mayers1999quantum,
title={Quantum Bit Commitment Expansion},
author={Dominic Mayers},
journal={arXiv preprint arXiv:cs/9912019},
year={1999},
archivePrefix={arXiv},
eprint={cs/9912019},
primaryClass={cs.CR}
} | mayers1999quantum |
arxiv-676502 | cs/9912020 | Additive models in high dimensions | <|reference_start|>Additive models in high dimensions: We discuss some aspects of approximating functions on high-dimensional data sets with additive functions or ANOVA decompositions, that is, sums of functions depending on fewer variables each. It is seen that under appropriate smoothness conditions, the errors of the ANOVA decompositions are of order $O(n^{m/2})$ for approximations using sums of functions of up to $m$ variables under some mild restrictions on the (possibly dependent) predictor variables. Several simulated examples illustrate this behaviour.<|reference_end|> | arxiv | @article{hegland1999additive,
title={Additive models in high dimensions},
author={Markus Hegland and Vladimir Pestov},
journal={Proc. of 12th Computational Techniques and Applications
Conference, CTAC-2004 (Rob May and A.J. Roberts, eds.), ANZIAM J. 46 (2005),
C1205-C1221.},
year={1999},
archivePrefix={arXiv},
eprint={cs/9912020},
primaryClass={cs.DS}
} | hegland1999additive |
arxiv-676503 | cs/9912021 | Seeing the Forest in the Tree: Applying VRML to Mathematical Problems in Number Theory | <|reference_start|>Seeing the Forest in the Tree: Applying VRML to Mathematical Problems in Number Theory: We show how VRML (Virtual Reality Modeling Language) can provide potentially powerful insight into the 3x + 1 problem via the introduction of a unique geometrical object, called the 'G-cell', akin to a fractal generator. We present an example of a VRML world developed programmatically with the G-cell. The role of VRML as a tool for furthering the understanding of the 3x+1 problem is potentially significant for several reasons: a) VRML permits the observer to zoom into the geometric structure at all scales (up to limitations of the computing platform). b) VRML enables rotation to alter comparative visual perspective (similar to Tukey's data-spinning concept). c) VRML facilitates the demonstration of interesting tree features between collaborators on the internet who might otherwise have difficulty conveying their ideas unambiguously. d) VRML promises to reveal any dimensional dependencies among 3x+1 sequences.<|reference_end|> | arxiv | @article{gunther1999seeing,
title={Seeing the Forest in the Tree: Applying VRML to Mathematical Problems in
Number Theory},
author={Neil J. Gunther},
journal={Proc. IEEE-SPIE 2000 12th International Symposium on Internet
Imaging},
year={1999},
doi={10.1117/12.373461},
archivePrefix={arXiv},
eprint={cs/9912021},
primaryClass={cs.MS cs.CE}
} | gunther1999seeing |
arxiv-676504 | gr-qc/0209061 | Computers with closed timelike curves can solve hard problems | <|reference_start|>Computers with closed timelike curves can solve hard problems: A computer which has access to a closed timelike curve, and can thereby send the results of calculations into its own past, can exploit this to solve difficult computational problems efficiently. I give a specific demonstration of this for the problem of factoring large numbers, and argue that a similar approach can solve NP-complete and PSPACE-complete problems. I discuss the potential impact of quantum effects on this result.<|reference_end|> | arxiv | @article{brun2002computers,
title={Computers with closed timelike curves can solve hard problems},
author={Todd A. Brun (Institute for Advanced Study)},
journal={Found.Phys.Lett. 16 (2003) 245-253},
year={2002},
doi={10.1023/A:1025967225931},
archivePrefix={arXiv},
eprint={gr-qc/0209061},
primaryClass={gr-qc cs.CC quant-ph}
} | brun2002computers |
arxiv-676505 | gr-qc/0209096 | Gravity, torsion, Dirac field and computer algebra using MAPLE and REDUCE | <|reference_start|>Gravity, torsion, Dirac field and computer algebra using MAPLE and REDUCE: The article presents computer algebra procedures and routines applied to the study of the Dirac field on curved spacetimes. The main part of the procedures is devoted to the construction of Pauli and Dirac matrices algebra on an anholonomic orthonormal reference frame. Then these procedures are used to compute the Dirac equation on curved spacetimes in a sequence of special dedicated routines. A comparative review of such procedures obtained for two computer algebra platforms (REDUCE + EXCALC and MAPLE + GRTensorII) is carried out. Applications for the calculus of the Dirac equation on specific examples of spacetimes with or without torsion are pointed out.<|reference_end|> | arxiv | @article{vulcanov2002gravity,
title={Gravity, torsion, Dirac field and computer algebra using MAPLE and
REDUCE},
author={D.N. Vulcanov (Max-Planck-Institut fur Gravitationsphysik,
Albert-Einstein-Institut, Golm, Germany)},
journal={arXiv preprint arXiv:gr-qc/0209096},
year={2002},
archivePrefix={arXiv},
eprint={gr-qc/0209096},
primaryClass={gr-qc cs.SC hep-th physics.comp-ph}
} | vulcanov2002gravity |
arxiv-676506 | hep-lat/0003009 | Data storage issues in lattice QCD calculations | <|reference_start|>Data storage issues in lattice QCD calculations: I describe some of the data management issues in lattice Quantum Chromodynamics calculations. I focus on the experience of the UKQCD collaboration. I describe an attempt to use a relational database to store part of the data produced by a lattice QCD calculation.<|reference_end|> | arxiv | @article{mcneile2000data,
title={Data storage issues in lattice QCD calculations},
author={Craig McNeile},
journal={arXiv preprint arXiv:hep-lat/0003009},
year={2000},
archivePrefix={arXiv},
eprint={hep-lat/0003009},
primaryClass={hep-lat cs.DB}
} | mcneile2000data |
arxiv-676507 | hep-lat/0004007 | Matrix Distributed Processing: A set of C++ Tools for implementing generic lattice computations on parallel systems | <|reference_start|>Matrix Distributed Processing: A set of C++ Tools for implementing generic lattice computations on parallel systems: We present a set of programming tools (classes and functions written in C++ and based on Message Passing Interface) for fast development of generic parallel (and non-parallel) lattice simulations. They are collectively called MDP 1.2. These programming tools include classes and algorithms for matrices, random number generators, distributed lattices (with arbitrary topology), fields and parallel iterations. No previous knowledge of MPI is required in order to use them. Some applications in electromagnetism, electronics, condensed matter and lattice QCD are presented.<|reference_end|> | arxiv | @article{di pierro2000matrix,
title={Matrix Distributed Processing: A set of C++ Tools for implementing
generic lattice computations on parallel systems},
author={Massimo Di Pierro},
journal={Comput.Phys.Commun. 141 (2001) 98-148},
year={2000},
doi={10.1016/S0010-4655(01)00297-1},
number={FERMILAB-PUB-00-079-T},
archivePrefix={arXiv},
eprint={hep-lat/0004007},
primaryClass={hep-lat cs.DC cs.MS physics.comp-ph}
} | di pierro2000matrix |
arxiv-676508 | hep-lat/0307015 | On the scaling of computational particle physics codes on cluster computers | <|reference_start|>On the scaling of computational particle physics codes on cluster computers: Many applications in computational science are sufficiently compute-intensive that they depend on the power of parallel computing for viability. For all but the "embarrassingly parallel" problems, the performance depends upon the level of granularity that can be achieved on the computer platform. Our computational particle physics applications require machines that can support a wide range of granularities, but in general, compute-intensive state-of-the-art projects will require finely grained distributions. Of the different types of machines available for the task, we consider cluster computers. The use of clusters of commodity computers in high performance computing has many advantages including the raw price/performance ratio and the flexibility of machine configuration and upgrade. Here we focus on what is usually considered the weak point of cluster technology: the scaling behaviour when faced with a numerically intensive parallel computation. To this end we examine the scaling of our own applications from numerical quantum field theory on a cluster and infer conclusions about the more general case.<|reference_end|> | arxiv | @article{sroczynski2003on,
title={On the scaling of computational particle physics codes on cluster
computers},
author={Z. Sroczynski and N. Eicker and Th. Lippert and B. Orth and K. Schilling},
journal={arXiv preprint arXiv:hep-lat/0307015},
year={2003},
number={LTH 583},
archivePrefix={arXiv},
eprint={hep-lat/0307015},
primaryClass={hep-lat cs.DC}
} | sroczynski2003on |
arxiv-676509 | hep-lat/0308005 | Parallel implementation of a lattice-gauge-theory code: studying quark confinement on PC clusters | <|reference_start|>Parallel implementation of a lattice-gauge-theory code: studying quark confinement on PC clusters: We consider the implementation of a parallel Monte Carlo code for high-performance simulations on PC clusters with MPI. We carry out tests of speedup and efficiency. The code is used for numerical simulations of pure SU(2) lattice gauge theory at very large lattice volumes, in order to study the infrared behavior of gluon and ghost propagators. This problem is directly related to the confinement of quarks and gluons in the physics of strong interactions.<|reference_end|> | arxiv | @article{cucchieri2003parallel,
title={Parallel implementation of a lattice-gauge-theory code: studying quark
confinement on PC clusters},
author={Attilio Cucchieri and Tereza Mendes and Gonzalo Travieso and Andre R.
Taurines},
journal={arXiv preprint arXiv:hep-lat/0308005},
year={2003},
archivePrefix={arXiv},
eprint={hep-lat/0308005},
primaryClass={hep-lat cs.DC}
} | cucchieri2003parallel |
arxiv-676510 | hep-lat/0505005 | Parallel Programming with Matrix Distributed Processing | <|reference_start|>Parallel Programming with Matrix Distributed Processing: Matrix Distributed Processing (MDP) is a C++ library for fast development of efficient parallel algorithms. It constitutes the core of FermiQCD. MDP enables programmers to focus on algorithms, while parallelization is dealt with automatically and transparently. Here we present a brief overview of MDP and examples of applications in Computer Science (Cellular Automata), Engineering (PDE Solver) and Physics (Ising Model).<|reference_end|> | arxiv | @article{di pierro2005parallel,
title={Parallel Programming with Matrix Distributed Processing},
author={Massimo Di Pierro},
journal={arXiv preprint arXiv:hep-lat/0505005},
year={2005},
archivePrefix={arXiv},
eprint={hep-lat/0505005},
primaryClass={hep-lat cs.CE physics.comp-ph}
} | di pierro2005parallel |
arxiv-676511 | hep-lat/9808001 | Genetic Algorithm for SU(N) gauge theory on a lattice | <|reference_start|>Genetic Algorithm for SU(N) gauge theory on a lattice: An algorithm is proposed for the simulation of pure SU(N) lattice gauge theories based on Genetic Algorithms (GAs). The main difference between GAs and Metropolis methods (MPs) is that GAs treat a population of points at once, while MPs treat only one point in the search space. This provides GAs with information about the assortment as well as the fitness of the evolution function, producing a better solution. We apply GAs to SU(2) pure gauge theory on a 2-dimensional lattice and show the results are consistent with those given by MP and Heatbath methods (HBs). The thermalization speed of GAs is notably faster than that of the simple MPs.<|reference_end|> | arxiv | @article{azusa1998genetic,
title={Genetic Algorithm for SU(N) gauge theory on a lattice},
author={Yamaguchi Azusa},
journal={arXiv preprint arXiv:hep-lat/9808001},
year={1998},
number={OCHA-PP-122},
archivePrefix={arXiv},
eprint={hep-lat/9808001},
primaryClass={hep-lat cs.NE}
} | azusa1998genetic |
arxiv-676512 | hep-lat/9809068 | Genetic Algorithm for SU(2) Gauge Theory on a 2-dimensional Lattice | <|reference_start|>Genetic Algorithm for SU(2) Gauge Theory on a 2-dimensional Lattice: An algorithm is proposed for the simulation of pure SU(N) lattice gauge theories based on Genetic Algorithms (GAs). We apply GAs to SU(2) pure gauge theory on a 2-dimensional lattice and show that the results, the action per plaquette and Wilson loops, are consistent with those given by Metropolis methods (MPs) and Heatbath methods (HBs). The thermalization speed of GAs is notably faster than that of the simple MPs.<|reference_end|> | arxiv | @article{yamaguchi1998genetic,
title={Genetic Algorithm for SU(2) Gauge Theory on a 2-dimensional Lattice},
author={A.Yamaguchi},
journal={Nucl.Phys.Proc.Suppl. 73 (1999) 847-849},
year={1998},
doi={10.1016/S0920-5632(99)85221-9},
archivePrefix={arXiv},
eprint={hep-lat/9809068},
primaryClass={hep-lat cs.NE}
} | yamaguchi1998genetic |
arxiv-676513 | hep-ph/0411100 | LSJK - a C++ library for arbitrary-precision numeric evaluation of the generalized log-sine functions | <|reference_start|>LSJK - a C++ library for arbitrary-precision numeric evaluation of the generalized log-sine functions: Generalized log-sine functions appear in higher order epsilon-expansion of different Feynman diagrams. We present an algorithm for numerical evaluation of these functions of real argument. This algorithm is implemented as a C++ library with arbitrary-precision arithmetic for integers 0 < k < 9 and j > 1. Some new relations and representations for the generalized log-sine functions are given.<|reference_end|> | arxiv | @article{kalmykov2004lsjk,
title={LSJK - a C++ library for arbitrary-precision numeric evaluation of the
generalized log-sine functions},
author={M.Yu.Kalmykov (Dubna, JINR) and A.Sheplyakov (Dubna, JINR)},
journal={Comput.Phys.Commun. 172 (2005) 45-59},
year={2004},
doi={10.1016/j.cpc.2005.04.013},
archivePrefix={arXiv},
eprint={hep-ph/0411100},
primaryClass={hep-ph cs.MS cs.NA math-ph math.MP math.NA}
} | kalmykov2004lsjk |
arxiv-676514 | hep-ph/0702279 | The Multithreaded version of FORM | <|reference_start|>The Multithreaded version of FORM: We present TFORM, the version of the symbolic manipulation system FORM that can make simultaneous use of several processors in a shared memory architecture. The implementation uses Posix threads, also called pthreads, and is therefore easily portable between various operating systems. Most existing FORM programs will be able to take advantage of the increased processing power, without the need for modifications. In some cases some minor additions may be needed. For a computer with two processors a typical improvement factor in the running time is 1.7 when compared to the traditional version of FORM. In the case of computers with 4 processors a typical improvement factor in the execution time is slightly above 3.<|reference_end|> | arxiv | @article{tentyukov2007the,
title={The Multithreaded version of FORM},
author={M. Tentyukov and J.A.M. Vermaseren},
journal={Comput.Phys.Commun.181:1419-1427,2010},
year={2007},
doi={10.1016/j.cpc.2010.04.009},
number={NIKHEF 07-005, SFB/CPP-07-08, TTP07-06},
archivePrefix={arXiv},
eprint={hep-ph/0702279},
primaryClass={hep-ph cs.SC}
} | tentyukov2007the |
arxiv-676515 | hep-th/0201092 | A Quantum Computer Foundation for the Standard Model and SuperString Theories | <|reference_start|>A Quantum Computer Foundation for the Standard Model and SuperString Theories: We show the Standard Model and SuperString Theories can be naturally based on a Quantum Computer foundation. The Standard Model of elementary particles can be viewed as defining a Quantum Computer Grammar and language. A Quantum Computer in a certain limit naturally forms a Superspace upon which Supersymmetry rotations can be defined - a Continuum Quantum Computer. Quantum high-level computer languages such as Quantum C and Quantum Assembly language are also discussed. In these new linguistic representations, particles become literally symbols or letters, and particle interactions become grammar rules. This view is NOT the same as the often-expressed view that Mathematics is the language of Physics. Some new developments relating to Quantum Computers and Quantum Turing Machines are also described.<|reference_end|> | arxiv | @article{blaha2002a,
title={A Quantum Computer Foundation for the Standard Model and SuperString
Theories},
author={Stephen Blaha},
journal={arXiv preprint arXiv:hep-th/0201092},
year={2002},
archivePrefix={arXiv},
eprint={hep-th/0201092},
primaryClass={hep-th cs.PL quant-ph}
} | blaha2002a |
arxiv-676516 | hep-th/0208218 | Introducing LambdaTensor1.0 - A package for explicit symbolic and numeric Lie algebra and Lie group calculations | <|reference_start|>Introducing LambdaTensor1.0 - A package for explicit symbolic and numeric Lie algebra and Lie group calculations: Due to the occurrence of large exceptional Lie groups in supergravity, calculations involving explicit Lie algebra and Lie group element manipulations easily become very complicated and hence also error-prone if done by hand. Research on the extremal structure of maximal gauged supergravity theories in various dimensions sparked the development of a library for efficient abstract multilinear algebra calculations involving sparse and non-sparse higher-rank tensors, which is presented here.<|reference_end|> | arxiv | @article{fischbacher2002introducing,
title={Introducing LambdaTensor1.0 - A package for explicit symbolic and
numeric Lie algebra and Lie group calculations},
author={Thomas Fischbacher},
journal={arXiv preprint arXiv:hep-th/0208218},
year={2002},
number={AEI-2002-065},
archivePrefix={arXiv},
eprint={hep-th/0208218},
primaryClass={hep-th cs.MS math-ph math.MP}
} | fischbacher2002introducing |
arxiv-676517 | hep-th/0305176 | Mapping the vacuum structure of gauged maximal supergravities: an application of high-performance symbolic algebra | <|reference_start|>Mapping the vacuum structure of gauged maximal supergravities: an application of high-performance symbolic algebra: The analysis of the extremal structure of the scalar potentials of gauged maximally extended supergravity models in five, four, and three dimensions, and hence the determination of possible vacuum states of these models is a computationally challenging task due to the occurrence of the exceptional Lie groups $E_6$, $E_7$, $E_8$ in the definition of these potentials. At present, the most promising approach to gain information about nontrivial vacua of these models is to perform a truncation of the potential to submanifolds of the $G/H$ coset manifold of scalars which are invariant under a subgroup of the gauge group and of sufficiently low dimension to make an analytic treatment possible. New tools are presented which allow a systematic and highly effective study of these potentials up to a previously unreached level of complexity. Explicit forms of new truncations of the potentials of four- and three-dimensional models are given, and for N=16, D=3 supergravities, which are much more rich in structure than their higher-dimensional cousins, a series of new nontrivial vacua is identified and analysed.<|reference_end|> | arxiv | @article{fischbacher2003mapping,
title={Mapping the vacuum structure of gauged maximal supergravities: an
application of high-performance symbolic algebra},
author={Thomas Fischbacher},
journal={arXiv preprint arXiv:hep-th/0305176},
year={2003},
number={AEI-2003-046},
archivePrefix={arXiv},
eprint={hep-th/0305176},
primaryClass={hep-th cs.SC}
} | fischbacher2003mapping |
arxiv-676518 | hep-th/0602072 | Computational complexity of the landscape I | <|reference_start|>Computational complexity of the landscape I: We study the computational complexity of the physical problem of finding vacua of string theory which agree with data, such as the cosmological constant, and show that such problems are typically NP hard. In particular, we prove that in the Bousso-Polchinski model, the problem is NP complete. We discuss the issues this raises and the possibility that, even if we were to find compelling evidence that some vacuum of string theory describes our universe, we might never be able to find that vacuum explicitly. In a companion paper, we apply this point of view to the question of how early cosmology might select a vacuum.<|reference_end|> | arxiv | @article{denef2006computational,
title={Computational complexity of the landscape I},
author={Frederik Denef (KU Leuven) and Michael R. Douglas (Rutgers and IHES)},
journal={AnnalsPhys.322:1096-1142,2007},
year={2006},
doi={10.1016/j.aop.2006.07.013},
archivePrefix={arXiv},
eprint={hep-th/0602072},
primaryClass={hep-th cs.CC}
} | denef2006computational |
arxiv-676519 | hep-th/0612240 | All order epsilon-expansion of Gauss hypergeometric functions with integer and half/integer values of parameters | <|reference_start|>All order epsilon-expansion of Gauss hypergeometric functions with integer and half/integer values of parameters: It is proved that the Laurent expansion of the following Gauss hypergeometric functions, 2F1(I1+a*epsilon, I2+b*epsilon; I3+c*epsilon; z), 2F1(I1+a*epsilon, I2+b*epsilon; I3+1/2+c*epsilon; z), 2F1(I1+1/2+a*epsilon, I2+b*epsilon; I3+c*epsilon; z), 2F1(I1+1/2+a*epsilon, I2+b*epsilon; I3+1/2+c*epsilon; z), 2F1(I1+1/2+a*epsilon, I2+1/2+b*epsilon; I3+1/2+c*epsilon; z), where I1, I2, I3 are arbitrary nonnegative integers, a, b, c are arbitrary numbers and epsilon is an arbitrarily small parameter, are expressible in terms of the harmonic polylogarithms of Remiddi and Vermaseren with polynomial coefficients. An efficient algorithm for the calculation of the higher-order coefficients of the Laurent expansion is constructed. Some particular cases of Gauss hypergeometric functions are also discussed.<|reference_end|> | arxiv | @article{kalmykov2006all,
title={All order epsilon-expansion of Gauss hypergeometric functions with
integer and half/integer values of parameters},
author={M.Yu.Kalmykov (Baylor U. \& Dubna, JINR) and B.F.L.Ward and S.Yost
(Baylor U.)},
journal={JHEP 0702:040,2007},
year={2006},
doi={10.1088/1126-6708/2007/02/040},
number={BU-HEPP-06-12},
archivePrefix={arXiv},
eprint={hep-th/0612240},
primaryClass={hep-th cs.SC hep-ph math-ph math.CA math.MP physics.comp-ph}
} | kalmykov2006all |
arxiv-676520 | math-ph/0201011 | Symbolic Expansion of Transcendental Functions | <|reference_start|>Symbolic Expansion of Transcendental Functions: Higher transcendental functions occur frequently in the calculation of Feynman integrals in quantum field theory. Their expansion in a small parameter is a non-trivial task. We report on a computer program which allows the systematic expansion of certain classes of functions. The algorithms are based on the Hopf algebra of nested sums. The program is written in C++ and uses the GiNaC library.<|reference_end|> | arxiv | @article{weinzierl2002symbolic,
title={Symbolic Expansion of Transcendental Functions},
author={Stefan Weinzierl},
journal={Comput.Phys.Commun.145:357-370,2002},
year={2002},
doi={10.1016/S0010-4655(02)00261-8},
archivePrefix={arXiv},
eprint={math-ph/0201011},
primaryClass={math-ph cs.SC hep-ph math.MP}
} | weinzierl2002symbolic |
arxiv-676521 | math-ph/0211067 | Method of Additional Structures on the Objects of a Monoidal Kleisli Category as a Background for Information Transformers Theory | <|reference_start|>Method of Additional Structures on the Objects of a Monoidal Kleisli Category as a Background for Information Transformers Theory: Category theory provides a compact method of encoding mathematical structures in a uniform way, thereby enabling the use of general theorems on, for example, equivalence and universal constructions. In this article we develop the method of additional structures on the objects of a monoidal Kleisli category. It is proposed to consider any uniform class of information transformers (ITs) as a family of morphisms of a category that satisfy a certain set of axioms. This makes it possible to study in a uniform way different types of ITs, e.g., statistical, multivalued, and fuzzy ITs. The proposed axioms define a category of ITs as a monoidal category that contains a subcategory (of deterministic ITs) with finite products. Besides, it is shown that many categories of ITs can be constructed as Kleisli categories with additional structures.<|reference_end|> | arxiv | @article{golubtsov2002method,
title={Method of Additional Structures on the Objects of a Monoidal Kleisli
Category as a Background for Information Transformers Theory},
author={P. V. Golubtsov and S. S. Moskaliuk},
journal={Hadronic Journal, V.25, No.2,179-238 (2002)},
year={2002},
archivePrefix={arXiv},
eprint={math-ph/0211067},
primaryClass={math-ph cs.MA math.CT math.MP}
} | golubtsov2002method |
arxiv-676522 | math-ph/0407056 | Internal Turing Machines | <|reference_start|>Internal Turing Machines: Using nonstandard analysis, we will extend the classical Turing machines into the internal Turing machines. The internal Turing machines have the capability to work with an infinite ($*$-finite) number of bits while keeping the finite combinatoric structures of the classical Turing machines. We will show the following. The internal deterministic Turing machines can do in $*$-polynomial time what a classical deterministic Turing machine can do in an arbitrary finite amount of time. Given an element $<M;x>\in HALT$ (more precisely, the $*$-embedding of $HALT$), there is an internal deterministic Turing machine which will take $<M;x>$ as input and halt in the "yes" state. The language ${}^*Halt$ cannot be decided by the internal deterministic Turing machines. The internal deterministic Turing machines can be viewed as the asymptotic behavior of finite precision approximation to real number computations. It is possible to use the internal probabilistic Turing machines to simulate finite state quantum mechanics with infinite precision. This simulation suggests that no information can be transmitted instantaneously and, at the same time, that the Turing machine model can simulate instantaneous collapse of the wave function. The internal deterministic Turing machines are powerful, but if $P \neq NP$, then there are internal problems which the internal deterministic Turing machines can solve but not in $*$-polynomial time.<|reference_end|> | arxiv | @article{loo2004internal,
title={Internal Turing Machines},
author={Ken Loo},
journal={arXiv preprint arXiv:math-ph/0407056},
year={2004},
archivePrefix={arXiv},
eprint={math-ph/0407056},
primaryClass={math-ph cs.CC math.LO math.MP quant-ph}
} | loo2004internal |
arxiv-676523 | math-ph/0504048 | On Compatibility of Discrete Relations | <|reference_start|>On Compatibility of Discrete Relations: An approach to compatibility analysis of systems of discrete relations is proposed. Unlike the Groebner basis technique, the proposed scheme is not based on the polynomial ring structure. It uses more primitive set-theoretic and topological concepts and constructions. We illustrate the approach by application to some two-state cellular automata. In the two-state case the Groebner basis method is also applicable, and we compare both approaches.<|reference_end|> | arxiv | @article{kornyak2005on,
title={On Compatibility of Discrete Relations},
author={Vladimir V. Kornyak},
journal={CASC 2005, LNCS 3718, pp. 272--284, Springer-Verlag Berlin
Heidelberg, 2005},
year={2005},
archivePrefix={arXiv},
eprint={math-ph/0504048},
primaryClass={math-ph cs.SC math.AC math.MP nlin.CG}
} | kornyak2005on |
arxiv-676524 | math-ph/0508065 | Finding Liouvillian first integrals of rational ODEs of any order in finite terms | <|reference_start|>Finding Liouvillian first integrals of rational ODEs of any order in finite terms: It is known, due to Mordukhai-Boltovski, Ritt, Prelle, Singer, Christopher and others, that if a given rational ODE has a Liouvillian first integral then the corresponding integrating factor of the ODE must be of a very special form of a product of powers and exponents of irreducible polynomials. These results lead to a partial algorithm for finding Liouvillian first integrals. However, there are two main complications on the way to obtaining polynomials in the integrating factor form. First of all, one has to find an upper bound for the degrees of the polynomials in the product above, an unsolved problem, and then the set of coefficients for each of the polynomials by the computationally-intensive method of undetermined parameters. As a result, this approach was implemented in CAS only for first and relatively simple second order ODEs. We propose an algebraic method for finding polynomials of the integrating factors for rational ODEs of any order, based on examination of the resultants of the polynomials in the numerator and the denominator of the right-hand side of such an equation. If both the numerator and the denominator of the right-hand side of such an ODE are not constants, the method can determine in finite terms an explicit expression of an integrating factor if the ODE permits integrating factors of the above mentioned form and then the Liouvillian first integral. The tests of this procedure based on the proposed method, implemented in Maple in the case of rational integrating factors, confirm the consistency and efficiency of the method.<|reference_end|> | arxiv | @article{kosovtsov2005finding,
title={Finding Liouvillian first integrals of rational ODEs of any order in
finite terms},
author={Yuri N. Kosovtsov},
journal={SIGMA 2 (2006), 059, 8 pages},
year={2005},
doi={10.3842/SIGMA.2006.059},
archivePrefix={arXiv},
eprint={math-ph/0508065},
primaryClass={math-ph cs.SC math.CA math.MP nlin.SI}
} | kosovtsov2005finding |
arxiv-676525 | math-ph/0512026 | MIMO Channel Correlation in General Scattering Environments | <|reference_start|>MIMO Channel Correlation in General Scattering Environments: This paper presents an analytical model for the fading channel correlation in general scattering environments. In contrast to the existing correlation models, our new approach treats the scattering environment as non-separable and it is modeled using a bi-angular power distribution. The bi-angular power distribution is parameterized by the mean departure and arrival angles, angular spreads of the univariate angular power distributions at the transmitter and receiver apertures, and a third parameter, the covariance between transmit and receive angles, which captures the statistical interdependency between angular power distributions at the transmitter and receiver apertures. When this third parameter is zero, this new model reduces to the well known "Kronecker" model. Using the proposed model, we show that the Kronecker model is a good approximation to the actual channel when the scattering channel consists of a single scattering cluster. In the presence of multiple remote scattering clusters we show that the Kronecker model overestimates the performance by artificially increasing the number of multipaths in the channel.<|reference_end|> | arxiv | @article{lamahewa2005mimo,
title={MIMO Channel Correlation in General Scattering Environments},
author={Tharaka A. Lamahewa and Rodney A. Kennedy and Thushara D. Abhayapala
and Terence Betlehem},
journal={arXiv preprint arXiv:math-ph/0512026},
year={2005},
archivePrefix={arXiv},
eprint={math-ph/0512026},
primaryClass={math-ph cs.IT math.IT math.MP}
} | lamahewa2005mimo |
arxiv-676526 | math-ph/0603068 | A Spinorial Formulation of the Maximum Clique Problem of a Graph | <|reference_start|>A Spinorial Formulation of the Maximum Clique Problem of a Graph: We present a new formulation of the maximum clique problem of a graph in complex space. We start by observing that the adjacency matrix A of a graph can always be written in the form A = B B where B is a complex, symmetric matrix formed by vectors of zero length (null vectors), and the maximum clique problem can be transformed into a geometrical problem for these vectors. This problem, in turn, is translated into spinorial language, and we show that each graph uniquely identifies a set of pure spinors, that is, vectors of the endomorphism space of Clifford algebras, and the maximum clique problem is formalized in this setting so that this much-studied problem may take advantage of recent progress in pure spinor geometry.<|reference_end|> | arxiv | @article{budinich2006a,
title={A Spinorial Formulation of the Maximum Clique Problem of a Graph},
author={Marco Budinich and Paolo Budinich},
journal={J. Math. Phys. 47, 043502 (2006)},
year={2006},
doi={10.1063/1.2186256},
archivePrefix={arXiv},
eprint={math-ph/0603068},
primaryClass={math-ph cs.DM math.CO math.MP}
} | budinich2006a |
arxiv-676527 | math-ph/0605049 | Some elementary rigorous remark about the replica formalism in the Statistical Physics' approach to threshold phenomena in Computational Complexity Theory | <|reference_start|>Some elementary rigorous remark about the replica formalism in the Statistical Physics' approach to threshold phenomena in Computational Complexity Theory: Some elementary rigorous remarks about the replica formalism in the Statistical Physics' approach to threshold phenomena in Computational Complexity Theory are presented.<|reference_end|> | arxiv | @article{segre2006some,
title={Some elementary rigorous remark about the replica formalism in the
Statistical Physics' approach to threshold phenomena in Computational
Complexity Theory},
author={Gavriel Segre},
journal={arXiv preprint arXiv:math-ph/0605049},
year={2006},
archivePrefix={arXiv},
eprint={math-ph/0605049},
primaryClass={math-ph cond-mat.stat-mech cs.CC math.MP}
} | segre2006some |
arxiv-676528 | math-ph/0608014 | Gauss-Vanicek Spectral Analysis of the Sepkoski Compendium: No New Life Cycles | <|reference_start|>Gauss-Vanicek Spectral Analysis of the Sepkoski Compendium: No New Life Cycles: New periods can emerge from data as a byproduct of incorrect processing or even the method applied. In one such recent instance, a new life cycle with a 62±3 Myr period was reportedly found (about trend) in genus variations from the Sepkoski compendium, the world's most complete fossil record. The approach that led to reporting this period was based on the Fourier method of spectral analysis. I show here that no such period is found when the original data set is considered rigorously and processed in the Gauss-Vanicek spectral analysis. I also demonstrate that data altering can boost spectral power up to a nearly 100 percent increase in the signal range, thus introducing artificial, "99 percent significant" periods as seen in the corresponding variance-spectra of noise. Besides geology and paleontology, virtually all science and engineering disciplines could benefit from the approach described here. The main general advantages of the Gauss-Vanicek spectral analysis lie in period detection from gapped records and in straightforward testing of the statistical null hypothesis. The main advantage of the method for physical sciences is its use as a field descriptor for accurate simultaneous detection of eigenfrequencies and relative dynamics. Besides analyzing incomplete records, researchers might also want to remove less-trustworthy data from any time series before analyzing it with the Gauss-Vanicek method. This could increase both the accuracy and reliability of spectral analyses in general.<|reference_end|> | arxiv | @article{omerbashich2006gauss-vanicek,
title={Gauss-Vanicek Spectral Analysis of the Sepkoski Compendium: No New Life
Cycles},
author={M. Omerbashich},
journal={Computing in Science and Engineering 8, 4:26-30, Jul/Aug 2006.
Errata in: CiSE 9, 4:5-6, Jul/Aug 2007 (Opposition paper to: R.A. Rohde &
R.A. Muller (2005) Cycles in fossil diversity, Nature 434:208-210)},
year={2006},
doi={10.1109/MCSE.2006.68},
archivePrefix={arXiv},
eprint={math-ph/0608014},
primaryClass={math-ph cs.NA math.MP physics.data-an q-bio.PE q-bio.QM}
} | omerbashich2006gauss-vanicek |
arxiv-676529 | math-ph/0610037 | Cellular Computing and Least Squares for partial differential problems parallel solving | <|reference_start|>Cellular Computing and Least Squares for partial differential problems parallel solving: This paper shows how partial differential problems can be solved thanks to cellular computing and an adaptation of the Least Squares Finite Elements Method. As cellular computing can be implemented on distributed parallel architectures, this method allows the distribution of a resource-demanding differential problem over a computer network.<|reference_end|> | arxiv | @article{fressengeas2006cellular,
title={Cellular Computing and Least Squares for partial differential problems
parallel solving},
  author={Nicolas Fressengeas (LMOPS), Hervé Frezza-Buet},
journal={Journal of Cellular Automata 9, 1 (2014) 1-21},
year={2006},
archivePrefix={arXiv},
eprint={math-ph/0610037},
primaryClass={math-ph cs.DC math.AP math.MP}
} | fressengeas2006cellular |
arxiv-676530 | math-ph/0701043 | Strong Spatial Mixing and Rapid Mixing with Five Colours for the Kagome Lattice | <|reference_start|>Strong Spatial Mixing and Rapid Mixing with Five Colours for the Kagome Lattice: We consider proper 5-colourings of the kagome lattice. Proper q-colourings correspond to configurations in the zero-temperature q-state anti-ferromagnetic Potts model. Salas and Sokal have given a computer-assisted proof of strong spatial mixing on the kagome lattice for q>=6 under any temperature, including zero temperature. It is believed that there is strong spatial mixing for q>=4. Here we give a computer-assisted proof of strong spatial mixing for q=5 and zero temperature. It is commonly known that strong spatial mixing implies that there is a unique infinite-volume Gibbs measure and that the Glauber dynamics is rapidly mixing. We give a proof of rapid mixing of the Glauber dynamics on any finite subset of the vertices of the kagome lattice, provided that the boundary is free (not coloured). The Glauber dynamics is not necessarily irreducible if the boundary is chosen arbitrarily for q=5 colours. The Glauber dynamics can be used to uniformly sample proper 5-colourings. Thus, a consequence of rapidly mixing Glauber dynamics is that there is a fully polynomial randomised approximation scheme for counting the number of proper 5-colourings.<|reference_end|> | arxiv | @article{jalsenius2007strong,
title={Strong Spatial Mixing and Rapid Mixing with Five Colours for the Kagome
Lattice},
author={Markus Jalsenius},
journal={arXiv preprint arXiv:math-ph/0701043},
year={2007},
archivePrefix={arXiv},
eprint={math-ph/0701043},
primaryClass={math-ph cs.DM cs.DS math.MP}
} | jalsenius2007strong |
arxiv-676531 | math-ph/9903036 | Numerically Invariant Signature Curves | <|reference_start|>Numerically Invariant Signature Curves: Corrected versions of the numerically invariant expressions for the affine and Euclidean signature of a planar curve proposed by E. Calabi et al. are presented. The new formulas are valid for fine but otherwise arbitrary partitions of the curve. We also give numerically invariant expressions for the four differential invariants parametrizing the three-dimensional version of the Euclidean signature curve, namely the curvature, the torsion and their derivatives with respect to arc length.<|reference_end|> | arxiv | @article{boutin1999numerically,
title={Numerically Invariant Signature Curves},
author={Mireille Boutin},
journal={arXiv preprint arXiv:math-ph/9903036},
year={1999},
archivePrefix={arXiv},
eprint={math-ph/9903036},
primaryClass={math-ph cs.CV math.MP}
} | boutin1999numerically |
arxiv-676532 | math/0002216 | About the globular homology of higher dimensional automata | <|reference_start|>About the globular homology of higher dimensional automata: We introduce a new simplicial nerve of higher dimensional automata whose homology groups yield a new definition of the globular homology. With this new definition, the drawbacks noticed with the construction of math.CT/9902151 disappear. Moreover the important morphisms which associate to every globe its corresponding branching area and merging area of execution paths become morphisms of simplicial sets.<|reference_end|> | arxiv | @article{gaucher2000about,
title={About the globular homology of higher dimensional automata},
author={Philippe Gaucher},
journal={Cahiers de Topologie et Geometrie Differentielle Categoriques,
p.107-156, vol XLIII-2 (2002)},
year={2000},
archivePrefix={arXiv},
eprint={math/0002216},
primaryClass={math.CT cs.OH math.AT}
} | gaucher2000about |
arxiv-676533 | math/0003117 | Reliable Cellular Automata with Self-Organization | <|reference_start|>Reliable Cellular Automata with Self-Organization: In a probabilistic cellular automaton in which all local transitions have positive probability, the problem of keeping a bit of information indefinitely is nontrivial, even in an infinite automaton. Still, there is a solution in 2 dimensions, and this solution can be used to construct a simple 3-dimensional discrete-time universal fault-tolerant cellular automaton. This technique does not help much to solve the following problems: remembering a bit of information in 1 dimension; computing in dimensions lower than 3; computing in any dimension with non-synchronized transitions. Our more complex technique organizes the cells in blocks that perform a reliable simulation of a second (generalized) cellular automaton. The cells of the latter automaton are also organized in blocks, simulating even more reliably a third automaton, etc. Since all this (a possibly infinite hierarchy) is organized in "software", it must be under repair all the time from damage caused by errors. A large part of the problem is essentially self-stabilization: recovering from a mess of arbitrary size and content. The present paper constructs an asynchronous one-dimensional fault-tolerant cellular automaton, with the further feature of "self-organization". The latter means that the initial configuration does not have to encode an infinite hierarchy -- this will be built up over time. This is a corrected and strengthened version of the journal paper of 2001.<|reference_end|> | arxiv | @article{gacs2000reliable,
title={Reliable Cellular Automata with Self-Organization},
author={Peter Gacs},
journal={J. of Stat. Phys. vol.103 (2001), no. 1/2, 45-267},
year={2000},
doi={10.1023/A:1004823720305},
archivePrefix={arXiv},
eprint={math/0003117},
primaryClass={math.PR cs.DC}
} | gacs2000reliable |
arxiv-676534 | math/0005058 | An information-spectrum approach to joint source-channel coding | <|reference_start|>An information-spectrum approach to joint source-channel coding: Given a general source $\mathbf{V}=\{V^n\}_{n=1}^{\infty}$ with countably infinite source alphabet and a general channel $\mathbf{W}=\{W^n\}_{n=1}^{\infty}$ with arbitrary abstract channel input and output alphabets, we study the joint source-channel coding problem from the information-spectrum point of view. First, we generalize Feinstein's lemma (direct part) and Verdú-Han's lemma (converse part) so as to be applicable to the general joint source-channel coding problem. Based on these lemmas, we establish a sufficient condition as well as a necessary condition for the source $\mathbf{V}$ to be reliably transmissible over the channel $\mathbf{W}$ with asymptotically vanishing probability of error. It is shown that our sufficient condition coincides with the sufficient condition derived by Vembu, Verdú and Steinberg, whereas our necessary condition is much stronger than the necessary condition derived by them. Actually, our necessary condition coincides with our sufficient condition if we disregard some asymptotically vanishing terms appearing in those conditions. Also, it is shown that the Separation Theorem in the generalized sense always holds. In addition, we demonstrate a sufficient condition as well as a necessary condition for the $\varepsilon$-transmissibility ($0 \le \varepsilon < 1$). Finally, the separation theorem of the traditional standard form is shown to hold for the class of sources and channels that satisfy the (semi-) strong converse property.<|reference_end|> | arxiv | @article{han2000an,
title={An information-spectrum approach to joint source-channel coding},
author={Te Sun Han},
journal={arXiv preprint arXiv:math/0005058},
year={2000},
archivePrefix={arXiv},
eprint={math/0005058},
primaryClass={math.PR cs.IT math.IT}
} | han2000an |
arxiv-676535 | math/0005235 | Smoothness and decay properties of the limiting Quicksort density function | <|reference_start|>Smoothness and decay properties of the limiting Quicksort density function: Using Fourier analysis, we prove that the limiting distribution of the standardized random number of comparisons used by Quicksort to sort an array of n numbers has an everywhere positive and infinitely differentiable density f, and that each derivative f^{(k)} enjoys superpolynomial decay at plus and minus infinity. In particular, each f^{(k)} is bounded. Our method is sufficiently computational to prove, for example, that f is bounded by 16.<|reference_end|> | arxiv | @article{fill2000smoothness,
title={Smoothness and decay properties of the limiting Quicksort density
function},
author={James Allen Fill (Johns Hopkins Univ.), Svante Janson (Uppsala Univ.)},
journal={arXiv preprint arXiv:math/0005235},
year={2000},
number={601, Department of Mathematical Sciences, The Johns Hopkins
University},
archivePrefix={arXiv},
eprint={math/0005235},
primaryClass={math.PR cs.DS}
} | fill2000smoothness |
arxiv-676536 | math/0005236 | A characterization of the set of fixed points of the Quicksort transformation | <|reference_start|>A characterization of the set of fixed points of the Quicksort transformation: The limiting distribution \mu of the normalized number of key comparisons required by the Quicksort sorting algorithm is known to be the unique fixed point of a certain distributional transformation T -- unique, that is, subject to the constraints of zero mean and finite variance. We show that a distribution is a fixed point of T if and only if it is the convolution of \mu with a Cauchy distribution of arbitrary center and scale. In particular, therefore, \mu is the unique fixed point of T having zero mean.<|reference_end|> | arxiv | @article{fill2000a,
title={A characterization of the set of fixed points of the Quicksort
transformation},
author={James Allen Fill (Johns Hopkins Univ.), Svante Janson (Uppsala Univ.)},
journal={arXiv preprint arXiv:math/0005236},
year={2000},
number={606, Department of Mathematical Sciences, The Johns Hopkins
University},
archivePrefix={arXiv},
eprint={math/0005236},
primaryClass={math.PR cs.DS}
} | fill2000a |
arxiv-676537 | math/0005237 | Perfect simulation from the Quicksort limit distribution | <|reference_start|>Perfect simulation from the Quicksort limit distribution: The weak limit of the normalized number of comparisons needed by the Quicksort algorithm to sort n randomly permuted items is known to be determined implicitly by a distributional fixed-point equation. We give an algorithm for perfect random variate generation from this distribution.<|reference_end|> | arxiv | @article{devroye2000perfect,
title={Perfect simulation from the Quicksort limit distribution},
author={Luc Devroye (McGill Univ.), James Allen Fill (Johns Hopkins Univ.),
Ralph Neininger (Univ. Freiburg)},
journal={arXiv preprint arXiv:math/0005237},
year={2000},
number={603, Department of Mathematical Sciences, The Johns Hopkins
University},
archivePrefix={arXiv},
eprint={math/0005237},
primaryClass={math.PR cs.DS}
} | devroye2000perfect |
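The record above concerns perfect simulation from the Quicksort limit law. A much cruder approach, shown here only as a hedged illustration and not as the paper's algorithm, is to unroll the standard fixed-point equation X = U·X + (1−U)·X′ + g(U), with g(u) = 2u ln u + 2(1−u) ln(1−u) + 1, to a finite recursion depth (the truncation replaces deep copies of X by their mean, 0):

```python
import math
import random

def toll(u):
    # Toll function of the Quicksort fixed-point equation, normalized so
    # that the limiting distribution has zero mean.
    return 2.0 * u * math.log(u) + 2.0 * (1.0 - u) * math.log(1.0 - u) + 1.0

def approx_sample(depth, rng):
    """Approximate draw from the limiting Quicksort distribution obtained by
    unrolling X = U*X + (1-U)*X' + toll(U) to a fixed recursion depth."""
    if depth == 0:
        return 0.0  # truncation: replace the remaining copies of X by E[X] = 0
    u = min(max(rng.random(), 1e-12), 1.0 - 1e-12)  # keep log() finite
    return (u * approx_sample(depth - 1, rng)
            + (1.0 - u) * approx_sample(depth - 1, rng)
            + toll(u))

rng = random.Random(1)
samples = [approx_sample(10, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```

Unlike the perfect-simulation scheme of the paper, this sketch has a (small, controllable) truncation bias; the empirical mean should be near 0 and the variance near the known value 7 − 2π²/3 ≈ 0.42.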
arxiv-676538 | math/0005281 | Connections between Linear Systems and Convolutional Codes | <|reference_start|>Connections between Linear Systems and Convolutional Codes: The article reviews different definitions for a convolutional code which can be found in the literature. The algebraic differences between the definitions are worked out in detail. It is shown that bi-infinite support systems are dual to finite-support systems under Pontryagin duality. In this duality the dual of a controllable system is observable and vice versa. Uncontrollability can occur only if there are bi-infinite support trajectories in the behavior, so finite and half-infinite-support systems must be controllable. Unobservability can occur only if there are finite support trajectories in the behavior, so bi-infinite and half-infinite-support systems must be observable. It is shown that the different definitions for convolutional codes are equivalent if one restricts attention to controllable and observable codes.<|reference_end|> | arxiv | @article{rosenthal2000connections,
title={Connections between Linear Systems and Convolutional Codes},
author={Joachim Rosenthal},
journal={arXiv preprint arXiv:math/0005281},
year={2000},
archivePrefix={arXiv},
eprint={math/0005281},
primaryClass={math.OC cs.IT math.IT}
} | rosenthal2000connections |
arxiv-676539 | math/0006067 | One-Dimensional Peg Solitaire | <|reference_start|>One-Dimensional Peg Solitaire: We solve the problem of one-dimensional peg solitaire. In particular, we show that the set of configurations that can be reduced to a single peg forms a regular language, and that a linear-time algorithm exists for reducing any configuration to the minimum number of pegs.<|reference_end|> | arxiv | @article{moore2000one-dimensional,
title={One-Dimensional Peg Solitaire},
author={Cristopher Moore and David Eppstein},
journal={MSRI Workshop on Combinatorial Games 2000},
year={2000},
archivePrefix={arXiv},
eprint={math/0006067},
primaryClass={math.CO cs.GT}
} | moore2000one-dimensional |
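The solvability question studied in the record above can be checked by brute force on short boards. The following memoized search is only an exponential-time sketch, not the paper's linear-time algorithm or its regular-language characterization; the board is taken to be the string itself:

```python
from functools import lru_cache

def solvable(config):
    """True if a 1D peg-solitaire position ('1' = peg, '0' = hole) can be
    reduced to a single peg by orthodox jumps over an adjacent peg."""
    @lru_cache(maxsize=None)
    def reach(c):
        if c.count('1') == 1:
            return True
        for i in range(len(c) - 2):
            w = c[i:i + 3]
            if w == '110' and reach(c[:i] + '001' + c[i + 3:]):
                return True   # peg at i hops right over i+1 into i+2
            if w == '011' and reach(c[:i] + '100' + c[i + 3:]):
                return True   # peg at i+2 hops left over i+1 into i
        return False
    return reach(config)
```

For example, "0110" is reducible (either peg jumps the other), while "111" has no legal move at all.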
arxiv-676540 | math/0006233 | Algorithmic Statistics | <|reference_start|>Algorithmic Statistics: While Kolmogorov complexity is the accepted absolute measure of information content of an individual finite object, a similarly absolute notion is needed for the relation between an individual data sample and an individual model summarizing the information in the data, for example, a finite set (or probability distribution) where the data sample typically came from. The statistical theory based on such relations between individual objects can be called algorithmic statistics, in contrast to classical statistical theory that deals with relations between probabilistic ensembles. We develop the algorithmic theory of statistic, sufficient statistic, and minimal sufficient statistic. This theory is based on two-part codes consisting of the code for the statistic (the model summarizing the regularity, the meaningful information, in the data) and the model-to-data code. In contrast to the situation in probabilistic statistical theory, the algorithmic relation of (minimal) sufficiency is an absolute relation between the individual model and the individual data sample. We distinguish implicit and explicit descriptions of the models. We give characterizations of algorithmic (Kolmogorov) minimal sufficient statistic for all data samples for both description modes--in the explicit mode under some constraints. We also strengthen and elaborate earlier results on the "Kolmogorov structure function" and "absolutely non-stochastic objects"--those rare objects for which the simplest models that summarize their relevant information (minimal sufficient statistics) are at least as complex as the objects themselves. We demonstrate a close relation between the probabilistic notions and the algorithmic ones.<|reference_end|> | arxiv | @article{gacs2000algorithmic,
title={Algorithmic Statistics},
author={Peter Gacs (Boston University), John Tromp (CWI), Paul Vitanyi (CWI
and University of Amsterdam)},
journal={IEEE Transactions on Information Theory, Vol. 47, No. 6, September
2001, pp 2443-2463},
year={2000},
archivePrefix={arXiv},
eprint={math/0006233},
primaryClass={math.ST cs.IT cs.LG math.IT math.PR physics.data-an stat.TH}
} | gacs2000algorithmic |
arxiv-676541 | math/0008020 | The Lattice of integer partitions and its infinite extension | <|reference_start|>The Lattice of integer partitions and its infinite extension: In this paper, we use a simple discrete dynamical model to study integer partitions and their lattice. The set of reachable configurations of the model, with the order induced by the transition rule defined on it, is the lattice of all partitions of an integer, equipped with a dominance ordering. We first explain how this lattice can be constructed by an algorithm in linear time with respect to its size by showing that it has a self-similar structure. Then, we define a natural extension of the model to infinity, which we compare with the Young lattice. Using a self-similar tree, we obtain an encoding of the obtained lattice which makes it possible to enumerate easily and efficiently all the partitions of a given integer. This approach also gives a recursive formula for the number of partitions of an integer, and some information on special sets of partitions, such as length-bounded partitions.<|reference_end|> | arxiv | @article{latapy2000the,
title={The Lattice of integer partitions and its infinite extension},
author={Matthieu Latapy and Thi Ha Duong Phan},
journal={Discrete Mathematics Vol. 309, No. 6, 2009},
year={2000},
doi={10.1016/j.disc.2008.02.002},
archivePrefix={arXiv},
eprint={math/0008020},
primaryClass={math.CO cs.NA math.DS math.NA math.NT}
} | latapy2000the |
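The dominance ordering named in the record above has a simple concrete test: one partition dominates another (of the same integer) when every prefix sum of the first is at least the matching prefix sum of the second. A minimal sketch of just this comparison (the paper's dynamical transition rule and lattice construction are not reproduced):

```python
from itertools import accumulate

def dominates(p, q):
    """Dominance (majorization) order on partitions of the same integer.
    Partitions are non-increasing sequences; the shorter is padded with zeros."""
    if sum(p) != sum(q):
        raise ValueError("partitions must partition the same integer")
    n = max(len(p), len(q))
    pad_p = list(p) + [0] * (n - len(p))
    pad_q = list(q) + [0] * (n - len(q))
    # p >= q iff every prefix sum of p is at least the matching prefix sum of q.
    return all(a >= b for a, b in zip(accumulate(pad_p), accumulate(pad_q)))
```

For instance, (4) dominates (2, 2), and (3, 1) dominates (2, 2), but (2, 2) does not dominate (4).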
arxiv-676542 | math/0008172 | One-Dimensional Peg Solitaire, and Duotaire | <|reference_start|>One-Dimensional Peg Solitaire, and Duotaire: We solve the problem of one-dimensional Peg Solitaire. In particular, we show that the set of configurations that can be reduced to a single peg forms a regular language, and that a linear-time algorithm exists for reducing any configuration to the minimum number of pegs. We then look at the impartial two-player game, proposed by Ravikumar, where two players take turns making peg moves, and whichever player is left without a move loses. We calculate some simple nim-values and discuss when the game separates into a disjunctive sum of smaller games. In the version where a series of hops can be made in a single move, we show that neither the P-positions nor the N-positions (i.e. wins for the previous or next player) are described by a regular or context-free language.<|reference_end|> | arxiv | @article{moore2000one-dimensional,
title={One-Dimensional Peg Solitaire, and Duotaire},
author={Cristopher Moore and David Eppstein},
journal={More Games of No Chance, MSRI Publications 42, 2002, pp. 341-350},
year={2000},
archivePrefix={arXiv},
eprint={math/0008172},
primaryClass={math.CO cs.GT}
} | moore2000one-dimensional |
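For the single-hop impartial game in the record above, nim-values can be computed directly from the Sprague-Grundy recursion. This is a brute-force sketch for small positions, not the paper's analysis; normal play is assumed (a player with no move loses):

```python
from functools import lru_cache

JUMPS = {'110': '001', '011': '100'}  # the two single-hop peg moves

@lru_cache(maxsize=None)
def grundy(c):
    """Sprague-Grundy value of a 1D peg-duotaire position given as a string
    of '1' pegs and '0' holes."""
    options = set()
    for i in range(len(c) - 2):
        w = c[i:i + 3]
        if w in JUMPS:
            options.add(grundy(c[:i] + JUMPS[w] + c[i + 3:]))
    g = 0
    while g in options:  # mex: minimum excludant of the options' values
        g += 1
    return g
```

Positions with Grundy value 0 are previous-player wins; values of segments that can no longer interact combine by XOR, which is the disjunctive-sum behaviour discussed in the paper.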
arxiv-676543 | math/0009018 | Critical Behavior in Lossy Source Coding | <|reference_start|>Critical Behavior in Lossy Source Coding: The following critical phenomenon was recently discovered. When a memoryless source is compressed using a variable-length fixed-distortion code, the fastest convergence rate of the (pointwise) compression ratio to the optimal $R(D)$ bits/symbol is either $O(1/\sqrt{n})$ or $O((\log n)/n)$. We show it is always $O(1/\sqrt{n})$, except for discrete, uniformly distributed sources.<|reference_end|> | arxiv | @article{dembo2000critical,
title={Critical Behavior in Lossy Source Coding},
author={Amir Dembo and Ioannis Kontoyiannis},
journal={arXiv preprint arXiv:math/0009018},
year={2000},
archivePrefix={arXiv},
eprint={math/0009018},
primaryClass={math.PR cs.IT math.IT}
} | dembo2000critical |
arxiv-676544 | math/0010173 | Hot-pressing process modeling for medium density fiberboard (MDF) | <|reference_start|>Hot-pressing process modeling for medium density fiberboard (MDF): In this paper we present a numerical solution for the mathematical modeling of the hot-pressing process applied to medium density fiberboard. The model is based on the work of Humphrey [82], Humphrey and Bolton [89], and Carvalho and Costa [98], with some modifications and extensions in order to take into account mainly the convective effects on the phase change term and also a conservative numerical treatment of the resulting system of partial differential equations.<|reference_end|> | arxiv | @article{nigro2000hot-pressing,
title={Hot-pressing process modeling for medium density fiberboard (MDF)},
  author={Norberto M. Nigro and Mario A. Storti},
journal={arXiv preprint arXiv:math/0010173},
year={2000},
number={CIMEC - 1/1999, formerly math.SC/0010173},
archivePrefix={arXiv},
eprint={math/0010173},
primaryClass={math.NA cs.CE}
} | nigro2000hot-pressing |
arxiv-676545 | math/0010307 | Fields, towers of function fields meeting asymptotic bounds, and basis constructions for algebraic-geometric codes | <|reference_start|>Fields, towers of function fields meeting asymptotic bounds, and basis constructions for algebraic-geometric codes: In this work, we use the notion of "symmetry" of functions for an extension $K/L$ of finite fields to produce extensions of a function field $F/K$ in which almost all places of degree one split completely. Then we introduce the notion of "quasi-symmetry" of functions for $K/L$, and demonstrate its use in producing extensions of $F/K$ in which all places of degree one split completely. Using these techniques, we are able to restrict the ramification either to one chosen rational place, or entirely to non-rational places. We then apply these methods to the related problem of building asymptotically good towers of function fields. We construct examples of towers of function fields in which all rational places split completely throughout the tower. We construct Abelian towers with this property also. Furthermore, all of the above are done explicitly, i.e., we give generators for the extensions, and equations that they satisfy. We also construct an integral basis for a set of places in a tower of function fields meeting the Drinfeld-Vladut bound using the discriminant of the tower localized at each place. Thus we are able to obtain a basis for a collection of functions that contains the set of regular functions in this tower. Regular functions are of interest in the theory of error-correcting codes as they lead to an explicit description of the code associated to the tower by providing the code's generator matrix.<|reference_end|> | arxiv | @article{deolalikar2000fields,
title={Fields, towers of function fields meeting asymptotic bounds, and basis
constructions for algebraic-geometric codes},
author={Vinay Deolalikar},
journal={arXiv preprint arXiv:math/0010307},
year={2000},
archivePrefix={arXiv},
eprint={math/0010307},
primaryClass={math.NT cs.IT math.IT}
} | deolalikar2000fields |
arxiv-676546 | math/0012036 | Hamilton Circuits in Graphs and Directed Graphs | <|reference_start|>Hamilton Circuits in Graphs and Directed Graphs: We give polynomial-time algorithms for obtaining Hamilton circuits in random graphs, G, and random directed graphs, D. If n is finite, we assume that G or D contains a Hamilton circuit. If G is an arbitrary graph containing a Hamilton circuit, we conjecture that Algorithm G always obtains a Hamilton circuit in polynomial time.<|reference_end|> | arxiv | @article{kleiman2000hamilton,
title={Hamilton Circuits in Graphs and Directed Graphs},
author={Howard Kleiman (Prof. Emer., Queensborough Community Coll. (CUNY))},
journal={arXiv preprint arXiv:math/0012036},
year={2000},
archivePrefix={arXiv},
eprint={math/0012036},
primaryClass={math.CO cs.DS}
} | kleiman2000hamilton |
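The object of the record above can be illustrated with a plain backtracking search. This exponential-time sketch is not the paper's polynomial-time algorithm for random graphs; it merely finds a Hamilton circuit in a small undirected graph, or reports that none exists:

```python
def hamilton_circuit(adj):
    """Find a Hamilton circuit in a graph given as {vertex: set(neighbours)}.
    Returns a closed vertex list [v0, ..., v0], or None if no circuit exists."""
    verts = list(adj)
    start = verts[0]
    path = [start]
    used = {start}

    def extend():
        if len(path) == len(verts):
            # All vertices visited: close the circuit if an edge returns home.
            return start in adj[path[-1]]
        for v in sorted(adj[path[-1]]):
            if v not in used:
                used.add(v)
                path.append(v)
                if extend():
                    return True
                path.pop()       # backtrack
                used.remove(v)
        return False

    return path + [start] if extend() else None
```

On the 4-cycle it returns the circuit itself; on a star graph (which has no Hamilton circuit) it returns None.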
arxiv-676547 | math/0012163 | Learning Complexity Dimensions for a Continuous-Time Control System | <|reference_start|>Learning Complexity Dimensions for a Continuous-Time Control System: This paper takes a computational learning theory approach to a problem of linear systems identification. It is assumed that input signals have only a finite number k of frequency components, and systems to be identified have dimension no greater than n. The main result establishes that the sample complexity needed for identification scales polynomially with n and logarithmically with k.<|reference_end|> | arxiv | @article{kuusela2000learning,
title={Learning Complexity Dimensions for a Continuous-Time Control System},
author={Pirkko Kuusela, Daniel Ocone, Eduardo D. Sontag (Rutgers, The State
University of New Jersey, USA)},
journal={arXiv preprint arXiv:math/0012163},
year={2000},
archivePrefix={arXiv},
eprint={math/0012163},
primaryClass={math.OC cs.LG}
} | kuusela2000learning |
arxiv-676548 | math/0101092 | Structure of $Z^2$ modulo selfsimilar sublattices | <|reference_start|>Structure of $Z^2$ modulo selfsimilar sublattices: In this paper we show the combinatorial structure of $\mathbb{Z}^2$ modulo sublattices selfsimilar to $\mathbb{Z}^2$. The tool we use for this purpose is the notion of an association scheme. We classify when the scheme defined by the lattice is imprimitive and characterize its decomposition in terms of the decomposition of the Gaussian integer defining the lattice. This arises in the classification of different forms of tiling $\mathbb{Z}^2$ by lattices of this type.<|reference_end|> | arxiv | @article{canogar-mackenzie2001structure,
title={Structure of $Z^2$ modulo selfsimilar sublattices},
author={Roberto Canogar-Mackenzie and Edgar Martinez-Moro},
journal={arXiv preprint arXiv:math/0101092},
year={2001},
archivePrefix={arXiv},
eprint={math/0101092},
primaryClass={math.CO cs.IT math.IT}
} | canogar-mackenzie2001structure |
arxiv-676549 | math/0103007 | Source Coding, Large Deviations, and Approximate Pattern Matching | <|reference_start|>Source Coding, Large Deviations, and Approximate Pattern Matching: We present a development of parts of rate-distortion theory and pattern-matching algorithms for lossy data compression, centered around a lossy version of the Asymptotic Equipartition Property (AEP). This treatment closely parallels the corresponding development in lossless compression, a point of view that was advanced in an important paper of Wyner and Ziv in 1989. In the lossless case we review how the AEP underlies the analysis of the Lempel-Ziv algorithm by viewing it as a random code and reducing it to the idealized Shannon code. This also provides information about the redundancy of the Lempel-Ziv algorithm and about the asymptotic behavior of several relevant quantities. In the lossy case we give various versions of the statement of the generalized AEP and we outline the general methodology of its proof via large deviations. Its relationship with Barron's generalized AEP is also discussed. The lossy AEP is applied to: (i) prove strengthened versions of Shannon's source coding theorem and universal coding theorems; (ii) characterize the performance of mismatched codebooks; (iii) analyze the performance of pattern-matching algorithms for lossy compression; (iv) determine the first order asymptotics of waiting times (with distortion) between stationary processes; (v) characterize the best achievable rate of weighted codebooks as an optimal sphere-covering exponent. We then present a refinement to the lossy AEP and use it to: (i) prove second order coding theorems; (ii) characterize which sources are easier to compress; (iii) determine the second order asymptotics of waiting times; (iv) determine the precise asymptotic behavior of longest match-lengths. Extensions to random fields are also given.<|reference_end|> | arxiv | @article{dembo2001source,
title={Source Coding, Large Deviations, and Approximate Pattern Matching},
author={A. Dembo, I. Kontoyiannis},
journal={arXiv preprint arXiv:math/0103007},
year={2001},
archivePrefix={arXiv},
eprint={math/0103007},
primaryClass={math.PR cs.IT math.IT}
} | dembo2001source |
arxiv-676550 | math/0103011 | The branching nerve of HDA and the Kan condition | <|reference_start|>The branching nerve of HDA and the Kan condition: One can associate to any strict globular $\omega$-category three augmented simplicial nerves called the globular nerve, the branching and the merging semi-cubical nerves. If this strict globular $\omega$-category is freely generated by a precubical set, then the corresponding homology theories contain different information about the geometry of the higher dimensional automaton modeled by the precubical set. Adding inverses in this $\omega$-category to any morphism of dimension greater than 2 and with respect to any composition laws of dimension greater than 1 does not change these homology theories. In such a framework, the globular nerve always satisfies the Kan condition. On the other hand, both branching and merging nerves never satisfy it, except in some very particular and uninteresting situations. In this paper, we introduce two new nerves (the branching and merging semi-globular nerves) satisfying the Kan condition and having conjecturally the same simplicial homology as the branching and merging semi-cubical nerves respectively in such a framework. The latter conjecture is related to the thin elements conjecture already introduced in our previous papers.<|reference_end|> | arxiv | @article{gaucher2001the,
title={The branching nerve of HDA and the Kan condition},
author={Philippe Gaucher},
journal={Theory and Applications of Categories, Vol. 11, 2003, No. 3, pp
75-106},
year={2001},
archivePrefix={arXiv},
eprint={math/0103011},
primaryClass={math.AT cs.OH math.CT}
} | gaucher2001the |
arxiv-676551 | math/0103107 | Explicit modular towers | <|reference_start|>Explicit modular towers: We give a general recipe for explicitly constructing asymptotically optimal towers of modular curves such as {X_0(l^n): n=1,2,3,...}. We illustrate the method by giving equations for eight towers with various geometric features. We conclude by observing that such towers are all of a specific recursive form, and speculate that perhaps every tower of this form that attains the Drinfeld-Vladut bound is modular.<|reference_end|> | arxiv | @article{elkies2001explicit,
title={Explicit modular towers},
author={Noam D. Elkies},
journal={Pages 23-32 in Proceedings of the Thirty-Fifth Annual Allerton
Conference on Communication, Control and Computing (1997; T.Basar and
A.Vardy, eds.), Univ. of Illinois at Urbana-Champaign 1998},
year={2001},
archivePrefix={arXiv},
eprint={math/0103107},
primaryClass={math.NT cs.IT math.AG math.IT}
} | elkies2001explicit |
arxiv-676552 | math/0103109 | In search of an evolutionary coding style | <|reference_start|>In search of an evolutionary coding style: In the near future, all the human genes will be identified. But understanding the functions coded in the genes is a much harder problem. For example, by using block entropy, one finds that the DNA code is closer to a random code than written text, which in turn is less ordered than an ordinary computer code; see \cite{schmitt}. Instead of saying that the DNA is badly written, using our programming standards, we might say that it is written in a different style -- an evolutionary style. We will suggest a way to search for such a style in a quantified manner by using an artificial life program, and by giving a definition of general codes and a definition of style for such codes.<|reference_end|> | arxiv | @article{lundh2001in,
title={In search of an evolutionary coding style},
author={Torbj\"orn Lundh},
journal={arXiv preprint arXiv:math/0103109},
year={2001},
number={Stony Brook IMS 2000/3, formerly math.SC/0103109},
archivePrefix={arXiv},
eprint={math/0103109},
primaryClass={math.NA cs.IT math.DS math.IT q-bio}
} | lundh2001in |
arxiv-676553 | math/0104016 | Bounds for weight distribution of weakly self-dual codes | <|reference_start|>Bounds for weight distribution of weakly self-dual codes: Upper bounds are given for the weight distribution of binary weakly self-dual codes. To get these new bounds, we introduce a novel method of utilizing unitary operations on Hilbert spaces. This method is motivated by recent progress on quantum computing. This new approach leads to much simpler proofs for this genre of bounds on the weight distributions of certain classes of codes. Moreover, in some cases, our bounds are improvements on the earlier bounds. These improvements are achieved either by extending the range of the weights over which the bounds apply, or by extending the class of codes subjected to these bounds.<|reference_end|> | arxiv | @article{roychowdhury2001bounds,
title={Bounds for weight distribution of weakly self-dual codes},
author={Vwani P. Roychowdhury and Farrokh Vatan},
journal={IEEE Transactions on Information Theory, vol. 47, no. 1, Jan.
2001, pp. 393-396},
year={2001},
archivePrefix={arXiv},
eprint={math/0104016},
primaryClass={math.CO cs.IT math.IT quant-ph}
} | roychowdhury2001bounds |
arxiv-676554 | math/0104115 | Excellent nonlinear codes from modular curves | <|reference_start|>Excellent nonlinear codes from modular curves: We introduce a new construction of error-correcting codes from algebraic curves over finite fields. Modular curves of genus g -> infty over a field of size q0^2 yield nonlinear codes more efficient than the linear Goppa codes obtained from the same curves. These new codes now have the highest asymptotic transmission rates known for certain ranges of alphabet size and error rate. Both the theory and possible practical use of these new record codes require the development of new tools. On the theoretical side, establishing the transmission rate depends on an error estimate for a theorem of Schanuel applied to the function field of an asymptotically optimal curve. On the computational side, actual use of the codes will hinge on the solution of new problems in the computational algebraic geometry of curves.<|reference_end|> | arxiv | @article{elkies2001excellent,
title={Excellent nonlinear codes from modular curves},
author={Noam D. Elkies},
journal={arXiv preprint arXiv:math/0104115},
year={2001},
archivePrefix={arXiv},
eprint={math/0104115},
primaryClass={math.NT cs.IT math.AG math.IT}
} | elkies2001excellent |
arxiv-676555 | math/0104222 | Decoding method for generalized algebraic geometry codes | <|reference_start|>Decoding method for generalized algebraic geometry codes: We propose a decoding method for the generalized algebraic geometry codes proposed by Xing et al. To show its practical usefulness, we give an example of generalized algebraic geometry codes of length 567 over F_8 whose numbers of correctable errors under the proposed method are larger than those of the shortened codes of the primitive BCH codes of length 4095 over most of the range of dimensions.<|reference_end|> | arxiv | @article{matsumoto2001decoding,
title={Decoding method for generalized algebraic geometry codes},
author={Ryutaroh Matsumoto and Masakuni Oishi},
journal={arXiv preprint arXiv:math/0104222},
year={2001},
archivePrefix={arXiv},
eprint={math/0104222},
primaryClass={math.NT cs.IT math.AG math.IT}
} | matsumoto2001decoding |
arxiv-676556 | math/0105235 | Mathematics of learning | <|reference_start|>Mathematics of learning: We study the convergence properties of a pair of learning algorithms (learning with and without memory). This leads us to study the dominant eigenvalue of a class of random matrices. This turns out to be related to the roots of the derivative of random polynomials (generated by picking their roots uniformly at random in the interval [0, 1], although our results extend to other distributions). This, in turn, requires the study of the statistical behavior of the harmonic mean of random variables as above, which leads us to the delicate question of the rate of convergence to stable laws and tail estimates for stable laws. The reader can find the proofs of most of the results announced here in the paper entitled "Harmonic mean, random polynomials, and random matrices", by the same authors.<|reference_end|> | arxiv | @article{komarova2001mathematics,
title={Mathematics of learning},
author={Natalia Komarova and Igor Rivin},
journal={arXiv preprint arXiv:math/0105235},
year={2001},
archivePrefix={arXiv},
eprint={math/0105235},
primaryClass={math.PR cs.LG math.CO math.DS}
} | komarova2001mathematics |
arxiv-676557 | math/0105236 | Harmonic mean, random polynomials and stochastic matrices | <|reference_start|>Harmonic mean, random polynomials and stochastic matrices: Motivated by a problem in learning theory, we are led to study the dominant eigenvalue of a class of random matrices. This turns out to be related to the roots of the derivative of random polynomials (generated by picking their roots uniformly at random in the interval [0, 1], although our results extend to other distributions). This, in turn, requires the study of the statistical behavior of the harmonic mean of random variables as above, and that, in turn, leads us to the delicate question of the rate of convergence to stable laws and tail estimates for stable laws.<|reference_end|> | arxiv | @article{komarova2001harmonic,
title={Harmonic mean, random polynomials and stochastic matrices},
author={Natalia Komarova and Igor Rivin},
journal={arXiv preprint arXiv:math/0105236},
year={2001},
archivePrefix={arXiv},
eprint={math/0105236},
primaryClass={math.PR cs.LG math.CA math.CO math.DS}
} | komarova2001harmonic |
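The object at the center of the two abstracts above is the harmonic mean of random variables. A minimal numeric sketch (my illustration, under the assumption of Uniform(0,1) samples as in the papers) shows why its behaviour is delicate: it is dominated by the smallest samples, so it sits well below the arithmetic mean.

```python
import random

def harmonic_mean(xs):
    """Harmonic mean n / (1/x_1 + ... + 1/x_n) of positive reals."""
    return len(xs) / sum(1.0 / x for x in xs)

# The harmonic mean is dragged down by occasional tiny samples, which is
# exactly what makes its tail behaviour (and the stable-law limits studied
# in the papers) delicate.
random.seed(0)
samples = [random.uniform(0.0, 1.0) for _ in range(10_000)]
am = sum(samples) / len(samples)   # arithmetic mean, close to 1/2
hm = harmonic_mean(samples)        # strictly smaller, by the AM-HM inequality
```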
arxiv-676558 | math/0106089 | The coset weight distributions of certain BCH codes and a family of curves | <|reference_start|>The coset weight distributions of certain BCH codes and a family of curves: We study the distribution of the number of rational points in a family of curves over a finite field of characteristic 2. This distribution determines the coset weight distribution of a certain BCH code.<|reference_end|> | arxiv | @article{van der geer2001the,
title={The coset weight distributions of certain BCH codes and a family of
curves},
author={Gerard van der Geer and Marcel van der Vlugt},
journal={arXiv preprint arXiv:math/0106089},
year={2001},
archivePrefix={arXiv},
eprint={math/0106089},
primaryClass={math.AG cs.IT math.CO math.IT}
} | van der geer2001the |
arxiv-676559 | math/0106120 | Least Square Method for Sum of the Functions Satisfying the Differential Equations with Polynomial Coefficients | <|reference_start|>Least Square Method for Sum of the Functions Satisfying the Differential Equations with Polynomial Coefficients: We propose a linear algorithm for determining two function parameters from their linear combination. These functions must satisfy first order differential equations with polynomial coefficients, and our parameters are the coefficients of these polynomials. The algorithm consists of the sequential solution, by the least squares method, of two linear problems: first, determining the differential equation polynomial coefficients for the linear combination of the two given functions, and second, determining the function parameters from these polynomial coefficients. Numerical modeling carried out with this scheme gives good agreement under weak normal noise (with dispersion <5%).<|reference_end|> | arxiv | @article{berngardt2001least,
title={Least Square Method for Sum of the Functions Satisfying the Differential
Equations with Polynomial Coefficients},
author={Oleg I. Berngardt and Alexander L. Voronov},
journal={Analele Universitatii din Timisoara Vol. XXXIX, Fasc. special,
2001, Seria Matematica/Informatica, pp.21-29},
year={2001},
archivePrefix={arXiv},
eprint={math/0106120},
primaryClass={math.NA cs.NA math.OC}
} | berngardt2001least |
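The elementary building block behind the sequential scheme in the abstract above is an ordinary least-squares fit of a two-term linear model. This is only a sketch of that building block (the 2x2 normal equations, my formulation), not the authors' full algorithm.

```python
def fit_two_functions(xs, ys, f1, f2):
    """Least-squares fit of y ~ a*f1(x) + b*f2(x) by solving the 2x2
    normal equations with Cramer's rule."""
    s11 = sum(f1(x) ** 2 for x in xs)
    s12 = sum(f1(x) * f2(x) for x in xs)
    s22 = sum(f2(x) ** 2 for x in xs)
    t1 = sum(f1(x) * y for x, y in zip(xs, ys))
    t2 = sum(f2(x) * y for x, y in zip(xs, ys))
    det = s11 * s22 - s12 * s12
    return (s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det
```

On noiseless data y = 2x + 3 with f1(x) = x and f2(x) = 1, the fit recovers a = 2, b = 3 exactly (up to rounding).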
arxiv-676560 | math/0108096 | Geometrically Uniform Frames | <|reference_start|>Geometrically Uniform Frames: We introduce a new class of frames with strong symmetry properties called geometrically uniform frames (GU), that are defined over an abelian group of unitary matrices and are generated by a single generating vector. The notion of GU frames is then extended to compound GU (CGU) frames which are generated by an abelian group of unitary matrices using multiple generating vectors. The dual frame vectors and canonical tight frame vectors associated with GU frames are shown to be GU and therefore generated by a single generating vector, which can be computed very efficiently using a Fourier transform defined over the generating group of the frame. Similarly, the dual frame vectors and canonical tight frame vectors associated with CGU frames are shown to be CGU. The impact of removing single or multiple elements from a GU frame is considered. A systematic method for constructing optimal GU frames from a given set of frame vectors that are not GU is also developed. Finally, the Euclidean distance properties of GU frames are discussed and conditions are derived on the abelian group of unitary matrices to yield GU frames with strictly positive distance spectrum irrespective of the generating vector.<|reference_end|> | arxiv | @article{eldar2001geometrically,
title={Geometrically Uniform Frames},
author={Yonina C. Eldar and H. Bolcskei},
journal={IEEE Trans. Inform. Theory, vol. 49, pp. 993-1006, Apr. 2003.},
year={2001},
archivePrefix={arXiv},
eprint={math/0108096},
primaryClass={math.FA cs.IT math.GR math.IT}
} | eldar2001geometrically |
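A minimal numeric illustration of a geometrically uniform frame (my construction, not taken from the paper): the cyclic group of planar rotations acting on a single generating vector. For m >= 3 equally spaced unit vectors in R^2, the resulting GU frame is tight with frame bound m/2, i.e. the frame operator is (m/2)·I.

```python
import math

def rotation_frame(m):
    """GU frame in R^2: the orbit of the generating vector (1, 0) under the
    cyclic group of rotations by 2*pi/m."""
    return [(math.cos(2 * math.pi * k / m), math.sin(2 * math.pi * k / m))
            for k in range(m)]

def frame_operator(frame):
    """Entries of S = sum_k f_k f_k^T; the frame is tight iff S = A*I."""
    sxx = sum(x * x for x, _ in frame)
    sxy = sum(x * y for x, y in frame)
    syy = sum(y * y for _, y in frame)
    return sxx, sxy, syy
```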
arxiv-676561 | math/0109195 | Separating Geometric Thickness from Book Thickness | <|reference_start|>Separating Geometric Thickness from Book Thickness: We show that geometric thickness and book thickness are not asymptotically equivalent: for every t, there exists a graph with geometric thickness two and book thickness >= t.<|reference_end|> | arxiv | @article{eppstein2001separating,
title={Separating Geometric Thickness from Book Thickness},
author={David Eppstein},
journal={arXiv preprint arXiv:math/0109195},
year={2001},
archivePrefix={arXiv},
eprint={math/0109195},
primaryClass={math.CO cs.CG cs.DM}
} | eppstein2001separating |
arxiv-676562 | math/0110086 | Randomness | <|reference_start|>Randomness: Here we present in a single essay a combination and completion of the several aspects of the problem of randomness of individual objects which of necessity occur scattered in our textbook "An Introduction to Kolmogorov Complexity and Its Applications" (M. Li and P. Vitanyi), 2nd Ed., Springer-Verlag, 1997.<|reference_end|> | arxiv | @article{vitanyi2001randomness,
title={Randomness},
author={Paul M.B. Vitanyi},
journal={arXiv preprint arXiv:math/0110086},
year={2001},
archivePrefix={arXiv},
eprint={math/0110086},
primaryClass={math.PR cs.CR math.ST physics.data-an stat.TH}
} | vitanyi2001randomness |
arxiv-676563 | math/0110157 | Some Applications of Algebraic Curves to Computational Vision | <|reference_start|>Some Applications of Algebraic Curves to Computational Vision: We introduce a new formalism and a number of new results in the context of geometric computational vision. The classical scope of the research in geometric computer vision is essentially limited to static configurations of points and lines in $P^3$. By using some well known material from algebraic geometry, we open new branches to computational vision. We introduce algebraic curves embedded in $P^3$ as the building blocks from which the tensor of a pair of cameras (projections) can be computed. In the process we address dimensional issues and as a result establish the minimal number of algebraic curves required for the tensor variety to be discrete as a function of their degree and genus. We then establish new results on the reconstruction of algebraic curves in $P^3$ from multiple projections on projective planes embedded in $P^3$. We address three different presentations of the curve: (i) definition by a set of equations, for which we show that for a generic configuration, two projections of a curve of degree d define a curve in $P^3$ with two irreducible components, one of degree d and the other of degree $d(d - 1)$, (ii) the dual presentation in the dual space $P^{3*}$, for which we derive a lower bound for the number of projections necessary for linear reconstruction as a function of the degree and the genus, and (iii) the presentation as a hypersurface of $P^5$, defined by the set of lines in $P^3$ meeting the curve, for which we also derive lower bounds for the number of projections necessary for linear reconstruction as a function of the degree (of the curve). Moreover we show that the latter representation yields a new and efficient algorithm for dealing with mixed configurations of static and moving points in $P^3$.<|reference_end|> | arxiv | @article{fryers2001some,
title={Some Applications of Algebraic Curves to Computational Vision},
author={Michael Fryers and Jeremy Yirmeyahu Kaminski and Mina Teicher},
journal={arXiv preprint arXiv:math/0110157},
year={2001},
archivePrefix={arXiv},
eprint={math/0110157},
primaryClass={math.AG cs.IT math.IT}
} | fryers2001some |
arxiv-676564 | math/0110214 | Coding Distributive Lattices with Edge Firing Games | <|reference_start|>Coding Distributive Lattices with Edge Firing Games: In this note, we show that any distributive lattice is isomorphic to the set of reachable configurations of an Edge Firing Game. Together with the result of James Propp, saying that the set of reachable configurations of any Edge Firing Game is always a distributive lattice, this shows that the two concepts are equivalent.<|reference_end|> | arxiv | @article{latapy2001coding,
title={Coding Distributive Lattices with Edge Firing Games},
author={Matthieu Latapy (LIAFA) and Clemence Magnien (LIAFA)},
journal={arXiv preprint arXiv:math/0110214},
year={2001},
archivePrefix={arXiv},
eprint={math/0110214},
primaryClass={math.CO cs.IT math-ph math.DS math.IT math.MP}
} | latapy2001coding |
arxiv-676565 | math/0111159 | Constructing elliptic curves with a known number of points over a prime field | <|reference_start|>Constructing elliptic curves with a known number of points over a prime field: Elliptic curves with a known number of points over a given prime field with n elements are often needed for use in cryptography. In the context of primality proving, Atkin and Morain suggested the use of the theory of complex multiplication to construct such curves. One of the steps in this method is the calculation of a root modulo n of the Hilbert class polynomial H(X) for a fundamental discriminant D. The usual way is to compute H(X) over the integers and then to find the root modulo n. We present a modified version of the Chinese remainder theorem (CRT) to compute H(X) modulo n directly from the knowledge of H(X) modulo enough small primes. Our complexity analysis suggests that asymptotically our algorithm is an improvement over previously known methods.<|reference_end|> | arxiv | @article{agashe2001constructing,
title={Constructing elliptic curves with a known number of points over a prime
field},
author={Amod Agashe and Kristin Lauter and Ramarathnam Venkatesan},
journal={arXiv preprint arXiv:math/0111159},
year={2001},
archivePrefix={arXiv},
eprint={math/0111159},
primaryClass={math.NT cs.IT math.AG math.IT}
} | agashe2001constructing |
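The Chinese-remaindering step mentioned in the abstract above can be illustrated in isolation. This is the textbook CRT for pairwise coprime moduli (not the authors' modified version, which recovers H(X) mod n directly without forming the full integer).

```python
from math import prod

def crt(residues, moduli):
    """Reconstruct x mod prod(moduli) from the residues x mod m_i,
    for pairwise coprime moduli m_i."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi mod m (Python 3.8+)
        x += r * Mi * pow(Mi, -1, m)
    return x % M
```

For example, the unique x mod 15 with x ≡ 2 (mod 3) and x ≡ 3 (mod 5) is 8.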
arxiv-676566 | math/0111309 | The Floyd-Warshall Algorithm, the AP and the TSP | <|reference_start|>The Floyd-Warshall Algorithm, the AP and the TSP: We use admissible permutations and a variant of the Floyd-Warshall algorithm to obtain an optimal solution to the Assignment Problem. Using another variant of the F-W algorithm, we obtain an approximate solution to the Traveling Salesman Problem. We also give a sufficient condition for the approximate solution to be an optimal solution.<|reference_end|> | arxiv | @article{kleiman2001the,
title={The Floyd-Warshall Algorithm, the AP and the TSP},
author={Howard Kleiman},
journal={arXiv preprint arXiv:math/0111309},
year={2001},
archivePrefix={arXiv},
eprint={math/0111309},
primaryClass={math.CO cs.DS}
} | kleiman2001the |
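For reference alongside the abstract above: the classical Floyd-Warshall all-pairs shortest-path recursion, which the paper's variants build on. This is the standard triple loop, not Kleiman's admissible-permutation variant.

```python
def floyd_warshall(dist):
    """Classical all-pairs shortest paths on a dense matrix
    (dist[i][j] = edge cost, float('inf') if no edge)."""
    n = len(dist)
    d = [row[:] for row in dist]      # work on a copy
    for k in range(n):                # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```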
arxiv-676567 | math/0112052 | The Floyd-Warshall Algorithm, the AP and the TSP, Part II | <|reference_start|>The Floyd-Warshall Algorithm, the AP and the TSP, Part II: In math.CO/0111309, we used admissible permutations and a variant of the Floyd-Warshall Algorithm to obtain an optimal solution to the Assignment Problem and an approximate solution to the Traveling Salesman Problem. Here we give a large, detailed illustration of how the algorithms are applied.<|reference_end|> | arxiv | @article{kleiman2001the,
title={The Floyd-Warshall Algorithm, the AP and the TSP, Part II},
author={Howard Kleiman},
journal={arXiv preprint arXiv:math/0112052},
year={2001},
archivePrefix={arXiv},
eprint={math/0112052},
primaryClass={math.CO cs.DS}
} | kleiman2001the |
arxiv-676568 | math/0112216 | Classification of Finite Dynamical Systems | <|reference_start|>Classification of Finite Dynamical Systems: This paper is motivated by the theory of sequential dynamical systems, developed as a basis for a mathematical theory of computer simulation. It contains a classification of finite dynamical systems on binary strings, which are obtained by composing functions defined on the coordinates. The classification is in terms of the dependency relations among the coordinate functions. It suggests a natural notion of the linearization of a system. Furthermore, it contains a sharp upper bound on the number of systems in terms of the dependencies among the coordinate functions. This upper bound generalizes an upper bound for sequential dynamical systems.<|reference_end|> | arxiv | @article{garcia2001classification,
title={Classification of Finite Dynamical Systems},
author={Luis Garcia and Abdul Salam Jarrah and Reinhard Laubenbacher},
journal={arXiv preprint arXiv:math/0112216},
year={2001},
archivePrefix={arXiv},
eprint={math/0112216},
primaryClass={math.DS cs.MA math.CO}
} | garcia2001classification |
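A finite dynamical system on binary strings, as in the abstract above, is just a map {0,1}^n -> {0,1}^n composed from coordinate functions. The toy system below (my example, not from the paper) updates each coordinate to the AND of itself and its cyclic successor; its only fixed points are the all-zeros and all-ones strings.

```python
from itertools import product

def fds_step(state, fs):
    """One parallel update of a finite dynamical system on binary strings:
    coordinate i is replaced by f_i applied to the whole state."""
    return tuple(f(state) for f in fs)

def fixed_points(n, fs):
    """Enumerate the states left unchanged by the update map."""
    return [s for s in product((0, 1), repeat=n) if fds_step(s, fs) == s]

# Toy AND-network on 3 coordinates: x_i  <-  x_i AND x_{i+1 mod 3}.
AND_NET = [lambda s, i=i: s[i] & s[(i + 1) % 3] for i in range(3)]
```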
arxiv-676569 | math/0112257 | The computational complexity of the local postage stamp problem | <|reference_start|>The computational complexity of the local postage stamp problem: The well-studied local postage stamp problem (LPSP) is the following: given a positive integer k, a set of positive integers 1 = a1 < a2 < ... < ak and an integer h >= 1, what is the smallest positive integer which cannot be represented as a linear combination x1 a1 + ... + xk ak where x1 + ... + xk <= h and each xi is a non-negative integer? In this note we prove that LPSP is NP-hard under Turing reductions, but can be solved in polynomial time if k is fixed.<|reference_end|> | arxiv | @article{shallit2001the,
title={The computational complexity of the local postage stamp problem},
author={Jeffrey Shallit},
journal={arXiv preprint arXiv:math/0112257},
year={2001},
archivePrefix={arXiv},
eprint={math/0112257},
primaryClass={math.NT cs.CC math.CO}
} | shallit2001the |
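The LPSP defined in the abstract above is easy to state in code. The brute-force sketch below enumerates every multiset of at most h stamps; it is exponential in k and h (so it says nothing about the NP-hardness result, which concerns large instances), but it pins down the definition.

```python
def lpsp(stamps, h):
    """Smallest positive integer not representable as x1*a1 + ... + xk*ak
    with x1 + ... + xk <= h and each xi >= 0.  Brute force over all
    multisets of at most h stamps; fine only for tiny k and h."""
    reachable = set()

    def rec(i, remaining, total):
        if i == len(stamps):
            reachable.add(total)
            return
        for c in range(remaining + 1):     # use c copies of stamps[i]
            rec(i + 1, remaining - c, total + c * stamps[i])

    rec(0, h, 0)
    n = 1
    while n in reachable:
        n += 1
    return n
```

For stamps {1, 2} and h = 3 every value up to 6 is representable, so the answer is 7.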
arxiv-676570 | math/0201298 | Qualitative Visualization of Distance Information | <|reference_start|>Qualitative Visualization of Distance Information: Different types of two- and three-dimensional representations of a finite metric space are studied that focus on the accurate representation of the linear order among the distances rather than their actual values. Lower and upper bounds for representability probabilities are produced by experiments including random generation, a rubber-band algorithm for accuracy optimization, and automatic proof generation. It is proved that both farthest neighbour representations and cluster tree representations always exist in the plane. Moreover, a measure of order accuracy is introduced, and some lower bound on the possible accuracy is proved using some clustering method and a result on maximal cuts in graphs.<|reference_end|> | arxiv | @article{heitzig2002qualitative,
title={Qualitative Visualization of Distance Information},
author={Jobst Heitzig},
journal={arXiv preprint arXiv:math/0201298},
year={2002},
archivePrefix={arXiv},
eprint={math/0201298},
primaryClass={math.CO cs.CG}
} | heitzig2002qualitative |
arxiv-676571 | math/0202276 | A numerical method for solution of ordinary differential equations of fractional order | <|reference_start|>A numerical method for solution of ordinary differential equations of fractional order: In this paper we propose an algorithm for the numerical solution of arbitrary differential equations of fractional order. The algorithm is obtained by decomposing the differential equation into a system of differential equations of integer order connected with inverse forms of Abel integral equations. The algorithm is used for the solution of linear and non-linear equations.<|reference_end|> | arxiv | @article{jacek2002a,
title={A numerical method for solution of ordinary differential equations of
fractional order},
author={Leszczynski, Jacek and Ciesielski, Mariusz},
journal={Lecture Notes in Computer Science (LNCS), Springer-Verlag, 2328,
2001, pp. 675-681},
year={2002},
archivePrefix={arXiv},
eprint={math/0202276},
primaryClass={math.NA cs.CE physics.comp-ph}
} | jacek2002a |
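As a point of comparison for the abstract above (my sketch, not the authors' Abel-equation decomposition): the standard Grünwald-Letnikov finite difference approximates a fractional derivative directly, with weights given by signed generalized binomial coefficients.

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights c_j = (-1)^j * C(alpha, j), computed with
    the recurrence c_0 = 1, c_j = c_{j-1} * (1 - (alpha + 1) / j)."""
    c = [1.0]
    for j in range(1, n + 1):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    return c

def gl_derivative(f, alpha, x, h, n):
    """Approximate the order-alpha fractional derivative of f at x from
    n + 1 samples on a backward grid of step h."""
    c = gl_weights(alpha, n)
    return sum(cj * f(x - j * h) for j, cj in enumerate(c)) / h ** alpha
```

For alpha = 1 the weights collapse to (1, -1, 0, 0, ...), recovering the usual backward difference.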
arxiv-676572 | math/0203059 | On linear programming bounds for spherical codes and designs | <|reference_start|>On linear programming bounds for spherical codes and designs: We investigate universal bounds on spherical codes and spherical designs that could be obtained using Delsarte's linear programming methods. We give a lower estimate for the LP upper bound on codes, and an upper estimate for the LP lower bound on designs. Specifically, when the distance of the code is fixed and the dimension goes to infinity, the LP upper bound on codes is at least as large as the average of the best known upper and lower bounds. When the dimension n of the design is fixed, and the strength k goes to infinity, the LP bound on designs turns out, in conjunction with known lower bounds, to be proportional to k^{n-1}.<|reference_end|> | arxiv | @article{samorodnitsky2002on,
title={On linear programming bounds for spherical codes and designs},
author={Alex Samorodnitsky},
journal={arXiv preprint arXiv:math/0203059},
year={2002},
archivePrefix={arXiv},
eprint={math/0203059},
primaryClass={math.CO cs.IT math.IT math.OC}
} | samorodnitsky2002on |
arxiv-676573 | math/0203239 | Generic-case complexity, decision problems in group theory and random walks | <|reference_start|>Generic-case complexity, decision problems in group theory and random walks: We give a precise definition of ``generic-case complexity'' and show that for a very large class of finitely generated groups the classical decision problems of group theory - the word, conjugacy and membership problems - all have linear-time generic-case complexity. We prove such theorems by using the theory of random walks on regular graphs.<|reference_end|> | arxiv | @article{kapovich2002generic-case,
title={Generic-case complexity, decision problems in group theory and random
walks},
author={Ilya Kapovich and Alexei Myasnikov and Paul Schupp and Vladimir Shpilrain},
journal={arXiv preprint arXiv:math/0203239},
year={2002},
archivePrefix={arXiv},
eprint={math/0203239},
primaryClass={math.GR cs.CC}
} | kapovich2002generic-case |
arxiv-676574 | math/0204068 | Computational problems for vector-valued quadratic forms | <|reference_start|>Computational problems for vector-valued quadratic forms: Given two real vector spaces $U$ and $V$, and a symmetric bilinear map $B: U\times U\to V$, let $Q_B$ be its associated quadratic map. The problems we consider are as follows: (i) are there necessary and sufficient conditions, checkable in polynomial-time, for determining when $Q_B$ is surjective?; (ii) if $Q_B$ is surjective, given $v\in V$ is there a polynomial-time algorithm for finding a point $u\in Q_B^{-1}(v)$?; (iii) are there necessary and sufficient conditions, checkable in polynomial-time, for determining when $B$ is indefinite? We present an alternative formulation of the problem of determining the image of a vector-valued quadratic form in terms of the unprojectivised Veronese surface. The relation of these questions with several interesting problems in Control Theory is illustrated.<|reference_end|> | arxiv | @article{bullo2002computational,
title={Computational problems for vector-valued quadratic forms},
author={Francesco Bullo and Jorge Cortes and Andrew D. Lewis and Sonia Martinez},
journal={arXiv preprint arXiv:math/0204068},
year={2002},
archivePrefix={arXiv},
eprint={math/0204068},
primaryClass={math.AG cs.CC math.OC}
} | bullo2002computational |
arxiv-676575 | math/0204252 | Separating Thickness from Geometric Thickness | <|reference_start|>Separating Thickness from Geometric Thickness: We show that graph-theoretic thickness and geometric thickness are not asymptotically equivalent: for every t, there exists a graph with thickness three and geometric thickness >= t.<|reference_end|> | arxiv | @article{eppstein2002separating,
title={Separating Thickness from Geometric Thickness},
author={David Eppstein},
journal={In "Towards a Theory of Geometric Graphs", J. Pach, ed.,
Contemporary Math. 342, pp. 75-86, 2004},
year={2002},
archivePrefix={arXiv},
eprint={math/0204252},
primaryClass={math.CO cs.CG cs.DM}
} | eppstein2002separating |
arxiv-676576 | math/0205049 | The asymptotic complexity of partial sorting -- How to learn large posets by pairwise comparisons | <|reference_start|>The asymptotic complexity of partial sorting -- How to learn large posets by pairwise comparisons: The expected number of pairwise comparisons needed to learn a partial order on n elements is shown to be at least n*n/4-o(n*n), and an algorithm is given that needs only n*n/4+o(n*n) comparisons on average. In addition, the optimal strategy for learning a poset with four elements is presented.<|reference_end|> | arxiv | @article{heitzig2002the,
title={The asymptotic complexity of partial sorting -- How to learn large
posets by pairwise comparisons},
author={Jobst Heitzig},
journal={arXiv preprint arXiv:math/0205049},
year={2002},
archivePrefix={arXiv},
eprint={math/0205049},
primaryClass={math.CO cs.CC math.OC}
} | heitzig2002the |
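The baseline against which the n*n/4 + o(n*n) result above should be read is the naive learner, which probes every unordered pair once and therefore spends C(n,2) ~ n*n/2 comparisons. The sketch below is that baseline only (my illustration); the paper's point is that roughly half of these comparisons can be saved on average.

```python
def learn_poset(n, leq):
    """Naive partial-order learner: query every unordered pair once
    (checking both directions) and record the strict relations found.
    Uses exactly C(n, 2) pairwise comparisons."""
    comparisons = 0
    relation = set()
    for i in range(n):
        for j in range(i + 1, n):
            comparisons += 1
            if leq(i, j):
                relation.add((i, j))
            elif leq(j, i):
                relation.add((j, i))
    return relation, comparisons
```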
arxiv-676577 | math/0205218 | A New Operation on Sequences: the Boustrophedon Transform | <|reference_start|>A New Operation on Sequences: the Boustrophedon Transform: A generalization of the Seidel-Entringer-Arnold method for calculating the alternating permutation numbers (or secant-tangent numbers) leads to a new operation on integer sequences, the Boustrophedon transform.<|reference_end|> | arxiv | @article{millar2002a,
title={A New Operation on Sequences: the Boustrophedon Transform},
author={Jessica Millar and N. J. A. Sloane and Neal E. Young},
journal={J. Combinatorial Theory, Series A 76(1):44-54 (1996)},
year={2002},
doi={10.1006/jcta.1996.0087},
archivePrefix={arXiv},
eprint={math/0205218},
primaryClass={math.CO cs.IT math.IT}
} | millar2002a |
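The Seidel-Entringer-Arnold triangle mentioned in the abstract above computes the boustrophedon transform row by row: start each row with the next input term, fill it "as the ox plows" by adding the previous row read backwards, and take the last entry as the output. A short sketch (using the standard recurrence T(n,0) = a_n, T(n,k) = T(n,k-1) + T(n-1,n-k), b_n = T(n,n)):

```python
def boustrophedon(a):
    """Boustrophedon transform via the Seidel-Entringer-Arnold triangle:
    T(n,0) = a_n;  T(n,k) = T(n,k-1) + T(n-1,n-k);  output b_n = T(n,n)."""
    b, prev = [], []
    for n, an in enumerate(a):
        row = [an]
        for k in range(1, n + 1):
            row.append(row[k - 1] + prev[n - k])   # previous row, reversed
        b.append(row[n])
        prev = row
    return b
```

Transforming 1, 0, 0, 0, ... yields the secant-tangent (Euler up/down) numbers 1, 1, 1, 2, 5, 16, ...; transforming the all-ones sequence yields 1, 2, 4, 9, 24, 77, ....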
arxiv-676578 | math/0205299 | The Lattice of N-Run Orthogonal Arrays | <|reference_start|>The Lattice of N-Run Orthogonal Arrays: If the number of runs in a (mixed-level) orthogonal array of strength 2 is specified, what numbers of levels and factors are possible? The collection of possible sets of parameters for orthogonal arrays with N runs has a natural lattice structure, induced by the ``expansive replacement'' construction method. In particular the dual atoms in this lattice are the most important parameter sets, since any other parameter set for an N-run orthogonal array can be constructed from them. To get a sense for the number of dual atoms, and to begin to understand the lattice as a function of N, we investigate the height and the size of the lattice. It is shown that the height is at most [c(N-1)], where c= 1.4039... and that there is an infinite sequence of values of N for which this bound is attained. On the other hand, the number of nodes in the lattice is bounded above by a superpolynomial function of N (and superpolynomial growth does occur for certain sequences of values of N). Using a new construction based on ``mixed spreads'', all parameter sets with 64 runs are determined. Four of these 64-run orthogonal arrays appear to be new.<|reference_end|> | arxiv | @article{rains2002the,
title={The Lattice of N-Run Orthogonal Arrays},
author={E. M. Rains and N. J. A. Sloane and John Stufken},
journal={J. Statistical Planning and Inference, Vol. 102 (2002), pp.
477-500},
year={2002},
archivePrefix={arXiv},
eprint={math/0205299},
primaryClass={math.CO cs.IT math.IT}
} | rains2002the |
arxiv-676579 | math/0205301 | Some Canonical Sequences of Integers | <|reference_start|>Some Canonical Sequences of Integers: Extending earlier work of R. Donaghey and P. J. Cameron, we investigate some canonical "eigen-sequences" associated with transformations of integer sequences. Several known sequences appear in a new setting: for instance the sequences (such as 1, 3, 11, 49, 257, 1531, ...) studied by T. Tsuzuku, H. O. Foulkes and A. Kerber in connection with multiply transitive groups are eigen-sequences for the binomial transform. Many interesting new sequences also arise, such as 1, 1, 2, 26, 152, 1144, ..., which shifts one place left when transformed by the Stirling numbers of the second kind, and whose exponential generating function satisfies A'(x) = A(e^x -1) + 1.<|reference_end|> | arxiv | @article{bernstein2002some,
title={Some Canonical Sequences of Integers},
author={Mira Bernstein and N. J. A. Sloane},
journal={Linear Algebra and its Applications, Vol. 226-228 (1995), pp.
57-72; errata Vol. 320 (2000), p. 210},
year={2002},
archivePrefix={arXiv},
eprint={math/0205301},
primaryClass={math.CO cs.IT math.IT}
} | bernstein2002some |
arxiv-676580 | math/0205303 | On Asymmetric Coverings and Covering Numbers | <|reference_start|>On Asymmetric Coverings and Covering Numbers: An asymmetric covering D(n,R) is a collection of special subsets S of an n-set such that every subset T of the n-set is contained in at least one special S with |S| - |T| <= R. In this paper we compute the smallest size of any D(n,1) for n <= 8. We also investigate ``continuous'' and ``banded'' versions of the problem. The latter involves the classical covering numbers C(n,k,k-1), and we determine the following new values: C(10,5,4) = 51, C(11,7,6) = 84, C(12,8,7) = 126, C(13,9,8) = 185 and C(14,10,9) = 259. We also find the number of nonisomorphic minimal covering designs in several cases.<|reference_end|> | arxiv | @article{applegate2002on,
title={On Asymmetric Coverings and Covering Numbers},
author={David Applegate and E. M. Rains and N. J. A. Sloane},
journal={J. Combinat. Designs 11 (2003), 218-228},
year={2002},
archivePrefix={arXiv},
eprint={math/0205303},
primaryClass={math.CO cs.IT math.IT}
} | applegate2002on |
arxiv-676581 | math/0206044 | Common transversals and tangents to two lines and two quadrics in P^3 | <|reference_start|>Common transversals and tangents to two lines and two quadrics in P^3: We solve the following geometric problem, which arises in several three-dimensional applications in computational geometry: For which arrangements of two lines and two spheres in R^3 are there infinitely many lines simultaneously transversal to the two lines and tangent to the two spheres? We also treat a generalization of this problem to projective quadrics: Replacing the spheres in R^3 by quadrics in projective space P^3, and fixing the lines and one general quadric, we give the following complete geometric description of the set of (second) quadrics for which the 2 lines and 2 quadrics have infinitely many transversals and tangents: In the nine-dimensional projective space P^9 of quadrics, this is a curve of degree 24 consisting of 12 plane conics, a remarkably reducible variety.<|reference_end|> | arxiv | @article{megyesi2002common,
title={Common transversals and tangents to two lines and two quadrics in P^3},
author={G\'abor Megyesi (UMIST) and Frank Sottile (U Massachusetts, Amherst)
and Thorsten Theobald (Technische Universit\"at M\"unchen)},
journal={arXiv preprint arXiv:math/0206044},
year={2002},
archivePrefix={arXiv},
eprint={math/0206044},
primaryClass={math.AG cs.CG math.AC}
} | megyesi2002common |
arxiv-676582 | math/0206273 | Average-case complexity and decision problems in group theory | <|reference_start|>Average-case complexity and decision problems in group theory: We investigate the average-case complexity of decision problems for finitely generated groups, in particular the word and membership problems. Using our recent results on ``generic-case complexity'' we show that if a finitely generated group $G$ has the word problem solvable in subexponential time and has a subgroup of finite index which possesses a non-elementary word-hyperbolic quotient group, then the average-case complexity of the word problem for $G$ is linear time, uniformly with respect to the collection of all length-invariant measures on $G$. For example, the result applies to all braid groups $B_n$.<|reference_end|> | arxiv | @article{kapovich2002average-case,
title={Average-case complexity and decision problems in group theory},
author={Ilya Kapovich and Alexei Myasnikov and Paul Schupp and Vladimir Shpilrain},
journal={arXiv preprint arXiv:math/0206273},
year={2002},
archivePrefix={arXiv},
eprint={math/0206273},
primaryClass={math.GR cs.CC math.GT}
} | kapovich2002average-case |
arxiv-676583 | math/0207121 | The Shannon-McMillan Theorem for Ergodic Quantum Lattice Systems | <|reference_start|>The Shannon-McMillan Theorem for Ergodic Quantum Lattice Systems: We formulate and prove a quantum Shannon-McMillan theorem. The theorem demonstrates the significance of the von Neumann entropy for translation invariant ergodic quantum spin systems on n-dimensional lattices: the entropy gives the logarithm of the essential number of eigenvectors of the system on large boxes. The one-dimensional case covers quantum information sources and is basic for coding theorems.<|reference_end|> | arxiv | @article{bjelakovic2002the,
title={The Shannon-McMillan Theorem for Ergodic Quantum Lattice Systems},
author={Igor Bjelakovic and Tyll Krueger and Rainer Siegmund-Schultze and Arleta Szkola},
journal={arXiv preprint arXiv:math/0207121},
year={2002},
archivePrefix={arXiv},
eprint={math/0207121},
primaryClass={math.DS cs.DS cs.IT math-ph math.IT math.MP math.OA quant-ph}
} | bjelakovic2002the |
arxiv-676584 | math/0207135 | The Hilbert Zonotope and a Polynomial Time Algorithm for Universal Grobner Bases | <|reference_start|>The Hilbert Zonotope and a Polynomial Time Algorithm for Universal Grobner Bases: We provide a polynomial time algorithm for computing the universal Gr\"obner basis of any polynomial ideal having a finite set of common zeros in fixed number of variables. One ingredient of our algorithm is an effective construction of the state polyhedron of any member of the Hilbert scheme Hilb^d_n of n-long d-variate ideals, enabled by introducing the Hilbert zonotope H^d_n and showing that it simultaneously refines all state polyhedra of ideals on Hilb^d_n.<|reference_end|> | arxiv | @article{babson2002the,
title={The Hilbert Zonotope and a Polynomial Time Algorithm for Universal
Grobner Bases},
author={Eric Babson and Shmuel Onn and Rekha Thomas},
journal={Advances in Applied Mathematics, 30:529--544, 2003},
year={2002},
archivePrefix={arXiv},
eprint={math/0207135},
primaryClass={math.CO cs.SC math.AG}
} | babson2002the |
arxiv-676585 | math/0207136 | Convex Matroid Optimization | <|reference_start|>Convex Matroid Optimization: We consider a problem of optimizing convex functionals over matroid bases. It is richly expressive and captures certain quadratic assignment and clustering problems. While generally NP-hard, we show it is polynomial time solvable when a suitable parameter is restricted.<|reference_end|> | arxiv | @article{onn2002convex,
title={Convex Matroid Optimization},
author={Shmuel Onn},
journal={SIAM Journal on Discrete Mathematics, 17:249--253, 2003},
year={2002},
archivePrefix={arXiv},
eprint={math/0207136},
primaryClass={math.CO cs.DM math.OC math.RA}
} | onn2002convex |
arxiv-676586 | math/0207146 | A Zador-Like Formula for Quantizers Based on Periodic Tilings | <|reference_start|>A Zador-Like Formula for Quantizers Based on Periodic Tilings: We consider Zador's asymptotic formula for the distortion-rate function for a variable-rate vector quantizer in the high-rate case. This formula involves the differential entropy of the source, the rate of the quantizer in bits per sample, and a coefficient G which depends on the geometry of the quantizer but is independent of the source. We give an explicit formula for G in the case when the quantizing regions form a periodic tiling of n-dimensional space, in terms of the volumes and second moments of the Voronoi cells. As an application we show, extending earlier work of Kashyap and Neuhoff, that even a variable-rate three-dimensional quantizer based on the ``A15'' structure is still inferior to a quantizer based on the body-centered cubic lattice. We also determine the smallest covering radius of such a structure.<|reference_end|> | arxiv | @article{sloane2002a,
title={A Zador-Like Formula for Quantizers Based on Periodic Tilings},
author={N. J. A. Sloane and Vinay A. Vaishampayan},
journal={IEEE Trans. Information Theory 48 (2002), 3138-3140},
year={2002},
doi={10.1109/TIT.2002.805086},
archivePrefix={arXiv},
eprint={math/0207146},
primaryClass={math.CO cs.IT math.IT}
} | sloane2002a |
arxiv-676587 | math/0207147 | Quantizing Using Lattice Intersections | <|reference_start|>Quantizing Using Lattice Intersections: The usual quantizer based on an n-dimensional lattice L maps a point x in R^n to a closest lattice point. Suppose L is the intersection of lattices L_1, ..., L_r. Then one may instead combine the information obtained by simultaneously quantizing x with respect to each of the L_i. This corresponds to decomposing R^n into a honeycomb of cells which are the intersections of the Voronoi cells for the L_i, and identifying the cell to which x belongs. This paper shows how to write several standard lattices (the face-centered and body-centered cubic lattices, the root lattices D_4, E_6*, E_8, the Coxeter-Todd, Barnes-Wall and Leech lattices, etc.) in a canonical way as intersections of a small number of simpler, decomposable, lattices. The cells of the honeycombs are given explicitly and the mean squared quantizing error calculated in the cases when the intersection lattice is the face-centered or body-centered cubic lattice or the lattice D_4.<|reference_end|> | arxiv | @article{sloane2002quantizing,
title={Quantizing Using Lattice Intersections},
author={N. J. A. Sloane and B. Beferull-Lozano},
journal={Discrete and Computational Geometry 25 (2003), 799-824},
year={2002},
archivePrefix={arXiv},
eprint={math/0207147},
primaryClass={math.CO cs.IT math.IT}
} | sloane2002quantizing |
arxiv-676588 | math/0207186 | A Simple Construction for the Barnes-Wall Lattices | <|reference_start|>A Simple Construction for the Barnes-Wall Lattices: A certain family of orthogonal groups (called "Clifford groups" by G. E. Wall) has arisen in a variety of different contexts in recent years. These groups have a simple definition as the automorphism groups of certain generalized Barnes-Wall lattices. This leads to an especially simple construction for the usual Barnes-Wall lattices. This is based on the third author's talk at the Forney-Fest, M.I.T., March 2000, which in turn is based on our paper "The Invariants of the Clifford Groups", Designs, Codes, Crypt., 24 (2001), 99--121, to which the reader is referred for further details and proofs.<|reference_end|> | arxiv | @article{nebe2002a,
title={A Simple Construction for the Barnes-Wall Lattices},
author={G. Nebe and E.M. Rains and N.J.A. Sloane},
journal={Codes, Graphs, and Systems: A Celebration of the Life and Career of
G. David Forney Jr., ed. R. E. Blahut and R. Koetter, Kluwer, 2002, pp.
333-342},
year={2002},
archivePrefix={arXiv},
eprint={math/0207186},
primaryClass={math.CO cs.IT math.IT}
} | nebe2002a |
arxiv-676589 | math/0207197 | On Single-Deletion-Correcting Codes | <|reference_start|>On Single-Deletion-Correcting Codes: This paper gives a brief survey of binary single-deletion-correcting codes. The Varshamov-Tenengolts codes appear to be optimal, but many interesting unsolved problems remain. The connections with shift-register sequences also remain somewhat mysterious.<|reference_end|> | arxiv | @article{sloane2002on,
title={On Single-Deletion-Correcting Codes},
author={N.J.A. Sloane},
journal={Codes and Designs, Ohio State University, May 2000 (Ray-Chaudhuri
Festschrift), K. T. Arasu and A. Seress (editors), Walter de Gruyter, Berlin,
2002, pp. 273-291},
year={2002},
archivePrefix={arXiv},
eprint={math/0207197},
primaryClass={math.CO cs.IT math.IT}
} | sloane2002on |
arxiv-676590 | math/0207200 | The Complexity of Three-Way Statistical Tables | <|reference_start|>The Complexity of Three-Way Statistical Tables: Multi-way tables with specified marginals arise in a variety of applications in statistics and operations research. We provide a comprehensive complexity classification of three fundamental computational problems on tables: existence, counting and entry-security. One major outcome of our work is that each of the following problems is intractable already for "slim" 3-tables, with constant and smallest possible number 3 of rows: (1) deciding existence of 3-tables with given consistent 2-marginals; (2) counting all 3-tables with given 2-marginals; (3) finding whether an integer value is attained in entry (i,j,k) by at least one of the 3-tables satisfying given (feasible) 2-marginals. This implies that a characterization of feasible marginals for such slim tables, sought by much recent research, is unlikely to exist. Another important consequence of our study is a systematic efficient way of embedding the set of 3-tables satisfying any given 1-marginals and entry upper bounds in a set of slim 3-tables satisfying suitable 2-marginals with no entry bounds. This provides a valuable tool for studying multi-index transportation problems and multi-index transportation polytopes.<|reference_end|> | arxiv | @article{deloera2002the,
title={The Complexity of Three-Way Statistical Tables},
author={Jesus De Loera and Shmuel Onn},
journal={SIAM Journal on Computing, 33:819--836, 2004},
year={2002},
archivePrefix={arXiv},
eprint={math/0207200},
primaryClass={math.CO cs.DM math.OC}
} | deloera2002the
arxiv-676591 | math/0207208 | The Z_4-Linearity of Kerdock, Preparata, Goethals and Related Codes | <|reference_start|>The Z_4-Linearity of Kerdock, Preparata, Goethals and Related Codes: Certain notorious nonlinear binary codes contain more codewords than any known linear code. These include the codes constructed by Nordstrom-Robinson, Kerdock, Preparata, Goethals, and Delsarte-Goethals. It is shown here that all these codes can be very simply constructed as binary images under the Gray map of linear codes over Z_4, the integers mod 4 (although this requires a slight modification of the Preparata and Goethals codes). The construction implies that all these binary codes are distance invariant. Duality in the Z_4 domain implies that the binary images have dual weight distributions. The Kerdock and "Preparata" codes are duals over Z_4 -- and the Nordstrom-Robinson code is self-dual -- which explains why their weight distributions are dual to each other. The Kerdock and "Preparata" codes are Z_4-analogues of first-order Reed-Muller and extended Hamming codes, respectively. All these codes are extended cyclic codes over Z_4, which greatly simplifies encoding and decoding. An algebraic hard-decision decoding algorithm is given for the "Preparata" code and a Hadamard-transform soft-decision decoding algorithm for the Kerdock code. Binary first- and second-order Reed-Muller codes are also linear over Z_4, but extended Hamming codes of length n >= 32 and the Golay code are not. Using Z_4-linearity, a new family of distance regular graphs are constructed on the cosets of the "Preparata" code.<|reference_end|> | arxiv | @article{hammons2002the,
title={The Z_4-Linearity of Kerdock, Preparata, Goethals and Related Codes},
author={Hammons, Jr., A. Roger and P. Vijay Kumar and A.R. Calderbank and
N.J.A. Sloane and Patrick Sol\'e},
journal={IEEE Trans. Inform. Theory, 40 (1994), 301-319},
year={2002},
archivePrefix={arXiv},
eprint={math/0207208},
primaryClass={math.CO cs.IT math.IT}
} | hammons2002the
arxiv-676592 | math/0207209 | Interleaver Design for Turbo Codes | <|reference_start|>Interleaver Design for Turbo Codes: The performance of a Turbo code with short block length depends critically on the interleaver design. There are two major criteria in the design of an interleaver: the distance spectrum of the code and the correlation between the information input data and the soft output of each decoder corresponding to its parity bits. This paper describes a new interleaver design for Turbo codes with short block length based on these two criteria. A deterministic interleaver suitable for Turbo codes is also described. Simulation results compare the new interleaver design to different existing interleavers.<|reference_end|> | arxiv | @article{sadjadpour2002interleaver,
title={Interleaver Design for Turbo Codes},
author={H.R. Sadjadpour and N.J.A. Sloane and M. Salehi and G. Nebe},
journal={IEEE J. Selected Areas in Communications, 19 (2001), 831-837},
year={2002},
archivePrefix={arXiv},
eprint={math/0207209},
primaryClass={math.CO cs.IT math.IT}
} | sadjadpour2002interleaver |
arxiv-676593 | math/0207256 | The Sphere-Packing Problem | <|reference_start|>The Sphere-Packing Problem: A brief report on recent work on the sphere-packing problem.<|reference_end|> | arxiv | @article{sloane2002the,
title={The Sphere-Packing Problem},
author={N.J.A. Sloane},
journal={Documenta Mathematica, Vol. III (1998), 387-396},
year={2002},
archivePrefix={arXiv},
eprint={math/0207256},
primaryClass={math.CO cs.IT math.IT}
} | sloane2002the |
arxiv-676594 | math/0207291 | On Kissing Numbers in Dimensions 32 to 128 | <|reference_start|>On Kissing Numbers in Dimensions 32 to 128: An elementary construction using binary codes gives new record kissing numbers in dimensions from 32 to 128.<|reference_end|> | arxiv | @article{edel2002on,
title={On Kissing Numbers in Dimensions 32 to 128},
author={Yves Edel and E.M. Rains and N.J.A. Sloane},
journal={Electronic J. Combinatorics, 5 (1), item R22, 1998},
year={2002},
archivePrefix={arXiv},
eprint={math/0207291},
primaryClass={math.CO cs.IT math.IT}
} | edel2002on |
arxiv-676595 | math/0208001 | Self-Dual Codes | <|reference_start|>Self-Dual Codes: Self-dual codes are important because many of the best codes known are of this type and they have a rich mathematical theory. Topics covered in this survey include codes over F_2, F_3, F_4, F_q, Z_4, Z_m, shadow codes, weight enumerators, Gleason-Pierce theorem, invariant theory, Gleason theorems, bounds, mass formulae, enumeration, extremal codes, open problems. There is a comprehensive bibliography.<|reference_end|> | arxiv | @article{rains2002self-dual,
title={Self-Dual Codes},
author={E.M. Rains and N.J.A. Sloane},
journal={In Handbook of Coding Theory (ed. V. S. Pless and W. C. Huffman),
1998, pp. 177-294},
year={2002},
archivePrefix={arXiv},
eprint={math/0208001},
primaryClass={math.CO cs.IT math.IT}
} | rains2002self-dual |
arxiv-676596 | math/0208017 | Packing Planes in Four Dimensions and Other Mysteries | <|reference_start|>Packing Planes in Four Dimensions and Other Mysteries: How should you choose a good set of (say) 48 planes in four dimensions? More generally, how do you find packings in Grassmannian spaces? In this article I give a brief introduction to the work that I have been doing on this problem in collaboration with A. R. Calderbank, J. H. Conway, R. H. Hardin, E. M. Rains and P. W. Shor. We have found many nice examples of specific packings (70 4-spaces in 8-space, for instance), several general constructions, and an embedding theorem which shows that a packing in Grassmannian space G(m,n) is a subset of a sphere in R^D, where D = (m+2)(m-1)/2, and leads to a proof that many of our packings are optimal. There are a number of interesting unsolved problems.<|reference_end|> | arxiv | @article{sloane2002packing,
title={Packing Planes in Four Dimensions and Other Mysteries},
author={N.J.A. Sloane},
journal={In Algebraic Combinatorics and Related Topics (Yamagata 1997), ed.
E. Bannai, M. Harada and M. Ozeki, Yamagata University, 1999},
year={2002},
archivePrefix={arXiv},
eprint={math/0208017},
primaryClass={math.CO cs.IT math.IT}
} | sloane2002packing |
arxiv-676597 | math/0208155 | Toric codes over finite fields | <|reference_start|>Toric codes over finite fields: In this note, a class of error-correcting codes is associated to a toric variety associated to a fan defined over a finite field $\fff_q$, analogous to the class of Goppa codes associated to a curve. For such a ``toric code'' satisfying certain additional conditions, we present an efficient decoding algorithm for the dual of a Goppa code. Many examples are given. For small $q$, many of these codes have parameters beating the Gilbert-Varshamov bound. In fact, using toric codes, we construct a $(n,k,d)=(49,11,28)$ code over $\fff_8$, which is better than any other known code listed in Brouwer's on-line tables for that $n$ and $k$.<|reference_end|> | arxiv | @article{joyner2002toric,
title={Toric codes over finite fields},
author={David Joyner},
journal={arXiv preprint arXiv:math/0208155},
year={2002},
archivePrefix={arXiv},
eprint={math/0208155},
primaryClass={math.AG cs.IT math.CO math.IT}
} | joyner2002toric |
arxiv-676598 | math/0209047 | A mechanical model for the transportation problem | <|reference_start|>A mechanical model for the transportation problem: We describe a mechanical device which can be used as an analog computer to solve the transportation problem. In practice this device is simulated by a numerical algorithm. Tests show that this algorithm is 60 times faster than a current subroutine (NAG library) for an average 1000 x 1000 problem. Its performance is even better for degenerate problems in which the weights take only a small number of integer values.<|reference_end|> | arxiv | @article{henon2002a,
title={A mechanical model for the transportation problem},
author={Michel Henon},
journal={arXiv preprint arXiv:math/0209047},
year={2002},
archivePrefix={arXiv},
eprint={math/0209047},
primaryClass={math.OC astro-ph cs.DM cs.NA math.NA}
} | henon2002a |
arxiv-676599 | math/0209267 | Length-based conjugacy search in the Braid group | <|reference_start|>Length-based conjugacy search in the Braid group: Several key agreement protocols are based on the following "Generalized Conjugacy Search Problem": Find, given elements b_1,...,b_n and xb_1x^{-1},...,xb_nx^{-1} in a nonabelian group G, the conjugator x. In the case of subgroups of the braid group B_N, Hughes and Tannenbaum suggested a length-based approach to finding x. Since the introduction of this approach, its effectiveness and successfulness were debated. We introduce several effective realizations of this approach. In particular, a new length function is defined on B_N which possesses significantly better properties than the natural length associated to the Garside normal form. We give experimental results concerning the success probability of this approach, which suggest that very large computational power is required for this method to successfully solve the Generalized Conjugacy Search Problem when its parameters are as in existing protocols.<|reference_end|> | arxiv | @article{garber2002length-based,
title={Length-based conjugacy search in the Braid group},
author={D. Garber and S. Kaplan and M. Teicher and B. Tsaban and U. Vishne},
journal={Contemporary Mathematics 418 (2006), 75--87},
year={2002},
archivePrefix={arXiv},
eprint={math/0209267},
primaryClass={math.GR cs.CR math.AG}
} | garber2002length-based |
arxiv-676600 | math/0209316 | Cycle and Circle Tests of Balance in Gain Graphs: Forbidden Minors and Their Groups | <|reference_start|>Cycle and Circle Tests of Balance in Gain Graphs: Forbidden Minors and Their Groups: We examine two criteria for balance of a gain graph, one based on binary cycles and one on circles. The graphs for which each criterion is valid depend on the set of allowed gain groups. The binary cycle test is invalid, except for forests, if any possible gain group has an element of odd order. Assuming all groups are allowed, or all abelian groups, or merely the cyclic group of order 3, we characterize, both constructively and by forbidden minors, the graphs for which the circle test is valid. It turns out that these three classes of groups have the same set of forbidden minors. The exact reason for the importance of the ternary cyclic group is not clear.<|reference_end|> | arxiv | @article{rybnikov2002cycle,
title={Cycle and Circle Tests of Balance in Gain Graphs: Forbidden Minors and
Their Groups},
author={Konstantin Rybnikov (University of Massachusetts at Lowell and MSRI)
and Thomas Zaslavsky (Binghamton University)},
journal={J. Graph Theory, 51 (2006), no. 1, 1--21.},
year={2002},
archivePrefix={arXiv},
eprint={math/0209316},
primaryClass={math.CO cs.DM cs.DS}
} | rybnikov2002cycle |