Fields: corpus_id, paper_id, title, abstract, source, bibtex, citation_key
arxiv-677101
physics/0703126
The Laplace-Jaynes approach to induction
<|reference_start|>The Laplace-Jaynes approach to induction: An approach to induction is presented, based on the idea of analysing the context of a given problem into `circumstances'. This approach, fully Bayesian in form and meaning, provides a complement or in some cases an alternative to that based on de Finetti's representation theorem and on the notion of infinite exchangeability. In particular, it gives an alternative interpretation of those formulae that apparently involve `unknown probabilities' or `propensities'. Various advantages and applications of the presented approach are discussed, especially in comparison to that based on exchangeability. Generalisations are also discussed.<|reference_end|>
arxiv
@article{mana2007the, title={The Laplace-Jaynes approach to induction}, author={P. G. L. Porta Mana and A. M{\aa}nsson and G. Bj{\"o}rk}, journal={arXiv preprint arXiv:physics/0703126}, year={2007}, archivePrefix={arXiv}, eprint={physics/0703126}, primaryClass={physics.data-an cs.AI quant-ph} }
mana2007the
arxiv-677102
physics/0703164
Cultural route to the emergence of linguistic categories
<|reference_start|>Cultural route to the emergence of linguistic categories: Categories provide a coarse grained description of the world. A fundamental question is whether categories simply mirror an underlying structure of nature, or instead come from the complex interactions of human beings among themselves and with the environment. Here we address this question by modelling a population of individuals who co-evolve their own system of symbols and meanings by playing elementary language games. The central result is the emergence of a hierarchical category structure made of two distinct levels: a basic layer, responsible for fine discrimination of the environment, and a shared linguistic layer that groups together perceptions to guarantee communicative success. Remarkably, the number of linguistic categories turns out to be finite and small, as observed in natural languages.<|reference_end|>
arxiv
@article{puglisi2007cultural, title={Cultural route to the emergence of linguistic categories}, author={Andrea Puglisi and Andrea Baronchelli and Vittorio Loreto}, journal={Proc. Natl. Acad. Sci. 105, 7936 (2008)}, year={2007}, doi={10.1073/pnas.0802485105}, archivePrefix={arXiv}, eprint={physics/0703164}, primaryClass={physics.soc-ph cond-mat.dis-nn cs.MA} }
puglisi2007cultural
arxiv-677103
physics/0703218
Analysis of the structure of complex networks at different resolution levels
<|reference_start|>Analysis of the structure of complex networks at different resolution levels: Modular structure is ubiquitous in real-world complex networks, and its detection is important because it gives insight into the structure-functionality relationship. The standard approach is based on the optimization of a quality function, modularity, which is a relative quality measure for a partition of a network into modules. Recently some authors [1,2] have pointed out that the optimization of modularity has a fundamental drawback: the existence of a resolution limit beyond which no modular structure can be detected, even though these modules might have an identity of their own. The reason is that several topological descriptions of the network coexist at different scales, which is, in general, a fingerprint of complex systems. Here we propose a method that allows for multiple-resolution screening of the modular structure. The method has been validated using synthetic networks, discovering the predefined structures at all scales. Its application to two real social networks allows us to find the exact splits reported in the literature, as well as the substructure beyond the actual split.<|reference_end|>
arxiv
@article{arenas2007analysis, title={Analysis of the structure of complex networks at different resolution levels}, author={Alex Arenas and Alberto Fernandez and Sergio Gomez}, journal={New J. Phys. 10 (2008) 053039}, year={2007}, doi={10.1088/1367-2630/10/5/053039}, archivePrefix={arXiv}, eprint={physics/0703218}, primaryClass={physics.data-an cond-mat.other cs.DM physics.soc-ph q-bio.QM} }
arenas2007analysis
arxiv-677104
physics/9911006
Genetic Algorithms in Time-Dependent Environments
<|reference_start|>Genetic Algorithms in Time-Dependent Environments: The influence of time-dependent fitnesses on the infinite population dynamics of simple genetic algorithms (without crossover) is analyzed. Based on general arguments, a schematic phase diagram is constructed that allows one to characterize the asymptotic states as a function of the mutation rate and the time scale of changes. Furthermore, the notion of regular changes is introduced, for which the population can be shown to converge towards a generalized quasispecies. Based on this, error thresholds and an optimal mutation rate are approximately calculated for a generational genetic algorithm with a moving needle-in-the-haystack landscape. The phase diagram thus found is fully consistent with our general considerations.<|reference_end|>
arxiv
@article{ronnewinkel1999genetic, title={Genetic Algorithms in Time-Dependent Environments}, author={Christopher Ronnewinkel and Claus O. Wilke and Thomas Martinetz}, journal={arXiv preprint arXiv:physics/9911006}, year={1999}, archivePrefix={arXiv}, eprint={physics/9911006}, primaryClass={physics.bio-ph adap-org cs.NE nlin.AO q-bio} }
ronnewinkel1999genetic
arxiv-677105
q-bio/0310011
Complex Independent Component Analysis of Frequency-Domain Electroencephalographic Data
<|reference_start|>Complex Independent Component Analysis of Frequency-Domain Electroencephalographic Data: Independent component analysis (ICA) has proven useful for modeling brain and electroencephalographic (EEG) data. Here, we present a new, generalized method to better capture the dynamics of brain signals than previous ICA algorithms. We regard EEG sources as eliciting spatio-temporal activity patterns, corresponding to, e.g., trajectories of activation propagating across cortex. This leads to a model of convolutive signal superposition, in contrast with the commonly used instantaneous mixing model. In the frequency-domain, convolutive mixing is equivalent to multiplicative mixing of complex signal sources within distinct spectral bands. We decompose the recorded spectral-domain signals into independent components by a complex infomax ICA algorithm. First results from a visual attention EEG experiment exhibit (1) sources of spatio-temporal dynamics in the data, (2) links to subject behavior, (3) sources with a limited spectral extent, and (4) a higher degree of independence compared to sources derived by standard ICA.<|reference_end|>
arxiv
@article{anemuller2003complex, title={Complex Independent Component Analysis of Frequency-Domain Electroencephalographic Data}, author={Jorn Anemuller and Terrence J. Sejnowski and Scott Makeig}, journal={Neural Networks, 16:1311-1323, 2003}, year={2003}, doi={10.1016/j.neunet.2003.08.003}, archivePrefix={arXiv}, eprint={q-bio/0310011}, primaryClass={q-bio.QM cs.CE physics.data-an q-bio.NC} }
anemuller2003complex
arxiv-677106
q-bio/0310025
Pattern Excitation-Based Processing: The Music of The Brain
<|reference_start|>Pattern Excitation-Based Processing: The Music of The Brain: An approach to information processing based on the excitation of patterns of activity by non-linear active resonators in response to their input patterns is proposed. Arguments are presented to show that any computation performed by a conventional Turing machine-based computer, called a T-machine in this paper, could also be performed by the pattern excitation-based machine, which will be called a P-machine. A realization of this processing scheme by neural networks is discussed. In this realization, the role of the resonators is played by neural pattern excitation networks, which are the neural circuits capable of exciting different spatio-temporal patterns of activity in response to different inputs. Learning in the neural pattern excitation networks is also considered. It is shown that there is a duality between pattern excitation and pattern recognition neural networks, which makes it possible to create new pattern excitation modes corresponding to recognizable input patterns, based on Hebbian learning rules. Hierarchically organized, such networks can produce complex behavior. Animal behavior, human language and thought are treated as examples produced by such networks.<|reference_end|>
arxiv
@article{koyrakh2003pattern, title={Pattern Excitation-Based Processing: The Music of The Brain}, author={Lev Koyrakh}, journal={arXiv preprint arXiv:q-bio/0310025}, year={2003}, archivePrefix={arXiv}, eprint={q-bio/0310025}, primaryClass={q-bio.NC cs.NE physics.bio-ph} }
koyrakh2003pattern
arxiv-677107
q-bio/0311037
Hierarchical Clustering Using Mutual Information
<|reference_start|>Hierarchical Clustering Using Mutual Information: We present a method for hierarchical clustering of data called {\it mutual information clustering} (MIC) algorithm. It uses mutual information (MI) as a similarity measure and exploits its grouping property: The MI between three objects $X, Y,$ and $Z$ is equal to the sum of the MI between $X$ and $Y$, plus the MI between $Z$ and the combined object $(XY)$. We use this both in the Shannon (probabilistic) version of information theory and in the Kolmogorov (algorithmic) version. We apply our method to the construction of phylogenetic trees from mitochondrial DNA sequences and to the output of independent components analysis (ICA) as illustrated with the ECG of a pregnant woman.<|reference_end|>
arxiv
@article{kraskov2003hierarchical, title={Hierarchical Clustering Using Mutual Information}, author={Alexander Kraskov and Harald Stoegbauer and Ralph G. Andrzejak and Peter Grassberger}, journal={arXiv preprint arXiv:q-bio/0311037}, year={2003}, archivePrefix={arXiv}, eprint={q-bio/0311037}, primaryClass={q-bio.QM cs.CC physics.data-an} }
kraskov2003hierarchical
arxiv-677108
q-bio/0311039
Hierarchical Clustering Based on Mutual Information
<|reference_start|>Hierarchical Clustering Based on Mutual Information: Motivation: Clustering is a frequently used concept in variety of bioinformatical applications. We present a new method for hierarchical clustering of data called mutual information clustering (MIC) algorithm. It uses mutual information (MI) as a similarity measure and exploits its grouping property: The MI between three objects X, Y, and Z is equal to the sum of the MI between X and Y, plus the MI between Z and the combined object (XY). Results: We use this both in the Shannon (probabilistic) version of information theory, where the "objects" are probability distributions represented by random samples, and in the Kolmogorov (algorithmic) version, where the "objects" are symbol sequences. We apply our method to the construction of mammal phylogenetic trees from mitochondrial DNA sequences and we reconstruct the fetal ECG from the output of independent components analysis (ICA) applied to the ECG of a pregnant woman. Availability: The programs for estimation of MI and for clustering (probabilistic version) are available at http://www.fz-juelich.de/nic/cs/software<|reference_end|>
arxiv
@article{kraskov2003hierarchical, title={Hierarchical Clustering Based on Mutual Information}, author={Alexander Kraskov and Harald St{\"o}gbauer and Ralph G. Andrzejak and Peter Grassberger}, journal={arXiv preprint arXiv:q-bio/0311039}, year={2003}, archivePrefix={arXiv}, eprint={q-bio/0311039}, primaryClass={q-bio.QM cs.CC physics.bio-ph} }
kraskov2003hierarchical
arxiv-677109
q-bio/0401033
Parametric Inference for Biological Sequence Analysis
<|reference_start|>Parametric Inference for Biological Sequence Analysis: One of the major successes in computational biology has been the unification, using the graphical model formalism, of a multitude of algorithms for annotating and comparing biological sequences. Graphical models that have been applied towards these problems include hidden Markov models for annotation, tree models for phylogenetics, and pair hidden Markov models for alignment. A single algorithm, the sum-product algorithm, solves many of the inference problems associated with different statistical models. This paper introduces the \emph{polytope propagation algorithm} for computing the Newton polytope of an observation from a graphical model. This algorithm is a geometric version of the sum-product algorithm and is used to analyze the parametric behavior of maximum a posteriori inference calculations for graphical models.<|reference_end|>
arxiv
@article{pachter2004parametric, title={Parametric Inference for Biological Sequence Analysis}, author={Lior Pachter and Bernd Sturmfels}, journal={arXiv preprint arXiv:q-bio/0401033}, year={2004}, doi={10.1073/pnas.0406011101}, archivePrefix={arXiv}, eprint={q-bio/0401033}, primaryClass={q-bio.GN cs.LG math.ST stat.TH} }
pachter2004parametric
arxiv-677110
q-bio/0402029
Fluctuation-dissipation theorem and models of learning
<|reference_start|>Fluctuation-dissipation theorem and models of learning: Advances in statistical learning theory have resulted in a multitude of different designs of learning machines. But which ones are implemented by brains and other biological information processors? We analyze how various abstract Bayesian learners perform on different data and argue that it is difficult to determine which learning-theoretic computation is performed by a particular organism using just its performance in learning a stationary target (learning curve). Building on the fluctuation-dissipation relation in statistical physics, we then discuss a different experimental setup that might be able to solve the problem.<|reference_end|>
arxiv
@article{nemenman2004fluctuation-dissipation, title={Fluctuation-dissipation theorem and models of learning}, author={Ilya Nemenman}, journal={Neural Comp. 17 (9): 2006-2033 SEP 2005}, year={2004}, number={NSF-KITP-04-20}, archivePrefix={arXiv}, eprint={q-bio/0402029}, primaryClass={q-bio.NC cs.LG nlin.AO physics.data-an} }
nemenman2004fluctuation-dissipation
arxiv-677111
q-bio/0403011
Memorization in a neural network with adjustable transfer function and conditional gating
<|reference_start|>Memorization in a neural network with adjustable transfer function and conditional gating: The main problem with replacing LTP as a memory mechanism has been to find other highly abstract, easily understandable principles for induced plasticity. In this paper we attempt to lay out such a basic mechanism, namely intrinsic plasticity. Important empirical observations with theoretical significance are the time-layering of neural plasticity, mediated by additional constraints on entering later stages, various manifestations of intrinsic neural properties, and conditional gating of synaptic connections. An important consequence of the proposed mechanism is that it can explain the usually latent nature of memories.<|reference_end|>
arxiv
@article{scheler2004memorization, title={Memorization in a neural network with adjustable transfer function and conditional gating}, author={Gabriele Scheler}, journal={arXiv preprint arXiv:q-bio/0403011}, year={2004}, archivePrefix={arXiv}, eprint={q-bio/0403011}, primaryClass={q-bio.NC cs.NE} }
scheler2004memorization
arxiv-677112
q-bio/0403022
Intelligent encoding and economical communication in the visual stream
<|reference_start|>Intelligent encoding and economical communication in the visual stream: The theory of computational complexity is used to underpin a recent model of neocortical sensory processing. We argue that encoding into reconstruction networks is appealing for communicating agents using Hebbian learning and working on hard combinatorial problems, which are easy to verify. Computational definition of the concept of intelligence is provided. Simulations illustrate the idea.<|reference_end|>
arxiv
@article{lorincz2004intelligent, title={Intelligent encoding and economical communication in the visual stream}, author={Andras Lorincz}, journal={arXiv preprint arXiv:q-bio/0403022}, year={2004}, archivePrefix={arXiv}, eprint={q-bio/0403022}, primaryClass={q-bio.NC cs.AI cs.CC nlin.AO} }
lorincz2004intelligent
arxiv-677113
q-bio/0403036
The Triplet Genetic Code had a Doublet Predecessor
<|reference_start|>The Triplet Genetic Code had a Doublet Predecessor: Information theoretic analysis of genetic languages indicates that the naturally occurring 20 amino acids and the triplet genetic code arose by duplication of 10 amino acids of class-II and a doublet genetic code having codons NNY and anticodons $\overleftarrow{\rm GNN}$. Evidence for this scenario is presented based on the properties of aminoacyl-tRNA synthetases, amino acids and nucleotide bases.<|reference_end|>
arxiv
@article{patel2004the, title={The Triplet Genetic Code had a Doublet Predecessor}, author={Apoorva Patel}, journal={Journal of Theoretical Biology 233 (2005) 527-532}, year={2004}, archivePrefix={arXiv}, eprint={q-bio/0403036}, primaryClass={q-bio.GN cs.CE q-bio.BM quant-ph} }
patel2004the
arxiv-677114
q-bio/0406015
Information theory, multivariate dependence, and genetic network inference
<|reference_start|>Information theory, multivariate dependence, and genetic network inference: We define the concept of dependence among multiple variables using maximum entropy techniques and introduce a graphical notation to denote the dependencies. Direct inference of information theoretic quantities from data uncovers dependencies even in undersampled regimes when the joint probability distribution cannot be reliably estimated. The method is tested on synthetic data. We anticipate it to be useful for inference of genetic circuits and other biological signaling networks.<|reference_end|>
arxiv
@article{nemenman2004information, title={Information theory, multivariate dependence, and genetic network inference}, author={Ilya Nemenman}, journal={arXiv preprint arXiv:q-bio/0406015}, year={2004}, number={NSF-KITP-04-54}, archivePrefix={arXiv}, eprint={q-bio/0406015}, primaryClass={q-bio.QM cs.IT math.IT math.ST physics.data-an q-bio.GN stat.TH} }
nemenman2004information
arxiv-677115
q-bio/0409022
Topology of biological networks and reliability of information processing
<|reference_start|>Topology of biological networks and reliability of information processing: Biological systems rely on robust internal information processing: Survival depends on highly reproducible dynamics of regulatory processes. Biological information processing elements, however, are intrinsically noisy (genetic switches, neurons, etc.). Such noise poses severe stability problems to system behavior as it tends to desynchronize system dynamics (e.g. via fluctuating response or transmission time of the elements). Synchronicity in parallel information processing is not readily sustained in the absence of a central clock. Here we analyze the influence of topology on synchronicity in networks of autonomous noisy elements. In numerical and analytical studies we find a clear distinction between non-reliable and reliable dynamical attractors, depending on the topology of the circuit. In the reliable cases, synchronicity is sustained, while in the unreliable scenario, fluctuating responses of single elements can gradually desynchronize the system, leading to non-reproducible behavior. We find that the fraction of reliable dynamical attractors strongly correlates with the underlying circuitry. Our model suggests that the observed motif structure of biological signaling networks is shaped by the biological requirement for reproducibility of attractors.<|reference_end|>
arxiv
@article{klemm2004topology, title={Topology of biological networks and reliability of information processing}, author={Konstantin Klemm and Stefan Bornholdt}, journal={Proc. Natl. Acad. Sci. USA 102 (2005) 18414}, year={2004}, doi={10.1073/pnas.0509132102}, archivePrefix={arXiv}, eprint={q-bio/0409022}, primaryClass={q-bio.MN cond-mat.dis-nn cs.DC} }
klemm2004topology
arxiv-677116
q-bio/0411030
Statistical Mechanics Characterization of Neuronal Mosaics
<|reference_start|>Statistical Mechanics Characterization of Neuronal Mosaics: The spatial distribution of neuronal cells is an important requirement for achieving proper neuronal function in several parts of the nervous system of most animals. For instance, specific distribution of photoreceptors and related neuronal cells, particularly the ganglion cells, in mammal's retina is required in order to properly sample the projected scene. This work presents how two concepts from the areas of statistical mechanics and complex systems, namely the \emph{lacunarity} and the \emph{multiscale entropy} (i.e. the entropy calculated over progressively diffused representations of the cell mosaic), have allowed effective characterization of the spatial distribution of retinal cells.<|reference_end|>
arxiv
@article{costa2004statistical, title={Statistical Mechanics Characterization of Neuronal Mosaics}, author={Luciano da Fontoura Costa and Fernando Rocha and Silene Maria Araujo de Lima}, journal={Appl. Phys. Lett. 86, 093901 (2005)}, year={2004}, doi={10.1063/1.1874306}, archivePrefix={arXiv}, eprint={q-bio/0411030}, primaryClass={q-bio.NC cond-mat.dis-nn cs.CV physics.bio-ph q-bio.QM} }
costa2004statistical
arxiv-677117
q-bio/0501021
Spike timing precision and neural error correction: local behavior
<|reference_start|>Spike timing precision and neural error correction: local behavior: The effects of spike timing precision and dynamical behavior on error correction in spiking neurons were investigated. Stationary discharges -- phase locked, quasiperiodic, or chaotic -- were induced in a simulated neuron by presenting pacemaker presynaptic spike trains across a model of a prototypical inhibitory synapse. Reduced timing precision was modeled by jittering presynaptic spike times. Aftereffects of errors -- in this communication, missed presynaptic spikes -- were determined by comparing postsynaptic spike times between simulations identical except for the presence or absence of errors. Results show that the effects of an error vary greatly depending on the ongoing dynamical behavior. In the case of phase lockings, a high degree of presynaptic spike timing precision can provide significantly faster error recovery. For non-locked behaviors, isolated missed spikes can have little or no discernible aftereffects (or even serve to paradoxically reduce uncertainty in postsynaptic spike timing), regardless of presynaptic imprecision. This suggests two possible categories of error correction: high-precision locking with rapid recovery and low-precision non-locked with error immunity.<|reference_end|>
arxiv
@article{stiber2005spike, title={Spike timing precision and neural error correction: local behavior}, author={Michael Stiber}, journal={Neural Computation, v. 17, n. 7, 1577-1601, 2005}, year={2005}, archivePrefix={arXiv}, eprint={q-bio/0501021}, primaryClass={q-bio.NC cs.NE math.DS} }
stiber2005spike
arxiv-677118
q-bio/0502023
Learning intrinsic excitability in medium spiny neurons
<|reference_start|>Learning intrinsic excitability in medium spiny neurons: We present an unsupervised, local activation-dependent learning rule for intrinsic plasticity (IP) which affects the composition of ion channel conductances for single neurons in a use-dependent way. We use a single-compartment conductance-based model for medium spiny striatal neurons in order to show the effects of parametrization of individual ion channels on the neuronal activation function. We show that parameter changes within the physiological ranges are sufficient to create an ensemble of neurons with significantly different activation functions. We emphasize that the effects of intrinsic neuronal variability on spiking behavior require a distributed mode of synaptic input and can be eliminated by strongly correlated input. We show how variability and adaptivity in ion channel conductances can be utilized to store patterns without an additional contribution by synaptic plasticity (SP). The adaptation of the spike response may result in either "positive" or "negative" pattern learning. However, read-out of stored information depends on a distributed pattern of synaptic activity to let intrinsic variability determine spike response. We briefly discuss the implications of this conditional memory on learning and addiction.<|reference_end|>
arxiv
@article{scheler2005learning, title={Learning intrinsic excitability in medium spiny neurons}, author={Gabriele Scheler}, journal={F1000Research 2014, 2:88}, year={2005}, doi={10.12688/f1000research.2-88.v2}, archivePrefix={arXiv}, eprint={q-bio/0502023}, primaryClass={q-bio.NC cs.NE} }
scheler2005learning
arxiv-677119
q-bio/0505021
Characterizing Self-Developing Biological Neural Networks: A First Step Towards their Application To Computing Systems
<|reference_start|>Characterizing Self-Developing Biological Neural Networks: A First Step Towards their Application To Computing Systems: Carbon nanotubes are often seen as the only alternative technology to silicon transistors. While they are the most likely short-term one, other longer-term alternatives should be studied as well. While contemplating biological neurons as an alternative component may seem preposterous at first sight, significant recent progress in CMOS-neuron interface suggests this direction may not be unrealistic; moreover, biological neurons are known to self-assemble into very large networks capable of complex information processing tasks, something that has yet to be achieved with other emerging technologies. The first step to designing computing systems on top of biological neurons is to build an abstract model of self-assembled biological neural networks, much like computer architects manipulate abstract models of transistors and circuits. In this article, we propose a first model of the structure of biological neural networks. We provide empirical evidence that this model matches the biological neural networks found in living organisms, and exhibits the small-world graph structure properties commonly found in many large and self-organized systems, including biological neural networks. More importantly, we extract the simple local rules and characteristics governing the growth of such networks, enabling the development of potentially large but realistic biological neural networks, as would be needed for complex information processing/computing tasks. Based on this model, future work will be targeted to understanding the evolution and learning properties of such networks, and how they can be used to build computing systems.<|reference_end|>
arxiv
@article{berry2005characterizing, title={Characterizing Self-Developing Biological Neural Networks: A First Step Towards their Application To Computing Systems}, author={Hugues Berry (INRIA Futurs) and Olivier Temam (INRIA Futurs)}, journal={arXiv preprint arXiv:q-bio/0505021}, year={2005}, archivePrefix={arXiv}, eprint={q-bio/0505021}, primaryClass={q-bio.NC cs.AR cs.NE nlin.AO} }
berry2005characterizing
arxiv-677120
q-bio/0505050
HLA and HIV Infection Progression: Application of the Minimum Description Length Principle to Statistical Genetics
<|reference_start|>HLA and HIV Infection Progression: Application of the Minimum Description Length Principle to Statistical Genetics: The minimum description length (MDL) principle states that the best model to account for some data minimizes the sum of the lengths, in bits, of the descriptions of the model and the residual error. The description length is thus a criterion for model selection. Description-length analysis of HLA alleles from the Chicago MACS cohort enables classification of alleles associated with plasma HIV RNA, an indicator of infection progression. Progression variation is most strongly associated with HLA-B. Individuals without B58s supertype alleles average viral RNA levels 3.6-fold greater than individuals with them.<|reference_end|>
arxiv
@article{hraber2005hla, title={HLA and HIV Infection Progression: Application of the Minimum Description Length Principle to Statistical Genetics}, author={Peter T. Hraber and Bette T. Korber and Steven Wolinsky and Henry A. Erlich and Elizabeth A. Trachtenberg and Thomas B. Kepler}, journal={Lecture Notes in Computer Science Volume 4345, 2006, pp 1-12}, year={2005}, doi={10.1007/11946465_1}, number={Santa Fe Institute Working Paper 03-04-023}, archivePrefix={arXiv}, eprint={q-bio/0505050}, primaryClass={q-bio.QM cs.IT math.IT} }
hraber2005hla
arxiv-677121
q-bio/0506034
The Shapley Value of Phylogenetic Trees
<|reference_start|>The Shapley Value of Phylogenetic Trees: Every weighted tree corresponds naturally to a cooperative game that we call a "tree game"; it assigns to each subset of leaves the sum of the weights of the minimal subtree spanned by those leaves. In the context of phylogenetic trees, the leaves are species and this assignment captures the diversity present in the coalition of species considered. We consider the Shapley value of tree games and suggest a biological interpretation. We determine the linear transformation M that shows the dependence of the Shapley value on the edge weights of the tree, and we also compute a null space basis of M. Both depend on the "split counts" of the tree. Finally, we characterize the Shapley value on tree games by four axioms, a counterpart to Shapley's original theorem on the larger class of cooperative games.<|reference_end|>
arxiv
@article{haake2005the, title={The Shapley Value of Phylogenetic Trees}, author={Claus-Jochen Haake and Akemi Kashiwada and Francis Edward Su}, journal={J. Mathematical Biology 56 (2008), 479--497}, year={2005}, doi={10.1007/s00285-007-0126-2}, archivePrefix={arXiv}, eprint={q-bio/0506034}, primaryClass={q-bio.QM cs.GT math.CO q-bio.PE} }
haake2005the
arxiv-677122
q-bio/0507037
Neuromodulation Influences Synchronization and Intrinsic Read-out
<|reference_start|>Neuromodulation Influences Synchronization and Intrinsic Read-out: Background: The roles of neuromodulation in a neural network, such as in a cortical microcolumn, are still incompletely understood. Neuromodulation influences neural processing by presynaptic and postsynaptic regulation of synaptic efficacy. Neuromodulation also affects ion channels and intrinsic excitability. Methods: Synaptic efficacy modulation is an effective way to rapidly alter network density and topology. We alter network topology and density to measure the effect on spike synchronization. We also operate with differently parameterized neuron models, which alter the neuron's intrinsic excitability, i.e., its activation function. Results: We find that (a) fast synaptic efficacy modulation influences the amount of correlated spiking in a network. Also, (b) synchronization in a network influences the read-out of intrinsic properties. Highly synchronous input drives neurons, such that differences in intrinsic properties disappear, while asynchronous input lets intrinsic properties determine output behavior. Thus, altering network topology can alter the balance between intrinsically vs. synaptically driven network activity. Conclusion: We conclude that neuromodulation may allow a network to shift between a more synchronized transmission mode and a more asynchronous intrinsic read-out mode. This has significant implications for our understanding of the flexibility of cortical computations.<|reference_end|>
arxiv
@article{scheler2005neuromodulation, title={Neuromodulation Influences Synchronization and Intrinsic Read-out}, author={Gabriele Scheler}, journal={F1000Research 2018, 7:1277}, year={2005}, doi={10.12688/f1000research.15804.2}, archivePrefix={arXiv}, eprint={q-bio/0507037}, primaryClass={q-bio.NC cs.NE nlin.AO} }
scheler2005neuromodulation
arxiv-677123
q-bio/0510007
The fitness value of information
<|reference_start|>The fitness value of information: Biologists measure information in different ways. Neurobiologists and researchers in bioinformatics often measure information using information-theoretic measures such as Shannon's entropy or mutual information. Behavioral biologists and evolutionary ecologists more commonly use decision-theoretic measures, such as the value of information, which assess the worth of information to a decision maker. Here we show that these two kinds of measures are intimately related in the context of biological evolution. We present a simple model of evolution in an uncertain environment, and calculate the increase in Darwinian fitness that is made possible by information about the environmental state. This fitness increase -- the fitness value of information -- is a composite of both Shannon's mutual information and the decision-theoretic value of information. Furthermore, we show that in certain cases the fitness value of responding to a cue is exactly equal to the mutual information between the cue and the environment. In general the Shannon entropy of the environment, which seemingly fails to take anything about organismal fitness into account, nonetheless imposes an upper bound on the fitness value of information.<|reference_end|>
arxiv
@article{bergstrom2005the, title={The fitness value of information}, author={Carl T. Bergstrom and Michael Lachmann}, journal={arXiv preprint arXiv:q-bio/0510007}, year={2005}, archivePrefix={arXiv}, eprint={q-bio/0510007}, primaryClass={q-bio.PE cs.IT math.IT q-bio.NC} }
bergstrom2005the
arxiv-677124
q-bio/0511045
The use of the GARP genetic algorithm and internet grid computing in the Lifemapper world atlas of species biodiversity
<|reference_start|>The use of the GARP genetic algorithm and internet grid computing in the Lifemapper world atlas of species biodiversity: Lifemapper (http://www.lifemapper.org) is a predictive electronic atlas of the Earth's biological biodiversity. Using a screensaver version of the GARP genetic algorithm for modeling species distributions, Lifemapper harnesses vast computing resources through volunteers' PCs, similar to SETI@home, to develop models of the distribution of the world's fauna and flora. The Lifemapper project's primary goal is to provide an up to date and comprehensive database of species maps and prediction models (i.e. a fauna and flora of the world) using available data on species' locations. The models are developed using specimen data from distributed museum collections and an archive of geospatial environmental correlates. A central server maintains a dynamic archive of species maps and models for research, outreach to the general community, and feedback to museum data providers. This paper is a case study in the role, use and justification of a genetic algorithm in development of large-scale environmental informatics infrastructure.<|reference_end|>
arxiv
@article{stockwell2005the, title={The use of the GARP genetic algorithm and internet grid computing in the Lifemapper world atlas of species biodiversity}, author={David R.B. Stockwell and James H. Beach and Aimee Stewart and Gregory Vorontsov and David Vieglais and Ricardo Scachetti Pereira}, journal={arXiv preprint arXiv:q-bio/0511045}, year={2005}, archivePrefix={arXiv}, eprint={q-bio/0511045}, primaryClass={q-bio.QM cs.DC cs.NE q-bio.OT} }
stockwell2005the
arxiv-677125
q-bio/0511046
Improving ecological niche models by data mining large environmental datasets for surrogate models
<|reference_start|>Improving ecological niche models by data mining large environmental datasets for surrogate models: WhyWhere is a new ecological niche modeling (ENM) algorithm for mapping and explaining the distribution of species. The algorithm uses image processing methods to efficiently sift through large amounts of data to find the few variables that best predict species occurrence. The purpose of this paper is to describe and justify the main parameterizations and to show preliminary success at rapidly providing accurate, scalable, and simple ENMs. Preliminary results for 6 species of plants and animals in different regions indicate a significant (p<0.01) 14% increase in accuracy over the GARP algorithm using models with few, typically two, variables. The increase is attributed to access to additional data, particularly monthly vs. annual climate averages. WhyWhere is also 6 times faster than GARP on large data sets. A data mining based approach with transparent access to remote data archives is a new paradigm for ENM, particularly suited to finding correlates in large databases of fine resolution surfaces. Software for WhyWhere is freely available, both as a service and in a desktop downloadable form from the web site http://biodi.sdsc.edu/ww_home.html.<|reference_end|>
arxiv
@article{stockwell2005improving, title={Improving ecological niche models by data mining large environmental datasets for surrogate models}, author={David R.B. Stockwell}, journal={arXiv preprint arXiv:q-bio/0511046}, year={2005}, archivePrefix={arXiv}, eprint={q-bio/0511046}, primaryClass={q-bio.QM cs.AI} }
stockwell2005improving
arxiv-677126
q-bio/0603007
Compression ratios based on the Universal Similarity Metric still yield protein distances far from CATH distances
<|reference_start|>Compression ratios based on the Universal Similarity Metric still yield protein distances far from CATH distances: Kolmogorov complexity has inspired several alignment-free distance measures, based on the comparison of lengths of compressions, which have been applied successfully in many areas. One of these measures, the so-called Universal Similarity Metric (USM), has been used by Krasnogor and Pelta to compare simple protein contact maps, showing that it yielded good clustering on four small datasets. We report an extensive test of this metric using a much larger and representative protein dataset: the domain dataset used by Sierk and Pearson to evaluate seven protein structure comparison methods and two protein sequence comparison methods. One result is that the Krasnogor-Pelta method has less domain discriminant power than any one of the methods considered by Sierk and Pearson when using these simple contact maps. In another test, we found that the USM based distance has low agreement with the CATH tree structure for the same benchmark of Sierk and Pearson. In any case, its agreement is lower than that of a standard sequence alignment method, SSEARCH. Finally, we manually found many small subsets of the database that are better clustered using SSEARCH than USM, to confirm that Krasnogor and Pelta's conclusions were based on datasets that were too small.<|reference_end|>
arxiv
@article{rocha2006compression, title={Compression ratios based on the Universal Similarity Metric still yield protein distances far from CATH distances}, author={Jairo Rocha and Francesc Rossell\'o and Joan Segura}, journal={arXiv preprint arXiv:q-bio/0603007}, year={2006}, archivePrefix={arXiv}, eprint={q-bio/0603007}, primaryClass={q-bio.QM cs.CE physics.data-an q-bio.OT} }
rocha2006compression
arxiv-677127
q-bio/0604024
The transposition distance for phylogenetic trees
<|reference_start|>The transposition distance for phylogenetic trees: The search for similarity and dissimilarity measures on phylogenetic trees has been motivated by the computation of consensus trees, the search by similarity in phylogenetic databases, and the assessment of clustering results in bioinformatics. The transposition distance for fully resolved phylogenetic trees is a recent addition to the extensive collection of available metrics for comparing phylogenetic trees. In this paper, we generalize the transposition distance from fully resolved to arbitrary phylogenetic trees, through a construction that involves an embedding of the set of phylogenetic trees with a fixed number of labeled leaves into a symmetric group and a generalization of Reidys-Stadler's involution metric for RNA contact structures. We also present simple linear-time algorithms for computing it.<|reference_end|>
arxiv
@article{rossello2006the, title={The transposition distance for phylogenetic trees}, author={Francesc Rossello and Gabriel Valiente}, journal={arXiv preprint arXiv:q-bio/0604024}, year={2006}, archivePrefix={arXiv}, eprint={q-bio/0604024}, primaryClass={q-bio.PE cs.CE math.GR q-bio.OT} }
rossello2006the
arxiv-677128
q-bio/0605020
Laws in Darwinian Evolutionary Theory
<|reference_start|>Laws in Darwinian Evolutionary Theory: In the present article the recent works to formulate laws in Darwinian evolutionary dynamics are discussed. Although there is a strong consensus that general laws in biology may exist, opinions opposing such a suggestion are abundant. Based on recent progress in both mathematics and biology, another attempt to address this issue is made in the present article. Specifically, three laws which form a mathematical framework for the evolutionary dynamics in biology are postulated. The second law is most quantitative and is explicitly expressed in the unique form of a stochastic differential equation. Salient features of Darwinian evolutionary dynamics are captured by this law: the probabilistic nature of evolution, ascendancy, and the adaptive landscape. Four dynamical elements are introduced in this formulation: the ascendant matrix, the transverse matrix, the Wright evolutionary potential, and the stochastic drive. The first law may be regarded as a special case of the second law. It gives the reference point to discuss the evolutionary dynamics. The third law describes the relationship between the focused level of description and its lower and higher ones, and defines the dichotomy of deterministic and stochastic drives. It is an acknowledgement of the hierarchical structure in biology. A new interpretation of Fisher's fundamental theorem of natural selection is provided in terms of the F-Theorem. The proposed laws are based on continuous representation in both time and population. Their generic nature is demonstrated through their equivalence to classical formulations. The present three laws appear to provide a coherent framework for the further development of the subject.<|reference_end|>
arxiv
@article{ao2006laws, title={Laws in Darwinian Evolutionary Theory}, author={P Ao}, journal={Physics of Life Reviews, 2 (2005) 117-156}, year={2006}, doi={10.1016/j.plrev.2005.03.002}, archivePrefix={arXiv}, eprint={q-bio/0605020}, primaryClass={q-bio.PE cond-mat.stat-mech cs.NE math.OC nlin.AO physics.bio-ph q-bio.QM} }
ao2006laws
arxiv-677129
q-bio/0607018
A p-Adic Model of DNA Sequence and Genetic Code
<|reference_start|>A p-Adic Model of DNA Sequence and Genetic Code: Using basic properties of p-adic numbers, we consider a simple new approach to describe main aspects of DNA sequence and genetic code. A central role in our investigation is played by an ultrametric p-adic information space whose basic elements are nucleotides, codons and genes. We show that a 5-adic model is appropriate for DNA sequence. This 5-adic model, combined with 2-adic distance, is also suitable for genetic code and for a more advanced employment in genomics. We find that genetic code degeneracy is related to the p-adic distance between codons.<|reference_end|>
arxiv
@article{dragovich2006a, title={A p-Adic Model of DNA Sequence and Genetic Code}, author={Branko Dragovich and Alexandra Dragovich}, journal={p-Adic Numbers, Ultrametric Analysis and Applications 1 (2009) 34-41}, year={2006}, doi={10.1134/S2070046609010038}, archivePrefix={arXiv}, eprint={q-bio/0607018}, primaryClass={q-bio.GN cs.IT math-ph math.IT math.MP physics.bio-ph} }
dragovich2006a
arxiv-677130
q-bio/0610017
Number sequence representation of protein structures based on the second derivative of a folded tetrahedron sequence
<|reference_start|>Number sequence representation of protein structures based on the second derivative of a folded tetrahedron sequence: This paper proposes a new mathematical approach to characterize native protein structures based on the discrete differential geometry of tetrahedron tiles. In the approach, the local structure of proteins is classified into finitely many types according to shape, and a number sequence representation of protein structures is obtained automatically. As a result, it becomes possible to quantify the structural preferences of amino acids objectively, and the wide variety of sequence alignment programs can be used to study protein structures, since the number sequence has no internal structure. The programs and this paper with clear figures are available from http://www.genocript.com.<|reference_end|>
arxiv
@article{morikawa2006number, title={Number sequence representation of protein structures based on the second derivative of a folded tetrahedron sequence}, author={Naoto Morikawa}, journal={arXiv preprint arXiv:q-bio/0610017}, year={2006}, archivePrefix={arXiv}, eprint={q-bio/0610017}, primaryClass={q-bio.BM cs.CG cs.DM math.MG} }
morikawa2006number
arxiv-677131
q-bio/0610040
Metric learning pairwise kernel for graph inference
<|reference_start|>Metric learning pairwise kernel for graph inference: Much recent work in bioinformatics has focused on the inference of various types of biological networks, representing gene regulation, metabolic processes, protein-protein interactions, etc. A common setting involves inferring network edges in a supervised fashion from a set of high-confidence edges, possibly characterized by multiple, heterogeneous data sets (protein sequence, gene expression, etc.). Here, we distinguish between two modes of inference in this setting: direct inference based upon similarities between nodes joined by an edge, and indirect inference based upon similarities between one pair of nodes and another pair of nodes. We propose a supervised approach for the direct case by translating it into a distance metric learning problem. A relaxation of the resulting convex optimization problem leads to the support vector machine (SVM) algorithm with a particular kernel for pairs, which we call the metric learning pairwise kernel (MLPK). We demonstrate, using several real biological networks, that this direct approach often improves upon the state-of-the-art SVM for indirect inference with the tensor product pairwise kernel.<|reference_end|>
arxiv
@article{vert2006metric, title={Metric learning pairwise kernel for graph inference}, author={Jean-Philippe Vert (CB) and Jian Qiu (GS-UW) and William Stafford Noble (GS-UW)}, journal={arXiv preprint arXiv:q-bio/0610040}, year={2006}, archivePrefix={arXiv}, eprint={q-bio/0610040}, primaryClass={q-bio.QM cs.LG} }
vert2006metric
arxiv-677132
q-bio/0611052
Grid Added Value to Address Malaria
<|reference_start|>Grid Added Value to Address Malaria: Through this paper, we call for a distributed, internet-based collaboration to address one of the worst plagues of our present world, malaria. The spirit is a non-proprietary peer production of information-embedding goods, and we propose to use grid technology to enable such a worldwide, open-source-like collaboration. The first step towards this vision was achieved during the summer on the EGEE grid infrastructure, where 46 million ligands were docked in the quest for new drugs, amounting to 80 CPU years in 6 weeks.<|reference_end|>
arxiv
@article{breton2006grid, title={Grid Added Value to Address Malaria}, author={V. Breton (LPC-Clermont) and N. Jacq (LPC-Clermont, CS-Si) and M. Hofmann (SCAI)}, journal={Dans IEEE Transactions on Information Technology in Biomedicine (12) - 6th IEEE International Symposium on Cluster Computing and the Grid, CCGrid06, Singapore (2006)}, year={2006}, doi={10.1109/TITB.2007.895930}, archivePrefix={arXiv}, eprint={q-bio/0611052}, primaryClass={q-bio.QM cs.DC} }
breton2006grid
arxiv-677133
q-bio/0611053
Integration and mining of malaria molecular, functional and pharmacological data: how far are we from a chemogenomic knowledge space?
<|reference_start|>Integration and mining of malaria molecular, functional and pharmacological data: how far are we from a chemogenomic knowledge space?: The organization and mining of malaria genomic and post-genomic data is highly motivated by the necessity to predict and characterize new biological targets and new drugs. Biological targets are sought in a biological space designed from the genomic data from Plasmodium falciparum, but using also the millions of genomic data from other species. Drug candidates are sought in a chemical space containing the millions of small molecules stored in public and private chemolibraries. Data management should therefore be as reliable and versatile as possible. In this context, we examined five aspects of the organization and mining of malaria genomic and post-genomic data: 1) the comparison of protein sequences including compositionally atypical malaria sequences, 2) the high throughput reconstruction of molecular phylogenies, 3) the representation of biological processes particularly metabolic pathways, 4) the versatile methods to integrate genomic data, biological representations and functional profiling obtained from X-omic experiments after drug treatments and 5) the determination and prediction of protein structures and their molecular docking with drug candidate structures. Progress toward a grid-enabled chemogenomic knowledge space is discussed.<|reference_end|>
arxiv
@article{birkholtz2006integration, title={Integration and mining of malaria molecular, functional and pharmacological data: how far are we from a chemogenomic knowledge space?}, author={L.-M. Birkholtz and O. Bastien (DRDC) and G. Wells and D. Grando (DRDC) and F. Joubert and V. Kasam (LPC-Clermont) and M. Zimmermann and P. Ortet (DEVM) and N. Jacq (LPC-Clermont) and S. Roy (DRDC/Bim) and M. Hoffmann-Apitius and V. Breton (LPC-Clermont) and A. I. Louw and E. Mar\'echal (DRDC)}, journal={Malaria Journal 5 (2006) 1-24}, year={2006}, doi={10.1186/1475-2875-5-110}, archivePrefix={arXiv}, eprint={q-bio/0611053}, primaryClass={q-bio.QM cs.DC q-bio.GN} }
birkholtz2006integration
arxiv-677134
q-bio/0611054
Grid enabled virtual screening against malaria
<|reference_start|>Grid enabled virtual screening against malaria: WISDOM is an international initiative to enable a virtual screening pipeline on a grid infrastructure. Its first attempt was to deploy large scale in silico docking on a public grid infrastructure. Protein-ligand docking is about computing the binding energy of a protein target to a library of potential drugs using a scoring algorithm. Previous deployments were either limited to one cluster, to grids of clusters in the tightly protected environment of a pharmaceutical laboratory or to pervasive grids. The first large scale docking experiment ran on the EGEE grid production service from 11 July 2005 to 19 August 2005 against targets relevant to research on malaria and saw over 41 million compounds docked for the equivalent of 80 years of CPU time. Up to 1,700 computers were simultaneously used in 15 countries around the world. Issues related to the deployment and the monitoring of the in silico docking experiment as well as experience with grid operation and services are reported in the paper. The main problem encountered for such a large scale deployment was the grid infrastructure stability. Although the overall success rate was above 80%, a lot of monitoring and supervision was still required at the application level to resubmit the jobs that failed. But the experiment demonstrated how grid infrastructures have a tremendous capacity to mobilize very large CPU resources for well targeted goals during a significant period of time. This success leads to a second computing challenge targeting Avian Flu neuraminidase N1.<|reference_end|>
arxiv
@article{jacq2006grid, title={Grid enabled virtual screening against malaria}, author={N. Jacq (LPC-Clermont, CS-Si) and J. Salzemann (LPC-Clermont) and F. Jacq (LPC-Clermont) and Y. Legr\'e (LPC-Clermont) and E. Medernach (LPC-Clermont) and J. Montagnat (Informatique Signaux Et Syst\`emes) and A. Maass (SCAI) and M. Reichstadt (LPC-Clermont) and H. Schwichtenberg (SCAI) and M. Sridhar (SCAI) and V. Kasam (SCAI) and M. Zimmermann (SCAI) and M. Hofmann (SCAI) and V. Breton (LPC-Clermont)}, journal={Journal of Grid Computing 6 (2008) 29-43}, year={2006}, archivePrefix={arXiv}, eprint={q-bio/0611054}, primaryClass={q-bio.QM cs.DC} }
jacq2006grid
arxiv-677135
q-bio/0612013
Clustering fetal heart rate tracings by compression
<|reference_start|>Clustering fetal heart rate tracings by compression: Fetal heart rate (FHR) monitoring, before and during labor, is a very important medical practice in the detection of fetuses in danger. We clustered FHR tracings by compression in order to identify abnormal ones. We use a recently introduced approach based on algorithmic information theory, a theoretical, rigorous and well-studied notion of information content in individual objects. The new method can mine patterns in completely different areas, there are no domain-specific parameters to set, and it does not require specific background knowledge. At the highest level the FHR tracings were clustered according to an unanticipated feature, namely the technology used in signal acquisition. At the lower levels all tracings with abnormal or suspicious patterns were clustered together, independent of the technology used. Moreover, FHR tracings with future poor neonatal outcomes were included in the cluster with other suspicious patterns.<|reference_end|>
arxiv
@article{santos2006clustering, title={Clustering fetal heart rate tracings by compression}, author={C. Costa Santos (Univ. Porto) and J. Bernardes (Univ. Porto) and P. Vitanyi (CWI/Univ. Amsterdam) and L. Antunes (Univ. Porto)}, journal={arXiv preprint arXiv:q-bio/0612013}, year={2006}, archivePrefix={arXiv}, eprint={q-bio/0612013}, primaryClass={q-bio.TO cs.CV cs.IR q-bio.QM} }
santos2006clustering
arxiv-677136
q-bio/0701009
Attribute Exploration of Discrete Temporal Transitions
<|reference_start|>Attribute Exploration of Discrete Temporal Transitions: Discrete temporal transitions occur in a variety of domains, but this work is mainly motivated by applications in molecular biology: explaining and analyzing observed transcriptome and proteome time series by literature and database knowledge. The starting point of a formal concept analysis model is presented. The objects of a formal context are states of the interesting entities, and the attributes are the variable properties defining the current state (e.g. observed presence or absence of proteins). Temporal transitions assign a relation to the objects, defined by deterministic or non-deterministic transition rules between sets of pre- and postconditions. This relation can be generalized to its transitive closure, i.e. states are related if one results from the other by a transition sequence of arbitrary length. The focus of the work is the adaptation of the attribute exploration algorithm to such a relational context, so that questions concerning temporal dependencies can be asked during the exploration process and be answered from the computed stem base. Results are given for the abstract example of a game and a small gene regulatory network relevant to a biomedical question.<|reference_end|>
arxiv
@article{wollbold2007attribute, title={Attribute Exploration of Discrete Temporal Transitions}, author={Johannes Wollbold}, journal={In: Gely, A. et al.. Contributions to ICFCA 2007 - 5th International Conference on Formal Concept Analysis. Clermont-Ferrand 2007, 121-130}, year={2007}, archivePrefix={arXiv}, eprint={q-bio/0701009}, primaryClass={q-bio.QM cs.AI q-bio.MN} }
wollbold2007attribute
arxiv-677137
q-bio/0703044
On the existence of potential landscape in the evolution of complex systems
<|reference_start|>On the existence of potential landscape in the evolution of complex systems: A recently developed treatment of stochastic processes leads to the construction of a potential landscape for the dynamical evolution of complex systems. Since the existence of a potential function in generic settings has been frequently questioned in the literature, here we study several related theoretical issues that lie at the core of the construction. We show that the novel treatment, via a transformation, is closely related to the symplectic structure that is central in many branches of theoretical physics. Using this insight, we demonstrate an invariant under the transformation. We further explicitly demonstrate, in the one-dimensional case, the contradistinction between the new treatment and those of Ito and Stratonovich, as well as others. Our results strongly suggest that the method from statistical physics can be useful in studying stochastic, complex systems in general.<|reference_end|>
arxiv
@article{ao2007on, title={On the existence of potential landscape in the evolution of complex systems}, author={P. Ao and C. Kwon and H. Qian}, journal={Complexity 12 (2007) 19-27}, year={2007}, archivePrefix={arXiv}, eprint={q-bio/0703044}, primaryClass={q-bio.QM cond-mat.stat-mech cs.IT math.DS math.IT nlin.AO q-bio.MN} }
ao2007on
arxiv-677138
quant-ph/0001077
Poly-locality in quantum computing
<|reference_start|>Poly-locality in quantum computing: A polynomial-depth quantum circuit effects, by definition, a poly-local unitary transformation of tensor product state space. It is a physically reasonable belief [Fy][L][FKW] that these are precisely the transformations which will be available from physics to help us solve computational problems. The poly-locality of the discrete Fourier transform on cyclic groups is at the heart of Shor's factoring algorithm. We describe a class of poly-local transformations, including all the discrete orthogonal wavelet transforms, in the hope that these may be helpful in constructing new quantum algorithms. We also observe that even a rather mild violation of poly-locality leads to a model without one-way functions, giving further evidence that poly-locality is an essential concept.<|reference_end|>
arxiv
@article{freedman2000poly-locality, title={Poly-locality in quantum computing}, author={Michael H. Freedman}, journal={arXiv preprint arXiv:quant-ph/0001077}, year={2000}, archivePrefix={arXiv}, eprint={quant-ph/0001077}, primaryClass={quant-ph cs.NA} }
freedman2000poly-locality
arxiv-677139
quant-ph/0002066
Quantum lower bounds by quantum arguments
<|reference_start|>Quantum lower bounds by quantum arguments: We propose a new method for proving lower bounds on quantum query algorithms. Instead of a classical adversary that runs the algorithm with one input and then modifies the input, we use a quantum adversary that runs the algorithm with a superposition of inputs. If the algorithm works correctly, its state becomes entangled with the superposition over inputs. We bound the number of queries needed to achieve sufficient entanglement, and this implies a lower bound on the number of queries for the computation. Using this method, we prove two new $\Omega(\sqrt{N})$ lower bounds on computing AND of ORs and inverting a permutation and also provide more uniform proofs for several known lower bounds which have been previously proven via a variety of different techniques.<|reference_end|>
arxiv
@article{ambainis2000quantum, title={Quantum lower bounds by quantum arguments}, author={Andris Ambainis}, journal={arXiv preprint arXiv:quant-ph/0002066}, year={2000}, archivePrefix={arXiv}, eprint={quant-ph/0002066}, primaryClass={quant-ph cs.CC} }
ambainis2000quantum
arxiv-677140
quant-ph/0003035
One Complexity Theorist's View of Quantum Computing
<|reference_start|>One Complexity Theorist's View of Quantum Computing: The complexity of quantum computation remains poorly understood. While physicists attempt to find ways to create quantum computers, we still do not have much evidence one way or the other as to how useful these machines will be. The tools of computational complexity theory should come to bear on these important questions. Quantum computing often scares away many potential researchers from computer science because of the apparent background need in quantum mechanics and the alien looking notation used in papers on the topic. This paper will give an overview of quantum computation from the point of view of a complexity theorist. We will see that one can think of BQP as yet another complexity class and study its power without focusing on the physical aspects behind it.<|reference_end|>
arxiv
@article{fortnow2000one, title={One Complexity Theorist's View of Quantum Computing}, author={Lance Fortnow}, journal={arXiv preprint arXiv:quant-ph/0003035}, year={2000}, archivePrefix={arXiv}, eprint={quant-ph/0003035}, primaryClass={quant-ph cs.CC} }
fortnow2000one
arxiv-677141
quant-ph/0005106
Interaction in Quantum Communication Complexity
<|reference_start|>Interaction in Quantum Communication Complexity: One of the most intriguing facts about communication using quantum states is that these states cannot be used to transmit more classical bits than the number of qubits used, yet there are ways of conveying information with exponentially fewer qubits than possible classically. Moreover, these methods have a very simple structure---they involve little interaction between the communicating parties. We look more closely at the ways in which information encoded in quantum states may be manipulated, and consider the question as to whether every classical protocol may be transformed to a ``simpler'' quantum protocol of similar efficiency. By a simpler protocol, we mean a protocol that uses fewer message exchanges. We show that for any constant k, there is a problem such that its k+1 message classical communication complexity is exponentially smaller than its k message quantum communication complexity, thus answering the above question in the negative. Our result builds on two primitives, local transitions in bi-partite states (based on previous work) and average encoding which may be of significance in other applications as well.<|reference_end|>
arxiv
@article{nayak2000interaction, title={Interaction in Quantum Communication Complexity}, author={Ashwin Nayak and Amnon Ta-Shma and David Zuckerman}, journal={arXiv preprint arXiv:quant-ph/0005106}, year={2000}, archivePrefix={arXiv}, eprint={quant-ph/0005106}, primaryClass={quant-ph cs.CC} }
nayak2000interaction
arxiv-677142
quant-ph/0007021
The Quantum Complexity of Set Membership
<|reference_start|>The Quantum Complexity of Set Membership: We study the quantum complexity of the static set membership problem: given a subset S (|S| \leq n) of a universe of size m (m \gg n), store it as a table of bits so that queries of the form `Is x \in S?' can be answered. The goal is to use a small table and yet answer queries using few bitprobes. This problem was considered recently by Buhrman, Miltersen, Radhakrishnan and Venkatesh, where lower and upper bounds were shown for this problem in the classical deterministic and randomized models. In this paper, we formulate this problem in the "quantum bitprobe model" and show tradeoff results between space and time. In this model, the storage scheme is classical but the query scheme is quantum. We show, roughly speaking, that similar lower bounds hold in the quantum model as in the classical model, which imply that the classical upper bounds are more or less tight even in the quantum case. Our lower bounds are proved using linear algebraic techniques.<|reference_end|>
arxiv
@article{radhakrishnan2000the, title={The Quantum Complexity of Set Membership}, author={Jaikumar Radhakrishnan and Pranab Sen and S. Venkatesh}, journal={arXiv preprint arXiv:quant-ph/0007021}, year={2000}, archivePrefix={arXiv}, eprint={quant-ph/0007021}, primaryClass={quant-ph cs.CC} }
radhakrishnan2000the
arxiv-677143
quant-ph/0007071
A Numerical Study of the Performance of a Quantum Adiabatic Evolution Algorithm for Satisfiability
<|reference_start|>A Numerical Study of the Performance of a Quantum Adiabatic Evolution Algorithm for Satisfiability: Quantum computation by adiabatic evolution, as described in quant-ph/0001106, will solve satisfiability problems if the running time is long enough. In certain special cases (that are classically easy) we know that the quantum algorithm requires a running time that grows as a polynomial in the number of bits. In this paper we present numerical results on randomly generated instances of an NP-complete problem and of a problem that can be solved classically in polynomial time. We simulate a quantum computer (of up to 16 qubits) by integrating the Schrodinger equation on a conventional computer. For both problems considered, for the set of instances studied, the required running time appears to grow slowly as a function of the number of bits.<|reference_end|>
arxiv
@article{farhi2000a, title={A Numerical Study of the Performance of a Quantum Adiabatic Evolution Algorithm for Satisfiability}, author={Edward Farhi, Jeffrey Goldstone, Sam Gutmann}, journal={arXiv preprint arXiv:quant-ph/0007071}, year={2000}, number={MIT-CTP # 3006}, archivePrefix={arXiv}, eprint={quant-ph/0007071}, primaryClass={quant-ph cs.CC} }
farhi2000a
arxiv-677144
quant-ph/0008059
Quantum Algorithms for Weighing Matrices and Quadratic Residues
<|reference_start|>Quantum Algorithms for Weighing Matrices and Quadratic Residues: In this article we investigate how we can employ the structure of combinatorial objects like Hadamard matrices and weighing matrices to devise new quantum algorithms. We show how the properties of a weighing matrix can be used to construct a problem for which the quantum query complexity is significantly lower than the classical one. It is pointed out that this scheme captures both Bernstein & Vazirani's inner-product protocol, as well as Grover's search algorithm. In the second part of the article we consider Paley's construction of Hadamard matrices, which relies on the properties of quadratic characters over finite fields. We design a query problem that uses the Legendre symbol chi (which indicates if an element of a finite field F_q is a quadratic residue or not). It is shown how for a shifted Legendre function f_s(i)=chi(i+s), the unknown s in F_q can be obtained exactly with only two quantum calls to f_s. This is in sharp contrast with the observation that any classical, probabilistic procedure requires more than log(q) + log((1-e)/2) queries to solve the same problem.<|reference_end|>
arxiv
@article{vandam2000quantum, title={Quantum Algorithms for Weighing Matrices and Quadratic Residues}, author={Wim van Dam (UC Berkeley)}, journal={Algorithmica, Volume 34, No. 4, pages 413-428 (2002)}, year={2000}, doi={10.1007/s00453-002-0975-4}, archivePrefix={arXiv}, eprint={quant-ph/0008059}, primaryClass={quant-ph cs.CC math.CO} }
vandam2000quantum
arxiv-677145
quant-ph/0009009
The definition of a random sequence of qubits: from Noncommutative Algorithmic Probability Theory to Quantum Algorithmic Information Theory and back
<|reference_start|>The definition of a random sequence of qubits: from Noncommutative Algorithmic Probability Theory to Quantum Algorithmic Information Theory and back: The issue of defining a random sequence of qubits is studied in the framework of Algorithmic Free Probability Theory. Its connection with Quantum Algorithmic Information Theory is shown.<|reference_end|>
arxiv
@article{segre2000the, title={The definition of a random sequence of qubits: from Noncommutative Algorithmic Probability Theory to Quantum Algorithmic Information Theory and back}, author={Gavriel Segre}, journal={arXiv preprint arXiv:quant-ph/0009009}, year={2000}, archivePrefix={arXiv}, eprint={quant-ph/0009009}, primaryClass={quant-ph cs.CC math-ph math.MP} }
segre2000the
arxiv-677146
quant-ph/0010076
Beyond Stabilizer Codes II: Clifford Codes
<|reference_start|>Beyond Stabilizer Codes II: Clifford Codes: Knill introduced a generalization of stabilizer codes, in this note called Clifford codes. It remained unclear whether or not Clifford codes can be superior to stabilizer codes. We show that Clifford codes are stabilizer codes provided that the abstract error group has an abelian index group. In particular, if the errors are modelled by tensor products of Pauli matrices, then the associated Clifford codes are necessarily stabilizer codes.<|reference_end|>
arxiv
@article{klappenecker2000beyond, title={Beyond Stabilizer Codes II: Clifford Codes}, author={Andreas Klappenecker (Texas A&M University), Martin Roetteler (Universitaet Karlsruhe)}, journal={IEEE Transactions on Information Theory, vol. 48, no. 8, pp. 2396-2399, 2002}, year={2000}, archivePrefix={arXiv}, eprint={quant-ph/0010076}, primaryClass={quant-ph cs.ET} }
klappenecker2000beyond
arxiv-677147
quant-ph/0010082
Beyond Stabilizer Codes I: Nice Error Bases
<|reference_start|>Beyond Stabilizer Codes I: Nice Error Bases: Nice error bases have been introduced by Knill as a generalization of the Pauli basis. These bases are shown to be projective representations of finite groups. We classify all nice error bases of small degree, and all nice error bases with abelian index groups. We show that in general an index group of a nice error basis is necessarily solvable.<|reference_end|>
arxiv
@article{klappenecker2000beyond, title={Beyond Stabilizer Codes I: Nice Error Bases}, author={Andreas Klappenecker (Texas A&M University), Martin Roetteler (Universitaet Karlsruhe)}, journal={IEEE Transactions on Information Theory, vol. 48, no. 8, pp. 2392-2395, 2002}, year={2000}, archivePrefix={arXiv}, eprint={quant-ph/0010082}, primaryClass={quant-ph cs.ET} }
klappenecker2000beyond
arxiv-677148
quant-ph/0011067
Efficient Quantum Algorithms for Shifted Quadratic Character Problems
<|reference_start|>Efficient Quantum Algorithms for Shifted Quadratic Character Problems: We introduce the Shifted Legendre Symbol Problem and some variants along with efficient quantum algorithms to solve them. The problems and their algorithms are different from previous work on quantum computation in that they do not appear to fit into the framework of the Hidden Subgroup Problem. The classical complexity of the problem is unknown despite the various results on the irregularity of Legendre Sequences.<|reference_end|>
arxiv
@article{vandam2000efficient, title={Efficient Quantum Algorithms for Shifted Quadratic Character Problems}, author={Wim van Dam (Berkeley, CWI), Sean Hallgren (MSRI)}, journal={arXiv preprint arXiv:quant-ph/0011067}, year={2000}, archivePrefix={arXiv}, eprint={quant-ph/0011067}, primaryClass={quant-ph cs.CC math.NT} }
vandam2000efficient
arxiv-677149
quant-ph/0011122
Algorithmic Theories of Everything
<|reference_start|>Algorithmic Theories of Everything: The probability distribution P from which the history of our universe is sampled represents a theory of everything or TOE. We assume P is formally describable. Since most (uncountably many) distributions are not, this imposes a strong inductive bias. We show that P(x) is small for any universe x lacking a short description, and study the spectrum of TOEs spanned by two Ps, one reflecting the most compact constructive descriptions, the other the fastest way of computing everything. The former derives from generalizations of traditional computability, Solomonoff's algorithmic probability, Kolmogorov complexity, and objects more random than Chaitin's Omega, the latter from Levin's universal search and a natural resource-oriented postulate: the cumulative prior probability of all x incomputable within time t by this optimal algorithm should be 1/t. Between both Ps we find a universal cumulatively enumerable measure that dominates traditional enumerable measures; any such CEM must assign low probability to any universe lacking a short enumerating program. We derive P-specific consequences for evolving observers, inductive reasoning, quantum physics, philosophy, and the expected duration of our universe.<|reference_end|>
arxiv
@article{schmidhuber2000algorithmic, title={Algorithmic Theories of Everything}, author={Juergen Schmidhuber}, journal={Sections 1-5 in: Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit. International Journal of Foundations of Computer Science 13(4):587-612 (2002). Section 6 in: The Speed Prior: A New Simplicity Measure Yielding Near-Optimal Computable Predictions. In J. Kivinen and R. H. Sloan, editors, Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT 2002), Sydney, Australia, Lecture Notes in Artificial Intelligence, pages 216--228. Springer, 2002.}, year={2000}, number={IDSIA-20-00 (Version 2.0)}, archivePrefix={arXiv}, eprint={quant-ph/0011122}, primaryClass={quant-ph cs.AI cs.CC cs.LG hep-th math-ph math.MP physics.comp-ph} }
schmidhuber2000algorithmic
arxiv-677150
quant-ph/0012111
Quantum error-correcting codes associated with graphs
<|reference_start|>Quantum error-correcting codes associated with graphs: We present a construction scheme for quantum error correcting codes. The basic ingredients are a graph and a finite abelian group, from which the code can explicitly be obtained. We prove necessary and sufficient conditions for the graph such that the resulting code corrects a certain number of errors. This allows a simple verification of the 1-error correcting property of fivefold codes in any dimension. As new examples we construct a large class of codes saturating the singleton bound, as well as a tenfold code detecting 3 errors.<|reference_end|>
arxiv
@article{schlingemann2000quantum, title={Quantum error-correcting codes associated with graphs}, author={D. Schlingemann and R.F. Werner}, journal={arXiv preprint arXiv:quant-ph/0012111}, year={2000}, doi={10.1103/PhysRevA.65.012308}, archivePrefix={arXiv}, eprint={quant-ph/0012111}, primaryClass={quant-ph cs.IT math-ph math.IT math.MP} }
schlingemann2000quantum
arxiv-677151
quant-ph/0101133
Time and Space Bounds for Reversible Simulation
<|reference_start|>Time and Space Bounds for Reversible Simulation: We prove a general upper bound on the tradeoff between time and space that suffices for the reversible simulation of irreversible computation. Previously, only simulations using exponential time or quadratic space were known. The tradeoff shows for the first time that we can simultaneously achieve subexponential time and subquadratic space. The boundary values are the exponential time with hardly any extra space required by the Lange-McKenzie-Tapp method and the ($\log 3$)th power time with square space required by the Bennett method. We also give the first general lower bound on the extra storage space required by general reversible simulation. This lower bound is optimal in that it is achieved by some reversible simulations.<|reference_end|>
arxiv
@article{buhrman2001time, title={Time and Space Bounds for Reversible Simulation}, author={Harry Buhrman (CWI and Univ. Amsterdam), J. Tromp (CWI and BioInformatics Solutions), Paul Vitanyi (CWI and Univ. Amsterdam)}, journal={Journal of Physics A: Mathematical and General, 34(2001), 6821--6830.}, year={2001}, doi={10.1088/0305-4470/34/35/308}, archivePrefix={arXiv}, eprint={quant-ph/0101133}, primaryClass={quant-ph cs.CC cs.DS} }
buhrman2001time
arxiv-677152
quant-ph/0102054
Quantum Pushdown Automata
<|reference_start|>Quantum Pushdown Automata: Quantum finite automata, as well as quantum pushdown automata (QPA) were first introduced by C. Moore and J. P. Crutchfield. In this paper we introduce the notion of QPA in a non-equivalent way, including unitarity criteria, by using the definition of quantum finite automata of Kondacs and Watrous. It is established that the unitarity criteria of QPA are not equivalent to the corresponding unitarity criteria of quantum Turing machines. We show that QPA can recognize every regular language. Finally we present some simple languages recognized by QPA, not recognizable by deterministic pushdown automata.<|reference_end|>
arxiv
@article{golovkins2001quantum, title={Quantum Pushdown Automata}, author={Marats Golovkins}, journal={LNCS, 2000, vol. 1963, pp. 336-346}, year={2001}, archivePrefix={arXiv}, eprint={quant-ph/0102054}, primaryClass={quant-ph cs.CC cs.FL} }
golovkins2001quantum
arxiv-677153
quant-ph/0102108
Quantum Kolmogorov Complexity Based on Classical Descriptions
<|reference_start|>Quantum Kolmogorov Complexity Based on Classical Descriptions: We develop a theory of the algorithmic information in bits contained in an individual pure quantum state. This extends classical Kolmogorov complexity to the quantum domain retaining classical descriptions. Quantum Kolmogorov complexity coincides with the classical Kolmogorov complexity on the classical domain. Quantum Kolmogorov complexity is upper bounded and can be effectively approximated from above under certain conditions. With high probability a quantum object is incompressible. Upper- and lower bounds of the quantum complexity of multiple copies of individual pure quantum states are derived and may shed some light on the no-cloning properties of quantum states. In the quantum situation complexity is not sub-additive. We discuss some relations with ``no-cloning'' and ``approximate cloning'' properties.<|reference_end|>
arxiv
@article{vitanyi2001quantum, title={Quantum Kolmogorov Complexity Based on Classical Descriptions}, author={Paul M.B. Vitanyi}, journal={IEEE Transactions on Information Theory, Vol. 47, No. 6, September 2001, 2464-2479}, year={2001}, doi={10.1109/18.945258}, archivePrefix={arXiv}, eprint={quant-ph/0102108}, primaryClass={quant-ph cs.CC cs.IT math.IT math.LO} }
vitanyi2001quantum
arxiv-677154
quant-ph/0104053
Quantum Formulas: a Lower Bound and Simulation
<|reference_start|>Quantum Formulas: a Lower Bound and Simulation: We show that Nechiporuk's method for proving lower bounds for Boolean formulas can be extended to the quantum case. This leads to an $\Omega(n^2 / \log^2 n)$ lower bound for quantum formulas computing an explicit function. The only known previous explicit lower bound for quantum formulas states that the majority function does not have a linear-size quantum formula. We also show that quantum formulas can be simulated by Boolean circuits of almost the same size.<|reference_end|>
arxiv
@article{roychowdhury2001quantum, title={Quantum Formulas: a Lower Bound and Simulation}, author={Vwani P. Roychowdhury and Farrokh Vatan}, journal={arXiv preprint arXiv:quant-ph/0104053}, year={2001}, archivePrefix={arXiv}, eprint={quant-ph/0104053}, primaryClass={quant-ph cs.CC} }
roychowdhury2001quantum
arxiv-677155
quant-ph/0104100
Lower bounds in the quantum cell probe model
<|reference_start|>Lower bounds in the quantum cell probe model: We introduce a new model for studying quantum data structure problems -- the "quantum cell probe model". We prove a lower bound for the static predecessor problem in the address-only version of this model where we allow quantum parallelism only over the `address lines' of the queries. The address-only quantum cell probe model subsumes the classical cell probe model, and many quantum query algorithms like Grover's algorithm fall into this framework. Our lower bound improves the previously known lower bound for the predecessor problem in the classical cell probe model with randomised query schemes, and matches the classical deterministic upper bound of Beame and Fich. Beame and Fich have also proved a matching lower bound for the predecessor problem, but only in the classical deterministic setting. Our lower bound has the advantage that it holds for the more general quantum model, and also, its proof is substantially simpler than that of Beame and Fich. We prove our lower bound by obtaining a round elimination lemma for quantum communication complexity. A similar lemma was proved by Miltersen, Nisan, Safra and Wigderson for classical communication complexity, but it was not strong enough to prove a lower bound matching the upper bound of Beame and Fich. Our quantum round elimination lemma also allows us to prove rounds versus communication tradeoffs for some quantum communication complexity problems like the "greater-than" problem. We also study the "static membership" problem in the quantum cell probe model. Generalising a result of Yao, we show that if the storage scheme is implicit, that is, it can only store members of the subset and `pointers', then any quantum query scheme must make $\Omega(\log n)$ probes.<|reference_end|>
arxiv
@article{sen2001lower, title={Lower bounds in the quantum cell probe model}, author={Pranab Sen and S. Venkatesh}, journal={arXiv preprint arXiv:quant-ph/0104100}, year={2001}, archivePrefix={arXiv}, eprint={quant-ph/0104100}, primaryClass={quant-ph cs.CC} }
sen2001lower
arxiv-677156
quant-ph/0107129
Algebraic geometric construction of a quantum stabilizer code
<|reference_start|>Algebraic geometric construction of a quantum stabilizer code: The stabilizer code is the most general algebraic construction of quantum error-correcting codes proposed so far. A stabilizer code can be constructed from a self-orthogonal subspace of a symplectic space over a finite field. We propose a construction method of such a self-orthogonal space using an algebraic curve. By using the proposed method we construct an asymptotically good sequence of binary stabilizer codes. As a byproduct we improve the Ashikhmin-Litsyn-Tsfasman bound of quantum codes. The main results in this paper can be understood without knowledge of quantum mechanics.<|reference_end|>
arxiv
@article{matsumoto2001algebraic, title={Algebraic geometric construction of a quantum stabilizer code}, author={Ryutaroh Matsumoto}, journal={arXiv preprint arXiv:quant-ph/0107129}, year={2001}, doi={10.1109/TIT.2002.1013156}, archivePrefix={arXiv}, eprint={quant-ph/0107129}, primaryClass={quant-ph cs.IT math.AG math.IT math.SG} }
matsumoto2001algebraic
arxiv-677157
quant-ph/0108010
Classical simulation of noninteracting-fermion quantum circuits
<|reference_start|>Classical simulation of noninteracting-fermion quantum circuits: We show that a class of quantum computations that was recently shown to be efficiently simulatable on a classical computer by Valiant corresponds to a physical model of noninteracting fermions in one dimension. We give an alternative proof of his result using the language of fermions and extend the result to noninteracting fermions with arbitrary pairwise interactions, where gates can be conditioned on outcomes of complete von Neumann measurements in the computational basis on other fermionic modes in the circuit. This last result is in remarkable contrast with the case of noninteracting bosons where universal quantum computation can be achieved by allowing gates to be conditioned on classical bits (quant-ph/0006088).<|reference_end|>
arxiv
@article{terhal2001classical, title={Classical simulation of noninteracting-fermion quantum circuits}, author={Barbara M. Terhal and David P. DiVincenzo}, journal={Phys. Rev. A 65, 032325/1-10 (2002)}, year={2001}, doi={10.1103/PhysRevA.65.032325}, archivePrefix={arXiv}, eprint={quant-ph/0108010}, primaryClass={quant-ph cond-mat cs.CC} }
terhal2001classical
arxiv-677158
quant-ph/0108033
Fermionic Linear Optics and Matchgates
<|reference_start|>Fermionic Linear Optics and Matchgates: Fermionic linear optics is efficiently classically simulatable. Here it is shown that the set of states achievable with fermionic linear optics and particle measurements is the closure of a low dimensional Lie group. The weakness of fermionic linear optics and measurements can therefore be explained and contrasted with the strength of bosonic linear optics with particle measurements. An analysis of fermionic linear optics is used to show that the two-qubit matchgates and the simulatable matchcircuits introduced by Valiant generate a monoid of extended fermionic linear optics operators. A useful interpretation of efficient classical simulations such as this one is as a simulation of a model of non-deterministic quantum computation. Problem areas for future investigations are suggested.<|reference_end|>
arxiv
@article{knill2001fermionic, title={Fermionic Linear Optics and Matchgates}, author={E. Knill}, journal={arXiv preprint arXiv:quant-ph/0108033}, year={2001}, archivePrefix={arXiv}, eprint={quant-ph/0108033}, primaryClass={quant-ph cs.CC} }
knill2001fermionic
arxiv-677159
quant-ph/0108073
Quantum Information in Space and Time
<|reference_start|>Quantum Information in Space and Time: Many important results in modern quantum information theory have been obtained for an idealized situation when the spacetime dependence of quantum phenomena is neglected. However, the transmission and processing of (quantum) information is a physical process in spacetime. Therefore such basic notions in quantum information theory as the notions of composite systems, entangled states and the channel should be formulated in space and time. We emphasize the importance of the investigation of quantum information in space and time. Entangled states in space and time are considered. A modification of Bell's equation which includes the spacetime variables is suggested. A general relation between quantum theory and theory of classical stochastic processes is proposed. It expresses the condition of local realism in the form of a {\it noncommutative spectral theorem}. Applications of this relation to the security of quantum key distribution in quantum cryptography are considered.<|reference_end|>
arxiv
@article{volovich2001quantum, title={Quantum Information in Space and Time}, author={Igor V. Volovich}, journal={arXiv preprint arXiv:quant-ph/0108073}, year={2001}, archivePrefix={arXiv}, eprint={quant-ph/0108073}, primaryClass={quant-ph cond-mat.mes-hall cs.IT gr-qc hep-ph hep-th math-ph math.IT math.MP math.PR} }
volovich2001quantum
arxiv-677160
quant-ph/0108133
On Classical and Quantum Cryptography
<|reference_start|>On Classical and Quantum Cryptography: Lectures on classical and quantum cryptography. Contents: Private key cryptosystems. Elements of number theory. Public key cryptography and RSA cryptosystem. Shannon's entropy and mutual information. Entropic uncertainty relations. The no cloning theorem. The BB84 quantum cryptographic protocol. Security proofs. Bell's theorem. The EPRBE quantum cryptographic protocol.<|reference_end|>
arxiv
@article{volovich2001on, title={On Classical and Quantum Cryptography}, author={I.V. Volovich and Ya.I. Volovich}, journal={arXiv preprint arXiv:quant-ph/0108133}, year={2001}, archivePrefix={arXiv}, eprint={quant-ph/0108133}, primaryClass={quant-ph cond-mat.mes-hall cs.IT hep-th math-ph math.IT math.MP} }
volovich2001on
arxiv-677161
quant-ph/0109063
Universal Simulation of Hamiltonians Using a Finite Set of Control Operations
<|reference_start|>Universal Simulation of Hamiltonians Using a Finite Set of Control Operations: Any quantum system with a non-trivial Hamiltonian is able to simulate any other Hamiltonian evolution provided that a sufficiently large group of unitary control operations is available. We show that there exist finite groups with this property and present a sufficient condition in terms of group characters. We give examples of such groups in dimension 2 and 3. Furthermore, we show that it is possible to simulate an arbitrary bipartite interaction by a given one using such groups acting locally on the subsystems.<|reference_end|>
arxiv
@article{wocjan2001universal, title={Universal Simulation of Hamiltonians Using a Finite Set of Control Operations}, author={Pawel Wocjan, Martin Roetteler, Dominik Janzing, Thomas Beth (Universitaet Karlsruhe)}, journal={Quantum Information & Computation 2(2): 133-150, 2002}, year={2001}, archivePrefix={arXiv}, eprint={quant-ph/0109063}, primaryClass={quant-ph cs.ET} }
wocjan2001universal
arxiv-677162
quant-ph/0109068
Improved Quantum Communication Complexity Bounds for Disjointness and Equality
<|reference_start|>Improved Quantum Communication Complexity Bounds for Disjointness and Equality: We prove new bounds on the quantum communication complexity of the disjointness and equality problems. For the case of exact and non-deterministic protocols we show that these complexities are all equal to n+1, the previous best lower bound being n/2. We show this by improving a general bound for non-deterministic protocols of de Wolf. We also give an O(sqrt{n}c^{log^* n})-qubit bounded-error protocol for disjointness, modifying and improving the earlier O(sqrt{n}log n) protocol of Buhrman, Cleve, and Wigderson, and prove an Omega(sqrt{n}) lower bound for a large class of protocols that includes the BCW-protocol as well as our new protocol.<|reference_end|>
arxiv
@article{hoyer2001improved, title={Improved Quantum Communication Complexity Bounds for Disjointness and Equality}, author={Peter Hoyer (U Calgary) and Ronald de Wolf (CWI, Amsterdam)}, journal={19th Annual Symposium on Theoretical Aspects of Computer Science, STACS 2002, LNCS 2285, pp. 299-310, 2002}, year={2001}, doi={10.1007/3-540-45841-7_24}, archivePrefix={arXiv}, eprint={quant-ph/0109068}, primaryClass={quant-ph cs.CC} }
hoyer2001improved
arxiv-677163
quant-ph/0109070
On Quantum Versions of the Yao Principle
<|reference_start|>On Quantum Versions of the Yao Principle: The classical Yao principle states that the complexity R_epsilon(f) of an optimal randomized algorithm for a function f with success probability 1-epsilon equals the complexity max_mu D_epsilon^mu(f) of an optimal deterministic algorithm for f that is correct on a fraction 1-epsilon of the inputs, weighed according to the hardest distribution mu over the inputs. In this paper we investigate to what extent such a principle holds for quantum algorithms. We propose two natural candidate quantum Yao principles, a ``weak'' and a ``strong'' one. For both principles, we prove that the quantum bounded-error complexity is a lower bound on the quantum analogues of max_mu D_epsilon^mu(f). We then prove that equality cannot be obtained for the ``strong'' version, by exhibiting an exponential gap. On the other hand, as a positive result we prove that the ``weak'' version holds up to a constant factor for the query complexity of all symmetric Boolean functions.<|reference_end|>
arxiv
@article{degraaf2001on, title={On Quantum Versions of the Yao Principle}, author={Mart de Graaf and Ronald de Wolf (CWI Amsterdam)}, journal={arXiv preprint arXiv:quant-ph/0109070}, year={2001}, archivePrefix={arXiv}, eprint={quant-ph/0109070}, primaryClass={quant-ph cs.CC} }
degraaf2001on
arxiv-677164
quant-ph/0109088
Simulating Hamiltonians in Quantum Networks: Efficient Schemes and Complexity Bounds
<|reference_start|>Simulating Hamiltonians in Quantum Networks: Efficient Schemes and Complexity Bounds: We address the problem of simulating pair-interaction Hamiltonians in n node quantum networks where the subsystems have arbitrary, possibly different, dimensions. We show that any pair-interaction can be used to simulate any other by applying sequences of appropriate local control sequences. Efficient schemes for decoupling and time reversal can be constructed from orthogonal arrays. Conditions on time optimal simulation are formulated in terms of spectral majorization of matrices characterizing the coupling parameters. Moreover, we consider a specific system of n harmonic oscillators with bilinear interaction. In this case, decoupling can efficiently be achieved using the combinatorial concept of difference schemes. For this type of interactions we present optimal schemes for inversion.<|reference_end|>
arxiv
@article{wocjan2001simulating, title={Simulating Hamiltonians in Quantum Networks: Efficient Schemes and Complexity Bounds}, author={Pawel Wocjan, Martin Roetteler, Dominik Janzing, Thomas Beth (Universitaet Karlsruhe)}, journal={Phys. Rev. A 65, 042309 (2002)}, year={2001}, doi={10.1103/PhysRevA.65.042309}, archivePrefix={arXiv}, eprint={quant-ph/0109088}, primaryClass={quant-ph cs.ET} }
wocjan2001simulating
arxiv-677165
quant-ph/0110006
Quantum Certificate Verification: Single versus Multiple Quantum Certificates
<|reference_start|>Quantum Certificate Verification: Single versus Multiple Quantum Certificates: The class MA consists of languages that can be efficiently verified by classical probabilistic verifiers using a single classical certificate, and the class QMA consists of languages that can be efficiently verified by quantum verifiers using a single quantum certificate. Suppose that a verifier receives not only one but multiple certificates. In the classical setting, it is obvious that a classical verifier with multiple classical certificates is essentially the same as one with a single classical certificate. However, in the quantum setting where a quantum verifier is given a set of quantum certificates in tensor product form (i.e. each quantum certificate is not entangled with others), the situation is different, because the quantum verifier might utilize the structure of the tensor product form. This suggests a possibility of another hierarchy of complexity classes, namely the QMA hierarchy. From this point of view, we extend the definition of QMA to QMA(k) for the case quantum verifiers use k quantum certificates, and analyze the properties of QMA(k). To compare the power of QMA(2) with that of QMA(1) = QMA, we show one interesting property of ``quantum indistinguishability''. This gives strong evidence that QMA(2) is more powerful than QMA(1). Furthermore, we show that, for any fixed positive integer $k \geq 2$, if a language L has a one-sided bounded error QMA(k) protocol with a quantum verifier using k quantum certificates, L necessarily has a one-sided bounded error QMA(2) protocol with a quantum verifier using only two quantum certificates.<|reference_end|>
arxiv
@article{kobayashi2001quantum, title={Quantum Certificate Verification: Single versus Multiple Quantum Certificates}, author={Hirotada Kobayashi, Keiji Matsumoto, Tomoyuki Yamakami}, journal={arXiv preprint arXiv:quant-ph/0110006}, year={2001}, archivePrefix={arXiv}, eprint={quant-ph/0110006}, primaryClass={quant-ph cs.CC} }
kobayashi2001quantum
arxiv-677166
quant-ph/0110018
Algorithmic Information Theoretic Issues in Quantum Mechanics
<|reference_start|>Algorithmic Information Theoretic Issues in Quantum Mechanics: Setting aside the review part, a finite-cardinality set of new ideas concerning algorithmic information issues in Quantum Mechanics is introduced and analyzed.<|reference_end|>
arxiv
@article{segre2001algorithmic, title={Algorithmic Information Theoretic Issues in Quantum Mechanics}, author={Gavriel Segre}, journal={arXiv preprint arXiv:quant-ph/0110018}, year={2001}, archivePrefix={arXiv}, eprint={quant-ph/0110018}, primaryClass={quant-ph cs.CC math-ph math.MP} }
segre2001algorithmic
arxiv-677167
quant-ph/0110067
Frontier between separability and quantum entanglement in a many spin system
<|reference_start|>Frontier between separability and quantum entanglement in a many spin system: We discuss the critical point $x_c$ separating the quantum entangled and separable states in two series of N spins S in the simple mixed state characterized by the matrix operator $\rho=x|\tilde{\phi}><\tilde{\phi}| + \frac{1-x}{D^N} I_{D^N}$ where $x \in [0,1]$, $D =2S+1$, ${\bf I}_{D^N}$ is the $D^N \times D^N$ unity matrix and $|\tilde {\phi}>$ is a special entangled state. The cases x=0 and x=1 correspond respectively to fully random spins and to a fully entangled state. In the first of these series we consider special states $|\tilde{\phi}>$ invariant under charge conjugation, that generalizes the N=2 spin S=1/2 Einstein-Podolsky-Rosen state, and in the second one we consider generalizations of the Weber density matrices. The evaluation of the critical point $x_c$ was done through bounds coming from the partial transposition method of Peres and the conditional nonextensive entropy criterion. Our results suggest the conjecture that whenever the bounds coming from both methods coincide the result of $x_c$ is the exact one. The results we present are relevant for the discussion of quantum computing, teleportation and cryptography.<|reference_end|>
arxiv
@article{alcaraz2001frontier, title={Frontier between separability and quantum entanglement in a many spin system}, author={F.C. Alcaraz (Departamento de Fisica - Universidade Federal de Sao Carlos, Brazil) and C. Tsallis (Centro Brasileiro de Pesquisas Fisicas - Rio de Janeiro and Erwin Schroedinger International Institute for Mathematical Physics - Vienna)}, journal={Phys. Lett. A {\bf 301}, 105 (2002).}, year={2001}, doi={10.1016/S0375-9601(02)01037-X}, archivePrefix={arXiv}, eprint={quant-ph/0110067}, primaryClass={quant-ph cond-mat.stat-mech cs.CC} }
alcaraz2001frontier
arxiv-677168
quant-ph/0110103
Quantum entanglement and geometry of determinantal varieties
<|reference_start|>Quantum entanglement and geometry of determinantal varieties: Quantum entanglement was first recognized as a feature of quantum mechanics in the famous paper of Einstein, Podolsky and Rosen [18]. Recently it has been realized that quantum entanglement is a key ingredient in quantum computation, quantum communication and quantum cryptography ([16],[17],[6]). In this paper, we introduce algebraic sets, which are determinantal varieties in the complex projective spaces or the products of complex projective spaces, for the mixed states in bipartite or multipartite quantum systems as their invariants under local unitary transformations. These invariants arise naturally from the physical consideration of measuring mixed states by separable pure states. In this way algebraic geometry and complex differential geometry of these algebraic sets turn out to be powerful tools for the understanding of quantum entanglement. Our construction has applications in the following important topics in quantum information theory: 1) separability criterion, it is proved that the algebraic sets have to be the sum of the linear subspaces if the mixed states are separable; 2) lower bound of Schmidt numbers, that is, generic low rank bipartite mixed states are entangled in many degrees of freedom; 3) simulation of Hamiltonians, it is proved that the simulation of semi-positive Hamiltonians of the same rank implies the projective isomorphisms of the corresponding algebraic sets; 4) construction of bound entanglement, examples of the entangled mixed states which are invariant under partial transpositions (thus PPT bound entanglement) are constructed systematically from our new separability criterion. On the other hand, many examples of entangled mixed states with rich algebraic-geometric structure in their associated determinantal varieties are constructed and studied from this point of view.<|reference_end|>
arxiv
@article{chen2001quantum, title={Quantum entanglement and geometry of determinantal varieties}, author={Hao Chen}, journal={arXiv preprint arXiv:quant-ph/0110103}, year={2001}, archivePrefix={arXiv}, eprint={quant-ph/0110103}, primaryClass={quant-ph cs.IT math.AG math.IT} }
chen2001quantum
arxiv-677169
quant-ph/0110136
Quantum Algorithm for Hilbert's Tenth Problem
<|reference_start|>Quantum Algorithm for Hilbert's Tenth Problem: We explore in the framework of Quantum Computation the notion of {\em Computability}, which holds a central position in Mathematics and Theoretical Computer Science. A quantum algorithm for Hilbert's tenth problem, which is equivalent to the Turing halting problem and is known to be mathematically noncomputable, is proposed where quantum continuous variables and quantum adiabatic evolution are employed. If this algorithm could be physically implemented, as much as it is valid in principle--that is, if a certain Hamiltonian and its ground state can be physically constructed according to the proposal--quantum computability would surpass classical computability as delimited by the Church-Turing thesis. It is thus argued that computability, and with it the limits of Mathematics, ought to be determined not solely by Mathematics itself but also by Physical Principles.<|reference_end|>
arxiv
@article{kieu2001quantum, title={Quantum Algorithm for Hilbert's Tenth Problem}, author={Tien D Kieu}, journal={Int.J.Theor.Phys. 42 (2003) 1461-1478}, year={2001}, archivePrefix={arXiv}, eprint={quant-ph/0110136}, primaryClass={quant-ph cs.LO hep-th math.LO math.NT} }
kieu2001quantum
arxiv-677170
quant-ph/0111038
Discrete Cosine Transforms on Quantum Computers
<|reference_start|>Discrete Cosine Transforms on Quantum Computers: A classical computer does not allow to calculate a discrete cosine transform on N points in less than linear time. This trivial lower bound is no longer valid for a computer that takes advantage of quantum mechanical superposition, entanglement, and interference principles. In fact, we show that it is possible to realize the discrete cosine transforms and the discrete sine transforms of size NxN and types I,II,III, and IV with as little as O(log^2 N) operations on a quantum computer, whereas the known fast algorithms on a classical computer need O(N log N) operations.<|reference_end|>
arxiv
@article{klappenecker2001discrete, title={Discrete Cosine Transforms on Quantum Computers}, author={Andreas Klappenecker (Texas A&M University), Martin Roetteler (Universitaet Karlsruhe)}, journal={IEEE R8-EURASIP Symposium on Image and Signal Processing and Analysis, pp. 464-468, Pula, Croatia, 2001}, year={2001}, archivePrefix={arXiv}, eprint={quant-ph/0111038}, primaryClass={quant-ph cs.ET} }
klappenecker2001discrete
arxiv-677171
quant-ph/0111039
On the Irresistible Efficiency of Signal Processing Methods in Quantum Computing
<|reference_start|>On the Irresistible Efficiency of Signal Processing Methods in Quantum Computing: We show that many well-known signal transforms allow highly efficient realizations on a quantum computer. We explain some elementary quantum circuits and review the construction of the Quantum Fourier Transform. We derive quantum circuits for the Discrete Cosine and Sine Transforms, and for the Discrete Hartley transform. We show that at most O(log^2 N) elementary quantum gates are necessary to implement any of those transforms for input sequences of length N.<|reference_end|>
arxiv
@article{klappenecker2001on, title={On the Irresistible Efficiency of Signal Processing Methods in Quantum Computing}, author={Andreas Klappenecker (Texas A&M University), Martin Roetteler (Universitaet Karlsruhe)}, journal={Proceedings of the First International Workshop on Spectral Techniques and Logic Design (SPECLOG 2000), pp. 483-497, 2000}, year={2001}, archivePrefix={arXiv}, eprint={quant-ph/0111039}, primaryClass={quant-ph cs.ET} }
klappenecker2001on
arxiv-677172
quant-ph/0111097
A proposal for founding mistrustful quantum cryptography on coin tossing
<|reference_start|>A proposal for founding mistrustful quantum cryptography on coin tossing: A significant branch of classical cryptography deals with the problems which arise when mistrustful parties need to generate, process or exchange information. As Kilian showed a while ago, mistrustful classical cryptography can be founded on a single protocol, oblivious transfer, from which general secure multi-party computations can be built. The scope of mistrustful quantum cryptography is limited by no-go theorems, which rule out, inter alia, unconditionally secure quantum protocols for oblivious transfer or general secure two-party computations. These theorems apply even to protocols which take relativistic signalling constraints into account. The best that can be hoped for, in general, are quantum protocols computationally secure against quantum attack. I describe here a method for building a classically certified bit commitment, and hence every other mistrustful cryptographic task, from a secure coin tossing protocol. No security proof is attempted, but I sketch reasons why these protocols might resist quantum computational attack.<|reference_end|>
arxiv
@article{kent2001a, title={A proposal for founding mistrustful quantum cryptography on coin tossing}, author={Adrian Kent}, journal={Phys. Rev. A 68, 012312 (2003).}, year={2001}, doi={10.1103/PhysRevA.68.012312}, archivePrefix={arXiv}, eprint={quant-ph/0111097}, primaryClass={quant-ph cs.CR} }
kent2001a
arxiv-677173
quant-ph/0111099
Quantum Bit String Commitment
<|reference_start|>Quantum Bit String Commitment: A bit string commitment protocol securely commits $N$ classical bits in such a way that the recipient can extract only $M<N$ bits of information about the string. Classical reasoning might suggest that bit string commitment implies bit commitment and hence, given the Mayers-Lo-Chau theorem, that non-relativistic quantum bit string commitment is impossible. Not so: there exist non-relativistic quantum bit string commitment protocols, with security parameters $\epsilon$ and $M$, that allow $A$ to commit $N = N(M, \epsilon)$ bits to $B$ so that $A$'s probability of successfully cheating when revealing any bit and $B$'s probability of extracting more than $N'=N-M$ bits of information about the $N$ bit string before revelation are both less than $\epsilon$. With a slightly weakened but still restrictive definition of security against $A$, $N$ can be taken to be $O(\exp (C N'))$ for a positive constant $C$. I briefly discuss possible applications.<|reference_end|>
arxiv
@article{kent2001quantum, title={Quantum Bit String Commitment}, author={Adrian Kent (Centre for Quantum Computation, University of Cambridge)}, journal={Phys. Rev. Lett. 90 237901 (2003)}, year={2001}, doi={10.1103/PhysRevLett.90.237901}, archivePrefix={arXiv}, eprint={quant-ph/0111099}, primaryClass={quant-ph cs.CR} }
kent2001quantum
arxiv-677174
quant-ph/0111102
Quantum Lower Bound for the Collision Problem
<|reference_start|>Quantum Lower Bound for the Collision Problem: The collision problem is to decide whether a function X:{1,..,n}->{1,..,n} is one-to-one or two-to-one, given that one of these is the case. We show a lower bound of Omega(n^{1/5}) on the number of queries needed by a quantum computer to solve this problem with bounded error probability. The best known upper bound is O(n^{1/3}), but obtaining any lower bound better than Omega(1) was an open problem since 1997. Our proof uses the polynomial method augmented by some new ideas. We also give a lower bound of Omega(n^{1/7}) for the problem of deciding whether two sets are equal or disjoint on a constant fraction of elements. Finally we give implications of these results for quantum complexity theory.<|reference_end|>
arxiv
@article{aaronson2001quantum, title={Quantum Lower Bound for the Collision Problem}, author={Scott Aaronson}, journal={arXiv preprint arXiv:quant-ph/0111102}, year={2001}, archivePrefix={arXiv}, eprint={quant-ph/0111102}, primaryClass={quant-ph cs.CC} }
aaronson2001quantum
arxiv-677175
quant-ph/0201007
A lower bound on the quantum query complexity of read-once functions
<|reference_start|>A lower bound on the quantum query complexity of read-once functions: We establish a lower bound of $\Omega(\sqrt{n})$ on the bounded-error quantum query complexity of read-once Boolean functions, providing evidence for the conjecture that $\Omega(\sqrt{D(f)})$ is a lower bound for all Boolean functions. Our technique extends a result of Ambainis, based on the idea that successful computation of a function requires ``decoherence'' of initially coherently superposed inputs in the query register, having different values of the function. The number of queries is bounded by comparing the required total amount of decoherence of a judiciously selected set of input-output pairs to an upper bound on the amount achievable in a single query step. We use an extension of this result to general weights on input pairs, and general superpositions of inputs.<|reference_end|>
arxiv
@article{barnum2002a, title={A lower bound on the quantum query complexity of read-once functions}, author={Howard Barnum, Michael Saks}, journal={arXiv preprint arXiv:quant-ph/0201007}, year={2002}, archivePrefix={arXiv}, eprint={quant-ph/0201007}, primaryClass={quant-ph cs.CC} }
barnum2002a
arxiv-677176
quant-ph/0201082
Quantum Computers and Quantum Computer Languages: Quantum Assembly Language and Quantum C Language
<|reference_start|>Quantum Computers and Quantum Computer Languages: Quantum Assembly Language and Quantum C Language: We show that a representation of Quantum Computers defines Quantum Turing Machines with associated Quantum Grammars. We then create examples of Quantum Grammars. Lastly, we develop an algebraic approach to high level Quantum Languages using Quantum Assembly language and Quantum C language as examples.<|reference_end|>
arxiv
@article{blaha2002quantum, title={Quantum Computers and Quantum Computer Languages: Quantum Assembly Language and Quantum C Language}, author={Stephen Blaha}, journal={arXiv preprint arXiv:quant-ph/0201082}, year={2002}, archivePrefix={arXiv}, eprint={quant-ph/0201082}, primaryClass={quant-ph cs.PL} }
blaha2002quantum
arxiv-677177
quant-ph/0202015
Semiclassical Neural Network
<|reference_start|>Semiclassical Neural Network: We have constructed a simple semiclassical model of a neural network where neurons have quantum links with one another in a chosen way and affect one another in a fashion analogous to action potentials. We have examined the role of stochasticity introduced by the quantum potential and compared the system with the classical system of an integrate-and-fire model by Hopfield. Average periodicity and short-term retentivity of input memory are noted.<|reference_end|>
arxiv
@article{shafee2002semiclassical, title={Semiclassical Neural Network}, author={Fariel Shafee}, journal={Stochastics and Dynamics, Vol. 7, No. 3 (2007) 403-416}, year={2002}, archivePrefix={arXiv}, eprint={quant-ph/0202015}, primaryClass={quant-ph cond-mat.dis-nn cs.AI q-bio} }
shafee2002semiclassical
arxiv-677178
quant-ph/0202016
Neural Networks with c-NOT Gated Nodes
<|reference_start|>Neural Networks with c-NOT Gated Nodes: We try to design a quantum neural network with qubits instead of classical neurons with deterministic states, and also with quantum operators replacing the classical action potentials. With our choice of gates interconnecting the neural lattice, it appears that the state of the system behaves in ways reflecting both the strengths of coupling between neurons as well as initial conditions. We find that depending on whether there is a threshold for emission from the excited to the ground state, the system shows either aperiodic oscillations or coherent ones with periodicity depending on the strength of coupling.<|reference_end|>
arxiv
@article{shafee2002neural, title={Neural Networks with c-NOT Gated Nodes}, author={Fariel Shafee}, journal={arXiv preprint arXiv:quant-ph/0202016}, year={2002}, archivePrefix={arXiv}, eprint={quant-ph/0202016}, primaryClass={quant-ph cond-mat.dis-nn cs.AI q-bio} }
shafee2002neural
arxiv-677179
quant-ph/0203010
Entangled Quantum Networks
<|reference_start|>Entangled Quantum Networks: We present some results from simulation of a network of nodes connected by c-NOT gates with nearest neighbors. Though initially we begin with pure states of varying boundary conditions, the updating with time quickly produces a complicated entanglement involving all or most nodes. Because a normal c-NOT gate, though unitary for a single pair of nodes, seems not to be so when used naively in a network, we use a manifestly unitary form of the transition matrix with c?-NOT gates, which invert the phase as well as flip the qubit. This leads to complete entanglement of the net, but with variable coefficients for the different components of the superposition. It is interesting to note that by a simple logical back projection the original input state can be recovered in most cases. We also prove that it is not possible for a sequence of unitary operators working on a net to make it move from an aperiodic regime to a periodic one, unlike some classical cases where phase-locking happens in the course of evolution. However, we show that it is possible to introduce by hand periodic orbits to sets of initial states, which may be useful in forming dynamic pattern recognition systems.<|reference_end|>
arxiv
@article{shafee2002entangled, title={Entangled Quantum Networks}, author={Fariel Shafee}, journal={Microelectronics Journal, 2006}, year={2002}, archivePrefix={arXiv}, eprint={quant-ph/0203010}, primaryClass={quant-ph cond-mat.dis-nn cs.AI} }
shafee2002entangled
arxiv-677180
quant-ph/0203019
Unpredictability of wave function's evolution in nonintegrable quantum systems
<|reference_start|>Unpredictability of wave function's evolution in nonintegrable quantum systems: It is shown that the evolution of wave functions in nonintegrable quantum systems is unpredictable for a long time T because of the rapid growth of the number of elementary computational operations $\mathcal O(T)\sim T^\alpha$. On the other hand, the evolution of wave functions in integrable systems can be predicted by fast algorithms $\mathcal O(T)\sim (\log_2 T)^\beta$ for logarithmically short time and thus there is an algorithmic "compressibility" of their dynamics. The difference between integrable and nonintegrable systems in our approach looks identical for classical and quantum systems. Therefore the minimal number of bit operations $\mathcal O(T)$ needed to predict a state of the system for a time interval T can be used as a universal sign of chaos.<|reference_end|>
arxiv
@article{ivanov2002unpredictability, title={Unpredictability of wave function's evolution in nonintegrable quantum systems}, author={I. B. Ivanov}, journal={arXiv preprint arXiv:quant-ph/0203019}, year={2002}, archivePrefix={arXiv}, eprint={quant-ph/0203019}, primaryClass={quant-ph cs.CC nlin.CD} }
ivanov2002unpredictability
arxiv-677181
quant-ph/0203034
Computing the Noncomputable
<|reference_start|>Computing the Noncomputable: We explore in the framework of Quantum Computation the notion of computability, which holds a central position in Mathematics and Theoretical Computer Science. A quantum algorithm that exploits quantum adiabatic processes is considered for Hilbert's tenth problem, which is equivalent to the Turing halting problem and known to be mathematically noncomputable. Generalised quantum algorithms are also considered for some other mathematical noncomputables in the same and in different noncomputability classes. The key element of all these algorithms is the measurability of both the values of physical observables and of the quantum-mechanical probability distributions for these values. It is argued that computability, and thus the limits of Mathematics, ought to be determined not solely by Mathematics itself but also by physical principles.<|reference_end|>
arxiv
@article{kieu2002computing, title={Computing the Noncomputable}, author={Tien D. Kieu}, journal={Contemporary Physics 44 (2003) 51- 71}, year={2002}, doi={10.1080/00107510302712}, archivePrefix={arXiv}, eprint={quant-ph/0203034}, primaryClass={quant-ph cs.LO math.LO} }
kieu2002computing
arxiv-677182
quant-ph/0203105
The capacity of hybrid quantum memory
<|reference_start|>The capacity of hybrid quantum memory: The general stable quantum memory unit is a hybrid consisting of a classical digit with a quantum digit (qudit) assigned to each classical state. The shape of the memory is the vector of sizes of these qudits, which may differ. We determine when N copies of a quantum memory A embed in N(1+o(1)) copies of another quantum memory B. This relationship captures the notion that B is as at least as useful as A for all purposes in the bulk limit. We show that the embeddings exist if and only if for all p >= 1, the p-norm of the shape of A does not exceed the p-norm of the shape of B. The log of the p-norm of the shape of A can be interpreted as the maximum of S(\rho) + H(\rho)/p (quantum entropy plus discounted classical entropy) taken over all mixed states \rho on A. We also establish a noiseless coding theorem that justifies these entropies. The noiseless coding theorem and the bulk embedding theorem together say that either A blindly bulk-encodes into B with perfect fidelity, or A admits a state that does not visibly bulk-encode into B with high fidelity. In conclusion, the utility of a hybrid quantum memory is determined by its simultaneous capacity for classical and quantum entropy, which is not a finite list of numbers, but rather a convex region in the classical-quantum entropy plane.<|reference_end|>
arxiv
@article{kuperberg2002the, title={The capacity of hybrid quantum memory}, author={Greg Kuperberg (UC Davis)}, journal={IEEE Trans. Inform. Theory 49 (2003), 1465-1473}, year={2002}, archivePrefix={arXiv}, eprint={quant-ph/0203105}, primaryClass={quant-ph cs.IT math-ph math.IT math.MP math.OA} }
kuperberg2002the
arxiv-677183
quant-ph/0204010
Quantum Optimization Problems
<|reference_start|>Quantum Optimization Problems: Krentel [J. Comput. System. Sci., 36, pp. 490--509] presented a framework for an NP optimization problem that searches for an optimal value among exponentially many outcomes of polynomial-time computations. This paper expands his framework to a quantum optimization problem using polynomial-time quantum computations and introduces the notion of a ``universal'' quantum optimization problem similar to a classical ``complete'' optimization problem. We exhibit a canonical quantum optimization problem that is universal for the class of polynomial-time quantum optimization problems. We show in a certain relativized world that all quantum optimization problems cannot be approximated closely by quantum polynomial-time computations. We also study the complexity of quantum optimization problems in connection to well-known complexity classes.<|reference_end|>
arxiv
@article{yamakami2002quantum, title={Quantum Optimization Problems}, author={Tomoyuki Yamakami}, journal={arXiv preprint arXiv:quant-ph/0204010}, year={2002}, archivePrefix={arXiv}, eprint={quant-ph/0204010}, primaryClass={quant-ph cs.CC} }
yamakami2002quantum
arxiv-677184
quant-ph/0204022
A New Protocol and Lower Bounds for Quantum Coin Flipping
<|reference_start|>A New Protocol and Lower Bounds for Quantum Coin Flipping: We present a new protocol and two lower bounds for quantum coin flipping. In our protocol, no dishonest party can achieve one outcome with probability more than 0.75. Then, we show that our protocol is optimal for a certain type of quantum protocols. For arbitrary quantum protocols, we show that if a protocol achieves a bias of at most $\epsilon$, it must use at least $\Omega(\log \log \frac{1}{\epsilon})$ rounds of communication. This implies that the parallel repetition fails for quantum coin flipping. (The bias of a protocol cannot be arbitrarily decreased by running several copies of it in parallel.)<|reference_end|>
arxiv
@article{ambainis2002a, title={A New Protocol and Lower Bounds for Quantum Coin Flipping}, author={Andris Ambainis}, journal={Journal of Computer and System Sciences, 68(2): 398-416, 2004.}, year={2002}, archivePrefix={arXiv}, eprint={quant-ph/0204022}, primaryClass={quant-ph cs.CR} }
ambainis2002a
arxiv-677185
quant-ph/0204063
Lower bound for a class of weak quantum coin flipping protocols
<|reference_start|>Lower bound for a class of weak quantum coin flipping protocols: We study the class of protocols for weak quantum coin flipping introduced by Spekkens and Rudolph (quant-ph/0202118). We show that, for any protocol in this class, one party can win the coin flip with probability at least $1/\sqrt{2}$.<|reference_end|>
arxiv
@article{ambainis2002lower, title={Lower bound for a class of weak quantum coin flipping protocols}, author={Andris Ambainis}, journal={arXiv preprint arXiv:quant-ph/0204063}, year={2002}, archivePrefix={arXiv}, eprint={quant-ph/0204063}, primaryClass={quant-ph cs.CR} }
ambainis2002lower
arxiv-677186
quant-ph/0205083
Quantum Random Walks Hit Exponentially Faster
<|reference_start|>Quantum Random Walks Hit Exponentially Faster: We show that the hitting time of the discrete time quantum random walk on the n-bit hypercube from one corner to its opposite is polynomial in n. This gives the first exponential quantum-classical gap in the hitting time of discrete quantum random walks. We provide the framework for quantum hitting time and give two alternative definitions to set the ground for its study on general graphs. We then give an application to random routing.<|reference_end|>
arxiv
@article{kempe2002quantum, title={Quantum Random Walks Hit Exponentially Faster}, author={Julia Kempe}, journal={Probability Theory and Related Fields, Vol. 133(2), p. 215-235 (2005), conference version in Proc. 7th RANDOM, p. 354-69, 2003}, year={2002}, archivePrefix={arXiv}, eprint={quant-ph/0205083}, primaryClass={quant-ph cs.CC} }
kempe2002quantum
arxiv-677187
quant-ph/0205128
Authentication of Quantum Messages
<|reference_start|>Authentication of Quantum Messages: Authentication is a well-studied area of classical cryptography: a sender S and a receiver R sharing a classical private key want to exchange a classical message with the guarantee that the message has not been modified by any third party with control of the communication line. In this paper we define and investigate the authentication of messages composed of quantum states. Assuming S and R have access to an insecure quantum channel and share a private, classical random key, we provide a non-interactive scheme that enables S both to encrypt and to authenticate (with unconditional security) an m qubit message by encoding it into m+s qubits, where the failure probability decreases exponentially in the security parameter s. The classical private key is 2m+O(s) bits. To achieve this, we give a highly efficient protocol for testing the purity of shared EPR pairs. We also show that any scheme to authenticate quantum messages must also encrypt them. (In contrast, one can authenticate a classical message while leaving it publicly readable.) This has two important consequences: On one hand, it allows us to give a lower bound of 2m key bits for authenticating m qubits, which makes our protocol asymptotically optimal. On the other hand, we use it to show that digitally signing quantum states is impossible, even with only computational security.<|reference_end|>
arxiv
@article{barnum2002authentication, title={Authentication of Quantum Messages}, author={Howard Barnum, Claude Crepeau, Daniel Gottesman, Adam Smith and Alain Tapp}, journal={Proc. 43rd Annual IEEE Symposium on the Foundations of Computer Science (FOCS '02), pp. 449-458. IEEE Press, 2002.}, year={2002}, doi={10.1109/SFCS.2002.1181969}, archivePrefix={arXiv}, eprint={quant-ph/0205128}, primaryClass={quant-ph cs.CR} }
barnum2002authentication
arxiv-677188
quant-ph/0205133
Adaptive Quantum Computation, Constant Depth Quantum Circuits and Arthur-Merlin Games
<|reference_start|>Adaptive Quantum Computation, Constant Depth Quantum Circuits and Arthur-Merlin Games: We present evidence that there exist quantum computations that can be carried out in constant depth, using 2-qubit gates, that cannot be simulated classically with high accuracy. We prove that if one can simulate these circuits classically efficiently then the complexity class BQP is contained in AM.<|reference_end|>
arxiv
@article{terhal2002adaptive, title={Adaptive Quantum Computation, Constant Depth Quantum Circuits and Arthur-Merlin Games}, author={Barbara M. Terhal and David P. DiVincenzo}, journal={Quant. Inf. Comp. Vol. 4 (No. 2), pages 134--145 (2004)}, year={2002}, archivePrefix={arXiv}, eprint={quant-ph/0205133}, primaryClass={quant-ph cs.CC} }
terhal2002adaptive
arxiv-677189
quant-ph/0205161
Contextualizing Concepts using a Mathematical Generalization of the Quantum Formalism
<|reference_start|>Contextualizing Concepts using a Mathematical Generalization of the Quantum Formalism: We outline the rationale and preliminary results of using the State Context Property (SCOP) formalism, originally developed as a generalization of quantum mechanics, to describe the contextual manner in which concepts are evoked, used, and combined to generate meaning. The quantum formalism was developed to cope with problems arising in the description of (1) the measurement process, and (2) the generation of new states with new properties when particles become entangled. Similar problems arising with concepts motivated the formal treatment introduced here. Concepts are viewed not as fixed representations, but as entities existing in states of potentiality that require interaction with a context--a stimulus or another concept--to 'collapse' to an instantiated form (e.g. exemplar, prototype, or other possibly imaginary instance). The stimulus situation plays the role of the measurement in physics, acting as context that induces a change of the cognitive state from a superposition state to a collapsed state. The collapsed state is more likely to consist of a conjunction of concepts for associative than for analytic thought because more stimulus or concept properties take part in the collapse. We provide two contextual measures of conceptual distance--one using collapse probabilities and the other weighted properties--and show how they can be applied to conjunctions using the pet fish problem.<|reference_end|>
arxiv
@article{gabora2002contextualizing, title={Contextualizing Concepts using a Mathematical Generalization of the Quantum Formalism}, author={Liane Gabora and Diederik Aerts}, journal={Journal of Experimental and Theoretical Artificial Intelligence, 14, pp. 327-358 (2002)}, year={2002}, archivePrefix={arXiv}, eprint={quant-ph/0205161}, primaryClass={quant-ph cs.AI q-bio.NC} }
gabora2002contextualizing
arxiv-677190
quant-ph/0206122
On communication over an entanglement-assisted quantum channel
<|reference_start|>On communication over an entanglement-assisted quantum channel: Shared entanglement is a resource available to parties communicating over a quantum channel, much akin to public coins in classical communication protocols. Whereas shared randomness does not help in the transmission of information, or significantly reduce the classical complexity of computing functions (as compared to private-coin protocols), shared entanglement leads to startling phenomena such as ``quantum teleportation'' and ``superdense coding.'' The problem of characterising the power of prior entanglement has puzzled many researchers. In this paper, we revisit the problem of transmitting classical bits over an entanglement-assisted quantum channel. We derive a new, optimal bound on the number of quantum bits required for this task, for any given probability of error. All known lower bounds in the setting of bounded error entanglement-assisted communication are based on sophisticated information theoretic arguments. In contrast, our result is derived from first principles, using a simple linear algebraic technique.<|reference_end|>
arxiv
@article{nayak2002on, title={On communication over an entanglement-assisted quantum channel}, author={Ashwin Nayak (Caltech) and Julia Salzman (Princeton)}, journal={arXiv preprint arXiv:quant-ph/0206122}, year={2002}, archivePrefix={arXiv}, eprint={quant-ph/0206122}, primaryClass={quant-ph cs.CC} }
nayak2002on
arxiv-677191
quant-ph/0206123
On bit-commitment based quantum coin flipping
<|reference_start|>On bit-commitment based quantum coin flipping: In this paper, we focus on a special framework for quantum coin flipping protocols, _bit-commitment based protocols_, within which almost all known protocols fit. We show a lower bound of 1/16 for the bias in any such protocol. We also analyse a sequence of multi-round protocols that tries to overcome the drawbacks of the previously proposed protocols, in order to lower the bias. We show an intricate cheating strategy for this sequence, which leads to a bias of 1/4. This indicates that a bias of 1/4 might be optimal in such protocols, and also demonstrates that a cleverer proof technique may be required to show this optimality.<|reference_end|>
arxiv
@article{nayak2002on, title={On bit-commitment based quantum coin flipping}, author={Ashwin Nayak (Caltech) and Peter Shor (AT&T Labs-Research)}, journal={arXiv preprint arXiv:quant-ph/0206123}, year={2002}, doi={10.1103/PhysRevA.67.012304}, number={Caltech CS TR 2002.004}, archivePrefix={arXiv}, eprint={quant-ph/0206123}, primaryClass={quant-ph cs.CR} }
nayak2002on
arxiv-677192
quant-ph/0207069
Data compression limit for an information source of interacting qubits
<|reference_start|>Data compression limit for an information source of interacting qubits: A system of interacting qubits can be viewed as a non-i.i.d. quantum information source. A possible model of such a source is provided by a quantum spin system, in which spin-1/2 particles located at sites of a lattice interact with each other. We establish the limit for the compression of information from such a source and show that asymptotically it is given by the von Neumann entropy rate. Our result can be viewed as a quantum analog of Shannon's noiseless coding theorem for a class of non-i.i.d. quantum information sources.<|reference_end|>
arxiv
@article{datta2002data, title={Data compression limit for an information source of interacting qubits}, author={Nilanjana Datta and Yuri Suhov}, journal={arXiv preprint arXiv:quant-ph/0207069}, year={2002}, archivePrefix={arXiv}, eprint={quant-ph/0207069}, primaryClass={quant-ph cs.IT math-ph math.IT math.MP} }
datta2002data
arxiv-677193
quant-ph/0207131
Efficient Quantum Algorithms for Estimating Gauss Sums
<|reference_start|>Efficient Quantum Algorithms for Estimating Gauss Sums: We present an efficient quantum algorithm for estimating Gauss sums over finite fields and finite rings. This is a natural problem as the description of a Gauss sum can be done without reference to a black box function. With a reduction from the discrete logarithm problem to Gauss sum estimation we also give evidence that this problem is hard for classical algorithms. The workings of the quantum algorithm rely on the interaction between the additive characters of the Fourier transform and the multiplicative characters of the Gauss sum.<|reference_end|>
arxiv
@article{vandam2002efficient, title={Efficient Quantum Algorithms for Estimating Gauss Sums}, author={Wim van Dam (HP, MSRI, UC Berkeley) and Gadiel Seroussi (HP)}, journal={arXiv preprint arXiv:quant-ph/0207131}, year={2002}, number={HPL-2002-208}, archivePrefix={arXiv}, eprint={quant-ph/0207131}, primaryClass={quant-ph cs.DM math.NT} }
vandam2002efficient
arxiv-677194
quant-ph/0207158
Non-Interactive Quantum Statistical and Perfect Zero-Knowledge
<|reference_start|>Non-Interactive Quantum Statistical and Perfect Zero-Knowledge: This paper introduces quantum analogues of non-interactive perfect and statistical zero-knowledge proof systems. Similar to the classical cases, it is shown that sharing randomness or entanglement is necessary for non-trivial protocols of non-interactive quantum perfect and statistical zero-knowledge. It is also shown that, with shared EPR pairs a priori, the class of languages having one-sided bounded error non-interactive quantum perfect zero-knowledge proof systems has a natural complete problem. Non-triviality of such a proof system is based on the fact, proved in this paper, that the Graph Non-Automorphism problem, which is not known to be in BQP, can be reduced to our complete problem. Our results may be the first non-trivial quantum zero-knowledge proofs secure even against dishonest quantum verifiers, since our protocols are non-interactive, and thus the zero-knowledge property does not depend on whether the verifier in the protocol is honest or not. A restricted version of our complete problem yields a natural complete problem for BQP.<|reference_end|>
arxiv
@article{kobayashi2002non-interactive, title={Non-Interactive Quantum Statistical and Perfect Zero-Knowledge}, author={Hirotada Kobayashi}, journal={arXiv preprint arXiv:quant-ph/0207158}, year={2002}, archivePrefix={arXiv}, eprint={quant-ph/0207158}, primaryClass={quant-ph cs.CC} }
kobayashi2002non-interactive
arxiv-677195
quant-ph/0208043
Quantum Circuits with Unbounded Fan-out
<|reference_start|>Quantum Circuits with Unbounded Fan-out: We demonstrate that the unbounded fan-out gate is very powerful. Constant-depth polynomial-size quantum circuits with bounded fan-in and unbounded fan-out over a fixed basis (denoted by QNCf^0) can approximate with polynomially small error the following gates: parity, mod[q], And, Or, majority, threshold[t], exact[q], and Counting. Classically, we need logarithmic depth even if we can use unbounded fan-in gates. If we allow arbitrary one-qubit gates instead of a fixed basis, then these circuits can also be made exact in log-star depth. Sorting, arithmetical operations, phase estimation, and the quantum Fourier transform with arbitrary moduli can also be approximated in constant depth.<|reference_end|>
arxiv
@article{hoyer2002quantum, title={Quantum Circuits with Unbounded Fan-out}, author={Peter Hoyer (U Calgary) and Robert Spalek (CWI)}, journal={Theory of Computing, 1(5):81-103, 2005}, year={2002}, doi={10.4086/toc.2005.v001a005}, archivePrefix={arXiv}, eprint={quant-ph/0208043}, primaryClass={quant-ph cs.CC} }
hoyer2002quantum
arxiv-677196
quant-ph/0208062
Exponential Lower Bound for 2-Query Locally Decodable Codes via a Quantum Argument
<|reference_start|>Exponential Lower Bound for 2-Query Locally Decodable Codes via a Quantum Argument: A locally decodable code encodes n-bit strings x in m-bit codewords C(x), in such a way that one can recover any bit x_i from a corrupted codeword by querying only a few bits of that word. We use a quantum argument to prove that LDCs with 2 classical queries need exponential length: m=2^{Omega(n)}. Previously this was known only for linear codes (Goldreich et al. 02). Our proof shows that a 2-query LDC can be decoded with only 1 quantum query, and then proves an exponential lower bound for such 1-query locally quantum-decodable codes. We also show that q quantum queries allow more succinct LDCs than the best known LDCs with q classical queries. Finally, we give new classical lower bounds and quantum upper bounds for the setting of private information retrieval. In particular, we exhibit a quantum 2-server PIR scheme with O(n^{3/10}) qubits of communication, improving upon the O(n^{1/3}) bits of communication of the best known classical 2-server PIR.<|reference_end|>
arxiv
@article{kerenidis2002exponential, title={Exponential Lower Bound for 2-Query Locally Decodable Codes via a Quantum Argument}, author={Iordanis Kerenidis (UC Berkeley) and Ronald de Wolf (CWI, Amsterdam)}, journal={arXiv preprint arXiv:quant-ph/0208062}, year={2002}, archivePrefix={arXiv}, eprint={quant-ph/0208062}, primaryClass={quant-ph cs.CC} }
kerenidis2002exponential
arxiv-677197
quant-ph/0208130
Engineering Functional Quantum Algorithms
<|reference_start|>Engineering Functional Quantum Algorithms: Suppose that a quantum circuit with K elementary gates is known for a unitary matrix U, and assume that U^m is a scalar matrix for some positive integer m. We show that a function of U can be realized on a quantum computer with at most O(mK+m^2log m) elementary gates. The functions of U are realized by a generic quantum circuit, which has a particularly simple structure. Among other results, we obtain efficient circuits for the fractional Fourier transform.<|reference_end|>
arxiv
@article{klappenecker2002engineering, title={Engineering Functional Quantum Algorithms}, author={Andreas Klappenecker (Texas A&M University) and Martin Roetteler (Universitaet Karlsruhe)}, journal={Physical Review A, 67, 010302, 2003}, year={2002}, doi={10.1103/PhysRevA.67.010302}, archivePrefix={arXiv}, eprint={quant-ph/0208130}, primaryClass={quant-ph cs.ET} }
klappenecker2002engineering
arxiv-677198
quant-ph/0209060
Quantum Lower Bound for Recursive Fourier Sampling
<|reference_start|>Quantum Lower Bound for Recursive Fourier Sampling: One of the earliest quantum algorithms was discovered by Bernstein and Vazirani, for a problem called Recursive Fourier Sampling. This paper shows that the Bernstein-Vazirani algorithm is not far from optimal. The moral is that the need to "uncompute" garbage can impose a fundamental limit on efficient quantum computation. The proof introduces a new parameter of Boolean functions called the "nonparity coefficient," which might be of independent interest.<|reference_end|>
arxiv
@article{aaronson2002quantum, title={Quantum Lower Bound for Recursive Fourier Sampling}, author={Scott Aaronson}, journal={Quantum Information and Computation 3(2):165-174, 2003}, year={2002}, archivePrefix={arXiv}, eprint={quant-ph/0209060}, primaryClass={quant-ph cs.CC} }
aaronson2002quantum
arxiv-677199
quant-ph/0210020
Quantum Certificate Complexity
<|reference_start|>Quantum Certificate Complexity: Given a Boolean function f, we study two natural generalizations of the certificate complexity C(f): the randomized certificate complexity RC(f) and the quantum certificate complexity QC(f). Using Ambainis' adversary method, we exactly characterize QC(f) as the square root of RC(f). We then use this result to prove the new relation R0(f) = O(Q2(f)^2 Q0(f) log n) for total f, where R0, Q2, and Q0 are zero-error randomized, bounded-error quantum, and zero-error quantum query complexities respectively. Finally we give asymptotic gaps between the measures, including a total f for which C(f) is superquadratic in QC(f), and a symmetric partial f for which QC(f) = O(1) yet Q2(f) = Omega(n/log n).<|reference_end|>
arxiv
@article{aaronson2002quantum, title={Quantum Certificate Complexity}, author={Scott Aaronson}, journal={arXiv preprint arXiv:quant-ph/0210020}, year={2002}, archivePrefix={arXiv}, eprint={quant-ph/0210020}, primaryClass={quant-ph cs.CC} }
aaronson2002quantum
arxiv-677200
quant-ph/0210064
A Quantum Random Walk Search Algorithm
<|reference_start|>A Quantum Random Walk Search Algorithm: Quantum random walks on graphs have been shown to display many interesting properties, including exponentially fast hitting times when compared with their classical counterparts. However, it is still unclear how to use these novel properties to gain an algorithmic speed-up over classical algorithms. In this paper, we present a quantum search algorithm based on the quantum random walk architecture that provides such a speed-up. It will be shown that this algorithm performs an oracle search on a database of $N$ items with $O(\sqrt{N})$ calls to the oracle, yielding a speed-up similar to other quantum search algorithms. It appears that the quantum random walk formulation has considerable flexibility, presenting interesting opportunities for development of other, possibly novel quantum algorithms.<|reference_end|>
arxiv
@article{shenvi2002a, title={A Quantum Random Walk Search Algorithm}, author={Neil Shenvi, Julia Kempe, and K. Birgitta Whaley}, journal={Phys. Rev. A, Vol. 67 (5), 052307 (2003)}, year={2002}, doi={10.1103/PhysRevA.67.052307}, archivePrefix={arXiv}, eprint={quant-ph/0210064}, primaryClass={quant-ph cs.DS} }
shenvi2002a