12,412
I'd like to understand Applicative in terms of category theory. The documentation for Applicative says that it's a strong lax monoidal functor. First, the Wikipedia page about monoidal functors says that a monoidal functor is either lax or strong. So it seems to me that either one of the sources is wrong, or they use the terms differently. Can anybody explain that? Second, what are the monoidal categories of which Applicative functors are monoidal functors? I assume that the functors are endofunctors on the standard Haskell category (objects = types, morphisms = functions), but I have no idea what the monoidal structure on this category is. Thanks for the help.
There are actually two uses of the word "strength" in play here. A strong endofunctor $F : C \to C$ over a monoidal category $(C, \otimes, I)$ is one which comes with a natural transformation $\sigma : A \otimes F(B) \to F(A \otimes B)$, satisfying some coherence conditions with respect to the associator which I will gloss over. This condition is sometimes also pronounced "$F$ has a strength". A lax monoidal functor $F : C \to D$ is a functor between two monoidal categories $(C, \otimes, I)$ and $(D, \oplus, J)$ with natural transformations $\phi : F(A) \oplus F(B) \to F(A \otimes B)$ and $i : J \to F(I)$, again satisfying a coherence condition with respect to the associators. A strong monoidal functor $F : C \to D$ is one in which $\phi$ and $i$ are natural isomorphisms. That is, $F(A \otimes B) \simeq F(A) \oplus F(B)$, with $\phi$ and its inverse describing the isomorphism. An applicative functor, in the sense of Haskell programs, is a lax monoidal endofunctor with a strength, with the monoidal structure in question being the Cartesian product. So this is why you get the paradoxical-sounding term "strong lax monoidal functor". As an aside, in a Cartesian closed category, $F$ having a strength is equivalent to the existence of a natural transformation $\mathrm{map} : (A \Rightarrow B) \to (F(A) \Rightarrow F(B))$. That is, having a strength means that the functorial action is definable as a higher-order function in the programming language. Finally, if you're interested in the type theory of Haskell-style applicative functors, I've just blogged about it.
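To make the lax monoidal presentation concrete, here is a minimal sketch in Python (my own illustration, not from the original answer), using the list functor as the running example: unit and phi give the lax monoidal structure, and Haskell's pure and <*> are recovered from them together with the functorial map.

```python
# Lax monoidal structure for the list functor (a sketch).
unit = [()]                                  # i : J -> F(I); the unit object is ()

def phi(fa, fb):
    # phi : F(A) x F(B) -> F(A x B); for lists, the Cartesian product
    return [(a, b) for a in fa for b in fb]

def fmap(f, fa):
    # the functorial action that a strength makes definable internally
    return [f(a) for a in fa]

def pure(a):
    return fmap(lambda _: a, unit)           # pure a = fmap (const a) unit

def ap(ff, fa):
    # ff <*> fa, derived from phi and fmap
    return fmap(lambda p: p[0](p[1]), phi(ff, fa))

print(ap(pure(lambda x: x + 1), [1, 2, 3]))  # [2, 3, 4]
```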
{ "source": [ "https://cstheory.stackexchange.com/questions/12412", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/10336/" ] }
12,568
Edit: I chose the answer with the highest score as of December 06, 2012. This is a soft question. The concept of (deterministic) algorithms dates back to BC. What about probabilistic algorithms? In this wiki entry, Rabin's algorithm for the closest pair problem in computational geometry was given as the first randomized algorithm (year???). Lipton introduced Rabin's algorithm as the start of the modern era of randomized algorithms here, but not as the first one. I also know of many algorithms for probabilistic finite automata (a very simple computational model) discovered during the 1960s. Do you know of any probabilistic/randomized algorithms (or methods) from even before the 1960s? Or, which finding can be seen as the first probabilistic/randomized algorithm?
This is discussed a bit in my paper with H. C. Williams, "Factoring Integers before Computers". In a 1917 paper, H. C. Pocklington discussed an algorithm for finding $\sqrt{a} \bmod p$, which depended on choosing elements at random to get a nonresidue of a certain form. In it, he said, "We have to do this [find the nonresidue] by trial, using the Law of Quadratic Reciprocity, which is a defect in the method. But as for each value of u half the values of t are suitable, there should be no difficulty in finding one." So this is one of the first explicit mentions of a randomized algorithm.
{ "source": [ "https://cstheory.stackexchange.com/questions/12568", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/7358/" ] }
12,585
The halting problem states that it is impossible to write a program that can determine if another program halts, for all possible input programs. I can, however, certainly write a program that can compute the running time of a program like: for(i=0; i<N; i++) { x = 1; } and return a time complexity of $N$, without ever running it. For all other input programs, it would return a flag indicating it was unable to determine the time complexity. My question is this: what conditions must hold such that we can algorithmically determine the time complexity of a given program? *If there is a canonical reference or review article on this, I would appreciate a link to it in the comments.
In general you cannot determine complexity, even for halting programs: let $T$ be some arbitrary Turing machine and let $p_T$ be the following program (which always returns 0):

input: n
run T for n steps
if T is in halting state, output: 0
otherwise, loop for n^2 steps and output: 0

It is clear that it is undecidable in general whether $p_T$ is linear-time or quadratic-time. However, much work has been carried out on the effective computation of program complexity. I have a particular fondness for Implicit Complexity Theory, which aims at creating languages that, using special constructs or type disciplines, allow one to write only programs that inhabit a certain well-defined complexity class. By what I consider to be something of a miracle, these languages are often complete for that class! One particularly nice example is described in this paper by J.-Y. Marion, which describes a tiny imperative language, with a type discipline inspired by information-flow and security analysis techniques, which allows a characterization of algorithms in P.
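As a runnable rendering of this construction, here is a small Python sketch (my own; the simulator run_for is a hypothetical stand-in for actually stepping the machine $T$):

```python
def make_p_T(run_for):
    """Build p_T for a fixed machine T.

    run_for(n) -> True iff T has reached a halting state within n steps;
    a hypothetical simulator, not implemented here.
    """
    def p_T(n):
        if run_for(n):
            return 0              # T halted: the linear-time branch
        for _ in range(n * n):    # T still running: quadratic padding
            pass
        return 0
    return p_T

# A toy T that halts after exactly 10 steps:
p = make_p_T(lambda n: n >= 10)
print(p(3), p(100))  # both print 0; only the running times differ
```

Deciding whether such a p_T runs in linear or quadratic time amounts to deciding whether $T$ halts.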
{ "source": [ "https://cstheory.stackexchange.com/questions/12585", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/10692/" ] }
12,678
Over the years I have gotten used to seeing many TCS theorems proved using discrete Fourier analysis. The Walsh-Fourier (Hadamard) transform is useful in virtually every subfield of TCS, including property testing, pseudorandomness, communication complexity, and quantum computing. While I have become comfortable using Fourier analysis of Boolean functions as a very useful tool when I'm tackling a problem, and even though I have a pretty good hunch for which cases using Fourier analysis would probably yield some nice results, I have to admit that I'm not really sure what it is that makes this change of basis so useful. Does anyone have an intuition as to why Fourier analysis is so fruitful in the study of TCS? Why do so many seemingly hard problems get solved by writing the Fourier expansion and performing some manipulations? Note: my main intuition thus far, meagre as it may be, is that we have a pretty good understanding of how polynomials behave, and that the Fourier transform is a natural way of looking at a function as a multilinear polynomial. But why specifically this basis? What is so unique about the basis of parities?
Here is my point of view, which I learned from Guy Kindler, though someone more experienced can probably give a better answer: Consider the linear space of functions $f: \{0,1\}^n\to\mathbb{R}$, and consider a linear operator of the form $\sigma_w$ (for $w\in\{0,1\}^n$), which maps a function $f(x)$ as above to the function $f(x+w)$. In many of the questions of TCS, there is an underlying need to analyze the effects that such operators have on certain functions. Now, the point is that the Fourier basis is the basis that diagonalizes all those operators at the same time, which makes the analysis of those operators much simpler. More generally, the Fourier basis diagonalizes the convolution operator, which also underlies many of those questions. Thus, Fourier analysis is likely to be effective whenever one needs to analyze those operators. By the way, Fourier analysis is just a special case of the representation theory of finite groups. This theory considers the more general space of functions $f:G\to \mathbb{C}$ where $G$ is a finite group, and operators of the form $\sigma_h$ (for $h\in G$) that map $f(x)$ to $f(x\cdot h)$. The theory then allows you to find a basis that makes the analysis of such operators easier - even though for general groups you don't get to actually diagonalize the operators.
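To spell out the diagonalization claim (a standard one-line computation, added here for completeness): the Fourier basis consists of the characters $\chi_S(x) = (-1)^{\sum_{i \in S} x_i}$ for $S \subseteq \{1,\dots,n\}$, and for every shift $w$,

$$(\sigma_w \chi_S)(x) \;=\; \chi_S(x+w) \;=\; \chi_S(x)\,\chi_S(w),$$

so each $\chi_S$ is a common eigenvector of all the operators $\sigma_w$, with eigenvalue $\chi_S(w)$.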
{ "source": [ "https://cstheory.stackexchange.com/questions/12678", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/-1/" ] }
12,691
Inspired by this question and in particular the final paragraph of Or's answer, I have the following question: Do you know of any applications of the representation theory of the symmetric group in TCS? The symmetric group $S_n$ is the group of all permutations of $\{1, \ldots, n\}$ with group operation composition. A representation of $S_n$ is a homomorphism from $S_n$ to the general linear group of invertible $d \times d$ complex matrices, for some dimension $d$. A representation acts on $\mathbb{C}^d$ by matrix multiplication. An irreducible representation of $S_n$ is one that leaves no proper nonzero subspace of $\mathbb{C}^d$ invariant. Irreducible representations of finite groups allow one to define a Fourier transform over non-abelian groups. This Fourier transform shares some of the nice properties of the discrete Fourier transform over cyclic/abelian groups. For example, convolution becomes pointwise multiplication in the Fourier basis. The representation theory of the symmetric group is beautifully combinatorial. Each irreducible representation of $S_n$ corresponds to an integer partition of $n$. Has this structure and/or the Fourier transform over the symmetric group found any application in TCS?
Here are a few other examples. Diaconis and Shahshahani (1981) studied how many random transpositions are required in order to generate a near-uniform permutation. They proved a sharp threshold of $\frac{1}{2} n \log n \pm O(n)$. ("Generating a Random Permutation with Random Transpositions.") Kassabov (2005) proved that one can build a bounded-degree expander on the symmetric group. ("Symmetric Groups and Expander Graphs.") Kuperberg, Lovett and Peled (2012) proved that there exist small sets of permutations which act uniformly on $k$-tuples. ("Probabilistic existence of rigid combinatorial structures.")
{ "source": [ "https://cstheory.stackexchange.com/questions/12691", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/4896/" ] }
13,958
Let $S_1,S_2,\ldots,S_n$ be sets that may have elements in common. I'm looking for a smallest set $X$ such that $\forall i,\,X\cap S_i \ne \emptyset$. Does this problem have a name? Or does it reduce to some known problem? In my context $S_1,\ldots,S_n$ describe the elementary cycles of a strongly connected component, and I'm looking for a smallest set of vertices $X$ that intersects all cycles.
Your first problem is the hypergraph transversal problem (a.k.a. the HITTING SET problem). The second problem is the FEEDBACK VERTEX SET problem. Both problems are NP-complete.
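To make the first problem concrete, here is a minimal Python sketch (my own illustration) of the classical greedy heuristic for HITTING SET, which repeatedly picks the element contained in the most not-yet-hit sets and achieves a logarithmic approximation factor:

```python
def greedy_hitting_set(sets):
    """Greedy logarithmic-approximation hitting set (a sketch)."""
    sets = [set(s) for s in sets]
    hit = set()
    while True:
        unhit = [s for s in sets if not (s & hit)]
        if not unhit:
            return hit
        counts = {}
        for s in unhit:               # count how many unhit sets each element hits
            for e in s:
                counts[e] = counts.get(e, 0) + 1
        hit.add(max(counts, key=counts.get))

print(greedy_hitting_set([{1, 2}, {2, 3}, {4}]))  # e.g. {2, 4}
```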
{ "source": [ "https://cstheory.stackexchange.com/questions/13958", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/4068/" ] }
14,012
I am currently studying mathematics. However, I don't think I want to become a professional mathematician in the future. I am thinking of applying my knowledge of mathematics to do research in artificial intelligence. However, I am not sure how many mathematics courses I should follow. (And which CS theory courses I should follow.) From Quora, I learned that the subjects Linear Algebra, Statistics and Convex Optimization are most relevant for Machine Learning (see this question). Someone else mentioned that learning Linear Algebra, Probability/Statistics, Calculus, Basic Algorithms and Logic are needed to study artificial intelligence (see this question). I can learn about all of these subjects during my first 1.5 years of the mathematics Bachelor at our university. I was wondering, though, if there are some upper-undergraduate or even graduate-level mathematics subjects that are useful or even needed to study artificial intelligence. What about ODEs, PDEs, Topology, Measure Theory, Linear Analysis, Fourier Analysis and Analysis on Manifolds? One book that suggests that some quite advanced mathematics is useful in the study of artificial intelligence is Pattern Theory: The Stochastic Analysis of Real-World Signals by David Mumford and Agnes Desolneux (see this page). It includes chapters on Markov Chains, Piecewise Gaussian Models, Gibbs Fields, Manifolds, Lie Groups and Lie Algebras and their applications to pattern theory. To what extent is this book useful in A.I. research?
I do not want to sound condescending, but the math you are studying in undergraduate and even graduate-level courses is not advanced. It is the basics. The title of your question should be: Is "basic" math needed/useful in AI research? So, gobble up as much as you can; I have never met a computer scientist who complained about knowing too much math, although I met many who complained about not knowing enough of it. I remember helping a fellow graduate student in AI understand a page-rank-style algorithm. It was just some fairly easy linear algebra to me, but he suffered because he had no feeling for what eigenvalues and eigenvectors were about. Imagine the things AI people could do if they actually knew a lot of math! I teach at a math department and I regularly get requests from my CS colleagues to recommend math majors for CS PhDs because they prefer math students. You see, math is really, really hard to learn on your own, but most aspects of computer science are not. I know, I was a math major who got into a CS graduate school. Sure, I was "behind" on operating systems knowledge (despite having decent knowledge of Unix and VMS), but I was way, way ahead on "theory". It is not a symmetric situation.
{ "source": [ "https://cstheory.stackexchange.com/questions/14012", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/5712/" ] }
14,128
Is it possible to build a single-purpose (non-Turing-complete) mechanical implementation of, say, Microsoft Word? Is it possible to implement such things as iterators, first-order functions, the whole gamut of programming techniques? Could gears and other mechanical parts represent data structures or even program objects? At a certain point does this necessitate building a general-purpose Turing-equivalent machine, or can each function, variable, etc. have its own unique mechanical construct in the form of flywheels and/or gears, ratchets, what have you? In summary, I wonder if any given piece of software on a standard computer could be compiled to a mechanical blueprint.
Yes, it is. Here's how you do it: you can compile basically any program you like to circuits. See for instance the work of Dan Ghica and his collaborators on the Geometry of Synthesis, which shows how to compile programs into circuits.

Dan R. Ghica. Geometry of Synthesis: A structured approach to VLSI design.
Dan R. Ghica, Alex Smith. Geometry of Synthesis II: From Games to Delay-Insensitive Circuits.
Dan R. Ghica, Alex Smith. Geometry of Synthesis III: Resource management through type inference.
Dan R. Ghica, Alex Smith, Satnam Singh. Geometry of synthesis IV: compiling affine recursion into static hardware.

Circuits then turn out to reappear over and over in engineering. John Baez gives a big table of analogies of concepts, and works out a lot of connections, in This Week's Finds 288-296. So the circuit diagrams Dan's compiler generates could be instantiated as mechanical or hydraulic systems, if you really wanted to!

╔══════════════════════════════════════════════════════════════╗
║               displacement   flow       momentum    effort   ║
╠══════════════════════════════════════════════════════════════╣
║ Mechanics     position       velocity   momentum    force    ║
║ (translation)                                                ║
║                                                              ║
║ Mechanics     angle          angular    angular     torque   ║
║ (rotation)                   velocity   momentum             ║
║                                                              ║
║ Electronics   charge         current    flux        voltage  ║
║                                         linkage              ║
║                                                              ║
║ Hydraulics    volume         flow       pressure    pressure ║
║                                         momentum             ║
╚══════════════════════════════════════════════════════════════╝

http://math.ucr.edu/home/baez/week288.html
http://math.ucr.edu/home/baez/week289.html
http://math.ucr.edu/home/baez/week290.html
http://math.ucr.edu/home/baez/week291.html
http://math.ucr.edu/home/baez/week294.html
http://math.ucr.edu/home/baez/week296.html
{ "source": [ "https://cstheory.stackexchange.com/questions/14128", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/10579/" ] }
14,159
We know that $\mathsf{L} \subseteq \mathsf{NL} \subseteq \mathsf{P}$ and that $\mathsf{L} \subseteq \mathsf{NL} \subseteq \mathsf{L}^2 \subseteq $ $\mathsf{polyL}$ , where $\mathsf{L}^2 = \mathsf{DSPACE}(\log^2 n)$. We also know that $\mathsf{polyL} \neq \mathsf{P}$ because the latter has complete problems under logarithmic space many-one reductions while the former does not (due to the space hierarchy theorem). In order to understand the relationship between $\mathsf{polyL}$ and $\mathsf{P}$, it may help to first understand the relationship between $\mathsf{L}^2$ and $\mathsf{P}$. What are the consequences of $\mathsf{L}^2 \subseteq \mathsf{P}$? What about the stronger $\mathsf{L}^{k} \subseteq \mathsf{P}$ for $k>2$, or the weaker $\mathsf{L}^{1 + \epsilon} \subseteq \mathsf{P}$ for $\epsilon > 0$?
The following is an obvious consequence: $\mathsf{L}^{1+\epsilon} \subseteq \mathsf{P}$ would imply $\mathsf{L} \subsetneq \mathsf{P}$ and therefore $\mathsf{L} \neq \mathsf{P}$. By the space hierarchy theorem, $\forall \epsilon > 0: \mathsf{L} \subsetneq \mathsf{L}^{1+\epsilon}$. If $\mathsf{L}^{1+\epsilon} \subseteq \mathsf{P}$ then $\mathsf{L} \subsetneq \mathsf{L}^{1+\epsilon} \subseteq \mathsf{P}$.
{ "source": [ "https://cstheory.stackexchange.com/questions/14159", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/4193/" ] }
14,396
I often hear that for many problems we know very elegant randomized algorithms, but no, or only more complicated, deterministic solutions. However, I only know a few examples of this, most prominently: randomized quicksort (and related geometric algorithms, e.g. for convex hulls), randomized mincut, polynomial identity testing, and Klee's measure problem. Among these, only polynomial identity testing seems to be really hard without the use of randomness. Do you know more examples of problems where a randomized solution is very elegant or very efficient, but deterministic solutions are not? Ideally, the problems should be easy to motivate for laymen (unlike e.g. polynomial identity testing).
Sorting nuts and bolts

The following problem was suggested by Rawlins in 1992: Suppose you are given a collection of n nuts and n bolts. Each bolt fits exactly one nut, and otherwise, the nuts and bolts have distinct sizes. The sizes are too close to allow direct comparison between pairs of bolts or pairs of nuts. However, you can compare any nut to any bolt by trying to screw them together; in constant time, you will discover whether the bolt is too large, too small, or just right for the nut. Your task is to discover which bolt fits each nut, or equivalently, to sort the nuts and bolts by size. A straightforward variant of randomized quicksort solves the problem in $O(n \log n)$ time with high probability: pick a random bolt; use it to partition the nuts; use the matching nut to partition the bolts; and recurse. However, finding a deterministic algorithm that even runs in $o(n^2)$ time is nontrivial. Deterministic $O(n\log n)$-time algorithms were finally found in 1995 by Bradford and independently by Komlós, Ma, and Szemerédi. Under the hood, both algorithms use variants of the AKS parallel sorting network, so the hidden constant in the $O(n\log n)$ time bound is quite large; the hidden constant for the randomized algorithm is 4.

Noga Alon, Manuel Blum, Amos Fiat, Sampath Kannan, Moni Naor, and Rafail Ostrovsky. Matching nuts and bolts. Proc. 5th Ann. ACM-SIAM Symp. Discrete Algorithms, 690–696, 1994.
Noga Alon, Phillip G. Bradford, and Rudolf Fleischer. Matching nuts and bolts faster. Inform. Proc. Lett. 59(3):123–127, 1996.
Phillip G. Bradford. Matching nuts and bolts optimally. Tech. Rep. MPI-I-95-1-025, Max-Planck-Institut für Informatik, 1995. http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-025
Phillip G. Bradford and Rudolf Fleischer. Matching nuts and bolts faster. Proc. 6th Int. Symp. Algorithms Comput., 402–408, 1995. Lecture Notes Comput. Sci. 1004.
János Komlós, Yuan Ma, and Endre Szemerédi. Matching nuts and bolts in $O(n\log n)$ time. SIAM J. Discrete Math. 11(3):347–372, 1998.
Gregory J. E. Rawlins. Compared To What?: An Introduction to The Analysis of Algorithms. Computer Science Press/W. H. Freeman, 1992.
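Here is a minimal Python sketch of the randomized algorithm (my own illustration; numeric sizes stand in for the physical test, and we only ever compare a nut with a bolt, never two nuts or two bolts):

```python
import random

def match(nuts, bolts):
    """Match each nut to its bolt in O(n log n) expected comparisons."""
    if not nuts:
        return []
    b = random.choice(bolts)                    # random pivot bolt
    small = [x for x in nuts if x < b]          # nuts smaller than the pivot bolt
    large = [x for x in nuts if x > b]
    n = next(x for x in nuts if x == b)         # the unique nut fitting the pivot
    bolts_small = [y for y in bolts if y < n]   # partition bolts by the pivot nut
    bolts_large = [y for y in bolts if y > n]
    return match(small, bolts_small) + [(n, b)] + match(large, bolts_large)

sizes = list(range(20))
nuts, bolts = random.sample(sizes, 20), random.sample(sizes, 20)
print(match(nuts, bolts)[:3])   # pairs (nut, bolt) of equal size
```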
{ "source": [ "https://cstheory.stackexchange.com/questions/14396", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/12560/" ] }
14,425
While reasoning a bit on this question, I've tried to identify all the different reasons for which a graph $G = (V_G,E_G)$ may fail to be $k$ colorable. These are the only 2 reasons that I was able to identify so far: (1) $G$ contains a clique of size $k+1$. This is the obvious reason. (2) There exists a subgraph $H = (V_H, E_H)$ of $G$ such that both the following statements are true: $H$ is not $k-1$ colorable, and $\exists x \in V_G - V_H\ \forall y \in V_H\ \{x,y\} \in E_G$. In other words, there exists a node $x$ in $G$ but not in $H$, such that $x$ is connected to each node in $H$. We can see the 2 reasons above as rules. By recursively applying them, the only 2 ways to build a non $k$ colorable graph which does not contain a $k+1$ clique are: Start from a cycle of even length (which is $2$ colorable), then apply rule 2 $k-1$ times. Note that an edge is not considered to be a cycle of length $2$ (otherwise this process would have the effect of building a $k+1$ clique). Or, start from a cycle of odd length (which is $3$ colorable), then apply rule 2 $k-2$ times. The length of the starting cycle must be greater than $3$ (otherwise this process would have the effect of building a $k+1$ clique).

Question: Is there any further reason, other than the 2 above, that makes a graph non $k$ colorable?

Update 30/11/2012: More precisely, what I need is some theorem of the form: a graph $G$ has chromatic number $\chi(G) = k + 1$ if and only if... The Hajós calculus, pointed out by Yuval Filmus in his answer, is a perfect example of what I am looking for, as a graph $G$ has chromatic number $\chi(G) = k + 1$ if and only if it can be derived from the axiom $K_{k+1}$ by repeatedly applying the 2 rules of inference of the calculus. The Hajós number $h(G)$ is then the minimum number of steps necessary to derive $G$ (i.e. it is the length of the shortest proof). It is very interesting that: the question of whether there exists a graph $G$ whose $h(G)$ is exponential in the size of $G$ is still open, and if such $G$ does not exist, then $NP = coNP$.
You should check the Hajós calculus. Hajós showed that every graph with chromatic number at least $k$ has a subgraph which has a "reason" for requiring $k$ colors. The reason in question is a proof system for requiring $k$ colors. The only axiom is $K_k$, and there are two rules of inference. See also this paper by Pitassi and Urquhart on the efficiency of this proof system.
{ "source": [ "https://cstheory.stackexchange.com/questions/14425", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/947/" ] }
14,445
One of the most celebrated results in computer science is that the halting problem is undecidable. However there are still notions of complexity that are applicable. Here are 3 that I have in mind: $K(n)$ is the Kolmogorov complexity of the string $h_{<2^n}$ of length $2^n$ whose $k$-th bit is 1 iff the $k$-th program halts. $C(n)$ is the minimal size of a Boolean circuit solving the halting problem for programs of size at most $n$. $T(n)$ is the time complexity of the halting problem made solvable by introducing an extra tape into our Turing machine on which an infinite bit-string is written in the initial state. For example, the bit-string can be an infinite look-up table: the $k$-th bit is 1 iff the $k$-th program halts. This allows a simple look-up algorithm to solve the halting problem, but the complexity would be $O(2^n)$. $K$ is at most $O(n)$ since we can construct a program which encodes the longest-running program of length $n$ and checks whether the input program halts before that one. $C(n)$ is at most $O(2^n)$ because we can construct a "look-up table" circuit, and $T(n)$ is at most $O(2^n)$ as noted above. What other bounds on $K$, $C$, $T$ can be found? In particular, are $C$ and/or $T$ less than exponential?

EDIT: Actually, $K(n) = n + O(1)$. To see this, consider $H_n$ an algorithm solving the halting problem for all inputs of length at most $n$, and $P$ the following program. $P$ runs $H_n$ on $P$ itself. If the result is "halts", it goes into an infinite loop. If the result is "doesn't halt", it terminates. $H_n$ fails to evaluate $P$'s halting correctly, therefore the length of $P$ is greater than $n$. On the other hand, $P$ is only longer than $H_n$ by a constant, so $H_n$ can't be much shorter than $n$.

EDIT: If the halting problem is in $P/poly$, i.e. $C$ is polynomial, then $NP \subset P/poly$ (which implies $PH = \Sigma_2$). To see this, consider $S \subset \{0,1\}^*$ a decision problem in $NP$ and $V$ a verifier program for $S$. Deciding whether $x \in S$ is equivalent to solving the halting problem for the following program $Q_x$: "Loop over all $p \in \{0,1\}^*$, halt if $V(x,p) = 1$". The size of $Q_x$ is the same as the size of $x$, up to a constant. Therefore if we can solve the halting problem for $Q_x$ in polynomial time with polynomial advice, we can decide $x \in S$ in polynomial time with polynomial advice. Note that $C$ is polynomial iff $T$ is polynomial. Consider $R_n$ a family of circuits solving the halting problem. Then we can construct an infinite-advice program $H$ for solving the halting problem by encoding $R_n$ as advice. This yields $$T(n) = O(n \, C(n) \ln C(n))$$ On the other hand, if we have $H$ an infinite-advice program solving the halting problem, we can construct a circuit $R_n$ representing the computation process of $H$ on an input of size $n$. The size of this circuit is the product of the spatial complexity by the temporal complexity, so $$C(n) = O(T(n)^2)$$

EDIT: If the halting problem is in $coNP/poly$ then $NP \subset coNP/poly$. This is due to reasoning similar to the above, i.e. an existential quantifier can be replaced by a universal quantifier at the cost of requiring polynomial advice. I think this also implies some kind of collapse of the polynomial hierarchy.

EDIT: It is possible to construct a specific infinite-advice algorithm of optimal complexity, analogous to Levin search for $NP$ problems.
As opposed to the case of $NP$, there is no way to verify correctness of solutions; on the other hand, it is possible to restrict the dovetailing only to valid programs. This is done by encoding all programs which solve the halting problem, together with their respective infinite advice sequences, in the infinite advice of our algorithm. The penalty incurred by using this encoding is at most polynomial, hence the resulting algorithm has complexity which is optimal up to a polynomial.
{ "source": [ "https://cstheory.stackexchange.com/questions/14445", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/7030/" ] }
14,471
Is there a reverse Chernoff bound which shows that the tail probability is at least some given amount? I.e., if $X_1,X_2,\ldots,X_n$ are independent binomial random variables and $\mu=\mathbb{E}[\sum_{i=1}^n X_i]$, can we prove $\Pr[\sum_{i=1}^n X_i\geq (1+\delta)\mu]\geq f(\mu,\delta,n)$ for some function $f$?
Here is an explicit proof that a standard Chernoff bound is tight up to constant factors in the exponent for a particular range of the parameters. (In particular, whenever the variables are 0 or 1, and 1 with probability 1/2 or less, and $\epsilon\in(0,1/2)$, and the Chernoff upper bound is less than a constant.) If you find a mistake, please let me know.

Lemma 1 (tightness of Chernoff bound). Let $X$ be the average of $k$ independent, 0/1 random variables (r.v.). For any $\epsilon\in(0,1/2]$ and $p\in(0,1/2]$, assuming $\epsilon^2 p k \ge 3$:

(i) If each r.v. is 1 with probability at most $p$, then $$\displaystyle \Pr[X\le (1-\epsilon)p] ~\ge~ \exp\big({-9\epsilon^2 pk}\big).$$

(ii) If each r.v. is 1 with probability at least $p$, then $$\displaystyle \Pr[X\ge (1+\epsilon)p] ~\ge~ \exp\big({-9\epsilon^2 pk}\big).$$

Proof. We use the following observation:

Claim 1. If $1\le \ell \le k-1$, then $\displaystyle {k \choose \ell} ~\ge~ \frac{1}{e\sqrt{2\pi\ell}} \Big(\frac{k}{\ell}\Big)^{\ell} \Big(\frac{k}{k-\ell}\Big)^{k-\ell}$.

Proof of Claim 1. By Stirling's approximation, $i!=\sqrt{2\pi i}(i/e)^ie^\lambda$ where $\lambda\in[1/(12i+1),1/12i].$ Thus, ${k\choose \ell}$, which is $\frac{k!}{\ell! (k-\ell)!}$, is at least $$ \frac{\sqrt{2\pi k}\,(\frac{k}{e})^k} { \sqrt{2\pi \ell}\,(\frac{\ell}{e})^\ell ~~\sqrt{2\pi (k-\ell)}\,(\frac{k-\ell}{e})^{k-\ell} } \exp\Big(\frac{1}{12k+1} - \frac{1}{12\ell} - \frac{1}{12(k-\ell)}\Big)$$ $$ ~\ge~ \frac{1}{\sqrt{2\pi\ell}} \Big(\frac{k}{\ell}\Big)^{\ell} \Big(\frac{k}{k-\ell}\Big)^{k-\ell}e^{-1}. $$ QED

Proof of Lemma 1, Part (i). Without loss of generality assume each 0/1 random variable in the sum $X$ is 1 with probability exactly $p$. Note $\Pr[X\le (1-\epsilon)p]$ equals the sum $\sum_{i = 0}^{\lfloor(1-\epsilon)pk\rfloor} \Pr[X=i/k]$, and $\Pr[X=i/k] = {k \choose i} p^i (1-p)^{k-i}$. Fix $\ell = \lfloor(1-2\epsilon)pk\rfloor+1$. The terms in the sum are increasing, so the terms with index $i\ge\ell$ each have value at least $\Pr[X=\ell/k]$, so their sum has total value at least $(\epsilon pk - 2) \Pr[X=\ell/k]$. To complete the proof, we show that $$(\epsilon pk - 2) \Pr[X=\ell/k] ~\ge~ \exp({-9\epsilon^2 pk}).$$ The assumptions $\epsilon^2pk\ge 3$ and $\epsilon\le 1/2$ give $\epsilon pk \ge 6$, so the left-hand side above is at least $\frac{2}{3}\epsilon pk\, {k \choose \ell} p^\ell(1-p)^{k-\ell}$. Using Claim 1 to bound ${k\choose \ell}$, this is in turn at least $A\, B$ where $A = \frac{2}{3e}\epsilon p k/ \sqrt{2\pi \ell}$ and $ B= \big(\frac{k}{\ell}\big)^\ell \big(\frac{k}{k-\ell}\big)^{k-\ell} p^\ell (1-p)^{k-\ell}. $ To finish we show $A\ge \exp(-\epsilon^2pk)$ and $B \ge \exp(-8\epsilon^2 pk)$.

Claim 2. $A \ge \exp({-\epsilon^2 pk})$.

Proof of Claim 2. The assumptions $\epsilon^2 pk \ge 3$ and $\epsilon\le 1/2$ imply (i) $pk\ge 12$. By definition, $\ell \le pk + 1$. By (i), $p k \ge 12$. Thus, (ii) $\ell \,\le\, 1.1 pk$. Substituting the right-hand side of (ii) for $\ell$ in $A$ gives (iii) $A \ge \frac{2}{3e} \epsilon \sqrt{p k / 2.2\pi}$. The assumption $\epsilon^2 pk \ge 3$ implies $\epsilon\sqrt{ pk} \ge \sqrt 3$, which with (iii) gives (iv) $A \ge \frac{2}{3e}\sqrt{3/2.2\pi} \ge 0.1$. From $\epsilon^2pk \ge 3$ it follows that (v) $\exp(-\epsilon^2pk) \le \exp(-3) \le 0.04$. (iv) and (v) together give the claim. QED

Claim 3. $B\ge \exp({-8\epsilon^2 pk})$.

Proof of Claim 3. Fix $\delta$ such that $\ell=(1-\delta)pk$. The choice of $\ell$ implies $\delta\le 2\epsilon$, so the claim will hold as long as $B \ge \exp(-2\delta^2pk)$.
Taking each side of this latter inequality to the power $-1/\ell$ and simplifying, it is equivalent to $$ \frac{\ell}{p k} \Big(\frac{k-\ell}{(1-p) k}\Big)^{k/\ell-1} ~\le~ \exp\Big(\frac{2\delta^2 pk}{\ell}\Big). $$ Substituting $\ell= (1-\delta)pk$ and simplifying, it is equivalent to $$ (1-\delta) \Big(1+\frac{\delta p}{1-p}\Big)^{\displaystyle \frac{1}{(1-\delta)p}-1} ~\le~ \exp\Big(\frac{2\delta^2}{1-\delta}\Big). $$ Taking the logarithm of both sides and using $\ln(1+z)\le z$ twice, it will hold as long as $$ -\delta\, +\,\frac{\delta p}{1-p}\Big(\frac{1}{(1-\delta)p}-1\Big) ~\le~ \frac{2\delta^2}{1-\delta}. $$ The left-hand side above simplifies to $\delta^2/\,(1-p)(1-\delta)$, which is less than $2\delta^2/(1-\delta)$ because $p\le 1/2$. QED

Claims 2 and 3 imply $A B \ge \exp({-\epsilon^2pk})\exp({- 8\epsilon^2pk})$. This implies part (i) of the lemma.

Proof of Lemma 1, Part (ii). Without loss of generality assume each random variable is $1$ with probability exactly $p$. Note $\Pr[X\ge (1+\epsilon)p] = \sum_{i = \lceil(1+\epsilon)pk\rceil}^k \Pr[X=i/k]$. Fix $\hat\ell = \lceil (1+2\epsilon)pk \rceil - 1$. The last $\epsilon pk$ terms in the sum total at least $(\epsilon pk-2)\Pr[X=\hat\ell/k]$, which is at least $\exp({-9\epsilon^2 pk})$. (The proof of that is the same as for (i), except with $\ell$ replaced by $\hat\ell$ and $\delta$ replaced by $-\hat\delta$ such that $\hat\ell = (1+\hat\delta)pk$.) QED
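As a quick sanity check on Lemma 1 (my own addition, not part of the original answer), one can compare the exact binomial tail with the claimed lower bound $\exp(-9\epsilon^2 pk)$ for parameters satisfying the hypotheses ($p \le 1/2$, $\epsilon \in (0,1/2]$, $\epsilon^2 pk \ge 3$):

```python
from math import ceil, exp, lgamma, log

def log_pmf(k, i, p):
    # log of the Binomial(k, p) probability mass at i
    return (lgamma(k + 1) - lgamma(i + 1) - lgamma(k - i + 1)
            + i * log(p) + (k - i) * log(1 - p))

def upper_tail(k, p, eps):
    # Pr[X >= (1+eps)p] where X is the average of k Bernoulli(p) variables
    lo = ceil((1 + eps) * p * k)
    return sum(exp(log_pmf(k, i, p)) for i in range(lo, k + 1))

k, p, eps = 1000, 0.3, 0.2           # eps^2 * p * k = 12 >= 3
print(upper_tail(k, p, eps))          # ~2e-5
print(exp(-9 * eps**2 * p * k))       # ~1e-47: Lemma 1(ii) holds comfortably
```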
{ "source": [ "https://cstheory.stackexchange.com/questions/14471", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/372/" ] }
14,530
Suppose we are throwing $m$ balls into $n$ bins, where $m \gg n$. Let $X_i$ be the number of balls ending up in bin $i$, $X_\max$ be the heaviest bin, $X_\min$ be the lightest bin, and $X_{\mathrm{sec-max}}$ be the second heaviest bin. Roughly speaking, $X_i - X_j \sim N(0,2m/n)$, and so we expect $|X_i - X_j| = \Theta(\sqrt{m/n})$ for any two fixed $i,j$. Using a union bound, we expect $X_{\max} - X_{\min} = O(\sqrt{m\log n/n})$; presumably, we can get a matching lower bound by considering $n/2$ pairs of disjoint bins. This (not completely formal) argument leads us to expect that the gap between $X_{\max}$ and $X_{\min}$ is $\Theta(\sqrt{m\log n/n})$ with high probability. I am interested in the gap between $X_\max$ and $X_{\mathrm{sec-max}}$. The argument outlined above shows that $X_\max - X_{\mathrm{sec-max}} = O(\sqrt{m\log n/n})$ with high probability, but the $\sqrt{\log n}$ factor seems extraneous. Is anything known about the distribution of $X_\max - X_{\mathrm{sec-max}}$? More generally, suppose that each ball is associated with a non-negative score for each bin, and we are interested in the total score of each bin after throwing $m$ balls. The usual scenario corresponds to scores of the form $(0,\ldots,0,1,0,\ldots,0)$. Suppose that the probability distribution of the scores is invariant under permutation of the bins (in the usual scenario, this corresponds to the fact that all bins are equiprobable). Given the distribution of the scores, we can use the method of the first paragraph to get a good bound on $X_{\max} - X_{\min}$. The bound will contain a factor of $\sqrt{\log n}$ that comes from a union bound (via the tail probabilities of a normal variable). Can this factor be reduced if we're interested in bounding $X_{\max} - X_{\mathrm{sec-max}}$?
Answer: $\Theta\left(\sqrt{\frac{m}{n\log n}}\right)$. Applying a multidimensional version of the Central Limit Theorem, we get that the vector $(X_1,\dots, X_n)$ has an asymptotically multivariate Gaussian distribution with $$\mathrm{Var}[X_i] = m\left(\frac{1}{n} - \frac{1}{n^2}\right),$$ and $$\mathrm{Cov}(X_i, X_j) = -m/n^2.$$ We will assume below that $X$ is a Gaussian vector (and not only approximately a Gaussian vector). Let us add a Gaussian random variable $Z$ with variance $m/n^2$ to all $X_i$ ($Z$ is independent from all $X_i$). That is, let $$ \begin{pmatrix} Y_1\\Y_2\\ \vdots\\Y_n \end{pmatrix} = \begin{pmatrix} X_1+Z\\X_2+Z\\ \vdots\\X_n +Z \end{pmatrix}. $$ We get a Gaussian vector $(Y_1, \dots, Y_n)$. Now each $Y_i$ has variance $m/n$: $$\mathrm{Var}[Y_i] = \mathrm{Var}[X_i] + \underbrace{2\mathrm{Cov}(X_i,Z)}_{=\, 0}+\mathrm{Var}[Z] = m/n,$$ and all $Y_i$ are independent: $$\mathrm{Cov}(Y_i, Y_j) = \mathrm{Cov}(X_i, X_j) + \underbrace{\mathrm{Cov}(X_i,Z) + \mathrm{Cov}(X_j,Z)}_{=\, 0} +\mathrm{Cov}(Z, Z) = 0.$$ Note that $Y_i - Y_j = X_i - X_j$. Thus our original problem is equivalent to the problem of finding $Y_{\mathrm{max}} - Y_{\mathrm{sec-max}}$. Let us first for simplicity analyze the case when all $Y_i$ have variance $1$.

Problem. We are given $n$ independent Gaussian r.v. $\gamma_1,\dots, \gamma_n$ with mean $\mu$ and variance $1$. Estimate the expectation of $\gamma_{\mathrm{max}} - \gamma_{\mathrm{sec-max}}$. Answer: $\Theta\left(\frac{1}{\sqrt{\log n}}\right)$.

Informal Proof. Here is an informal solution to this problem (it's not hard to make it formal). Since the answer does not depend on the mean, we assume that $\mu = 0$. Let $\bar\Phi(t) = \Pr[\gamma > t]$, where $\gamma\sim{\cal N}(0,1)$. We have (for moderately large $t$), $$\bar\Phi(t)\approx \frac{1}{\sqrt{2\pi}t} e^{-\frac{1}{2}t^2}.$$ Note that the $\bar\Phi(\gamma_i)$ are uniformly and independently distributed on $[0,1]$; $\bar\Phi(\gamma_{\mathrm{max}})$ is the smallest among the $\bar\Phi(\gamma_i)$, and $\bar\Phi(\gamma_{\mathrm{sec-max}})$ is the second smallest among the $\bar\Phi(\gamma_i)$. Thus $\bar\Phi(\gamma_{\mathrm{max}})$ is close to $1/n$ and $\bar\Phi(\gamma_{\mathrm{sec-max}})$ is close to $2/n$ (there is no concentration, but if we don't care about constants these estimates are good enough; in fact, they are even pretty good if we care about constants, but that needs a justification). Using the formula for $\bar\Phi(t)$, we get that $$ 2\approx \bar\Phi(\gamma_{\mathrm{sec-max}})\left/\bar\Phi(\gamma_{\mathrm{max}})\right. \approx e^{\frac{1}{2}\left(\gamma_{\mathrm{max}}^2 - \gamma_{\mathrm{sec-max}}^2\right)}. $$ Thus $\gamma_{\mathrm{max}}^2 - \gamma_{\mathrm{sec-max}}^2$ is $\Theta(1)$ w.h.p. Note that $\gamma_{\mathrm{max}}\approx \gamma_{\mathrm{sec-max}} = \Theta(\sqrt{\log n})$. We have, $$\gamma_{\mathrm{max}} - \gamma_{\mathrm{sec-max}}\approx \frac{\Theta(1)}{\gamma_{\mathrm{max}} + \gamma_{\mathrm{sec-max}}} \approx \frac{\Theta(1)}{\sqrt{\log n}}.$$ QED

We get that \begin{align} \mathbb{E}[{X_{\mathrm{max}} - X_{\mathrm{sec-max}}}] &= \mathbb{E}[{Y_{\mathrm{max}} - Y_{\mathrm{sec-max}}}] \\ &= \sqrt{\mathrm{Var}[Y_i]} \times\mathbb{E}[{\gamma_{\mathrm{max}} - \gamma_{\mathrm{sec-max}}}] = \Theta\left(\sqrt{\frac{m}{n\log n}}\right). \end{align} The same argument goes through when we have arbitrary scores. It shows that $$\mathbb{E}[X_{\mathrm{max}}- X_{\mathrm{sec-max}}] = c\, \left. \mathbb{E}[X_{\mathrm{max}}- X_{\mathrm{min}}]\right/\log n.$$
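A quick simulation (my own addition) agrees with the $\Theta\left(\sqrt{m/(n\log n)}\right)$ prediction up to a constant factor:

```python
import random
from math import log, sqrt
from statistics import mean

def gap(m, n):
    # throw m balls into n bins; return X_max minus X_sec-max
    counts = [0] * n
    for _ in range(m):
        counts[random.randrange(n)] += 1
    counts.sort()
    return counts[-1] - counts[-2]

m, n, trials = 200_000, 100, 50
print(mean(gap(m, n) for _ in range(trials)))  # empirical gap
print(sqrt(m / (n * log(n))))                  # predicted scale, ~20.8
```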
{ "source": [ "https://cstheory.stackexchange.com/questions/14530", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/40/" ] }
14,568
An important application of the PCP theorem is that it yields "hardness of approximation" type results. In some relatively simple cases one can prove such hardness without PCP. Is there, however, any case where the hardness of approximation result was first proved using the PCP theorem, i.e., the result was not known before, but later a more direct proof was found that does not depend on PCP? In other words, is there any case where PCP appeared necessary at first, but later it could be eliminated?
An example is this paper: Guruswami, V., & Khanna, S. (2004). On the hardness of 4-coloring a 3-colorable graph. SIAM Journal on Discrete Mathematics, 18(1): 30-40. link Using the PCP theorem, Khanna, Linial, and Safra (2000) proved that it is NP-hard to color a 3-colorable graph using just 4 colors. Later, Guruswami & Khanna (2004) gave, among other nice things, a PCP-free proof of the same result.
{ "source": [ "https://cstheory.stackexchange.com/questions/14568", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/12710/" ] }
14,811
I've been revising Theory of Computation for fun and this question has been nagging me for a while (funny, I never thought of it when I learnt Automata Theory in my undergrad). So "why" exactly do we study deterministic and non-deterministic finite automata (DFAs/NFAs)? Here are some answers I came up with after soliloquizing, but I still fail to see their overall contribution to the 'aha' moment: To study what they are and aren't capable of, i.e. their limitations. Why? Since they are the basic models of theoretical computation and would lay the foundation of other more capable models of computation. What makes them 'basic'? Is it that they have only one bit of storage and state transitions? Okay, so what? How does all this contribute to answering the question of computability? It seems Turing machines help understand this really well, and there are 'lesser' models of computation like PDAs, DFAs/NFAs/regexes etc. But if one didn't know FAs, what is it that they are missing out on? So although I 'get it' to some extent, I am unable to answer this question to myself. How best would you explain 'why study D/N-FAs'? What's the question they seek to answer? How does it help, and why is it the first thing taught in Automata Theory? PS: I'm aware of the various lexicographic applications and pattern matchers that can be implemented as such. However, I don't wish to know what they can be used for practically, but why they were invented/designed in the course of studying the theory of computation. Historically speaking, what led one to start with this, and what 'aha' understanding is it supposed to lead to? If you were to explain their importance to CS students just beginning to study Automata Theory, how'd you do it?
I have personally enjoyed several Aha! moments from studying basic automata theory. NFAs and DFAs form a microcosm for theoretical computer science as a whole.

Does Non-determinism Lead to Efficiency? There are standard examples where the minimal deterministic automaton for a language is exponentially larger than a minimal non-deterministic automaton. Understanding this difference for Turing machines is at the core of (theoretical) computer science. NFAs and DFAs provide the simplest example I know where you can explicitly see the strict gap between determinism and non-determinism.

Computability != Complexity. NFAs and DFAs both represent regular languages and are equivalent in what they compute. They differ in how they compute.

Machines Refine Languages. This is a different take on what we compute and how we compute. You can think of computable languages (and functions) as defining an equivalence class of automata. This is a fundamental perspective change in TCS, where we focus not just on the what, but the how of computation and try to choose the right 'how' when designing an algorithm or understand the space of different how's in studying complexity classes.

The Value of Canonical Representation. DFAs are the quintessential example of a data-structure admitting a canonical representation. Every regular language has a unique, minimal DFA. This means that given a minimal DFA, important operations like language inclusion, complementation, and checking acceptance of a word become trivial. Devising and exploiting canonical representations is a useful trick when developing algorithms.

The Absence of Canonical Representations. There is no well accepted canonical representation of regular expressions or NFAs. So, despite the point above, canonical representations do not always exist. You will see this point in many different areas in computer science. (For example, propositional logic formulae also do not have canonical representations, while ROBDDs do.)

The Cost of a Canonical Representation. You can even understand the difference between NFAs and DFAs as an algorithmic no-free-lunch theorem. If we want to check language inclusion between, or complement, an NFA, you can determinize and minimize it and continue from there. However, this "reduction" operation comes at a cost. You will see examples of canonization at a cost in several other areas of computer science.

Infinite != Undecidable. A common misconception is that problems of an infinitary nature are inherently undecidable. Regular languages contain infinitely many strings and yet have several decidable properties. The theory of regular languages shows you that infinity alone is not the source of undecidability.

Hold Infinity in the Palm of Your Automaton. You can view a finite automaton purely as a data-structure for representing infinite sets. An ROBDD is a data-structure for representing Boolean functions, which you can understand as representing finite sets. A finite-automaton is a natural, infinitary extension of an ROBDD.

The Humble Processor. A modern processor has a lot in it, but you can understand it as a finite automaton. Just this realisation made computer architecture and processor design far less intimidating to me. It also shows that, in practice, if you structure and manipulate your states carefully, you can get very far with finite automata.

The Algebraic Perspective. Regular languages form a syntactic monoid and can be studied from that perspective.
More generally, you can in later studies also ask what the right algebraic structure corresponding to some computational problem is.

The Combinatorial Perspective. A finite-automaton is a labelled graph. Checking if a word is accepted reduces to finding a path in a labelled graph. Automata algorithms amount to graph transformations. Understanding the structure of automata for various sub-families of regular languages is an active research area.

The Algebra-Language-Combinatorics love triangle. The Myhill-Nerode theorem allows you to start with a language and generate an automaton or a syntactic monoid. Mathematically, we obtain a translation between very different types of mathematical objects. It is useful to keep such translations in mind and look for them in other areas of computer science, and to move between them depending on your application.

Mathematics is the Language of Big-Pictures. Regular languages can be characterised by NFAs (graphs), regular expressions (formal grammar), read-only Turing machines (machine), syntactic monoids (algebra), Kleene algebras (algebra), monadic second-order logic, etc. The more general phenomenon is that important, enduring concepts have many different mathematical characterizations, each of which brings different flavours to our understanding of the idea.

Lemmas for the Working Mathematician. The Pumping Lemma is a great example of a theoretical tool that you can leverage to solve different problems. Working with Lemmas is good practice for trying to build upon existing results.

Necessary != Sufficient. The Myhill-Nerode theorem gives you necessary and sufficient conditions for a language to be regular. The Pumping Lemma gives us necessary conditions. Comparing the two and using them in different situations helped me understand the difference between necessary and sufficient conditions in mathematical practice. I also learnt that a reusable necessary and sufficient condition is a luxury.

The Programming Language Perspective. Regular expressions are a simple and beautiful example of a programming language. In concatenation, you have an analogue of sequential composition and in Kleene star, you have the analogue of iteration. In defining the syntax and semantics of regular expressions, you make a baby step in the direction of programming language theory by seeing inductive definitions and compositional semantics.

The Compiler Perspective. The translation from a regular expression to a finite automaton is also a simple, theoretical compiler. You can see the difference between parsing, intermediate-code generation, and compiler optimizations, because of the difference in reading a regular expression, generating an automaton, and then minimizing/determinizing the automaton.

The Power of Iteration. In seeing what you can do in a finite-automaton with a loop and one without, you can appreciate the power of iteration. This can help understanding differences between circuits and machines, or between classical logics and fixed point logics.

Algebra and Coalgebra. Regular languages form a syntactic monoid, which is an algebraic structure. Finite automata form what in the language of category theory is called a coalgebra. In the case of a deterministic automaton, we can easily move between an algebraic and a coalgebraic representation, but in the case of NFAs, this is not so easy.

The Arithmetic Perspective. There is a deep connection between computation and number theory.
You may choose to understand this as a statement about the power of number theory, and/or the universality of computation. You usually know that finite automata can recognize an even number of symbols, and that they cannot count enough to match parentheses. But how much arithmetic are they capable of? Finite automata can decide Presburger arithmetic formulae. The simplest decision procedure I know for Presburger arithmetic reduces a formula to an automaton. This is one glimpse from which you can progress to Hilbert's 10th problem and its resolution, which led to the discovery of a connection between Diophantine equations and Turing machines.

The Logical Perspective. Computation can be understood from a purely logical perspective. Finite automata can be characterised by weak, monadic second order logic over finite words. This is my favourite, non-trivial example of a logical characterisation of a computational device. Descriptive complexity theory shows that many complexity classes have purely logical characterisations too.

Finite Automata are Hiding in Places you Never Imagined. (Hat-tip to Martin Berger's comment on the connection to coding theory.) The 2011 Nobel Prize in Chemistry was given for the discovery of quasi-crystals. The mathematics behind quasi-crystals is connected to aperiodic tilings. One specific aperiodic tiling of the plane is called the Cartwheel Tiling, which consists of a kite shape and a bow-tie shape. You can encode these shapes in terms of 0s and 1s and then study properties of these sequences, which code sequences of patterns. In fact, if you map 0 to 01 and 1 to 0, and repeatedly apply this map to the digit 0, you will get, 0, 01, 010, 01001, etc. Observe that the lengths of these strings follow the Fibonacci sequence. Words generated in this manner are called Fibonacci words. Certain shape sequences observed in Penrose tilings can be coded as Fibonacci words. Such words have been studied from an automata-theoretic perspective, and guess what, some families of words are accepted by finite automata, and even provide examples of worst-case behaviour for standard algorithms such as Hopcroft's minimization algorithm. Please tell me you are dizzy.

I could go on. (And on.)* I find it useful to have automata in the back of my head and recall them every now and then to understand a new concept or to gain intuition about high-level mathematical ideas. I doubt that everything I mention above can be communicated in the first few lectures of a course, or even in a first course. These are long-term rewards based on an initial investment made in the initial lectures of an automata theory course. To address your title: I don't always seek enlightenment, but when I do, I prefer finite automata. Stay thirsty, my friend.
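As a concrete companion to the first point above (determinism vs. non-determinism), here is a small Python sketch (my own illustration) of the subset construction, run on the classic language "the $k$-th symbol from the end is 1", where an NFA with $k+1$ states determinizes to $2^k$ reachable DFA states:

```python
def nfa_to_dfa(alphabet, delta, start):
    """Subset construction; delta maps (state, symbol) -> set of states."""
    start_set = frozenset([start])
    dfa, todo = {}, [start_set]
    while todo:
        S = todo.pop()
        if S in dfa:
            continue
        dfa[S] = {}
        for a in alphabet:
            T = frozenset(t for s in S for t in delta.get((s, a), ()))
            dfa[S][a] = T
            todo.append(T)
    return dfa

k = 6
# NFA: state 0 loops on 0/1 and guesses the distinguished 1; states 1..k count.
delta = {(0, '0'): {0}, (0, '1'): {0, 1}}
for i in range(1, k):
    delta[(i, '0')] = {i + 1}
    delta[(i, '1')] = {i + 1}

print(len(nfa_to_dfa('01', delta, 0)))  # 2**k = 64 reachable DFA states
```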
{ "source": [ "https://cstheory.stackexchange.com/questions/14811", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/7014/" ] }
14,999
There are many places where the numbers $\pi$ and $(1+\sqrt5)/2$ show up. I'm curious to know about algorithms whose running time contains the golden ratio or $\pi$ in the exponent.
It's the base rather than the exponent, but there's an $O(\varphi^k n^2)$ FPT time bound in "An Efficient Fixed Parameter Tractable Algorithm for 1-Sided Crossing Minimization", Vida Dujmovic, Sue Whitesides, Algorithmica 40:15–31, 2004. Also, it's a lower bound rather than an upper bound, but: "An $n^{1.618}$ lower bound on the time to simulate one queue or two pushdown stores by one tape", Paul M. B. Vitányi, Inf. Proc. Lett. 21:147–152, 1985. Finally, the one I was trying to find when I ran across those other two: the ham sandwich tree, a now-obsolete data structure in computational geometry for triangular range queries, has query time $O(n^{\log_2\varphi})\approx O(n^{0.695})$. So the golden ratio is properly in the exponent, but with a log rather than as itself. The data structure is a hierarchical partition of the plane into convex cells, with the overall structure of a binary tree, where each cell and its sibling in the tree are partitioned with a ham sandwich cut. The query time is determined by the recurrence $Q(n)=Q(\frac{n}{2})+Q(\frac{n}{4})+O(\log n)$, which has the above solution: setting $Q(n)=n^c$ gives the characteristic equation $2^{-c}+4^{-c}=1$, whose solution is $c=\log_2\varphi$. It's described (with a more boring name) by "Halfplanar range search in linear space and $O(n^{0.695})$ query time", Herbert Edelsbrunner, Emo Welzl, Inf. Proc. Lett. 23:289–293, 1986.
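A two-line numeric check of that characteristic equation (my own addition):

```python
from math import log, sqrt

phi = (1 + sqrt(5)) / 2
c = log(phi, 2)           # exponent of the ham sandwich tree query time
print(c)                  # ~0.694
print(2**-c + 4**-c)      # ~1.0, confirming 2^-c + 4^-c = 1
```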
{ "source": [ "https://cstheory.stackexchange.com/questions/14999", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/13252/" ] }
16,119
Let $X$ denote a (decision) problem in NP and let #$X$ denote its counting version. Under what conditions is it known that "X is NP-complete" $\implies$ "#X is #P-complete"? Of course the existence of a parsimonious reduction is one such condition, but this is obvious and the only such condition of which I am aware. The ultimate goal would be to show that no condition is needed. Formally speaking, one should start with the counting problem #$X$ defined by a function $f : \{0,1\}^* \to \mathbb{N}$ and then define the decision problem $X$ on an input string $s$ as $f(s) \ne 0$?
The most recent paper on this question seems to be: Noam Livne, A note on #P-completeness of NP-witnessing relations, Information Processing Letters, Volume 109, Issue 5, 15 February 2009, Pages 259–261, http://www.sciencedirect.com/science/article/pii/S0020019008003141, which gives some sufficient conditions. Interestingly, the introduction states "To date, all known NP complete sets have a defining relation which is #P complete", so the answer to Suresh's comment is "no examples are known".
{ "source": [ "https://cstheory.stackexchange.com/questions/16119", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/3964/" ] }
16,244
Many experts believe that the $\mathsf{P} \neq \mathsf{NP}$ conjecture is true and use it in their results. My concern is that complexity theory strongly depends on the $\mathsf{P} \neq \mathsf{NP}$ conjecture. So my question is: as long as the $\mathsf{P}\neq\mathsf{NP}$ conjecture is not proven, can/should one consider it a law of nature, as indicated in the quote from Strassen? Or should we treat it as a mathematical conjecture that may be proved or disproved someday? Quote: "The evidence in favor of Cook's and Valiant's hypotheses is so overwhelming, and the consequences of their failure are so grotesque, that their status may perhaps be compared to that of physical laws rather than that of ordinary mathematical conjectures." [Volker Strassen's laudation to the Nevanlinna Prize winner, Leslie G. Valiant, in 1986] I ask this question after reading the post Physics results in TCS?. It is perhaps interesting to note that computational complexity has some similarities to (theoretical) physics: many important complexity results have been proved by assuming $\mathsf{P} \neq \mathsf{NP}$, while in theoretical physics results are proven by assuming some physical laws. In this sense, $\mathsf{P} \neq \mathsf{NP}$ can be considered something like $E = mc^2$. Back to Physics results in TCS?: could (part of) TCS be a branch of the natural sciences? Clarification (cf. Suresh's answer below): Is it legitimate to say that the $\mathsf{P}\neq\mathsf{NP}$ conjecture in complexity theory is as fundamental as a physical law in theoretical physics (as Strassen said)?
Strassen's statement needs to be put into context. This was an address to an audience of mathematicians in 1986, a time when many mathematicians did not have a high opinion of theoretical computer science. The complete statement is: For some of you it may seem that the theories discussed here rest on weak foundations. They do not. The evidence in favor of Cook's and Valiant's hypotheses is so overwhelming, and the consequences of their failure are so grotesque, that their status may perhaps be compared to that of physical laws rather than that of ordinary mathematical conjectures. I am sure that Strassen had had conversations with pure mathematicians who said something along the lines of "You're basing the whole of complexity theory on a house of cards. What if P=NP? Then all your theorems will be meaningless. Why don't you just put forth a little effort and prove that P$\neq$NP, rather than keep building a theory on such weak foundations." In 2013, when P$\neq$NP has been a Clay prize problem for a dozen years, it may seem difficult to believe that any mathematicians actually had such attitudes; however, I can personally vouch that some did. Strassen continues by saying that we should not give up looking for a proof of P$\neq$NP (thus indirectly implying that it is indeed a mathematical conjecture): Nevertheless, a traditional proof would be of great interest, and it seems to me that Valiant's hypothesis may be easier to confirm than Cook's... so maybe I would label it as a "working hypothesis" rather than a "physical law". Let me finally note that mathematicians also use such working hypotheses. There are a large number of mathematics papers proving theorems whose statements run "Assuming the Riemann hypothesis is true, then ...".
{ "source": [ "https://cstheory.stackexchange.com/questions/16244", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/6706/" ] }
16,335
I've never seen an algorithm with a log in the denominator before, and I'm wondering if there are any actually useful algorithms with this form? I understand lots of things that might cause a log factor to be multiplied in the run time, e.g. sorting or tree based algorithms, but what could cause you to divide by a log factor?
The usual answer to "what could cause you to divide by a log?" is a combination of two things: a model of computation in which constant time arithmetic operations on word-sized integers are allowed, but in which you want to be conservative about how long the words are, so you assume $O(\log n)$ bits per word (because any fewer than that and you couldn't even address all of memory, and also because algorithms that use table lookups would take too much time to set up the tables if the words were longer), and an algorithm that compresses the data by packing bits into words and then operates on the words. I think there are many examples, but the classic example is the Four Russians Algorithm for longest common subsequences etc. It actually ends up being $O(n^2/\log^2 n)$, because it uses the bit-packing idea but then saves a second log factor using another idea: replacing blocks of $O(\log^2 n)$ bit operations by a single table lookup.
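To make the bit-packing idea concrete, here is a minimal Haskell sketch (my illustration, not the Four Russians algorithm itself; the byte-sized chunks and 64-bit words are arbitrary choices): it counts the set bits of a bit vector packed into words, replacing per-bit work by a constant number of table lookups per word.

```haskell
import Data.Array (Array, listArray, (!))
import Data.Bits (popCount, shiftR, (.&.))
import Data.Word (Word8, Word64)

-- Precomputed answers for all 2^8 byte values, built once.
table :: Array Word8 Int
table = listArray (0, 255) [popCount b | b <- [0 :: Word8 .. 255]]

-- One 64-bit word costs 8 table lookups rather than 64 bit tests.
popCountWord :: Word64 -> Int
popCountWord w = sum [ table ! fromIntegral ((w `shiftR` (8 * i)) .&. 0xff)
                     | i <- [0 .. 7] ]

-- A bit vector packed into words, processed word by word.
popCountPacked :: [Word64] -> Int
popCountPacked = sum . map popCountWord

main :: IO ()
main = print (popCountPacked [0xff, 0x0f00])  -- 8 + 4 = 12
```

With words of $\Theta(\log n)$ bits and a precomputed table for all chunks of, say, $(\log n)/2$ bits, the same pattern turns $\Theta(n)$ bit operations into $O(n/\log n)$ word operations, which is exactly the division by a log factor discussed above.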
{ "source": [ "https://cstheory.stackexchange.com/questions/16335", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/10267/" ] }
16,354
I'm looking for undirected, unweighted, connected graphs $G=(V,E)$, in which for every pair $u,v \in V$, there is a unique $u \rightarrow v$ path that realizes the distance $d(u,v)$. Is this class of graphs well-known? What other properties does it have? For example, every tree is of this kind, as well as every graph without an even cycle. However, there are graphs containing even cycles that are of this kind.
According to the Information System on Graph Classes and their Inclusions, these graphs are studied under the name "geodetic graphs".
{ "source": [ "https://cstheory.stackexchange.com/questions/16354", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1609/" ] }
16,401
Let $\mathsf{REG}$ be the class of all regular languages. It is known $\mathsf{AC}^0 \not\subset \mathsf{REG}$ and $\mathsf{REG} \not\subset \mathsf{AC}^0$. But is there any characterization for languages in $\mathsf{AC}^0 \cap \mathsf{REG}$?
The following paper seems to contain an answer: Mix Barrington, D. A., Compton, K., Straubing, H., Therien, D.: Regular languages in $\mathsf{NC}^1$. Journal of Computer and System Sciences 44(3), 478-499 (1992) ( link ) One of the characterizations obtained there is as follows. The class $\mathsf{REG} \cap \mathsf{AC}^0 \subset \{0, 1\}^*$ contains exactly those languages that can be obtained from $\{0\}$, $\{1\}$ and $\mathsf{LENGTH}(q)$ for $q > 1$ with a finite number of Boolean operations and concatenations. Here every language $\mathsf{LENGTH}(q)$ contains all strings whose length is divisible by $q$. (There is also a logical characterization and two algebraic ones.)
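A concrete illustration of the characterization (my example, not from the paper, and assuming the Boolean operations include intersection and complement): over $\Sigma = \{0,1\}$ we have $\Sigma^* = \overline{\{0\} \cap \{1\}}$, so the regular language of even-length strings beginning with $1$ can be written as $(\{1\} \cdot \Sigma^*) \cap \mathsf{LENGTH}(2)$, built from the generators by concatenation and Boolean operations; by the theorem it therefore lies in $\mathsf{REG} \cap \mathsf{AC}^0$. By contrast, $\mathsf{PARITY}$ (binary strings with an odd number of $1$s) is regular but famously outside $\mathsf{AC}^0$, so it admits no such expression.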
{ "source": [ "https://cstheory.stackexchange.com/questions/16401", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/10564/" ] }
16,512
Is it possible to algorithmically test if a computable number is rational or integer? In other words, would it be possible for a library that implements computable numbers to provide the functions isInteger or isRational ? I am guessing that it is not possible, and that this is somehow related to the fact that it is not possible to test if two numbers are equal, but I don't see how to prove it. Edit: A computable number $x$ is given by a function $f_x(\epsilon)$ that can return a rational approximation of $x$ with precision $\epsilon$: $|x - f_x(\epsilon)| \leq \epsilon$, for any $\epsilon > 0$. Given such function, is it possible to test if $x \in \mathrm{Q}$ or $x \in \mathrm{Z}$?
It is easy to get confused about what it means to "represent" or "implement" a real number. In fact, we are witnessing a discussion in the comments where the representation is contentious. So let me address this first. How do we know that an implementation is correct? The theory which explains how to represent things in a computer is realizability. The basic idea is that, given a set $X$, we pick a datatype $\tau$ and assign to every $x \in X$ a set of values of type $\tau$ which realize it. We write $v \vdash x \in X$ when $v$ is a value that realizes $x$. For example (I shall use Haskell for no good reason), a sensible implementation of $\mathbb{N}$ might be the datatype Integer where $v \vdash k \in \mathbb{N}$ when $v$ evaluates to the numeral $\overline{k}$ (thus in particular -42 does not represent a natural number, and neither does a diverging program). But some joker could walk by and suggest that we use Bool to represent natural numbers with $\mathtt{True} \vdash 42 \in \mathbb{N}$ and $\mathtt{False} \vdash n \in \mathbb{N}$ for $n \neq 42$. Why is this incorrect? We need a criterion. In the case of "joker numbers" the easy observation is that addition cannot be implemented. Suppose I tell you I have two numbers, both represented by $\mathtt{False}$. Can you give a realizer for their sum? Well, that depends on whether the sum is 42, but you cannot tell. Since addition is an "essential part of what natural numbers are", this is unacceptable. In other words, implementation is not about sets, but about structures, i.e., we have to represent sets in such a way that it is possible to also implement the relevant structure. Let me stress this: We implement structures, not bare sets. Therefore, we have to be able to implement the entire structure, together with operations and all the axioms, in order for the implementation to be correct. If you do not abide by this principle, then you have to suggest an alternative mathematical criterion of correctness. I do not know of one. Example: representation of natural numbers For natural numbers the relevant structure is described by the Peano axioms, and the crucial axiom that has to be implemented is induction (but also $0$, successor, $+$ and $\times$). We can compute, using realizability, what the implementation of induction does. It turns out to be a map (where nat is the yet unknown datatype which represents natural numbers) induction : 'a -> (nat -> 'a -> 'a) -> nat -> 'a satisfying induction x f zero = x and induction x f (succ n) = f n (induction x f n). All this comes out of realizability. We have a criterion: an implementation of natural numbers is correct when it allows an implementation of the Peano axioms. A similar result would be obtained if we used the characterization of numbers as the initial algebra for the functor $X \mapsto 1 + X$. Correct implementation of real numbers Let us turn attention to the real numbers and the question at hand. The first question to ask is "what is the relevant structure of the real numbers?" The answer is: Archimedean Cauchy complete ordered field. This is the established meaning of "real numbers". You do not get to change it, it has been fixed by others for you (in our case the alternative Dedekind reals turn out to be isomorphic to the Cauchy reals, which we are considering here.) You cannot take away any part of it, you are not allowed to say "I do not care about implementing addition", or "I do not care about the order".
If you do that, you must not call it "real numbers", but something like "real numbers where we forget the linear order". I am not going to go into all the details, but let me just explain how the various parts of the structure give various operations on reals: the Archimedean axiom is about computing rational approximations of reals; the field structure gives the usual arithmetical operations; the linear order gives us a semidecidable procedure for testing $x < y$; the Cauchy completeness gives us a function lim : (nat -> real) -> real which takes a (representation of a) rapid Cauchy sequence and returns its limit. (A sequence $(x_n)_n$ is rapid if $|x_n - x_m| \leq 2^{-\min(n,m)}$ for all $m, n$.) What we do not get is a test function for equality. There is nothing in the axioms for reals which asks that $=$ be decidable. (In contrast, the Peano axioms imply that the natural numbers are decidable, and you can prove that by implementing eq : nat -> nat -> Bool using only induction as a fun exercise). It is a fact that the usual decimal representation of reals that humanity uses is bad because with it we cannot even implement addition. Floating point with infinite mantissa fails as well (exercise: why?). What works, however, is signed-digit representation, i.e., one in which we allow negative digits as well as positive ones. Or we could use sequences of rationals which satisfy the rapid Cauchy test, as stated above. The Tsuyoshi representation also implements something, but not $\mathbb{R}$ Let us consider the following representation of reals: a real $x$ is represented by a pair $(q,b)$ where $q = (q_n)_n$ is a rapid Cauchy sequence converging to $x$ and $b$ is a Boolean indicating whether $x$ is an integer. For this to be a representation of the reals, we would have to implement addition, but as it turns out we cannot compute the Boolean flags. So this is not a representation of the reals. But it still does represent something, namely the subset of the reals $\mathbb{Z} \cup (\mathbb{R} \setminus \mathbb{Z})$. Indeed, according to the realizability interpretation a union is implemented with a flag indicating which part of the union we are in. By the way, $\mathbb{Z} \cup (\mathbb{R} \setminus \mathbb{Z})$ is not equal to $\mathbb{R}$, unless you believe in excluded middle, which cannot be implemented and is therefore quite irrelevant for this discussion. We are forced by computers to do things intuitionistically. We cannot test whether a real is an integer Finally, let me answer the question that was asked. We now know that an acceptable representation of the reals is one by rapid Cauchy sequences of rationals. (An important theorem states that any two representations of reals which are acceptable are actually computably isomorphic.) Theorem: Testing whether a real is an integer is not decidable. Proof. Suppose we could test whether a real is an integer (of course, the real is realized by a rapid Cauchy sequence). The idea, which will allow you to prove a much more general theorem if you want, is to construct a rapid Cauchy sequence $(x_n)_n$ of non-integers which converges to an integer. This is easy: just take $x_n = 2^{-n}$. Next, solve the Halting problem as follows.
Given a Turing machine $T$ , define a new sequence $(y_n)_n$ by $$y_n = \begin{cases} x_n & \text{if $T$ has not stopped within $n$ steps}\\ x_m & \text{if $T$ stopped in step $m$ and $m \leq n$} \end{cases}$$ That is, the new sequence looks like the sequence $(x_n)_n$ as long as $T$ runs, but then it gets "stuck" at $x_m$ if $T$ halts in step $m$ . Very importantly, the new sequence is also a rapid Cauchy sequence (and we can prove this without knowing whether $T$ halts). Therefore, we can compute its limit $z = \lim_n y_n$ , because our representation of reals is correct. Test whether $z$ is an integer. If it is, then it must be $0$ and this only happens if $T$ runs forever. Otherwise, $z$ is not an integer, so $T$ must have stopped. QED. Exercise: adapt the above proof to show that we cannot test for rational numbers. Then adapt it to show we cannot test for anything non-trivial (this is a bit harder). Sometimes people get confused about all this testing business. They think we have proved that we can never test whether a real is an integer. But surely, 42 is a real and we can tell whether it is an integer. In fact, any particular real we come up with, $\sin 11$ , $88 \ln 89$ , $e^{\pi \sqrt{163}}$ , etc., we can perfectly well tell whether they are integers. Precisely, we can tell because we have extra information: these reals are not given to us as sequences, but rather as symbolic expressions from which we can compute the Tsuyoshi bit. As soon as the only information we have about the real is a sequence of rational approximations converging to it (and I do not mean a symbolic expression describing the sequence, but a black box which outputs the $n$ -th term on input $n$ ) then we will be just as helpless as machines. The moral of the story It makes no sense to talk about implementation of a set unless we know what sort of operations we want to perform on it.
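The proof translates directly into code. A minimal Haskell sketch (mine, not from the original answer), where `haltsWithin` is a hypothetical stand-in for the computable predicate "the fixed machine $T$ halts within $n$ steps":

```haskell
import Data.Ratio ((%))

-- A computable real as a rapid Cauchy sequence of rationals:
-- n |-> x_n with |x_n - x_m| <= 2^(-min n m).
type CReal = Integer -> Rational

-- Non-integers converging to the integer 0.
x :: CReal
x n = 1 % (2 ^ n)

-- The sequence from the proof: follows x while T runs, gets stuck
-- at x_m once T halts at step m.  Still a rapid Cauchy sequence.
y :: (Integer -> Bool) -> CReal
y haltsWithin n
  | haltsWithin n = x (head [m | m <- [0 ..], haltsWithin m])
  | otherwise     = x n

-- A decision procedure isInteger :: CReal -> Bool would now decide
-- halting: the limit of (y haltsWithin) is an integer (namely 0)
-- exactly when T never halts.
```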
{ "source": [ "https://cstheory.stackexchange.com/questions/16512", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/7330/" ] }
16,628
How can we express "$P=PSPACE$" as a first-order formula? Which level of the arithmetic hierarchy contains this formula (and what is the currently known minimum level of the hierarchy that contains it)? For reference, see this blog post by Lipton .
Firstly, I want to address the comments to the question, where it was suggested that "false" expresses $P = PSPACE$ because the statement is false. While this might be a good joke, it is actually very much harmful to think this way. When we ask how to express a certain sentence in a certain formal system, we are not talking about truth values. If we were, then when someone asked "How do I write down the fact that there are infinitely many primes?" we could answer "3 + 3 = 6", but this clearly will not do. For the same reason "false" is not a valid answer to "how do I write down $P = PSPACE$?". I think Frege and Russell tried hard to teach us that lesson. Ok, now to the answer. Let me show how to express $PSPACE \subseteq P$; the other direction is similar, and then you can put them together in a conjunction to get $PSPACE = P$. In any case, for your purposes it may be sufficient to express just $PSPACE \subseteq P$, depending on what you are doing. Using techniques similar to those in the construction of Kleene's predicate $T$, we can construct a bounded-quantifier formula $accept_{space}(k, m, n)$ (which thus resides in $\Sigma^0_0 = \Pi^0_0$) saying "when we run the machine encoded by $k$ and bound its space usage to $|n|^m$, the machine accepts the input $n$." Here $|n|$ is the length of $n$. An informal way of seeing that such formulas exist is this: given $k$, $m$, and $n$ we can compute a primitive recursive bound on how much time and how much space we are ever going to need (i.e., at most $|n|^m$ space and at most $2^{|n|^m}$ time). We then simply search through all possible execution traces which are within the computed bounds--such a search is rather inefficient, but it is primitive recursive and so we can express it as a bounded formula. There is a similar formula $accept_{time}(k, m, n)$ in which the running time is bounded by $|n|^m$. Now consider the formula: $$\forall k, m . \exists k', m' . \forall n . accept_{space}(k,m,n) \Leftrightarrow accept_{time}(k',m', n). $$ It says that for every machine $k$ which uses at most space $|n|^m$ there is a machine $k'$ which uses at most time $|n|^{m'}$ such that the two machines accept exactly the same $n$'s. In other words, the formula says $PSPACE \subseteq P$. This formula is $\Pi^0_3$. We can improve this if we are willing to express instead the sentence "$TQBF$ is in polytime", which should be good enough for most applications, as TQBF is PSPACE-complete and so it being in polytime is equivalent to $PSPACE \subseteq P$. Let $k_0$ be (the code of) a machine which recognizes TQBF in space $|n|^{m_0}$. Then "$TQBF \in P$" can be expressed as $$\exists k', m' . \forall n . accept_{space}(k_0, m_0, n) \Leftrightarrow accept_{time}(k', m', n).$$ This formula is just $\Sigma^0_2$. If I were a complexity theorist I would know if it is possible to do even better (but I doubt it).
{ "source": [ "https://cstheory.stackexchange.com/questions/16628", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/13936/" ] }
16,904
In many papers involving context-free grammars (CFGs), the examples of such grammars presented there often admit easy characterizations of the language they generate. For example: $S \to a a S b$ $S \to $ generates $\{ a^{2i} b^i | i \geq 0\}$, $S \to a S b$ $S \to a a S b$ $S \to $ generates $\{ a^i b^j \mid i \geq j \geq 0 \}$, and $S \to a S a$ $S \to b S b$ $S \to $ generates $\{ w w^R \mid w \in (a|b)^* \}$, or equivalently $\{ ((a|b)^*)_1 ((a|b)^*)_2 \mid p_1 = p_2^R \}$ (where $p_1$ refers to the part captured by $(...)_1$). The above examples can all be generated by adding indices ($a^i$), simple constraints on these indices ($i > j$) and pattern matching to regular expressions. This makes me wonder whether all context-free languages can be generated by some extension of the regular expressions. Is there an extension of regular expressions that can generate all of or some significant subset of the context free languages?
Yes, there is. Define a context-free expression to be a term generated by the following grammar: $$ \begin{array}{lcll} g & ::= & \epsilon & \mbox{Empty string}\\ & | & c & \mbox{Character $c$ in alphabet $\Sigma$} \\ & | & g \cdot g & \mbox{Concatenation} \\ & | & \bot & \mbox{Failing pattern} \\ & | & g \vee g & \mbox{Disjunction}\\ & | & \mu \alpha.\; g & \mbox{Recursive grammar expression} \\ & | & \alpha & \mbox{Variable expression} \end{array} $$ This is all of the constructors for regular languages except Kleene star, which is replaced by a general fixed-point operator $\mu \alpha.\;g$, and a variable reference mechanism. (The Kleene star is not needed, since it can be defined as $g\ast \triangleq \mu \alpha.\;\epsilon \vee g\cdot\alpha$.) The interpretation of a context-free expression requires accounting for the interpretation of free variables. So define an environment $\rho$ to be a map from variables to languages (i.e., subsets of $\Sigma^*$), and let $[\rho|\alpha:L]$ be the function that behaves like $\rho$ on all inputs except $\alpha$, and which returns the language $L$ for $\alpha$. Now, define the interpretation of a context-free expression as follows: $$ \newcommand{\interp}[2]{[\![{#1}]\!]\;{#2}} \newcommand{\setof}[1]{\left\{#1\right\}} \newcommand{\comprehend}[2]{\setof{{#1}\;\mid\;{#2}}} \begin{array}{lcl} \interp{\epsilon}{\rho} & = & \setof{\epsilon} \\ \interp{c}{\rho} & = & \setof{c} \\ \interp{g_1\cdot g_2}{\rho} & = & \comprehend{w_1 \cdot w_2}{w_1 \in \interp{g_1}{\rho} \land w_2 \in \interp{g_2}{\rho}} \\ \interp{\bot}{\rho} & = & \emptyset \\ \interp{g_1 \vee g_2}{\rho} & = & \interp{g_1}{\rho} \cup \interp{g_2}{\rho} \\ \interp{\alpha}{\rho} & = & \rho(\alpha) \\ \interp{\mu \alpha.\; g}{\rho} & = & \bigcup_{n \in \mathbb{N}} L_n \\ \mbox{where} & & \\ L_0 & = & \emptyset \\ L_{n+1} & = & L_n \cup \interp{g}{[\rho|\alpha:L_n]} \end{array} $$ Using the Knaster-Tarski theorem, it's easy to see that the interpretation of $\mu \alpha.g$ is the least fixed point of the expression. It's straightforward (albeit not entirely trivial) to show that you can give a context-free expression deriving the same language as any context-free grammar, and vice-versa. The non-triviality arises from the fact that context-free expressions have nested fixed points, and context-free grammars give you a single fixed point over a tuple. This requires the use of Bekic's lemma, which says precisely that nested fixed points can be converted to a single fixed point over a product (and vice-versa). But that's the only subtlety. EDIT: No, I don't know a standard reference for this: I worked it out for my own interest. However, it's an obvious enough construction that I'm confident it's been invented before. Some casual Googling reveals Joost Winter, Marcello Bonsangue and Jan Rutten's recent paper Context-Free Languages, Coalgebraically , where they give a variant of this definition (requiring all fixed points to be guarded) which they also call context-free expressions.
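To experiment with these, here is a small Haskell sketch (my own rendition, not from the papers cited above), using named variables instead of binders and computing the least fixed point by iterating from $\emptyset$; restricting to words of length at most $n$ keeps all the sets finite, so the iteration terminates:

```haskell
import qualified Data.Set as S

data CFE = Eps | Chr Char | Cat CFE CFE | Bot | Alt CFE CFE
         | Var String | Mu String CFE

-- Language approximation: all generated words of length <= n.
langN :: Int -> [(String, S.Set String)] -> CFE -> S.Set String
langN n env e = case e of
  Eps     -> S.singleton ""
  Chr c   -> if n >= 1 then S.singleton [c] else S.empty
  Bot     -> S.empty
  Alt g h -> langN n env g `S.union` langN n env h
  Cat g h -> S.fromList [ u ++ v | u <- S.toList (langN n env g)
                                 , v <- S.toList (langN n env h)
                                 , length (u ++ v) <= n ]
  Var x   -> maybe S.empty id (lookup x env)
  Mu x g  -> fix S.empty                    -- iterate L_0 = {}, L_{i+1} = F(L_i)
    where fix l = let l' = langN n ((x, l) : env) g
                  in if l' == l then l else fix l'

-- Balanced parentheses: mu a. eps \/ '(' a ')' a
dyck :: CFE
dyck = Mu "a" (Alt Eps (Cat (Chr '(') (Cat (Var "a") (Cat (Chr ')') (Var "a")))))

main :: IO ()
main = print (S.member "(()())" (langN 6 [] dyck))  -- True
```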
{ "source": [ "https://cstheory.stackexchange.com/questions/16904", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/988/" ] }
17,006
I was recently reading The Two Dualities of Computation: Negative and Fractional Types . The paper expands on sum-types and product-types, giving semantics to the types a - b and a/b . Unlike addition and multiplication, there are not one but two inverses of exponentiation, logarithms and rooting. If function types (a β†’ b) are type-theoretic exponentiation, given the type a β†’ b (or b^a ) what does it mean to have the type logb(c) or the type a√c ? Does it make sense to extend logarithms and roots to types at all? If so, has there been any work in this area, and what are some good directions on how to comprehend the repercussions? I tried looking up information on this via logic, hoping the Curry-Howard correspondence could help me, but to no avail.
A type $C$ has a logarithm to base $X$ of $P$ exactly when $C \cong P\to X$. That is, $C$ can be seen as a container of $X$ elements in positions given by $P$. Indeed, it's a matter of asking to what power $P$ we must raise $X$ to obtain $C$. It makes sense to work with $\mathop{log}F$ where $F$ is a functor, whenever the logarithm exists, meaning $\mathop{log}\!_X(F\:X)$. Note that if $F\:X\cong \mathop{log}F\to X$, then we certainly have $F\:1\cong 1$, so the container tells us nothing interesting other than its elements: containers with a choice of shapes do not have logarithms. Familiar laws of logarithms make sense when you think in terms of position sets $$\begin{array}{rcl@{\qquad}l} \mathop{log} (\mathop{K}1) &=& 0 & \mbox{no positions in empty container}\\ \mathop{log} I &=& 1 & \mbox{container for one, one position}\\ \mathop{log} (F\times G) &=& \mathop{log}F+\mathop{log}G & \mbox{pair of containers, choice of positions} \\ \mathop{log} (F\cdot G) &=& \mathop{log}F\times\mathop{log}G & \mbox{container of containers, pair of positions} \end{array}$$ We also gain $\mathop{log}\!_X(\nu Y. T) = \mu Z. \mathop{log}\!_X T$ where $Z=\mathop{log}\!_XY$ under the binder. That is, the path to each element in some codata is defined inductively by iterating the logarithm. E.g., $$\mathop{log}\mathop{Stream} = \mathop{log}\!_X(\nu Y. X\times Y) = \mu Z. 1 + Z = \mathop{Nat}$$ Given that the derivative tells us the type in one-hole contexts and the logarithm tells us positions, we should expect a connection, and indeed $$F\:1\cong 1 \;\Rightarrow\; \mathop{log}F\cong \partial F\:1$$ Where there is no choice of shape, a position is just the same as a one-hole context with the elements rubbed out. More generally, $\partial F\:1$ always represents the choice of an $F$ shape together with an element position within that shape. I'm afraid I have less to say about roots, but one could start from a similar definition and follow one's nose. For more uses of logarithms of types, check Ralf Hinze's "Memo functions, polytypically!". Gotta run...
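For instance, the computation $\mathop{log}\mathop{Stream} = \mathop{Nat}$ above is witnessed in Haskell by a `tabulate`/`index` pair (a quick sketch, with `Integer` standing in for $\mathop{Nat}$ and negative positions out of scope):

```haskell
data Stream a = Cons a (Stream a)

-- Stream a  ->  (log Stream -> a): read the element at a position.
index :: Stream a -> Integer -> a
index (Cons x _)  0 = x
index (Cons _ xs) n = index xs (n - 1)

-- (log Stream -> a)  ->  Stream a: the inverse direction.
tabulate :: (Integer -> a) -> Stream a
tabulate f = Cons (f 0) (tabulate (f . (+ 1)))

main :: IO ()
main = print (index (tabulate (\n -> n * n)) 5)  -- 25
```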
{ "source": [ "https://cstheory.stackexchange.com/questions/17006", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/14328/" ] }
17,396
I have a historical question. I’m trying to determine the reference for the fact that 3-colourability of graphs (alternatively, $k$-colourability for given $k\geq 3$) is NP-hard. The tempting answer is β€œKarp’s original paper”, but that is wrong. Here’s a scan: Reducibility among Combinatorial Problems, Karp (1972) . It proves that Chromatic number (Input: a graph. Output: $\chi(G)$) is hard. That’s a harder problem, and the reduction is different from the standard gadget construction (with 3 colours, True, False, and Ground) that implies hardness of 3-colourability. Garey and Johnson, Computers and intractability , have $k$-colourability as [GT4] and refer to Karp (1972).
László Lovász, Coverings and coloring of hypergraphs, Proceedings of the Fourth Southeastern Conference on Combinatorics, Graph Theory, and Computing, Utilitas Math., Winnipeg, Man., 1973, pp. 3--12, proved that Chromatic number reduces to 3-colourability. I think that is the first proof of NP-completeness of 3-colourability. Here is Lovász's paper; see also Vašek Chvátal's excellent explanation of Lovász's reduction.
{ "source": [ "https://cstheory.stackexchange.com/questions/17396", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/5095/" ] }
17,545
I posted this earlier on MSE, but it was suggested that here may be a better place to ask. Universal approximation theorem states that "the standard multilayer feed-forward network with a single hidden layer, which contains finite number of hidden neurons, is a universal approximator among continuous functions on compact subsets of Rn, under mild assumptions on the activation function." I understand what this means, but the relevant papers are too far over my level of math understanding to grasp why it is true or how a hidden layer approximates non-linear functions. So, in terms little more advanced than basic calculus and linear algebra, how does a feed-forward network with one hidden layer approximate non-linear functions? The answer need not necessarily be totally concrete.
Cybenko's result is fairly intuitive, as I hope to convey below; what makes things trickier is that he was aiming both for generality and for a minimal number of hidden layers. Kolmogorov's result (mentioned by vzn) in fact achieves a stronger guarantee, but is somewhat less relevant to machine learning (in particular, it does not build a standard neural net, since the nodes are heterogeneous); this result in turn is daunting since on the surface it is just 3 pages recording some limits and continuous functions, but in reality it is constructing a set of fractals. While Cybenko's result is unusual and very interesting due to the exact techniques he uses, results of that flavor are very widely used in machine learning (and I can point you to others). Here is a high-level summary of why Cybenko's result should hold. A continuous function on a compact set can be approximated by a piecewise constant function. A piecewise constant function can be represented as a neural net as follows. For each region where the function is constant, use a neural net as an indicator function for that region. Then build a final layer with a single node, whose input linear combination is the sum of all the indicators, with a weight equal to the constant value of the corresponding region in the original piecewise constant function. Regarding the first point above, this can be taken as the statement "a continuous function over a compact set is uniformly continuous". What this means to us is you can take your continuous function over $[0,1]^d$, and some target error $\epsilon>0$, then you can grid $[0,1]^d$ at scale $\tau>0$ (ending up with roughly $(1/\tau)^d$ subcubes) so that a function which is constant over each subcube is within $\epsilon$ of the target function. Now, a neural net cannot precisely represent an indicator, but you can get very close. Suppose the "transfer function" is a sigmoid. (Transfer function is the continuous function you apply to a linear combination of inputs in order to get the value of the neural net node.) Then by making the weights huge, you output something close to 0 or close to 1 for more inputs. This is consistent with Cybenko's development: notice he needs the functions involved to equal 0 or 1 in the limit: by definition of limit, you get exactly what I'm saying, meaning you push things arbitrarily close to 0 or 1. (I ignored the transfer function in the final layer; if it's there, and it's continuous, then we can fit anything mapping to $[0,1]$ by replacing the constant weights with something in the inverse image of that constant according to the transfer function.) Notice that the above may seem to take a couple layers: say, 2 to build the indicators on cubes, and then a final output layer. Cybenko was trying for two points of generality: minimal number of hidden layers, and flexibility in the choice of transfer function. I've already described how he works out flexibility in transfer function. To get the minimum number of layers, he avoids the construction above, and instead uses functional analysis to develop a contradiction. Here's a sketch of the argument. The final node computes a linear combination of the elements of the layer below it, and applies a transfer function to it. This linear combination is a linear combination of functions, and as such, is itself a function, a function within some subspace of functions, spanned by the possible nodes in the hidden layer.
A subspace of functions is just like an ordinary finite-dimensional subspace, with the main difference that it is potentially not a closed set; that's why Cybenko's arguments all take the closure of that subspace. We are trying to prove that this closure contains all continuous functions; that will mean we are arbitrarily close to all continuous functions. If the function space were simple (a Hilbert space), we could argue as follows. Pick some target continuous function which is contradictorily supposed to not lie in the subspace, and project it onto the orthogonal complement of the subspace. This residual must be nonzero. But since our subspace can represent things like those little cubes above, we can find some region of this residual, fit a little cube to it (as above), and thereby move closer to our target function. This is a contradiction since projections choose minimal elements. (Note, I am leaving something out here: Cybenko's argument doesn't build any little cubes, he handles this in generality too; this is where he uses a form of the Riesz representation theorem, and properties of the transfer functions (if I remember correctly, there is a separate lemma for this step, and it is longer than the main theorem).) We aren't in a Hilbert space, but we can use the Hahn-Banach theorem to replace the projection step above (note, proving Hahn-Banach uses the axiom of choice). Now I'd like to say a few things about Kolmogorov's result. While this result does not apparently need the sort of background of Cybenko's, I personally think it is much more intimidating. Here is why. Cybenko's result is an approximation guarantee: it does not say we can exactly represent anything. On the other hand, Kolmogorov's result provides an equality. More ridiculously, it says the size of the net: you need just $\mathcal O(d^2)$ nodes. To achieve this strengthening, there is a catch of course, the one I mentioned above: the network is heterogeneous, by which I mean all the transfer functions are not the same. Okay, so with all that, how can this thing possibly work?! Let's go back to our cubes above. Notice that we had to bake in a level of precision: for every $\epsilon>0$, we have to go back and pick a more refined $\tau >0$. Since we are working with (finite) linear combinations of indicators, we are never exactly representing anything. (Things only get worse if you include the approximating effects of sigmoids.) So what's the solution? Well, how about we handle all scales simultaneously? I'm not making this up: Kolmogorov's proof is effectively constructing the hidden layer as a set of fractals. Said another way, they are basically space-filling curves which map $[0,1]$ to $[0,1]^d$; this way, even though we have a combination of univariate functions, we can fit any multivariate function. In fact, you can heuristically reason that $\mathcal O(d^2)$ is "correct" via a ridiculous counting argument: we are writing a continuous function from $\mathbb{R}^d$ to $\mathbb R$ via univariate continuous functions, and therefore, to capture all inter-coordinate interactions, we need $\mathcal O(d^2)$ functions... Note that Cybenko's result, due to using only one type of transfer function, is more relevant to machine learning.
Theorems of this type are very common in machine learning (vzn suggested this in his answer, however he referred to Kolmogorov's result, which is less applicable due to the custom transfer functions; this is weakened in some more fancy versions of Kolmogorov's result (produced by other authors), but those still involve fractals, and at least two transfer functions). I have some slides on these topics, which I could post if you are interested (hopefully less rambly than the above, and have some pictures; I wrote them before I was adept with Hahn-Banach, however). I think both proofs are very, very nice. (Also, I have another answer here on these topics, but I wrote it before I had grokked Kolmogorov's result.)
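The piecewise-constant argument in the first half is easy to check numerically. A small Haskell sketch (my own illustration; the grid of 100 cells and the sharpness 1000 are arbitrary choices): each grid cell gets an approximate indicator built from two steep sigmoids, i.e. a single hidden layer, and the output layer weights each indicator by the target function's value on that cell.

```haskell
sigmoid :: Double -> Double
sigmoid t = 1 / (1 + exp (negate t))

-- Approximate indicator of [a, b): two steep sigmoids, sharper as w grows.
bump :: Double -> Double -> Double -> Double -> Double
bump w a b z = sigmoid (w * (z - a)) - sigmoid (w * (z - b))

-- One hidden layer of 2*cells sigmoid units approximating f on [0,1].
net :: (Double -> Double) -> Int -> Double -> Double
net f cells z = sum [ f (mid i) * bump 1000 (lo i) (hi i) z | i <- [0 .. cells - 1] ]
  where h     = 1 / fromIntegral cells
        lo i  = fromIntegral i * h
        hi i  = lo i + h
        mid i = lo i + h / 2

main :: IO ()
main = print (maximum [ abs (net sin 100 z - sin z) | z <- [0.01, 0.02 .. 0.99] ])
  -- the error on the grid is small, and shrinks as cells grows
```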
{ "source": [ "https://cstheory.stackexchange.com/questions/17545", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/4008/" ] }
17,610
Can someone provide a concise explanation of Mulmuley's GCT approach understandable by non-experts? An explanation that would be suitable for a Wikipedia page on the topic (which is stub at the moment). Motivation: I am "co-reading" Scott Aaronson's book Quantum Computing since Democritus with a friend of mine who is a researcher in string theory. In the preface of the book, Aaronson calls GCT "the string theory of computer science". Being a string theorist, my friend got excited about this claim and asked me what GCT is. At that point I shamefully realized I didn't have a Wikipedia-ready answer for his question.
I'm not exactly sure what level is suitable for Wikipedia article (different articles seem to be aimed at different levels of expertise) or exactly what you're looking for. So here's a try, but I'm open to feedback. Geometric complexity theory proposes to study the computational complexity of computing functions (say, polynomials) by exploiting the inherent symmetries in complexity and any additional symmetries of the functions being studied. As with many previous approaches, the ultimate goal is to separate two complexity classes $\mathcal{C}_{easy}, \mathcal{C}_{hard}$ by showing that there is a polynomial $p$ which takes functions $f$ as inputs (say, by their coefficient vectors) such that $p$ vanishes on every function $f \in \mathcal{C}_{easy}$ but does not vanish on some function $g_{hard} \in \mathcal{C}_{hard}$. The first key idea (cf. [GCT1, GCT2]) is to use symmetries to organize not the functions themselves, but to organize the ( algebro-geometric ) properties of these functions, as captured by polynomials such as $p$ above. This enables the use of representation theory in attempting to find such a $p$. Similar ideas relating representation theory and algebraic geometry had been used in algebraic geometry before, but to my knowledge never quite in this way. The second key idea (cf. [GCT6]) is to find combinatorial (and polynomial-time) algorithms for the resulting representation-theoretic problems, and then reverse-engineer these algorithms to show that such a $p$ exists. This may be taken in the spirit of using Linear Programming (an algorithm) to prove certain purely combinatorial statements. Indeed, [GCT6] suggests reducing the representation-theoretic problems above to Integer Programming problems, then showing that the resulting IPs are solved by their LP relaxations, and finally giving combinatorial algorithms for the resulting LPs. The conjectures in [GCT6] are themselves motivated by reverse-engineering known results for the Littlewood-Richardson coefficients, an analogous but easier problem in representation theory. In the case of LR coefficients, the Littlewood-Richardson combinatorial rule came first. Later Berenstein and Zelevinsky [BZ] and Knutson and Tao [KT] (see [KT2] for a friendly overview) gave an IP for LR coefficients. Knutson and Tao also proved the saturation conjecture, which implies that the IP is solved by its LP relaxation (cf. [GCT3,BI]). The results of [GCT5] show that explicitly derandomizing Noether's Normalization Lemma is essentially equivalent to the notorious open problem in complexity theory of black-box derandomization of polynomial identity testing . Roughly how this fits into the larger program is that finding an explicit basis for the functions $p$ that (do not) vanish on $\mathcal{C}_{easy}$ (in this case, the class for which the determinant is complete) could be used to derive a combinatorial rule for the desired problem in representation theory, as has happened in other settings in algebraic geometry. An intermediate step here would be to find a basis for those $p$ that (do not) vanish on the normalization of $\mathcal{C}_{easy}$, which is by construction a nicer algebraic variety -- in other words, to derandomize Noether's Normalization Lemma for DET. Examples of symmetries of complexity and functions For example, the complexity of a function $f(x_1, \dotsc, x_n)$ - for most natural notions of complexity - is unchanged if we permute the variables $f(x_{\pi(1)}, \dotsc, x_{\pi(n)})$ by some permutation $\pi$. 
Thus permutations are symmetries of complexity itself. For some notions of complexity (such as in algebraic circuit complexity) all invertible linear changes of the variables are symmetries. Individual functions may have additional symmetries. For example, the determinant $\det(X)$ has the symmetries $\det(AXB) = \det(X^{T}) = \det(X)$ for all matrices $A,B$ such that $\det(AB) = 1$. (From what little I picked up about this, I gather that this is analogous to the phenomenon of spontaneous symmetry-breaking in physics.) Some Recent Progress [this section definitely incomplete and more technical, but a complete account would take tens of pages....I just wanted to highlight some recent progress] Burgisser and Ikenmeyer [BI2] showed a $\frac{3}{2}n^2$ lower bound on matrix multiplication following the GCT program as far as using representations with zero vs nonzero multiplicities. Landsberg and Ottaviani [LO] gave the best known lower bound of essentially $2n^2$ on the border rank of matrix multiplication using representation theory to organize algebraic properties, but not using representation multiplicities nor combinatorial rules. The next problem after Littlewood-Richardson coefficients is the Kronecker coefficients . These show up both in a series of problems that is suspected to eventually reach the representation-theoretic problems arising in GCT, and more directly as bounds on the multiplicities in the GCT approach to matrix multiplication and permanent versus determinant. Finding a combinatorial rule for Kronecker coefficients is a long-standing open problem in representation theory; Blasiak [B] recently gave such a combinatorial rule for Kronecker coefficients with one hook shape. Kumar [K] showed that certain representations appear in the coordinate ring of the determinant with nonzero multiplicity, assuming the column Latin square conjecture (cf. Huang-Rota and Alon-Tarsi; this conjecture also, perhaps not coincidentally, shows up in [BI2]). References [B] J. Blasiak. Kronecker coefficients for one hook shape. arXiv:1209.2018, 2012. [BI] P. Burgisser and C. Ikenmeyer. A max-flow algorithm for positivity of Littlewood-Richardson coefficients. FPSAC 2009. [BI2] P. Burgisser and C. Ikenmeyer. Explicit Lower Bounds via Geometric Complexity Theory. arXiv:1210.8368, 2012. [BZ] A. D. Berenstein and A. V. Zelevinsky. Triple multiplicities for $\mathfrak{sl}(r+1)$ and the spectrum of the exterior algebra of the adjoint representation. J. Algebraic Combin. 1 (1992), no. 1, 7–22. [GCT1] K. D. Mulmuley and M. Sohoni. Geometric Complexity Theory I: An Approach to the P vs. NP and Related Problems. SIAM J. Comput. 31(2), 496–526, 2001. [GCT2] K. D. Mulmuley and M. Sohoni. Geometric Complexity Theory II: Towards Explicit Obstructions for Embeddings among Class Varieties. SIAM J. Comput., 38(3), 1175–1206, 2008. [GCT3] K. D. Mulmuley, H. Narayanan, and M. Sohoni. Geometric complexity theory III: on deciding nonvanishing of a Littlewood-Richardson coefficient. J. Algebraic Combin. 36 (2012), no. 1, 103–110. [GCT5] K. D. Mulmuley. Geometric Complexity Theory V: Equivalence between blackbox derandomization of polynomial identity testing and derandomization of Noether's Normalization Lemma. FOCS 2012, also arXiv:1209.5993. [GCT6] K. D.
Mulmuley. Geometric Complexity Theory VI: the flip via positivity. Technical Report, Computer Science department, The University of Chicago, January 2011. [K] S. Kumar. A Study of the representations supported by the orbit closure of the determinant. arXiv:1109.5996, 2011. [LO] J. M. Landsberg and G. Ottaviani. New lower bounds for the border rank of matrix multiplication. arXiv:1112.6007, 2011. [KT] A. Knutson and T. Tao. The honeycomb model of $\text{GL}_n(\mathbb{C})$ tensor products. I. Proof of the saturation conjecture. J. Amer. Math. Soc. 12 (1999), no. 4, 1055–1090. [KT2] A. Knutson and T. Tao. Honeycombs and sums of Hermitian matrices. Notices Amer. Math. Soc. 48 (2001), no. 2, 175–186.
{ "source": [ "https://cstheory.stackexchange.com/questions/17610", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1542/" ] }
17,868
In the spirit of some general discussions like this one, I'm opening this thread with the intention to gather opinions on what are the open challenges and hot topics in research on programming languages . I hope that the discussion might even bring to surface opinions regarding the future of research in programming languages. I believe that this kind of discussion will help new student researchers, like myself, interested in PL, as well as those who are already somewhat involved.
I think the overall goal of PL theory is to lower the cost of large-scale programming by way of improving programming languages and the technical ecosystem wherein languages are used. Here are some high-level, somewhat vague descriptions of PL research areas that have received sustained attention, and will probably continue to do so for a while. Most programming language research has been done in the context of sequential computation, and by now we have arguably converged on a core of features that are available in most modern programming languages (e.g. higher-order functions, (partial) type-inference, pattern matching, ADTs, parametric polymorphism) and are well understood. There is as yet no such consensus about programming language features for concurrent and parallel computation. Related to the previous point, the research field of typing systems has seen most of its activity being about sequential computation. Can we generalise this work to find tractable and useful typing disciplines constraining concurrent and parallel computation? As a special case of the previous point, the Curry-Howard correspondence relates structural proof theory and functional programming, leading to sustained technology transfer between computer science and (foundations of) mathematics, with e.g. homotopy type theory being an impressive example. There are many tantalising hints that it can be extended to (some forms of) concurrent and parallel computation. Specification and verification of programs has matured a lot in recent years, e.g. with interactive proof assistants like Isabelle and Coq, but the technology is still far away from being usable at large scale in everyday programming. There is still much work to be done to improve this state of affairs. Programming languages and verification technology for novel forms of computation. I'm thinking here in particular of quantum computation, and the biologically inspired computational mechanisms, see e.g. here . Unification. There are many approaches to programming languages, types, verification, and one sometimes feels that there is a lot of overlap between them, and that there is some more abstract approach waiting to be discovered. In particular, biologically inspired computational mechanisms are likely to continue to overwhelm us. One problem of PL research is that there are no clear-cut open problems like the P/NP question where we can immediately say if a proposed solution works or not.
{ "source": [ "https://cstheory.stackexchange.com/questions/17868", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/16196/" ] }
18,119
This is an interesting question I have found on the web. Given an array containing n numbers (with no information about them), we should pre-process the array in linear time so that we can return the k smallest elements in O(k) time, when we are given a number 1 <= k <= n. I have been discussing this problem with some friends but no one could find a solution; any help would be appreciated! Quick notes: - the order of the k smallest elements is not important; - the elements in the array are numbers, which might be integers and might not be (so no radix sort); - the number k is not known in the pre-processing stage; the preprocessing takes O(n) time, and the query (find the k smallest elements) takes O(k) time.
Preprocess the array of $n$ values in time $O(n)$:

1. $i \leftarrow n$
2. while $i > 2$:
   - compute the median $m$ of $A[1..i]$ in time $O(i)$;
   - partition $A[1..i]$ into $A[1..i/2-1] \leq m$ and $A[i/2+1..i] \geq m$ in the same time;
   - $i \leftarrow \lfloor i/2 \rfloor$.

The total precomputation time is within $O(1+2+4+\dots+n) \subseteq O(n)$.

Answer a query for the $k$ smallest elements in $A$ in time $O(k)$:

1. $l \leftarrow \lfloor \log_2 k \rfloor$;
2. select the $(k-2^l)$-th element $x$ of $A[2^l..2^{l+1}]$ in time $O(2^l) \subseteq O(k)$;
3. partition $A[2^l..2^{l+1}]$ by $x$ in the same time;
4. now $A[1..k]$ contains the $k$ smallest elements.

References: In 1999, Dor and Zwick gave an algorithm to compute the median of $n$ elements using at most $2.942n + o(n)$ comparisons, which yields an algorithm to select the $k$-th element from $n$ unordered elements in fewer than $6n$ comparisons.
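A purely functional Haskell rendition of the same scheme (a sketch only: expected-linear quickselect stands in for a worst-case linear selection such as median-of-medians, and lists stand in for the in-place array, so the stated bounds are indicative rather than exact):

```haskell
import Data.List (partition)

-- smallest m xs = (the m smallest of xs, the rest), expected linear time.
smallest :: Ord a => Int -> [a] -> ([a], [a])
smallest m xs
  | m <= 0         = ([], xs)
  | m >= length xs = (xs, [])
smallest m (p : xs) =
  let (ls, rs) = partition (< p) xs
  in case compare m (length ls) of
       LT -> let (a, b) = smallest m ls in (a, b ++ p : rs)
       EQ -> (ls, p : rs)
       GT -> let (a, b) = smallest (m - length ls - 1) rs
             in (ls ++ p : a, b)

-- Preprocessing: split off the n/2 smallest and recurse on them, giving
-- blocks that cover rank ranges of geometrically growing size.
preprocess :: Ord a => [a] -> [[a]]
preprocess xs
  | length xs <= 2 = [xs]
  | otherwise      = let (small, large) = smallest (length xs `div` 2) xs
                     in preprocess small ++ [large]

-- Query: take whole blocks, then select inside the block straddling k.
kSmallest :: Ord a => Int -> [[a]] -> [a]
kSmallest _ []  = []
kSmallest k (b : bs)
  | k <= length b = fst (smallest k b)
  | otherwise     = b ++ kSmallest (k - length b) bs

main :: IO ()
main = print (kSmallest 5 (preprocess [9, 1, 8, 2, 7, 3, 6, 4, 5, 0 :: Int]))
  -- the five smallest elements, in no particular order
```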
{ "source": [ "https://cstheory.stackexchange.com/questions/18119", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/16687/" ] }
18,587
Motivated by Shor's answer related to different notions of NP-completeness, I am looking for a problem that is NP-complete under P reductions but not known to be NP-complete under Logspace reductions (preferably for a long time). Also, is finding Logspace reductions between NP-complete problems harder than finding P reductions?
Kaveh is correct in saying that all of the "natural" NP-complete problems are easily seen to be complete under (uniform) $\mathrm{AC}^0$ reductions. However, one can construct sets that are complete for NP under logspace reductions that are not complete under $\mathrm{AC}^0$ reductions. For instance, in [Agrawal et al, Computational Complexity 10(2): 117-138 (2001)] an error-correcting encoding of SAT was shown to have this property. As regards a "likely" candidate for a problem that is complete under poly-time reductions but not under logspace reductions, one can try to cook up an example of the form {$(\phi,b)$ : $\phi$ is in SAT and $z$ is in CVP [or some other P-complete set] iff $b=1$, where $z$ is the string that results by taking every 2nd bit of $\phi$}. Certainly the naive way to show that this set is complete will involve computing the usual reduction to SAT, and then constructing $z$ and computing the bit $b$, which is inherently poly-time. However, with a bit of work, schemes such as this can usually be shown to be complete under logspace reductions via some non-naive reduction. (I haven't worked out this particular example...)
{ "source": [ "https://cstheory.stackexchange.com/questions/18587", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/495/" ] }
18,846
I am preparing for a talk aimed at undergraduate math majors, and as part of it, I am considering discussing the concept of decidability. I want to give an example of a problem that we do not currently know to be decidable or undecidable. There are many such problems, but none seem to stand out as nice examples so far. What is a simple-to-describe decision problem whose decidability is open?
The Matrix Mortality Problem for 2x2 matrices. I.e., given a finite list of 2x2 integer matrices $M_1, \dots, M_k$, can the $M_i$'s be multiplied in any order (with arbitrarily many repetitions) to produce the all-0 matrix? (The 3x3 case is known to be undecidable. The 1x1 case, of course, is decidable.)
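The problem is at least semi-decidable, which is simple to see in code. A brute-force Haskell sketch (my own): enumerate products in order of increasing length and report the first one equal to the zero matrix; it halts exactly on mortal instances, and no procedure is known that always halts.

```haskell
import Control.Monad (replicateM)

type M2 = ((Integer, Integer), (Integer, Integer))

-- Exact 2x2 integer matrix product.
mul :: M2 -> M2 -> M2
mul ((a, b), (c, d)) ((e, f), (g, h)) =
  ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

zero, ident :: M2
zero  = ((0, 0), (0, 0))
ident = ((1, 0), (0, 1))

-- Search all products up to length maxLen; Just w gives the indices
-- of a mortal product, Nothing means none was found (yet).
mortalWitness :: [M2] -> Int -> Maybe [Int]
mortalWitness mats maxLen =
  let n = length mats
      words' len = replicateM len [0 .. n - 1]
      prod w = foldl mul ident [mats !! i | i <- w]
  in case [ w | len <- [1 .. maxLen], w <- words' len, prod w == zero ] of
       (w : _) -> Just w
       []      -> Nothing

main :: IO ()
main = print (mortalWitness [((1,0),(0,0)), ((0,0),(0,1))] 5)  -- Just [0,1]
```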
{ "source": [ "https://cstheory.stackexchange.com/questions/18846", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/123/" ] }
19,675
The only definition of "calculus" I'm aware of is the study of limits, derivatives, integrals, etc. in analysis. In what sense is lambda calculus (or things like mu calculus) a "calculus"? How does it relate to calculus in analysis?
A calculus is just a system of reasoning. One particular calculus (well, actually two closely related calculi: the differential calculus and the integral calculus) has become so widespread that it is just known as "calculus", as if it were the only one. But, as you have observed, there are other calculi, such as the lambda calculus, mu calculus, pi calculus, propositional calculus, predicate calculus, sequent calculus and Professor Calculus.
{ "source": [ "https://cstheory.stackexchange.com/questions/19675", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/17584/" ] }
19,708
This is written in the wiki entry of Symbolic Execution , but I can't find any reference for it. Can anyone show me a pointer? Thank you.
I am not aware of a paper concerned with the comparison between symbolic execution and abstract interpretation. Nor do I think one is needed. Reading the original descriptions of these two techniques should be enough. King, Symbolic Execution and Program Testing, 1976 Cousot, Cousot, Abstract Interpretation: a Unified Lattice Model for Static Analysis of Programs by Construction of Approximation of Fixpoints, 1977 (Conversely, if there were some unexpected connection, then that would be worth describing. But I very much doubt this is the case.) The main idea of symbolic execution is that, at an arbitrary point in execution, you can express the values of all variables as functions of the initial values. The main idea of abstract interpretation is that you can systematically explore all executions of a program by a series of over-approximations. (I can hear several AI enthusiasts groaning at the previous approximation.) Thus, at least in the original formulation, symbolic execution was not concerned with exploring all possible executions. You can see this even in the title: it includes the word 'testing'. But here's more from Section 8: "For programs with infinite execution trees, the symbolic testing cannot be exhaustive and no absolute proof of correctness can be established." In contrast, abstract interpretation aims to explore all executions. To do so, it uses several ingredients, one of which is very similar to the main idea of symbolic execution. These ingredients are (1) abstract states, (2) joining and widening (hence, 'lattice' in the title). Abstract states. The concrete state of a program at a particular point in time is basically a snapshot of the memory content (including the program code itself and the program counter). This has a lot of detail, which is hard to track. When you analyze a particular property, you may want to ignore large parts of the concrete state. Or you may want to care only whether a particular variable is negative, zero, or positive, but not care about its exact value. In general, you want to consider an abstract version of the concrete state. For this to work out, you must have a commutativity property: If you take a concrete state, execute a statement, and then abstract the resulting state, you should obtain the same result as if you abstract the initial state, and then execute the same statement but on the abstract state. This commutativity diagram appears in both papers. This is the common idea. Again, abstract interpretation is more general, for it does not dictate how to abstract a state -- it just says there should be a way to do it. In contrast, symbolic execution says that you use (symbolic) expressions that mention the initial values. Joining and Widening. If program execution reaches a certain statement in two different ways, symbolic execution does not try to merge the two analyses. That is why the quote above talks about execution trees, rather than dags. But, remember that abstract interpretation wants to cover all executions. Thus, it asks for a way to merge the analyses of two executions at the point where they have the same program counter. (The join could be very dumb ({a} join {b} = {a,b}) such that it amounts to what symbolic execution does.) In general, joining itself is not sufficient to guarantee that you'll eventually finish analyzing all executions. (In particular, the dumb join mentioned earlier won't work.) Consider a program with a loop: "n=input(); for i in range(n): dostuff()".
How many times should you go around the loop and keep joining? No fixed answer works. Thus, something else is needed, and that is widening , which can be seen as a heuristic. Suppose you went around the loop 3 times and you learned that "i=0 or i=1 or i=2". Then you say: hmmm, ... let's widen, and you get "i>=0". Again abstract interpretation does not say how to do widening -- it just says what properties widening should have to work out. (Sorry for this long answer: I really didn't have time to make it shorter.)
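To make the join/widen story concrete, here is a toy interval analysis (my own sketch) of the loop-head invariant for i in the program above: joining alone would climb [0,0], [0,1], [0,2], ... forever, while widening jumps the unstable upper bound to infinity and the iteration stops at "i >= 0".

```haskell
data Bound = NegInf | Fin Integer | PosInf deriving (Eq, Ord, Show)
type Itv = (Bound, Bound)

-- Least upper bound of two intervals.
join :: Itv -> Itv -> Itv
join (a, b) (c, d) = (min a c, max b d)

-- Widening: any bound that did not stabilise jumps to infinity.
widen :: Itv -> Itv -> Itv
widen (a, b) (c, d) = (if c < a then NegInf else a, if d > b then PosInf else b)

-- Abstract effect of the statement i = i + 1.
inc :: Itv -> Itv
inc (a, b) = (bump a, bump b)
  where bump (Fin k) = Fin (k + 1)
        bump x       = x

-- Iterate: entry path (i = 0) joined with the back edge, then widen.
loopHead :: Itv
loopHead = go (Fin 0, Fin 0)
  where go inv | new == inv = inv              -- fixed point reached
               | otherwise  = go (widen inv new)
          where new = join (Fin 0, Fin 0) (inc inv)

main :: IO ()
main = print loopHead                          -- (Fin 0, PosInf): "i >= 0"
```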
{ "source": [ "https://cstheory.stackexchange.com/questions/19708", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/10526/" ] }
19,759
To demonstrate the importance of algorithms (e.g. to students and professors who don't do theory or are even from entirely different fields) it is sometimes useful to have ready at hand a list of examples where core algorithms have been deployed in commercial, governmental, or widely-used software/hardware. I am looking for such examples that satisfy the following criteria: The software/hardware using the algorithm should be in wide use right now. The example should be specific. Please give a reference to a specific system and a specific algorithm. E.g., in "algorithm X is useful for image processing" the term "image processing" is not specific enough; in "Google search uses graph algorithms" the term "graph algorithms" is not specific enough. The algorithm should be taught in typical undergraduate or Ph.D. classes in algorithms or data structures. Ideally, the algorithm is covered in typical algorithms textbooks. E.g., "well-known system X uses little-known algorithm Y" is not good. Update: Thanks again for the great answers and links! Some people comment that it is hard to satisfy the criteria because core algorithms are so pervasive that it's hard to point to a specific use. I see the difficulty. But I think it is worthwhile to come up with specific examples because in my experience telling people: "Look, algorithms are important because they are just about everywhere !" does not work.
Algorithms that are the main driver behind a system are, in my opinion, easier to find in non-algorithms courses for the same reason theorems with immediate applications are easier to find in applied mathematics rather than pure mathematics courses. It is rare for a practical problem to have the exact structure of the abstract problem in a lecture. To be argumentative, I see no reason why fashionable algorithms course material such as Strassen's multiplication, the AKS primality test, or the Moser-Tardos algorithm is relevant for low-level practical problems of implementing a video database, an optimizing compiler, an operating system, a network congestion control system or any other system. The value of these courses is learning that there are intricate ways to exploit the structure of a problem to find efficient solutions. Advanced algorithms is also where one meets simple algorithms whose analysis is non-trivial. For this reason, I would not dismiss simple randomized algorithms or PageRank. I think you can choose any large piece of software and find basic and advanced algorithms implemented in it. As a case study, I've done this for the Linux kernel, and shown a few examples from Chromium. Basic Data Structures and Algorithms in the Linux kernel Links are to the source code on github . Linked list , doubly linked list , lock-free linked list . B+ Trees with comments telling you what you can't find in the textbooks. A relatively simple B+Tree implementation. I have written it as a learning exercise to understand how B+Trees work. Turned out to be useful as well. ... A tricks was used that is not commonly found in textbooks. The lowest values are to the right, not to the left. All used slots within a node are on the left, all unused slots contain NUL values. Most operations simply loop once over all slots and terminate on the first NUL. Priority sorted lists used for mutexes , drivers , etc. Red-Black trees are used for scheduling, virtual memory management, to track file descriptors and directory entries,etc. Interval trees Radix trees , are used for memory management , NFS related lookups and networking related functionality. A common use of the radix tree is to store pointers to struct pages; Priority heap , which is literally, a textbook implementation, used in the control group system . Simple insertion-only static-sized priority heap containing pointers, based on CLR, chapter 7 Hash functions , with a reference to Knuth and to a paper. Knuth recommends primes in approximately golden ratio to the maximum integer representable by a machine word for multiplicative hashing. Chuck Lever verified the effectiveness of this technique: http://www.citi.umich.edu/techreports/reports/citi-tr-00-1.pdf These primes are chosen to be bit-sparse, that is operations on them can use shifts and additions instead of multiplications for machines where multiplications are slow. Some parts of the code, such as this driver , implement their own hash function. hash function using a Rotating Hash algorithm Knuth, D. The Art of Computer Programming, Volume 3: Sorting and Searching, Chapter 6.4. Addison Wesley, 1973 Hash tables used to implement inodes , file system integrity checks etc. Bit arrays , which are used for dealing with flags, interrupts, etc. and are featured in Knuth Vol. 4. Semaphores and spin locks Binary search is used for interrupt handling , register cache lookup , etc. Binary search with B-trees Depth first search and variant used in directory configuration . 
Performs a modified depth-first walk of the namespace tree, starting (and ending) at the node specified by start_handle. The callback function is called whenever a node that matches the type parameter is found. If the callback function returns a non-zero value, the search is terminated immediately and this value is returned to the caller. Breadth first search is used to check correctness of locking at runtime. Merge sort on linked lists is used for garbage collection, file system management, etc. Bubble sort is amazingly implemented too, in a driver library. Knuth-Morris-Pratt string matching. Implements a linear-time string-matching algorithm due to Knuth, Morris, and Pratt [1]. Their algorithm avoids the explicit computation of the transition function DELTA altogether. Its matching time is O(n), for n being length(text), using just an auxiliary function PI[1..m], for m being length(pattern), precomputed from the pattern in time O(m). The array PI allows the transition function DELTA to be computed efficiently "on the fly" as needed. Roughly speaking, for any state "q" = 0,1,...,m and any character "a" in SIGMA, the value PI["q"] contains the information that is independent of "a" and is needed to compute DELTA("q", "a") [2]. Since the array PI has only m entries, whereas DELTA has O(m|SIGMA|) entries, we save a factor of |SIGMA| in the preprocessing time by computing PI rather than DELTA. [1] Cormen, Leiserson, Rivest, Stein: Introduction to Algorithms, 2nd Edition, MIT Press [2] See finite automata theory Boyer-Moore pattern matching with references and recommendations for when to prefer the alternative. Implements the Boyer-Moore string matching algorithm: [1] A Fast String Searching Algorithm, R.S. Boyer and J.S. Moore. Communications of the Association for Computing Machinery, 20(10), 1977, pp. 762-772. http://www.cs.utexas.edu/users/moore/publications/fstrpos.pdf [2] Handbook of Exact String Matching Algorithms, Thierry Lecroq, 2004 http://www-igm.univ-mlv.fr/~lecroq/string/string.pdf Note: Since Boyer-Moore (BM) performs searches for matchings from right to left, it's still possible that a matching could be spread over multiple blocks, in which case this algorithm won't find any coincidence. If you're willing to ensure that such a thing won't ever happen, use the Knuth-Morris-Pratt (KMP) implementation instead. In conclusion, choose the proper string search algorithm depending on your setting. Say you're using the textsearch infrastructure for filtering, NIDS or any similar security-focused purpose, then go KMP. Otherwise, if you really care about performance, say you're classifying packets to apply Quality of Service (QoS) policies, and you don't mind about possible matchings spread over multiple fragments, then go BM. Data Structures and Algorithms in the Chromium Web Browser Links are to the source code on Google code. I'm only going to list a few. I would suggest using the search feature to look up your favourite algorithm or data structure. Splay trees. The tree is also parameterized by an allocation policy (Allocator). The policy is used for allocating lists in the C free store or the zone; see zone.h. Voronoi diagrams are used in a demo. Tabbing based on Bresenham's algorithm. There are also such data structures and algorithms in the third-party code included in the Chromium code. Binary trees Red-Black trees Conclusion of Julian Walker Red black trees are interesting beasts.
They're believed to be simpler than AVL trees (their direct competitor), and at first glance this seems to be the case because insertion is a breeze. However, when one begins to play with the deletion algorithm, red black trees become very tricky. However, the counterweight to this added complexity is that both insertion and deletion can be implemented using a single-pass, top-down algorithm. Such is not the case with AVL trees, where only the insertion algorithm can be written top-down. Deletion from an AVL tree requires a bottom-up algorithm. ... Red black trees are popular, as are most data structures with a whimsical name. For example, in Java and C++, the library map structures are typically implemented with a red black tree. Red black trees are also comparable in speed to AVL trees. While the balance is not quite as good, the work it takes to maintain balance is usually better in a red black tree. There are a few misconceptions floating around, but for the most part the hype about red black trees is accurate. AVL trees Rabin-Karp string matching is used for compression. Compute the suffixes of an automaton. Bloom filter implemented by Apple Inc. Bresenham's algorithm. Programming Language Libraries I think they are worth considering. The programming language designers thought it was worth the time and effort of some engineers to implement these data structures and algorithms so others would not have to. The existence of libraries is part of the reason we can find basic data structures reimplemented in software that is written in C but less so for Java applications. The C++ STL includes lists, stacks, queues, maps, vectors, and algorithms for sorting, searching and heap manipulation. The Java API is very extensive and covers much more. The Boost C++ library includes algorithms like the Boyer-Moore and Knuth-Morris-Pratt string matching algorithms. Allocation and Scheduling Algorithms I find these interesting because even though they are called heuristics, the policy you use dictates the type of algorithm and data structure that are required, so one needs to know about stacks and queues. Least Recently Used can be implemented in multiple ways. A list-based implementation in the Linux kernel (a minimal sketch appears below). Other possibilities are First In First Out, Least Frequently Used, and Round Robin. A variant of FIFO was used by the VAX/VMS system. The Clock algorithm by Richard Carr is used for page frame replacement in Linux. The Intel i860 processor used a random replacement policy. Adaptive Replacement Cache is used in some IBM storage controllers, and was used in PostgreSQL, though only briefly due to patent concerns. The Buddy memory allocation algorithm, which is discussed by Knuth in TAOCP Vol. 1, is used in the Linux kernel, and the jemalloc concurrent allocator is used by FreeBSD and at Facebook. Core utils in *nix systems grep and awk both implement the Thompson-McNaughton-Yamada construction of NFAs from regular expressions, which apparently even beats the Perl implementation. tsort implements topological sort. fgrep implements the Aho-Corasick string matching algorithm. GNU grep implements the Boyer-Moore algorithm according to the author Mike Haertel. crypt(1) on Unix implemented a variant of the encryption algorithm in the Enigma machine. Unix diff, implemented by Doug McIlroy based on a prototype co-written with James Hunt, performs better than the standard dynamic programming algorithm used to compute Levenshtein distances. The Linux version computes the shortest edit distance.
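Returning to the LRU policy mentioned above under Allocation and Scheduling: here is a minimal sketch in Python (my own illustrative encoding using OrderedDict; the kernel's actual implementation is in C with intrusive doubly linked lists):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal list-based LRU: entries kept in recency order,
    eviction from the least recently used end."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # key -> value, in recency order

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)        # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
```

The same structure supports the other policies listed: FIFO drops move_to_end in get, and LFU replaces the recency order with a frequency count.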
Cryptographic Algorithms This could be a very long list. Cryptographic algorithms are implemented in all software that can perform secure communications or transactions. Merkle trees, specifically the Tiger Tree Hash variant, were used in peer-to-peer applications such as GTK Gnutella and LimeWire. MD5 is used to provide a checksum for software packages and is used for integrity checks on *nix systems (Linux implementation) and is also supported on Windows and OS X. OpenSSL implements many cryptographic algorithms including AES, Blowfish, DES, SHA-1, SHA-2, RSA, etc. Compilers LALR parsing is implemented by yacc and bison. Dominator algorithms are used in most optimizing compilers based on SSA form. lex and flex compile regular expressions into NFAs. Compression and Image Processing The Lempel-Ziv algorithms for the GIF image format are implemented in image manipulation programs, starting from the *nix utility convert to complex programs. Run length encoding is used to generate PCX files (used by the original Paintbrush program), compressed BMP files and TIFF files. Wavelet compression is the basis for JPEG 2000, so all digital cameras that produce JPEG 2000 files will be implementing this algorithm. Reed-Solomon error correction is implemented in the Linux kernel, CD drives, barcode readers and was combined with convolutional codes for image transmission from Voyager. Conflict Driven Clause Learning Since the year 2000, the running time of SAT solvers on industrial benchmarks (usually from the hardware industry, though other sources are used too) has decreased nearly exponentially every year. A very important part of this development is the Conflict Driven Clause Learning algorithm that combines the Boolean Constraint Propagation algorithm in the original paper of Davis, Logemann, and Loveland with the technique of clause learning that originated in constraint programming and artificial intelligence research. For specific, industrial modelling, SAT is considered an easy problem (see this discussion). To me, this is one of the greatest success stories in recent times because it combines algorithmic advances spread over several years, clever engineering ideas, experimental evaluation, and a concerted communal effort to solve the problem. The CACM article by Malik and Zhang is a good read. This algorithm is taught in many universities (I have attended four where it was the case) but typically in a logic or formal methods class. Applications of SAT solvers are numerous. IBM, Intel and many other companies have their own SAT solver implementations. The package manager in OpenSUSE also uses a SAT solver.
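To illustrate the Boolean Constraint Propagation core that CDCL solvers build on, here is a minimal DPLL sketch in Python; it has no clause learning or watched literals, so it is illustrative only:

```python
def unit_propagate(clauses, assignment):
    """Boolean Constraint Propagation: add literals forced by unit
    clauses until a fixpoint; return None on a falsified clause."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                       # clause already satisfied
            unassigned = [lit for lit in clause if -lit not in assignment]
            if not unassigned:
                return None                    # conflict: clause falsified
            if len(unassigned) == 1:
                assignment.add(unassigned[0])  # forced literal
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    """Return a satisfying set of literals, or None if unsatisfiable."""
    assignment = unit_propagate(clauses, set(assignment))
    if assignment is None:
        return None
    variables = {abs(lit) for c in clauses for lit in c}
    free = variables - {abs(lit) for lit in assignment}
    if not free:
        return assignment
    v = min(free)                              # naive branching heuristic
    for lit in (v, -v):
        result = dpll(clauses, assignment | {lit})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(dpll([[1, 2], [-1, 2], [-2, 3]]))       # {1, 2, 3}
```

What CDCL adds on top of this skeleton is precisely the clause learning: when a conflict occurs, it analyzes the implication graph built by propagation and adds a new clause that prevents revisiting the same conflict.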
{ "source": [ "https://cstheory.stackexchange.com/questions/19759", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/259/" ] }
20,364
Is there any system similar to the lambda calculus that is strongly normalizing, without the need to add a type system on top of it?
I can think of a few possible answers coming from linear logic. The simplest one is the affine lambda-calculus: consider only lambda-terms in which every variable appears at most once. This condition is preserved by reduction and it is immediate to see that the size of affine terms strictly decreases with each reduction step. Therefore, the untyped affine lambda-calculus is strongly normalizing. More interesting examples (in terms of expressiveness) are given by the so-called "light" lambda-calculi, arising from the subsystems of linear logic introduced by Girard in "Light Linear Logic" (Information and Computation 143, 1998), as well as Lafont's "Soft Linear Logic" (Theoretical Computer Science 318, 2004). There are several such calculi in the literature, perhaps a good reference is Terui's "Light affine lambda calculus and polynomial time strong normalization" (Archive for Mathematical Logic 46, 2007). In that paper, Terui defines a lambda-calculus derived from light affine logic and proves a strong normalization result for it. Even though types are mentioned in the paper, they are not used in the normalization proof. They are useful for a neat formulation of the main property of the light affine lambda-calculus, namely that the terms of a certain type represent exactly the Polytime functions. Similar results are known for elementary computation, using other "light" lambda-calculi (Terui's paper contains further references). As a side note, it is interesting to observe that, in proof-theoretic terms, the affine lambda-calculus corresponds to intuitionistic logic without the contraction rule. Grishin observed (before linear logic was introduced) that, in the absence of contraction, naive set theory (i.e., with unrestricted comprehension) is consistent (i.e., Russell's paradox does not give a contradiction). The reason is that cut-elimination for naive set-theory without contraction may be proved by a straightforward size-decreasing argument (as the one I gave above) which does not rely on the complexity of formulas. Via the Curry-Howard correspondence, this is exactly the normalization of the untyped affine lambda-calculus. It is by translating Russell's paradox in linear logic and by "tweaking" the exponential modalities so that no contradiction could be derived that Girard came up with light linear logic. As I mentioned above, in computational terms light linear logic gives a characterization of the polynomial-time computable functions. In proof-theoretic terms, a consistent naive set theory may be defined in light linear logic such that the provably total functions are exactly the polynomial-time computable functions (there is another paper by Terui on this, "Light affine set theory: A naive set theory of polynomial time", Studia Logica 77, 2004).
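To see the affine condition concretely, here is a small Python sketch under my own term encoding (not taken from any of the papers above); is_affine checks that every bound variable is used at most once, which is exactly what makes the size-decreasing argument for strong normalization go through:

```python
# Terms, in my own encoding: ('var', name) | ('lam', name, body) | ('app', f, a)

def occurrences(term, name):
    """Count free occurrences of `name` in `term`."""
    kind = term[0]
    if kind == 'var':
        return 1 if term[1] == name else 0
    if kind == 'lam':
        return 0 if term[1] == name else occurrences(term[2], name)
    return occurrences(term[1], name) + occurrences(term[2], name)

def is_affine(term):
    """Every binder's variable is used at most once, recursively."""
    kind = term[0]
    if kind == 'var':
        return True
    if kind == 'lam':
        return occurrences(term[2], term[1]) <= 1 and is_affine(term[2])
    return is_affine(term[1]) and is_affine(term[2])

# (\x. x) y is affine; (\x. x x) y duplicates x and is not
print(is_affine(('app', ('lam', 'x', ('var', 'x')), ('var', 'y'))))  # True
print(is_affine(('app', ('lam', 'x',
      ('app', ('var', 'x'), ('var', 'x'))), ('var', 'y'))))          # False
```

On an affine term, a beta-step substitutes the argument for at most one occurrence while deleting the redex, so the term size strictly decreases, which bounds the number of reduction steps by the size of the starting term.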
{ "source": [ "https://cstheory.stackexchange.com/questions/20364", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/17388/" ] }
20,883
Given a new problem in $\mathsf{NP}$ whose true complexity is somewhere between $\mathsf{P}$ and being NP-complete, there are two methods that I know of that might be used to prove that resolving this is difficult: Show that the problem is GI-complete (GI = Graph Isomorphism) Show that the problem is in $\mathsf{co-AM}$. By known results, such a result implies that if the problem is NP-complete, then PH collapses to the second level. For example, the famous protocol for Graph Nonisomorphism does exactly this. Are there any other methods (maybe with different "strengths of belief") that have been used ? For any answer, an example of where it has actually been used is required: obviously there are many ways one might try to show this, but examples make the argument more convincing.
Showing that your problem is in coAM (or SZK) is indeed one of the main ways to adduce evidence for "hardness limbo." But besides that, there are several others: Show that your problem is in NP ∩ coNP. (Example: Factoring.) Show that your problem is solvable in quasipolynomial time. (Examples: VC dimension, approximating free games.) Show that your problem is no harder than inverting one-way functions or solving NP on average. (Examples: Lots of problems in cryptography.) Show that your problem reduces to (e.g.) Unique Games or Small-Set Expansion. Show that your problem is in BQP. (Example: Factoring, though of course that's also in NP ∩ coNP.) Rule out large classes of NP-completeness reductions. (Example: The Circuit Minimization Problem, studied by Kabanets and Cai.) I'm sure there are others that I'm forgetting.
{ "source": [ "https://cstheory.stackexchange.com/questions/20883", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/80/" ] }
20,978
I know that the halting problem is undecidable in general but there are some Turing machines that obviously halt and some that obviously don't. Out of all possible Turing machines, what is the smallest one for which nobody has a proof of whether it halts or not?
The largest Turing machines for which the halting problem is decidable are: $TM(2,3), TM(2,2), TM(3,2)$ (where $TM(k,l)$ is the set of Turing machines with $k$ states and $l$ symbols). The decidability of $TM(2,4)$ and $TM(3,3)$ is on the boundary and is difficult to settle because it depends on the Collatz conjecture, which is an open problem. See also my answer on cstheory about Collatz-like Turing machines and "Small Turing machines and generalized busy beaver competition" by P. Michel (2004) (in which it is conjectured that $TM(4,2)$ is also decidable). Kaveh's comment and Mohammad's answer are correct, so for a formal definition of the standard/non-standard Turing machines used in this kind of result see Turlough Neary and Damien Woods' work on small universal Turing machines, e.g. The complexity of small universal Turing machines: a survey (Rule 110 TMs are weakly universal).
{ "source": [ "https://cstheory.stackexchange.com/questions/20978", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/21538/" ] }
21,016
I am looking for nice examples where the following phenomenon occurs: (1) An algorithmic problem looks hard if you want to solve it working from the definitions and using standard results only. (2) On the other hand, it becomes easy if you know some (not so standard) theorems. The goal of this is to illustrate for students that learning more theorems can be useful, even for those who are outside of the theory field (such as software engineers, computer engineers etc). Here is an example: Question: Given integers $n, k, l, d$, does there exist an $n$-vertex graph (and if so, find one), such that its vertex connectivity is $k$, its edge connectivity is $l$, and its minimum degree is $d$? Note that we require that the parameters are exactly equal to the given numbers, they are not just bounds. If you want to solve this from scratch, it might appear rather hard. On the other hand, if you are familiar with the following theorem (see Extremal Graph Theory by B. Bollobas), the situation becomes quite different. Theorem: Let $n, k, l, d$ be integers. There exists an $n$-vertex graph with vertex connectivity $k$, edge connectivity $l$, and minimum degree $d$, if and only if one of the following conditions is satisfied: $0\leq k\leq l \leq d <\lfloor n/2 \rfloor$; $1\leq 2d+2-n\leq k\leq l = d< n-1$; or $k=l=d=n-1$. These conditions are very easy to check, being simple inequalities among the input parameters, so the existence question can be answered effortlessly. Furthermore, the proof of the theorem is constructive, resolving the construction issue as well. On the other hand, this result does not appear standard enough that you can expect everybody to know about it. Can you provide further examples in this spirit, where knowing a (not so standard) theorem greatly simplifies a task?
Deciding isomorphism of simple groups, given by their multiplication tables. The fact that this can be done in polynomial time follows directly from the fact that all finite simple groups can be generated by at most 2 elements, and currently the only known proof of that fact uses the Classification of Finite Simple Groups (perhaps the largest theorem - in terms of authors, papers, and pages - ever proven).
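Here is a hedged Python sketch of the algorithm this implies, under the following assumptions (all names are my own): groups are given as full multiplication tables over elements 0..n-1, and the first group is promised to be simple, hence generated by two elements by the Classification. Fixing a generating pair of $G$ and trying all $n^2$ pairs of images in $H$ gives a polynomial-time test:

```python
from itertools import product

def identity(T):
    return next(e for e in range(len(T)) if all(T[e][x] == x for x in range(len(T))))

def generates(T, a, b):
    """Do a and b generate the whole group? (BFS over the Cayley graph)"""
    seen, frontier = {identity(T)}, [identity(T)]
    while frontier:
        x = frontier.pop()
        for g in (a, b):
            y = T[x][g]
            if y not in seen:
                seen.add(y)
                frontier.append(y)
    return len(seen) == len(T)

def extend(TG, TH, a, b, a2, b2):
    """Try to extend a->a2, b->b2 to a homomorphism G -> H; None on failure.
    Checking consistency on every Cayley-graph edge suffices, since every
    element of G is a word in a and b."""
    phi = {}
    for k, v in ((identity(TG), identity(TH)), (a, a2), (b, b2)):
        if phi.get(k, v) != v:
            return None
        phi[k] = v
    frontier = list(phi)
    while frontier:
        x = frontier.pop()
        for g, g2 in ((a, a2), (b, b2)):
            y, y2 = TG[x][g], TH[phi[x]][g2]
            if y in phi:
                if phi[y] != y2:
                    return None       # inconsistent on a Cayley edge
            else:
                phi[y] = y2
                frontier.append(y)
    return phi

def isomorphic_simple(TG, TH):
    n = len(TG)
    if n != len(TH):
        return False
    a, b = next((a, b) for a, b in product(range(n), repeat=2)
                if generates(TG, a, b))       # exists by 2-generation (CFSG)
    return any(
        (phi := extend(TG, TH, a, b, a2, b2)) is not None
        and len(set(phi.values())) == n       # bijective => isomorphism
        for a2, b2 in product(range(n), repeat=2)
    )
```

If an isomorphism exists, it must send the fixed generating pair somewhere, so some pair of images succeeds; the whole procedure runs in time polynomial in the table size.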
{ "source": [ "https://cstheory.stackexchange.com/questions/21016", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/12710/" ] }
21,026
What would be the nasty consequences of NP=PSPACE? I am surprised I did not find anything on this, given that these classes are among the most famous ones. In particular, would it have any consequences for the lower classes?
If $\mathsf{NP} = \mathsf{PSPACE}$, this would imply: $\mathsf{P^{\#P}} = \mathsf{NP}$ That is, counting the solutions to a problem in $\mathsf{NP}$ would be polytime reducible to finding a single solution; $\mathsf{PP} = \mathsf{NP}$ That is, polynomial-time randomized algorithms with success probability arbitrarily close to 1/2 are polynomial-time reducible to polynomial-time randomized algorithms with one-sided error, where YES instances are accepted with arbitrarily small probability; $\mathsf{MA} = \mathsf{NP}$ That is, for any problem which is verifiable in polynomial time, randomization provides a polynomial-time speedup at best (but this is just a corollary of the polynomial-time hierarchy collapsing); $\mathsf{BQP} \subseteq \mathsf{NP}$ That is, any problem which is solvable by a quantum computer has easily verified certificates for its answers; this would be an important positive result in the philosophy of quantum mechanics, and would probably be helpful to the effort to construct quantum computers (for verifying that they are doing what they are meant to be doing). All of these are due to containments of the classes on the left-hand sides in $\mathsf{PSPACE}$ (though we also have $\mathsf{BQP \subseteq PP}$).
{ "source": [ "https://cstheory.stackexchange.com/questions/21026", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/8953/" ] }
21,060
Consider optimization problems of the following form. Let $f(x)$ be a polynomial-time computable function that maps a string $x$ into a rational number. The optimization problem is this: what is the maximum value of $f(x)$ over $n$-bit strings $x$? Let us say that such a problem has a minimax characterization, if there is another polynomial-time computable function $g$, such that $$\max_x f(x) = \min_y g(y)$$ holds. Here $x$ runs over all $n$-bit strings, and $y$ runs over all $m$-bit strings; $n$ and $m$ may be different, but they are polynomially related. Numerous natural and important optimization problems have such a minimax characterization. A few examples (the theorems on which the characterizations are based are shown in parentheses): Linear Programming (LP Duality Thm), Maximum Flow (Max Flow Min Cut Thm), Max Bipartite Matching (König-Hall Thm), Max Non-Bipartite Matching (Tutte's Thm, Tutte-Berge formula), Max Disjoint Arborescences in a directed graph (Edmonds' Disjoint Branching Thm), Max Spanning Tree Packing in an undirected graph (Tutte's Tree Packing Thm), Min Covering by Forests (Nash-Williams Thm), Max Directed Cut Packing (Lucchesi-Younger Thm), Max 2-Matroid Intersection (Matroid Intersection Thm), Max Disjoint Paths (Menger's Thm), Max Antichain in a Partially Ordered Set (Dilworth's Thm), and many others. In all these examples, a polynomial-time algorithm is also available to find the optimum. My question: Is there any optimization problem with a minimax characterization, for which no polynomial-time algorithm has been found so far? Note: Linear Programming was in this status for about 30 years!
In some technical sense you are asking whether $P = NP \cap coNP$. Suppose that $L \in NP \cap coNP$, thus there exist poly-time $F$ and $G$ so that $x \in L$ iff $\exists y: F(x,y)$ and $x \not\in L$ iff $\exists y: G(x,y)$. This can be recast as a minimax characterization by $f_x(y) = 1$ if $F(x,y)$ and $f_x(y) = 0$ otherwise; $g_x(y) = 0$ if $G(x,y)$ and $g_x(y) = 1$ otherwise. Now indeed we have $\max_y f_x(y) = \min_y g_x(y)$. So in this sense, any problem known to be in $NP \cap coNP$ but not known to be in $P$ can be turned into an answer to your question. E.g. Factoring (say, the decision version of whether the $i$-th bit of the largest factor is 1).
{ "source": [ "https://cstheory.stackexchange.com/questions/21060", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/12710/" ] }
21,338
It is well-known that palindromes can be recognized in linear time on $2$-tape Turing machines, but not on single-tape Turing machines (in which case the time needed is quadratic). The linear-time algorithm uses a copy of the input, and thus also uses a linear space. Can we recognize palindromes in linear time of a multitape Turing machine, using only a logarithmic space? More generally, what kind of space-time trade-off is known for palindromes?
Using crossing sequences or communication complexity it is simple to derive the tradeoff $T(n)S(n) = \Omega(n^2)$ for a sequential Turing machine using time $O(T(n))$ and space $O(S(n))$. This result was first obtained by Alan Cobham using crossing sequences in the paper "The recognition problem for the set of perfect squares", which appeared at SWAT (later FOCS) 1966.
{ "source": [ "https://cstheory.stackexchange.com/questions/21338", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/976/" ] }
21,571
Is NP in $DTIME(n^{poly\log n})$?
$DTIME(n^{\mathrm{poly}\log n})$ is known as $QP$ (quasi-polynomial). It is widely believed that $NP\not \subset QP$, although this is a stronger statement than $P\neq NP$. Some common conjectures, such as the Exponential Time Hypothesis, imply $NP\not \subset QP$.
{ "source": [ "https://cstheory.stackexchange.com/questions/21571", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/-1/" ] }
21,705
I'm just reading up on lambda calculus to "get to know it". I see it as an alternate form of computation as opposed to the Turing Machine. It's an interesting way of doing things with functions/reductions (crudely speaking). Some questions keep nagging at me though: What's the point of lambda calculus? Why go through all these functions/reductions? What is the purpose? As a result I'm left to wonder: What exactly did lambda calculus do to advance the theory of CS? What were its contributions that would allow me to have an "aha" moment of understanding the need for its existence? Why is lambda calculus not covered in texts on automata theory? The common route is to go through various automata, grammars, Turing Machines and complexity classes. Lambda calculus is only included in the syllabus for SICP-style courses (perhaps not?). But I've rarely seen it be a part of the core curriculum of CS. Does this imply it's not all that valuable? Maybe not, and maybe I am missing something here? I'm aware that functional programming languages are based on lambda calculus but I'm not considering that as a valid contribution, since it was created much before we had programming languages. So, really what is the point of knowing/understanding lambda calculus, w.r.t. its applications/contributions to theory?
$\lambda$-calculus has two key roles. It is a simple mathematical foundation of sequential, functional, higher-order computational behaviour. It is a representation of proofs in constructive logic. This is also known as the Curry-Howard correspondence. Jointly, the dual view of $\lambda$-calculus as proof and as (sequential, functional, higher-order) programming language, strengthened by the algebraic feel of $\lambda$-calculus (which is not shared by Turing machines), has led to massive technology transfer between logic, the foundations of mathematics, and programming. This transfer is still ongoing, for example in homotopy type theory. In particular the development of programming languages in general, and typing disciplines in particular, is inconceivable without $\lambda$-calculus. Most programming languages owe some degree of debt to Lisp and ML (e.g. garbage collection was invented for Lisp), which are direct descendants of the $\lambda$-calculus. A second strand of work strongly influenced by $\lambda$-calculus is interactive proof assistants. Does one have to know $\lambda$-calculus to be a competent programmer, or even a theoretician of computer science? No. If you are not interested in types, verification and programming languages with higher-order features, then it's probably a model of computation that's not terribly useful for you. In particular, if you are interested in complexity theory, then $\lambda$-calculus is probably not an ideal model because the basic reduction step $$(\lambda x.M) N \rightarrow_{\beta} M[N/x]$$ is powerful: it can make an arbitrary number of copies of $N$, so $\rightarrow_{\beta}$ is an unrealistic basic notion in accounting for the microscopic cost of computation. I think this is the main reason why Theory A is not so enamoured of $\lambda$-calculus. Conversely, Turing machines are not terribly inspirational for programming language development, because there are no natural notions of machine composition, whereas with $\lambda$-calculus, if $M$ and $N$ are programs, then so is $MN$. This algebraic view of computation relates naturally to programming languages used in practice, and much language development can be understood as the search for, and investigation of, novel program composition operators. For an encyclopedic overview of the history of $\lambda$-calculus see History of Lambda-calculus and Combinatory Logic by Cardone and Hindley.
{ "source": [ "https://cstheory.stackexchange.com/questions/21705", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/7014/" ] }
21,730
I need to calculate the running median: Input: $n$, $k$, vector $(x_1, x_2, \dotsc, x_n)$. Output: vector $(y_1, y_2, \dotsc, y_{n-k+1})$, where $y_i$ is the median of $(x_i, x_{i+1}, \dotsc, x_{i+k-1})$. (No cheating with approximations; I would like to have exact solutions. Elements $x_i$ are large integers.) There is a trivial algorithm that maintains a search tree of size $k$; the total running time is $O(n \log k)$. (Here a "search tree" refers to some efficient data structure that supports insertions, deletions, and median queries in logarithmic time.) However, this seems a bit stupid to me. We will effectively learn all order statistics within all windows of size $k$, not just the medians. Moreover, this is not too attractive in practice, especially if $k$ is large (large search trees tend to be slow, overhead in memory consumption is non-trivial, cache-efficiency is often poor, etc.). Can we do anything substantially better? Are there any lower bounds (e.g., is the trivial algorithm asymptotically optimal for the comparison model)? Edit: David Eppstein gave a nice lower bound for the comparison model! I wonder if it is nevertheless possible to do something slightly more clever than the trivial algorithm? For example, could we do something along these lines: divide the input vector into parts of size $k$; sort each part (keeping track of the original positions of each element); and then use the piecewise sorted vector to find the running medians efficiently without any auxiliary data structures? Of course this would still be $O(n \log k)$, but in practice sorting arrays tends to be much faster than maintaining search trees. Edit 2: Saeed wanted to see some reasons why I think sorting is faster than search tree operations. Here are very quick benchmarks, for $k = 10^7$, $n = 10^8$: ≈ 8s: sorting $n/k$ vectors with $k$ elements each ≈ 10s: sorting a vector with $n$ elements ≈ 80s: $n$ insertions & deletions in a hash table of size $k$ ≈ 390s: $n$ insertions & deletions in a balanced search tree of size $k$ The hash table is there just for comparison; it is of no direct use in this application. In summary, we have almost a factor 50 difference in the performance of sorting vs. balanced search tree operations. And things get much worse if we increase $k$. (Technical details: Data = random 32-bit integers. Computer = a typical modern laptop. The test code was written in C++, using the standard library routines (std::sort) and data structures (std::multiset, std::unordered_multiset). I used two different C++ compilers (GCC and Clang), and two different implementations of the standard library (libstdc++ and libc++). Traditionally, std::multiset has been implemented as a highly optimised red-black tree.)
Here's a lower bound from sorting. Given an input set $S$ of length $n$ to be sorted, create an input to your running median problem consisting of $n-1$ copies of a number smaller than the minimum of $S$, then $S$ itself, then $n-1$ copies of a number larger than the maximum of $S$, and set $k=2n-1$. The running medians of this input are the same as the sorted order of $S$. So in a comparison model of computation, $\Omega(n\log n)$ time is required. Possibly if your inputs are integers and you use integer sorting algorithms you can do better.
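For concreteness, here is the construction run in Python, with a naive running-median routine standing in for whatever fast algorithm is being hypothesized (the function names are mine; statistics.median is from the standard library):

```python
import statistics

def running_medians(xs, k):
    # naive O(n k log k) reference implementation
    return [statistics.median(xs[i:i + k]) for i in range(len(xs) - k + 1)]

def sort_via_running_median(S):
    """Eppstein's reduction: pad S with n-1 small and n-1 large sentinels;
    the running medians with window 2n-1 are exactly S in sorted order."""
    n = len(S)
    lo, hi = min(S) - 1, max(S) + 1
    padded = [lo] * (n - 1) + list(S) + [hi] * (n - 1)
    return running_medians(padded, 2 * n - 1)

print(sort_via_running_median([5, 3, 8, 1, 9]))  # [1, 3, 5, 8, 9]
```

Window $i$ contains all of $S$ plus $n-1-i$ small sentinels and $i$ large ones, so its median, the $n$-th smallest of $2n-1$ elements, is the $(i+1)$-th smallest element of $S$.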
{ "source": [ "https://cstheory.stackexchange.com/questions/21730", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/74/" ] }
21,792
I am a 2nd year graduate student in theory. I have been working on a problem for the last year (in graph theory/algorithms). Until yesterday I thought I was doing well (I was extending a theorem from a paper). Today I realized that I have made a simple mistake. I realized that it will be much harder than I thought to do what I intended to do. I feel so disappointed that I am thinking about leaving grad school. Is this a common situation, where a researcher notices that her idea is not going to work after a considerable amount of work? What do you do when you realize that an approach you had in mind is not going to work and the problem seems too difficult to solve? What advice would you give to a student in my situation?
Is this a common situation, where a researcher notices that her idea is not going to work after a considerable amount of work? Yes. But as you get more experienced, you're able to "fail fast" - learn how to test the idea quickly to see if it passes a 'smell test'. What do you do when you realize that an approach you had in mind is not going to work and the problem seems too difficult to solve? It depends. Sometimes the best thing to do is put the problem away for a while and work on something else. Sometimes the failure suggests a different question. What advice would you give to a student in my situation?
{ "source": [ "https://cstheory.stackexchange.com/questions/21792", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/14028/" ] }
21,836
Coq has a type Prop of proof-irrelevant propositions which are discarded during extraction. What is the reason for having this if we use Coq only for proofs? Prop is impredicative, so Prop : Prop; however, Coq automatically infers universe indexes and we can use Type(i) instead everywhere. It seems Prop complicates everything a lot. I read that there are philosophical reasons for separating Set and Prop in Luo's book; however, I didn't find them in the book. What are they?
$\mathtt{Prop}$ is very useful for program extraction because it allows us to delete parts of code that are useless. For example, to extract a sorting algorithm we would prove the statement "for every list $\ell$ there is a list $k$ such that $k$ is ordered and $k$ is a permutation of $\ell$". If we write this down in Coq and extract without using $\mathtt{Prop}$, we will get: "for all $\ell$ there is $k$" will give us a map sort which takes lists to lists, "such that $k$ is ordered" will give a function verify which runs through $k$ and checks that it is sorted, and "$k$ is a permutation of $\ell$" will give a permutation pi which takes $\ell$ to $k$. Note that pi is not just a mapping, but also the inverse mapping together with programs verifying that the two maps really are inverses. While the extra stuff is not totally useless, in many applications we want to get rid of it and keep just sort. This can be accomplished if we use $\mathtt{Prop}$ to state "$k$ is ordered" and "$k$ is a permutation of $\ell$", but not "for all $\ell$ there is $k$". In general, a common way to extract code is to consider a statement of the form $\forall x : A \,.\, \exists y : B \,.\, \phi(x, y)$ where $x$ is input, $y$ is output, and $\phi(x,y)$ explains what it means for $y$ to be a correct output. (In the above example $A$ and $B$ are the types of lists and $\phi(\ell, k)$ is "$k$ is ordered and $k$ is a permutation of $\ell$".) If $\phi$ is in $\mathtt{Prop}$ then extraction gives a map $f : A \to B$ such that $\phi(x, f(x))$ holds for all $x \in A$. If $\phi$ is in $\mathtt{Set}$ then we also get a function $g$ such that $g(x)$ is the proof that $\phi(x, f(x))$ holds, for all $x \in A$. Often the proof is computationally useless and we prefer to get rid of it, especially when it is nested deeply inside some other statement. $\mathtt{Prop}$ gives us the possibility to do so. Added 2015-07-29: There is a question whether we could avoid $\mathsf{Prop}$ altogether by automatically optimizing away "useless extracted code". To some extent we can do that, for instance all code extracted from the negative fragment of logic (stuff built from the empty type, unit type, products) is useless as it just shuffles around the unit. But there are genuine design decisions one has to make when using $\mathsf{Prop}$. Here is a simple example, where $\Sigma$ means that we are in $\mathsf{Type}$ and $\exists$ means we are in $\mathsf{Prop}$. If we extract from $$\Pi_{n : \mathbb{N}} \Sigma_{b : \{0,1\}} \Sigma_{k : \mathbb{N}} \; n = 2 \cdot k + b$$ we will get a program which decomposes $n$ into its lowest bit $b$ and the remaining bits $k$, i.e., it computes everything. If we extract from $$\Pi_{n : \mathbb{N}} \Sigma_{b : \{0,1\}} \exists_{k : \mathbb{N}} \; n = 2 \cdot k + b$$ then the program will only compute the lowest bit $b$. The machine cannot tell which is the correct one; the user has to tell it what they want.
{ "source": [ "https://cstheory.stackexchange.com/questions/21836", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/21833/" ] }
22,055
This paper claims that the traditional analysis of the error rate in Bloom filters is incorrect, then provides a lengthy and nontrivial analysis of the actual error rate. The linked paper was published in 2010, yet I've seen the traditional analysis of Bloom filters continued to be taught in various algorithms and data structures courses. Is the traditional analysis of Bloom filters indeed incorrect? Thanks!
The traditional analysis is fine. The "traditional" analysis is, if it is explained correctly, an approximation; it's based on calculating the expected number of cells that are 0/1 when you hash the keys into the filter, and then analyzing as though that was the actual number. The point is that the number of cells that are 0 (or 1) are tightly concentrated around their expectation, so it's a fine approximation. This was well known, and can be found, I think, even back in my survey article with Andrei Broder. This paper says that really the performance of a Bloom filter is a random variable (corresponding to the actual fraction of 0/1 entries), and if you want to calculate that performance exactly for some reason, you need to do the combinatorics. For smaller filters, you'll see an arguably non-trivial difference. I've talked with the authors of this paper. Their analysis is all well and good (though I'd argue that it isn't deep or new); their motivation that the "traditional analysis is wrong" was, I think, exaggerated.
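The gap between the two analyses is easy to see experimentally. Below is a rough Python simulation of my own, under the idealized assumption that each hash maps to independent uniformly random bit positions; the true false-positive rate is $E[\rho^k]$ for $\rho$ the fraction of set bits, which by Jensen's inequality exceeds the textbook $(E[\rho])^k$, visibly so only for small $m$:

```python
import random

def bloom_fp_rate(m, k, n, trials=200, probes=500):
    """Empirical false-positive rate of an idealized Bloom filter with
    m bits and k hashes holding n keys (each key sets k random bits)."""
    fp = 0
    for _ in range(trials):
        bits = [False] * m
        for _ in range(n * k):               # insertion phase
            bits[random.randrange(m)] = True
        for _ in range(probes):              # probe with fresh non-member keys
            if all(bits[random.randrange(m)] for _ in range(k)):
                fp += 1
    return fp / (trials * probes)

m, k, n = 32, 3, 8
approx = (1 - (1 - 1 / m) ** (k * n)) ** k   # traditional approximation
print(approx, bloom_fp_rate(m, k, n))        # simulation comes out slightly higher
```

Rerunning with, say, m = 3200 and n = 800 makes the two numbers agree to within sampling noise, which is the concentration argument in action.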
{ "source": [ "https://cstheory.stackexchange.com/questions/22055", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/4354/" ] }
22,093
The complexity class $\mathsf{UP}$ consists of those $\mathsf{NP}$-problems that can be decided by a polynomial time nondeterministic Turing machine which has at most one accepting computational path. That is, the solution, if any, is unique in this sense. It is thought highly unlikely that all $\mathsf{UP}$-problems are in $\mathsf{P}$, because by the Valiant-Vazirani Theorem this would imply the collapse $\mathsf{NP}=\mathsf{RP}$. On the other hand, no $\mathsf{UP}$-problem is known to be $\mathsf{NP}$-complete, which suggests that the unique solution requirement still somehow makes them easier. I am looking for examples, where the uniqueness assumption leads to a faster algorithm. For example, looking at graph problems, can a maximum clique in a graph be found faster (though possibly still in exponential time), if we know that the graph has a unique maximum clique? How about unique $k$-colorability, unique Hamiltonian path, unique minimum dominating set etc.? In general, we can define a unique-solution version of any $\mathsf{NP}$-complete problem, scaling them down to $\mathsf{UP}$. Is it known for any of them that adding the uniqueness assumption leads to a faster algorithm? (Allowing that it still remains exponential.)
3-SAT may be one such problem. Currently the best upper bound for Unique 3-SAT is exponentially faster than for general 3-SAT. (The speedup is exponential, although the reduction in the exponent is tiny.) The record-holder for the unique case is this paper by Timon Hertli. Hertli's algorithm builds upon the important PPSZ algorithm of Paturi, Pudlák, Saks, and Zane for $k$-SAT, which I believe is still the fastest for $k \geq 5$ (see also this encyclopedia article). The original analysis showed better bounds for Unique $k$-SAT than for general $k$-SAT when $k = 3, 4$; subsequently, however, Hertli showed in a different paper that you could get the same bounds for (a slightly tweaked) PPSZ algorithm, without assuming uniqueness. So, maybe uniqueness helps, and it can definitely simplify the analysis of some algorithms, but our understanding of the role of uniqueness for $k$-SAT is still growing. There is evidence that Unique $k$-SAT is not too much easier than general $k$-SAT. The Strong Exponential Time Hypothesis (SETH) asserts there is no $\delta < 1$ such that $n$-variable $k$-SAT is solvable in $O^*(2^{\delta n})$ time for each constant $k \geq 3$. It was shown in a paper of Calabro, Impagliazzo, Kabanets, and Paturi that, if SETH holds, then the same statement is true for Unique $k$-SAT. Also, if general $k$-SAT requires exponential time, i.e. there is some $k \geq 3, \epsilon > 0$ such that general $k$-SAT cannot be solved in time $O^*(2^{\epsilon n})$, then the same must be true for Unique 3-SAT. See the paper for the most general statement. (Note: the $O^*$ notation suppresses polynomial factors in the input length.)
{ "source": [ "https://cstheory.stackexchange.com/questions/22093", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/12710/" ] }
22,109
Let's say that I wanted to use a BSP not just for partitioning points, but also to define surfaces, i.e. that I have $\mathbb{R}^2$ and I want to be able to continuously map at least some easily known/calculated continuous subset of it to the points on the surface at every branch. Let's furthermore say that I wanted those surfaces to be curved. I could arbitrarily approximate curved partitioning surfaces by using a helluva lot of straight planes ala the classic BSP, but that seems silly. Is there prior art in using NURBS or kernelized support vectors or whatever to define smooth curved surfaces in a binary space partitioning tree s.t. extraction of some of the boundaries' points from the representation (enough to illustrate the boundary) is easily (and preferably deterministically, avoiding monte carlo methods) accomplished? Or is this one of those trivial knowledge-synthesis problems and I'm better off just winging it? If not, could someone please point me in the right direction?
{ "source": [ "https://cstheory.stackexchange.com/questions/22109", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/14359/" ] }
22,493
After studying deterministic finite state automata (DFA) in undergrad, I felt they are extremely well understood. My question is whether there is something we still don't understand about them. I don't mean generalisations of DFAs but the original unmodified DFAs we study in undergrad. This is a vague question but I hope you get the idea. I want to understand if it is fair to say that we completely understand DFAs. So I really mean questions that are inherently about DFAs, not problems artificially made to look like a problem about DFAs. Let me give an example of such a problem. Let L be the empty language if P=NP and some fixed non-regular language if P is not NP. Can L be accepted by a DFA? This question is about DFAs, but it isn't about them in spirit. I hope my point is clear and I don't get pedantic non-answers from people. In short is it fair to say We essentially completely understand DFAs. I am sorry if it turns out that this is a huge area of research that I was not aware of and I have just insulted an entire community of people.
Here is one problem described in the book "A second course in formal languages and automata theory" by Shallit. Let $u$ and $v$ be two distinct words with $|u|=|v|=n$. What is the size of the smallest DFA that accepts $u$ but rejects $v$, or vice versa? Robson, in his 1989 paper "Separating strings with small automata", proved an upper bound of $O(n^{2/5}(\log n)^{3/5})$. The best known lower bound is $\Omega(\log n)$. For a survey see this.
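For playing with small cases, a brute-force search for the smallest separating DFA is straightforward. A toy Python sketch of my own (the search is exponential in the number of states, so it is only usable for short words):

```python
from itertools import product

def smallest_separating_dfa(u, v, alphabet='ab', max_states=4):
    """Smallest number of states under which u and v end in different
    states; taking the accepting set to be u's end state then yields a
    DFA accepting u but rejecting v.  Exponential search: toy use only."""
    def run(delta, w):
        state = 0
        for c in w:
            state = delta[(state, c)]
        return state

    for k in range(1, max_states + 1):
        keys = list(product(range(k), alphabet))
        for trans in product(range(k), repeat=len(keys)):
            delta = dict(zip(keys, trans))
            if run(delta, u) != run(delta, v):
                return k, delta
    return None

print(smallest_separating_dfa('aab', 'aba')[0])  # 2
```

Even such toy experiments make the phenomenon concrete: most random pairs of equal-length words are separated by very few states, while finding pairs that force many states is what the lower-bound question is really about.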
{ "source": [ "https://cstheory.stackexchange.com/questions/22493", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/22872/" ] }
23,798
In introductions and explanations, the complexity classes P and NP are often given in terms of Turing machines. One model of computation is the lambda calculus. I understand that all models of computation are equivalent (and that anything introduced in terms of Turing machines can be introduced in terms of any model of computation), but I have never seen an explanation of the ideas behind the P and NP complexity classes through the lambda calculus. Can anybody explain the notions of the P and NP complexity classes without Turing machines, using only the lambda calculus as the model of computation?
Turing machines and $\lambda$-calculus are equivalent only w.r.t. the functions $\mathbb{N} \rightarrow \mathbb{N}$ they can define. From the point of view of computational complexity they seem to behave differently. The main reason people use Turing machines and not $\lambda$-calculus to reason about complexity is that using $\lambda$-calculus naively leads to unrealistic complexity measures, because you can copy terms (of arbitrary size) freely in single $\beta$-reduction steps, e.g. $(\lambda x.xxx)M \rightarrow MMM.$ In other words, single reduction steps in $\lambda$-calculus are a lousy cost model. In contrast, single Turing-machine reduction steps work great (in the sense of being good predictors of real-world program run-time). It is not known how fully to recover conventional Turing-machine based complexity theory in $\lambda$-calculus. In a recent (2014) breakthrough Accattoli and Dal Lago managed to show that large classes of time complexity such as $P$, $NP$ and $EXP$ can be given a natural $\lambda$-calculus formulation. But smaller classes like $O(n^2)$ or $O(n \log n)$ cannot be presented using the Accattoli / Dal Lago techniques. How to recover conventional space complexity using $\lambda$-calculus is unknown.
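The size blow-up from a single $\beta$-step is easy to demonstrate. Below is a toy Python sketch of my own, with de Bruijn-indexed terms; the substitution assumes the argument is closed, which is all the demo needs:

```python
# Terms: ('var', i) | ('lam', body) | ('app', f, a), with de Bruijn indices

def size(t):
    if t[0] == 'var':
        return 1
    if t[0] == 'lam':
        return 1 + size(t[1])
    return 1 + size(t[1]) + size(t[2])

def subst(t, depth, s):
    """Substitute closed term s for the variable bound at `depth`
    (no index shifting needed since s is closed)."""
    if t[0] == 'var':
        return s if t[1] == depth else t
    if t[0] == 'lam':
        return ('lam', subst(t[1], depth + 1, s))
    return ('app', subst(t[1], depth, s), subst(t[2], depth, s))

def beta(redex):
    (_, (_, body), arg) = redex          # redex = ('app', ('lam', body), arg)
    return subst(body, 0, arg)

# (\x. x x x) M  becomes  M M M  in one step: three copies of M appear
triple = ('lam', ('app', ('app', ('var', 0), ('var', 0)), ('var', 0)))
M = ('lam', ('lam', ('app', ('var', 0), ('var', 1))))   # some closed term
t = ('app', triple, M)
print(size(t), '->', size(beta(t)))      # 12 -> 17
```

Since the argument can itself contain such duplicating redexes, iterating this produces terms of size exponential in the number of $\beta$-steps, which is why step count alone is a poor cost measure.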
{ "source": [ "https://cstheory.stackexchange.com/questions/23798", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/13607/" ] }
24,943
Rather than empirical evidence, by what formal principles have we proved that quantum computing will be faster than traditional/classical computing?
This is a question that is a little bit difficult to unpack if you are not familiar with computational complexity. Like most of the field of computational complexity, the main results are widely believed but conjectural. The complexity classes typically associated with efficient classical computation are $\mathsf{P}$ (for deterministic algorithms) and $\mathsf{BPP}$ (for randomized). The quantum counterpart of these classes is $\mathsf{BQP}$. All three classes are subsets of $\mathsf{PSPACE}$ (a very powerful class). However, our current methods of proof are not strong enough to definitively show that $\mathsf{P}$ is not the same thing as $\mathsf{PSPACE}$. Thus, we do not know how to formally separate $\mathsf{P}$ from $\mathsf{BQP}$ either – since $\mathsf{P \subseteq BQP \subseteq PSPACE}$, separating those two classes is harder than the already formidable task of separating $\mathsf{P}$ from $\mathsf{PSPACE}$. (If we could prove $\mathsf{P \ne BQP}$, we would immediately obtain a proof that $\mathsf{P \ne PSPACE}$, so proving $\mathsf{P \ne BQP}$ has to be at least as hard as the already-very-hard problem of proving $\mathsf{P \ne PSPACE}$). For this reason, within the current state of the art, it is difficult to obtain a rigorous mathematical proof showing that quantum computing will be faster than classical computing. Thus, we usually rely on circumstantial evidence for complexity class separations. Our strongest and most famous evidence is Shor's algorithm, which allows us to factor in $\mathsf{BQP}$. In contrast, we do not know of any algorithm that can factor in $\mathsf{BPP}$ – and most people believe one doesn't exist; that is part of the reason why we use RSA for encryption, for instance. Roughly speaking, this implies that it is possible for a quantum computer to factor efficiently, but suggests that it may not be possible for a classical computer to factor efficiently. For these reasons, Shor's result has suggested to many that $\mathsf{BQP}$ is strictly more powerful than $\mathsf{BPP}$ (and thus also more powerful than $\mathsf{P}$). I don't know of any serious arguments that $\mathsf{BQP = P}$, except from those people that believe in much bigger complexity class collapses (which are a minority of the community). The most serious arguments I have heard against quantum computing come from stances closer to the physics and argue that $\mathsf{BQP}$ does not correctly capture the nature of quantum computing. These arguments typically say that macroscopic coherent states are impossible to maintain and control (e.g., because there is some yet-unknown fundamental physical roadblock), and thus the operators that $\mathsf{BQP}$ relies on cannot be realized (even in principle) in our world. If we start to move to other models of computation, then a particularly easy model to work with is quantum query complexity (the classical version that corresponds to it is decision tree complexity). In this model, for total functions we can prove that (for some problems) quantum algorithms can achieve a quadratic speedup, although we can also show that for total functions we cannot do better than a power-6 speedup and believe that quadratic is the best possible. For partial functions, it is a totally different story, and we can prove that exponential speedups are achievable. Again, these arguments rely on a belief that we have a decent understanding of quantum mechanics and there isn't some magical unknown theoretical barrier stopping macroscopic quantum states from being controlled.
{ "source": [ "https://cstheory.stackexchange.com/questions/24943", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/25351/" ] }
24,986
Any language which is not Turing complete cannot write an interpreter for itself. I have no clue where I read that, but I have seen it used a number of times. It seems like this gives rise to a kind of "ultimate" non-Turing-complete language; the one(s) that can only be interpreted by a Turing machine. These languages would not necessarily be able to compute all total functions from naturals to naturals, nor would they necessarily be isomorphic (that is, maybe ultimate languages A and B exist such that there exists a function F that A can compute but B cannot). Agda can interpret Gödel's System T, and Agda is total, so it would seem that such an ultimate language should be strictly more powerful than Gödel's System T. It also seems to me that such a language would be at least as powerful as Agda (though I have no evidence, just a hunch). Has any research been done in this vein? What results are known (namely, is such an "ultimate" language known)? Bonus: I am worried that there exists a pathological case that cannot compute functions that Gödel's System T could, yet can still only be interpreted by a Turing machine because it allows some really odd functions to be computed. Is this the case, or can we know that such a language would be able to compute anything Gödel's System T could compute?
This is a badly phrased question, so let's first make sense of it. I am going to do it in the style of computability theory. Thus I will use numbers instead of strings: a piece of source code is a number, rather than a string of symbols. It does not really matter; you may replace $\mathbb{N}$ with $\mathtt{string}$ throughout below. Let $\langle m, n\rangle$ be a pairing function. Let us say that a programming language $L = (P, ev)$ is given by the following data: a decidable set $P \subseteq \mathbb{N}$ of "valid programs", and a computable partial function $ev : P \times \mathbb{N} \to \mathbb{N}$. The fact that $P$ is decidable means there is a total computable map $valid : \mathbb{N} \to \{0,1\}$ such that $valid(n) = 1 \iff n \in P$. Informally, we are saying that it is possible to tell whether a given string is a valid piece of code. The function $ev$ is essentially an interpreter for our language: $ev(m,n)$ runs code $m$ on input $n$ – the result may be undefined. We can now introduce some terminology: A language is total if $n \mapsto ev(m,n)$ is a total function for all $m \in P$. A language $L_1 = (P_1, ev_1)$ interprets language $L_2 = (P_2, ev_2)$ if there exists $u \in P_1$ such that $ev_1(u, \langle n, m \rangle) \simeq ev_2(n, m)$ for all $n \in P_2$ and $m \in \mathbb{N}$. Here $u$ is the simulator for $L_2$ implemented in $L_1$. It is also known as the universal program for $L_2$. Other definitions of "$L_1$ interprets $L_2$" are possible, but let me not get into this now. We say that $L_1$ and $L_2$ are equivalent if they interpret each other. There is "the most powerful" language $T = (\mathbb{N}, \varphi)$ of Turing machines (which you refer to as "a Turing machine") in which $n \in \mathbb{N}$ is an encoding of a Turing machine and $\varphi(n,m)$ is the partial computable function that "runs the Turing machine encoded by $n$ on input $m$". This language can interpret all other languages, obviously, since we required $ev$ to be computable. Our definition of programming languages is very relaxed. For the following to go through, let us require three more conditions: $L$ implements the successor function: there is $succ \in P$ such that $ev(succ,m) = m+1$ for all $m \in \mathbb{N}$; $L$ implements the diagonal function: there is $diag \in P$ such that $ev(diag,m) = \langle m, m \rangle$ for all $m \in \mathbb{N}$; $L$ is closed under composition of functions: if $L$ implements $f$ and $g$ then it also implements $f \circ g$. A classic result is this: Theorem: If a language can interpret itself then it is not total. Proof. Suppose $u$ is the universal program for a total language $L$ implemented in $L$, i.e., for all $m \in P$ and $n \in \mathbb{N}$, $$ev(u, \langle m, n \rangle) \simeq ev(m, n).$$ As successor, diagonal, and $ev(u, {-})$ are implemented in $L$, so is their composition $k \mapsto ev(u, \langle k, k \rangle) + 1$. There exists $n_0 \in P$ such that $ev(n_0, k) \simeq ev(u, \langle k, k \rangle) + 1$, but then $$ev(u, \langle n_0, n_0\rangle) \simeq ev(n_0, n_0) \simeq ev(u, \langle n_0, n_0 \rangle) + 1$$ As there is no number equal to its own successor, it follows that $L$ is not total or that $L$ does not interpret itself. QED. Observe that we could replace the successor map with any other fixpoint-free map. Here is a little theorem which I think will clear up a misunderstanding. Theorem: Every total language can be interpreted by another total language. Proof. Let $L$ be a total language.
We get a total $L'$ which interprets $L$ by adjoining to $L$ its evaluator $ev$. More precisely, let $P' = \{\langle 0, n\rangle \mid n \in P\} \cup \{\langle 1, 0\rangle\}$ and define $ev'$ as $$ev'(\langle b, n \rangle, m) = \begin{cases} ev(n,m) & \text{if $b = 0$},\\ ev(m_0, m_1) & \text{if $b = 1$ and $m = \langle m_0, m_1 \rangle$} \end{cases} $$ Obviously, $L'$ is total because $L$ is total. To see that $L'$ can simulate $L$, just take $u = \langle 1, 0\rangle$, since then $ev'(u, \langle m, n\rangle) \simeq ev(m, n)$, as required. QED. Exercise: [added 2014-06-27] The language $L'$ constructed above is not closed under composition. Fix the proof of the theorem so that $L'$ satisfies the extra requirements if $L$ does. In other words, you never need the full power of Turing machines to interpret a total language $L$ – a slightly more powerful total language $L'$ suffices. The language $L'$ is strictly more powerful than $L$ because it interprets $L$, but $L$ does not interpret itself.
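To make the $L \mapsto L'$ construction concrete, here is a toy Python sketch; the choice of toy total language (polynomials given by coefficient tuples) is mine and not part of the proof:

```python
# A toy total language L: a program is a tuple of coefficients, and
# ev evaluates the corresponding polynomial, so every run halts.
def ev(prog, n):
    return sum(c * n**i for i, c in enumerate(prog))

# L' adjoins an evaluator for L, as in the proof: a program of L' is
# either ('lift', p) for p a program of L, or ('univ',), the universal
# program for L.  L' is still total, since ev is.
def ev2(prog, arg):
    if prog[0] == 'lift':
        return ev(prog[1], arg)
    m, n = arg                       # ('univ',) expects a pair <m, n>
    return ev(m, n)

square = (0, 0, 1)                   # the L-program for n -> n^2
print(ev(square, 7))                 # 49
print(ev2(('univ',), (square, 7)))   # 49: L' interprets L
```

Note that ev2 is not itself a program of L (polynomials cannot case-split on tags), which is exactly the point: the interpreter lives one level up, in the strictly more powerful total language L'.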
{ "source": [ "https://cstheory.stackexchange.com/questions/24986", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/21112/" ] }
25,466
I've seen it stated in multiple places that factoring is in BQP, with a reference to Shor's algorithm, but Shor's algorithm does not solve a decision problem. How can factoring be restated as a decision problem? And is there a paper which shows that Shor's algorithm implies this decision problem is in BQP?
Here the goal is to construct a decision problem D so that (a) if you can factor you can solve the decision problem in polynomial time and (b) if you can solve the decision problem you can factor in polynomial time. There are a number of ways to do this. To name just two:

1. D: given n and k, does n have a divisor d satisfying 1 < d <= k?
2. D: given n and j, is the j'th bit of the smallest nonunit divisor of n equal to 1?

If you can solve 1, then you can identify d using binary search. Once you have d, you can then continue with n/d until the complete factorization is achieved. 2 is similar.
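For concreteness, here is a hedged Haskell sketch of how decision problem 1 yields a full factorization. The oracle is simulated by trial division purely to make the sketch runnable; a quantum machine would answer the same question via Shor's algorithm instead:

-- Oracle for problem 1: does n have a divisor d with 1 < d <= k?
hasDivisorUpTo :: Integer -> Integer -> Bool
hasDivisorUpTo n k = any (\d -> n `mod` d == 0) [2 .. min k (n - 1)]

-- Binary search over k homes in on the smallest nontrivial divisor,
-- since the oracle's answer is monotone in k.
smallestDivisor :: Integer -> Maybe Integer
smallestDivisor n
  | n < 4 || not (hasDivisorUpTo n (n - 1)) = Nothing  -- n prime or tiny
  | otherwise = Just (go 2 (n - 1))
  where
    go lo hi                      -- invariant: a divisor lies in [lo, hi]
      | lo == hi             = lo
      | hasDivisorUpTo n mid = go lo mid
      | otherwise            = go (mid + 1) hi
      where mid = (lo + hi) `div` 2

-- Peel off smallest divisors until the remaining cofactor is prime.
factorize :: Integer -> [Integer]
factorize n = case smallestDivisor n of
  Nothing -> [n]
  Just d  -> d : factorize (n `div` d)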
{ "source": [ "https://cstheory.stackexchange.com/questions/25466", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/25551/" ] }
25,512
This is a "historical question" more than it is a research question, but was the classical reduction to order-finding in Shor's algorithm for factorization initially discovered by Peter Shor, or was it previously known? Is there a paper that describes the reduction that pre-dates Shor, or is it simply a so-called "folk result?" Or was it simply another breakthrough in the same paper?
I have to admit (surprising as it sounds) that I don't really know the answer. I either discovered or rediscovered this reduction myself. I discovered the discrete log algorithm first, and the factoring algorithm second, so I knew from discrete log that periodicity was useful. I knew that factoring was equivalent to finding two unequal numbers with equal squares (mod N); this is the basis for the quadratic sieve algorithm. I had also seen the reduction of factoring to finding the Euler $\phi$ function, which is quite similar. While I came up with the reduction of this question to order-finding, it's not hard, so I wouldn't be surprised if there was another paper describing this reduction that predates mine. However, I don't think this could be a widely known "folk result". Even if somebody had discovered it, before quantum computing why would anybody care about reducing factoring to the question of order-finding (provably exponential on a classical computer)? EDIT: Note that order-finding is provably exponential only in an oracle setting; order finding modulo $N$ is equivalent to factoring $N$, and this had been proved earlier by Heather Woll, as the other answer points out.
{ "source": [ "https://cstheory.stackexchange.com/questions/25512", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/-1/" ] }
25,551
Given two (deterministic) finite automata $A, B$ over $\Sigma$ and a mapping $h:\Sigma\rightarrow \Sigma'$. Naturally, $h$ can be extended to a mapping in $\Sigma^*\rightarrow \Sigma'^*$, which is denoted by $h$ as well. Is the set $$\{w\in L(A)\mid h^{-1}(h(w))\subseteq L(B)\}$$ regular?
{ "source": [ "https://cstheory.stackexchange.com/questions/25551", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/22750/" ] }
25,573
I've recently been studying Haskell and programming languages. Could someone recommend some books on type theory?
Software Foundations by Benjamin C. Pierce would be a good place to start. It would make a good precursor to his Types and Programming Languages. There are also Simon Thompson's Type Theory and Functional Programming and Girard's Proofs and Types.
{ "source": [ "https://cstheory.stackexchange.com/questions/25573", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/21930/" ] }
25,617
Despite several years of classes, I'm still at a loss when it comes to choosing a research topic. I've been looking over papers from different areas and have spoken with professors, and I'm beginning to think this is the wrong approach. I've read that it helps to find an interesting problem (never mind the area) and to then work on that. Textbooks mention famous unsolved ones, but I wouldn't want to tackle them directly. Research papers only mention positive results, not failed attempts. How can I find interesting research problems? How do you find interesting research problems? Is there a list somewhere? How do you decide if it is worth it to work on a particular problem?
I strongly disagree with the "find a list of open problems" approach. Usually open problems are quite hard to make progress on, and I'm thoroughly unconvinced that good research is done by tackling some hard but uninteresting problem in a technical area. That being said, of course solving an open problem is really good for academic credentials. But that's not what you are asking. Research is a process designed to generate understanding at a high level. Solving technical problems is a means to that end: often the problem and its solution illuminate the structure or behavior of some scientific phenomenon (a mathematical structure, a programming language practice, etc). So my first suggestion is: find a problem that you want to understand. Research is fundamentally about confusion. Are there some specific topics you are interested in, but that you feel you have a fundamentally incomplete comprehension of, or that seem technically clear, but that you lack any good intuition for? Those are good starting points. Follow Terry Tao's advice: ask yourself dumb questions! A lot of good research comes out of these considerations. In fact, this whole page contains a lot of good advice. Note that if you are looking at a well-explored problem or field, it's unlikely you'll get original insights right away, so it's important to read up on literature concurrently with your own explorations. Second, don't discount communicating with your professors. Ask them about their own research, not necessarily about projects they want to give you. Engage in a conversation! This helps you find out what you are interested in, but also what the research landscape looks like in their field. Research doesn't happen in a vacuum, so you should speak to your fellow students and PhDs in your department, go to talks and workshops at your university, etc. You'll find that being immersed in a research environment helps you do research a lot more than finding a list or a specific problem and locking yourself in your office. Finally, I would suggest working on something small. Research is bottom-up much more than it is top-down, and it's rare that a very simple task (writing a proof or a program) turns out to be as simple as you expected. Doing several small projects that are not research-scale (expanding on homework, writing up an explanation of something you learned) often builds up into genuine research-level stuff. It's common to try to "go big" at the beginning, but that's just not how our brains work.
{ "source": [ "https://cstheory.stackexchange.com/questions/25617", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/27077/" ] }
25,671
Deciding if a quantified boolean formula such as $\forall x_1 \exists x_2 \forall x_3\cdots \exists x_n \varphi(x_1, x_2,\ldots , x_n)$ evaluates to true is a classical PSPACE-complete problem. This can be viewed as a game between two players, with alternating moves. The first player decides the truth value of the odd-numbered variables and the second player decides the truth value of the even-numbered variables. The first player tries to make $\varphi$ false and the second player tries to make it true. Deciding who has a winning strategy is PSPACE-complete. I am considering a similar problem with two players, one trying to make a boolean formula $\varphi$ true and the other trying to make it false. The difference is that on a move, a player can choose a variable and a truth value for it (for example, as the very first move, player one might decide to set $x_8$ to true, and then in the next move, player two might decide to set $x_3$ to false). This means that the players can decide to which of the variables (of those that have not yet been assigned a truth value) they want to assign a truth value, instead of having to play the game in the order $x_1, \ldots, x_n$. The problem is, given a boolean formula $\varphi$ on $n$ variables, to decide whether player one (trying to make it false) or player two (trying to make it true) has a winning strategy. This problem is clearly still in PSPACE, since the game tree has linear depth. Does it remain PSPACE-complete?
It is an Unordered Constraint Satisfaction game, and it was proved to be PSPACE-complete only recently; a proof can be found in:

Lauri Ahlroth and Pekka Orponen, Unordered Constraint Satisfaction Games. Lecture Notes in Computer Science, Volume 7464, 2012, pp. 64-75.

Abstract: We consider two-player constraint satisfaction games on systems of Boolean constraints, in which the players take turns in selecting one of the available variables and setting it to true or false, with the goal of maximising (for Player I) or minimising (for Player II) the number of satisfied constraints. Unlike in standard QBF-type variable assignment games, we impose no order in which the variables are to be played. This makes the game setup more natural, but also more challenging to control. We provide polynomial-time, constant-factor approximation strategies for Player I when the constraints are parity functions or threshold functions with a threshold that is small compared to the arity of the constraints. Also, we prove that the problem of determining if Player I can satisfy all constraints is PSPACE-complete even in this unordered setting, and when the constraints are disjunctions of at most 6 literals (an unordered-game analogue of 6-QBF).

From the content:

... Our generic example of an unordered constraint satisfaction game is the Game on Boolean Formulas (GBF). An instance of this game is given by a set of m non-constant Boolean formulas $C = \{c_1,...,c_m\}$ over a common set of n variables $X = \{x_1,...,x_n\}$. We refer to the formulas in $C$ as clauses even though we do not in general require them to be disjunctions. ... A game on $C$ proceeds so that on each turn the player to move selects one of the previously nonselected variables and assigns a truth value to it. Player I starts, and the game ends when all variables have been assigned a value. In the decision version of GBF, the question is whether Player I has a comprehensive winning strategy, by which she can make all clauses satisfied no matter what Player II does. In the positive case we say that the instance is GBF-satisfiable. ...

... Theorem 4: The problem of deciding GBF-satisfiability of a Boolean formula is PSPACE-complete.

EDIT: Daniel Grier found out that the result was already settled by Schaefer in the '70s; see his answer on this page for the reference (and upvote it :-). Schaefer proved that the game is still PSPACE-complete even if restricted to positive CNF formulas (i.e. propositional formulas in conjunctive normal form in which no negated variables occur) with at most 11 variables in each clause.
{ "source": [ "https://cstheory.stackexchange.com/questions/25671", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/9663/" ] }
27,152
I am currently trying to find EXPSPACE-complete problems (mainly to find inspiration for a reduction), and I am surprised by how few results come up. So far, I found these, and I have trouble expanding the list:

- universality (or other properties) of regular expressions with exponentiation,
- problems related to vector addition systems,
- unobservable games (see for instance this blog ),
- some fragments of FO-LTL, in On the Computational Complexity of Decidable Fragments of First-Order-Linear Temporal Logics.

Do you know of other contexts in which EXPSPACE-completeness appears naturally?
Extending the example pointed out by Emil Jerabek in the comments, $\mathsf{EXPSPACE}$-complete problems arise naturally all over algebraic geometry. This started (I think) with the Ideal Membership Problem (Mayr–Meyer and Mayr) and hence the computation of Gröbner bases. This was then extended to the computation of syzygies (Bayer and Stillman). Many natural problems in computational algebraic geometry end up being equivalent to one of these problems. Also see the Bayer–Mumford survey "What can be computed in algebraic geometry?"
{ "source": [ "https://cstheory.stackexchange.com/questions/27152", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/8953/" ] }
27,449
Grothendieck has passed away . He had massive impact on 20th century mathematics continuing into the 21st century. This question is asked somewhat in the style/spirit, for example, of Alan Turing's Contributions to Computer Science . What are Grothendieck 's major influences on theoretical computer science?
Grothendieck's inequality , from his days in functional analysis, was initially proved to relate fundamental norms on tensor product spaces. Grothendieck called the inequality "the fundamental theorem of the metric theory of tensor product spaces", and published it in a now famous paper in 1958, in French, in a limited circulation Brazilian journal. The paper was largely ignored for 15 years, until it was rediscovered by Lindenstrauss and Pelczynski (after Grothendieck had left functional analysis). They gave many reformulations of the paper's main results, related it to research on absolutely summing operators and factorization norms, and observed that Grothendieck had solved "open" problems which had been raised after the paper was published. Pisier gives a very detailed account of the inequality, its variants, and its tremendous influence on functional analysis in his survey . Grothendieck's inequality is very naturally expressed in the language of combinatorial optimization and approximation algorithms. It says that the non-convex, NP-hard optimization problem $$ \max\{x^TAy: x \in \{-1, 1\}^m, y \in \{-1, 1\}^n\} $$ is approximated up to a fixed constant by its semidefinite relaxation $$ \max\{\sum_{i,j}{a_{ij}\langle u_i, v_j\rangle}: u_1, \ldots, u_m, v_1, \ldots, v_n \in \mathbb{S}^{n+m-1}\}, $$ where $\mathbb{S}^{n+m-1}$ is the unit sphere in $\mathbb{R}^{n+m}$. Proofs of the inequality give "rounding algorithms", and in fact the Goemans-Williamson random hyperplane rounding does the job (but gives a suboptimal constant). However, Grothendieck's inequality is interesting because the analysis of the rounding algorithm has to be "global", i.e. look at all terms of the objective function together. Having said this, it should not be surprising that Grothendiecks's inequality has found a second (third? fourth?) life in computer science. Khot and Naor survey its multiple applications and connections to combinatorial optimization. The story does not end there. The inequality is related to Bell inequality violations in quantum mechanics (see Pisier's paper), has been used by Linial and Shraibman in work on communication complexity, and even turned out useful in work on private data analysis (shameless plug).
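As a small illustration of the rounding step mentioned above, here is a hedged Haskell sketch of random hyperplane rounding (my own rendering; the unit vectors are assumed to come from an SDP solver, which is not shown):

import System.Random (randomRIO)

-- One standard Gaussian sample via Box-Muller.
gaussian :: IO Double
gaussian = do
  u1 <- randomRIO (1e-12, 1)
  u2 <- randomRIO (0, 1)
  pure (sqrt (-2 * log u1) * cos (2 * pi * u2))

dot :: [Double] -> [Double] -> Double
dot x y = sum (zipWith (*) x y)

-- Round each unit vector to +/-1 according to the side of a random
-- hyperplane it falls on; Grothendieck's inequality is what bounds the
-- loss of this kind of rounding by a constant for the problem above.
roundVectors :: [[Double]] -> IO [Double]
roundVectors us = do
  g <- mapM (const gaussian) [1 .. length (head us)]
  pure [if dot g u >= 0 then 1 else -1 | u <- us]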
{ "source": [ "https://cstheory.stackexchange.com/questions/27449", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/7884/" ] }
27,824
I know that the complexity of most varieties of typed lambda calculi without the Y combinator primitive is bounded, i.e. only functions of bounded complexity can be expressed, with the bound becoming larger as the expressiveness of the type system grows. I recall that, e.g., the Calculus of Constructions can express at most doubly exponential complexity. My question is whether the typed lambda calculi can express all algorithms below a certain complexity bound, or only some. E.g. are there any exponential-time algorithms not expressible by any formalism in the Lambda Cube? What is the "shape" of the complexity space which is completely covered by different vertices of the Cube?
I will give a partial answer; I hope others will fill in the blanks. In typed $\lambda$-calculi, one may give a type to usual representations of data ($\mathsf{Nat}$ for Church (unary) integers, $\mathsf{Str}$ for binary strings, $\mathsf{Bool}$ for Booleans) and wonder what the complexity of the functions/problems representable/decidable by typed terms is. I know a precise answer only in some cases, and in the simply typed case it depends on the convention used when defining "representable/decidable". Anyhow, I don't know of any case in which there is a doubly exponential upper bound. First, a brief recap on the Lambda Cube. Its 8 calculi are obtained by enabling or disabling the following 3 kinds of dependencies on top of the simply typed $\lambda$-calculus (STLC):

- polymorphism: terms may depend on types;
- dependent types: types may depend on terms;
- higher order: types may depend on types.

(The dependency of terms on terms is always there). Adding polymorphism yields System F. Here, you can type the Church integers with $\mathsf{Nat}:=\forall X.(X\rightarrow X)\rightarrow X\rightarrow X$, and similarly for binary strings and Booleans. Girard proved that System F terms of type $\mathsf{Nat}\rightarrow\mathsf{Nat}$ represent exactly the numerical functions whose totality is provable in second order Peano arithmetic. That's pretty much everyday mathematics (albeit without any form of choice), so the class is huge; the Ackermann function is a sort of tiny microbe in it, let alone the function $2^{2^n}$. I don't know of any "natural" numerical function which cannot be represented in System F. Examples usually are built by diagonalization, or encoding the consistency of second order PA, or other self-referential tricks (like deciding $\beta$-equality within System F itself). Of course in System F you can convert between unary integers $\mathsf{Nat}$ and their binary representation $\mathsf{Str}$, and then test for instance whether the first bit is 1, so the class of decidable problems (by terms of type $\mathsf{Str}\rightarrow\mathsf{Bool}$) is equally huge. The other 3 calculi of the Lambda Cube which include polymorphism are therefore at least as expressive as System F. These include System F$_\omega$ (polymorphism + higher order), which can express exactly the provably total functions in higher order PA, and the Calculus of Constructions (CoC), which is the most expressive calculus of the Cube (all dependencies are enabled). I don't know a characterization of the expressiveness of the CoC in terms of arithmetical theories or set theories, but it must be pretty frightening :-) I am much more ignorant regarding the calculi obtained by just enabling dependent types (essentially Martin-Löf type theory without equality and natural numbers), higher-order types, or both. In these calculi, types are powerful but terms can't access this power, so I don't know what you get. Computationally, I don't think you get much more expressiveness than with simple types, but I may be mistaken. So we are left with the STLC. As far as I know, this is the only calculus of the Cube with interesting (i.e., not monstrously big) complexity upper bounds. There is an unanswered question about this on TCS.SE, and in fact the situation is a bit subtle.
First, if you fix an atom $X$ and define $\mathsf{Nat}:=(X\rightarrow X)\rightarrow X\rightarrow X$, there is Schwichtenberg's result (I know there's an English translation of that paper somewhere on the web but I can't find it now) which tells you that the functions of type $\mathsf{Nat}\rightarrow\mathsf{Nat}$ are exactly the extended polynomials (with if-then-else). If you allow some "slack", i.e. you allow the parameter $X$ to be instantiated at will and consider terms of type $\mathsf{Nat}[A]\rightarrow\mathsf{Nat}$ with $A$ arbitrary, much more can be represented: for example, any tower of exponentials (so you may go well beyond doubly exponential), as well as the predecessor function, but still no subtraction (if you consider binary functions and try to type them with $\mathsf{Nat}[A]\rightarrow\mathsf{Nat}[A']\rightarrow\mathsf{Nat}$). So the class of numerical functions representable in the STLC is a bit weird: it is a strict subset of the elementary functions but does not correspond to anything well known. In apparent contradiction with the above, there's this paper by Mairson which shows how to encode the transition function of an arbitrary Turing machine $M$, from which you obtain a term of type $\mathsf{Nat}[A]\rightarrow\mathsf{Bool}$ (for some type $A$ depending on $M$) which, given a Church integer $n$ as input, simulates the execution of $M$ starting from a fixed initial configuration for a number of steps of the form $$2^{2^{\vdots^{2^n}}},$$ with the height of the tower fixed. This does not show that every elementary problem is decidable by the STLC, because in the STLC there is no way of converting a binary string (of type $\mathsf{Str}$) representing the input of $M$ to the type used for representing the configurations of $M$ in Mairson's encoding. So the encoding is somehow "non-uniform": you can simulate elementarily-long executions from a fixed input, using a distinct term for each input, but there is no term that handles arbitrary inputs. In fact, the STLC is extremely weak in what it can decide "uniformly". Let us call $\mathcal C_{ST}$ the class of languages decidable by simply typed terms of type $\mathsf{Str}[A]\rightarrow\mathsf{Bool}$ for some $A$ (like above, you allow arbitrary "slack" in the typing). As far as I know, a precise characterization of $\mathcal C_{ST}$ is missing. However, we do know that $\mathcal C_{ST}\subsetneq\mathrm{LINTIME}$ (deterministic linear time). Both the containment and the fact that it is strict may be shown by very neat semantic arguments (using the standard denotational semantics of the STLC in the category of finite sets). The former was shown recently by Terui. The latter is essentially a reformulation of old results of Statman. An example of a problem in $\mathrm{LINTIME}\setminus\mathcal C_{ST}$ is MAJORITY (given a binary string, tell whether it contains strictly more 1s than 0s). (Much) Later add-on: I just found out that the class I call $\mathcal C_{ST}$ above actually does have a precise characterization, which is moreover extremely simple. In this beautiful 1996 paper, Hillebrand and Kanellakis prove, among other things, that

Theorem. $\mathcal C_{ST}=\mathsf{REG}$ (the regular languages on $\{0,1\}$).

(This is Theorem 3.4 in their paper). I find this doubly surprising: I am surprised by the result itself (it never occurred to me that $\mathcal C_{ST}$ could correspond to something so "neat") and by how little known it is.
It is also amusing that Terui's proof of the $\mathrm{LINTIME}$ upper bound uses the same methods employed by Hillebrand and Kanellakis (interpreting the simply-typed $\lambda$-calculus in the category of finite sets). In other words, Terui (and I) could have easily re-discovered this result were it not for the fact that we were somehow happy with $\mathcal C_{ST}$ being a "weird" class :-) (Incidentally, I shared my surprise in this answer to an MO question about "unknown theorems").
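Coming back to the System F representation at the start of this answer, here is a hedged Haskell rendering of $\mathsf{Nat}:=\forall X.(X\rightarrow X)\rightarrow X\rightarrow X$, with Haskell's higher-rank forall standing in for System F's quantifier (an assumption of the gloss, not a claim of equivalence):

{-# LANGUAGE RankNTypes #-}

-- Church numerals at the System F type Nat.
type Nat = forall x. (x -> x) -> x -> x

zero :: Nat
zero _ z = z

suc :: Nat -> Nat
suc n s z = s (n s z)

-- Instantiating X at Integer recovers an ordinary number.
toInt :: Nat -> Integer
toInt n = n (+ 1) 0

main :: IO ()
main = print (toInt (suc (suc zero)))   -- prints 2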
{ "source": [ "https://cstheory.stackexchange.com/questions/27824", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/326/" ] }
29,340
In the paper "THE COMPLEXITY OF SATISFIABILITY PROBLEMS" by Thomas J. Schaefer, the author has mentioned that This raises the intriguing possibility of computer-assisted NP-completeness proofs. Once the researcher has established the basic framework for simulating conjunctions of clauses, the relational complexity could be explored with the help of a computer. The computer would be instructed to randomly generate various input configurations and test whether the defined relation was non-affine, non-bijunctive, etc. Of course, this is a limitation: The fruitfulness of such an approach remains to be proved: the enumeration of the elements of a relation on lO or 15 variables is Surely not a light computational task. I am curious that Are there follow-up researches in developing this idea of "computer-assisted NP-completeness proofs"? What is the state-of-the-art (may be specific to $\textsf{3SAT}$ or $\textsf{3-Partition}$)? Since Schaefer has proposed the idea of "computer-assisted" NP-Completeness proof (at least for reductions from $\textsf{SAT}$), does this mean there are some general principles/structures underlying these reductions (for the ones from $\textsf{3SAT}$ or $\text{3-Partition}$)? If so, what are they? Does anyone have experience in proving NP-completeness with a computer-assistant? Or can anyone make up an artificial example?
As for question 2, there are at least two examples of $NP$-completeness proofs that involve computer assistance. Erickson and Ruskey provided a computer-aided proof that Domino Tatami Covering is NP-complete. They gave a polynomial time reduction from planar 3-SAT to tatami domino covering. A SAT-solver (Minisat) was used to automate gadget discovery in the reduction. No other $NP$-completeness proof is known for it. Ruepp and Holzer proved that the pencil puzzle Kakuro is $NP$-complete. Some parts of the $NP$-completeness proof were generated automatically using a SAT-solver (again Minisat).
{ "source": [ "https://cstheory.stackexchange.com/questions/29340", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/12739/" ] }
29,383
Suppose that P = NP is true. Would there then be any practical application to building a quantum computer, such as solving certain problems faster, or would any such improvement be irrelevant given that P = NP is true? How would you characterize the improvement in efficiency that would come about if a quantum computer could be built in a world where P = NP, as opposed to a world in which P != NP? Here's a made-up example of roughly what I'm looking for: If P != NP, we see that complexity class ABC is equal to quantum complexity class XYZ...but if P = NP, class ABC collapses to related class UVW anyway. (Motivation: I am curious about this, and relatively new to quantum computing; please migrate this question if it is insufficiently advanced.)
The paper " BQP and the Polynomial Hierarchy " by Scott Aaronson directly addresses your question. If P=NP, then PH would collapse. If furthermore BQP were in PH, then no quantum speed-up would be possible in that case. On the other hand, Aaronson gives evidence for a problem with quantum speedup outside PH, thus such a speed-up would survive a collapse of PH.
{ "source": [ "https://cstheory.stackexchange.com/questions/29383", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/-1/" ] }
29,458
I've been trying to wrap my head around the what, why and how of $\lambda$-calculus but I'm unable to come to grips with "why does it work"? "Intuitively" I get the computability model of Turing Machines (TM). But this $\lambda$-abstraction just leaves me confounded. Let's assume TMs don't exist: then how can one be "intuitively" convinced of $\lambda$-calculus's ability to capture this notion of computability? How does having a bunch of functions for everything and their composability imply computability? What am I missing here? I read Alonzo Church's paper on that but I'm still confused and looking for a more "dumbed-down" understanding of it.
You're in good company. Kurt Gödel criticized $\lambda$-calculus (as well as his own theory of general recursive functions) as not being a satisfactory notion of computability on the grounds that it is not intuitive, or that it does not sufficiently explain what is going on. In contrast, he found Turing's analysis of computability and the ensuing notion of machine totally convincing. So, don't worry. On the other hand, to get some idea of how a model of computability works, it's best to write some programs in it. But you do not have to do it in pure $\lambda$-calculus, although it's fun (in the same sort of way that firewalking is). You can use a modern descendant of $\lambda$-calculus, such as Haskell.
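In that spirit, a first tiny taste in Haskell of why "a bunch of functions" suffices to encode data and control flow (names are mine, purely illustrative):

-- Church booleans: a boolean is the function that picks a branch.
true, false :: a -> a -> a
true  t _ = t
false _ f = f

-- "if b then x else y" becomes plain application: b x y.
-- Negation just swaps the branches.
notB :: (a -> a -> a) -> a -> a -> a
notB b t f = b f t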
{ "source": [ "https://cstheory.stackexchange.com/questions/29458", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/7014/" ] }
29,476
I've read that Church initially proposed the $\lambda$-calculus as part of his Postulates of Logic paper (which is a dense read). But Kleene proved his "system" inconsistent, after which Church extracted the relevant parts for his work on "effective computability" and abandoned his prior work on logic. So as I understand it, the $\lambda$-system and its notation took form as part of something to do with logic. What was Church initially trying to achieve, before he forked off from that work later? What were the initial reasons for creating $\lambda$-calculus?
He wanted to create a formal system for the foundations of logic and mathematics that was simpler than Russell's type theory and Zermelo's set theory. The basic idea was to add a constant $\Xi$ to the untyped lambda calculus (or combinatory logic) and interpret $XZ$ as expressing "$Z$ satisfies the predicate $X$" and $\Xi XY$ as expressing "$X\subseteq Y$". With rules expressing these intentions one can then interpret the ${\to}{\forall}$-fragment of intuitionistic predicate logic and unrestricted comprehension, the only problem being that by Curry's paradox, every $X$ is derivable. See p. 7 of: Cardone and Hindley, History of Lambda-calculus and Combinatory Logic , 2006: http://www.users.waitrose.com/~hindley/SomePapers_PDFs/2006CarHin,HistlamRp.pdf As well as the introduction to: Barendregt, Bunder and Dekkers, Systems of Illative Combinatory Logic Complete for First-Order Propositional and Predicate Calculus , JSL 58-3 (1993): http://ftp.cs.ru.nl/CompMath.Found/ICL1.ps
{ "source": [ "https://cstheory.stackexchange.com/questions/29476", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/7014/" ] }
29,519
In this Wikipedia article on Turing completeness it states that: The untyped lambda calculus is Turing complete, but many typed lambda calculi, including System F, are not. The value of typed systems is based in their ability to represent most typical computer programs while detecting more errors. What is an example of a total computable function that is uncomputable by System F? In addition, since Hindley-Milner is: A restriction of System F because of the fact that: type checking is undecidable for a Curry-style variant of System F, that is, one that lacks explicit typing annotations. Does this mean that the lambda calculus underlying Hindley-Milner type systems is not Turing complete either? If this is true, since Haskell is clearly Turing complete and we know that its basis is the lambda calculus and the Hindley-Milner type system, what features that are not present in the lambda calculus are added in order to make Haskell Turing complete?
System $F$ is quite expressive. As proved by Girard here , the functions of type $\mathbb{N}\rightarrow\mathbb{N}$ (where $\mathbb{N}$ is defined to be $\forall X.\ X\rightarrow (X\rightarrow X)\rightarrow X$) are exactly the definable functions ($\mathbb{N}\rightarrow\mathbb{N}$) in second order Heyting Arithmetic $\mathrm{HA}_2$. Note that this is the same as the functions definable in second order Peano Arithmetic. You'll probably want to check Proofs and Types as a more readable reference. Note that this means that a lot of programs can be written in system F, from the Ackermann function to interpreters for Gödel's system $T$. As for any total programming language (with some mild conditions), system $F$ cannot implement a self-interpreter, i.e. a function $\mathrm{eval}:\mathbb{N}\rightarrow\mathbb{N}$ which takes as input a code for a term $t$ of system $F$ and returns a (code for a) normal form for $t$. The proof involves a variant of the diagonalizing trick used for undecidability of the halting problem. Andrej explains it beautifully here . To answer your other questions:

- The $\lambda$-calculus underlying Hindley-Milner (HM) languages is also not Turing complete. In fact it is significantly weaker than system $F$, closer in expressiveness to the simply typed $\lambda$-calculus.
- Haskell is indeed Turing complete. The most distinctive feature enabling this (though there are others) is the presence of unrestricted recursion: the definition of any program (function) can refer to the program itself. This is similar to the addition of a $Y$ combinator, as is done in the definition of PCF, which is simply-typed but retains Turing-completeness with the $Y$ combinator. Note there are other features which make Haskell Turing complete, but they are not usually taken to be part of the core language, e.g. references to functions, unrestricted datatypes, etc.
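A hedged illustration of that recursion point in Haskell itself: the definitional self-reference below is exactly the Y-combinator-style feature that the underlying typed $\lambda$-calculi lack.

-- Unrestricted recursion: a definition may mention itself.
fix :: (a -> a) -> a
fix f = f (fix f)

-- General recursion with no termination guarantee (whether this is
-- total for all positive inputs is the open Collatz problem):
collatzSteps :: Integer -> Integer
collatzSteps = fix $ \rec n ->
  if n <= 1
    then 0
    else 1 + rec (if even n then n `div` 2 else 3 * n + 1)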
{ "source": [ "https://cstheory.stackexchange.com/questions/29519", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/20014/" ] }
30,596
Define $io$-$SUBEXP$ to be the class of languages $L$ such that there is a language $L' \in \cap_{\varepsilon > 0} TIME(2^{n^{\varepsilon}})$ and for infinitely many $n$, $L$ and $L'$ agree on all instances of length $n$. (That is, this is the class of languages which can be "solved infinitely often, in subexponential time".) Is there an oracle $A$ such that $NP^A \not\subset io$-$SUBEXP^A$? If we equip SAT with the oracle $A$ in the usual way, can we say that $SAT^A$ is not in this class? (I'm asking separate questions here, because we have to be careful with infinitely-often time classes: just because you have a reduction from problem $B$ to problem $C$ and $C$ is solvable infinitely often, you may not actually get that $B$ is solvable infinitely often without further assumptions on the reduction: what if your reduction from $B$ "misses" the input lengths that you can solve $C$ on?)
You can just take the oracle A s.t. NP$^A$=EXP$^A$ since EXP is not in i.o.-subexp. For SAT$^A$ it depends on the encoding, for example if the only valid SAT instances have even length then it is easy to solve SAT on odd-length strings. But if you use a language like $L=\{\phi 01^*\ |\ \phi\in SAT^A\}$ then you should be fine.
{ "source": [ "https://cstheory.stackexchange.com/questions/30596", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/225/" ] }
30,820
Counting triangles in general graphs can be done trivially in $O(n^3)$ time, and I think that doing it much faster is hard (references welcome). What about planar graphs? The following straightforward procedure shows that it can be done in $O(n\log{n})$ time. My question is two-fold: What is a reference for this procedure? Can the time be made linear? From the algorithmic proof of Lipton-Tarjan's planar separator theorem we can, in time linear in the size of the graph, find a partition of the vertices of the graph into three sets $A,B,S$ such that there are no edges with one endpoint in $A$ and the other in $B$, $S$ has size bounded by $O(\sqrt{n})$, and both $A,B$ have sizes upper bounded by $\frac{2}{3}$ of the number of vertices. Notice that any triangle in the graph either lies entirely inside $A$, or entirely inside $B$, or uses at least one vertex of $S$ with the other two vertices either both from $A \cup S$ or both from $B \cup S$. Thus it suffices to count the number of triangles in the graph on $S$ and the neighbours of $S$ in $A$ (and similarly for $B$). Notice that $S$ and its $A$-neighbours induce a $k$-outerplanar graph (since this graph is a subgraph of a planar graph of diameter $4$). Thus counting the number of triangles in such a graph can be done directly by dynamic programming or by an application of Courcelle's theorem (I know for sure that such a counting version exists in the Logspace world by Elberfeld et al and am guessing that it also exists in the linear time world), since forming an undirected triangle is an $\mathsf{MSO}_1$ property and since a bounded width tree decomposition is easy to obtain from an embedded $k$-outerplanar graph. Thus we have reduced the problem to a pair of problems which are each a constant fraction smaller, at the expense of a linear time procedure. Notice that the procedure can be extended to count the number of instances of any fixed connected graph inside an input graph in $O(n\log{n})$ time.
The number of occurrences of any fixed subgraph H in a planar graph G can be counted in O(n) time, even if H is disconnected. This, and several related results, are described in the 1999 paper Subgraph Isomorphism in Planar Graphs and Related Problems by David Eppstein; see Theorem 1. The paper indeed uses treewidth techniques.
{ "source": [ "https://cstheory.stackexchange.com/questions/30820", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/214/" ] }
31,054
In my computer science education, I increasingly notice that most discrete problems are NP-complete (at least), whereas optimizing continuous problems is almost always easily achievable, usually through gradient techniques. Are there exceptions to this?
An example that I love is the problem where, given distinct $a_1, a_2, \ldots, a_n \in \mathbb{N}$, we must decide if: $$\int_{-\pi}^{\pi} \cos(a_1 z) \cos(a_2 z) \ldots \cos(a_n z) \, dz \ne 0$$ At first this seems like the continuous problem of evaluating an integral; however, it is easy to show that this integral is nonzero iff there exists a balanced partition of the set $\{a_1, \ldots, a_n\}$, so this integral problem is actually NP-complete. Of course, I encourage playing around with some numerical tools to convince yourself that most (if not all) numerical tricks to evaluate this integral are doomed to failure once $n$ gets large enough.
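As a hedged numeric sanity check (my own, not from the answer): expanding the product of cosines shows the integral equals $(2\pi/2^n)$ times the number of sign vectors $\epsilon \in \{-1,1\}^n$ with $\sum_i \epsilon_i a_i = 0$, so it is nonzero exactly when a balanced partition exists. The Haskell below compares a crude quadrature against a brute-force partition check on two tiny instances:

import Data.List (subsequences)

integral :: [Double] -> Double
integral as = h * sum [f (-pi + h * (fromIntegral k + 0.5)) | k <- [0 .. m - 1]]
  where
    m = 100000 :: Int          -- midpoint rule; very accurate for this
    h = 2 * pi / fromIntegral m  -- periodic integrand
    f z = product [cos (a * z) | a <- as]

balanced :: [Integer] -> Bool
balanced as = any (\s -> 2 * sum s == sum as) (subsequences as)

main :: IO ()
main = do
  print (integral [1, 2, 3], balanced [1, 2, 3])  -- (~pi/2, True): 1+2=3
  print (integral [1, 2, 4], balanced [1, 2, 4])  -- (~0, False)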
{ "source": [ "https://cstheory.stackexchange.com/questions/31054", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/32670/" ] }
31,448
Of course, some complexity results may collapse for unary languages, but I wonder if there is a survey somewhere summarizing the known results in this case: a kind of complexity zoo for unary languages. Would you know of such a reference?
There is no Zoo-style reference yet, but a recent automata-theoretic survey of Giovanni Pighizzini has been useful to me, especially the slides from his talk. Giovanni Pighizzini, Investigations on Automata and Languages over a Unary Alphabet , CIAA 2014. doi: 10.1007/978-3-319-08846-4_3
{ "source": [ "https://cstheory.stackexchange.com/questions/31448", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/17203/" ] }
31,511
This question was previously posted to Computer Science Stack Exchange here . Imagine you're a very successful travelling salesman with clients all over the country. To speed up shipping, you've developed a fleet of disposable delivery drones, each with an effective range of 50 kilometers. With this innovation, instead of travelling to each city to deliver your goods, you only need to fly your helicopter within 50 km and let the drones finish the job. Problem: How should you fly your helicopter to minimize travel distance? More precisely, given a real number $R>0$ and $N$ distinct points $\{p_1, p_2, \ldots, p_N\}$ in the Euclidean plane, which path intersecting a closed disk of radius $R$ about each point minimizes total arc length? The path need not be closed and may intersect the disks in any order. Clearly this problem reduces to TSP as $R \to 0$, so I don't expect to find an efficient exact algorithm. I would be satisfied to know what this problem is called in the literature and whether efficient approximation algorithms are known.
This is a special case of the Travelling Salesman with Neighborhoods (TSPN) problem. In the general version, the neighborhoods need not all be the same. A paper by Dumitrescu and Mitchell, Approximation algorithms for TSP with neighborhoods in the plane , addresses your question. They give a constant factor approximation algorithm for a slightly more general problem (case 1), and a PTAS when the neighborhoods are disjoint balls of the same size (case 2). As a side comment, I think Mitchell has done a lot of work on geometric TSP variants, so you might want to look at his other papers.
{ "source": [ "https://cstheory.stackexchange.com/questions/31511", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/21580/" ] }
31,519
Given a boolean function $f:\{0,1\}^n\rightarrow\{0,1\}$, let $P_{i,\epsilon}$ be a minimum-degree multivariate polynomial such that $P_{i,\epsilon}=i\iff f=i$ and $P_{i,\epsilon}\in(i-\epsilon,i+\epsilon)\iff f=1-i$, where $\epsilon\in(0,1)$, for each $i\in\{0,1\}$. Is it true that, for at least one $i\in\{0,1\}$, the degrees of $P_{i,\epsilon}$ and $P_{i,\epsilon/2}$ are always polynomially related? They should be; however, I cannot find a reference. Straightforward interpolation arguments seem to fail as $\epsilon$ gets closer to 1 (but not exactly $1$).
{ "source": [ "https://cstheory.stackexchange.com/questions/31519", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1812/" ] }
32,403
I have two historical questions: Who first described nondeterministic computation? I know that Cook described NP-complete problems, and that Edmonds proposed that P algorithms are "efficient" or "good" algorithms. I searched this Wikipedia article and skimmed "On the Computational Complexity of Algorithms," but couldn't find any reference to when nondeterministic computation was first discussed. What was the first reference to the class NP? Was it Cook's 1971 paper?
I have always seen the notion of nondeterminism in computation attributed to Michael Rabin and Dana Scott. They defined nondeterministic finite automata in their famous paper Finite Automata and Their Decision Problems , 1959. Rabin's Turing Award citation also suggests that Rabin and Scott introduced nondeterministic machines.
{ "source": [ "https://cstheory.stackexchange.com/questions/32403", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/-1/" ] }
32,538
I know that it is undecidable to determine if a set of tiles can tile the plane, a result of Berger using Wang tiles . My question is whether it is also known to be undecidable to determine if a single given tile can tile the plane (a monohedral tiling). If this remains unsettled, I would be interested to know the minimum cardinality of a set of tiles for which there is an undecidability proof. (I have not yet accessed Berger's proof.)
According to the introduction of [1]:

- The complexity of determining if a single polyomino tiles the plane remains open [2,3], and
- There is an undecidability proof for sets of 5 polyominoes [4].

[1] Stefan Langerman, Andrew Winslow. A Quasilinear-Time Algorithm for Tiling the Plane Isohedrally with a Polyomino. ArXiv e-prints, 2015. arXiv:1507.02762 [cs.CG]

[2] C. Goodman-Strauss. Open questions in tiling. Online, published 2000.

[3] C. Goodman-Strauss. Can’t decide? undecide! Notices of the American Mathematical Society, 57(3):343–356, 2010.

[4] N. Ollinger. Tiling the plane with a fixed number of polyominoes. In A. H. Dediu, A. M. Ionescu, and C. Martín-Vide, editors, LATA 2009, volume 5457 of LNCS, pages 638–647. Springer, 2009.
{ "source": [ "https://cstheory.stackexchange.com/questions/32538", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/337/" ] }
33,005
The $k$-cycle problem is as follows:

Instance: An undirected graph $G$ with $n$ vertices and up to $n \choose 2$ edges.

Question: Does there exist a (proper) $k$-cycle in $G$?

Background: For any fixed $k$, we can solve $2k$-cycle in $O(n^2)$ time. Raphael Yuster, Uri Zwick: Finding Even Cycles Even Faster. SIAM J. Discrete Math. 10(2): 209-222 (1997) However, it is not known if we can solve 3-cycle (i.e. 3-clique) in less than matrix multiplication time.

My Question: Assuming that $G$ contains no 4-cycles, can we solve the 3-cycle problem in $O(n^2)$ time? David suggested an approach for solving this variant of the 3-cycle problem in $O(n^{2.111})$ time.
Yes, this is known. It appears in one of the must-cite references on triangle finding... Namely, Itai and Rodeh show in SICOMP 1978 how to find, in $O(n^2)$ time, a cycle in a graph that has at most one more edge than the minimum length cycle. (See the first three sentences of the abstract here: http://www.cs.technion.ac.il/~itai/publications/Algorithms/min-circuit.pdf ) It is a simple procedure based on properties of breadth-first search. So, if your graph is 4-cycle free and there is a triangle, their algorithm must output it, because it cannot output a 5-cycle or larger.
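For intuition, here is a hedged Haskell sketch of the BFS scan behind that procedure (my own rendering with an assumed adjacency-map representation, not the paper's pseudocode). From a start vertex, stop at the first non-tree edge; the two BFS tree paths plus that edge contain a cycle of length at most the reported value, and Itai and Rodeh show that minimizing over all start vertices gives a value at most girth + 1. In a 4-cycle-free graph, a report of at most 4 therefore certifies a triangle, since the contained cycle has length 3 or 4 and length 4 is excluded.

import qualified Data.Map.Strict as M
import qualified Data.Set as S

type Graph = M.Map Int [Int]

-- Length bound from the first edge closing back into the seen set.
firstClosure :: Graph -> Int -> Maybe Int
firstClosure g s = go (S.singleton s) (M.singleton s 0) [s]
  where
    go _ _ [] = Nothing
    go seen lvl (u : rest)
      | (w : _) <- closing = Just (du + lvl M.! w + 1)
      | otherwise =
          go (foldr S.insert seen fresh)
             (foldr (\v -> M.insert v (du + 1)) lvl fresh)
             (rest ++ fresh)
      where
        du      = lvl M.! u
        nbrs    = M.findWithDefault [] u g
        fresh   = [v | v <- nbrs, not (S.member v seen)]
        closing = [v | v <- nbrs, S.member v seen, lvl M.! v >= du]

-- Correct only under the C4-free assumption discussed above.
hasTriangleC4Free :: Graph -> Bool
hasTriangleC4Free g =
  any (maybe False (<= 4)) [firstClosure g v | v <- M.keys g]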
{ "source": [ "https://cstheory.stackexchange.com/questions/33005", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/14207/" ] }
34,398
Real computers have limited memory and only a finite number of states, so they are essentially finite automata. Why do theoretical computer scientists use Turing machines (and other equivalent models) for studying computers? What is the point of studying these much stronger models with respect to real computers? Why is the finite automata model not enough?
There are two approaches when considering this question: a historical one, which pertains to how the concepts were discovered, and a technical one, which explains why certain concepts were adopted and others abandoned or even forgotten. Historically, the Turing Machine is perhaps the most intuitive model of several developed trying to answer the Entscheidungsproblem. This is intimately related to the great effort in the first decades of the 20th century to completely axiomatize mathematics. The hope was that once you had proven a small set of axioms to be correct (which would require substantial effort), you could then use a systematic method to derive a proof for the logical statement you were interested in. Even if someone had considered finite automata in this context, they would have been quickly dismissed, since they fail to compute even simple functions. Technically, the statement that all computers are finite automata is false. A finite automaton has constant memory that cannot be altered depending on the size of the input. There is no limitation, either in mathematics or in reality, that prevents us from providing additional tape, hard disks, RAM or other forms of memory once the memory in the machine has been used up. I believe this was often employed in the early days of computing, when even simple calculations could fill the memory, whereas now, for most problems and with the modern infrastructure that allows for far more efficient memory management, this is most of the time not an issue. EDIT: I considered both points raised in the comments but elected not to include them, both for brevity and for the time I had available to write down the answer. This is my reasoning as to why I believe these points do not diminish the effectiveness of Turing machines in simulating modern computers, especially when compared to finite automata. Let me first address the physical issue of a limit on memory imposed by the universe. First of all, we don't really know if the universe is finite or not. Furthermore, the concept of the observable universe, which is by definition finite, is also by definition irrelevant to a user who can travel to any point of the observable universe to use memory. The reason is that the observable universe refers to what we can observe from a specific point, namely Earth, and it would be different if the observer could travel to a different location in the universe. Thus, any argumentation about the observable universe devolves into the question of the universe's finiteness. But let's suppose that through some breakthrough we acquire knowledge that the universe is indeed finite. Although this would have a great impact on scientific matters, I doubt it would have any impact on the use of computers. Simply put, it might be that in principle computers are indeed finite automata and not Turing machines. But for the sheer majority of computations, and in all likelihood every computation humans are interested in, Turing machines and the associated theory offer us a better understanding. As a crude example, although we know that Newtonian physics is essentially wrong, I doubt mechanical engineers primarily use quantum physics to design cars or factory machinery; the corner cases where this is needed can be dealt with at an individual level. Any technical restrictions such as buses and addressing are simply technical limitations of existing hardware and can be overcome physically.
The reason this is not true for current computers is that 64-bit addressing allowed us to move the upper bound on the address space to heights few if any applications can achieve. Furthermore, the implementation of an "extendable" addressing system would have an impact on the sheer majority of computations, which will not need it, and is thus inefficient to have. Nothing stops you from organizing a hierarchical addressing system, e.g. for two levels the first address could refer to any of $2^{64}$ memory banks, and then each bank has $2^{64}$ different addresses. Essentially, networking is a great way of doing this: every machine only cares about its local memory, but together they can compute.
{ "source": [ "https://cstheory.stackexchange.com/questions/34398", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/38584/" ] }
34,740
If one restricts Turing Machines to a finite tape (i.e., to use bounded space $S$), then the halting problem is decidable, essentially because after a number of steps (which can be calculated from the number of states $Q$, the space $S$, and the alphabet size), a configuration must be repeated. Are there other natural Turing Machine restrictions that render halting decidable? Certainly if the state-transition graph has no loops or cycles, halting is decidable. Any others?
A fairly natural and studied variation is the Tape-Reversal Bounded Turing machine (the number of tape-reversals is bounded); see for example: Juris Hartmanis: Tape-Reversal Bounded Turing Machine Computations. J. Comput. Syst. Sci. 2(2): 117-135 (1968)

Edit: [this variation is more artificial] the halting problem is decidable for a non-erasing Turing machine that has at most two left instructions on alphabet $\{0,1\}$; see Maurice Margenstern: Nonerasing Turing Machines: A Frontier Between a Decidable Halting Problem and Universality. Theor. Comput. Sci. 129(2): 419-424 (1994)
{ "source": [ "https://cstheory.stackexchange.com/questions/34740", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/337/" ] }
36,054
The CoC is said to be the culmination of all three dimensions of the Lambda Cube. This isn't apparent to me at all. I think I understand the individual dimensions, and the combination of any two seems to result in a relatively straightforward union (maybe I'm missing something?). But when I look at the CoC, instead of looking like a combination of all three, it looks like a completely different thing. Which dimension do Type, Prop, and small/large types come from? Where did dependent products disappear to? And why is there a focus on propositions and proofs instead of types and programs? Is there something equivalent that does focus on types and programs? Edit: In case it isn't clear, I'm asking for an explanation of how the CoC is equivalent to the straightforward union of the Lambda Cube dimensions. And is there an actual union of all three out there somewhere I can study (that is in terms of programs and types, not proofs and propositions)? This is in response to comments on the question, not to any current answers.
First, to reiterate one of cody's points, the Calculus of Inductive Constructions (which Coq's kernel is based on) is very different from the Calculus of Constructions. It is best thought of as starting at Martin-Löf type theory with universes, and then adding a sort Prop at the bottom of the type hierarchy. This is a very different beast than the original CoC, which is best thought of as a dependent version of F-omega. (For instance, CiC has set-theoretic models and the CoC doesn't.) That said, the lambda-cube (of which the CoC is a member) is typically presented as a pure type system for reasons of economy in the number of typing rules. By treating sorts, types, and terms as elements of the same syntactic category, you can write down many fewer rules and your proofs get quite a bit less redundant as well. However, for understanding, it can be helpful to separate out the different categories explicitly. We can introduce three syntactic categories: kinds (ranged over by the metavariable k), types (ranged over by the metavariable A), and terms (ranged over by the metavariable e). Then all eight systems can be understood as variations on what is permitted at each of the three levels.

λ→ (Simply-typed lambda calculus)

  k ::= ∗
  A ::= p | A → B
  e ::= x | λx:A.e | e e

This is the basic typed lambda calculus. There is a single kind ∗, which is the kind of types. The types themselves are atomic types p and function types A → B. Terms are variables, abstractions, or applications.

λω_ (STLC + higher-kinded type operators)

  k ::= ∗ | k → k
  A ::= a | p | A → B | λa:k.A | A B
  e ::= x | λx:A.e | e e

The STLC only permits abstraction at the level of terms. If we add it at the level of types, then we add a new kind k → k which is the type of type-level functions, and abstraction λa:k.A and application A B at the type level as well. So now we don't have polymorphism, but we do have type operators. If memory serves, this system does not have any more computational power than the STLC; it just gives you the ability to abbreviate types.

λ2 (System F)

  k ::= ∗
  A ::= a | p | A → B | ∀a:k. A
  e ::= x | λx:A.e | e e | Λa:k. e | e [A]

Instead of adding type operators, we could have added polymorphism. At the type level, we add ∀a:k. A which is a polymorphic type former, and at the term level, we add abstraction over types Λa:k. e and type application e [A]. This system is much more powerful than the STLC -- it is as strong as second-order arithmetic.

λω (System F-omega)

  k ::= ∗ | k → k
  A ::= a | p | A → B | ∀a:k. A | λa:k.A | A B
  e ::= x | λx:A.e | e e | Λa:k. e | e [A]

If we have both type operators and polymorphism, we get F-omega. This system is more or less the kernel type theory of most modern functional languages (like ML and Haskell). It is also vastly more powerful than System F -- it is equivalent in strength to higher-order arithmetic.

λP (LF)

  k ::= ∗ | Πx:A. k
  A ::= a | p | Πx:A. B | Λx:A.B | A [e]
  e ::= x | λx:A.e | e e

Instead of polymorphism, we could have gone in the direction of dependency from the simply-typed lambda calculus. If you permit the function type to let its argument be used in the return type (i.e., write Πx:A. B(x) instead of A → B), then you get λP. To make this really useful, we have to extend the set of kinds with a kind of type operators which take terms as arguments, Πx:A. k, and so we have to add a corresponding abstraction Λx:A.B and application A [e] at the type level as well.
This system is sometimes called LF, or the Edinburgh Logical Framework. It has the same computational strength as the simply-typed lambda calculus. Ξ»P2 (no special name) k ::= βˆ— | Ξ x:A. k A ::= a | p | Ξ x:A. B | βˆ€a:k.A | Ξ›x:A.B | A [e] e ::= x | Ξ»x:A.e | e e | Ξ›a:k. e | e [A] We can also add polymorphism to Ξ»P, to get Ξ»P2. This system is not often used, so it doesn't have a particular name. (The one paper I've read which used it is Herman Geuvers' Induction is Not Derivable in Second Order Dependent Type Theory .) This system has the same strength as System F. Ξ»PΟ‰_ (no special name) k ::= βˆ— | Ξ x:A. k | Ξ a:k. k' A ::= a | p | Ξ x:A. B | Ξ›x:A.B | A [e] | Ξ»a:k.A | A B e ::= x | Ξ»x:A.e | e e We could also add type operators to Ξ»P, to get Ξ»PΟ‰_. This involves adding a kind Ξ a:k. k' for type operators, and corresponding type-level abstraction Ξ›x:A.B and application A [e] . Since there's again no jump in computational strength over the STLC, this system should also make a fine basis for a logical framework, but no one has done it. Ξ»PΟ‰ (the Calculus of Constructions) k ::= βˆ— | Ξ x:A. k | Ξ a:k. k' A ::= a | p | Ξ x:A. B | βˆ€a:k.A | Ξ›x:A.B | A [e] | Ξ»a:k.A | A B e ::= x | Ξ»x:A.e | e e | Ξ›a:k. e | e [A] Finally, we get to Ξ»PΟ‰, the Calculus of Constructions, by taking Ξ»PΟ‰_ and adding a polymorphic type former βˆ€a:k.A and term-level abstraction Ξ›a:k. e and application e [A] for it. The types of this system are much more expressive than in F-omega, but it has the same computational strength.
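For readers who want to see the three axes of the cube side by side in a concrete language, here is a minimal sketch in Lean 4 (my own addition, not part of the answer; Lean implements a CIC-style theory richer than any single corner of the cube, so each definition below just isolates one axis):

    -- Polymorphism (the λ2 axis): a term abstracted over a type.
    def polyId : (α : Type) → α → α := fun _ x => x

    -- Type operators (the λω axis): a function from types to types.
    def Pair (α : Type) : Type := α × α

    -- Dependency (the λP axis): a type computed from a term.
    def Tuple (α : Type) : Nat → Type
      | 0     => Unit
      | n + 1 => α × Tuple α n

The Calculus of Constructions is exactly the system in which all three kinds of definition are allowed at once.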
{ "source": [ "https://cstheory.stackexchange.com/questions/36054", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/40594/" ] }
36,428
Classical algorithms can solve 3-SAT in $1.3071^n$ time (randomized) or $1.3303^n$ time (deterministic). (Reference: Best Upper Bounds on SAT ) For comparison, using Grover's algorithm on a quantum computer would look for and provide a solution in $1.414^n$, randomized. (This may still require some knowledge of how many solutions there may or may not be, I'm not sure how necessary those bounds still are.) This is clearly significantly worse. Are there are any quantum algorithms that do better than the best classical algorithms (or at least -- almost as good?) Of course the classical algorithms could be used on a quantum computer assuming sufficient working space; I'm wondering about inherently quantum algorithms.
I think one can obtain a non-trivial upper bound from quantum computing by speeding up the randomized algorithm of Schöning for 3-SAT. Schöning's algorithm runs in time $(4/3)^n$, and using standard amplitude amplification techniques one can obtain a quantum algorithm that runs in time $(2/\sqrt{3})^n \approx 1.1547^n$, which is significantly faster than the classical algorithm.
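To make the classical half concrete, here is a minimal sketch of Schöning's random-walk algorithm in Python (my own illustration, not from the answer; the clause representation and parameter choices are assumptions):

    import random

    def schoening_3sat(clauses, n, tries):
        # `clauses` is a list of 3-tuples of nonzero ints; literal v means
        # "variable |v| is True" when v > 0 and "variable |v| is False" when v < 0.
        for _ in range(tries):  # classically, about (4/3)^n restarts suffice
            assign = [random.random() < 0.5 for _ in range(n + 1)]  # index 1..n
            for _ in range(3 * n):  # local random walk of length 3n
                falsified = [c for c in clauses
                             if not any((v > 0) == assign[abs(v)] for v in c)]
                if not falsified:
                    return assign[1:]
                v = random.choice(random.choice(falsified))
                assign[abs(v)] = not assign[abs(v)]  # flip a variable of a bad clause
        return None

The quantum speedup does not change the inner walk; amplitude amplification replaces the outer loop of independent restarts, reducing the number of iterations from roughly $(4/3)^n$ to roughly $(4/3)^{n/2} = (2/\sqrt{3})^n$.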
{ "source": [ "https://cstheory.stackexchange.com/questions/36428", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/12211/" ] }
36,436
I am a graduate student in math, and theoretical computer science is a domain which I never understood what it is about because I couldn't find a good read about the topic. I want to know what this domain is actually about, what kind of topics it is concerned with, what prerequisites are needed to embark into it, etc. For now, I just want to know: What is a good introductory book to theoretical computer science? Given that there is such a thing. If not, where should a mathematician who has basic knowledge about computer science (i.e. they know the basics of one or two programming languages) start if they want to understand what theoretical computer science is about? What do you recommend? thanks!
First, "theoretical computer science" means different things to different people. I think for most users on this site, a historical caricature (which reflects some modern sociological tendencies) is that there is "Theory A" and "Theory B" (with no implied order relation between them): Theory A consists of the theory of algorithms, complexity theory, cryptography, and similar. Theory B consists of things like the theory of programming languages, theory of automata, etc. Depending on your tastes in mathematics, you may prefer one over the other (or like both equally). I am more familiar with "Theory A," so let me give some references there: Start with Sipser's book. This will give you a good introduction to automata, Turing machines, computability, Kolmogorov complexity, P vs NP, and a few other complexity classes. It is very well-written (in my opinion, it is one of the best-written technical books ever ) For algorithms, I have a slighty preference for Kleinberg-Tardos, but there are many good introductory books out there. You might be especially interested in computational geometry, which has its own set of great books. Given that you are a mathematics graduate student, a major branch of TCS that is missing from these books is algebraic complexity theory, which often is closely related to algebra (both commutative and non-commutative), representation theory, group theory, and algebraic geometry. There is a canonical text here, which is Burgisser-Clausen-Shokrollahi. It is somewhat encyclopedic, so may not be the best introduction, but I'm not sure there is a really introductory book in this area. You might also check out the surveys by Chen-Kayal-Wigderson and Shiplka-Yehudayoff. After that, I'd suggest browsing through more advanced books on particular topics, depending on your mathematical taste: Arora-Barak is more modern complexity theory (continues on where Sipser's book ends, so to speak), giving you a flavor of the techniques involved (mix of combinatorics and algebra, mostly) Jukna's book on Boolean function complexity does similar, but more in-depth for Boolean circuit complexity in particular (very combinatorial in flavor) Geometric complexity theory. See here or Landsberg's introduction for geometers . O'Donnell's book Analysis of Boolean Functions has a more Fourier-analytic bent. Cryptography. The more advanced mathematical aspects here are typically number theory and algebraic geometry. While these pure mathematical aspects represent only a small portion of cryptography, they are an important one that you might find interesting. Not being my area, I'm not sure of what a good starting book is here. Coding theory. Here, the mathematical theory ranges from sphere-packing (see the book by Conway and Sloane) to algebraic geometry (e.g., the book by Stichtenoth). Again, not my area, so I'm not sure if these are the best starting points, but flipping through them you will quickly get the flavor and decide if you want to delve deeper. And then there are many other mathematical topics that only appear in the research literature, like connections with foams, graph theory, C*-algebras (let me just point you to the Kadison-Singer conjecture ), invariant theory, representation theory, quadratures, and on and on. See also these related questions Resources for mathematicians hoping to learn more computer science Are there any topics in theoretical CS that are more about pure math? Applications of TCS to classical mathematics? Solid applications of category theory in TCS?
{ "source": [ "https://cstheory.stackexchange.com/questions/36436", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/-1/" ] }
37,035
Is there a (preferably natural) NP-complete language $L\subseteq \{0,1\}^*$, such that for every $n\geq 1$ $$|L\cap \{0,1\}^n|=2^{n-1}$$ holds? In other words, $L$ contains precisely half of all $n$-bit instances.
I asked this question a few years ago and Boaz Barak positively answered it. The statement is equivalent to the existence of an NP-complete language $L$ where $|L_n|$ is polynomial-time computable. Consider Boolean formulas and SAT. Using padding and slightly modifying the encoding of formulas, we can make sure that $\varphi$ and $\lnot \varphi$ have the same length. Let $\langle\ \rangle$ be an encoding such that: for all formulas $\varphi$ and all truth assignments $\tau \in \{0,1\}^{|\varphi|}$, we have $|\langle\varphi\rangle| = |\langle\varphi, \tau\rangle|$; the map $|\langle\varphi\rangle| \mapsto |\varphi|$ is polynomial-time computable; and the number of formulas with encoded length $n$ is polynomial-time computable. Consider $$L := \{\langle\varphi\rangle \mid \varphi \in \mathsf{SAT} \} \cup \{\langle \varphi, \tau \rangle \mid \tau \vDash \varphi \text{ and } \exists \sigma<\tau\ \sigma\vDash\varphi \}$$ It is easy to see that $L$ is NP-complete. If $\varphi \in \mathsf{SAT}$, the number of truth assignments satisfying $$\tau \vDash \varphi \text{ and } \exists \sigma<\tau\ \sigma\vDash\varphi$$ is equal to the number of satisfying truth assignments minus $1$. Adding $\varphi$ itself, this adds up to the number of satisfying truth assignments for $\varphi$. There are $2^{|\varphi|}$ truth assignments, and each $\tau$ either satisfies $\varphi$ or $\lnot \varphi$ (and not both). For every formula $\varphi$, consider the $2(2^{|\varphi|}+1)$ strings $\langle\varphi\rangle$, $\langle\lnot \varphi\rangle$, $\langle\varphi, \tau\rangle$, and $\langle \lnot\varphi, \tau\rangle$ for $\tau \in \{0,1\}^{|\varphi|}$. Exactly $2^{|\varphi|}$ of these $2^{|\varphi|+1}+2$ strings are in $L$. This means that the number of strings of length $n$ in $L$ is the number of formulas $\varphi$ of encoded length $n$ multiplied by $2^{|\varphi|}$, which is polynomial-time computable.
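A brute-force sanity check of the per-formula counting step, for small formulas (my own sketch; a Python predicate stands in for the encoded formula, so this checks only the arithmetic, not the encoding):

    from itertools import product

    def count_in_L(phi, n):
        # Among the 2^(n+1) + 2 strings <phi>, <not phi>, <phi,tau>,
        # <not phi,tau>, count how many belong to L.
        def members(f):
            sats = [t for t in product((False, True), repeat=n) if f(t)]
            # <f> is in L iff f is satisfiable (contributes 1 if sats else 0);
            # <f,tau> is in L iff tau satisfies f and a lexicographically
            # smaller satisfying sigma exists (contributes len(sats) - 1 if
            # sats else 0). Together that is just len(sats).
            return len(sats)
        return members(phi) + members(lambda t: not phi(t))

    n = 4
    phi = lambda t: t[0] and (t[1] or not t[2])
    assert count_in_L(phi, n) == 2 ** n  # exactly 2^n of the 2^(n+1) + 2 strings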
{ "source": [ "https://cstheory.stackexchange.com/questions/37035", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/12710/" ] }
37,382
I found this paper to be very interesting. To summarize: it discusses why in practice you rarely find a worst-case instance of a NP-complete problem. The idea in the article is that instances usually are either very under- or very overconstrained, both of which are relatively easy to solve. It then proposes for a few problems a measure of 'constrainedness'. Those problems appear to have a 'phase transition' from 0 likelihood of a solution to 100% likelihood. It then hypothesizes: That all NP-complete (or even all NP-problems) problems have a measure of 'constrainedness'. That for each NP-complete problem, you can create a graph of the probability of a solution existing as a function of the 'constrainedness'. Moreover, that graph will contain a phase-transition where that probability quickly and dramatically increases. The worst case examples of the NP-complete problems lie in that phase-transition. The fact whether a problem lies on that phase-transition remains invariant under transformation of one NP-complete problem to another. This paper was published in 1991. My question is was there any follow-up research on these ideas the last 25 years? And if so, what is the current mainstream thinking on them? Were they found correct, incorrect, irrelevant?
Here is a rough summary of the status, based on a presentation given by Vardi at a Workshop on Finite and Algorithmic Model Theory (2012):

- It was observed that hard instances lie at the phase transition from the under- to the over-constrained region. The fundamental conjecture is that there is a strong connection between phase transitions and the computational complexity of NP problems.
- Achlioptas and Coja-Oghlan found that there is a density in the satisfiable region where the solution space shatters into exponentially many small clusters.
- Vinay Deolalikar based his famous attempt to prove $P \ne NP$ on the assumption that shattering implies computational hardness.
- Deolalikar's proof was refuted by the fact that XOR-SAT is in $P$ and it shatters. Therefore, shattering cannot be used to prove computational hardness.

The current mainstream thinking seems to be (as stated by Vardi) that phase transitions are not intrinsically connected to computational complexity. Finally, here is an article published in Nature which investigates the connection between phase transitions and the computational hardness of K-SAT.
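The phase-transition picture is easy to reproduce empirically. Below is a small brute-force experiment in Python (my own sketch; the parameters are arbitrary, and at such small $n$ the drop around clause density $\approx 4.27$ is smeared but still visible):

    import random
    from itertools import product

    def random_3sat(n, m):
        # m random 3-clauses over variables 1..n; negative int = negated literal
        clauses = []
        for _ in range(m):
            vs = random.sample(range(1, n + 1), 3)
            clauses.append(tuple(v if random.random() < 0.5 else -v for v in vs))
        return clauses

    def satisfiable(clauses, n):  # brute force, only sensible for small n
        return any(all(any((v > 0) == a[abs(v) - 1] for v in c) for c in clauses)
                   for a in product((False, True), repeat=n))

    n, trials = 10, 40
    for alpha in (2.0, 3.0, 4.0, 4.27, 4.5, 5.0, 6.0):
        sat = sum(satisfiable(random_3sat(n, round(alpha * n)), n)
                  for _ in range(trials))
        print(f"clause density {alpha}: {sat}/{trials} satisfiable")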
{ "source": [ "https://cstheory.stackexchange.com/questions/37382", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/44057/" ] }
37,586
Or with other words, do we have that for every language $A$ and $B$, $A \leq_p B$ or $B \leq_p A$?
Far from it. Indeed, any countable distributive lattice embeds as a sub-partial-order of $\leq_p$, even if we only consider those degrees in between two given fixed languages ( K. Ambos-Spies, Sublattices of the polynomial time degrees , Inform. & Control 65(1):63-84, 1985).
{ "source": [ "https://cstheory.stackexchange.com/questions/37586", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/43469/" ] }
37,588
Integer programming is NP-hard. What is the status of integer programming problem that decides between existence of $\leq1$ solution and $>1$ solutions (note $0$ solutions falls in $\leq1$ category)? Integer programming in fixed parameters is P. What is the status of integer programming problem in fixed parameters that decides between existence of $\leq1$ solution and $>1$ solutions (note $0$ solutions falls in $\leq1$ category)?
{ "source": [ "https://cstheory.stackexchange.com/questions/37588", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1812/" ] }
38,538
In Mike and Ike's "Quantum Computation and Quantum Information", Grover's algorithm is explained in great detail. However, in the book, and in all explanations I have found online for Grover's algorithm, there seems to be no mention of how Grover's Oracle is constructed, unless we already know which state it is that we are searching for, defeating the purpose of the algorithm. Specifically, my question is this: given some f(x) such that for some x value, f(x)=1, but for all others, f(x)=0, how does one construct an oracle that will get us from our initial, arbitrary state |x>|y> to |x>|y+f(x)>? As much explicit detail as possible (perhaps an example?) would be greatly appreciated. If such a construction for any arbitrary function is possible with Hadamard, Pauli, or other standard quantum gates, a method for construction with these would be appreciated.
The oracle is basically just an implementation of the predicate you want to search for a satisfying solution to. For example, suppose you have a 3-sat problem: (¬x1 ∨ ¬x3 ∨ ¬x4) ∧ (x2 ∨ x3 ∨ ¬x4) ∧ (x1 ∨ ¬x2 ∨ x4) ∧ (x1 ∨ x3 ∨ x4) ∧ (¬x1 ∨ x2 ∨ ¬x3)

Or, in table form, with each row being a 3-clause, x meaning "this variable false", o meaning "this variable true", and space meaning "not in clause":

    1 2 3 4
    -------
    x   x x
      o o x
    o x   o
    o   o o
    x o x

Now make a circuit that computes whether the input is a solution (circuit diagram not reproduced here). Then, to turn your circuit into an oracle, hit the output bit with a Z gate and uncompute any garbage you made, i.e. run the compute circuit in reverse order (circuit diagram not reproduced here). That's all there is to it. Compute the predicate, hit the result with a Z, uncompute the predicate. That's an oracle. Iterate diffusion steps with oracle steps, and you've got yourself a Grover search (diagram not reproduced here)... although you should probably pick an example with fewer solutions, so the progress is gradual (instead of rotating along the start-state-solution-state plane by more than 90 degrees per step, as my example does).
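Here is a minimal statevector simulation of the whole recipe in Python/NumPy (my own sketch, not from the answer). It does not build the gate-level circuit; it applies the oracle's net effect on the input register, which, after the ancilla is uncomputed, is exactly the diagonal phase flip $(-1)^{f(x)}$. Following the closing caveat, it uses a toy 7-clause formula with a unique satisfying assignment so the amplification is visible:

    import numpy as np
    from itertools import product

    # Toy 3-SAT instance with unique satisfying assignment (x1,x2,x3) = (1,0,1).
    clauses = [(1, 2, 3), (1, 2, -3), (1, -2, 3), (1, -2, -3),
               (-1, 2, 3), (-1, -2, 3), (-1, -2, -3)]
    n = 3
    N = 2 ** n
    inputs = list(product((0, 1), repeat=n))

    def f(bits):  # the predicate the oracle implements
        return all(any((v > 0) == (bits[abs(v) - 1] == 1) for v in c)
                   for c in clauses)

    # Net effect on the input register of compute-f / Z-on-result / uncompute-f:
    oracle = np.diag([(-1.0) ** f(x) for x in inputs])

    # Diffusion step: inversion about the uniform superposition |s>.
    s = np.full((N, 1), 1.0 / np.sqrt(N))
    diffusion = 2.0 * (s @ s.T) - np.eye(N)

    state = s.copy()
    for _ in range(2):  # ~ floor((pi/4) * sqrt(N/k)) iterations for k solutions
        state = diffusion @ (oracle @ state)

    for x, p in zip(inputs, (state ** 2).ravel()):
        print(x, round(p, 3), "<- the solution" if f(x) else "")

After two iterations the unique solution carries probability about 0.94, versus 1/8 initially.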
{ "source": [ "https://cstheory.stackexchange.com/questions/38538", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/45964/" ] }
38,560
I'm wondering if there are some known sources of open TCS problems? I'm a junior studying math/CS and would like to know of some accessible problems that I could start thinking about! Thanks so much!
Here's a partial list of collections of open problems in TCS, broadly construed. Note that a collection of "major open problems" exists already on this site: http://cstheory.stackexchange.com/questions/174/major-unsolved-problems-in-theoretical-computer-science/251#251

- In Computer Science (Wikipedia): https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_computer_science
- Sublinear time algorithms: http://sublinear.info/index.php?title=Main_Page
- Analysis of Boolean Functions: http://lanl.arxiv.org/abs/1204.6447
- Computational geometry: http://cs.smith.edu/~orourke/TOPP/
- Exact algorithms: http://faculty.cs.tamu.edu/chen/courses/cpsc669/2011/notes/ww1.pdf
- Formal languages, etc.: https://www.student.cs.uwaterloo.ca/~cs462/openproblems.html
- Parameterized complexity: http://fpt.wikidot.com/open-problems
- Topological graph theory: http://www.cems.uvm.edu/~darchdea/problems/problems.html
- Embeddings of finite metric spaces: http://kam.mff.cuni.cz/~matousek/metrop.ps
- Lambda calculus, proof theory, semantics, and programming languages: http://tlca.di.unito.it/opltlca/
- Perfect graphs: http://www.aimath.org/WWN/perfectgraph/
- Real analysis in computer science: https://simons.berkeley.edu/sites/default/files/openprobsmerged.pdf
- Fine-grained complexity: http://duch.mimuw.edu.pl/~malcin/opl.pdf
- Communication complexity: https://sublinear.info/index.php?title=Workshops:Banff_2017
- Erdős problems: http://www.math.ucsd.edu/~erdosproblems/All.html
{ "source": [ "https://cstheory.stackexchange.com/questions/38560", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/44912/" ] }
38,594
Let $C$ be an arithmetic circuit that represents a polynomial $f\in\mathbb K[x_1,\dotsc,x_n]$, with the promise that $f$ has at most $k$ nonzero terms. What is (known about) the complexity of computing $f$ in its sparse representation, given $C$? I am interested in deterministic and randomized complexity, and in the link with PIT . In particular, does the promise that $f$ is sparse imply good algorithms? A priori , I am more interested in the case of $\mathbb K$ being some finite field, though results over other fields may be relevant.
{ "source": [ "https://cstheory.stackexchange.com/questions/38594", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/976/" ] }
38,803
Norbert Blum recently posted a 38-page proof that $P \ne NP$. Is it correct? Also on topic: where else (on the internet) is its correctness being discussed? Note: the focus of this question text has changed over time. See question comments for details.
As noted here before, Tardos' example clearly refutes the proof; it gives a monotone function which agrees with CLIQUE on T0 and T1, but which lies in P. This would not be possible if the proof were correct, since the proof applies to this case too. However, can we pinpoint the mistake? Here is, from a post on Lipton's blog, what seems to be the place where the proof fails: The single error is one subtle point in the proof of Theorem 6, namely in Step 1, on page 31 (and also 33, where the dual case is discussed) - a seemingly obvious claim that $C'_g$ contains all the corresponding clauses contained in $CNF'(g)$ etc., seems wrong.

To explain this in more detail, we need to go into the proof and approximation method of Berg and Ulfberg, which restates Razborov's original proof of the exponential monotone complexity of CLIQUE in terms of DNF/CNF switches. This is how I see it: To every node/gate $g$ of a logic circuit $\beta$ (containing binary OR/AND gates only), a conjunctive normal form $CNF(g)$, a disjunctive normal form $DNF(g)$, and approximators $C^k_g$ and $D^r_g$ are attached. $CNF$ and $DNF$ are simply the corresponding disjunctive and conjunctive normal forms of the gate output. $D^r_g$ and $C^k_g$ are also disjunctive and conjunctive forms, but of some other functions, "approximating" the gate output. They are however required to have a bounded number of variables in each monomial for $D^r_g$ (less than a constant r) and in each clause for $C^k_g$ (less than a constant k).

There is a notion of "error" introduced with this approximation. How is this error computed? We are only interested in some set T0 of inputs on which our total function takes value 0, and T1 of inputs on which our total function takes value 1 (a "promise"). Now at each gate, we look only at those inputs from T0 and T1 which are correctly computed (by both $DNF(g)$ and $CNF(g)$, which represent the same function - the output of gate $g$ in $\beta$) at the gate output, and look at how many mistakes/errors there are for $C^k_g$ and $D^r_g$, compared to that. If the gate is a conjunction, then the gate output might compute more inputs from T0 correctly (but the correctly computed inputs from T1 are possibly decreased). For $C^k_g$, which is defined as a simple conjunction, there are however no new errors on all of these inputs. Now, $D^r_g$ is defined as a CNF/DNF switch of $C^k_g$, so there might be a number of new errors on T0, coming from this switch. On T1 also, there are no new errors on $C^k_g$ - each error has to be present on one of the gate inputs - and similarly on $D^r_g$: the switch does not introduce new errors on T1. The analysis for an OR gate is dual.

So the number of errors for the final approximators is bounded by the number of gates in $\beta$, times the maximal possible number of errors introduced by a CNF/DNF switch (for T0), or by a DNF/CNF switch (for T1). But the total number of errors has to be "large" in at least one case (T0 or T1), since this is a property of positive conjunctive normal forms with clauses bounded by $k$, which was the key insight of Razborov's original proof (Lemma 5 in Blum's paper).

So what did Blum do in order to deal with negations (which are pushed to the level of inputs, so the circuit $\beta$ still contains only binary OR/AND gates)? His idea is to perform CNF/DNF and DNF/CNF switches restrictively, only when all variables are positive. Then the switches would work EXACTLY like in the case of Berg and Ulfberg, introducing the same number of errors.
It turns out this is the only case which needs to be considered. So, he follows along the lines of Berg and Ulfberg, with a few distinctions. Instead of attaching $CNF(g)$, $DNF(g)$, $C^k_g$ and $D^r_g$ to each gate $g$ of circuit $\beta$, he attaches his modifications, $CNF'(g)$, $DNF'(g)$, ${C'}^k_g$ and ${D'}^r_g$, i.e. the "reduced" disjunctive and conjunctive normal forms, which he defined to differ from $CNF(g)$ and $DNF(g)$ by an "absorption rule", removing negated variables from all mixed monomials/clauses (he also uses for this purpose an operation denoted by R, removing some monomials/clauses entirely; as we discussed before, his somewhat informal definition of R is not really the problem - R can be made precise so it is applied at each gate, but what is removed depends not only on the previous two inputs but on the whole of the circuit leading up to that gate), and their approximators ${C'}^r_g$ and ${D'}^r_g$, which he also introduced.

He concludes, in Theorem 5, that for a monotone function, the reduced $CNF'$ and $DNF'$ will really compute 1 and 0 on the sets T1 and T0, at the root node $g_0$ (whose output is the output of the whole function in $\beta$). This theorem is, I believe, correct.

Now comes the counting of errors. I believe the errors at each node are meant to be computed by comparing the reduced $CNF'(g)$ and $DNF'(g)$ (which are now possibly two different functions) to ${C'}^r_g$ and ${D'}^k_g$ as he defined them. The definitions of the approximators parrot the definitions of $CNF'$ and $DNF'$ (Step 1) when mixing variables with negated ones, but when he deals with positive variables, he uses the switch like in the case of Berg and Ulfberg (Step 2). And indeed, in Step 2 he will introduce the same number of possible errors as before (it is the same switch, and all the involved variables are positive). But the proof is wrong in Step 1. I think Blum is confusing $\gamma_1$, $\gamma_2$, which really come, as he defined them, from previous approximators (for gates $h_1$, $h_2$), with the positive parts of $CNF'_\beta(h_1)$ and $CNF'_\beta(h_2)$. There is a difference, and hence the statement "$C_g'$ contains still all clauses contained in $CNF'_\beta(g)$ before the approximation of the gate g which use a clause in $\gamma_1'$ or $\gamma_2'$" seems to be wrong in general.
{ "source": [ "https://cstheory.stackexchange.com/questions/38803", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/46348/" ] }
39,709
I find some books about computers, but all of them are about technology. I want something more linked to theory.
Try the 50+ page essay "Why Philosophers Should Care About Computational Complexity" https://arxiv.org/abs/1108.1791
{ "source": [ "https://cstheory.stackexchange.com/questions/39709", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/47816/" ] }
39,844
I have a (hopefully simple, maybe dumb) question on Babai's landmark paper showing that $\mathsf{GI}$ is quasipolynomial. Babai showed how to produce a certificate that two graphs $G_i=(V_i,E_i)$ for $i\in\{1,2\}$ are isomorphic, in time quasipolynomial in $v=|V_i|$. Did Babai actually show how to find an element $\pi\in S_v$ that permutes the vertices of $G_1$ to $G_2$, or is the certificate merely an existence-statement? If an oracle tells me that $G_1$ and $G_2$ are isomorphic, do I still need to look through all $v!$ permutations of the vertices? I ask because I also think about knot equivalence. As far as I know, it's not known to be, but say detecting the unknot were in $\mathsf{P}$. Actually finding a sequence of Reidemeister moves that untie the knot might still take exponential time...
These problems are polynomially equivalent. Indeed, suppose that you have an algorithm that can decide whether two graphs are isomorphic or not, and it claims that they are. Attach a clique of size $n+1$ to an arbitrary vertex of each graph. Test whether the resulting graphs are isomorphic or not. If they are, then we can conclude that there's an isomorphism that maps the respective vertices to each other, thus we can delete them. By repeating this test $n$ times, we can find (a possible) image for any vertex. After this, we attach another clique, this time of size $n+2$ to a (different) arbitrary vertex of each original graph, and proceed as before, etc. Eventually, we'll end up with two graphs that are isomorphic, with cliques of size $n+1,\ldots n+n$ hanging from their vertices, which makes the isomorphism unique.
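A sketch of this search-to-decision reduction in Python (my own illustration; networkx's built-in is_isomorphic test stands in for the hypothetical decision oracle, and the argument that gadget vertices cannot be confused with original ones is glossed, as in the answer):

    import networkx as nx

    def attach_clique(G, v, k):
        # Return a copy of G with a fresh k-clique whose members are all
        # adjacent to v (so v plus the gadget forms a (k+1)-clique).
        H = G.copy()
        fresh = [("pin", v, k, j) for j in range(k)]
        H.add_edges_from((a, v) for a in fresh)
        H.add_edges_from((fresh[i], fresh[j])
                         for i in range(k) for j in range(i + 1, k))
        return H

    def find_isomorphism(G1, G2, iso_oracle=nx.is_isomorphic):
        # Recover an explicit isomorphism using only yes/no oracle calls.
        # Pinned pairs keep their cliques; the distinct, ever-larger gadget
        # sizes force later isomorphisms to respect earlier choices.
        if not iso_oracle(G1, G2):
            return None
        n = G1.number_of_nodes()
        orig1, orig2 = list(G1.nodes()), list(G2.nodes())
        mapping = {}
        for i, u in enumerate(orig1):
            for v in orig2:
                if v in mapping.values():
                    continue
                H1 = attach_clique(G1, u, n + 1 + i)
                H2 = attach_clique(G2, v, n + 1 + i)
                if iso_oracle(H1, H2):
                    mapping[u] = v
                    G1, G2 = H1, H2
                    break
        return mapping

    G = nx.cycle_graph(6)
    H = nx.relabel_nodes(G, {i: chr(ord("a") + i) for i in range(6)})
    print(find_isomorphism(G, H))

The total cost is polynomially many oracle calls on graphs only polynomially larger than the originals, which is why the decision and search versions are polynomially equivalent.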
{ "source": [ "https://cstheory.stackexchange.com/questions/39844", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/46687/" ] }
39,998
Let's say we wanted a typeful, pure functional programming language, like Haskell or Idris, that is aimed at systems programming without garbage collection and has no runtime (or at least not more than the C and Rust "runtimes"). Something that can run, more or less, on bare metal. What are some of the options for static memory safety that don't require manual memory management or runtime garbage collection, and how might the problem be solved using the type system of a pure functional similar to Haskell or Idris?
Roughly speaking, there are two main strategies for safe manual memory management. The first approach is to use some substructural logic like linear logic to control resource usage. This idea has floated around basically since linear logic's inception, and works on the observation that by banning the structural rule of contraction, every variable is used at most once, and so there is no aliasing. As a result, the difference between in-place update and re-allocation is invisible to the program, and so you can implement your language with manual memory management. This is what Rust does (it uses an affine type system). If you are interested in the theory of Rust-style languages, one of the best papers to read is Ahmed et al's L3: A Linear Language with Locations. As an aside, the LFPL calculus Damiano Mazza mentioned is also linear, and has a full language derived from it in the RAML language. If you are interested in Idris-style verification, you should look at Xi et al's ATS language, which is a Rust/L3 style language with support for verification based on Haskell-style indexed types, only made proof-irrelevant and linear to give more control over performance. An even more aggressively dependent approach is the F-star language developed at Microsoft Research, which is a full dependent type theory. This language has a monadic interface with pre- and post-conditions in the spirit of Nanevski et al's Hoare Type Theory (or even my own Integrating Linear and Dependent Types), and has a defined subset which can be compiled to low-level C code -- in fact, they are shipping verified crypto code as part of Firefox already! To be clear, neither F-star nor HTT are linearly-typed languages, but the index language for their monads is usually based on Reynolds and O'Hearn's separation logic, which is a substructural logic related to linear logic that has seen great success as the assertion language for Hoare logics for pointer programs. The second approach is to simply specify what assembly (or whatever low level IR you want) does, and then use some form of linear or separation logic to reason about its behaviour in a proof assistant directly. Essentially, you can use the proof assistant or dependently-typed language as a very fancy macro assembler that only generates correct programs. Jensen et al's High-level separation logic for low-level code is a particularly pure example of this -- it builds separation logic for x86 assembly! However, there are many projects in this vein, like the Verified Software Toolchain at Princeton and the CertiKOS project at Yale. All of these approaches will "feel" a bit like Rust, since tracking ownership by restricting the usage of variables is key to them all.
{ "source": [ "https://cstheory.stackexchange.com/questions/39998", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/48295/" ] }
42,704
I am a third year PhD student in an area of theoretical CS that would like advice for a difficult situation with my advisor. My advisor is not involved in my research projects at all. In particular, I have come up with all of my paper ideas, and have executed the papers alone. However, she always insists on adding her name as a co-author. This has started to increasingly bother me, as I work very hard (alone) on my research and believe I should get credit for that. In addition, she is a bully and treats me quite badly, so it makes it even harder for me to benefit her in this way. For my most recent paper, I brought up how I didn't believe she was meeting the IEEE 1 or ACM 2 guidelines for authorship, and told her that I believed I should be sole author on my paper. She agreed that she shouldn't be an author, although she was visibly angry. She said that I was a "weirdo" for doing this, and said that everybody already knows that advisors take credit for their student's work and that publishing with your advisor is the same as publishing alone. But most importantly, she told me that she would not approve my proposal/dissertation if I did not add her name to several more top-tier papers because then I "have no ties to the university" since I am not working with a professor, and therefore cannot receive my PhD. Obviously, I need a new advisor. However, there is really no one in my department in my research area. Switching research areas or departments are not options. So the remaining options are the following: (1) Add her name to several more papers. I do not like this idea because it is unethical, and there is no guarantee that anything is even gained in this option. She could simply refuse to recommend me in the end after I got her a bunch of papers. (2) Ignore her threats, and force my way to finishing my PhD while publishing single author papers. I do not believe she could stop me from graduating since I already have a decent publication record, and presumably will continue getting my work out. I have a fellowship, so she can't control my funding. Clearly, I will not have a letter of recommendation in this case. On the other hand, I will have a bunch of single author papers. (3) Try to convince a professor in an unrelated research area to be my advisor, emphasizing that I am independent and can do my work alone. There are a few theory professors in my dept, although they are totally different areas. I have no idea the chance of this working out. (4) Go to the department chair and tell him the whole story, ask what to do. What do you think I should do?
As a department chair, I can say you aren't alone. These situations come up all too often. Please do reach out to your department chair, graduate program director or grad student ombudsperson if your institution has one. We want to know when our faculty are behaving badly and often we can help.
{ "source": [ "https://cstheory.stackexchange.com/questions/42704", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/52568/" ] }
46,209
I am interested in any relation between "almost all objects(from a universe) possessing a particular property P" versus "testing whether an object has property P being poly. time decidable". My guess is that they are completely separate (that is, one doesn't imply the other). Am I missing something? (Note: Almost all in the sense of probability) PS: I am not sure whether the tag probability is appropriate here, sorry.
They are separate (assuming $P \ne NP$ ). Consider the following property $P(x)$ : $x$ is a $2n$ -bit string, where either the first $n$ bits are not all zeros, or the last $n$ bits are a yes-instance of 3SAT. It's clear that testing whether $x$ satisfies $P$ is NP-hard, yet almost all strings satisfy it: the density $\to 1$ as $n \to \infty$ .
{ "source": [ "https://cstheory.stackexchange.com/questions/46209", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/47855/" ] }
47,337
Edit: As indicated below by Mahdi Cheraghchi and in the comments, the paper has been withdrawn. Thanks for the multiple excellent answers on the implications of this claim. I, and hopefully others, have benefited from them. It would probably be unfair to accept just one one answer in this case. I apologise if this is off topic. In the paper just uploaded today (Edit: the paper is now withdrawn due to a flaw, see the comments below) https://arxiv.org/abs/2008.00601 A. Farago claims to prove that NP=RP. From the abstract: We (claim to) prove the extremely surprising fact that NP=RP. It is achieved by creating a Fully Polynomial-Time Randomized Approximation Scheme (FPRAS) for approximately counting the number of independent sets in bounded degree graphs, with any fixed degree bound, which is known to imply NP=RP. While our method is rooted in the well known Markov Chain Monte Carlo (MCMC) approach, we overcome the notorious problem of slow mixing by a new idea for generating a random sample from among the independent sets. I am not an expert in the complexity hierarchies, why is this thought to be so surprising? And what are the implications, if the claim is correct?
Prelude: the below is just one consequence of $\mathsf{RP}=\mathsf{NP}$ and probably not the most important, e.g. compared to collapse of the polynomial hierarchy. There was a great and more comprehensive answer than this, but its author removed it for some reason. Hopefully the question can continue to get more answers. $\mathsf{P}/\mathsf{poly}$ is the set of decision problems solvable by polynomial-size circuits. We know $\mathsf{RP} \subseteq \mathsf{BPP}$ and, by Adleman's theorem, $\mathsf{BPP} \subseteq \mathsf{P}/\mathsf{poly}$ . So among the only mildly shocking implications of $\mathsf{RP}=\mathsf{NP}$ would be $\mathsf{NP} \subseteq \mathsf{P}/\mathsf{poly}$ . Another way to put it is that instead of each "yes" instance of an $\mathsf{NP}$ problem having its own witness, there would exist for each $n$ a single witness string that can be used to verify, in polynomial time, membership of any instance of size $n$ .
{ "source": [ "https://cstheory.stackexchange.com/questions/47337", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/21948/" ] }
47,932
I recently started reading about Descriptive Complexity, the branch of Complexity Theory studying the logic languages needed to express complexity classes. The main milestone in the area seems to be Neil Immerman's book, but this is already quite old. Seems like this line of research is dead. Is this the case? If so, why?
I also have the impression that Descriptive Complexity is a less active area of research nowadays. Nevertheless, there are some topics in which people are still active:

Rank logics:
- Rank Logic is Dead, Long Live Rank Logic! by Grädel and Pakusa
- Symmetric Circuits for Rank Logic by Dawar and Wilsenach
- Separating Rank Logic from Polynomial Time by Lichter

Choiceless Polynomial Time:
- Canonization for Bounded and Dihedral Color Classes in Choiceless Polynomial Time by Lichter and Schweitzer
- Choiceless Logarithmic Space by Grädel and Schalthöfer

Dynamic Complexity:
- Dynamic Complexity of Parity Exists Queries by Vortmeier and Zeume
- Reachability Is in DynFO by Datta, Kulkarni, Mukherjee, Schwentick and Zeume
- PhD thesis of Thomas Zeume

Other interesting things:
- Descriptive Complexity for Counting Complexity Classes by Arenas, Muñoz and Riveros
- Descriptive complexity of real computation and probabilistic independence logic by Hannula, Kontinen, Van den Bussche and Virtema
- Descriptive Complexity of Deterministic Polylogarithmic Time by Ferrarotti et al
- On the Power of Symmetric Linear Programs by Atserias, Dawar and Ochremiak
- Traversal-invariant characterizations of logarithmic space by Bhaskar, Lindell and Weinstein

The list is not supposed to be complete. Just giving you a glimpse of what kind of problems people are looking at.
{ "source": [ "https://cstheory.stackexchange.com/questions/47932", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/53377/" ] }
14
I am sure data science as will be discussed in this forum has several synonyms or at least related fields where large data is analyzed. My particular question is in regards to Data Mining. I took a graduate class in Data Mining a few years back. What are the differences between Data Science and Data Mining and in particular what more would I need to look at to become proficient in Data Mining?
@statsRus starts to lay the groundwork for your answer in another question, "What characterises the difference between data science and statistics?":

- Data collection: web scraping and online surveys
- Data manipulation: recoding messy data and extracting meaning from linguistic and social network data
- Data scale: working with extremely large data sets
- Data mining: finding patterns in large, complex data sets, with an emphasis on algorithmic techniques
- Data communication: helping turn "machine-readable" data into "human-readable" information via visualization

Definition

Data-mining can be seen as one item (or set of skills and applications) in the toolkit of the data scientist. I like how he separates the definition of mining from collection in a sort of trade-specific jargon. However, I think that data-mining would be synonymous with data-collection in a US-English colloquial definition.

As to where to go to become proficient? I think that question is too broad as it is currently stated and would receive answers that are primarily opinion based. Perhaps if you could refine your question, it might be easier to see what you are asking.
{ "source": [ "https://datascience.stackexchange.com/questions/14", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/66/" ] }
19
Lots of people use the term big data in a rather commercial way, as a means of indicating that large datasets are involved in the computation, and therefore potential solutions must have good performance. Of course, big data always carry associated terms, like scalability and efficiency, but what exactly defines a problem as a big data problem? Does the computation have to be related to some set of specific purposes, like data mining/information retrieval, or could an algorithm for general graph problems be labeled big data if the dataset was big enough ? Also, how big is big enough (if this is possible to define)?
To me (coming from a relational database background), "Big Data" is not primarily about the data size (which is the bulk of what the other answers cover so far). "Big Data" and "Bad Data" are closely related. Relational Databases require 'pristine data'. If the data is in the database, it is accurate, clean, and 100% reliable. Relational Databases require "Great Data", and a huge amount of time, money, and accountability is put into making sure the data is well prepared before loading it into the database. If the data is in the database, it is 'gospel', and it defines the system's understanding of reality. "Big Data" tackles this problem from the other direction. The data is poorly defined, much of it may be inaccurate, and much of it may in fact be missing. The structure and layout of the data is linear as opposed to relational. Big Data has to have enough volume so that the amount of bad or missing data becomes statistically insignificant. When the errors in your data are common enough to cancel each other out, when the missing data is proportionally small enough to be negligible, and when your data access requirements and algorithms are functional even with incomplete and inaccurate data, then you have "Big Data". "Big Data" is not really about the volume, it is about the characteristics of the data.
{ "source": [ "https://datascience.stackexchange.com/questions/19", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/84/" ] }