948
In this guest post at the complexity weblog, Josh Grochow reports on a recent workshop devoted to GCT that was held at Princeton in July. Several of the attendees argued that we should use GCT to attack problems easier than $\mathsf{P}$ vs. $\mathsf{NP}$ in order to build intuition and see if the method has potential. The question that has been bugging me: Is it possible to use GCT to show known separations like $\mathsf{P} \neq \mathsf{EXP}$ or $\mathsf{L} \neq \mathsf{PSPACE}$? Does something like $\mathsf{L} \neq \mathsf{PSPACE}$

1. not even make sense in the GCT context, or
2. turn out to be utterly trivial and uninteresting in the GCT framework, or
3. lead to conjectures just as hard as $\mathsf{P}$ vs. $\mathsf{NP}$?
Short answer: probably not (1), definitely not (2), and possibly (3). This is something I have been thinking about off and on for a while now.

First, in a sense GCT is really aimed at giving lower bounds on computing functions, rather than on decision problems. But your question makes perfect sense for the function-class versions of $L$, $P$, $PSPACE$, and $EXP$.

Second, actually proving the boolean versions -- the ones we know and love, like $FP \neq FEXP$ -- is probably incredibly difficult in a GCT approach, since that would require the use of modular representation theory (representation theory over finite fields), which is not well understood in any context. But a reasonable goal might be to use GCT to prove an algebraic analog of $FP \neq FEXP$.

To get to your question: I believe these questions can be formulated in a GCT context, though it's not immediately obvious how. More or less, you need a function that is complete for the class and characterized by its symmetries; it is an extra bonus if the representation theory associated to the function is easy to understand, but this latter condition is usually quite difficult to arrange. Even once the questions are formulated in a GCT context, I have no idea how difficult it will be to use GCT to prove (algebraic analogs of) $FP \neq FEXP$, etc. The representation-theoretic conjectures that arise in these contexts will likely have a very similar flavor to the ones arising in $P$ vs. $NP$ or permanent vs. determinant. One might hope that the classical proofs of these separation results could give some idea of how to find the representation-theoretic "obstructions" needed for a GCT proof. However, the proofs of the statements you mention are all hierarchy theorems based on diagonalization, and I do not see how diagonalization will give you much insight into the representation theory associated with a function that is complete for (the algebraic analog of) $FEXP$, say. On the other hand, I haven't yet seen how to formulate $FEXP$ in a GCT context, so it's a little early to say.

Finally, as I mentioned in that blog post, Peter Bürgisser and Christian Ikenmeyer have attempted to re-prove the lower bound on the border rank of $2 \times 2$ matrix multiplication (which was proven to be 7 in 2006 by Joseph Landsberg). They were able to show the border rank is at least 6 by a computer search for GCT obstructions. Update April 2013: they have since managed to re-prove Landsberg's result using a GCT obstruction, and to show an asymptotic $\frac{3}{2}n^2 - 2$ lower bound on matrix multiplication using such obstructions. Even before GCT reproduced the known lower bound on matrix multiplication, it enabled a computer search more efficient than the alternative (which would involve Gröbner bases, whose computation takes doubly exponential time in the worst case). In their talks at the workshop, both Peter and Christian pointed out (correctly, I'd say) that what we really hope to get out of computing small examples is not re-proving known lower bounds, but some insight that will let us use these techniques to prove new lower bounds. The nice thing about GCT in the context of matrix multiplication is that the technique generalizes easily from $2 \times 2$ to $3 \times 3$ matrix multiplication (although computing the obstructions with the current techniques obviously gets more expensive), whereas Landsberg's approach seems very difficult to implement even for the $3 \times 3$ case.
A similar thing can be said about the complexity class separations you mention: GCT is general enough that it may apply not only to known results like $FP \neq FEXP$, but also to unknown ones like $P \neq NP$, whereas we know that diagonalization alone cannot resolve the latter (diagonalization relativizes).
{ "source": [ "https://cstheory.stackexchange.com/questions/948", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/847/" ] }
990
Entanglement is often held up as the key ingredient that makes quantum algorithms well... quantum, and this can be traced back to the Bell states that destroy the idea of quantum physics as a hidden-state probabilistic model. In quantum information theory (from my rather weak understanding), entanglement can also be used as a concrete resource that bounds the ability to do certain kinds of coding. But from other conversations (I recently sat on the Ph.D. committee of a physicist working in quantum methods) I gather that entanglement is difficult to quantify, especially for mixed states. Specifically, it appears hard to say that a particular quantum state has X units of entanglement in it (the student's Ph.D. thesis was about trying to quantify the amount of entanglement "added" by well-known gate operations). In fact, a recent Ph.D. thesis suggests that a notion termed "quantum discord" might also be relevant (and needed) to quantify the "quantumness" of an algorithm or a state. If we want to treat entanglement as a resource like randomness, it's fair to ask how to measure how much of it is "needed" for an algorithm. I'm not talking about complete dequantization, merely a way of measuring the quantity. So is there currently any accepted way of measuring the "quantumness" of a state, an operator, or an algorithm in general?
It depends on the context.

For quantum algorithms, the situation is tricky, since for all we know, P = BPP = BQP. So we can never say that a quantum algorithm does something that no classical algorithm can do; only something that a naive simulation would have trouble with. For example, if a quantum circuit is drawn as a graph, then there is a classical simulation that runs in time exponential in the treewidth of the graph. So treewidth can be thought of as an upper bound on 'quantumness', although not a precise measure. Sometimes measuring quantumness in algorithms gets conflated with trying to measure the amount of entanglement produced by an algorithm, but we now think that a noisy quantum computer could have computational advantages over a classical computer even with so much noise that its qubits are never in an entangled state (e.g. the one clean qubit model). So the consensus is now more on the side of thinking of the quantumness in quantum algorithms as related to the dynamics rather than the states generated along the way. This can help explain why 'dequantizing' is not likely to be generally possible.

For bipartite quantum states, where the context is two-party correlations, we have many, many good measures of quantumness. Many have flaws, like being NP-hard to compute, or not being additive, but nevertheless we have a pretty sophisticated understanding of this situation. Here is a recent review.

There are other contexts, such as when we have a quantum state and would like to choose between different incompatible measurements. In this setting, there are uncertainty principles that tell us how incompatible the measurements are. The more incompatible the measurements, the more 'quantum' a situation we have. This is related to cryptography and zero-error capacities of noisy channels, among many other things.
{ "source": [ "https://cstheory.stackexchange.com/questions/990", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/80/" ] }
1,026
In 1995, Russell Impagliazzo proposed five complexity worlds:

1. Algorithmica: $P=NP$, with all the amazing consequences.
2. Heuristica: $NP$-complete problems are hard in the worst case ($P \ne NP$) but are efficiently solvable in the average case.
3. Pessiland: There exist $NP$-complete problems that are hard on average, but one-way functions do not exist. This implies that we cannot generate hard instances of an $NP$-complete problem together with a known solution.
4. Minicrypt: One-way functions exist, but public-key cryptographic systems are impossible.
5. Cryptomania: Public-key cryptographic systems exist and secure communication is possible.

Which world is favored by the recent advances in computational complexity? What is the best evidence for the choice?

Russell Impagliazzo, A Personal View of Average-Case Complexity, 1995
Impagliazzo's Five Worlds, The Computational Complexity blog
About a year ago I co-organized a workshop on complexity and cryptography: status of Impagliazzo's worlds, and the slides and videos on the web site may be of interest. The short answer is that not much has changed, in the sense that most researchers still believe we live in "Cryptomania", we still have more or less the same evidence for this, and there has not been much progress on collapsing any of the worlds into one another. Perhaps the most significant piece of new information is Shor's algorithm, which shows that, at least if you replace P with BQP, the most commonly used public-key cryptosystems are insecure. But because of lattice-based cryptosystems, the default assumption is that we live in Cryptomania even in this case, though perhaps the consensus here is a bit weaker than in the classical case. Even in the classical case, there seems to be much more evidence for the existence of one-way functions ("Minicrypt") than for the existence of public-key encryption ("Cryptomania"). Still, given the effort people have spent trying to break various public-key cryptosystems, there is significant evidence for the latter as well.
{ "source": [ "https://cstheory.stackexchange.com/questions/1026", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/495/" ] }
1,046
The problem #SAT is the canonical #P-complete problem. It is a function problem rather than a decision problem: given a boolean formula $F$ in propositional logic, it asks how many satisfying assignments $F$ has. What are the best known lower bounds on #SAT?
To my knowledge, no one has figured out how to exploit the "counting solutions" property of #SAT in any lower bound on deterministic algorithms, so unfortunately the best known lower bounds for #SAT are basically the same as those for SAT.

However, there has been a little progress. Note that the decision version of #SAT is called "Majority-SAT": given a formula, do at least $1/2$ of the possible assignments satisfy it? Majority-SAT is $PP$-complete, and given an algorithm for Majority-SAT, one can solve #SAT with $O(n)$ calls to the algorithm.

The closest that people have gotten to new lower bounds for #SAT (that are not known to hold for SAT) is with lower bounds for "Majority-of-Majority-SAT": given a propositional formula over two sets of variables $X$ and $Y$, for at least $1/2$ of the possible assignments to $X$, is it true that at least $1/2$ of the assignments to $Y$ make the formula satisfiable? This problem is in the "second level" of the counting hierarchy (the class $PP^{PP}$). Quantum time-space lower bounds (and more) are known for this class. The survey at http://pages.cs.wisc.edu/~dieter/Papers/sat-lb-survey-fttcs.pdf gives an overview of results in this direction.

UPDATE: As of 2019, the first paragraph above is obsolete. It is known that #SAT requires a time-space product that is basically $n^2$. See for example "Quadratic Time-Space Lower Bounds for Computing Natural Functions with a Random Oracle", https://drops.dagstuhl.de/opus/volltexte/2018/10149/
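To see why $O(n)$ calls suffice, here is a minimal Python sketch of the binary search on the count. All names here are hypothetical, and the "oracle" simply counts by brute force on tiny instances; in the real reduction each threshold query is answered by a single Majority-SAT query on a suitably padded formula.

from itertools import product

def count_sat(clauses, n):
    # Brute-force #SAT for a CNF over variables 1..n.
    # A clause is a list of nonzero ints; negative means negated literal.
    return sum(
        all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for bits in product([False, True], repeat=n)
    )

def threshold_oracle(clauses, n, t):
    # Stand-in for "does the formula have >= t satisfying assignments?".
    # In the actual reduction: one Majority-SAT query on a padded formula.
    return count_sat(clauses, n) >= t

def sharp_sat_via_oracle(clauses, n):
    # Binary search over [0, 2^n]: at most n + 1 oracle calls.
    lo, hi = 0, 2 ** n
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if threshold_oracle(clauses, n, mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

f = [[1, 2], [-1, 3]]                # (x1 or x2) and (not x1 or x3)
print(sharp_sat_via_oracle(f, 3))    # 4, matching count_sat(f, 3)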
{ "source": [ "https://cstheory.stackexchange.com/questions/1046", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/947/" ] }
1,064
"Michael I. Trofimov" claims that he has found a poly-time algorithm for graph isomorphism, which works for all graphs. The paper is given in arXiv . The companion website gives a proof-of-concept program which runs the algorithm. (The password for the program is given in the paper.) I wanted to know whether the community is aware of Trofimov's results, and whether it's been proved, refuted, or unresolved?
For some more discussion of this particular paper, see this thread on a related Wikipedia talk page. Some of the participants in that discussion found specific bugs, and the paper does not seem to have been updated in response. I tried to read it myself, but rather than finding any specific bugs I just got lost in vague descriptions of matrices and matrix manipulations that did not make clear which variables were inputs and which were outputs. Based on that experience I don't think the paper should be taken seriously until it has passed some level of peer review (accepted to one of the usual journals or conferences). More generally, it is easy to define algorithms for graph isomorphism that attempt to amplify some sort of subtle asymmetry in the graph to the point where it is obvious how to match the vertices to each other, and it is hard to find counterexamples for these algorithms, but that is very different from having a clear proof of correctness that works for all graphs.
{ "source": [ "https://cstheory.stackexchange.com/questions/1064", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/873/" ] }
1,079
In "On determinism versus nondeterminism and related problems" (Proc. IEEE FOCS, pages 429–438, 1983), Paul, Pippenger, Szemerédi and Trotter proved that $\mathsf{NTIME}(n)\neq\mathsf{DTIME}(n)$. This answers the question of whether $\mathsf{NTIME}(n^k)\neq\mathsf{DTIME}(n^k)$ for $k=1$. Is anything known about a similar result for any other fixed $k$?
No unconditional lower bound is known for any $k \geq 2$ in the multitape TM model (or any model stronger than it). Ravi Kannan studied this problem in "Towards separating nondeterminism from determinism" (1984). In the process of trying to show $NTIME(n^k) \neq TIME(n^k)$ he managed to prove the following: there is some universal constant $c \geq 1$ such that for every $k$, $NTIME(n^k) \not\subseteq TIME\text{-}SPACE(n^k, n^{k/c})$. Here, $TIME\text{-}SPACE(n^k, n^{k/c})$ is the class of languages recognized by machines using time $n^k$ and space $n^{k/c}$ simultaneously. Clearly $TIME\text{-}SPACE(n^k, n^{k/c}) \subseteq TIME(n^k)$, but it is not known whether they are equal.

If you assume for some $k \geq 2$ that $NTIME(n^k) = TIME(n^k)$, you get interesting consequences. $P=NP$ is obvious, but it also implies that ${\sf NL} \neq {\sf P}$. This can be proved using an "alternation-trading" argument. Basically, for every $k$ and every language $L \in {\sf NL}$, there is a constant $c$ and some alternating machine that recognizes $L$ and makes $c$ alternations, guesses $O(n)$ bits per alternation, then switches to a deterministic mode and runs in $n^k$ time. (This follows, for example, from playing around with the constructions in Fortnow, "Time-Space Tradeoffs for Satisfiability" (1997).) Now if $TIME(n^k) = NTIME(n^k)$, then all these $c$ alternations can be removed with only a small amount of overhead, and you end up with a $TIME(n^k)$ computation that recognizes $L$. Hence ${\sf NL} \subseteq TIME(n^k)$, and since $TIME(n^k) \subsetneq {\sf P}$ by the time hierarchy theorem, ${\sf NL} \neq {\sf P}$. Probably no such alternating simulation exists, but if you can rule it out, then you'll have the lower bound you seek. (Note: I believe that the above argument is also in Kannan's paper.)
{ "source": [ "https://cstheory.stackexchange.com/questions/1079", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/976/" ] }
1,117
I have three related subquestions, which are highlighted by bullet points below (no, they could not be split, if you are wondering). Andrej Bauer wrote, here, that some functions are realizable through a Turing machine, but not through the lambda calculus. A key step of his reasoning is:

However, if we use the lambda calculus, then [the program] c is supposed to compute a numeral representing a Turing machine out of a lambda term representing a function f. This cannot be done (I can explain why, if you ask it as a separate question).

I would like to see an explanation/informal proof. I don't see how to apply Rice's theorem here; it would apply to the problem "are this Turing machine T and this lambda term L equivalent?", because applying this predicate to equivalent terms gives the same result. However, the required function might compute different, but equivalent, TMs for different, but equivalent, lambda terms. Moreover, if the problem is with introspection of a lambda term, I think that passing a Gödel encoding of a lambda term would also be acceptable, wouldn't it? On the one hand, given that his example involves computing, in the lambda calculus, the number of steps needed by a Turing machine to complete a given task, I'm not very surprised. But since here the lambda calculus can't solve a Turing-machine-related problem, I wonder whether one can define a similar problem for the lambda calculus and prove it unsolvable for Turing machines, or whether there is actually a difference in power in favor of Turing machines (which would surprise me).
John Longley has a very extensive survey article discussing the issues involved, "Notions of Computability at Higher Type". The basic idea is that the Church-Turing thesis is only about functions from $\mathbb{N} \to \mathbb{N}$ -- and there's more to computation than that! In particular, when we write programs, we make use of functions of higher type (such as $(\mathbb{N} \to \mathbb{N}) \to \mathbb{N}$).

In order to fully define a model of higher-type computation, we need to specify the calling convention for functions, in order to allow one function to call another function it receives as an argument. In the lambda calculus, the standard calling convention is that we represent functions by lambda terms, and the only thing you can do with a lambda in the lambda calculus is to apply it. In typical encodings with Turing machines, we pass functions as arguments by fixing a particular Gödel encoding, and then passing strings representing the index of the machine you want to pass as an argument.

The difference in encoding means that you can analyze the syntax of the argument with a TM-style encoding, and you cannot with a standard lambda-calculus representation. So if you receive a lambda term for a function of type $\mathbb{N} \to \mathbb{N}$, you can only test its behavior by passing it particular $n$'s -- you can't analyze the structure of the term in any way. This is just not enough information to figure out the code of the lambda term.

One thing worth noting is that with higher types, if a language is less expressive at one order, it is more expressive one order up, because functions are contravariant. So, similarly, there are functions you can write in LC that you can't with a TM-style encoding (because they rely on the fact that you can pass functional arguments and know that the receiver can't look inside the function you give it).

EDIT: Here's an example of a function definable in PCF, but not in TM+Gödel encodings. I'll declare the isAlwaysTrue function

isAlwaysTrue : ((unit → bool) → bool) → bool

which should return true if its argument returns true on every input, return false if its argument returns false on some input, and go into a loop if its argument goes into a loop on some input. We can define this function pretty easily, as follows:

isAlwaysTrue p = p (λ(). true) ∧ p (λ(). false) ∧ p (λ(). ⊥)

where ⊥ is the looping computation and ∧ is the and operator on booleans. This works because there are only three inhabitants of unit → bool in PCF, and so we can exhaustively enumerate them. However, in a TM+Gödel-encoding style model, p could test how long its argument takes to return an answer, and return different answers based on that. So the implementation of isAlwaysTrue with TMs would fail to meet the spec.
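A loose illustration of the two calling conventions, in Python rather than PCF (only an analogy: Python closures can in fact be introspected, e.g. via __code__, so nothing here is a proof): a receiver handed source code (TM-style Gödel encoding) can inspect syntax as well as run the function, while a receiver handed an opaque closure (LC-style) can only apply it.

def receiver_tm_style(src):
    # Receives a "Goedel number" (here: source text), so it can both
    # analyze the syntax and run the function.
    f = eval(src)
    return ("lambda" in src, f(3))

def receiver_lc_style(f):
    # Receives an opaque value: application is the only available operation.
    return f(3)

print(receiver_tm_style("lambda n: n + 1"))   # (True, 4)
print(receiver_lc_style(lambda n: n + 1))     # 4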
{ "source": [ "https://cstheory.stackexchange.com/questions/1117", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/989/" ] }
1,130
I have heard that there are heuristic arguments in statistical physics that yield results in probability theory for which rigorous proofs are either unknown or very difficult to arrive at. What is a simple toy example of such a phenomenon? It would be good if the answer assumed little background in statistical physics and could explain what these mysterious heuristics are and how they can be informally justified. Also, perhaps someone can indicate the broad picture of how much of these heuristics can be rigorously justified and how the program of Lawler, Schramm and Werner fits into this.
The second paragraph of RJK's response deserves more detail.

Let $\phi$ be a formula in conjunctive normal form, with $m$ clauses, $n$ variables, and at most $k$ variables per clause. Suppose we want to determine if $\phi$ has a satisfying assignment. Formula $\phi$ is an instance of the k-SAT decision problem.

When there are few clauses (so $m$ is quite small compared to $n$), then it is almost always possible to find a solution. A simple algorithm will find a solution in roughly linear time in the size of the formula. When there are many clauses (so $m$ is quite large compared to $n$), then it is almost always the case that there is no solution. This can be shown by a counting argument. However, during search it is almost always possible to prune large parts of the search space by means of consistency techniques, because the many clauses interact so extensively. Establishing unsatisfiability can then usually be done efficiently.

V. Chvátal and B. Reed. Mick gets some (the odds are on his side), FOCS 1992. doi: 10.1109/SFCS.1992.267789

In 1986 Fu and Anderson conjectured a relationship between optimisation problems and statistical physics, based on spin glass systems. Although they used sentences like

Intuitively, the system must be sufficiently large, but it is difficult to be more specific.

they do actually give specific predictions.

Y. Fu and P. W. Anderson. Application of statistical mechanics to NP-complete problems in combinatorial optimisation, J. Phys. A 19 1605, 1986. doi: 10.1088/0305-4470/19/9/033

Based on arguments from statistical physics, Zecchina and collaborators conjectured that k-SAT should become hard when $\alpha = m/n$ is near a critical value. The precise critical value depends on $k$, but is in the region of 3.5 to 4.5 for 3-SAT.

Rémi Monasson, Riccardo Zecchina, Scott Kirkpatrick, Bart Selman, Lidror Troyansky. Determining computational complexity from characteristic `phase transitions', Nature 400 133–137, 1999. (doi: 10.1038/22055, free version)

Friedgut provided a rigorous proof of these heuristic arguments. For every fixed value of $k$, there are two thresholds $\alpha_1 < \alpha_2$. For $\alpha$ below $\alpha_1$, there is a satisfying assignment with high probability. For $\alpha$ above $\alpha_2$, formula $\phi$ is unsatisfiable with high probability.

Ehud Friedgut (with an appendix by Jean Bourgain), Sharp thresholds of graph properties, and the $k$-sat problem, J. Amer. Math. Soc. 12 1017–1054, 1999. (PDF)

Dimitris Achlioptas worked on many of the remaining issues, and showed that the above argument holds for constraint satisfaction problems, too. These are allowed to use more than just two values for each variable. One key paper shows rigorously why the Survey Propagation algorithm works so well to solve random k-SAT instances.

A. Braunstein, M. Mézard, R. Zecchina, Survey propagation: An algorithm for satisfiability, Random Structures & Algorithms 27 201–226, 2005. doi: 10.1002/rsa.20057

D. Achlioptas and F. Ricci-Tersenghi, On the Solution-Space Geometry of Random Constraint Satisfaction Problems, STOC 2006, 130–139. (preprint)
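For a concrete feel for the threshold phenomenon, here is a rough empirical sketch in Python (function names are mine; the satisfiability check is brute force, so only tiny $n$ are feasible, and at such small sizes the crossover around $\alpha \approx 4.27$ is smeared out and shifted rather than sharp):

import random
from itertools import product

def random_3sat(n, m, rng):
    # m random 3-clauses over variables 1..n; negative = negated literal.
    clauses = []
    for _ in range(m):
        vs = rng.sample(range(1, n + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in vs])
    return clauses

def satisfiable(clauses, n):
    # Brute-force check over all 2^n assignments.
    return any(
        all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for bits in product([False, True], repeat=n)
    )

rng = random.Random(0)
n, trials = 10, 50
for alpha in (2.0, 3.0, 4.0, 4.27, 5.0, 6.0):
    m = int(alpha * n)
    sat = sum(satisfiable(random_3sat(n, m, rng), n) for _ in range(trials))
    print(f"alpha = {alpha:4.2f}: ~{sat / trials:.2f} satisfiable")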
{ "source": [ "https://cstheory.stackexchange.com/questions/1130", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/15/" ] }
1,168
This question is (inspired by)/(shamefully stolen from) a similar question at MathOverflow , but I expect the answers here will be quite different. We all have favorite papers in our own respective areas of theory. Every once in a while, one finds a paper so astounding (e.g., important, compelling, deceptively simple, etc.) that one wants to share it with everyone. So list these papers here! They don't have to be from theoretical computer science -- anything that you think might appeal to the community is a fine answer. You can give as many answers as you want; please put one paper per answer ! Also, notice this is community wiki, so vote on everything you like! (Note there has been a previous question about papers in recursion-theoretic complexity but that is quite specialized.)
The 1936 paper that arguably started computer science itself: Alan Turing, "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society s2-42, 230–265, 1937. doi: 10.1112/plms/s2-42.1.230 In just 36 pages, Turing formulates (but does not name) the Turing Machine, recasts Gödel's famous First Incompleteness Theorem in terms of computation, describes the concept of universality, and in the appendix shows that computability by Turing machines is equivalent to computability by $\lambda$-definable functions (as studied by Church and Kleene).
{ "source": [ "https://cstheory.stackexchange.com/questions/1168", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/225/" ] }
1,198
Stanford University now has a Youtube channel , with free access to HD video of full courses on everything from dynamical systems to quantum entanglement. More conferences and workshops are videotaping their talks. What are videos online that you think everyone should know about? I'll seed this with a few answers to presentations that are mostly expository, but what I'm hoping might happen is that this community wiki could turn into a resource to share excellent presentations of new research, as well as a place to learn (or reinforce) background in an unfamiliar area.
Timothy Gowers has a set of videos on Computational Complexity and Quantum Computation online.
{ "source": [ "https://cstheory.stackexchange.com/questions/1198", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/30/" ] }
1,215
Several optimization problems that are known to be NP-hard on general graphs are trivially solvable in polynomial time (some even in linear time) when the input graph is a tree. Examples include minimum vertex cover, maximum independent set, subgraph isomorphism. Name some natural optimization problems that remain NP-hard on trees.
You can find "natural" and "well-known" examples of graph problems that are hard even when restricted to trees from our standard reference. Examples: Integral k-multicommodity flow, Common embedded subtree, Common subtree. (These are formulated as tree problems, but you can generalise them to arbitrary graphs. Then the above formulations are obtained as the special case when you restrict your input to trees.)

A more general recipe for generating problems that are hard on trees: take any NP-hard problem related to supersequences, superstrings, substrings, etc. Then re-interpret a string as a labelled path graph. Then pose the analogous question for general graphs (subsequence ≈ graph minor, substring ≈ subgraph). And we know that the problem is NP-hard even on trees (and on paths).

There are also many problems that are hard on weighted stars, by reduction from the subset-sum problem. A natural example is TSP with two travellers: given an edge-weighted graph $G$ and a limit $W$, can we find two closed walks $C_1$ and $C_2$ in $G$ such that each walk has total weight at most $W$, and each node of $G$ is covered by at least one walk? Again, it's easy to come up with variations of the theme (a sketch of the star reduction follows below).
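To make the weighted-star recipe concrete, here is a minimal Python sketch (names are mine) of the reduction from PARTITION: leaf $i$ hangs off the center by an edge of weight $a_i$, a closed walk covering a set $S$ of leaves costs $2\sum_{i \in S} a_i$, and since weights are nonnegative, covering a leaf with both walks never helps; so two walks of weight at most $W$ exist iff the numbers split into two parts each of sum at most $W/2$.

from itertools import combinations

def two_travellers_on_star(weights, W):
    # Brute force: try every split of the leaves between the two walks.
    n = len(weights)
    for r in range(n + 1):
        for S in combinations(range(n), r):
            in_S = set(S)
            cost1 = 2 * sum(weights[i] for i in in_S)
            cost2 = 2 * sum(weights[i] for i in range(n) if i not in in_S)
            if cost1 <= W and cost2 <= W:
                return True
    return False

a = [3, 1, 1, 2, 2, 1]                     # total 10; a 5 + 5 split exists
print(two_travellers_on_star(a, sum(a)))   # W = 10 -> True
print(two_travellers_on_star(a, 9))        # W = 9  -> False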
{ "source": [ "https://cstheory.stackexchange.com/questions/1215", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/344/" ] }
1,233
In his "Computational Complexity" book, Papadimitriou writes:

RP is in some sense a new and unusual kind of complexity class. Not any polynomially bounded nondeterministic Turing machine can be the basis of defining a language in RP. For a machine N to define a language in RP, it must have the remarkable property that on all inputs it either rejects unanimously, or it accepts by majority. Most nondeterministic machines behave in other ways for at least some inputs ... There is no easy way to tell whether a machine always halts with a certified output. We informally call such classes semantic classes, as opposed to the syntactic classes such as P and NP, where we can tell immediately by a superficial check whether an appropriately standardized machine indeed defines a language in the class.

Several pages later, he points out that:

language L is in the class PP if there is a nondeterministic polynomially bounded Turing machine N such that, for all inputs x, $x \in L$ iff more than half of the computations of N on input x end up accepting. We say that N decides L by majority.

Question 1: Why does Papadimitriou conclude that PP is a syntactic class, while its definition is only slightly different from that of RP?

Question 2: Is being "semantic" for a complexity class equivalent to NOT having complete problems, or is the lack of complete problems merely a property that we GUESS semantic classes possess?

Edit: See the related topic Do all complexity classes have a leaf language characterization?
RP involves a promise, that either 0 paths accept or more than half accept, no matter what the input is. For PP, there is no such promise. If more than half the paths accept, then $x \in L$, otherwise, $x \notin L$. (PP can be defined so that the acceptance criteria are $\geq 1/2$ and $< 1/2$ respectively.) Or in other words, if I give you a probabilistic TM claiming it is a PP machine deciding some language, you can be sure that it decides some language. Clearly, the language it decides is this one: Try input $x$. See if more than 1/2 of the paths accept (or more than 1/2 random strings cause it to accept). If so, $x \in L$. If not, $x \notin L$. So we've defined a language using this TM. On the other hand, if I give you a probabilistic TM claiming it is a RP machine deciding some language, you can't even be sure that it decides any language. The problem is that when you observe only a few paths accepting, you don't know if $x$ is in $L$ or not. So if I give you a RP machine, you just have to take my word for it. Indeed, checking if this machine defines a language is uncomputable. As for your second question, for syntactic classes usually there's an obvious complete problem, which is like "Given machine M, decide if it accepts in time T on input x." If you're given a nondeterministic machine, this problem is NP-complete, if it's a PP-machine, then it's PP-complete, etc. The obvious complete problem for semantic classes is undecidable, as I mentioned. So we don't get a complete problem for free for semantic classes. But a semantic class can have a complete problem. For example if P = BPP (as is widely believed), then BPP has a syntactic characterization. EDIT : Since there's some discussion on how to define semantic and syntactic classes, I'd like to point out that Papadimitriou gives a definition in his book when talking about leaf languages. (See my question about leaf languages for some references.) He says that syntactic classes are those for which there exists some language that defines the class using the leaf language technique. Semantic classes are those for which all such characterizations require promise problems. This is a rigorous definition, but only works for those languages that have leaf language characterizations.
{ "source": [ "https://cstheory.stackexchange.com/questions/1233", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/873/" ] }
1,263
I am seeking a definitive answer to whether or not generation of "truly random" numbers is Turing computable. I don't know how to phrase this precisely. This StackExchange question on "efficient algorithms for random number generation" comes close to answering my question. Charles Stewart says in his answer, "it [Martin-Löf randomness] cannot be generated by a machine." Ross Snider says, "any deterministic process (such as Turing/Register Machines) can not produce 'philosophical' or 'true' random numbers." Is there a clear and accepted notion of what constitutes a truly random number generator? And if so, is it known that it cannot be computed by a Turing machine? Perhaps pointing me to the relevant literature would suffice. Thanks for any help you can provide!

Edit. Thanks to Ian and Aaron for the knowledgeable answers! I am relatively unschooled in this area, and I am grateful for the assistance. If I may extend the question a bit in this addendum: Is it the case that a TM with access to a pure source of randomness (an oracle?) can compute a function that a classical TM cannot?
I am joining the discussion fairly late, but I will try to address several questions that were asked earlier.

First, as observed by Aaron Sterling, it is important to first decide what we mean by "truly random" numbers, and especially if we are looking at things from a computational complexity or computability perspective.

Let me argue, however, that in complexity theory, people are mainly interested in pseudo-randomness and pseudo-random generators, i.e. functions from strings to strings such that the distribution of the output sequences cannot be told apart from the uniform distribution by some efficient process (where several meanings of efficient can be considered, e.g. polytime computable, polynomial-size circuits, etc.). It is a beautiful and very active research area, but I think most people would agree that the objects it studies are not truly random; it is enough that they just look random (hence the term "pseudo").

In computability theory, a consensus has emerged as to what should be a good notion of "true randomness", and it is indeed the notion of Martin-Löf randomness which prevailed (other ones have been proposed and are interesting to study, but they do not bear all the nice properties Martin-Löf randomness has). To simplify matters, we will consider randomness for infinite binary sequences (other objects such as functions from strings to strings can easily be encoded by such sequences). An infinite binary sequence $\alpha$ is Martin-Löf random if no computable process (even if we allow this process to be computable in triple exponential time or higher) can detect a randomness flaw.

(1) What do we mean by "randomness flaw"? That part is easy: it is a set of measure 0, i.e. a property that almost all sequences do not have (here we talk about Lebesgue measure, i.e. the measure where each bit has a $1/2$ probability to be $0$, independently of all the other bits). An example of such a flaw is "having asymptotically 1/3 of zeroes and 2/3 of ones", which violates the law of large numbers. Another example is "for every n, the first 2n bits of $\alpha$ are perfectly distributed (as many zeroes as ones)". In this case the law of large numbers is satisfied, but not the central limit theorem. Etc., etc.

(2) How can a computable process test that a sequence does not belong to a particular set of measure 0? In other words, what sets of measure 0 can be computably described? This is precisely what Martin-Löf tests are about. A Martin-Löf test is a computable procedure which, given an input k, computably (i.e., via a Turing machine with input $k$) generates a sequence of strings $w_{k,0}$, $w_{k,1}$, ... such that the set $U_k$ of infinite sequences starting with one of those $w_{k,i}$ has measure at most $2^{-k}$ (if you like topology, notice that this is an open set in the product topology for the set of infinite binary sequences). Then the set $G=\bigcap_k U_k$ has measure $0$ and is referred to as a Martin-Löf nullset. We can now define Martin-Löf randomness by saying that an infinite binary sequence $\alpha$ is Martin-Löf random if it does not belong to any Martin-Löf nullset.

This definition might seem technical, but it is widely accepted as being the right one for several reasons:
- it is effective enough, i.e. its definition involves computable processes;
- it is strong enough: any "almost sure" property you may find in a probability theory textbook (law of large numbers, law of the iterated logarithm, etc.) can be tested by a Martin-Löf test (although this is sometimes hard to prove);
- it has been independently proposed by several people using different definitions (in particular the Levin-Chaitin definition using Kolmogorov complexity), and the fact that they all lead to the same concept is a hint that it should be the right notion (a little bit like the notion of computable function, which can be defined via Turing machines, recursive functions, lambda calculus, etc.);
- the mathematical theory behind it is very nice! See the three excellent books An Introduction to Kolmogorov Complexity and Its Applications (Li and Vitanyi), Algorithmic Randomness and Complexity (Downey and Hirschfeldt), and Computability and Randomness (Nies).

What does a Martin-Löf random sequence look like? Well, take a perfectly balanced coin and start flipping it. At each flip, write a 0 for heads and a 1 for tails. Continue until the end of time. That's what a Martin-Löf random sequence looks like :-)

Now back to the initial question: is there a computable way to generate a Martin-Löf random sequence? Intuitively the answer should be NO, because if we can use a computable process to generate a sequence $\alpha$, then we can certainly use a computable process to describe the singleton $\{\alpha\}$, so $\alpha$ is not random. Formally this is done as follows. Suppose a sequence $\alpha$ is computable. Consider the following Martin-Löf test: for all $k$, just output the prefix $a_k$ of $\alpha$ of length $k$, and nothing else. This has measure at most (in fact, exactly) $2^{-k}$, and the intersection of the sets $U_k$ as in the definition is exactly $\{\alpha\}$. QED!!

In fact a Martin-Löf random sequence $\alpha$ is incomputable in a much stronger sense: if some oracle computation with oracle $\beta$ (which itself is an infinite binary sequence) can compute $\alpha$, then for all $n$, $n-O(1)$ bits of $\beta$ are needed to compute the first $n$ bits of $\alpha$ (this is in fact a characterization of Martin-Löf randomness, which unfortunately is rarely stated as such in the literature).

OK, now the "edit" part of Joseph's question: Is it the case that a TM with access to a pure source of randomness (an oracle?) can compute a function that a classical TM cannot?

From a computability perspective, the answer is "yes and no". If you are given access to a random source as an oracle (where the output is presented as an infinite binary sequence), with probability 1 you will get a Martin-Löf random oracle, and as we saw earlier Martin-Löf random implies non-computable, so it suffices to output the oracle itself! Or if you want a function $f: \mathbb{N} \rightarrow \mathbb{N}$, you can consider the function $f$ which for all $n$ tells you how many zeroes there are among the first $n$ bits of your oracle. If the oracle is Martin-Löf random, this function will be non-computable.

But of course you might argue that this is cheating: indeed, for a different oracle we might get a different function, so there is a non-reproducibility problem. Hence another way to understand your question is the following: is there a function $f$ which is non-computable, but which can be "computed with positive probability", in the sense that there is a Turing machine with access to a random oracle which, with positive probability (over the oracle), computes $f$?
The answer is no, due to a theorem of Sacks whose proof is quite simple. Actually it has mainly been answered by Robin Kothari: if the probability that the TM is correct is greater than 1/2, then one can look, for all $n$, at all the possible oracle computations with input $n$ and find the output which gets the "majority vote", i.e. which is produced by a set of oracles of measure more than 1/2 (this can be done effectively). The argument even extends to smaller probabilities: suppose the TM outputs $f$ with probability $\epsilon > 0$. By Lebesgue's density theorem, there exists a finite string $\sigma$ such that if we fix the first bits of the oracle to be exactly $\sigma$, and then get the other bits at random, then we compute $f$ with probability at least 0.99. By taking such a $\sigma$, we can apply the above argument again.
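A toy finite version of the majority-vote step, as a Python sketch (the noisy_double procedure is hypothetical, standing in for a machine that errs only on a small-measure set of oracles): enumerate every string of random bits and take the most common answer.

from collections import Counter
from itertools import product

def noisy_double(n, coins):
    # Wrong answer only when every coin is heads: probability 2^(-len(coins)).
    return 0 if all(coins) else 2 * n

def majority_vote(proc, n, r):
    # Run proc on all 2^r coin sequences and return the majority output.
    votes = Counter(proc(n, coins) for coins in product([0, 1], repeat=r))
    return votes.most_common(1)[0][0]

print(majority_vote(noisy_double, 21, 3))   # 42, despite the faulty branch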
{ "source": [ "https://cstheory.stackexchange.com/questions/1263", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/337/" ] }
1,348
I believe the answer to this question is well-known; but, unfortunately, I don't know it. In quantum computing, we know that mixed states are represented by density matrices. And the trace norm of the difference of two density matrices characterizes the distinguishability of the two corresponding mixed states. Here, the trace norm of the difference is the sum of the absolute values of the eigenvalues of that (Hermitian) matrix, with an extra multiplicative factor 1/2 (in accordance with the statistical difference of two distributions). It is well-known that when the trace norm of the difference of two density matrices is one, the corresponding two mixed states are perfectly distinguishable, while when it is zero, the two mixed states are totally indistinguishable.

My question is: does the trace norm of the difference of two density matrices being one imply that these two density matrices are simultaneously diagonalizable? If this is the case, then the optimal measurement to distinguish these two mixed states will behave like distinguishing two distributions over the same domain with disjoint supports.
Here is one way to prove the fact you are interested in. Suppose $\rho_0$ and $\rho_1$ are density matrices. Like every other Hermitian matrix, it is possible to express the difference $\rho_0-\rho_1$ as $$\rho_0-\rho_1 = P_0-P_1$$ for $P_0$ and $P_1$ being positive semidefinite and having orthogonal images. (Sometimes this is called a Jordan-Hahn decomposition; it is unique and easily obtained from a spectral decomposition of $\rho_0-\rho_1$.) Note that the fact that $P_0$ and $P_1$ have orthogonal images implies that they are simultaneously diagonalizable, which I interpret is the property you are interested in. The trace norm of the difference $\rho_0-\rho_1$ (as you define it, with the multiplicative factor 1/2), is given by $$\|\rho_0-\rho_1\|_{\text{tr}} = \frac{1}{2}\operatorname{Tr}(P_0) + \frac{1}{2}\operatorname{Tr}(P_1).$$ Under the assumption that this quantity is 1, we will conclude that $P_0=\rho_0$ and $P_1=\rho_1$, which proves what you want to prove. To draw this conclusion, note first that $\operatorname{Tr}(P_0)-\operatorname{Tr}(P_1)=0$ and $\operatorname{Tr}(P_0)+\operatorname{Tr}(P_1)=2$, so $\operatorname{Tr}(P_0)=\operatorname{Tr}(P_1)=1$. Next, take $\Pi_0$ and $\Pi_1$ to be the orthogonal projections onto the images of $P_0$ and $P_1$, respectively. We have $$\Pi_0 (\rho_0 - \rho_1) = \Pi_0 (P_0 - P_1) = P_0$$ so $$\operatorname{Tr}(\Pi_0 \rho_0) - \operatorname{Tr}(\Pi_0 \rho_1) = 1.$$ Both $\operatorname{Tr}(\Pi_0 \rho_0)$ and $\operatorname{Tr}(\Pi_0 \rho_1)$ must be contained in the interval [0,1], from which we conclude that $\operatorname{Tr}(\Pi_0\rho_0)=1$ and $\operatorname{Tr}(\Pi_0\rho_1) = 0$. From these equations it is not difficult to conclude $\Pi_0\rho_0=\rho_0$ and $\Pi_0\rho_1=0$, and therefore $P_0=\rho_0$ by the equation above. A similar argument shows $P_1=\rho_1$.
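As a numerical sanity check of this fact (a sketch only, not part of the proof; it assumes numpy is available), the following builds two density matrices with orthogonal supports, verifies that their trace distance is 1, and verifies that they commute, hence are simultaneously diagonalizable:

import numpy as np

def trace_distance(r0, r1):
    # (1/2) * sum of the absolute eigenvalues of r0 - r1.
    return 0.5 * np.abs(np.linalg.eigvalsh(r0 - r1)).sum()

# rho0 supported on span{|0>, |1>}, rho1 on span{|2>, |3>}.
rho0 = np.diag([0.7, 0.3, 0.0, 0.0])
rho1 = np.diag([0.0, 0.0, 0.5, 0.5])

# Rotate both by the same orthogonal matrix: no longer diagonal,
# but they still commute.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
r0, r1 = Q @ rho0 @ Q.T, Q @ rho1 @ Q.T

print(trace_distance(r0, r1))            # 1.0
print(np.allclose(r0 @ r1, r1 @ r0))     # True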
{ "source": [ "https://cstheory.stackexchange.com/questions/1348", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/412/" ] }
1,370
When encoding a logic into a proof assistant such as Coq or Isabelle, a choice needs to be made between using a shallow and a deep embedding. In a shallow embedding, logical formulas are written directly in the logic of the theorem prover, whereas in a deep embedding logical formulas are represented as a datatype. What are the advantages and limitations of each approach? Are there any guidelines available for determining which to use? Is it possible to switch between the two representations in any systematic fashion? As motivation, I would like to encode various security-related logics into Coq and am wondering what the pros and cons of the different approaches are.
What are the advantages and limitations of the various approaches?

Pros of deep embeddings: You can prove and define things by induction on the structure of formulas. An example of interest is the size of a formula.

Cons of deep embeddings: You have to deal explicitly with the binding of variables. That's usually very laborious.

Are there any guidelines available for determining which to use?

Shallow embeddings are very useful for importing results proved in the object logic. For instance, if you have proved something in a small logic (e.g. separation logic), shallow embeddings can be a tool of choice to import your result into Coq. On the other side, deep embeddings are almost mandatory when you want to prove meta-theorems about the object logic (like cut-elimination, for instance).

Is it possible to switch between the two representations in any systematic fashion?

The idea behind the shallow embedding is really to work directly in a model of the object formulas. Usually people will map an object formula P directly (using notations or by doing the translation by hand) to an inhabitant of Prop. Of course, there are inhabitants of Prop which cannot be obtained by embedding a formula of the object logic. Therefore you lose some kind of completeness. On the other hand, it is possible to send every result obtained in a deep embedding setting through an interpretation function. Here is a little Coq example:

Inductive formula : Set :=
  Ftrue : formula
| Ffalse : formula
| Fand : formula -> formula -> formula
| For : formula -> formula -> formula.

Fixpoint interpret (F : formula) : Prop :=
  match F with
    Ftrue => True
  | Ffalse => False
  | Fand a b => (interpret a) /\ (interpret b)
  | For a b => (interpret a) \/ (interpret b)
  end.

Inductive derivable : formula -> Prop :=
  deep_axiom : derivable Ftrue
| deep_and : forall a b, derivable a -> derivable b -> derivable (Fand a b)
| deep_or1 : forall a b, derivable a -> derivable (For a b)
| deep_or2 : forall a b, derivable b -> derivable (For a b).

Inductive sderivable : Prop -> Prop :=
  shallow_axiom : sderivable True
| shallow_and : forall a b, sderivable a -> sderivable b -> sderivable (a /\ b)
| shallow_or1 : forall a b, sderivable a -> sderivable (a \/ b)
| shallow_or2 : forall a b, sderivable b -> sderivable (a \/ b).

(* You can prove the following lemma: *)
Lemma shallow_deep :
  forall F, derivable F -> sderivable (interpret F).

(* You can NOT prove the following lemma: *)
Lemma t :
  forall P, sderivable P -> exists F, interpret F = P.
{ "source": [ "https://cstheory.stackexchange.com/questions/1370", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/77/" ] }
1,410
I don't quite understand why almost all SAT solvers use CNF instead of DNF. It seems to me that solving SAT is easier using DNF. After all, you just have to scan through the set of implicants and check whether at least one of them does not contain both a variable and its negation. For CNF, there's no simple procedure like this.
The textbook reduction from SAT to 3SAT, due to Karp, transforms an arbitrary boolean formula $\Phi$ into an "equivalent" CNF boolean formula $\Phi'$ of polynomial size, such that $\Phi$ is satisfiable if and only if $\Phi'$ is satisfiable. (Strictly speaking, these two formulas are not equivalent, because $\Phi'$ has additional variables, but the value of $\Phi'$ doesn't actually depend on those new variables.) No similar reduction from arbitrary boolean formulas into DNF formulas is known; all known transformations increase the size of the formula exponentially. Moreover, unless P = NP, no such reduction is possible!
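The exponential blowup is easy to see on a standard example: the CNF $(x_1 \vee y_1) \wedge \cdots \wedge (x_n \vee y_n)$ has $n$ clauses, but its smallest equivalent DNF has $2^n$ terms, one for each way of choosing a literal from each clause. A quick Python sketch of the naive distribution (which, for this particular family, already yields the minimal DNF):

from itertools import product

def cnf_to_dnf(clauses):
    # Distribute AND over OR: one DNF term per choice of one literal
    # from each clause. (In general this can produce redundant terms;
    # for the family below all 2^n terms are needed.)
    return [list(choice) for choice in product(*clauses)]

n = 4
cnf = [[f"x{i}", f"y{i}"] for i in range(1, n + 1)]
dnf = cnf_to_dnf(cnf)
print(len(cnf), "clauses ->", len(dnf), "DNF terms")   # 4 clauses -> 16 terms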
{ "source": [ "https://cstheory.stackexchange.com/questions/1410", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/-1/" ] }
1,471
I'm often asked what a theoretical computer scientist does. It would be great to have some nice responses to this question. I tend to fall back to technical jargon and people's eyes usually glaze over at this point. What does a theoretical computer scientist do, in terms that can be understood by people who are not computer scientists? A good answer should be snappy, accurate in spirit, without sounding vague or trite. For bonus points, the answer should hint at why a theoretical computer scientist is neither a mathematician nor an IT practitioner. This question is inspired by the MO question https://mathoverflow.net/questions/3559/colloquial-catchy-statements-encoding-serious-mathematics although the intent is different.
My response is generally, "I study why some computations are hard to do". As an example, I typically compare addition and multiplication using the standard grade school methods. These are computations that everyone has done and that everyone appreciates the value of doing quickly. Everyone agrees that for large numbers, multiplication is much harder than addition. In fact, most people suggest that the elementary school method is as fast as you can go. Then I ask them why. How do they know that there isn't another way to do multiplication that is just as easy as addition? Pretty much everyone has at least some appreciation at this point for the difficulty of proving lower bounds (my particular interest), even though I haven't used that term. Depending on the background and interest of the audience, I may mention that someone has found a way to multiply that is much faster than the elementary school method (simply the word "algorithm" tends to bring a glaze to their eyes), but still slower than adding.
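The faster-than-grade-school multiplication alluded to above is, for instance, Karatsuba's algorithm: three recursive half-size multiplications instead of four, giving roughly $O(n^{1.585})$ digit operations instead of $O(n^2)$. A minimal Python sketch:

def karatsuba(x, y):
    # Multiply nonnegative integers using three recursive half-size
    # multiplications instead of four.
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    p = 10 ** m
    xh, xl = divmod(x, p)
    yh, yl = divmod(y, p)
    a = karatsuba(xh, yh)                       # high parts
    c = karatsuba(xl, yl)                       # low parts
    b = karatsuba(xh + xl, yh + yl) - a - c     # cross terms, one multiply
    return a * p * p + b * p + c

print(karatsuba(1234, 5678), 1234 * 5678)       # both print 7006652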
{ "source": [ "https://cstheory.stackexchange.com/questions/1471", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/109/" ] }
1,521
In a couple recent questions ( q1 q2 ), there has been discussion of "Theory A" vs "Theory B", seemingly to capture the divide between the study of logic and programming languages and the study of algorithms and complexity. This terminology was new to me, and a quick web search didn't come up with any obvious references explaining it. Does anyone know of a reference or references that explain the origin of this terminology, and what, if any, substantive benefit is intended to be derived from making this distinction?
It comes from the Handbook of Theoretical Computer Science, which had two volumes: A was for algorithms and complexity, and B was for logic and semantics. Jukka, did ICALP predate this? Or was it in response to this?

As for benefits, I think there's always some utility in taxonomizing areas based on topics of interest and forms of study. However, as with all taxonomizations, the problem comes when you forget to "go back up the tree and down the other side" :).

EDIT: as ICALP explicitly states, this division comes from the Elsevier journal Theoretical Computer Science, which itself predates the handbook, so I think that's a more accurate source.

EDIT ++: From the history of the EATCS comes this snippet about TCS, the journal:

Since that time M. Nivat, who is still Editor-in-Chief, has reported regularly to council and general assembly and occasionally in the Bulletin - e.g. when the split into sections A (automata, algebra and algorithms) and B (logic, semantics and related topics) was decided upon (Bulletin no. 45, p. 2,3, October 1991);

which yields 1991 as when this first started happening at the journal. However, the Handbook was first published in September 1990!
{ "source": [ "https://cstheory.stackexchange.com/questions/1521", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/629/" ] }
1,527
I'm looking at alternatives to PKI and I'm having trouble understanding exactly how certificateless public key algorithms (e.g. Al-Riyami and Paterson , Liu et al ) work in practice. It seems like the "partial private key" generated by the KGC in these systems is not actually confidential information (which would be awfully convenient for practical use of the system), but if it isn't, then I don't understand why the KGC and its master secret are necessary. (I hope this isn't too "practical" a question for this site.)
{ "source": [ "https://cstheory.stackexchange.com/questions/1527", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1311/" ] }
1,539
Since Chris Okasaki's 1998 book "Purely Functional Data Structures", I haven't seen too many new exciting purely functional data structures appear; I can name just a few:

- IntMap (also invented by Okasaki in 1998, but not present in that book)
- Finger trees (and their generalization over monoids)

There are also some interesting ways of implementing already-known data structures, such as using "nested types" or "generalized algebraic datatypes" to ensure tree invariants. Which other new ideas have appeared in this area since 1998?
New purely functional data structures published since 1998: 2001: Ideal Hash Trees , and its 2000 predecessor, Fast And Space Efficient Trie Searches , by Phil Bagwell : Apparently used as a fundamental building block in Clojure's standard library. 2001: A Simple Implementation Technique for Priority Search Queues , by Ralf Hinze : a really simple and beautiful technique for implementing this important datastructure (useful, say, in the Dijkstra algorithm). The implementation is particularly beautiful and readable due to heavy use of "view patterns". 2002: Bootstrapping one-sided flexible arrays , by Ralf Hinze : Similar to Okasaki's random-access lists, but they can be tuned to alter the time tradeoff between cons and indexing. 2003: New catenable and non-catenable deques , by Radu Mihaescu and Robert Tarjan : A new take on some older work (by Kaplan and Tarjan) that Okasaki cites (The most recent version of Kaplan & Tarjan's work was published in 2000 ). This version is simpler in some ways. 2005: Maxiphobic heaps ( paper and code ), by Chris Okasaki : Presented not as a new, more efficient structure, but as a way to teach priority queues. 2006: Purely Functional Worst Case Constant Time Catenable Sorted Lists , by Gerth Stølting Brodal, Christos Makris, and Kostas Tsichlas : Answers an outstanding question of Kaplan and Tarjan by demonstrating a structure with O(lg n) insert, search, and delete and O(1) concat. 2008: Confluently Persistent Tries for Efficient Version Control , by Erik D. Demaine, Stefan Langerman, and Eric Price : Presents several data structures for tries that have efficient navigation and modification near the leaves. Some are purely functional. Others actually improve a long-standing data structure by Dietz et al. for fully persistent (but not confluently persistent or purely functional) arrays. This paper also presente purely functional link-cut trees , sometimes called "dynamic trees". 2010: A new purely functional delete algorithm for red-black trees , by Matt Might : Like Okasaki's red-black tree insertion algorithm, this is not a new data structure or a new operation on a data structure, but a new, simpler way to write a known operation. 2012: RRB-Trees: Efficient Immutable Vectors , by Phil Bagwell and Tiark Rompf : An extension to Hash Array Mapped Tries, supporting immutable vector concatenation, insert-at, and split in O(lg n) time, while maintaining the index, update, and insertion speeds of the original immutable vector. Known in 1997, but not discussed in Okasaki's book: Many other styles of balanced search tree . AVL, brother, rank-balanced, bounded-balance, and many other balanced search trees can be (and have been) implemented purely functionally by path copying. Perhaps deserving special mention are: Biased Search Trees , by Samuel W. Bent, Daniel D. Sleator, and Robert E. Tarjan : A key element in Brodal et al.'s 2006 paper and Demaine et al.'s 2008 paper. Infinite sets that admit fast exhaustive search , by Martín Escardó : Perhaps not a data structure per se . Three algorithms on Braun Trees , by Chris Okasaki : Braun trees offer many stack operations in worst-case O(lg n). This bound is surpassed by many other data structures, but Braun trees have a cons operation lazy in its second argument, and so can be used as infinite stacks in some ways that other structures cannot. 
- The relaxed min-max heap: A mergeable double-ended priority queue and The KD heap: An efficient multi-dimensional priority queue, by Yuzheng Ding and Mark Allen Weiss: These happen to be purely functional, though this is not discussed in the papers. I do not think the time bounds achieved are any better than those that can be achieved by using finger trees (of Hinze & Paterson or Kaplan & Tarjan) as k-dimensional priority queues, but I think the structures of Ding & Weiss use less space.
- The Zipper, by Gérard Huet: Used in many other data structures (such as Hinze & Paterson's finger trees), this is a way of turning a data structure inside-out. (A minimal sketch appears at the end of this answer.)
- Difference lists are O(1) catenable lists with an O(n) transformation to usual cons lists. They have apparently been known since antiquity in the Prolog community, where they have an O(1) transformation to usual cons lists. The O(1) transformation seems to be impossible in traditional functional programming, but Minamide's hole abstraction, from POPL '98, discusses a way of allowing O(1) append and O(1) transformation within pure functional programming. Unlike the usual functional programming implementations of difference lists, which are based on function closures, hole abstractions are essentially the same (in both their use and their implementation) as Prolog difference lists. However, it seems that for years the only person who noticed this was one of Minamide's reviewers.
- Uniquely represented dictionaries support insert, update, and lookup with the restriction that no two structures holding the same elements can have distinct shapes. To give an example, sorted singly-linked lists are uniquely represented, but traditional AVL trees are not. Tries are also uniquely represented. Tarjan and Sundar, in "Unique binary search tree representations and equality-testing of sets and sequences", showed a purely functional uniquely represented dictionary that supports searches in logarithmic time and updates in $O(\sqrt{n})$ time. However, it uses $\Theta(n \lg n)$ space. There is a simple representation using Braun trees that uses only linear space but has update time of $\Theta(\sqrt{n \lg n})$ and search time of $\Theta(\lg^2 n)$.

Mostly functional data structures, before, during, and after Okasaki's book:

- Many procedures for making data structures persistent, fully persistent, or confluently persistent: Haim Kaplan wrote an excellent survey on the topic. See also above the work of Demaine et al., who demonstrate a fully persistent array in $O(m)$ space (where $m$ is the number of operations ever performed on the array) and $O(\lg \lg n)$ expected access time.
- 1989: Randomized Search Trees, by Cecilia R. Aragon and Raimund Seidel: These were discussed in a purely functional setting by Guy E. Blelloch and Margaret Reid-Miller in Fast Set Operations Using Treaps and by Dan Blandford and Guy Blelloch in Functional Set Operations with Treaps (code). They provide all of the operations of purely functional finger trees and biased search trees, but require a source of randomness, making them not purely functional. This may also invalidate the time complexity of the operations on treaps, assuming an adversary who can time operations and repeat the long ones.
  (This is the same reason why imperative amortization arguments aren't valid in a persistent setting, but it requires an adversary with a stopwatch.)
- 1997: Skip-trees, an alternative data structure to Skip-lists in a concurrent approach, by Xavier Messeguer, and Exploring the Duality Between Skip Lists and Binary Search Trees, by Brian C. Dean and Zachary H. Jones: Skip lists are not purely functional, but they can be implemented functionally as trees. Like treaps, they require a source of random bits. (It is possible to make skip lists deterministic, but, after translating them to a tree, I think they are just another way of looking at 2-3 trees.)
- 1998: All of the amortized structures in Okasaki's book! Okasaki invented this new method for mixing amortization and functional data structures, which were previously thought to be incompatible. It depends upon memoization, which, as Kaplan and Tarjan have sometimes mentioned, is actually a side effect. In some cases (such as PFDS on SSDs for performance reasons), this may be inappropriate.
- 1998: Simple Confluently Persistent Catenable Lists, by Haim Kaplan, Chris Okasaki, and Robert E. Tarjan: Uses modification under the hood to give amortized O(1) catenable deques, presenting the same interface as an earlier (purely functional, but with memoization) version appearing in Okasaki's book. Kaplan and Tarjan had earlier created a purely functional O(1) worst-case structure, but it is substantially more complicated.
- 2007: As mentioned in another answer on this page, semi-persistent data structures and persistent union-find by Sylvain Conchon and Jean-Christophe Filliâtre.

Techniques for verifying functional data structures, before, during, and after Okasaki's book:

- Phantom types are an old method for creating an API that does not allow certain ill-formed operations. A sophisticated use of them can be found in Oleg Kiselyov and Chung-chieh Shan's Lightweight Static Capabilities.
- Nested types are not actually more recent than 1998; Okasaki even uses them in his book. There are many other examples that are not in Okasaki's book; some are new, and some are old. They include: Stefan Kahrs's Red-black trees with types (code), Ross Paterson's AVL trees (mirror), Chris Okasaki's From fast exponentiation to square matrices: an adventure in types, Richard S. Bird and Ross Paterson's de Bruijn notation as a nested datatype, and Ralf Hinze's Numerical Representations as Higher-Order Nested Datatypes.
- GADTs are not all that new, either. They are a recent addition to Haskell and some MLs, but they have been present, I think, in various typed lambda calculi since the 1970s.
- 2004-2010: Coq and Isabelle for correctness. Several people have used theorem provers to verify the correctness of purely functional data structures. Coq can extract these verifications to working code in Haskell, OCaml, and Scheme; Isabelle can extract to Haskell, ML, and OCaml.
  - Coq: Pierre Letouzey and Jean-Christophe Filliâtre formalized red-black and AVL(ish) trees, finding a bug in the OCaml standard library in the process. I formalized Brodal and Okasaki's asymptotically optimal priority queues. Arthur Charguéraud formalized 825 of the 1,700 lines of ML in Okasaki's book.
  - Isabelle: Tobias Nipkow and Cornelia Pusch formalized AVL trees. Viktor Kuncak formalized unbalanced binary search trees.
    Peter Lammich published The Isabelle Collections framework, which includes formalizations of efficient purely functional data structures like red-black trees and tries, as well as data structures that are less efficient when used persistently, such as two-stack queues (without Okasaki's laziness trick) and hash tables. Peter Lammich also published formalizations of tree automata, Hinze & Paterson's finger trees (with Benedikt Nordhoff and Stefan Körner), and Brodal and Okasaki's purely functional priority queues (with Rene Meis and Finn Nielsen). René Neumann formalized binomial priority queues.
- 2007: Refined Typechecking with Stardust, by Joshua Dunfield: This paper uses refinement types for ML to find errors in SMLNJ's red-black tree delete function.
- 2008: Lightweight Semiformal Time Complexity Analysis for Purely Functional Data Structures, by Nils Anders Danielsson: Uses Agda with manual annotation to prove time bounds for some PFDS.

Imperative data structures or analyses not discussed in Okasaki's book, but related to purely functional data structures:

- The Soft Heap: An Approximate Priority Queue with Optimal Error Rate, by Bernard Chazelle: This data structure does not use arrays, and so has tempted first the #haskell IRC channel and later Stack Overflow users, but it includes delete in o(lg n), which is usually not possible in a functional setting, and imperative amortized analysis, which is not valid in a purely functional setting.
- Balanced binary search trees with O(1) finger updates. In Making Data Structures Persistent, James R. Driscoll, Neil Sarnak, Daniel D. Sleator, and Robert E. Tarjan present a method for grouping the nodes in a red-black tree so that persistent updates require only O(1) space. The purely functional deques and finger trees designed by Tarjan, Kaplan, and Mihaescu all use a very similar grouping technique to allow O(1) updates at both ends. AVL-trees for localized search, by Athanasios K. Tsakalidis, works similarly.
- Faster pairing heaps or better bounds for pairing heaps: Since Okasaki's book was published, several new analyses of imperative pairing heaps have appeared, including Pairing heaps with O(log log n) decrease cost, by Amr Elmasry, and Towards a Final Analysis of Pairing Heaps, by Seth Pettie. It may be possible to apply some of this work to Okasaki's lazy pairing heaps.
- Deterministic biased finger trees: In Biased Skip Lists, by Amitabha Bagchi, Adam L. Buchsbaum, and Michael T. Goodrich, a design is presented for deterministic biased skip lists. Through the skip list/tree transformation mentioned above, it may be possible to make deterministic biased search trees. The finger biased skip lists described by John Iacono and Özgür Özkan in Mergeable Dictionaries might then be possible on biased skip trees. A biased finger tree is suggested by Demaine et al. in their paper on purely functional tries (see above) as a way to reduce the time and space bounds on finger update in tries.
- The String B-Tree: A New Data Structure for String Search in External Memory and its Applications, by Paolo Ferragina and Roberto Grossi, is a well-studied data structure combining the benefits of tries and B-trees.
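Since the zipper comes up several times above, here is a minimal Haskell sketch of Huet's idea for binary trees (my own toy version, not the full machinery used in finger trees): a purely functional cursor that supports O(1) movement and O(1) local edits by remembering the path back to the root.

```haskell
-- A binary tree and Huet's zipper over it.
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Each crumb records what was passed on the way down.
data Crumb a
  = WentLeft  a (Tree a)  -- label and the right sibling we skipped
  | WentRight a (Tree a)  -- label and the left sibling we skipped

type Zipper a = (Tree a, [Crumb a])

goLeft, goRight, goUp :: Zipper a -> Maybe (Zipper a)
goLeft  (Node l x r, bs) = Just (l, WentLeft  x r : bs)
goLeft  (Leaf, _)        = Nothing
goRight (Node l x r, bs) = Just (r, WentRight x l : bs)
goRight (Leaf, _)        = Nothing
goUp (t, WentLeft  x r : bs) = Just (Node t x r, bs)
goUp (t, WentRight x l : bs) = Just (Node l x t, bs)
goUp (_, [])                 = Nothing

-- Edit the subtree under the cursor in O(1), sharing everything else.
modify :: (Tree a -> Tree a) -> Zipper a -> Zipper a
modify f (t, bs) = (f t, bs)
```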
{ "source": [ "https://cstheory.stackexchange.com/questions/1539", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/326/" ] }
1,643
The Complexity Zoo points out in the entry on EXP that if L = P then PSPACE = EXP. Since NPSPACE = PSPACE by Savitch, as far as I can tell the underlying padding argument extends to show that $$(\text{NL} = \text{P}) \Rightarrow (\text{PSPACE} = \text{EXP}).$$ We also know that L $\subseteq$ NL $\subseteq$ NC $\subseteq$ P via Ruzzo's resource-bounded alternating hierarchy. If NC = P, does it follow that PSPACE = EXP?

A different interpretation of the question, in the spirit of Richard Lipton: is it more likely that some problems in P cannot be parallelized, than that no exponential-time procedure requires more than polynomial space? I would also be interested in other "surprising" consequences of NC = P (the more unlikely the better).

Edit: Ryan's answer leads to a further question: what is the weakest hypothesis that is known to guarantee PSPACE = EXP?

- W. Savitch. Relationships between nondeterministic and deterministic tape complexities, Journal of Computer and System Sciences 4(2):177-192, 1970.
- W. L. Ruzzo. On uniform circuit complexity, Journal of Computer and System Sciences 22(3):365-383, 1981.

Edit (2014): updated old Zoo link and added links for all other classes.
Yes. $NC$ can be seen as the class of languages recognized by alternating Turing machines that use $O(\log n)$ space and $(\log n)^{O(1)}$ time. (This was first proved by Ruzzo.) $P$ is the class where alternating Turing machines use $O(\log n)$ space but can take up to $n^{O(1)}$ time. For brevity let's call these classes $ATISP[(\log n)^{O(1)},\log n] = NC$ and $ASPACE[O(\log n)] = P$. Suppose the two classes are equal. Replacing the $n$ with $2^n$ in the above (i.e., applying standard translation lemmas), one obtains $TIME[2^{O(n)}] = ASPACE[O(n)] = ATISP[n^{O(1)}, n] \subseteq ATIME[n^{O(1)}] = PSPACE$. If $TIME[2^{O(n)}] \subseteq PSPACE$ then $EXP = PSPACE$ as well, since there are $EXP$-complete languages in $TIME[2^{O(n)}]$. Edit: Although the above answer is perhaps more educational, here's a simpler argument: $EXP = PSPACE$ already follows from "$P$ is contained in polylog space" and standard translation. Note "$P$ is contained in polylog space" is a much weaker hypothesis than $NC = P$. More details: Since $NC$ circuit families have depth $(\log n)^c$ for some constant, every such circuit family can be evaluated in $O((\log n)^c)$ space. Hence $NC \subseteq \bigcup_{c > 0} SPACE[(\log n)^c]$. So $P = NC$ implies $P \subseteq \bigcup_{c > 0} SPACE[(\log n)^c]$. Applying translation (replacing $n$ with $2^n$) implies $TIME[2^{O(n)}] \subseteq PSPACE$. The existence of an $EXP$-complete language in $TIME[2^{O(n)}]$ finishes the argument. Update: Addressing Andreas' additional question, I believe it should be possible to prove something like: $EXP=PSPACE$ iff for all $c$, every polynomially sparse language in $n^{O(\log^c n)}$ time is solvable in polylog space. (Being polynomially sparse means that there are at most $poly(n)$ strings of length $n$ in the language, for all $n$.) If true, the proof would probably go along the lines of Hartmanis, Immerman, and Sewelson's proof that $NE = E$ iff every polynomially sparse language in $NP$ is contained in $P$. (Note, $n^{O(\log^c n)}$ time in polylog space is still enough to imply $PSPACE=EXP$.)
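To spell out the translation step in the simpler argument (a sketch of the standard padding argument; the particular padded language below is my choice of illustration): suppose $P \subseteq \bigcup_{c>0} SPACE[(\log n)^c]$, and let $L \in TIME[2^{O(n)}]$ be decidable in time $2^{kn}$. Consider the padded language $$\mathrm{pad}(L) = \{\, x\,\#\,1^{2^{k|x|}} \mid x \in L \,\}.$$ Deciding $\mathrm{pad}(L)$ takes time polynomial in the padded length $N$, so $\mathrm{pad}(L) \in P \subseteq SPACE[(\log N)^c]$ for some $c$. Since $(\log N)^c = O(|x|^c)$, a machine can decide whether $x \in L$ in space polynomial in $|x|$ by simulating the padded computation and generating the padding on the fly. Hence $TIME[2^{O(n)}] \subseteq PSPACE$, and $EXP = PSPACE$ follows via an $EXP$-complete language in $TIME[2^{O(n)}]$.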
{ "source": [ "https://cstheory.stackexchange.com/questions/1643", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/109/" ] }
1,775
It is clear that any problem that is decidable in deterministic logspace ($L$) runs in at most polynomial time ($P$). There is a wealth of complexity classes between $L$ and $P$. Examples include $NL$, $LogCFL$, $NC^i$, $SAC^i$, $AC^i$, $SC^i$. It is widely believed that $L \neq P$. In one of my blog posts I mentioned two approaches (along with the corresponding conjectures) towards proving $L \neq P$. Both these approaches are based on branching programs and are 20 years apart! Are there other approaches and/or conjectures towards separating $L$ from $P$, or separating any intermediate classes between $L$ and $P$?
Circuit depth lower bounds (equivalently, formula size lower bounds) are probably the most natural approach: A Super-$\log^2(n)$ depth lower bound for a problem in $\mathsf P$ would separate $\mathsf P$ from $\mathsf L$, and the Karchmer-Wigderson communication complexity technique may be the natural one for that.
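For reference, the connection being alluded to (the standard Karchmer-Wigderson theorem, stated from memory): in the game $KW_f$, Alice receives $x$ with $f(x)=1$, Bob receives $y$ with $f(y)=0$, and they must agree on an index $i$ with $x_i \neq y_i$. Karchmer and Wigderson showed that the minimum formula depth of $f$ equals the deterministic communication complexity of this game, $$\mathrm{depth}(f) = CC(KW_f),$$ so a super-$\log^2 n$ depth lower bound for a function in $\mathsf P$ amounts to an $\omega(\log^2 n)$ communication lower bound for its game.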
{ "source": [ "https://cstheory.stackexchange.com/questions/1775", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/344/" ] }
1,794
In the 1980s, Razborov famously showed that there are explicit monotone Boolean functions (such as the CLIQUE function) that require exponentially many AND and OR gates to compute. However, the basis {AND, OR} over the Boolean domain {0,1} is just one example of an interesting gate set that falls short of being universal. This leads to my question:

Is there any other set of gates, interestingly different from the monotone gates, for which exponential lower bounds on circuit size are known (with no depth or other restrictions on the circuit)? If not, is there any other set of gates that's a plausible candidate for such lower bounds---bounds that wouldn't necessarily require breaking through the Natural Proofs barrier, as Razborov's monotone-circuits result didn't?

If such a gate set exists, then certainly it will be over a $k$-ary alphabet for $k \geq 3$. The reason is that, over a binary alphabet, the (1) monotone gates ({AND, OR}), (2) linear gates ({NOT, XOR}), and (3) universal gates ({AND, OR, NOT}) basically exhaust the interesting possibilities, as follows from Post's classification theorem. (Note that I assume that constants---0 and 1 in the binary case---are always available for free.) With the linear gates, every Boolean function $f \colon \{0,1\}^n \rightarrow \{0,1\}$ that's computable at all is computable by a linear-size circuit; with a universal set, of course we're up against Natural Proofs and the other terrifying barriers.

On the other hand, if we consider gate sets over a 3- or 4-symbol alphabet (for example), then a wider set of possibilities opens up---and at least to my knowledge, those possibilities have never been fully mapped out from the standpoint of complexity theory (please correct me if I'm wrong). I know that the possible gate sets are studied extensively under the name of "clones" in universal algebra; I wish I were more conversant with that literature so that I knew what, if anything, the results from that area mean for circuit complexity.

In any case, it doesn't seem out of the question that there are other dramatic circuit lower bounds ripe for the proving, if we simply expand the class of gate sets over finite alphabets that we're willing to consider. If I'm wrong, please tell me why!
(Moved from comments as Suresh suggested. Note some errors in the comment are fixed here.) Thanks to Scott for a great question. Scott seems to suggest that the reason for the difficulty of lower bounds may be the restricted language of operations in the Boolean case. Shannon's counting argument that shows most circuits must be large relies on the gap between countable expressive power and uncountably many circuits. This gap seems to go away when the alphabet has at least 3 symbols. For alphabet size 2 (the Boolean case), the lattice of clones is countably infinite, and is called Post's lattice . Post's lattice also makes clear why there are only a few interesting bases of operations for the Boolean case. For alphabet size 3 or greater the lattice of clones is uncountable. Further, the lattice does not satisfy any nontrivial lattice identity, so it seems impossible to provide a complete description of the lattice. For alphabet size 4 or greater the lattice of clones actually contains every finite lattice as a sublattice. So there are infinitely many possibly interesting bases of operations to consider when the alphabet has 3 or more symbols. Bulatov, Andrei A., Conditions satisfied by clone lattices , Algebra Universalis 46 237–241, 2001. doi: 10.1007/PL00000340 Scott asked further: does the lattice of clones remain uncountable if we assume constants are available for free? The answer is that it does, see for instance Gradimir Vojvodić, Jovanka Pantović, and Ratko Tošić, The number of clones containing an unary function , NSJOM 27 83–87, 1997. ( PDF ) J. Pantović, R. Tošić, and G. Vojvodić, The cardinality of functionally complete algebras on a three element set , Algebra Universalis 38 136–140, 1997. doi: 10.1007/s000120050042 although apparently this was published earlier: Ágoston, I., Demetrovics, J., and Hannák, L. On the number of clones containing all constants , Coll. Math. Soc. János Bolyai 43 21–25, 1983. A nice specific statement is from: A. Bulatov, A. Krokhin, K. Safin, and E. Sukhanov, On the structure of clone lattices , In: "General Algebra and Discrete Mathematics", editors: K. Denecke and O. Lueders, 27–34. Heldermann Verlag, Berlin, 1995. ( PS ) Corollary 3 (attributed to Ágoston et al. as above): Let $k \ge 3$. Then the number of clones in $\mathcal{L}_k$ containing all constants is $2^{\aleph_0}$. To wrap up, I am not aware of any work on using non-Boolean clones for circuit lower bounds. This seems worth investigating in more depth. Given the relatively little that is known about the lattice of clones, there may be interesting bases of operations waiting to be discovered. More links between clone theory and computer science would probably also be of great interest to mathematicians working in universal algebra. A previous example of this kind of interaction came about when Peter Jeavons showed that algebras could be associated with constraint languages, in a way that allows tractability results to be translated into properties of the algebra. Andrei Bulatov used this to prove the dichotomy for CSPs with domain size 3. Going the other way, there has been a revival in interest in tame congruence theory as a result of the computer science application. I wonder what would follow from a link between clone theory and non-Boolean circuit complexity.
{ "source": [ "https://cstheory.stackexchange.com/questions/1794", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1575/" ] }
1,825
Is the language $\{a^{i}b^{j}c^{k} \mid i \neq j,\ i \neq k,\ j \neq k\}$ context-free or not? I realized that I have encountered almost all variants of this question with different conditions about the relationship between $i$, $j$, and $k$, but not this one. My guess is that it is not context-free, but do you have a proof?
Ogden's lemma should work: For a given $p$, choose $a^i b^p c^k$ and mark all the $b$'s (and nothing else). $i$ and $k$ are chosen such that, for every choice of how many $b$'s are actually pumped, there is one pumping exponent such that the number of $b$'s is equal to $i$ and one where it is equal to $k$. That is, $i$ and $k$ have to be from the set $\bigcap_{1 \leq n \leq p} \lbrace p-n + m \cdot n \mid m \in \mathbb{N}_0\rbrace$. I am quite sure but too lazy to formally prove that this set is infinite.
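One way to finish that last step: for every $t \geq 0$, the number $p + t \cdot \mathrm{lcm}(1,\dots,p)$ lies in the intersection, since for each $1 \leq n \leq p$ one can take $m = 1 + t \cdot \mathrm{lcm}(1,\dots,p)/n \in \mathbb{N}_0$, which gives $$p - n + m \cdot n = p + t \cdot \mathrm{lcm}(1,\dots,p).$$ Hence the set is infinite, and there are infinitely many valid choices for $i$ and $k$.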
{ "source": [ "https://cstheory.stackexchange.com/questions/1825", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1454/" ] }
1,854
I am thinking of the following problem: I want to find a regular expression that matches a particular set of strings (for example, valid email addresses) and doesn't match others (invalid email addresses). Suppose by regular expression we mean some well-defined finite state machine; I am not familiar with the exact terminology, but let's agree on some class of allowed expressions. Instead of manually crafting the expression, I want to give it a set of positive and a set of negative examples. It should then come up with an expression that matches the positive ones, rejects the negative ones, and is minimal in some well-defined sense (number of states in the automaton?). My questions are:

- Has this problem been considered? How can it be defined in some more concrete way, and can it be solved efficiently? Can we solve it in polynomial time? Is it NP-complete? Can we approximate it somehow? For what classes of expressions would it work? I would appreciate any pointer to textbooks, articles or such that discuss this topic.
- Is this related in any way to Kolmogorov complexity?
- Is this related in any way to learning? If the regular expression is consistent with my examples, by virtue of it being minimal, can we say something about its generalization power on yet unseen examples? What criterion for minimality would be more suitable for this? Which one would be more efficient? Does this have any connections with machine learning? Again any pointers would be helpful...

Sorry for the messy question... Point me in the right direction to figure this out. Thanks!
Yes, it is NP-Hard. Pitt and Warmuth showed that finding the smallest DFA consistent with a given sample cannot be approximated to within $OPT^k$ for any constant $k$ , unless $P = NP$ . Regarding the learning question: Kearns and Valiant proved that you can encode RSA into a DFA. So, even if the labeled examples come from the uniform distribution, being able to generalize to future examples (also even coming from the uniform distribution) would break RSA. Hence, we think that in the worst case, having labeled examples does not help with learning a DFA (in the PAC model). This is one of the classic cryptographic hardness results for learning. Both of these issues are intertwined due to what we call the Occam's Razor Theorem . It basically states that if we have a procedure for finding the smallest hypothesis from a given class that's consistent with a sample labeled by a hypothesis from the same class, then we can PAC learn that class. So, given the RSA hardness result, we would expect that finding the smallest consistent DFA would be hard in general! To add a positive learning result, Angluin showed that you can learn a DFA if you get to make up your own examples, but it requires the additional power being able to ask "is my current hypothesis correct?" This was also a seminal paper in learning. To answer your other question, this is all indeed related to Kolmogorov complexity, as the learning problem becomes easier when the canonical representation of the target DFA has low complexity.
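For reference, a standard form of the Occam bound being invoked here (as I recall it; see Blumer, Ehrenfeucht, Haussler, and Warmuth's "Occam's razor" for the precise statement): if a learner always outputs a hypothesis from a finite class $H$ that is consistent with the sample, then $$m \geq \frac{1}{\varepsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)$$ i.i.d. examples suffice for the output to have error at most $\varepsilon$ with probability at least $1-\delta$. Applied to DFAs, $\ln|H|$ scales with the size of the smallest consistent machine, which is why efficiently finding a small consistent DFA would imply PAC learning, and why the hardness results above block this route.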
{ "source": [ "https://cstheory.stackexchange.com/questions/1854", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1609/" ] }
1,893
Updated below

We all know the critical importance of peer review. It is the main form of quality control and feedback on research. However, to an early-stage researcher (like me), it can sometimes be a confusing system/process. Accordingly, there are several treatises on the scientific refereeing process that give guidance. Two (very different) examples from computer science -- this 1994 article by Parberry and a more recent one by Cormode -- offer great advice (though the latter might be a shade mischievous).

Here, I'd like to solicit broader advice from the more experienced members of this community about the review process, with particular regard to the peculiarities of theoretical computer science.

- What are the main criteria for determining the significance of a paper's results? How do I judge whether a paper should be accepted to the conference/journal? Is it important to verify correctness?
- What are the main elements of a referee report, and which parts are most important? Is it always necessary to give a recommendation of (non)acceptance? What goes in the report and what goes solely to the editor?
- How does assessment for conferences differ from that in journals? How do reports for conferences differ from those in journals? (How on earth do I rate my "confidence" in my recommendation?) Should the journal version be significantly different from the conference paper?
- What if I don't understand the paper? ...the proof? (Is it my fault or theirs?)
- What about typographical/grammatical mistakes? What if there are a lot of them?
- How much time should I spend on a report? How many reports a year am I expected to write? When is it acceptable to refuse a request to referee?

Of course, any other relevant questions and answers on this topic are encouraged, since this is CW. This question is inspired by (stolen from) a similar post at MathOverflow.

Update 15/02/2011: I am still very interested in getting more input on this question, especially with regard to reviewing conference papers and program committee membership. (These two roles are themselves different beasts, and both very unlike being a referee for a journal article, IMO.) Granted, program committee membership is rarer than refereeing or reviewing (and it hasn't been my privilege yet), but is a responsibility that every researcher in theoretical computer science must take on eventually.

- Time. How much time am I expected to spend as a committee member or conference reviewer? Given the probability that I could get ten or perhaps many more to handle in the space of a few weeks, how do I avoid running out of time? What are the most important things to spend time on?
- Confidence. What if the paper is too far from my area of expertise? What factors should go into nominating/asking someone else to review a submission? If it is not too far from my area of expertise and I elect to review it, when is it permissible to give a confidence rating of 1?
- Criteria. There are critical differences between journals and conferences. Some very important papers are not published in journals. Some very important papers did not previously appear in conferences. What are the most significant distinctions in criteria on which to assess papers in these settings?
- Recommendations. Inherently, there are fewer recommendations that can be offered to the authors of a conference paper, primarily due to space and time constraints. Also, there is usually only one round of review. Another consideration is that my report becomes public to the entire program committee.
  What is the scope of suggestions/directives that I can offer?

As before, if you think I've missed out on asking any particular questions, do let me know, or edit directly. This is CW, after all. These new thoughts are partly motivated by reading a paper that Suresh mentioned on his blog.
To the best of your knowledge, does the paper make a significant, well-presented, and correct contribution to the state of the art? If the paper fails any of the three criteria, it's fair to reject it for that reason alone, regardless of the other two.

Here's what I think a report should contain. Everything should be visible to the author, except possibly for serious accusations of misconduct.

a. A quick summary of the paper, to help the editor judge the quality of the results, and to help convince both the author and the editor that you actually read and understood the paper. Place the result in its larger context. Include a history of prior versions, even if the authors include it in the submission. Be respectful, but brutally honest.

b. A discussion of the strengths and weaknesses of the paper, in terms of correctness, novelty, clarity, importance, generality, potential impact, elegance, technical depth, robustness, etc. If you suspect unethical behavior (plagiarism, parallel submission, cooked data), describe your suspicions. Be respectful, but brutally honest.

c. A recommendation to the editor for further action: accept, accept with minor revision, ask for a second round of reviewing, or reject outright. Keep in mind that you are making a recommendation, not a decision; if you can't make up your mind, just say so. Be respectful, but brutally honest.

d. More detailed feedback to the author: more detailed justification for your recommendation, requests for clarification in the final version, missing references, bugs in the proofs, simplifications, generalizations, typos, etc. Be respectful, but brutally honest.

Conference reports should be shorter; program committees have hundreds of papers to consider at once. Whether there should be a difference between conference and journal papers is up to the journal (and indirectly, up to the community). Most theoretical computer science journals do not insist on a significant difference; it is quite common for the conference and journal versions of a theory paper to be essentially identical. When in doubt, ask the editor!

If you still don't understand the paper after making a good-faith effort, it's the author's fault, or possibly the editor's, but certainly not yours. The author's primary responsibility is to effectively communicate their result to their audience, and a good editor will send you a paper to referee only if they think you're a good representative of the paper's intended audience. But you do have to make a good-faith effort; do not expect to understand everything (anything?) immediately on your first reading.

If there are a lot of errors, don't even read the paper; just recommend rejection on the grounds that the paper is not professionally written. Otherwise, if you really want to be thorough, include a representative list of grammar, spelling, and punctuation mistakes, but don't knock yourself out finding every last bug. Be respectful, but brutally honest.

Expect to spend about an hour per page, mostly on internalizing the paper's results and techniques. Be pleasantly surprised when it doesn't actually take that long. (If it takes significantly less time than that, either the paper is exceedingly elegant and well-written, you know the area extremely well, or the paper is technically shallow. Don't confuse these three possibilities.)

You should write at least as many referee reports as other people write for you.
If this takes more time than writing your own papers, you're not spending enough time on your own papers.
{ "source": [ "https://cstheory.stackexchange.com/questions/1893", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/233/" ] }
1,920
Please list examples where a theorem from mathematics which was not normally considered to apply in computer science was first used to prove a result in computer science. The best examples are those where the connection was not obvious, but once it was discovered, it is clearly the "right way" to do it. This is the opposite direction of the question Applications of TCS to classical mathematics? For example, see "Green's Theorem and Isolation in Planar Graphs", where an isolation theorem (which was already known using a technical proof) is re-proven using Green's Theorem from multivariate calculus. What other examples are there?
Maurice Herlihy, Michael Saks, Nir Shavit, and Fotios Zaharoglou received the 2004 Gödel Prize for their use of algebraic topology in the study of some problems in distributed computing.
{ "source": [ "https://cstheory.stackexchange.com/questions/1920", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/59/" ] }
1,923
Mathematicians sometimes worry about the Axiom of Choice (AC) and the Axiom of Determinacy (AD).

Axiom of Choice: Given any collection ${\cal C}$ of nonempty sets, there is a function $f$ that, given a set $S$ in ${\cal C}$, returns a member of $S$.

Axiom of Determinacy: Let $S$ be a set of infinitely long bit strings. Alice and Bob play a game where Alice picks a 1st bit $b_1$, Bob picks a 2nd bit $b_2$, and so on, until an infinite string $x = b_1 b_2 \cdots$ is constructed. Alice wins the game if $x \in S$, Bob wins the game if $x \not \in S$. The assumption is that for every $S$, there is a winning strategy for one of the players. (For example, if $S$ consists only of the all-ones string, Bob can win in finitely many moves.)

It is known that these two axioms are inconsistent with each other. (Think about it, or go here.) Other mathematicians pay little or no attention to the use of these axioms in a proof. They would seem to be almost irrelevant to theoretical computer science, since we believe that we work mostly with finite objects. However, because TCS defines computational decision problems as infinite bit strings, and we measure (for example) the time complexity of an algorithm as an asymptotic function over the naturals, there is always a possibility that the usage of one of these axioms could creep into some proofs.

What is the most striking example in TCS that you know where one of these axioms is required? (Do you know any examples?)

Just to foreshadow a bit, note that a diagonalization argument (over the set of all Turing machines, say) is not an application of the Axiom of Choice. Although the language that a Turing machine defines is an infinite bit string, each Turing machine has a finite description, so we really don't require a choice function for infinitely many infinite sets here. (I put a lot of tags because I have no idea where the examples will come from.)
Any arithmetical statement provable in ZFC is provable in ZF, and hence does not "need" the axiom of choice. By an "arithmetical" statement I mean a statement in the first-order language of arithmetic, meaning that it can be stated using only quantifiers over natural numbers ("for all natural numbers x" or "there exists a natural number x"), without quantifying over sets of natural numbers. At first glance it might seem very restrictive to forbid quantification over sets of integers; however, finite sets of integers can be "encoded" using a single integer, so it's O.K. to quantify over finite sets of integers. Virtually any statement of interest in TCS can, with perhaps a bit of finagling, be phrased as an arithmetical statement, and so doesn't need the axiom of choice. For example, $P\ne NP$ looks at first glance like an assertion about infinite sets of integers, but can be rephrased as, "for every polynomial-time Turing machine, there exists a SAT instance that it gets wrong," which is an arithmetical statement. Thus my answer to Ryan's question is, "There aren't any that I know of." But wait, you may say, what about arithmetical statements whose proof requires something like Koenig's lemma or Kruskal's tree theorem? Don't these require a weak form of the axiom of choice? The answer is that it depends on exactly how you state the result in question. For example, if you state the graph minor theorem in the form, "given any infinite set of unlabeled graphs, there must exist two of them such that one is a minor of the other," then some amount of choice is needed to march through your infinite set of data, picking out vertices, subgraphs, etc. [EDIT: I made a mistake here. As Emil Jeřábek explains , the graph minor theorem—or at least the most natural statement of it in the absence of AC—is provable in ZF. But modulo this mistake, what I say below is still essentially correct. ] However, if instead you write down a particular encoding by natural numbers of the minor relation on labeled finite graphs, and phrase the graph minor theorem as a statement about this particular partial order, then the statement becomes arithmetical and doesn't require AC in the proof. Most people feel that the "combinatorial essence" of the graph minor theorem is already captured by the version that fixes a particular encoding, and that the need to invoke AC to label everything, in the event that you're presented with the general set-theoretic version of the problem, is sort of an irrelevant artifact of a decision to use set theory rather than arithmetic as one's logical foundation. If you feel the same way, then the graph minor theorem doesn't require AC. (See also this post by Ali Enayat to the Foundations of Mathematics mailing list, written in response to a similar question that I once had.) The example of the chromatic number of the plane is similarly a matter of interpretation. There are various questions you can ask that turn out to be equivalent if you assume AC, but which are distinct questions if you don't assume AC. From a TCS point of view, the combinatorial heart of the question is the colorability of finite subgraphs of the plane, and the fact that you can then (if you want) use a compactness argument (this is where AC comes in) to conclude something about the chromatic number of the whole plane is amusing, but of somewhat tangential interest. So I don't think this is a really good example. 
I think ultimately you may have more luck asking whether there are any TCS questions that require large cardinal axioms for their resolution (rather than AC). Work of Harvey Friedman has shown that certain finitary statements in graph theory can require large cardinal axioms (or at least the 1-consistency of such axioms). Friedman's examples so far are slightly artificial, but I wouldn't be surprised to see similar examples cropping up "naturally" in TCS within our lifetimes.
{ "source": [ "https://cstheory.stackexchange.com/questions/1923", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/225/" ] }
1,948
I'm interested in an explicit Boolean function $f \colon \{0,1\}^n \rightarrow \{0,1\}$ with the following property: if $f$ is constant on some affine subspace of $\{0,1\}^n$, then the dimension of this subspace is $o(n)$. It is not difficult to show that a symmetric function does not satisfy this property, by considering the subspace $A=\{x \in \{0,1\}^n \mid x_1 \oplus x_2=1, x_3 \oplus x_4=1, \dots, x_{n-1} \oplus x_n=1\}$. Any $x \in A$ has exactly $n/2$ ones, and hence $f$ is constant on the subspace $A$ of dimension $n/2$. Cross-post: https://mathoverflow.net/questions/41129/a-boolean-function-that-is-not-constant-on-affine-subspaces-of-large-enough-dimen
The objects you are searching for are called seedless affine dispersers with one output bit. More generally, a seedless disperser with one output bit for a family $\mathcal{F}$ of subsets of $\{0,1\}^n$ is a function $f : \{0,1\}^n \to \{0,1\}$ such that on any subset $S \in \mathcal{F}$, the function $f$ is not constant. Here, you are interested in $\mathcal{F}$ being the family of affine subspaces. Ben-Sasson and Kopparty in "Affine Dispersers from Subspace Polynomials" explicitly construct seedless affine dispersers for subspaces of dimension at least $6n^{4/5}$. The full details of the disperser are a bit too complicated to describe here. A simpler case also discussed in the paper is when we want an affine disperser for subspaces of dimension $2n/5+10$. Then, their construction views ${\mathbb{F}}_2^n$ as ${\mathbb{F}}_{2^n}$ and specifies the disperser to be $f(x) = Tr(x^7)$, where $Tr: {\mathbb{F}}_{2^n} \to {\mathbb{F}}_2$ denotes the trace map: $Tr(x) = \sum_{i=0}^{n-1} x^{2^i}$. A key property of the trace map is that $Tr(x+y) = Tr(x) + Tr(y)$.
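For completeness, this linearity follows from additivity of squaring in characteristic 2 (i.e., $(a+b)^2 = a^2 + b^2$ over ${\mathbb{F}}_{2^n}$), applied repeatedly: $$Tr(x+y) = \sum_{i=0}^{n-1} (x+y)^{2^i} = \sum_{i=0}^{n-1} \left(x^{2^i} + y^{2^i}\right) = Tr(x) + Tr(y).$$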
{ "source": [ "https://cstheory.stackexchange.com/questions/1948", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1650/" ] }
1,962
Is the following problem NP-hard? Given a board configuration for $n\times n$ international draughts, find a single legal move.

The corresponding problem for $n\times n$ American checkers (aka English draughts) is trivially solvable in polynomial time. There are three major differences between these two games.

The first and most significant difference is the "flying king" rule. In checkers, a king may jump over an adjacent opponent's piece into an empty square two steps away in any diagonal direction. In international draughts, a king may jump over an opponent's piece an arbitrary distance away by moving an arbitrary distance along a diagonal.

As in checkers, the same piece can be used to capture a series of pieces in a single turn. However, unlike checkers, captured pieces in international draughts are not removed until the entire sequence is over. The capturing piece may jump over or land in the same empty square multiple times, but it may not jump over an opponent's piece more than once.

Finally, both checkers and international draughts have a forced capture rule: If you can capture an opponent's piece, you must. However, the two rule sets disagree when there are several options for a multiple capture. In checkers, you may choose any maximal sequence of captures; in other words, you can choose any capture sequence that ends when the capturing piece cannot capture any more. In international draughts, you must choose the longest sequence of captures. Thus, my problem is equivalent to the following: Given a board configuration for $n\times n$ international draughts, find a move that captures the maximum number of opposing pieces.

It would suffice to prove that the following problem is NP-complete. (It's obviously in NP.) Given a board configuration for $n\times n$ international draughts involving only kings, can (and therefore must) one player capture all her opponent's pieces in a single turn?

The corresponding checkers problem can be answered in polynomial time; this is an entertaining homework exercise. The problem looks more similar to Demaine, Demaine, and Eppstein's analysis of Phutball endgames; a solution to the entertaining homework exercise appears at the end of their paper. A solution also appears in the FOCS 1978 paper by Fraenkel et al. that proves that playing checkers optimally is PSPACE-hard; see also Robson's 1984 proof that checkers is actually EXPTIME-complete.
OK, here's the reduction. Turns out you don't need planarity after all. Also, for "find a legal move", I take the decision question as "is move X legal?". First, let's work instead with a game where pieces move orthogonally instead of diagonally. This game is equivalent (just look at the draughts board rotated 45 degrees) except for edge properties, which we will not use. We use two gadgets: merge / split and crossover. See http://www.hearn.to/draughts.pdf . We assume there is a single White king on the board to move. (No other piece will be able to capture any significant number of pieces.) It will move through the indicated corridors, capturing black pieces along the way. First, merge: if the king enters on any of the N paths A (via capturing a black piece, not shown), it can exit at B. Likewise, if we reverse the gadget and it enters at B, capturing the shown piece, it can exit along any path A (again, capturing an external black piece). This is a single-use gadget (because the exit black piece can be captured only once). Second, crossover. If the king enters via A (C), it can exit at B (D). It can't stop in the middle and change routes, because that would be a non-capturing move segment. Now, given a directed graph, construct a corresponding game configuration as follows. For each vertex, construct a merge which feeds into a split. Route the split outputs to the merge inputs of the vertex gadgets (merge + split) corresponding to the vertices the exiting edges connect to, using crossovers as necessary. Start the king on an extra input to any vertex (with a black piece to capture to let it enter the vertex). Finally, equalize all of the "edge lengths" by adding extra black pieces along the output / input pathways as needed. If there are V vertices, and k black pieces along each edge, then the king can capture 2V + kV + 1 pieces if and only if there is a Hamiltonian circuit of the corresponding graph. If the king has an alternative move available, capturing a simple chain of 2V + kV pieces, then determining whether that alternative move is legal is NP-complete.
{ "source": [ "https://cstheory.stackexchange.com/questions/1962", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/111/" ] }
1,982
I thought I would share this question as it might be interesting for other users here. Assume that a function which is in a uniform class (like $NP$) is also in a small nonuniform class (like $AC^0/poly$, i.e. nonuniform $AC^0$); does this imply that the function is contained in a smaller uniform class (like $P$)?

- If the answer to this question is positive, what is the smallest uniform complexity class that contains $NP \cap AC^0/poly$?
- If negative, can we find an interesting natural counterexample? Is $AC^0/poly \cap NP$ contained in $P$?

Note: A friend has already partially answered my question offline; I will add his answer if he doesn't add it himself. The question is my second attempt to formalize the following informal question: Can non-uniformity help us in computing natural uniform problems? Related: Is there a candidate for a natural problem in $P/poly - P$?
Here's a simplification of Ryan's answer. Suppose that $\Lambda \in NE \setminus E$. Define the language $L = \{x : |x| \in \Lambda\}$. The assumption $\Lambda \in NE \setminus E$ translates to $L \in NP \setminus P$. Also, trivially $L \in AC^0/poly$.
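Spelling out the translation (a standard argument; the details below are my own gloss): $L \in NP$ because, on input $x$ of length $n$, a machine can run the $NE$ machine for $\Lambda$ on the binary representation of $n$, which takes nondeterministic time $2^{O(\log n)} = n^{O(1)}$. Conversely, if $L$ were decidable in deterministic time $n^k$, then given $n$ in binary (of length $\ell$) we could decide $n \in \Lambda$ by running that machine on $1^n$, in time $n^k = 2^{O(\ell)}$, which would put $\Lambda \in E$. Finally, $L \in AC^0/poly$ because membership depends only on the input length: the advice for length $n$ is the single bit indicating whether $n \in \Lambda$, and the circuit just outputs it.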
{ "source": [ "https://cstheory.stackexchange.com/questions/1982", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/186/" ] }
1,988
I have been developing a SAT algorithm for a while, and have reached a point where I'd like to share it. I don't know many people in computer science, and I'm not sure exactly where to turn. I'm wondering what resources are available for someone with an algorithm who is considering publishing. I also need help analyzing the runtime and correctness of my algorithm. My major problem is in analyzing the runtime. I need help with a detailed analysis of this. I'm fairly certain that the algorithm is correct, but it would be helpful if someone would verify this as well. So is there anyone who would be willing to analyze my algorithm? Additionally, what resources are available for a task like this?
If your SAT algorithm is meant to be practical, then you should run the SAT competition benchmarks on it. The SAT solving community is going to take your work much more seriously if you can show that your approach is competitive with existing solvers. Your solver doesn't have to be faster than every solver, or solve more instances, but it should be a serious competitor. You don't need a very fast or powerful machine to run the benchmarks; you can simply compare runtime against one of the free SAT solvers like MiniSAT or PicoSAT. These solvers will also allow you to see what the answers should look like. If you are working on a practical solver that uses new techniques, and your approach is not yet competitive, I would still suggest trying these benchmarks. They would help you to understand the kinds of problems that you should be aiming to solve, and the kind of performance you should be aiming for. You might also want to read some of the key chapters of the Handbook of Satisfiability, or the recent survey

Knot Pipatsrisawat and Adnan Darwiche, On Modern Clause-Learning Satisfiability Solvers, Journal of Automated Reasoning 44 277–301, 2010. (PDF)

to see the kinds of arguments that support the major solvers. If you have new ideas that are not yet optimized to perform as well as the top solvers, you would need to explain the potential advantages of your approach to someone who knows the long sequence of theoretical reasoning that has led to the current set of "best practice" design decisions. If your contribution is purely theoretical, then you need to be aware of the many papers in this area, and explain in your paper why your approach is better in at least some way. Have a look at recent work by, for instance, Amin Coja-Oghlan or Alan Frieze to get a feel for the state of the art and for useful pointers to important papers.
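To make the comparison concrete: the benchmark instances and solver interfaces use the DIMACS CNF format. For example, the formula $(x_1 \lor \neg x_2) \land (x_2 \lor x_3)$ is encoded as follows (the file name is of course arbitrary):

```
c example.cnf: 3 variables, 2 clauses
p cnf 3 2
1 -2 0
2 3 0
```

Running MiniSAT on such a file (e.g. `minisat example.cnf`) prints SATISFIABLE or UNSATISFIABLE together with runtime statistics, which gives you a baseline to time your own solver against.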
{ "source": [ "https://cstheory.stackexchange.com/questions/1988", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1549/" ] }
2,021
Which would be the consequences of #P = FP? I'm interested in both practical and theoretical consequences. From a practical point of view, I'm particularly interested in consequences on Artificial Intelligence. Pointers to papers or books are more than welcome. Please do not say that #P = FP implies P = NP, I already know that. Also, please do not say "there will be no practical consequences if the algorithm runs in time $\Omega(n^{\alpha})$, where $\alpha$ is the number of electrons in the Universe" : permit me to assume that, if a deterministic polynomial time algorithm for a #P-complete problem exists, its running time will be "clement" ($O(n^2)$, for example).
Here are a few theoretical consequences of the equality FP=#P, although they have nothing to do with artificial intelligence. The assumption FP=#P is equivalent to P=PP, so let me use the latter notation.

If P=PP, then we have P=BQP: quantum polynomial-time computation can be simulated by classical, deterministic polynomial-time computation. This is a direct consequence of BQP⊆PP [ADH97, FR98] (and of an earlier result BQP⊆P^PP [BV97]). To the best of my knowledge, P=BQP is not known to follow from the assumption P=NP. This situation is different from the case of randomized computation (BPP): since BPP⊆NP^NP [Lau83], the equality P=BPP follows from P=NP.

Another consequence of P=PP is that the Blum-Shub-Smale model of computation over the reals with rational constants is equivalent to Turing machines in a certain sense. More precisely, P=PP implies P=BP(P_ℝ^0); that is, if a language L⊆{0,1}* is decidable by a constant-free program over the reals in polynomial time, then L is decidable by a polynomial-time Turing machine. (Here “BP” stands for “Boolean part” and has nothing to do with BPP.) This follows from BP(P_ℝ^0)⊆CH [ABKM09]. See the paper for definitions. An important problem in BP(P_ℝ^0) is the square-root sum problem and friends (e.g. “Given an integer k and a finite set of integer-coordinate points on the plane, is there a spanning tree of total length at most k?”) [Tiw92].

Similarly to the second argument, the problem of computing a specific bit of x^y when positive integers x and y are given in binary will be in P if P=PP.

References

[ABKM09] Eric Allender, Peter Bürgisser, Johan Kjeldgaard-Pedersen and Peter Bro Miltersen. On the complexity of numerical analysis. SIAM Journal on Computing, 38(5):1987–2006, Jan. 2009. http://dx.doi.org/10.1137/070697926

[ADH97] Leonard M. Adleman, Jonathan DeMarrais and Ming-Deh A. Huang. Quantum computability. SIAM Journal on Computing, 26(5):1524–1540, Oct. 1997. http://dx.doi.org/10.1137/S0097539795293639

[BV97] Ethan Bernstein and Umesh Vazirani. Quantum complexity theory. SIAM Journal on Computing, 26(5):1411–1473, Oct. 1997. http://dx.doi.org/10.1137/S0097539796300921

[FR98] Lance Fortnow and John Rogers. Complexity limitations on quantum computation. Journal of Computer and System Sciences, 59(2):240–252, Oct. 1999. http://dx.doi.org/10.1006/jcss.1999.1651

[Lau83] Clemens Lautemann. BPP and the polynomial time hierarchy. Information Processing Letters, 17(4):215–217, Nov. 1983. http://dx.doi.org/10.1016/0020-0190(83)90044-3

[Tiw92] Prasoon Tiwari. A problem that is easier to solve on the unit-cost algebraic RAM. Journal of Complexity, 8(4):393–397, Dec. 1992. http://dx.doi.org/10.1016/0885-064X(92)90003-T
{ "source": [ "https://cstheory.stackexchange.com/questions/2021", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/947/" ] }
2,032
Edit: As Ravi Boppana correctly pointed out in his answer and Scott Aaronson also added another example in his answer, the answer to this question turned out to be “yes” in a way which I had not expected at all. First I thought that they did not answer the question I had wanted to ask, but after some thinking, these constructions answer at least one of the questions I wanted to ask, that is, “Is there any way to prove a conditional result ‘P=NP ⇒ L∈P’ without proving the unconditional result L∈PH?” Thanks, Ravi and Scott!

Is there a decision problem L such that the following conditions are both satisfied?

- L is not known to be in the polynomial hierarchy.
- It is known that P=NP will imply L∈P.

An artificial example is as good as a natural one. Also, although I use the letter “L,” it can be a promise problem instead of a language if it helps.

Background. If we know that a decision problem L is in the polynomial hierarchy, then we know that “P=NP ⇒ L∈P.” The intent of the question is to ask whether the converse holds. If a language L satisfying the above two conditions exists, then it can be thought of as evidence that the converse fails. The question has been motivated by Joe Fitzsimons’s interesting comment to my answer to Walter Bishop’s question “Consequences of #P = FP.”
Since you don't mind an artificial language, how about defining $L$ to be empty if P equals NP and to be the Halting Problem if P doesn't equal NP. Okay, it's a bit of a cheat, but I think you'll need to rephrase the problem to avoid such cheats.
{ "source": [ "https://cstheory.stackexchange.com/questions/2032", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/567/" ] }
2,052
A very specific question, I'm aware, and I doubt it will be answered by anyone that isn't already familiar with the rules of Magic. Cross-posted to Draw3Cards . Here are the comprehensive rules for the game Magic: the Gathering . See this question for a list of all Magic Cards. My question is - is the game Turing Complete? For more details, please see the post at Draw3Cards .
Alex Churchill (@AlexC) has posted a solution that does not require cooperation between the players, but rather models the complete execution of a universal Turing machine with two states and 18 tape symbols. For details, see https://www.toothycat.net/~hologram/Turing/ [ archive ].
{ "source": [ "https://cstheory.stackexchange.com/questions/2052", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/81/" ] }
2,058
This may be a basic question, but I've been reading and trying to understand papers on such subjects as Nash equilibrium computation and linear degeneracy testing and have been unsure of how real numbers are specified as input. E.g., when it's stated that LDT has certain polynomial lower bounds, how are the real numbers specified when they are treated as input?
I disagree with your accepted answer by Kaveh. For linear programming and Nash equilibria, floating point may be acceptable. But floating point numbers and computational geometry mix very badly: the roundoff error invalidates the combinatorial assumptions of the algorithms, frequently causing them to crash. More specifically, a lot of computational geometry algorithms depend on primitive tests that check whether a given value is positive, negative, or zero. If that value is very close to zero and floating point roundoff causes it to have the wrong sign, bad things can happen. Instead, inputs are often assumed to have integer coordinates, and intermediate results are often represented exactly, either as rational numbers with sufficiently high precision to avoid overflow or as algebraic numbers. Floating point approximations to these numbers may be used to speed up the computations, but only in situations where the numbers can be guaranteed to be far enough away from zero that the sign tests will give the right answers. In most theoretical algorithms papers in computational geometry, this issue is sidestepped by assuming that the inputs are exact real numbers and that the primitives are exact tests of the signs of roots of low-degree polynomials in the input values. But if you are implementing geometric algorithms then this all becomes very important.
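To make the sign-test issue concrete, here is a minimal Haskell sketch (my own illustration, not taken from any particular computational geometry library): the same orientation predicate instantiated at floating-point `Double` and at exact `Rational`. On inputs where the true determinant is extremely close to zero, the `Double` version can report the wrong sign, while the `Rational` version is exact whenever the inputs are rational.

```haskell
-- Sign of the 2D orientation determinant for points a, b, c:
--   | bx - ax   by - ay |
--   | cx - ax   cy - ay |
-- GT: counterclockwise turn, LT: clockwise turn, EQ: collinear.
orient :: (Num a, Ord a) => (a, a) -> (a, a) -> (a, a) -> Ordering
orient (ax, ay) (bx, by) (cx, cy) =
  compare ((bx - ax) * (cy - ay) - (by - ay) * (cx - ax)) 0

-- Fast but unreliable near degeneracy: roundoff can flip the sign.
orientDouble :: (Double, Double) -> (Double, Double) -> (Double, Double) -> Ordering
orientDouble = orient

-- Exact for rational inputs, at the cost of bignum arithmetic.
orientExact :: (Rational, Rational) -> (Rational, Rational) -> (Rational, Rational) -> Ordering
orientExact = orient
```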
{ "source": [ "https://cstheory.stackexchange.com/questions/2058", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/-1/" ] }
2,077
The first step of the AKS primality-testing algorithm is to check whether the input number is a perfect power. It seems that this is a well-known fact in number theory, since the paper did not explain it in detail. Can someone tell me how to do this in polynomial time? Thanks.
Given a number $n$, if it can be written as $a^b$ with $b > 1$, then $b \le \log_2 n$, since $a \ge 2$. For each fixed $b$ in this range, checking whether there exists an $a$ with $a^b = n$ can be done by binary search over $a$. This gives $O(\log^2 n)$ iterations in total, and each iteration costs only a polynomial number of bit operations (computing $a^b$ by repeated squaring and comparing with $n$), so the whole test runs in time polynomial in $\log n$.
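A direct implementation of the idea, as a sketch (the function name is mine):

```python
def perfect_power(n):
    """Return (a, b) with a**b == n and b > 1 if one exists, else None.
    Tries every exponent b up to log2(n); for each b, binary-searches
    for a base a with a**b == n."""
    assert n >= 2
    for b in range(2, n.bit_length() + 1):          # a >= 2 forces b <= log2(n)
        lo, hi = 2, 1 << (n.bit_length() // b + 1)  # safe upper bound on a
        while lo <= hi:
            a = (lo + hi) // 2
            p = a ** b
            if p == n:
                return (a, b)
            lo, hi = (a + 1, hi) if p < n else (lo, a - 1)
    return None

print(perfect_power(3 ** 11))  # (3, 11)
print(perfect_power(97))       # None: 97 is not a perfect power
```

The two loops contribute $O(\log^2 n)$ iterations as described; exponentiation by squaring inside each iteration keeps the whole test polynomial in $\log n$.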
{ "source": [ "https://cstheory.stackexchange.com/questions/2077", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1607/" ] }
2,119
Background. Computation over the real numbers is more complicated than computation over the natural numbers, since real numbers are infinite objects and there are uncountably many of them; real numbers therefore cannot be faithfully represented by finite strings over a finite alphabet. Unlike classical computability over finite strings, where different models of computation (lambda calculus, Turing machines, recursive functions, ...) turn out to be equivalent (at least for computability of functions on strings), there are various proposed models for computation over real numbers which are not compatible. For example, in the TTE model (see also [Wei00]), which is the closest one to the classical Turing machine model, real numbers are represented using infinite input tapes (like Turing's oracles), and it is not possible to decide the comparison and equality relations between two given real numbers (in a finite amount of time). On the other hand, in the BSS/real-RAM models, which are similar to the RAM machine model , we have variables that can store arbitrary real numbers, and comparison and equality are among the atomic operations of the model. For this and similar reasons many experts say that the BSS/real-RAM models are not realistic (they cannot be implemented, at least not on current digital computers), and they prefer the TTE model, or other models equivalent to TTE such as the effective domain-theoretic model, the Ko-Friedman model, etc. If I understand correctly, the default model of computation used in Computational Geometry is the BSS model (a.k.a. real-RAM , see [BCSS98]). On the other hand, it seems to me that in implementations of Computational Geometry algorithms (e.g. LEDA ), we are only dealing with algebraic numbers, and no higher-type infinite objects or computations are involved (is this correct?). So it appears to me (probably naively) that one could also use the classical model of computation over finite strings to deal with these numbers, and use the usual model of computation (which is also used for implementing the algorithms) to discuss correctness and complexity of algorithms. Questions: What are the reasons that researchers in Computational Geometry prefer to use the BSS/real-RAM model? (I mean reasons specific to Computational Geometry.) What are the problems with the (probably naive) idea that I have mentioned in the previous paragraph? (That is, using the classical model of computation and restricting the inputs to algebraic numbers in Computational Geometry.) Addendum: There is also the issue of the complexity of algorithms: it is very easy to decide the following problem in the BSS/real-RAM model: Given two sets $S$ and $T$ of positive integers, is $\sum_{s\in S} \sqrt{s} > \sum_{t\in T}\sqrt{t}$? While no efficient integer-RAM algorithm is known for solving it. Thanks to JeffE for the example. References: Lenore Blum, Felipe Cucker, Michael Shub, and Stephen Smale, "Complexity and Real Computation", 1998. Klaus Weihrauch, " Computable Analysis, An Introduction ", 2000.
First of all, computational geometers don't think of it as the BSS model. The real RAM model was defined by Michael Shamos in his 1978 PhD thesis ( Computational Geometry ), which arguably launched the field. Franco Preparata revised and extended Shamos' thesis into the first computational geometry textbook, published in 1985. The real RAM is also equivalent ( except for uniformity; see Pascal's answer! ) to the algebraic computation tree model defined by Ben-Or in 1983. Blum, Shub, and Smale's efforts were published in 1989, well after the real-RAM had been established, and were almost completely ignored by the computational geometry community. Most (classical) results in computational geometry are heavily tied to issues in combinatorial geometry, for which assumptions about coordinates being integral or algebraic are (at best) irrelevant distractions. Speaking as a native, it seems completely natural to consider arbitrary points, lines, circles, and the like as first class objects when proving things about them, and therefore equally natural when designing and analyzing algorithms to compute with them. For most (classical) geometric algorithms, this attitude is reasonable even in practice. Most algorithms for planar geometric problems are built on top of a very small number of geometric primitives: Is point $p$ to the left or right of point $q$? Above, below, or on the line through points $q$ and $r$? Inside, outside, or on the circle determined by points $q,r,s$? Left or right of the intersection of segments $qr$ and $st$? Each of these primitives is implemented by evaluating the sign of a low-degree polynomial in the input coordinates. (So these algorithms can be described in the weaker algebraic decision tree model.) If the input coordinates happen to be integers, these primitives can be evaluated exactly with only constant-factor increase in precision, and so running times on the real RAM and the integer RAM are the same. For similar reasons, when most people think about sorting algorithms, they don't care what they're sorting, as long as the data comes from a totally ordered universe and any two values can be compared in constant time. So the community developed a separation of concerns between the design of “real” geometric algorithms and their practical implementation; hence the development of packages like LEDA and CGAL. Even for people working on exact computation, there is a distinction between the real algorithm, which uses exact real arithmetic as part of the underlying model, and the implementation , which is forced by the otherwise irrelevant limitations of physical computing devices to use discrete computation. Within this worldview, for example, the most important open problem in computational geometry is the existence of a polynomial-time algorithm for linear programming. No, the ellipsoid and interior-point methods don't count. Unlike the simplex algorithm, those algorithms aren't guaranteed to terminate unless the constraint matrix happens to be rational. ( There are combinatorial types of convex polytopes that can only be represented by irrational constraint matrices , so this is a nontrivial restriction.) And even when the constraint matrix is rational, the running times of those algorithms aren't bounded by any function of the input size (dimension$\times$#constraints). There are a few geometric algorithms that really do rely heavily on the algebraic computation tree model, and therefore cannot be implemented exactly and efficiently on physical computers. 
One good example is minimum-link paths in simple polygons, which can be computed in linear time on a real RAM, but require a quadratic number of bits in the worst case to represent exactly. Another good example is Chazelle's hierarchical cuttings , which are used in the most efficient algorithms known for simplex range searching . These cuttings use a hierarchy of sets of triangles, where the vertices of triangles at each level are intersection points of lines through edges of triangles at previous levels. Thus, even if the input coordinates are integers, the vertex coordinates for these triangles are algebraic numbers of unbounded degree; nevertheless, the algorithms for constructing and using cuttings assume that coordinates can be manipulated exactly in constant time. So, my short, personally biased answer is this: TTE, domain theory, Ko-Friedman, and other models of “realistic” real-number computation all address issues that the computational geometry community, on the whole, just doesn't care about.
{ "source": [ "https://cstheory.stackexchange.com/questions/2119", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/186/" ] }
2,136
On MathOverflow, Timothy Gowers asked a question titled " Demonstrating that rigour is important ". Most of the discussion there was about cases showing the importance of proof, which people on CSTheory probably do not need to be convinced about. In my experience proofs need to be more rigorous in theoretical computer science than in many parts of continuous mathematics, because our intuition so often turns out to be wrong for discrete structures, and because the drive to create implementations encourages more detailed arguments. A mathematician may be content with an existence proof, but a theoretical computer scientist will usually try to find a constructive proof. The Lovász Local Lemma is a nice example [1]. I would therefore like to know: are there specific examples in theoretical computer science where a rigorous proof of a believed-to-be-true statement has led to new insight into the nature of the underlying problem? A recent example that is not directly from algorithms and complexity theory is proof-theoretic synthesis , the automatic derivation of correct and efficient algorithms from pre- and post-conditions [2]. [1] Robin A. Moser and Gábor Tardos, A Constructive Proof of the General Lovász Local Lemma, JACM 57, article 11, 2010. http://doi.acm.org/10.1145/1667053.1667060 [2] Saurabh Srivastava, Sumit Gulwani, and Jeffrey S. Foster, From program verification to program synthesis, ACM SIGPLAN Notices 45, 313–326, 2010. http://doi.acm.org/10.1145/1707801.1706337 Edit: The kind of answer I had in mind is like those by Scott and matus. As Kaveh suggested, this is a triple of something people wanted to prove (but which wasn't necessarily unexpected by "physics", "handwaving", or "intuitive" arguments), a proof, and consequences for the "underlying problem" that followed from that proof that weren't anticipated (perhaps creating a proof required unexpected new ideas, or naturally leads to an algorithm, or changed the way we think about the area). Techniques developed while developing proofs are the building blocks of theoretical computer science, so to retain the value of this somewhat subjective question, it would be worth focusing on personal experience, such as provided by Scott, or an argument that is backed up by references, as matus did. Moreover, I'm trying to avoid arguments about whether something qualifies or not; unfortunately the nature of the question may be intrinsically problematic. We already have a question about "surprising" results in complexity: Surprising Results in Complexity (Not on the Complexity Blog List) so ideally I am looking for answers that focus on the value of rigorous proof, not necessarily the size of the breakthrough.
András, as you probably know, there are so many examples of what you're talking about that it's almost impossible to know where to start! However, I think this question can actually be a good one, if people give examples from their own experience where the proof of a widely-believed conjecture in their subarea led to new insights. When I was an undergrad, the first real TCS problem I tackled was this: what's the fastest quantum algorithm to evaluate an OR of √n ANDs of √n Boolean variables each? It was painfully obvious to me and everyone else I talked to that the best you could do would be to apply Grover's algorithm recursively, both to the OR and to the ANDs. This gave an O(√n log(n)) upper bound. (Actually you can shave off the log factor, but let's ignore that for now.) To my enormous frustration, though, I was unable to prove any lower bound better than the trivial Ω(n^{1/4}). "Going physicist" and "handwaving the answer" never looked more appealing! :-D But then, a few months later, Andris Ambainis came out with his quantum adversary method , whose main application at first was a Ω(√n) lower bound for the OR-of-ANDs. To prove this result, Andris imagined feeding a quantum algorithm a superposition of different inputs; he then studied how the entanglement between the inputs and the algorithm increased with each query the algorithm made. He showed how this approach let you lower-bound quantum query complexity even for "messy," non-symmetric problems, by using only very general combinatorial properties of the function f that the quantum algorithm was trying to compute. Far from just confirming that the quantum query complexity of one annoying problem was what everyone expected it to be, these techniques turned out to represent one of the biggest advances in quantum computing theory since Shor's and Grover's algorithms. They've since been used to prove dozens of other quantum lower bounds, and were even repurposed to obtain new classical lower bounds. Of course, this is "just another day in the wonderful world of math and TCS." Even if everyone "already knows" X is true, proving X very often requires inventing new techniques that then get applied far beyond X, and in particular to problems for which the right answer was much less obvious a priori .
{ "source": [ "https://cstheory.stackexchange.com/questions/2136", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/109/" ] }
2,149
One of the holy grails of algorithm design is finding a strongly polynomial algorithm for linear programming, i.e., an algorithm whose runtime is bounded by a polynomial in the number of variables and constraints and is independent of the size of the representation of the parameters (assuming unit cost arithmetic). Would resolving this question have implications outside of better algorithms for linear programming? For instance, would the existence/non-existence of such an algorithm have any consequences for geometry or complexity theory? Edit: Maybe I should clarify what I mean by consequences. I'm looking for mathematical consequences or conditional results, implications that are known to be true now . For instance: "a polynomial algorithm for LP in the BSS model would separate/collapse algebraic complexity classes FOO and BAR", or "if there is no strongly polynomial algorithm then it resolves such-and-such conjecture about polytopes", or "a strongly polynomial algorithm for problem X which can be formulated as an LP would have interesting consequence blah ". The Hirsch conjecture would be a good example, except that it only applies if simplex is polynomial.
This would show that parity and mean-payoff games are in P. See Sven Schewe. From Parity and Payoff Games to Linear Programming. MFCS 2009.
{ "source": [ "https://cstheory.stackexchange.com/questions/2149", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/205/" ] }
2,168
Consider the 3-SAT problem on n variables. The number of possible distinct clauses is: $$C = 2n \times 2(n-1) \times 2(n-2) / 3! = 4 n(n-1)(n-2)/3.$$ The number of problem instances is the number of all subsets of the set of possible clauses: $I = 2^C$. Trivially, for each $n \ge 3$, there exists at least one satisfiable instance and one unsatisfiable instance. Is it possible to calculate, or at least estimate, the number of satisfiable instances for any given $n$?
A long history of work on phase transitions in SAT has shown that for any fixed $n$, there's a threshold parametrized by the ratio of the number of clauses to $n$ that decides satisfiability. Roughly speaking, if the ratio is less than about 4.2, then with overwhelming probability the instance is satisfiable (and so a huge fraction of the instances with these many clauses and variables are satisfiable). If the ratio is slightly above 4.2, then the reverse holds: an overwhelming fraction of instances are unsatisfiable. The references are way too many to cite here; one source of information is the book by Mezard and Montanari . If anyone has sources for surveys etc. on this topic, they could post them in comments or edit this answer (I'll make it CW). References: the Achlioptas survey; “Where the really hard problems are”; “Refining the phase transition in combinatorial search”.
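The transition is easy to observe experimentally, even at the small instance sizes a brute-force satisfiability check can handle; here is a sketch (all parameter choices are mine, and at such small $n$ the threshold is quite smeared out):

```python
import itertools, random

def random_3cnf(n, m):
    """m random 3-clauses over variables 1..n; literal +v is v, -v is NOT v."""
    clauses = []
    for _ in range(m):
        vs = random.sample(range(1, n + 1), 3)
        clauses.append([v if random.random() < 0.5 else -v for v in vs])
    return clauses

def satisfiable(n, clauses):
    """Brute force over all 2^n assignments -- fine only for tiny n."""
    for bits in itertools.product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

n, trials = 10, 100
for ratio in [2.0, 3.0, 4.0, 4.3, 5.0, 6.0]:
    m = int(ratio * n)
    sat = sum(satisfiable(n, random_3cnf(n, m)) for _ in range(trials))
    print(f"clauses/variables = {ratio:.1f}: {sat}/{trials} satisfiable")
```

The fraction of satisfiable instances drops steeply as the ratio passes through the threshold region.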
{ "source": [ "https://cstheory.stackexchange.com/questions/2168", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1749/" ] }
2,215
I'm new to the site. On mathoverflow this would be community wiki, but I don't see how to set that here. Not a research question, but hopefully of interest to professional theoretical computer scientists. I am a 2nd year grad student in theory, and I was wondering what advice the community had for what I should be doing now to aim for a career in academia. I know I should "do great research" -- yes, I try. :-) I am looking for less obvious advice. How important are social aspects? Going to conferences, knowing great people? Am I at a big disadvantage if my advisor/school are not famous? Does a blog help/hurt my chances? Thanks!
Ok, let me bite with my own opinions: How important are social aspects? I would say that they are very important. Despite popular myth, scientific research is really a social activity -- Your research must interest other people in the area. Going to conferences, Very important -- for the previous reason knowing great people? Practically it may help a bit if they know you as their recommendation letters may carry more weight - but even this is really second-order. Am I at a big disadvantage if my advisor/school are not famous? The truth is that it is often harder to find the "right problems" to work on when you are not at a central department in your area. Human nature being what it is, it may also be somewhat more difficult to get your papers into conferences and journals if you are not from a "famous" school -- but I believe that not by much and that this is quite minor in TCS. Does a blog help/hurt my chances? Well, it depends what you write there.... On the average, I would guess that it's a net plus.
{ "source": [ "https://cstheory.stackexchange.com/questions/2215", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/-1/" ] }
2,229
This question is inspired by the Georgia Tech Algorithms and Randomness Center 's t-shirt, which asks "Randomize or not?!" There are many examples where randomizing helps, especially when operating in adversarial environments. There are also some settings where randomizing doesn't help or hurt. My question is: What are some settings when randomizing (in some seemingly reasonable way) actually hurts? Feel free to define "settings" and "hurts" broadly, whether in terms of problem complexity, provable guarantees, approximation ratios, or running time (I expect running time is where the more obvious answers will lie). The more interesting the example, the better!
Here is a simple example from game theory. In games in which both pure and mixed Nash equilibria exist, the mixed ones are often much less natural, and much "worse". For example, consider a simple balls and bins game: there are n bins, and n balls (players). Each player gets to pick a bin, and incurs a cost equal to the number of people in his bin. In the pure Nash equilibrium, each player picks a distinct bin, and nobody incurs cost more than 1. However, there is a mixed Nash equilibrium in which everyone randomly chooses a bin, and then with high probability, there will be one bin with ~ $\log(n)/\log\log(n)$ people. Since OPT is 1, that means that (if what we care about is max player cost) the price of anarchy is 1 if randomization is not allowed. But if randomization is allowed, it grows unboundedly with the number of players in the game. The takeaway message: randomization can harm coordination.
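A quick simulation of the two equilibria, as a sketch:

```python
import random
from collections import Counter
from math import log

def max_load(n):
    """Mixed equilibrium: all n players pick among n bins uniformly at random;
    return the maximum number of players sharing a bin."""
    return max(Counter(random.randrange(n) for _ in range(n)).values())

for n in [100, 10_000, 1_000_000]:
    avg = sum(max_load(n) for _ in range(20)) / 20
    print(f"n = {n:>9}: avg max load {avg:.1f}  (log n / log log n = "
          f"{log(n) / log(log(n)):.1f})")
    # The pure equilibrium (everyone in a distinct bin) always has max load 1.
```

The maximum cost in the mixed equilibrium visibly grows with $n$, while the pure equilibrium stays at cost 1.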
{ "source": [ "https://cstheory.stackexchange.com/questions/2229", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/123/" ] }
2,252
The Chomsky(–Schützenberger) hierarchy is used in textbooks of theoretical computer science, but it obviously covers only a very small fraction of formal languages (REG, CFL, CSL, RE) compared to the full Complexity Zoo Diagram . Does the hierarchy play any role in current research anymore? I found only few references to Chomsky here at cstheory.stackexchange, and in the Complexity Zoo the names Chomsky and Schützenberger are not mentioned at all. Is current research more focused on other means of description than formal grammars? I was looking for practical methods to describe formal languages with different expressiveness, and stumbled upon growing context-sensitive languages (GCSL) and visibly pushdown languages (VPL), which both lie between the classic Chomsky languages. Shouldn't the Chomsky hierarchy be updated to include them? Or is there no use in selecting a specific hierarchy from the full set of complexity classes? I tried to select only those languages that can be fit into gaps of the Chomsky hierarchy, as far as I understand: REG (=Chomsky 3) ⊊ VPL ⊊ DCFL ⊊ CFL (=Chomsky 2) ⊊ GCSL ⊊ CSL (=Chomsky 1) ⊊ R ⊊ RE. I still don't get where "mildly context-sensitive languages" and "indexed languages" fit in (somewhere between CFL and CSL), although they seem to be of practical relevance for natural language processing (but maybe anything of practical relevance is less interesting in theoretical research ;-). In addition you could mention GCSL ⊊ P ⊂ NP ⊂ PSPACE and CSL ⊊ PSPACE ⊊ R to show the relation to the famous classes P and NP. I found the following on GCSL and VPL: Robert McNaughton: An Insertion into the Chomsky Hierarchy?. In: Jewels are Forever, Contributions on Theoretical Computer Science in Honor of Arto Salomaa, pp. 204–212, 1999. http://en.wikipedia.org/wiki/Nested_word#References (VPL). I'd also be happy if you know any more recent textbook on formal grammars that also deals with VPL, DCFL, GCSL and indexed grammars, preferably with pointers to practical applications.
From what I have seen in the Natural Language Processing community, formal grammars à la Chomsky are not used much any more. They (too) think that the Chomsky hierarchy is outdated as a model of language. What took its place is things like re-writing rules (the Lars algorithm), dependency models (Dan Klein), Tree Substitution Grammars (the DOP model), and Binary Feature Grammars (Alex Clark).
{ "source": [ "https://cstheory.stackexchange.com/questions/2252", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/758/" ] }
2,312
Let $G = (V, E, w)$ be a graph with weight function $w:E\rightarrow \mathbb{R}$. The max-cut problem is to find: $$\arg\max_{S \subset V} \sum_{(u,v) \in E : u \in S, v \not \in S}w(u,v)$$ If the weight function is non-negative (i.e. $w(e) \geq 0$ for all $e \in E$), then there are many extremely simple 2-approximations for max-cut. For example, we can: (1) pick a random subset of vertices $S$; (2) pick an ordering on the vertices, and greedily place each vertex $v$ in $S$ or $\bar{S}$ to maximize the edges cut so far; or (3) make local improvements: if there is any vertex in $S$ that can be moved to $\bar{S}$ to increase the cut (or vice versa), make the move. The standard analysis of all of these algorithms actually shows that the resulting cut is at least as large as $\frac{1}{2}\sum_{e \in E}w(e)$, which in turn is at least $1/2$ the weight of the max-cut if $w$ is non-negative -- but if some edges are allowed to have negative weight, this is no longer true! For example, algorithm (1) (pick a random subset of vertices) can clearly fail on graphs with negative edge weights. My question is: Is there a simple combinatorial algorithm that gets an O(1) approximation to the max-cut problem on graphs that can have negative edge weights? To avoid the possibly sticky issue of the max-cut taking value $0$, I will allow that $\sum_{e \in E}w(e) > 0$, and/or be satisfied with algorithms that result in small additive error in addition to a multiplicative factor approximation.
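To see both the guarantee and how it breaks, here is a sketch of algorithm (1) on a small weighted graph (the representation and the example graphs are mine):

```python
import random

def random_cut_value(n, edges):
    """Put each vertex in S independently with probability 1/2 and return the
    weight of the cut. Each edge crosses with probability 1/2, so the
    expected cut is half the total edge weight -- a useful guarantee only
    when all weights are nonnegative."""
    S = {v for v in range(n) if random.random() < 0.5}
    return sum(w for (u, v, w) in edges if (u in S) != (v in S))

def avg(n, edges, trials=20000):
    return sum(random_cut_value(n, edges) for _ in range(trials)) / trials

# Nonnegative weights: expected cut ~ 3.5 = half of 7, within a factor 2 of OPT.
print(avg(4, [(0, 1, 3.0), (1, 2, 1.0), (2, 3, 2.0), (3, 0, 1.0)]))

# A negative edge: expected cut ~ 3.0, but the max cut is 10
# (put vertices 0 and 3 on one side, 1 and 2 on the other).
print(avg(4, [(0, 1, 5.0), (1, 2, -4.0), (2, 3, 5.0)]))
```

By making the negative weight more negative, the gap between the expectation and the optimum can be made arbitrarily bad.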
Here was my first attempt at an argument. It was wrong, but I fixed it after the "EDIT:" If you could efficiently approximately solve the max-cut problem with negative edge weights, couldn't you use that to solve the max-cut problem with positive edge weights? Start with a max-cut problem you want to solve whose optimal solution is $b$. Now, put a large negative weight edge (with weight $-a$) between $u$ and $v$. The optimum solution of the new problem is $b-a$, so our hypothetical approximation algorithm will get you a solution with maximum cut whose value is at most $(b-a)/2$ worse than optimal. On the original graph, the maximum cut is still at most $(b-a)/2$ worse than optimal. If you choose $a$ close to $b$, this violates the inapproximability result that if P$\neq$NP, you cannot approximate max-cut to better than a $16/17$ factor. EDIT: The above algorithm doesn't work because you can't guarantee that $u$ and $v$ are on opposite sides of the cut in the new graph, even if they were originally. I can fix this as follows, though. Let's assume that we have an approximation algorithm which will give us a cut within a factor of 2 of OPT as long as the sum of all the edge weights is positive. As above, start with a graph $G$ with all non-negative weights on edges. We'll find a modified graph $G^* $ with some negative weights such that if we can approximate the max cut of $G^* $ within a factor of 2, we can approximate the max cut of $G$ very well. Choose two vertices $u$ and $v$, and hope that they're on opposite sides of the max cut. (You can repeat this for all possible $v$ to ensure that one try works.) Now, put a large negative weight $-d$ on all edges $(u,x)$ and $(v,x)$ for $x \neq u,v$, and a large positive weight $a$ on edge $(u,v)$. Assume that the optimal cut has weight $OPT$. A cut with value $c$ in $G$, where vertices $u$ and $v$ are on the same side of the cut, now has value $c - 2dm$, where $m$ is the number of vertices on the other side of the cut. A cut with $(u,v)$ on opposite sides with original value $c$ now has value $c + a - (n-2)d$. Thus, if we choose $d$ large enough, we can force all cuts with $u$ and $v$ on the same side to have negative value, so if there is any cut with positive value, then the optimal cut in $G^* $ will have $u$ and $v$ on opposite sides. Note that we are adding a fixed weight $(a - (n-2)d)$ to any cut with $u$ and $v$ on opposite sides. Let $f=(a - (n-2)d)$. Choose $a$ so that $f \approx - 0.98\, OPT$ (we'll justify this later). A cut with weight $c$ in $G$ having $u$ and $v$ on opposite sides now becomes a cut with weight $c - 0.98\, OPT$. This means the optimal cut in $G^* $ has weight $0.02\, OPT$. Our new algorithm finds a cut in $G^* $ with weight at least $0.01\, OPT$. This translates into a cut in the original graph $G$ with weight at least $0.99\, OPT$ (since all cuts in $G^* $ with positive weight separate $u$ and $v$), which is better than the inapproximability result. There is no problem with choosing $d$ large enough to make any cut with $u$ and $v$ on the same side negative, since we can choose $d$ as large as we want. But how did we choose $a$ so that $f \approx -0.98\, OPT$ when we didn't know $OPT$? We can approximate $OPT$ really well ... if we let $T$ be the sum of the edge weights in $G$, we know $\frac{1}{2}T \leq OPT \leq T$. So we have a fairly narrow range of values for $f$, and we can iterate over $f$, taking all values between $-.49T$ and $-.99T$ at intervals of $0.005T$.
For one of these intervals, we are guaranteed that $f \approx -0.98 OPT$, and so one of these iterations is guaranteed to return a good cut. Finally, we need to check that the new graph has edge weights whose sum is positive. We started with a graph whose edge weights had sum $T$, and added $f$ to the sum of the edge weights. Since $-.99T \leq f \leq -.49T$, we're O.K.
{ "source": [ "https://cstheory.stackexchange.com/questions/2312", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/25/" ] }
2,315
I recently taught expanders, and introduced the notion of Ramanujan graphs. Michael Forbes asked why they are called this way, and I had to admit I don't know. Anyone?
To add some content to the answers here, I'll explain briefly what Ramanujan's conjecture is. First of all, Ramanujan's conjecture is actually a theorem, proved by Eichler and Igusa. Here is one way to state it. Let $r_m(n)$ denote the number of integral solutions to the quadratic equation $x_1^2 + m^2 x_2^2 + m^2 x_3^2 + m^2 x_4^2 = n$. If $m=1$, the fact that $r_m(n) > 0$ was of course proved by Lagrange, but Jacobi gave the exact count: $r_1(n) = 8 \sum_{d \mid n, 4 \not \mid d} d$. Nothing similarly exact is known for larger $m$, but Ramanujan conjectured the bound: $r_m(n) = c_m \sum_{d \mid n} d + O(n^{1/2 + \epsilon})$ for every $\epsilon > 0$, where $c_m$ is a constant dependent only on $m$. Lubotzky, Phillips and Sarnak constructed their expanders based on this result. I'm not familiar with the details of their analysis, but the basic idea, I believe, is to construct a Cayley graph of $PSL(2,Z_q)$ for a prime $q$ that is $1 \bmod 4$, using generators determined by every sum-of-four-squares decomposition of $p$, where $p$ is a quadratic residue modulo $q$. Then, they relate the eigenvalues of this Cayley graph to $r_{2q}(p^k)$ for integer powers $k$. A reference, other than the Lubotzky-Phillips-Sarnak paper itself, is Noga Alon's brief description in Tools from Higher Algebra .
{ "source": [ "https://cstheory.stackexchange.com/questions/2315", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1621/" ] }
2,373
What is the upper bound on the simplex algorithm for finding a solution to a linear program? How would I go about finding a proof for such a bound? It seems as though the worst case occurs when every vertex has to be visited, that is, $O(2^n)$. However, in practice the simplex algorithm runs significantly faster than this on more standard problems. How can I reason about the average complexity of a problem being solved using this method? Any information or references are greatly appreciated!
The simplex algorithm indeed visits all $2^n$ vertices in the worst case ( Klee & Minty 1972 ), and exponential worst-case examples have since been constructed for essentially every deterministic pivot rule that has been analyzed. However, in a landmark paper introducing smoothed analysis, Spielman and Teng (2001) proved that when the inputs to the algorithm are slightly randomly perturbed, the expected running time of the simplex algorithm is polynomial -- this basically says that for any problem there is a "nearby" one that the simplex method will efficiently solve, and it pretty much covers every real-world linear program you'd like to solve. Afterwards, Kelner and Spielman (2006) introduced a polynomial-time randomized simplex algorithm that truly works on any inputs, even the bad ones for the original simplex algorithm.
{ "source": [ "https://cstheory.stackexchange.com/questions/2373", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1056/" ] }
2,384
Approximating the number of colorings seems to be easy on minor-excluded graphs using the algorithm by Jung/Shah. What are other examples of problems that are hard on general graphs but easy on minor-excluded graphs? Update 10/24: It seems to follow from Grohe's results that a formula that is FPT to test on bounded-treewidth graphs is FPT to test on minor-excluded graphs. Now the question is -- how does it relate to tractability of counting satisfying assignments of such a formula? The above statement is false. MSOL is FPT on bounded-treewidth graphs; however, 3-colorability is NP-complete on planar graphs, which are minor-excluded.
The most general result known is by Grohe. A summary was presented in July 2010: Martin Grohe, Fixed-Point Definability and Polynomial Time on Graphs with Excluded Minors , LICS 2010. ( PDF ) In short, any statement that is expressible in fixed-point logic with counting has a polynomial-time algorithm on classes of graphs with at least one excluded minor. (FP+C is first-order logic augmented with a fixed-point operator and a predicate that gives the cardinality of definable sets of vertices). The key idea is that excluding a minor allows the graphs in the class to have ordered treelike decompositions that are definable in fixed-point logic (without counting). So a large class of answers to your question can be obtained by considering properties that are definable in FP+C but that are hard to count. Edit: I'm not sure this actually answers your question, even less so for your update. The pointer to and statement of Grohe's result are correct, but I don't think the struck out text is relevant for your question. (Thanks to Stephan Kreutzer for pointing this out.) It might be worth clarifying: do you want a counting problem that is difficult in general but easy on minor-excluded classes, or a decision problem?
{ "source": [ "https://cstheory.stackexchange.com/questions/2384", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/434/" ] }
2,386
I am often asked by my department to give talks to final year high school pupils about the more mathematical elements of computer science. I do my best to pick topics from TCS which might inspire their interest (which mostly involves something to do with the Halting problem) but would love hear other people's ideas/successes/failures. The remit is that these are pupils who are considering applying for a CS undergraduate degree at a decent university but may be more attracted by maths or another one of the sciences. I find that the usual topics of shortest path algorithms or faster sorting methods don't really work any more to pique their interest.
There is a neat way to introduce zero-knowledge proofs to students, which I think is originally due to Oded Goldreich (please correct me if I'm wrong). You have a red ball and a green ball, which poor colorblind Charlie believes are the same color. You want to convince Charlie that you can tell the difference between the red ball and green ball, and you want to do this in a way that Charlie does not learn which is red and which is green. (You want to prove something is true, in such a way that no one else can turn around and claim a proof of that something as their own.) How can you do this? Or is it impossible? One protocol is the following. Charlie puts a ball in each hand, then chooses to either switch the two balls behind him, or not. Then he presents the two balls again. If you can always detect whether he switched the two balls or not, then Charlie is increasingly convinced that you can tell the difference between them. If Charlie does this shuffle at random and you really can't tell the difference between the colors, then you will only guess correctly with probability $1/2$. After $k$ trials, Charlie should be convinced that you can tell the difference with probability at least $1-1/2^k$. Now while Charlie becomes increasingly convinced that you can tell the difference, he frustratingly never learns which ball is red and which one is green.
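A toy simulation of the protocol, as a sketch:

```python
import random

def charlie_is_convinced(prover_sees_colors, rounds=20):
    """Charlie swaps the balls behind his back with probability 1/2 each round;
    the prover must say whether a swap happened. One wrong answer and
    Charlie rejects."""
    for _ in range(rounds):
        swapped = random.random() < 0.5          # Charlie's secret coin flip
        if prover_sees_colors:
            answer = swapped                     # seeing colors: always right
        else:
            answer = random.random() < 0.5       # colorblind prover: pure guess
        if answer != swapped:
            return False
    return True

print(charlie_is_convinced(True))                              # always True
print(sum(charlie_is_convinced(False) for _ in range(10**5)))  # ~ 10^5 / 2^20
```

A cheating (colorblind) prover survives all 20 rounds with probability $2^{-20}$, while Charlie learns nothing about which ball is which -- he only ever sees answers he could have predicted himself.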
{ "source": [ "https://cstheory.stackexchange.com/questions/2386", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1864/" ] }
2,399
As a side-project, I'm writing a language using Python. I started by using a flex/bison clone called Ply, but am coming up against the edges in the power of what I can express with that style of grammar, and I'm not interested in hacking up my language because of an impedance mismatch with the tool. Therefore, I'm not averse to writing my own. So what's the most powerful type of parser? Citations to papers (as well as more introductory articles) would be welcome. (I know that 'powerful' is not precisely defined, but let's be a little loose with it and see where the answers go)
A grammar is usually defined as a Context Free grammar - a precise definition is given on the Wikipedia page, but it works the same as it does in PLY, which is based on Bison , which is in turn based on yacc . It says here that PLY uses an LALR parser . This is essentially an LR parser where the lookup tables are condensed, possibly introducing parsing conflicts, reducing some of the expressiveness of an LR grammar (i.e., a context-free grammar that an LR parser can parse). If you want to know about the limitations of this particular branch of parsers and those of other parsers, an overview of all kinds of parsing techniques (LL, LR and others) is given here . To answer your question: there exist parsing algorithms capable of parsing any context-free language, even if the language is ambiguous (i.e., there is more than one way to interpret the input): The first such algorithm was the CYK algorithm , which unfortunately has a running time of $O(n^3 |G|)$, where $n$ is the length of the input string and $|G|$ is the size of the grammar, and is therefore impractical for parsing languages. The second algorithm is the Earley algorithm . This algorithm is also capable of parsing any context-free grammar. Although the algorithm needs $O(n^3)$ time to parse an ambiguous language, it only needs $O(n^2)$ time to parse an unambiguous language. In addition, it apparently works in linear time for most LR grammars and works particularly well on left-recursive grammars. Here you can find a paper discussing a practical implementation of (an adaptation of) the Earley algorithm. They conclude: "Given the generality of Earley parsing compared to LALR(1) parsing ((which is roughly what PLY does)), and considering that even PEP's ((their implementation of Earley's algorithm)) worst time would not be noticeable by a user, this is an excellent result". The last type of parser is the GLR parser . This is a generalised version of LR parsing, capable of parsing any context-free language. A mature implementation of GLR is ASF+SDF . Bison can also generate a GLR parser, though its implementation is slightly different from the 'standard' GLR algorithm. The Elkhound Algorithm is a GLR/LALR hybrid algorithm. It uses LALR when possible and GLR when needed, in order to be both fast and capable of parsing any grammar. Beyond context-free grammars there are context-sensitive grammars , but these are in general hard to parse and don't add that much expressiveness: you can do more with them, but for most applications the extra uses are not relevant, unless you're parsing a natural language. As the final step there are unrestricted grammars . At this point the grammar is Turing-complete, so there is no bound one can give on how long it will take to parse a particular language, which is undesirable for most parsing applications. The extra power is almost never needed. If you do want to use all that power, there is the language machine available. Lastly, implementing your own parser-generator is not a trivial affair, in particular to get it to be fast. I've personally just finished making my own version of flex (the lexer generator), and while this seemed like an exercise in relatively simple algorithmic problems, it became quite complex to get right, in particular when I tried to support Unicode. Consider using an already existing implementation instead of writing your own.
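For a concrete reference point, here is a compact sketch of the CYK algorithm for a grammar in Chomsky normal form (the encoding of rules is my own choice):

```python
def cyk(word, start, unary, binary):
    """CYK membership test in O(n^3 |G|) time.
    unary:  rules A -> a,   given as pairs (A, a)
    binary: rules A -> B C, given as triples (A, B, C)"""
    n = len(word)
    # table[i][j] = nonterminals deriving the substring of length j+1 at i
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][0] = {A for (A, a) in unary if a == ch}
    for span in range(2, n + 1):              # substring length
        for i in range(n - span + 1):         # start position
            for split in range(1, span):      # where to split the substring
                left = table[i][split - 1]
                right = table[i + split][span - split - 1]
                table[i][span - 1] |= {A for (A, B, C) in binary
                                       if B in left and C in right}
    return start in table[0][n - 1]

# Balanced parentheses in CNF: S -> LR | LT | SS, T -> SR, L -> "(", R -> ")"
unary = [("L", "("), ("R", ")")]
binary = [("S", "L", "R"), ("S", "L", "T"), ("T", "S", "R"), ("S", "S", "S")]
print(cyk("(())()", "S", unary, binary))  # True
print(cyk("(()",    "S", unary, binary))  # False
```

The cubic behavior is visible even here, which is why Earley (quadratic or linear on the grammars one usually writes) or GLR is preferred in practice.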
{ "source": [ "https://cstheory.stackexchange.com/questions/2399", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1429/" ] }
2,421
There is a rich literature and at least one very good book setting out the known hardness of approximation results for NP-hard problems in the context of multiplicative error (e.g. 2-approximation for vertex cover is optimal assuming UGC). This also includes well-understood approximation complexity classes such as APX, PTAS and so on. What is known when additive error is to be considered? A literature search shows a few upper-bound-type results, most notably for bin packing (see for example http://www.cs.princeton.edu/courses/archive/spr03/cs594/dpw/lecture2.ps ), but is there a more comprehensive complexity class classification, or is there a reason why it is not so interesting or relevant? As a further comment, for bin packing, for example, there is as far as I know no theoretical reason why a poly-time algorithm which is always within an additive distance of 1 from optimal couldn't be found (although I stand to be corrected). Would such an algorithm collapse any complexity classes or have any other significant theoretical knock-on effects? EDIT: The key phrase I didn't use is "asymptotic approximation class" (thanks Oleksandr). It seems that there is some work in this area, but it hasn't yet reached the same stage of maturity as the theory of classic approximation classes.
The question is somewhat open-ended, so I do not think that it can be answered completely. This is a partial answer. An easy observation is that many problems are uninteresting when we consider additive approximation. For example, traditionally the objective function of the Max-3SAT problem is the number of satisfied clauses. In this formulation, approximating Max-3SAT within an O(1) additive error is equivalent to solving Max-3SAT exactly, simply because the objective function can be scaled by copying the input formula many times. Multiplicative approximation is much more essential for the problems of this kind. [Edit: In earlier revision, I had used Independent Set as an example in the previous paragraph, but I changed it to Max-3SAT because Independent Set is not a good example to illustrate the difference between multiplicative approximation and additive approximation; approximating Independent Set even within an O(1) multiplicative factor is also NP-hard. In fact, a much stronger inapproximability for Independent Set is shown by Håstad [Has99].] But, as you said, additive approximation is interesting for the problems like bin packing, where we cannot scale the objective function. Moreover, we can often reformulate a problem so that additive approximation becomes interesting. For example, if the objective function of Max-3SAT is redefined as the ratio of the number of satisfied clauses to the total number of clauses (as is sometimes done), additive approximation becomes interesting. In this setting, additive approximation is not harder than multiplicative approximation in the sense that approximability within a multiplicative factor 1− ε (0< ε <1) implies approximability within an additive error ε , because the optimal value is always at most 1. An interesting fact (which seems to be unfortunately often overlooked) is that many inapproximability results prove the NP-completeness of certain gap problems which does not follow from the mere NP-hardness of multiplicative approximation (see also Petrank [Pet94] and Goldreich [Gol05, Section 3]). Continuing the example of Max-3SAT, it is a well-known result by Håstad [Has01] that it is NP-hard to approximate Max-3SAT within a constant multiplicative factor better than 7/8. This result alone does not seem to imply that it is NP-hard to approximate the ratio version of Max-3SAT within a constant additive error beyond some threshold. However, what Håstad [Has01] proves is stronger than the mere multiplicative inapproximability: he proves that the following promise problem is NP-complete for every constant 7/8< s <1: Gap-3SAT s Instance : A CNF formula φ where each clause involves exactly three distinct variables. Yes-promise : φ is satisfiable. No-promise : No truth assignment satisfies more than s fraction of the clauses of φ. From this, we can conclude that it is NP-hard to approximate the ratio version of Max-3SAT within an additive error better than 1/8. On the other hand, the usual, simple random assignment gives approximation within an additive error 1/8. Therefore, the result by Håstad [Has01] does not only give the optimal multiplicative inapproximability for this problem but also the optimal additive inapproximability. I guess that there are many additive inapproximability results like this which do not appear explicitly in the literature. References [Gol05] Oded Goldreich. On promise problems (a survey in memory of Shimon Even [1935-2004]). Electronic Colloquium on Computational Complexity , Report TR05-018, Feb. 2005. 
http://eccc.hpi-web.de/report/2005/018/ [Has99] Johan Håstad. Clique is hard to approximate within $n^{1-\epsilon}$. Acta Mathematica, 182(1):105–142, March 1999. http://www.springerlink.com/content/m68h3576646ll648/ [Has01] Johan Håstad. Some optimal inapproximability results. Journal of the ACM, 48(4):798–859, July 2001. http://doi.acm.org/10.1145/502090.502098 [Pet94] Erez Petrank. The hardness of approximation: Gap location. Computational Complexity, 4(2):133–157, April 1994. http://dx.doi.org/10.1007/BF01202286
{ "source": [ "https://cstheory.stackexchange.com/questions/2421", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1864/" ] }
2,426
First of all, I apologize in advance for any stupidity. I am by no means an expert on complexity theory (far from it! I am an undergraduate taking my first class in complexity theory). Here's my question. Now Savitch's Theorem states that $$\text{NSPACE}\left(f\left(n\right)\right) \subseteq \text{DSPACE}\left(\left(f\left(n\right)\right)^2\right)$$ Now I'm curious whether this upper bound is tight, i.e., whether something along the lines of $\text{NSPACE}\left(f\left(n\right)\right) \subseteq \text{DSPACE}\left(\left(f\left(n\right)\right)^{1.9}\right)$ is not achievable. It seems like there should be a straightforward combinatorial argument to be made here - each node in the configuration graph of a deterministic Turing machine has only one outgoing edge, while each node in the configuration graph of a non-deterministic Turing machine can have more than one outgoing edge. What Savitch's algorithm is doing is converting configuration graphs with any number of outgoing edges per node into configuration graphs with at most one outgoing edge per node. Since the configuration graph defines a unique TM (not sure about this), the combinatorial size of the latter is almost certainly larger than the former. This "difference" is perhaps a factor of $n^2$, perhaps less - I don't know. Of course, there are lots of little technical issues to be worked out, like how you need to make sure there are no loops and so forth, but my question is whether this is a reasonable way to begin proving a thing like this.
This is a well known open question. You will see in complexity theory many open questions for which you'd wonder how come no one managed to solve them. Part of the reason is that we need new people like you to help us solve them :) For the latest result in this area, showing that Savitch's algorithm is optimal in some restricted model, see Aaron Potechin's FOCS paper . Specifically, he starts from the nice observation that because the configuration graph of a deterministic TM has only one outgoing edge (after fixing the input), one can think of it as an undirected graph, and so the question becomes something like the following: given a directed graph $G$ of $n$ vertices with two special vertices $s,t$, if we map it to an $N$-vertex undirected graph $G'$ (also with special vertices $s',t'$) such that the existence of each edge in $G'$ depends on one edge in $G$ and there is a path from $s$ to $t$ in $G$ iff there's a path between $s'$ and $t'$ in $G'$, how much bigger does $N$ have to be than $n$? To show that Savitch's algorithm is optimal, one needs to show that $N$ has to be at least $2^{\Omega(\log^2 n)} = n^{\Omega(\log n)}$. To show $L\neq NL$, it suffices to show the weaker bound that $N > n^c$ for every constant $c$. I'm pretty sure that even $N > n^{10}$ is not known, though perhaps something like $N \geq n^2$ is known for some not so interesting reasons.
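For reference, the midpoint recursion at the heart of Savitch's theorem, as a sketch over an explicit graph (the real theorem runs this on the configuration graph of a nondeterministic machine):

```python
def reachable(adj, u, v, k):
    """Is there a path from u to v of length at most k? The recursion depth is
    log k, and each stack frame remembers only a midpoint w (O(log n) bits),
    which is where the O(log^2 n) deterministic space bound comes from.
    The price is time n^O(log n) -- Savitch trades time for space."""
    if k == 0:
        return u == v
    if k == 1:
        return u == v or v in adj[u]
    half = (k + 1) // 2
    return any(reachable(adj, u, w, half) and reachable(adj, w, v, k - half)
               for w in range(len(adj)))

adj = [{1}, {2}, {3}, set()]            # the path 0 -> 1 -> 2 -> 3
print(reachable(adj, 0, 3, len(adj)))   # True
print(reachable(adj, 3, 0, len(adj)))   # False
```

Beating the quadratic blowup means, in effect, finding a fundamentally more space-efficient way to answer these reachability queries.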
{ "source": [ "https://cstheory.stackexchange.com/questions/2426", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1892/" ] }
2,434
Hey guys, I understand that the padding trick allows us to translate complexity classes upwards - for example $P=NP \rightarrow EXP=NEXP$. Padding works by "inflating" the input, running the conversion (say from $NP$ to $P$), which yields a "magic" algorithm that you can run on the padded input. While this makes technical sense, I can't get a good intuition of how this works. What exactly is going on here? Is there a simple analogy for what padding is? Can you provide a common-sense reason why this is the case?
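For reference, here is the padding argument for this particular implication, written out (a routine sketch): suppose $P=NP$, and let $L \in NEXP$ be decided by a nondeterministic machine running in time $2^{n^c}$. Define the padded language $$L' = \{\, x\#1^{2^{|x|^c}} : x \in L \,\}.$$ An input of $L'$ of length $N \geq 2^{|x|^c}$ can be checked nondeterministically in time about $2^{|x|^c} \leq N$, so $L' \in NP$, and by assumption $L' \in P$. To decide whether $x \in L$, pad $x$ yourself and run the polynomial-time algorithm for $L'$; this takes time polynomial in $2^{|x|^c}$, i.e., $2^{O(|x|^c)}$, so $L \in EXP$. Hence $NEXP \subseteq EXP$, and since $EXP \subseteq NEXP$ holds unconditionally, $EXP = NEXP$.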
I think the best way to get intuition for this issue is to think of what the complete problems for exponential time classes are. For example, the complete problems for NE are the standard NP-complete problems on succinctly describable inputs, e.g., given a circuit that describes the adjacency matrix of a graph, is the graph 3-colorable? Then the problem of whether E=NE becomes equivalent to whether NP problems are solvable in polynomial time on the succinctly describable inputs, e.g., those with small effective Kolmogorov complexity. This is obviously no stronger than whether they are solvable on all inputs. The larger the time bound, the smaller the Kolmogorov complexity of the relevant inputs, so collapses for larger time bounds are in effect algorithms that work on smaller subsets of inputs. Russell Impagliazzo
{ "source": [ "https://cstheory.stackexchange.com/questions/2434", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1892/" ] }
2,447
The minimum bandwidth problem is to find an ordering of graph nodes on the integer line that minimizes the largest distance between any two adjacent nodes. A $k$-caterpillar is a tree formed from a main path by growing edge-disjoint paths of length at most $k$ from its nodes ($k$ is called the hair length). The minimum bandwidth problem is in $P$ for 2-caterpillars, but it is $NP$-complete for 3-caterpillars. Here is a very interesting fact: the minimum bandwidth problem is solvable in polynomial time for 1-caterpillars (hair length at most one), but it is $NP$-complete for cyclic 1-caterpillars (in a cyclic caterpillar, one edge is added to connect the endpoints of the main path). So, the addition of exactly one edge makes the problem $NP$-complete. What is the most striking example of a problem-hardness jump, where a small variation of the input instance causes a complexity jump from polynomial-time solvability to $NP$-completeness?
One of the more interesting applied examples of hardness jumps can be observed in the following problem: Consider a soccer league championship with $n$ teams: the problem of deciding whether a given team can (still) win the league is in $P$ if, in a match, the winning team is awarded 2 points, the losing one 0, and each team is awarded 1 point in a drawn match. But if we change the rules so that the winning team gets 3 points, the same problem becomes $NP$-hard. The result can be generalized to any $(0, 1, k)$-point rule for every $k > 2$, and even to only three remaining rounds. Source: “Complexity Theory” by Ingo Wegener ( http://portal.acm.org/citation.cfm?id=1076319 )
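Under the 2/1/0 rule the positive result comes from the fact that every match distributes exactly 2 points, which allows a max-flow feasibility check in the spirit of the classic baseball-elimination argument. Here is a hedged sketch of that reduction (the construction is the standard one, but the code, the names, and the convention that "win" means "finish with at least as many points as everyone else" are my own choices):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp on a dense capacity matrix; returns the max flow value."""
    n, flow = len(cap), 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        aug, v = float("inf"), t
        while v != s:
            aug = min(aug, cap[parent[v]][v]); v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= aug; cap[v][parent[v]] += aug; v = parent[v]
        flow += aug

def can_still_win(team, points, remaining):
    """points[i]: current points; remaining[(i, j)]: matches left between i, j.
    Give `team` wins in all its matches, then route the 2 points of every
    other match through a flow network so nobody exceeds team's final score."""
    best = points[team] + 2 * sum(m for (i, j), m in remaining.items()
                                  if team in (i, j))
    others = [(i, j) for (i, j) in remaining if team not in (i, j)]
    T = len(points)
    src, snk = 0, 1 + len(others) + T      # nodes: src, matches, teams, sink
    cap = [[0] * (snk + 1) for _ in range(snk + 1)]
    need = 0
    for k, (i, j) in enumerate(others):
        m2 = 2 * remaining[(i, j)]         # each match hands out exactly 2 pts
        cap[src][1 + k] = m2
        cap[1 + k][1 + len(others) + i] = m2
        cap[1 + k][1 + len(others) + j] = m2
        need += m2
    for i in range(T):
        if i != team:
            if best < points[i]:
                return False
            cap[1 + len(others) + i][snk] = best - points[i]
    return max_flow(cap, src, snk) == need

# Team 0 has 1 point, teams 1 and 2 have 3 each, one match between 1 and 2 left:
print(can_still_win(0, [1, 3, 3], {(1, 2): 1}))  # False
```

The same construction breaks down under the 3/1/0 rule, because a match no longer hands out a fixed number of points -- and that is exactly where the NP-hardness comes from.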
{ "source": [ "https://cstheory.stackexchange.com/questions/2447", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/495/" ] }
2,461
We know that if you have a PSPACE machine, it's powerful enough to give an interactive proof for any level of the polynomial hierarchy. (And if I remember right, all you need is #P.) But suppose you want to give an interactive proof of membership in a $\Sigma_2$ language. Is it enough to be able to solve problems in $\Sigma_2$? Is solving problems in $\Sigma_5$ adequate? More generally, if you can solve $\Sigma_k$ or $\Pi_k$ problems, for what $\Sigma_\ell$ is this sufficient to generate interactive proofs of all languages in $\Sigma_\ell$? This question was inspired by this cstheory stackexchange question.
Even for giving an IP for coNP, using current techniques, one needs to arithmetize, i.e. use counting, which means essentially the full power of #P. Any weaker prover, even for coNP, would be very interesting, I think (in particular, it would imply a new non-relativizing technique).
{ "source": [ "https://cstheory.stackexchange.com/questions/2461", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1677/" ] }
2,562
Background. Several years ago, when I was an undergraduate, we were given a homework assignment on amortized analysis. I was unable to solve one of the problems. I had asked it in comp.theory, but no satisfactory result came up. I remember the course TA insisted on something he couldn't prove, and said he forgot the proof, and ... [you know what]. Today, I recalled the problem. I was still eager to know, so here it is... The Question: Is it possible to implement a stack using two queues , so that both PUSH and POP operations run in amortized time O(1)? If yes, could you tell me how? Note: The situation is quite easy if we want to implement a queue with two stacks (with corresponding operations ENQUEUE & DEQUEUE) -- see the sketch below. Please observe the difference. PS: The above problem is not the homework itself. The homework did not require any lower bounds; just an implementation and the running time analysis.
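For contrast, here is the easy direction mentioned in the note -- a queue built from two stacks with amortized O(1) operations (a standard sketch):

```python
class QueueFromTwoStacks:
    """ENQUEUE pushes onto `inbox`; DEQUEUE pops from `outbox`, refilling it by
    reversing `inbox` when it runs dry. Every element is pushed and popped at
    most twice in total, so both operations are amortized O(1)."""
    def __init__(self):
        self.inbox, self.outbox = [], []

    def enqueue(self, x):
        self.inbox.append(x)

    def dequeue(self):
        if not self.outbox:
            while self.inbox:
                self.outbox.append(self.inbox.pop())
        return self.outbox.pop()      # IndexError if the queue is empty

q = QueueFromTwoStacks()
for i in range(3):
    q.enqueue(i)
print(q.dequeue(), q.dequeue(), q.dequeue())  # 0 1 2
```

The amortized analysis here is a textbook potential-function argument; the open question above is whether anything comparable exists in the other direction.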
I don't have an actual answer, but here's some evidence that the problem is open: It's not mentioned in Ming Li, Luc Longpré and Paul M. B. Vitányi, "The power of the queue", Structures 1986, which considers several other closely related simulations. It's not mentioned in Martin Hühne, "On the power of several queues", Theor. Comp. Sci. 1993, a follow-on paper. It's not mentioned in Holger Petersen, "Stacks versus Deques", COCOON 2001. Burton Rosenberg, "Fast nondeterministic recognition of context-free languages using two queues", Inform. Proc. Lett. 1998, gives an O(n log n) algorithm for recognizing any CFL with two queues. But a nondeterministic pushdown automaton can recognize CFLs in linear time. So if there were a simulation of a stack with two queues faster than O(log n) per operation, Rosenberg and his referees should have known about it.
{ "source": [ "https://cstheory.stackexchange.com/questions/2562", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/873/" ] }
2,571
I am currently an undergraduate student, due to graduate this year. After graduation, I am considering working towards a TCS master's/PhD. I have begun wondering which fields of mathematics are considered helpful for TCS, especially (classical) complexity theory. Which fields do you consider essential for someone who wants to study complexity theory? Do you know of any good textbooks covering these fields? If yes, please include their difficulty level (introductory, graduate, etc.). If you consider a field that is not heavily used in complexity theory but critical for TCS, please also mention it.
If you look at the answers to this TCS StackExchange question , you'll see that there's a possibility that pretty much any area of mathematics could be important in complexity theory. So, if you're really interested in some area of mathematics that doesn't seem to be related, go ahead and study it anyway. If it ever does become relevant to complexity theory, you'll be one of the few complexity theorists who understands it.
{ "source": [ "https://cstheory.stackexchange.com/questions/2571", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/879/" ] }
2,606
Is there an interesting example of a randomized algorithm for a search problem that always outputs the same (correct) answer, regardless of its internal randomness, but which exploits the randomness so that its expected running time is better than the running time of the fastest known deterministic algorithm for the problem? In particular, I was wondering if there is such an algorithm for finding a prime between n and 2n. There's no known polynomial time deterministic algorithm. There's a trivial randomized algorithm that works just by sampling random integers in the interval, which works thanks to the prime number theorem . But is there an algorithm of the above kind whose expected running time is intermediate between the two? EDIT: To refine my question slightly, I wanted such an algorithm for a problem where there are many possible correct outputs, and yet the randomized algorithm settles on one independent of its randomness. I realize that the question is probably not fully specified...
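For concreteness, the trivial sampling algorithm mentioned above might look like this (a sketch; the fixed Miller–Rabin bases below are known to give a correct deterministic test for $n < 3.3 \times 10^{24}$, and for larger $n$ one would switch to random bases):

```python
import random

_BASES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

def is_prime(n):
    """Miller-Rabin with fixed bases (deterministic below ~3.3e24)."""
    if n < 2:
        return False
    for p in _BASES:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in _BASES:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def find_prime(n):
    """Sample uniformly in [n, 2n] until a prime appears. By the prime number
    theorem roughly one in ln(n) samples succeeds, so the expected number of
    trials is O(log n). Note the output depends on the coin flips -- which is
    precisely the behavior the question asks to eliminate."""
    while True:
        k = random.randrange(n, 2 * n + 1)
        if is_prime(k):
            return k

print(find_prime(10**12))
```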
Shafi Goldwasser communicated to me that she and coauthors have been investigating exactly such algorithms for number-theoretic problems! The following is known: Lenstra has shown that there is such an algorithm for finding a quadratic non-residue mod a given prime. Gat and Goldwasser have shown that there is such an algorithm for finding a generator of $\mathbb{Z}_p^*$, where $p$ is a given prime of the form $2q + 1$ for a prime $q$. (I don't know of citable references.) There is also ongoing research on the question I asked about finding a prime between $n$ and $2n$. EDIT: The paper by Gat and Goldwasser is now published: http://eccc.hpi-web.de/report/2011/136/ . This paper though doesn't resolve the question of finding a prime between $n$ and $2n$.
{ "source": [ "https://cstheory.stackexchange.com/questions/2606", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/15/" ] }
2,661
A graph is $k$-choosable (also known as $k$-list-colorable ) if, for every function $f$ that maps vertices to sets of $k$ colors, there is a color assignment $c$ such that, for all vertices $v$, $c(v)\in f(v)$, and such that, for all edges $vw$, $c(v)\ne c(w)$. Now suppose that a graph $G$ is not $k$-choosable. That is, there exists a function $f$ from vertices to sets of $k$ colors that does not admit a valid color assignment $c$. What I want to know is, how few colors in total are needed? How small can $\cup_{v\in G}f(v)$ be? Is there a number $N(k)$ (independent of $G$) such that we can be guaranteed to find an uncolorable $f$ that only uses $N(k)$ distinct colors? The relevance to CS is that, if $N(k)$ exists, we can test $k$-choosability for constant $k$ in singly-exponential time (just try all $\binom{N(k)}{k}^n$ choices of $f$, and for each one check that it can be colored in time $k^n n^{O(1)}$) whereas otherwise something faster-growing like $n^{kn}$ might be required.
Daniel Král and Jiří Sgall answered your question in the negative. From the abstract of their paper: A graph $G$ is said to be $(k,\ell)$-choosable if its vertices can be colored from any lists $L(v)$ with $|L(v)| \ge k$, for all $v\in V(G)$, and with $|\bigcup_{v\in V(G)} L(v)| \le \ell$. For each $3 \le k \le \ell$, we construct a graph $G$ that is $(k,\ell)$-choosable but not $(k,\ell+1)$-choosable. So, $N(k)$ does not exist if $k\ge 3$. Král and Sgall also show that $N(2)=4$. Of course, $N(1)=1$. Daniel Král, Jiří Sgall: Coloring graphs from lists with bounded size of their union . Journal of Graph Theory 49(3): 177-186 (2005)
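As an aside, the brute-force test from the question is easy to make concrete. Below is a minimal sketch of the inner check, deciding whether a graph can be properly colored from given lists in time about $k^n$; testing $k$-choosability would wrap this in an enumeration of all list assignments $f$ from some fixed palette. Names are mine, purely illustrative.

```python
from itertools import product

def colorable_from_lists(edges, lists):
    """Is there a proper coloring c with c(v) in lists[v]?
    Brute force over all combinations: O(k^n) assignments."""
    vertices = sorted(lists)
    for choice in product(*(lists[v] for v in vertices)):
        c = dict(zip(vertices, choice))
        if all(c[u] != c[v] for u, v in edges):
            return True
    return False

# K_4 is not colorable from identical 3-element lists (it isn't 3-colorable):
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
assert not colorable_from_lists(k4, {v: {0, 1, 2} for v in range(4)})
```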
{ "source": [ "https://cstheory.stackexchange.com/questions/2661", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/95/" ] }
2,703
What is the relationship between DNA algorithms and the complexity classes defined using Turing machines? What do complexity measures like time and space correspond to in DNA algorithms? Can they be used to solve instances of NP-complete problems like TSP that von Neumann machines cannot solve feasibly in practice?
Soundbite answer: DNA computing does not provide a magic wand to solve NP-complete problems, even though some respected researchers in the 1990s thought for a time it might. The inaugural DNA computing experiment was performed in a laboratory headed by the renowned number theorist Len Adleman. Adleman solved a small Traveling Salesman Problem -- a well-known NP-complete problem -- and he and others thought for a while the method might scale up. Adleman describes his approach in this short video, which I find fascinating. The problem they encountered was that to solve a TSP instance of modest size, they would need more DNA than the size of the Earth. They had figured out a way to save time by increasing the amount of work done in parallel, but this did not mean the TSP problem required less than exponential resources to solve. They had only shifted the exponential cost from amount-of-time to amount-of-physical-material. (There's an added question: if you require an exponential amount of machinery to solve a problem, do you automatically require an exponential amount of time, or at least preprocessing, to build the machinery in the first place? I'll leave that issue to one side, though.) This general problem -- reducing the time a computation requires at the expense of some other resource -- has shown up many times in biologically-inspired models of computing. The Wikipedia page on membrane computing (an abstraction of a biological cell) says that a certain type of membrane system is able to solve NP-complete problems in polynomial time. This works because that system allows for the creation of exponentially many subobjects inside an overall membrane, in polynomial time. Well... how does an exponential amount of raw material arrive from the outside world and enter through a membrane with constant surface area? Answer: it's not considered. They're not paying for a resource that the computation would otherwise require. Finally, to respond to Anthony Labarre, who linked to a paper showing AHNEPs can solve NP-complete problems in polynomial time: there's even a paper out showing AHNEPs can solve 3SAT in linear time. AHNEP = Accepting Hybrid Network of Evolutionary Processors. An evolutionary processor is a model inspired by DNA, whose core has a string that at each step can be changed by substitution, deletion, or (importantly) insertion. Further, an arbitrarily large number of strings is available at every node, and at each communication step, all nodes send all their correct strings to all attached nodes. So without time cost, it's possible to transfer exponential amounts of information, and because of the insertion rule, individual strings can become ever larger over the course of the computation, so it's a double whammy. If you are interested in recent work in biocomputation by researchers who focus on computations that are real-world practical, I can offer this book review I recently wrote for SIGACT News, which touches briefly on multiple areas.
{ "source": [ "https://cstheory.stackexchange.com/questions/2703", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/2044/" ] }
2,708
Consider the following counting problem (or the associated decision problem): Given two positive integers encoded in binary, compute their greatest common divisor (gcd). What is the smallest complexity class this problem is contained in? Can you provide a reference? In this question I am not primarily interested in asymptotic bounds on the running time, but rather in complexity classes. Is the problem in AC? Can it be proven not to lie in AC0? What are other complexity classes inside P that are of relevance here?
This is a major open question in complexity theory: it is not known if GCDs can be computed in NC, and it is not known if computing GCDs is P-complete. The best parallel algorithms do have sub-linear parallel running time, one such algorithm being due to Sorenson: J. Sorenson. Two fast GCD algorithms . Journal of Algorithms, 1994. If I am not mistaken, it is not even known if one can decide whether two integers are relatively prime in NC.
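For contrast with the open parallel question, the sequential algorithm that places GCD in P is elementary; the obstacle is that each iteration depends on the previous remainder, so the loop below has no obvious polylog-depth (NC) implementation. A sketch:

```python
def gcd(a, b):
    # O(log min(a, b)) iterations, each polynomial in the bit length --
    # but every iteration needs the previous remainder, so this loop
    # offers no obvious parallel speedup.
    while b:
        a, b = b, a % b
    return a

assert gcd(252, 198) == 18
```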
{ "source": [ "https://cstheory.stackexchange.com/questions/2708", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/2047/" ] }
2,774
Polynomial methods, such as the Combinatorial Nullstellensatz and the Chevalley–Warning theorem, are powerful tools in additive combinatorics. By representing a problem with suitable polynomials, they can guarantee the existence of a solution, or count the number of solutions to the polynomials. They have been used to solve problems like restricted sumsets or zero-sum problems, and some of the theorems in this area can be proved only by such methods. To me the non-constructive nature of these methods is truly amazing, and I'm curious how we can apply them to prove interesting inclusions and separations of complexity classes (even if the result can be proved by other methods). Are there any known complexity results that one can prove by polynomial methods?
Some classic examples of the use of the polynomial method are: Razborov-Smolensky's proof of "Parity is not in $AC^0$" (lots of expositions available online; here is one ); Beigel-Reingold-Spielman's proof of "PP is closed under intersection"; Bazzi's result for fooling DNFs ; and Braverman's proof of "Polylog independence fools $AC^0$". Also, Fourier analysis of boolean functions (here is a great course by Ryan O'Donnell) has a HUGE collection of awesome results, my favourite being Kushilevitz-Mansour-Nisan's proof of the Goldreich-Levin theorem . Scott Aaronson in fact gave a tutorial at FOCS'08 on "The Polynomial Method in Classical and Quantum Computing" (ppt) . Hope this helps.
{ "source": [ "https://cstheory.stackexchange.com/questions/2774", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1800/" ] }
2,800
This is a naive question, out of my expertise; apologies in advance. Goldbach's Conjecture and many other unsolved questions in mathematics can be written as short formulas in predicate calculus. For example, Cook's paper "Can Computers Routinely Discover Mathematical Proofs?" formulates that conjecture as $$\forall n [( n > 2 \wedge 2 | n) \supset \exists r \exists s (P(r) \wedge P(s) \wedge n = r + s) ]$$ If we restrict attention to polynomially-long proofs, then theorems with such proofs are in NP. So if P=NP, we could determine whether e.g. Goldbach's Conjecture is true in polynomial time. My question is: Would we also be able to exhibit a proof in polynomial time? Edit . As per the comments of Peter Shor and Kaveh, I should have qualified my claim that we could determine if Goldbach's conjecture is true if it indeed is one of the theorems with a short proof. Which of course we do not know!
Indeed! If P=NP, not only can we decide whether there exists a proof of length n for Goldbach's Conjecture (or any other mathematical statement), but we can also find it efficiently! Why? Because we can ask: is there a proof conditioned on the first bit being ...; then, is there a proof conditioned on the first two bits being ...; and so on. And how would you know n? Just try all possibilities, in increasing order. When we take a step in the i'th possibility, we also take a step in each of the possibilities 1..(i-1).
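Here is a sketch of that prefix search, assuming a hypothetical polynomial-time decision oracle `has_proof_with_prefix` (which is what P=NP would supply); the name and interface are mine. It makes only polynomially many oracle calls per candidate length.

```python
def find_proof(statement, max_len, has_proof_with_prefix):
    # has_proof_with_prefix(statement, prefix, n) is a hypothetical
    # NP-type decision oracle: "does some length-n proof of `statement`,
    # encoded in binary, start with `prefix`?"  If P = NP, this is
    # decidable in polynomial time, since verifying a proof is easy.
    for n in range(1, max_len + 1):      # try lengths in increasing order
        if not has_proof_with_prefix(statement, "", n):
            continue
        prefix = ""
        while len(prefix) < n:
            # a proof with this prefix exists, so at least one of the
            # two one-bit extensions must also be a valid prefix
            if has_proof_with_prefix(statement, prefix + "0", n):
                prefix += "0"
            else:
                prefix += "1"
        return prefix                    # a complete length-n proof
    return None
```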
{ "source": [ "https://cstheory.stackexchange.com/questions/2800", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/337/" ] }
2,812
Ryan Williams just posted his lower bound on ACC , the class of problems that have constant depth circuits with unbounded fan-in and gates AND, OR, NOT and MOD_m for all possible m's. What's so special about MOD_m gates? They allow one to simulate arithmetic over any ring Z_m. Before Ryan's result, throwing MOD_m gates to the mix gave the first class for which the known lower bounds did not work. Is there any other natural reason to study MOD_m gates?
$ACC^0$ is a natural complexity class. 1) Barrington showed that computation over non-solvable monoids captures $NC^1$, while computation over solvable monoids captures $ACC^0$. 2) Recently, Hansen and Koucky proved a beautiful result that poly-sized constant-width planar branching programs are exactly $ACC^0$. Without the planarity condition, we of course get Barrington's result characterizing $NC^1$. So the difference between $ACC^0$ and $NC^1$ is group-theoretic on one hand and topological on the other. Added: Dana, a simple example of a solvable group is $S_4$, the symmetric group on 4 elements. Without getting into details, any solvable group has a series whose quotients happen to be cyclic. This cyclic structure gets reflected as mod gates while building a circuit to solve word problems over the group. On planarity, one would like to believe that planarity may impose restrictions/bottlenecks on the flow of information. This is not always true: for example, variations of planar 3SAT are known to be NP-complete. However, in smaller classes, these restrictions are more "likely" to hold. In a similar vein, Reinhardt and Allender showed NL/poly=UL/poly using the isolation lemma. We do not know how to derandomize the isolation lemma over arbitrary DAGs to get NL=UL, but we know how to do so for planar DAGs.
{ "source": [ "https://cstheory.stackexchange.com/questions/2812", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1621/" ] }
2,853
This is a question related to this one . Putting it again in a much simpler form after a lot of discussion there, as it felt like a totally different question. The classical proof of the undecidability of the halting problem depends on demonstrating a contradiction when trying to apply a hypothetical HALT decider to itself. I think this merely shows the impossibility of having a HALT decider that decides whether it itself will halt or not, but doesn't give any information beyond that about the decidability of halting in any other case. So the question is: Is there a proof that the halting problem is undecidable that doesn't depend on showing that HALT cannot decide itself, nor depends on the diagonalization argument? Small edit: I will commit to the original phrasing of the question, which asks for a proof that doesn't depend on diagonalization at all (rather than just requiring it to not depend on a diagonalization that involves HALT).
Yes, there are such proofs in computability theory (a.k.a. recursion theory). You can first show that the halting problem (the set $0'$) can be used to compute a set $G\subseteq\mathbb N$ that is 1-generic, meaning that in a sense each $\Sigma^0_1$ fact about $G$ is decided by a finite prefix of $G$. Then it is easy to prove that such a set $G$ cannot be computable (i.e., decidable). We could replace 1-generic here by 1-random, i.e., Martin-Löf random, for the same effect. This uses the Jockusch-Soare Low Basis Theorem . (Warning: one might consider just showing that $0'$ computes Chaitin's $\Omega$, which is 1-random, but here we have to be careful about whether the proof that $\Omega$ is 1-random relies on the halting problem being undecidable! Therefore it's safer to just use the Low Basis Theorem.)
{ "source": [ "https://cstheory.stackexchange.com/questions/2853", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/201/" ] }
2,863
A function $f \colon \{0, 1\}^* \to \{0, 1\}^*$ is one-way if $f$ can be computed by a polynomial time algorithm, but for every randomized polynomial time algorithm $A$, $\Pr[f(A(f(x))) = f(x)] < 1/p(n)$ for every polynomial $p(n)$ and sufficiently large $n$, assuming that $x$ is chosen uniformly from $\{ 0, 1 \}^n$. The probability is taken over the choice of $x$ and the randomness of $A$. So... do "One Way Functions" have any applications outside cryptography? If yes, what are they?
One-way functions show up crucially in the Razborov-Rudich natural proofs result. I wouldn't consider circuit lower bounds as part of "cryptography", so maybe this fits your criteria.
{ "source": [ "https://cstheory.stackexchange.com/questions/2863", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/13749/" ] }
2,951
It doesn't seem like this is known - but are there any interesting lower bounds on the complexity of matrix multiplication in the quantum computing model? Do we have any intuition that we can beat the complexity of the Coppersmith-Winograd algorithm using quantum computers?
In arXiv:quant-ph/0409035v2 Buhrman and Spalek present a quantum algorithm beating the Coppersmith-Winograd algorithm in cases where the output matrix has few nonzero entries. Update: There is also a slightly improved quantum algorithm by Dörn and Thierauf . Update: There is an improved quantum algorithm by Le Gall beating Buhrman and Spalek in general.
{ "source": [ "https://cstheory.stackexchange.com/questions/2951", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/170/" ] }
2,953
After reading Daniel Apon's question , I started thinking that it might be useful (especially to junior researchers and graduate students like me) to ask a broader and more general question so we can learn from the experience of more senior researchers. So here is the question: What practices have you found most useful in your research? I don't want to restrict it to any particular type of advice, so any advice on research practice is welcome.
One thing I found useful is to allocate time and designate a space for doing specific research activities. When I was at Princeton U, I loved sitting in the Engineering library, which is well lit, bright and spacious, to read and to think of new ideas. When I verified my 139-page paper, I used to do it in a room in the biology library at Weizmann that had no computers and no other people, only a desk, chairs and a window to an inner garden. When I go over introductions or notes, I like doing it in coffee shops. There are several reasons why I found this to be a good practice for me: (1) Just pondering a good environment for an activity fills me with anticipation for it, or at least somewhat prepares me for it. (2) The fact that I decide to do something specific at this time, and I have the space I need for doing that, induces simplicity, clarity and good order. (3) Knowing what I like, what I care about, and also what distracts me and what is not good for me, I create environments that make it easier for me to do what I need to do.
{ "source": [ "https://cstheory.stackexchange.com/questions/2953", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/186/" ] }
3,008
I teach an advanced algorithms course and would like to include some topics related to machine learning which will be of interest to my students. As a result, I would like to hear people's opinions of the currently most interesting/greatest algorithmic results in machine learning. The potentially tricky constraint is that the students will not have any particular previous knowledge of linear algebra or the other main topics in machine learning. This is really to excite them about the topic and to let them know that ML is a potentially exciting research area for algorithms experts. EDIT: This is a final year undergraduate course (as we don't have graduate courses in the UK in the main). They will have done at least one basic algorithms course beforehand and presumably done well in it to have chosen the advanced follow up course. The current syllabus of the advanced course has topics such as perfect hashing, Bloom filters, van Emde Boas trees, linear prog., approx. algorithms for NP-hard problems etc. I don't intend to spend more than one lecture exclusively on ML but if something is really relevant to both an algorithms course and an ML one then of course it could also be included.
You can cover boosting . It's very clever, easy to implement, is widely used in practice, and doesn't require much prerequisite knowledge to understand.
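To give a sense of how little machinery boosting needs, here is a sketch of AdaBoost with one-dimensional threshold "stumps" as the weak learners. It follows the standard textbook formulation; the function names and toy data are mine.

```python
import math

def make_stump(threshold, polarity):
    # Weak learner: +1 on one side of the threshold, -1 on the other.
    return lambda x: polarity if x > threshold else -polarity

def adaboost(xs, ys, rounds=10):
    n = len(xs)
    w = [1.0 / n] * n                       # start with uniform weights
    candidates = [make_stump(t, p) for t in xs for p in (+1, -1)]
    ensemble = []                           # list of (alpha, stump)
    for _ in range(rounds):
        # choose the stump with the smallest weighted error
        h, err = min(
            ((h, sum(wi for wi, x, y in zip(w, xs, ys) if h(x) != y))
             for h in candidates),
            key=lambda pair: pair[1])
        if err == 0:
            ensemble.append((1.0, h))
            break
        if err >= 0.5:                      # no stump beats random guessing
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # reweight: mistakes become more important in the next round
        w = [wi * math.exp(-alpha * (1 if h(x) == y else -1))
             for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

# Toy data: separable by a single stump, so one round already suffices;
# on harder data the reweighting combines several stumps.
xs = [1, 2, 3, 4, 5, 6]
ys = [-1, -1, -1, 1, 1, 1]
classify = adaboost(xs, ys)
assert [classify(x) for x in xs] == ys
```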
{ "source": [ "https://cstheory.stackexchange.com/questions/3008", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1864/" ] }
3,024
Hamiltonian cycle problem is $NP$-complete on cubic planar bipartite graphs. I'm interested in upper bounds on the length of the longest simple path in non-Hamiltonian cubic planar bipartite graphs. What is the best known upper bound on the length of longest simple path in non-Hamiltonian cubic planar bipartite graphs? Edit : Also, I am interested in non-trivial lower bounds on the length of the longest simple path in this class of graphs.
There exist cubic bipartite planar graphs in which the longest path has length only $O(\log^2 n)$.
{ "source": [ "https://cstheory.stackexchange.com/questions/3024", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/495/" ] }
3,064
It is easy to see that graph isomorphism (GI) is in NP. It is a major open problem whether GI is in coNP. Are there any potential candidate graph properties that can be used as coNP certificates for GI? Any conjectures that imply $GI \in coNP$? What are some implications of $GI \in coNP$?
If $GI$ is in $coNP$, then we would have the result: $GI$ is not $NP$-complete unless $NP=coNP=PH$. (Currently known: $GI$ is not $NP$-complete unless $\Sigma_2 P = \Pi_2 P = PH$.) Since $GI$ is in $coAM$, obviously derandomizing $coAM$ ( doi link ) would put $GI \in coNP$, but I don't know of any candidate graph properties for putting $GI \in coNP$ otherwise. I look forward to more answers though! Interestingly, in that paper they also show that Graph Non-Isomorphism has subexponential size proofs -- that is, $GI \in coNSUBEXP$ -- unless $PH = \Sigma_3 P$. This is at least headed in the direction of showing conditionally that $GI \in coNP$.
{ "source": [ "https://cstheory.stackexchange.com/questions/3064", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/344/" ] }
3,111
What is the funniest TCS-related published work you know? Please include only those that are intended to be funny. Works which are explicitly crafted to be intelligently humorous (rather than, say, a published collection of short jokes regarding complexity theory) are preferred. Works with humorous (actually humorous, not just cute) titles are also accepted. Please only one work per answer so the "best" ones can bubble to the top.
Scott Aaronson's newspiece: Polynomial hierarchy collapses: thousands feared tractable
{ "source": [ "https://cstheory.stackexchange.com/questions/3111", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/129/" ] }
3,203
I am familiar with the theorem which states that some languages are not in the RE (Recursively Enumerable) class of languages, but that can mean either that they are all in CO-RE (or rather, the part of it that doesn't intersect with RE), or that they are partly in CO-RE and partly somewhere else. Are there languages about which nothing can be decided, not even what words are not in them?
Yes, there are some. There is actually an infinite hierarchy of languages which are less and less decidable, namely the Arithmetical hierarchy . Recursively enumerable languages and their complements are at its level 1. An example of a language which is neither RE nor coRE is the set of Turing machines computing total functions.
{ "source": [ "https://cstheory.stackexchange.com/questions/3203", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/817/" ] }
3,229
The recent breakthrough circuit complexity lower-bound result of Ryan Williams provides a proof technique that uses upper-bound results to prove complexity lower bounds. Suresh Venkat, in his answer to the question Are there any counter-intuitive results in theoretical computer science? , provided two examples of establishing lower bounds by proving upper bounds. What other interesting results for proving complexity lower bounds were obtained by proving complexity upper bounds? Is there any upper-bound conjecture that would imply $NP \not\subseteq P/poly$ (or $P \ne NP$)?
One could turn the question around and ask what lower bounds aren't proved by proving an upper bound. Almost all communication complexity lower bounds (and the streaming algorithm lower bounds and data structure lower bounds that rely on communication complexity arguments) are proved by showing that a communication protocol can be constructively turned into an encoding scheme, with the length of the encoding depending on the communication complexity of the protocol; the lower bound for the protocol then follows from the fact that you cannot encode all n-bit messages using n-1 bits or fewer. The Razborov-Smolensky circuit lower bounds work by showing how to simulate bounded-depth circuits by low-degree polynomials. A couple of candidates for lower bounds that are not proved with an upper bound could be the time hierarchy theorem (although, to get the tightest bounds, one needs an efficient universal Turing machine, which is a non-trivial algorithmic task) and the proof of AC0 lower bounds using the switching lemma (but the cleanest proof of the switching lemma uses a counting/incompressibility/Kolmogorov-complexity argument).
{ "source": [ "https://cstheory.stackexchange.com/questions/3229", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/495/" ] }
3,253
This question has the same spirit as what papers should everyone read and what videos should everybody watch . It asks for remarkable books in different areas of theoretical computer science. The books can be math-oriented, yet you may find them great for a computer scientist. Examples: Probability Inequalities Logic Graph Theory Combinatorics Design & Analysis of Algorithms Theory of Computation / Computational Complexity Theory Please devote each answer to books of the same subject (e.g. books on combinatorics). Note: The title might be misleading. Here's a clarification: Let X and Y be two fields in computer science. There are books that everyone in field X should read, books that everyone in field Y should read, and books that everyone in both fields should read. This question seeks all 3 cases. In other words, it is NOT specific to the last case. Edit: As suggested by Dai Le , please also highlight the reason(s) you like the book. Related topics: References for TCS proof techniques Books on automata theory for self-study Books for probability Favorite popular math book Beginner's guide to derandomization References on circuit lower bounds Survey article on the theory of recursive functions Books on Programming Language Semantics What are the recent TCS books whose drafts are available online Books on probability
Computational Complexity: If you are looking for recent complexity textbooks, the following two are must-haves. Computational Complexity: A Modern Approach by Sanjeev Arora and Boaz Barak ( Textbook homepage ) Computational Complexity: A Conceptual Perspective by Oded Goldreich ( Textbook homepage ) The majority of the content between these two books is comparable. However, some key differences exist: Goldreich devotes more space to exploring the conceptual and philosophical basis of complexity theory, whereas Arora/Barak covers a wider selection of topics, including concrete models of complexity, quantum computation, and circuit lower bounds that are mostly absent from the former. Another option, an older but timeless textbook in complexity, is: Computational Complexity by Christos Papadimitriou Papadimitriou's book is notable for chapters covering first-order logic as well as the classes SNP, MaxSNP$_0$, and APX (the theoretical foundations of hardness of approximation), which are missing from the more modern texts. Another (comparatively) old, but quite notable classic is: Introduction to the Theory of Computation by Michael Sipser This is one of the few/first textbooks that explicitly includes "Proof Idea:" between "Theorem:" and "Proof:", and is one of the best-written mathematical textbooks on any topic. On the other hand, it is only an intro to complexity, devoting only one 50-page chapter to "advanced topics" (including approximation, probabilistic algorithms, IP=PSPACE, and crypto). As a first book on complexity, or as an example of truly excellent writing, this book is great. The Nature of Computation by Cristopher Moore and Stephan Mertens Scott Aaronson writes that this book has "the fun of a popular book with the intellectual heft of a textbook." It tells stories and gives lots of entertaining examples and references (Game of Life, and lots of other examples of Turing-complete machines). It doesn't go too deep into complexity theory but has great breadth. Especially of note are its connections to statistical physics.
{ "source": [ "https://cstheory.stackexchange.com/questions/3253", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/873/" ] }
3,278
We know that the first level of the polynomial hierarchy (i.e. NP and co-NP) is in PP, and that $PP \subseteq PSPACE$. We also know from Toda's Theorem that $PH \subseteq P^{PP}$. Do we know whether $PH \subseteq PP$? If not, why is it that $P$ with a $PP$ oracle is stronger than $PP$? Is it possible that $PH \nsubseteq PP$ and $PP \nsubseteq PH$? This question is very simple, but I haven't found any resources addressing it. I asked this related but much less specific question on math overflow before learning more about the topic. Here is a somewhat related (but different) question: Is $coNP^{\#P}=NP^{\#P}=P^{\#P}$? Update: Take a look at Noam Nisan's question here: More on PH in PP?
Huck, as Lance and Robin pointed out, we do have oracles relative to which PH is not in PP. But that doesn't answer your question, which was what the situation is in the "real" (unrelativized) world! The short answer is that (as with so much else in complexity theory) we don't know. But the longer answer is that there are very good reasons to conjecture that indeed PH ⊆ PP. First, Toda's Theorem implies PH ⊆ BP.PP, where BP.PP is the complexity class that "is to PP as BPP is to P" (in other words, PP where you can use a randomization to decide which MAJORITY computation you want to perform). Second, under plausible derandomization hypotheses (similar to the ones that are known to imply P=BPP, by Nisan-Wigderson, Impagliazzo-Wigderson, etc.), we would have PP = BP.PP. Addendum, to address your other questions: (1) I'd say that we don't have a compelling intuition either way on the question of whether $PP = P^{PP}$. We know, from the results of Beigel-Reingold-Spielman and Fortnow-Reingold, that PP is closed under nonadaptive (truth-table) reductions. In other words, a P machine that can make parallel queries to a PP oracle is no more powerful than PP itself. But the fact that these results completely break down for adaptive (non-parallel) queries to the PP oracle suggests that maybe the latter are really more powerful. (2) Likewise, $NP^{PP}$ and $coNP^{PP}$ might be still more powerful than $P^{PP}$. And $PP^{PP}$ might be more powerful still, and so on. The sequence $P$, $PP$, $P^{PP}$, $PP^{PP}$, $P^{PP^{PP}}$, etc. is called the counting hierarchy, and just as people conjecture that PH is infinite, so too one can conjecture (though maybe with less confidence!) that CH is infinite. This is closely related to the belief that, in constant-depth threshold circuits (i.e., neural networks), adding more layers of threshold gates gives you more computational power.
{ "source": [ "https://cstheory.stackexchange.com/questions/3278", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/969/" ] }
3,439
This might be a subjective question rather than one with a concrete answer, but anyway. In complexity theory we study the notion of efficient computations. There are classes like $\mathsf{P}$, which stands for polynomial time , and $\mathsf{L}$, which stands for log space . Both of them are considered to represent a kind of "efficiency", and they capture the difficulty of some problems pretty well. But there is a difference between $\mathsf{P}$ and $\mathsf{L}$: while polynomial time, $\mathsf{P}$, is defined as the union of problems which run in $O(n^k)$ time for any constant $k$, that is, $\mathsf{P} = \bigcup_{k \geq 0} \mathsf{TIME[n^k]}$, log space, $\mathsf{L}$, is defined as $\mathsf{SPACE[\log n]}$. If we mimic the definition of $\mathsf{P}$, it becomes $\mathsf{PolyL} = \bigcup_{k \geq 0} \mathsf{SPACE[\log^k n]}$, where $\mathsf{PolyL}$ is called the class of polylog space . My question is: Why do we use log space as the notion of efficient computation, instead of polylog space? One main issue may be about complete problem sets. Under logspace many-one reductions, both $\mathsf{P}$ and $\mathsf{L}$ have complete problems. In contrast, if $\mathsf{PolyL}$ had complete problems under such reductions, then we would contradict the space hierarchy theorem. But what if we moved to polylog reductions? Can we avoid such problems? In general, if we try our best to fit $\mathsf{PolyL}$ into the notion of efficiency, and (if needed) modify some of the definitions to get all the good properties a "nice" class should have, how far can we go? Are there any theoretical and/or practical reasons for using log space instead of polylog space?
The smallest class containing linear time and closed under subroutines is P. The smallest class containing log space and closed under subroutines is still log space. So P and L are the smallest robust classes for time and space respectively which is why they feel right for modeling efficient computation.
{ "source": [ "https://cstheory.stackexchange.com/questions/3439", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1800/" ] }
3,473
I'm looking for examples of problems parametrized by a number $k \in \mathbb{N}$, where the problem's hardness is non-monotonic in $k$. Most problems (in my experience) have a single phase transition, for example $k$-SAT has a single phase transition from $k \in \{1,2\}$ (where the problem is in P) to $k \ge 3$ (where the problem is NP-complete). I'm interested in problems where there are phase transitions in both directions (from easy to hard and vice versa) as $k$ increases. My question is somewhat similar to the one asked at Hardness Jumps in Computational Complexity , and in fact some of the responses there are relevant to my question. Examples I am aware of: $k$-colorability of planar graphs: In P except when $k=3$, where it is NP-complete. Steiner tree with $k$ terminals: In P when $k=2$ (collapses to shortest $s$-$t$ path) and when $k=n$ (collapses to MST), but NP-hard "in between". I don't know if these phase transitions are sharp (e.g., P for $k_0$ but NP-hard for $k_0+1$). Also the transition values of $k$ depend on the size of the input instance, unlike in my other examples. Counting satisfying assignments of a planar formula modulo $n$: In P when $n$ is a Mersenne prime $n=2^k-1$, and #P-complete for most(?)/all other values of $n$ (from Aaron Sterling in this thread ). Lots of phase transitions! Induced subgraph detection: The problem is parametrized not by an integer but by a graph. There exist graphs $H_1 \subseteq H_2 \subseteq H_3$ (where $\subseteq$ denotes a certain kind of subgraph relation), for which determining whether $H_i \subseteq G$ for a given graph $G$ is in P for $i\in \{1,3\}$ but NP-complete for $i=2$. (from Hsien-Chih Chang in the same thread ).
One field with lots of non-monotonicity of problem complexity is property testing. Let $\mathcal{G}_n$ be the set of all $n$-vertex graphs, and call $P \subseteq \mathcal{G}_n$ a graph property. A generic problem is to determine whether a graph $G$ has property $P$ (i.e. $G \in P$) or is `far' from having property $P$ in some sense. Depending on what $P$ is, and what kind of query access you have to the graph, the problem can be quite difficult. But it is easy to see that the problem is non-monotone, in that if we have $S \subset P \subset T$, the fact that $P$ is easily testable does not imply either that $S$ is easily testable or that $T$ is. To see this, it is enough to observe that $P = \mathcal{G}_n$ and $P = \emptyset$ are both trivially testable, but that for some properties, there exist strong lower bounds.
{ "source": [ "https://cstheory.stackexchange.com/questions/3473", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/149/" ] }
3,496
Consider a nondeterministic finite automaton $A = (Q, \Sigma, \delta, q_0, F)$ and a function $f(n)$. Additionally, we define $\Sigma^{\leq k} = \bigcup_{i \leq k} \Sigma^i$. Now let's analyze the following statement: If $\Sigma^{\leq f(|Q|)} \subseteq L(A)$, then $L(A) = \Sigma^*$. It is easy to show that this is true for $f(n) = 2^n+1$: if the automaton accepts every word of length up to $2^{|Q|}+1$, then it accepts $\Sigma^*$. But does it still hold if $f$ is a polynomial? If not, what could a construction of an NFA $A$ for a given polynomial $p$ look like, s.t. $\Sigma^{\leq p(|Q|)} \subseteq L(A) \subsetneq \Sigma^*$?
For the statement to hold, f must grow exponentially, even with the unary alphabet. [Edit: The analysis is improved slightly in revision 2.] Here is a proof sketch. Suppose that the statement holds, and let $f$ be a function such that every NFA with at most $n$ states that accepts all strings of length at most $f(n)$ accepts all strings whatsoever. We will prove that for every $C>0$ and sufficiently large $n$, we have $f(n) > 2^{C\sqrt{n}}$. The prime number theorem implies that for every $c < \lg e$ and for sufficiently large $k$, there are at least $c \cdot 2^k/k$ primes in the range $[2^k, 2^{k+1}]$. We take $c=1$. For such $k$, let $N_k = \lceil 2^k/k \rceil$ and define an NFA $M_k$ as follows. Let $p_1, \dots, p_{N_k}$ be distinct primes in the range $[2^k, 2^{k+1}]$. The NFA $M_k$ has $S_k = 1 + p_1 + \dots + p_{N_k}$ states. Apart from the initial state, the states are partitioned into $N_k$ cycles, where the $i$th cycle has length $p_i$. In each cycle, all but one state are accepting states. The initial state has $N_k$ outgoing edges, each of which goes to the state immediately after the rejecting state in each cycle. Finally, the initial state is also accepting. Let $P_k$ be the product $p_1 \cdots p_{N_k}$. It is easy to see that $M_k$ accepts all strings of length less than $P_k$ but rejects the string of length $P_k$. Therefore, $f(S_k) \ge P_k$. Note that $S_k \le 1 + N_k \cdot 2^{k+1} = o(2^{2k})$ and that $P_k \ge (2^k)^{N_k} \ge 2^{2^k}$. The rest is standard.
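A quick sanity check of the cycle construction, as a sketch (my own encoding of $M_k$'s behavior as a divisibility test, not code from the answer): a nonempty string of length $L$ lands on the rejecting state of cycle $i$ exactly when $p_i$ divides $L$, so the shortest rejected length is the product of the primes.

```python
from math import prod

def accepts(primes, L):
    # Behavior of M_k: the empty string is accepted at the initial state;
    # a string of length L >= 1 is accepted iff it avoids the rejecting
    # state in some cycle, i.e. iff some cycle length does not divide L.
    return L == 0 or any(L % p != 0 for p in primes)

primes = [2, 3, 5]            # M_k would have 1 + 2 + 3 + 5 = 11 states
P = prod(primes)              # 30
assert all(accepts(primes, L) for L in range(P))
assert not accepts(primes, P)  # the shortest rejected word has length 30
```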
{ "source": [ "https://cstheory.stackexchange.com/questions/3496", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/936/" ] }
3,535
Am I correct in understanding that proving a problem NP-complete is a research success? If so, why?
Ali, good question. Suppose you want to show that some problem P is computationally hard. Now, you could conjecture that P is hard just based on the fact that we don't have any efficient algorithms for it yet. But this is rather flimsy evidence, no? It could be that we have missed some nice way to look at P which would make it very easy to solve. So, in order to conjecture that P is hard, we would want to accumulate more evidence. Reductions provide a tool to do exactly that! If we can reduce some other natural problem Q to P, then we have shown P is at least as hard as Q. But Q could be a problem from some completely different area of mathematics, and people may have struggled for decades to solve Q also. Thus, we can view our failure to find an efficient algorithm for Q to be evidence that P is hard. If we have lots of such Q's from many different problem domains, then we have a huge body of evidence that P is hard. This is exactly what the theory of NP-completeness provides. If you prove your problem to be NP-complete, then you have tied its hardness to the hardness of hundreds of other problems, each of significant interest to various communities. Thus, morally speaking, you can be assured that your problem is indeed hard.
{ "source": [ "https://cstheory.stackexchange.com/questions/3535", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/2571/" ] }
3,540
Following the post What Books Should Everyone Read , I noticed that there are recent books whose drafts are available online. For instance, the Approximation Algorithms entry of the above post cites a 2011 book (yet to be published) titled The design of approximation algorithms . I think knowing recent works is really useful for whoever wants to get a taste of TCS trends. When drafts are available, one can check the books before actually buying them. So, What are the recent TCS books whose drafts are available online? Here, by "recent", I mean something that's no older than ~5 years.
Several TCS books by Now Publishers can be found in drafts: Foundations of Cryptography – A Primer by Oded Goldreich. This is a summarized version of his famous two-volume book on cryptography. (The draft of the two-volume version can be found in Robin's answer .) Data Streams: Algorithms and Applications by S. Muthukrishnan. Mathematical Aspects of Mixing Times in Markov Chains by Montenegro & Tetali. Pairwise Independence and Derandomization by Luby & Wigderson. Average-Case Complexity by Bogdanov & Trevisan. A Survey of Lower Bounds for Satisfiability and Related Problems by van Melkebeek. Algorithms and Data Structures for External Memory by Vitter. Probabilistic Proof Systems: A Primer by Goldreich. Again, this is a summarized version of a part of Goldreich's book Modern Cryptography, Probabilistic Proofs and Pseudorandomness . The Design of Competitive Online Algorithms via a Primal-Dual Approach by Buchbinder & Naor. Spectral Algorithms by Kannan & Vempala. On the Power of Small-Depth Computation by Viola. Algorithmic and Analysis Techniques in Property Testing by Ron. Arithmetic Circuits: A Survey of Recent Results and Open Questions by Amir Shpilka and Amir Yehudayoff (2010), Foundations and Trends® in Theoretical Computer Science: Vol. 5: No. 3–4, pp 207-388. http://dx.doi.org/10.1561/0400000039 In addition, drafts of several Springer books on "Information Security and Cryptography" can be found online: Cryptography in Constant Parallel Time by Applebaum. A Study of Statistical Zero-Knowledge Proofs by Vadhan. Locally Decodable Codes and Private Information Retrieval Schemes by Yekhanin. Concurrent Zero Knowledge by Rosen.
{ "source": [ "https://cstheory.stackexchange.com/questions/3540", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1564/" ] }
3,566
Apologies for asking a question that must surely be in a lot of standard references. I'm curious about exactly the question in the title, in particular I am thinking of Boolean circuits, no depth bound. I put "smallest" in quotes to allow for the possibility there are multiple different classes, not known to include each other, for which a superlinear bound is known.
I believe that the smallest such classes known are $S_2P$ (Cai, 2001), $PP$ (Vinodchandran, 2005), and $(MA \cap coMA)/1$ (Santhanam, 2007). All of these are indeed known to not be in $SIZE(n^k)$ for each constant $k$.
{ "source": [ "https://cstheory.stackexchange.com/questions/3566", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/2279/" ] }
3,616
This post is inspired by the one on MO: Examples of common false beliefs in mathematics . Since the site is designed for answering research-level questions, examples like $\mathsf{NP}$ stands for non-polynomial time should not be on the list. Meanwhile, we do want some examples that may not be hard, but that look reasonable unless one thinks about them in detail. We want the examples to be educational, of the kind that usually appears when studying the subject for the first time. What are some (non-trivial) examples of common false beliefs in theoretical computer science, that appear to people who are studying in this area? To be precise, we want examples different from surprising results and counterintuitive results in TCS; these kinds of results are surprising to many people, but they are TRUE. Here we are asking for surprising examples that people may think are true at first glance, but after deeper thought the fault within is exposed. As an example of proper answers on the list, this one comes from the field of algorithms and graph theory: For an $n$-node graph $G$, a $k$-edge separator $S$ is a subset of edges of size $k$, where the nodes of $G \setminus S$ can be partitioned into two non-adjacent parts, each consisting of at most $3n/4$ nodes. We have the following "lemma": A tree has a 1-edge separator. Right?
I've just had another myth busted, contributed by @XXYYXX's answer to this post : A problem X is $\mathsf{NP}$-hard if there is a polynomial time (or logspace) reduction from all $\mathsf{NP}$ problems to X. Assume the Exponential Time Hypothesis: 3-SAT does not have a sub-exponential time algorithm. Also, 3-SAT is in $\mathsf{NP}$. So no $\mathsf{NP}$-hard problem X has a sub-exponential time algorithm. Otherwise a sub-exponential time algorithm for X + a polynomial time reduction = a sub-exponential time algorithm for 3-SAT. But we do have sub-exponential time algorithms for some NP-hard problems.
{ "source": [ "https://cstheory.stackexchange.com/questions/3616", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1800/" ] }
3,650
It's my understanding that Turing's model has come to be the "standard" when describing computation. I'm interested to know why this is the case -- that is, why has the TM model become more widely-used than other theoretically equivalent (to my knowledge) models, for instance Kleene's μ-Recursion or the Lambda Calculus (I understand that the former didn't appear until later on and the latter wasn't originally designed specifically as a model of computation, but it shows that alternatives have existed from the start). All I can think of is that the TM model more closely represents the computers we actually have than its alternatives. Is this the only reason?
This seems to be true in the context of (some areas of) computer science but not generally. One reason has to do with Church's Thesis. The main reason is that some experts like Godel didn't find convincing the arguments that previous/other models of computation capture exactly the intuitive concept of computation. There are various arguments, Church had some, but they did not convince Godel. On the other hand, Turing's analysis was convincing for Godel, so it was accepted as the model for effective computation. The equivalences between different models were proven later (I think by Kleene). The second reason is technical and a later development related to the study of complexity theory. Defining complexity measures like time, space, and nondeterminism seems to be easier using Turing machines than other models like $\lambda$ -calculus and $\mu$ -recursive functions. On the other hand, $\mu$ -recursive functions were and are still used as the main way of defining computability in logic and computability theory books. They are easier to work with when one only cares about effectiveness and not about complexity. Kleene's book "Metamathematics" was very influential for this development. Also $\lambda$ -calculus seems to be more common in CMU/European style computer science like programming languages and type theory. Some authors prefer the RAM and Register Machine models. (It seems to me that for some reason Americans adopted Turing's semantic model and Europeans adopted Church's syntactic model; Church was American and Turing was British. This is a personal opinion/observation and others have a different view . Also see these papers by Viggo Stoltenberg-Hansen and John V. Tucker I , II .) Some resources for further reading: Robert I. Soare has a number of articles on the history of these developments; I personally like the one in the Handbook of Computability Theory, and you can find more by checking the references in that paper. Another good resource is Neil Immerman's computability article on SEP; see also the Church-Turing Thesis article by B. Jack Copeland. Godel's collected works contain lots of information on his views; in particular, the introductions to his articles are extremely well-written. Kleene's " Metamathematics " is a very nice book. Finally, if you are still not satisfied, check the archives of the FOM mailing list , and if you cannot find an answer in the archive, post an email to the mailing list.
{ "source": [ "https://cstheory.stackexchange.com/questions/3650", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/1951/" ] }
3,711
I read in S. P. Jordan, D. Gosset, P. J. Love's " $QMA$-complete problems for stoquastic Hamiltonians and Markov matrices " that it is unlikely that $QMA \subseteq AM$. I was surprised about this assertion. So what is the proper relationship between $QMA$ and $AM$?
No relationship is known to hold between QMA and AM, and it is reasonable to conjecture they are incomparable. If QMA were proved to be contained in AM, it would be an absolutely enormous result in quantum complexity. Of course it would imply that BQP is in PH, which itself would be huge, but it would go beyond that -- it would surely require major revelations about the structure of quantum algorithms and quantum certificates. Having said that, the evidence against is not very convincing. An oracle relative to which QMA is not contained in AM would help, and it seems like such a result may not be far off -- but we don't even have this yet. A proof of the reverse containment, AM in QMA, would also be huge. At least here we have an oracle relative to which AM is not contained in QMA (and in fact is not even contained in PP).
{ "source": [ "https://cstheory.stackexchange.com/questions/3711", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/10136/" ] }
3,772
I'm interested in why natural numbers are so beloved by the authors of books on programming language theory and type theory (e.g. J. Mitchell, Foundations for Programming Languages, and B. Pierce, Types and Programming Languages). Descriptions of the simply-typed lambda-calculus, and in particular the PCF programming language, are usually based on Nats and Bools. For people using and teaching general-purpose industrial PLs, it is a great deal more natural to treat integers instead of naturals. Can you mention some good reasons why PL theorists prefer Nats? Besides the fact that they are a little less complicated. Are there any fundamental reasons, or is it just honouring tradition? UPD: For all those comments about the "fundamentality" of naturals: I'm quite aware of all those cool things, but I'd rather see an example where it is really vital to have those properties in type theory or PL theory. E.g., the widely mentioned induction: when we have any sort of logic (which the simply typed LC is), like basic first-order logic, we do really use induction, but induction on the derivation tree (which we also have in lambda). My question basically comes from people in industry who want to gain some fundamental theory of programming languages. They are used to having integers in their programs, and without concrete arguments and applications to the theory being studied (type theory in our case) showing why to study languages with only Nats, they feel quite disappointed.
Short answer: the naturals are the first limit ordinals. Hence they play a central role in axiomatic set theory (eg, the axiom of infinity is the assertion they exist) and logic, and PL theorists tend to share foundational preoccupations with logicians. We want to have access to the principle of induction to prove total correctness, termination, and similar properties, and the naturals are an (er) natural choice of well-order. I don't want to imply that finite-width binary integers are any less cool objects, though. They are representations of the p-adics, and permit us to use power series methods in number theory and combinatorics. This means that their significance becomes more visible in algorithmics than PL, since this is when we start caring more about complexity rather than termination.
{ "source": [ "https://cstheory.stackexchange.com/questions/3772", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/869/" ] }
3,873
This question may not be suited to here, but I couldn't find a better place to ask (it was closed on SO). I find research papers in computer science hard to understand. Of course the subjects are complicated. But after I understand a paper, I can usually explain it to someone in simpler terms, and make them understand. If somebody else tells me what is done in that research, I understand too. I think the best example I can give here is: I had tried to understand the SIFT paper for a long time, and then I found a tutorial while googling; in a couple of hours I was ready to implement the algorithm. Understanding the algorithm from the paper itself might have taken me a couple of days, I think. My question is: is it only me who finds research papers this hard to understand? If not, how do you deal with it? What are your techniques? Can you give tips?
Unfortunately, research conferences generally do not place a premium on writing for readability. In fact, sometimes it seems the opposite is true: papers that explain their results carefully and readably, in a way that makes them easy to understand, are downgraded in the conference reviewing process because they are "too easy" while papers that could be simplified but haven't been are thought to be deep and rated highly because of it. So, if you rephrase your question to add another word, is it not just you who finds some research papers unnecessarily hard to read, then no, it is not. If you can find a survey paper on the same subject, that may be better, both because the point of a survey is to be readable and because the process of re-developing ideas while writing a survey often leads to simplifications. As for strategies to read papers that you find hard, one of them that I sometimes use is the following: read the introduction to find out what problem they're trying to solve and some of the basic ideas of the solution, then stop reading and think about how you might try to use those ideas to solve the problem, and then go back and compare what you thought they might be doing to what they're actually doing. That way it may become clearer which parts of the paper are just technical but not difficult detail, and which other parts contain the key ideas needed to get through the difficult parts.
{ "source": [ "https://cstheory.stackexchange.com/questions/3873", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/2838/" ] }
3,888
Peter Shor showed that two of the most important NP-intermediate problems, factoring and the discrete log problem, are in BQP. In contrast, the best known quantum algorithm for SAT (Grover's search) only yields a quadratic improvement over the classical algorithm, hinting that NP-complete problems are still intractable on quantum computers. As Arora and Barak point out, there's also a problem in BQP that is not known to be in NP, leading to the conjecture that the two classes are incomparable. Is there any knowledge/conjecture as to why these NP-intermediate problems are in BQP, but why SAT (as far as we know) isn't? Do other NP-intermediate problems follow this trend? In particular, is graph isomorphism in BQP? (this one doesn't google well).
Graph isomorphism is not known to be in BQP. There has been a lot of work done on trying to put it in. A very intriguing observation is that graph isomorphism could be solved if quantum computers could solve the non-abelian hidden subgroup problem for the symmetric group (factoring and discrete log are solved by using the abelian hidden subgroup problem, which in turn is solved by applying the quantum Fourier transform on abelian groups). One of the ways people have tried to solve graph isomorphism was by applying the quantum Fourier transform for non-abelian groups. There are algorithms for the quantum Fourier transform for many non-abelian groups, including the symmetric group. Unfortunately, it appears that it may not be possible to use the quantum Fourier transform for the symmetric group to solve graph isomorphism; there have been quite a few papers written about this which show that it doesn't work, given various assumptions on the structure of the algorithm. These papers are probably what you find when you google.
{ "source": [ "https://cstheory.stackexchange.com/questions/3888", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/969/" ] }
3,919
Discussion : I've been spending some personal time lately learning various things in communication complexity. For instance, I've re-familiarized myself with the relevant chapter in Arora/Barak, started reading some papers, and ordered the book by Kushilevitz/Nisan. Intuitively, I want to contrast communication complexity with computational complexity. And in particular, I'm struck by the fact that computational complexity has developed into a rich theory of placing computational problems into complexity classes, some of which can in turn (from one perspective, at least) be envisioned in terms of complete problems for each given class. For instance, when explaining $NP$ to someone for the first time, it's hard to avoid comparisons to SAT or some other NP-complete problem. By comparison, I've never heard of an analogous concept for communication complexity classes. There are many examples that I'm aware of, of problems "complete for a theorem." For instance, as a general framework, the authors might describe a given communication problem $P$ and then prove that a related theorem $T$ holds iff the communication problem can be solved in $X$ or fewer bits (for some $X$ that depends on the specific theorem/problem pair in question). The terminology used then in the literature is that $P$ is "complete" for $T$. Further, there is a tantalizing line in the Arora/Barak communication complexity chapter draft (that seems to have been removed/tweaked in the final printing) that states "In general, one can consider communication protocols analogous to $NP$, $coNP$, $PH$ etc." However, I notice two important omissions: The "analogous" concept appears to be a manner of computing the communication complexity of solving a given protocol with access to different types of resources, but stops just short of defining proper communication complexity classes... Most of communication complexity seems to be relatively "low-level," in the sense that the overwhelming majority of results/theorems/etc. revolve around small-ish, specific, polynomial-sized values. This raises the question of why, say, $NEXP$ is interesting for computation while the analogous concept appears to be less interesting for communication. (Of course, I could just be at fault for simply being unaware of "higher-level" communication complexity concepts.) Question(s) : Is there an analogous concept to computational complexity classes for communication complexity? And: If so, how does it compare to the "standard" notion of complexity classes? (e.g. are there natural limitations to "communication complexity classes" that cause them to inherently fall short of the full range of computational complexity classes?) If not, what's the "big picture" reason that classes are an interesting formalism for computational complexity but not for communication complexity?
It seems that you are looking for this paper: http://portal.acm.org/citation.cfm?id=1382439.1382962
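As an aside that makes the question's "different types of resources" concrete: the standard first example of a resource separation in communication complexity is EQUALITY, where deterministic protocols need about $n$ bits but public randomness brings the cost down to $O(\log n)$. Below is a toy sketch; the prime range is an illustrative choice, not canonical.

```python
# Sketch: the public-coin randomized protocol for EQUALITY.
# Alice and Bob hold n-bit strings x and y; using a shared random
# prime p of O(log n) bits, Alice sends x mod p (O(log n) bits).
from random import randrange

def is_prime(m):
    return m >= 2 and all(m % d for d in range(2, int(m ** 0.5) + 1))

def random_prime(lo, hi):
    while True:
        p = randrange(lo, hi)
        if is_prime(p):
            return p

def eq_protocol(x_bits, y_bits):
    n = len(x_bits)
    p = random_prime(n * n, 2 * n * n + 10)  # the public coin
    message = int(x_bits, 2) % p             # the only bits communicated
    return message == int(y_bits, 2) % p
    # If x != y, then |x - y| < 2^n has fewer than n prime divisors,
    # so the protocol errs with probability O(log n / n).

x = "1011" * 8
print(eq_protocol(x, x))                    # always True
print(eq_protocol(x, "1011" * 7 + "1010"))  # False with high probability
```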
{ "source": [ "https://cstheory.stackexchange.com/questions/3919", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/108/" ] }
3,921
In an answer to an earlier question , I mentioned the common but false belief that “Gaussian” elimination runs in $O(n^3)$ time. While it is obvious that the algorithm uses $O(n^3)$ arithmetic operations, careless implementation can create numbers with exponentially many bits. As a simple example, suppose we want to diagonalize the following matrix:

$$\begin{bmatrix} 2 & 0 & 0 & \cdots & 0 \\ 1 & 2 & 0 & \cdots & 0 \\ 1 & 1 & 2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 1 & 1 & 1 & \cdots & 2 \\ \end{bmatrix}$$

If we use a version of the elimination algorithm without division, which only adds integer multiples of one row to another, and we always pivot on a diagonal entry of the matrix, the output matrix has the vector $(2, 4, 16, 256, \dots, 2^{2^{n-1}})$ along the diagonal.

But what is the actual time complexity of Gaussian elimination? Most combinatorial optimization authors seem to be happy with “strongly polynomial”, but I'm curious what the polynomial actually is.

A 1967 paper of Jack Edmonds describes a version of Gaussian elimination (“possibly due to Gauss”) that runs in strongly polynomial time. Edmonds' key insight is that every entry in every intermediate matrix is the determinant of a minor of the original input matrix. For an $n\times n$ matrix with $m$-bit integer entries, Edmonds proves that his algorithm requires integers with at most $O(n(m+\log n))$ bits. Under the “reasonable” assumption that $m=O(\log n)$, Edmonds' algorithm runs in $O(n^5)$ time if we use textbook integer arithmetic, or in $\tilde{O}(n^4)$ time if we use FFT-based multiplication, on a standard integer RAM, which can perform $O(\log n)$-bit arithmetic in constant time. (Edmonds didn't do this time analysis; he only claimed that his algorithm is “good”.)

Is this still the best analysis known? Is there a standard reference that gives a better explicit time bound, or at least a better bound on the required precision?

More generally: What is the running time (on the integer RAM) of the fastest algorithm known for solving arbitrary systems of linear equations?
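To watch both behaviors side by side, here is a small sketch (integer arithmetic only): naive cross-multiplication elimination reproduces the doubly-exponential pivots $2, 4, 16, 256, \dots$ on the matrix above, while the fraction-free scheme in the spirit of Edmonds (often called Bareiss elimination) keeps every intermediate entry equal to a minor of the input, so entry sizes grow only singly exponentially.

```python
# Sketch: division-free elimination blow-up vs. fraction-free elimination.

def naive_divfree(M):
    """Eliminate via row_j <- a_ii * row_j - a_ji * row_i (no division):
    entry sizes can square at every stage."""
    M = [row[:] for row in M]
    n = len(M)
    for i in range(n):
        for j in range(i + 1, n):
            piv, fac = M[i][i], M[j][i]
            M[j] = [piv * M[j][k] - fac * M[i][k] for k in range(n)]
    return M

def bareiss(M):
    """Fraction-free elimination: the division by the previous pivot is
    exact, and every entry is a minor of the input (Edmonds' insight).
    Assumes nonzero leading principal minors, as in this example."""
    M = [row[:] for row in M]
    n = len(M)
    prev = 1
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(i + 1, n):
                M[j][k] = (M[j][k] * M[i][i] - M[j][i] * M[i][k]) // prev
            M[j][i] = 0
        prev = M[i][i]
    return M

n = 6
A = [[2 if j == i else 1 if j < i else 0 for j in range(n)] for i in range(n)]
print([naive_divfree(A)[i][i] for i in range(n)])  # 2, 4, 16, 256, 65536, 4294967296
print([bareiss(A)[i][i] for i in range(n)])        # 2, 4, 8, 16, 32, 64 (principal minors)
```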
I think the answer is $\widetilde O(n^3 \log( \|A\| + \|b\|))$, where we omit the (poly)logarithmic factors. The bound is presented in W. Eberly, M. Giesbrecht, P. Giorgi, A. Storjohann, and G. Villard, "Solving sparse integer linear systems", Proc. ISSAC'06, Genova, Italy, ACM Press, pp. 63-70, July 2006, but it is based on a paper by Dixon: John D. Dixon, "Exact solution of linear equations using p-adic expansions", Numerische Mathematik 40(1):137-141, 1982.
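For flavor, here is a sketch of the p-adic lifting loop at the heart of Dixon's method, specialized to systems with integer solutions so that the rational-reconstruction step of the real algorithm can be skipped; the prime and iteration count are arbitrary illustrative choices.

```python
# Sketch: p-adic lifting inside Dixon's algorithm, integer-solution case.
def solve_mod_p(A, b, p):
    """Gauss-Jordan over GF(p); assumes det(A) != 0 mod p."""
    n = len(A)
    M = [[A[i][j] % p for j in range(n)] + [b[i] % p] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col])
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, p)          # modular inverse (Python 3.8+)
        M[col] = [v * inv % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(M[r][j] - f * M[col][j]) % p for j in range(n + 1)]
    return [M[i][n] for i in range(n)]

def dixon_integer(A, b, p=1000003, steps=8):
    """Lift a solution of A x = b (mod p) to a solution mod p^steps."""
    n = len(A)
    x, pk, r = [0] * n, 1, list(b)
    for _ in range(steps):
        xk = solve_mod_p(A, r, p)                         # next base-p digit
        x = [x[i] + pk * xk[i] for i in range(n)]
        Axk = [sum(A[i][j] * xk[j] for j in range(n)) for i in range(n)]
        r = [(r[i] - Axk[i]) // p for i in range(n)]      # exact by construction
        pk *= p
    # map residues to the symmetric range so small integer answers appear
    return [xi - pk if xi > pk // 2 else xi for xi in x]

A = [[3, 1], [1, 2]]
b = [5, 5]
print(dixon_integer(A, b))  # [1, 2]
```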
{ "source": [ "https://cstheory.stackexchange.com/questions/3921", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/111/" ] }
3,987
I was wondering if the JSON spec defines a regular language. It seems simple enough, but I'm not sure how to prove it myself. The reason I ask is that I was wondering if one could use regular expressions to effectively parse JSON. Could someone with enough rep please create the tags json and regular-language for me?
Since $a^n b^n$ is not a regular language, neither is JSON: the string $[^n 5 ]^n$ (that is, $n$ opening brackets, a 5, and $n$ closing brackets) is valid input for any $n$. Likewise, your regular expression parser would have to properly reject any input $[^m 4 ]^n$ where $m \ne n$, which you cannot do with regular expressions. Hence, JSON is not regular.
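To make the gap tangible: the part of JSON that breaks regularity is exactly nesting depth, which takes one unbounded counter (in effect a single-symbol pushdown) to check, something no DFA or regular expression can simulate. A toy sketch:

```python
# Sketch: checking the non-regular fragment of JSON (bracket nesting)
# needs an unbounded counter, which finite automata lack.
def balanced(s):
    depth = 0
    for ch in s:
        if ch == "[":
            depth += 1
        elif ch == "]":
            depth -= 1
            if depth < 0:
                return False   # closed more brackets than were opened
    return depth == 0

print(balanced("[" * 50 + "5" + "]" * 50))  # True:  [^n 5 ]^n
print(balanced("[" * 50 + "4" + "]" * 49))  # False: [^m 4 ]^n with m != n
```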
{ "source": [ "https://cstheory.stackexchange.com/questions/3987", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/2479/" ] }
4,016
Certain problems are known to be undecidable, but it is nevertheless possible to make some progress on solving them. For example, the halting problem is undecidable, but practical progress can be made on creating tools for detecting potential infinite loops in your code. Tiling problems are often undecidable (e.g., does this polyomino tile some rectangle?) but again it is possible to advance the state of the art in this area.

What I am wondering is if there is any decent theoretical method of measuring progress on solving undecidable problems, one that resembles the theoretical apparatus that has been developed for measuring progress on NP-hard problems. Or does it seem that we are stuck with ad hoc, I-know-progress-when-I-see-it assessments of how much particular breakthroughs advance our understanding of undecidable problems?

Edit: As I think about this question, it occurs to me that perhaps parameterized complexity may be relevant here. An undecidable problem may become decidable if we introduce a parameter and fix the value of the parameter. I'm not sure if this observation is of any use, though.
In the case of the halting problem, the answer is "not yet". The reason is that the standard logical method for characterizing how hard a program's termination proof is (e.g., ordinal analysis) tends to lose too much combinatorial and/or number-theoretic structure.

The state of the art in practical termination analysis of imperative programs is something called "rank-function synthesis" (Byron Cook has a forthcoming book, Proving Program Termination, on the subject from CUP). The idea is to compute a linear function of the program's variables' values now and at the previous step, which serves as a termination metric. (One cool thing about this method is that it uses Farkas's lemma, which gives a neat geometric viewpoint on what's going on.)

The interesting thing is that the tools built on this approach can do things like show the termination of the Ackermann function (which is not primitive recursive), yet you can construct non-nested while loops that defeat them (even though such loops only need $\omega$ to show termination). This means that there isn't a neat relationship between the proof-theoretic strength of the metalogic in which you show termination (which is very important in rewriting theory, for example) and the functions whose termination techniques like rank-function synthesis can establish.

For the lambda calculus, we have a precise characterization of termination in terms of typability: a lambda term is strongly normalizing if and only if it is typeable under the intersection type discipline. Of course, this means that full type inference for intersection types is impossible, but it may also give a way of comparing partial inference algorithms.
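To make the rank-function idea concrete, here is a toy sketch; it merely samples transitions of a specific loop against a candidate linear ranking function, whereas real tools discharge the same two conditions symbolically, via Farkas's lemma.

```python
# Sketch: what a linear ranking function certifies for the loop
#     while x > 0:  x = x - y        (under the invariant y >= 1)
# Two conditions: f is bounded below while the guard holds, and each
# iteration decreases f by at least 1. Here the candidate is f(x, y) = x.
from random import randrange

def f(x, y):
    return x  # candidate linear ranking function

def sample_check(trials=10_000):
    for _ in range(trials):
        x, y = randrange(1, 10**6), randrange(1, 10**6)  # guard and invariant
        x2, y2 = x - y, y                                # one loop iteration
        assert f(x, y) >= 1                 # bounded below on the guard
        assert f(x2, y2) <= f(x, y) - 1     # strict decrease by >= 1
    return "candidate survived sampling; a proof would use Farkas's lemma"

print(sample_check())
```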
{ "source": [ "https://cstheory.stackexchange.com/questions/4016", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/2970/" ] }
4,027
Consider the set of planar graphs in which all internal faces are triangles. If there is an interior vertex of odd degree, the graph cannot be three-colored. If every interior vertex has even degree, can it always be three-colored? Ideally I'd like a small counterexample.
Yes, this is a corollary of the Three Color Theorem; see the bottom of this page: http://kahuna.merrimack.edu/~thull/combgeom/colornotes.html
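As a quick sanity check rather than a proof, one can brute-force a small instance: the octahedron is a planar triangulation in which every vertex has even degree (4), and a search confirms a proper 3-coloring exists, as the theorem predicts. A sketch:

```python
# Sketch: brute-force 3-coloring of the octahedron (K_{2,2,2}).
from itertools import product

# vertices 0..5; i and 5-i are the non-adjacent "opposite" pairs
edges = [(u, v) for u in range(6) for v in range(u + 1, 6) if u + v != 5]

def proper_3_colorings(edges, n=6):
    for colors in product(range(3), repeat=n):
        if all(colors[u] != colors[v] for u, v in edges):
            yield colors

print(next(proper_3_colorings(edges)))  # e.g. (0, 1, 2, 2, 1, 0)
```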
{ "source": [ "https://cstheory.stackexchange.com/questions/4027", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/550/" ] }
4,090
Complexity theory is a strong secondary interest of mine but it's not my primary research interest, so there is no hope for me to attend all the conferences, read all the blogs, and ensure that the "in" crowd cc: me on every bit of hot news. I try to do some of this but I am wondering what methods will give me the most bang for the buck (or rather time, since time is more of a limiting factor than money in this context). Some methods I have attempted include:

- Look over STOC/FOCS proceedings. This often means I don't hear about breakthroughs until they're (somewhat) old news, but that's O.K. from my point of view as long as I am likely to catch the news eventually. Are there other proceedings I should be tracking?
- Subscribe to the Los Alamos ArXiv. How many complexity theorists use this? Are there other preprint servers I should look at?
- Read blogs. I tried this for a while but have more or less given up because there are too many blogs out there and it seems to be a very inefficient method of staying current.

Anything I've missed? Again my focus is on finding time-efficient methods rather than on doing every conceivable thing to keep abreast.

Edit: Thanks for all the responses; I would accept more than one answer if the software allowed it. My somewhat arbitrary choice is based on the fact that I now recall having heard of the ECCC and the CCC before, but I was completely unaware of the Blog Aggregator.
You could also subscribe to the Theory of Computing Blog Aggregator. Though it carries more than just complexity theory (CT) updates, I think you are guaranteed to catch the key CT news there.
{ "source": [ "https://cstheory.stackexchange.com/questions/4090", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/2970/" ] }
4,096
In trying to devise my own sorting algorithm, I'm looking for the optimal benchmark to which I can compare it. For an unsorted ordering of elements A and a sorted ordering B, what is an efficient way to calculate the optimal number of transpositions to get from A to B?

A transposition is defined as switching the positions of 2 elements in the list, so for instance 1 2 4 3 needs one transposition (transposing 4 and 3) to become 1 2 3 4. Something like 1 7 2 5 9 6 requires 4 transpositions: (7, 2), (7, 6), (6, 5), (9, 7).

Update (9/7/11): question changed to use "transposition" instead of "swaps" to refer to non-adjacent exchanges.
If you're only dealing with permutations of $n$ elements, then sorting a permutation $\pi$ requires exactly $n-c(\pi)$ transpositions, where $c(\pi)$ is the number of cycles in the disjoint cycle decomposition of $\pi$. Since this distance is bi-invariant, transforming $\pi$ into $\sigma$ (or $A$ into $B$, or conversely) requires $n-c(\sigma^{-1}\circ\pi)$ such moves.
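A direct implementation of this formula, for lists of distinct elements matching the examples in the question (a sketch):

```python
# Sketch: minimum transpositions = n - (number of cycles of the
# permutation carrying A onto sorted order). Assumes distinct elements.
def min_transpositions(A):
    pos = {v: i for i, v in enumerate(sorted(A))}  # target index of each value
    perm = [pos[v] for v in A]                     # one-line permutation
    seen, cycles = [False] * len(A), 0
    for i in range(len(A)):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return len(A) - cycles

print(min_transpositions([1, 2, 4, 3]))        # 1
print(min_transpositions([1, 7, 2, 5, 9, 6]))  # 4
```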
{ "source": [ "https://cstheory.stackexchange.com/questions/4096", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/2233/" ] }
4,131
I was reading this. It says: "... You won't find yourself as starving for funding like Pure Mathematics. (You'll still always find yourself starving for funding.) ..." Why do pure mathematicians need funding? (Oops, that's a MathOverflow question.) Why would someone doing theoretical research need funding? I think the tools of the trade are paper, pencils, a laptop with a good internet connection, and a printer(?). Please enlighten me! :-)
This is purely US-centric: other countries have different funding models. This is also from the perspective of an academic with a Ph.D, rather than a graduate student.

As Jamie and Peter point out, the primary purpose of funding is to support graduate students. A secondary purpose is to support yourself during the summer. It's not widely known, but most US-based academics aren't paid for the 3 months of summer, and use grant money as salary for those months (I'll not discuss the limitations of NSF vs DARPA etc etc).

So you say, "I don't need students, I'll just work with colleagues". Great! But then you need money to visit them. Without grant money, you have to wait your turn for whatever meager departmental funds might be available for travel (usually minimal).

So you then say, "Fine! I'll use skype and email to collaborate". Great! But then you need to travel to a conference to give a talk. How do you fund that?

So you say, "Fine! I'll just publish in journals and on the arxiv, and the brilliance of my research will shine through". Um, yeah....

If you're a junior academic, not getting funding can also affect your ability to retain your job itself. Funding is a major income source for most American universities.

None of this is ideal. But that's how the system is currently structured, dating back to Vannevar Bush, the founding of the NSF, and the mutation of the university into a research-generating enterprise.
{ "source": [ "https://cstheory.stackexchange.com/questions/4131", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/612/" ] }
4,204
Computational complexity includes the study of the time or space complexity of computational problems. From the perspective of mobile computing, energy is a very valuable computational resource. So, is there a well-studied adaptation of Turing machines that accounts for the energy consumed during the execution of algorithms? Also, are there established energy-complexity classes for computational problems? References are appreciated.
"Is there a well studied adaptation of Turing machines that account for the energy consumed during the execution of algorithms?" No! But maybe you could come up with one.

It's possible you could divide the Turing machine steps into reversible and non-reversible (the non-reversible ones are where information is lost). Theoretically, it is only the non-reversible steps that cost energy. A cost of one unit of energy for each bit that is erased would theoretically be the right measure. There is a theorem of Charles Bennett that the time complexity increases by at most a constant when a computation is made reversible (C.H. Bennett, Logical Reversibility of Computation), but if there are also limits on space, then making the computation reversible might incur a substantial increase in time (reference here).

Landauer's principle says that erasing a bit costs $kT\, \ln 2$ of energy, where $T$ is temperature and $k$ is Boltzmann's constant. In real life, you cannot come anywhere close to achieving this minimum. However, you can build chips which perform reversible steps using substantially less energy than they use for irreversible steps. If you give reversible steps a cost of $\alpha$ and irreversible steps a cost of $\beta$, this seems like it may give a reasonable theoretical model.

I don't know how Turing machines with some reversible steps relate to chips with some reversible circuitry, but I think both models are worth investigating.
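For a sense of scale, here is the bound worked out at room temperature (using the exact SI value of Boltzmann's constant); real hardware dissipates many orders of magnitude more per operation.

```python
# Sketch: Landauer's minimum energy to erase one bit, at room temperature.
from math import log

k = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0          # roughly room temperature, K

per_bit = k * T * log(2)
print(f"kT ln 2 at {T:g} K: {per_bit:.3e} J per erased bit")  # ~2.87e-21 J
print(f"erasing one gigabyte: {per_bit * 8e9:.3e} J")         # ~2.3e-11 J
```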
{ "source": [ "https://cstheory.stackexchange.com/questions/4204", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/495/" ] }
4,352
I've read somewhere that a Turing machine cannot compute this and it's therefore undecidable, but why? Why is it computationally impossible for a machine to generate the parse trees and make a decision? Perhaps I'm wrong and it can be done?
We reduce from Post's Correspondence Problem . Suppose we can, in fact, decide the language $\{\langle G\rangle \mid G\textrm{ a CFG and } G\textrm{ ambiguous}\}$. Given $\alpha_1, \ldots, \alpha_m, \beta_1, \ldots, \beta_m$, construct the following CFG $G = (V,\Sigma,R,S)$: $V = \{S, S_1, S_2\}$,
$$\begin{align} R = \{S_{\phantom0}&\rightarrow S_1\vert S_2,\\ S_1&\rightarrow \alpha_1 S_1 \sigma_1 \vert \cdots \vert \alpha_m S_1 \sigma_m \vert \alpha_1 \sigma_1 \vert \cdots \vert \alpha_m \sigma_m,\\ S_2&\rightarrow \beta_1 S_2 \sigma_1\vert \cdots \vert \beta_m S_2 \sigma_m \vert \beta_1 \sigma_1\vert \cdots \vert \beta_m \sigma_m\} \end{align}$$
(where the $\sigma_i$ are new characters added to the alphabet, e.g., $\sigma_i = \underline{i}$).

If $G$ is ambiguous, then some string $w$ has two different derivations. Suppose first that both derivations start with the same rule, say $S\rightarrow S_1$: then reading the new characters of $w$ backwards determines exactly which productions were applied, so the two derivations would coincide; hence this case is impossible. Therefore the only possible ambiguity comes from one derivation starting with $S_1$ and the other with $S_2$. But then, taking the substring of $w$ up to the beginning of the new characters, we have a solution to the PCP (since the strings of indices used after that point match).

Similarly, if there is no ambiguity, then the PCP cannot be solved, since a solution would imply an ambiguity that just follows $S\Rightarrow S_1\Rightarrow^* \alpha\tilde{\sigma}$ and $S\Rightarrow S_2\Rightarrow^* \beta\tilde{\sigma}$, where $\alpha = \beta$ are strings of matching $\alpha$'s and $\beta$'s (since the $\tilde{\sigma}$'s match).

Hence, we've reduced from PCP, and since that's undecidable, we're done. (Let me know if I've done anything boneheaded!)
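To see the construction concretely, here is a small sketch that prints the productions of $G$ for a given PCP instance, writing `#i` for the fresh symbol $\sigma_i$; the grammar it outputs is ambiguous exactly when the instance has a solution.

```python
# Sketch: the PCP-to-CFG reduction, as a grammar printer.
def pcp_to_cfg(alphas, betas):
    rules = ["S  -> S1 | S2"]
    for name, words in (("S1", alphas), ("S2", betas)):
        rhs = []
        for i, w in enumerate(words, start=1):
            rhs.append(f"{w} {name} #{i}")   # alpha_i S1 sigma_i
            rhs.append(f"{w} #{i}")          # alpha_i sigma_i
        rules.append(f"{name} -> " + " | ".join(rhs))
    return rules

# Instance alpha = (a, ab), beta = (aa, b): the index sequence 1,2 is a
# solution, and indeed "aab#2#1" derives from both S1 and S2.
for rule in pcp_to_cfg(["a", "ab"], ["aa", "b"]):
    print(rule)
```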
{ "source": [ "https://cstheory.stackexchange.com/questions/4352", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/3284/" ] }
4,375
What are "easy regions" for satisfiability? In other words, sufficient conditions for some SAT solver to be able to find a satisfying assignment, assuming it exists. One example is when each clause shares variables with few other clauses, due to constructive proof of LLL, any other results along those lines? There's sizable literature on easy regions for Belief Propagation, is there something along those lines for satisfiability?
I guess you know the classical result of Schaefer from STOC'78 (doi:10.1145/800133.804350), but just in case: Schaefer proved that if SAT is parametrised by a set of relations allowed in any instance, then there are only 6 tractable cases: 2-SAT (i.e. every clause is binary), Horn-SAT, dual-Horn-SAT, affine-SAT (solutions to linear equations in GF(2)), 0-valid (relations satisfied by the all-0 assignment) and 1-valid (relations satisfied by the all-1 assignment).
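To illustrate the flavor of one tractable case, here is a sketch of Horn-SAT by forward chaining; the naive loop below is quadratic, though with the right data structures the problem is solvable in linear time.

```python
# Sketch: Horn-SAT via forward chaining (minimal-model computation).
def horn_sat(clauses):
    """Each clause is (body, head), encoding the Horn clause body -> head;
    head is a variable, or None for a goal clause (no positive literal);
    a fact has an empty body."""
    true = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if head is not None and head not in true and set(body) <= true:
                true.add(head)
                changed = True
    # unsatisfiable iff some goal clause has its entire body derived
    ok = all(not (head is None and set(body) <= true) for body, head in clauses)
    return ok, true

clauses = [([], "p"), (["p"], "q"), (["p", "q"], "r"), (["r"], None)]
print(horn_sat(clauses))  # (False, {'p', 'q', 'r'}): minimal model violates ~r
```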
{ "source": [ "https://cstheory.stackexchange.com/questions/4375", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/434/" ] }
4,473
It is well-known that in general, the order of universal and existential quantifiers cannot be reversed. In other words, for a general logical formula $\phi(\cdot,\cdot)$,

$(\forall x)(\exists y) \phi(x,y) \quad \not\Leftrightarrow \quad (\exists y)(\forall x) \phi(x,y)$

On the other hand, we know the right-hand side is more restrictive than the left-hand side; that is, $(\exists y)(\forall x) \phi(x,y) \Rightarrow (\forall x)(\exists y) \phi(x,y)$.

This question focuses on techniques to derive $(\forall x)(\exists y) \phi(x,y) \Rightarrow (\exists y)(\forall x) \phi(x,y)$, whenever it holds for $\phi(\cdot,\cdot)$.

Diagonalization is one such technique. I first saw this use of diagonalization in the paper Relativizations of the $\mathcal{P} \overset{?}{=} \mathcal{NP}$ Question (see also the short note by Katz). In that paper, the authors first prove that:

For any deterministic, polynomial-time oracle machine M, there exists a language B such that $L_B \ne L(M^B)$.

They then reverse the order of the quantifiers (using diagonalization) to prove that:

There exists a language B such that for all deterministic, poly-time M we have $L_B \ne L(M^B)$.

This technique is used in other papers, such as [CGH] and [AH].

I found another technique in the proof of Theorem 6.3 of [IR]. It uses a combination of measure theory and the pigeonhole principle to reverse the order of quantifiers.

I want to know what other techniques are used in computer science to reverse the order of universal and existential quantifiers.
Reversal of quantifiers is an important property that is often behind well known theorems. For example, in analysis the difference between $\forall \epsilon > 0 . \forall x . \exists \delta > 0$ and $\forall \epsilon > 0 . \exists \delta > 0 . \forall x$ is the difference between pointwise and uniform continuity. A well known theorem says that every pointwise continuous map is uniformly continuous, provided the domain is nice, i.e., compact. In fact, compactness is at the heart of quantifier reversal.

Consider two datatypes $X$ and $Y$ of which $X$ is overt and $Y$ is compact (see below for an explanation of these terms), and let $\phi(x,y)$ be a semidecidable relation between $X$ and $Y$. The statement $\forall y : Y . \exists x : X . \phi(x,y)$ can be read as follows: every point $y$ in $Y$ is covered by some $U_x = \lbrace z : Y \mid \phi(x,z) \rbrace$. Since the sets $U_x$ are "computably open" (semidecidable) and $Y$ is compact, there exists a finite subcover. We have proved that
$$\forall y : Y . \exists x : X . \phi(x,y)$$
implies
$$\exists x_1, \ldots, x_n : X . \forall y : Y . \phi(x_1,y) \lor \cdots \lor \phi(x_n, y).$$
Often we can reduce the existence of the finite list $x_1, \ldots, x_n$ to a single $x$. For example, if $X$ is linearly ordered and $\phi$ is monotone in $x$ with respect to the order, then we can take $x$ to be the largest one of $x_1, \ldots, x_n$.

To see how this principle is applied in a familiar case, let us look at the statement that $f : [0,1] \to \mathbb{R}$ is a continuous function. We keep $\epsilon > 0$ as a free variable in order not to get confused about an outer universal quantifier:
$$\forall x \in [0,1] . \exists \delta > 0 . \forall y \in [x - \delta, x + \delta] . |f(y) - f(x)| < \epsilon.$$
Because $[x - \delta, x + \delta]$ is compact and comparison of reals is semidecidable, the statement $\phi(x, \delta) \equiv \forall y \in [x - \delta, x + \delta] . |f(y) - f(x)| < \epsilon$ is semidecidable. The positive reals are overt and $[0,1]$ is compact, so we can apply the principle:
$$\exists \delta_1, \delta_2, \ldots, \delta_n > 0 . \forall x \in [0,1] . \phi(x, \delta_1) \lor \cdots \lor \phi(x, \delta_n).$$
Since $\phi(x, \delta)$ is antimonotone in $\delta$, the smallest one of $\delta_1, \ldots, \delta_n$ does the job already, so we just need one $\delta$:
$$\exists \delta > 0 . \forall x \in [0,1] . \forall y \in [x - \delta, x + \delta] . |f(y) - f(x)| < \epsilon.$$
What we have got is uniform continuity of $f$.

Vaguely speaking, a datatype is compact if it has a computable universal quantifier and overt if it has a computable existential quantifier. The (non-negative) integers $\mathbb{N}$ are overt because in order to semidecide whether $\exists n \in \mathbb{N} . \phi(n)$, with $\phi(n)$ semidecidable, we perform the parallel search by dovetailing. The Cantor space $2^\mathbb{N}$ is compact and overt, as explained in Paul Taylor's Abstract Stone Duality and Martin Escardo's "Synthetic Topology of Datatypes and Classical Spaces" (also see the related notion of searchable spaces).

Let us apply the principle to the example you mentioned. We view a language as a map from (finite) words over a fixed alphabet to boolean values. Since finite words are in computable bijective correspondence with integers, we may view a language as a map from integers to boolean values. That is, the datatype of all languages is, up to computable isomorphism, precisely the Cantor space nat -> bool, or in mathematical notation $2^\mathbb{N}$, which is compact.
A polynomial-time Turing machine is described by its program, which is a finite string, thus the space of all (representations of) Turing machines can be taken to be nat or $\mathbb{N}$, which is overt. Given a Turing machine $M$ and a language $c$, the statement $\mathsf{rejects}(M,c)$ which says "language $c$ is rejected by $M$" is semidecidable because it is in fact decidable: just run $M$ with input $c$ and see what it does. The conditions for our principle are satisfied!

The statement "every oracle machine $M$ has a language $b$ such that $b$ is not accepted by $M^b$" is written symbolically as
$$\forall M : \mathbb{N} . \exists b : 2^\mathbb{N} . \mathsf{rejects}(M^b,b).$$
After inversion of quantifiers we get
$$\exists b_1, \ldots, b_n : 2^\mathbb{N} . \forall M : \mathbb{N} . \mathsf{rejects}(M^{b_1}, b_1) \lor \cdots \lor \mathsf{rejects}(M^{b_n},b_n).$$
Ok, so we are down to finitely many languages. Can we combine them into a single one? I will leave that as an exercise (for myself and you!).

You might also be interested in the slightly more general question of how to transform $\forall x . \exists y . \phi(x,y)$ to an equivalent statement of the form $\exists u . \forall v . \psi(u,v)$, or vice versa. There are several ways of doing this, for example: Skolem normal form, Herbrand normal form, Gödel's functional interpretation.
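As a coda on why the compactness of $2^\mathbb{N}$ is genuinely computational: it yields an exhaustive search operator over all infinite binary sequences, in the style of Escardo's searchable spaces cited above. Below is a sketch in Python; it assumes the predicate is total and inspects only finitely many bits of its argument (i.e., is continuous), and it is written for clarity rather than efficiency.

```python
# Sketch: Escardo-style exhaustive search over the Cantor space 2^N.
# A sequence is a function from naturals to {0, 1}.
def cons(b, seq):
    return lambda n: b if n == 0 else seq(n - 1)

def find(p):
    """Some sequence satisfying p, if one exists (arbitrary otherwise)."""
    cache = {}
    def first_bit():
        if "b" not in cache:
            zero = cons(0, find(lambda a: p(cons(0, a))))
            cache["b"] = 0 if p(zero) else 1
        return cache["b"]
    def seq(n):
        b = first_bit()
        return b if n == 0 else find(lambda a: p(cons(b, a)))(n - 1)
    return seq

def forsome(p):               # 2^N is overt ...
    return p(find(p))

def forevery(p):              # ... and compact: a computable "forall"
    return not forsome(lambda a: not p(a))

print(forsome(lambda a: a(1) == 1 and a(3) == 0))  # True
print(forevery(lambda a: a(2) == 0))               # False
print(forevery(lambda a: a(0) * (1 - a(0)) == 0))  # True, since bits are 0 or 1
```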
{ "source": [ "https://cstheory.stackexchange.com/questions/4473", "https://cstheory.stackexchange.com", "https://cstheory.stackexchange.com/users/873/" ] }