source_id (int64) | question (string) | response (string) | metadata (dict) |
---|---|---|---|
4,479 | I'm looking for a software for visualizing main trends in some field.
Specifically, I want it to build a graph of the field with vertexes representing papers and edges representing how close papers are (probably, based on number of co-citations).
It seems to be very useful in order to find important subfields (clusters in the graph) and important papers (centers of the clusters). | Reversal of quantifiers is an important property that is often behind well known theorems. For example, in analysis the difference between $\forall \epsilon > 0 . \forall x . \exists \delta > 0$ and $\forall \epsilon > 0 . \exists \delta > 0 . \forall x$ is the difference between pointwise and uniform continuity. A well known theorem says that every pointwise continuous map is uniformly continuous, provided the domain is nice, i.e., compact . In fact, compactness is at the heart of quantifier reversal. Consider two datatypes $X$ and $Y$ of which $X$ is overt and $Y$ is compact (see below for explanation of these terms), and let $\phi(x,y)$ be a semidecidable relation between $X$ and $Y$. The statement $\forall y : Y . \exists x : X . \phi(x,y)$ can be read as follows: every point $y$ in $Y$ is covered by some $U_x = \lbrace z : Y \mid \phi(x,z) \rbrace$. Since the sets $U_x$ are "computably open" (semidecidable) and $Y$ is compact there exists a finite subcover. We have proved that
$$\forall y : Y . \exists x : X . \phi(x,y)$$
implies
$$\exists x_1, \ldots, x_n : X . \forall y : Y . \phi(x_1,y) \lor \cdots \lor \phi(x_n, y).$$
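As an aside on the computational content of the "$\exists x : X$" part: for an overt datatype the existential quantifier is semidecided by a dovetailed parallel search, which is what makes the witnesses $x_1, \ldots, x_n$ findable at all. Here is a minimal, hedged Python sketch of that search for $X = \mathbb{N}$; the step-bounded tester `accepts_within(n, k)` is an illustrative assumption standing in for "the semidecision procedure for $\phi(n)$ accepts within $k$ steps", not something specified in this answer.

```python
def semidecide_exists(accepts_within):
    """Dovetailed search over the naturals.

    Runs the semidecision procedures for phi(0), phi(1), ... in an
    interleaved fashion: in round k each of phi(0), ..., phi(k-1) gets a
    budget of k steps.  Returns a witness n if one exists; otherwise it
    loops forever, which is exactly the behaviour a semidecider may have.
    """
    k = 0
    while True:
        k += 1
        for n in range(k):
            if accepts_within(n, k):
                return n

# Toy use: phi(n) = "n is a positive multiple of 7", decided instantly.
print(semidecide_exists(lambda n, k: n > 0 and n % 7 == 0))  # prints 7
```

The same dovetailing is what is meant further down when $\mathbb{N}$ is said to be overt.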
Often we can reduce the existence of the finite list $x_1, \ldots, x_n$ to a single $x$. For example, if $X$ is linearly ordered and $\phi$ is monotone in $x$ with respect to the order then we can take $x$ to be the largest one of $x_1, \ldots, x_n$. To see how this principle is applied in a familiar case, let us look at the statement that $f : [0,1] \to \mathbb{R}$ is a continuous function. We keep $\epsilon > 0$ as a free variable in order not to get confused about an outer universal quantifier:
$$\forall x \in [0,1] . \exists \delta > 0 . \forall y \in [x - \delta, x + \delta] . |f(y) - f(x)| < \epsilon.$$
Because $[x - \delta, x + \delta]$ is compact and comparison of reals is semidecidable, the statement $\phi(\delta, x) \equiv \forall y \in [x - \delta, x + \delta] . |f(y) - f(x)| < \epsilon$ is semidecidable. The positive reals are overt and $[0,1]$ is compact, so we can apply the principle:
$$\exists \delta_1, \delta_2, \ldots, \delta_n > 0 . \forall x \in [0,1] . \phi(\delta_1, x) \lor \cdots \lor \phi(\delta_n, x).$$
Since $\phi(\delta, x)$ is antimonotone in $\delta$ the smallest one of $\delta_1, \ldots, \delta_n$ does the job already, so we just need one $\delta$:
$$\exists \delta > 0 . \forall x \in [0,1] . \forall y \in [x - \delta, x + \delta] . |f(y) - f(x)| < \epsilon.$$
What we have got is uniform continuity of $f$.

Vaguely speaking, a datatype is compact if it has a computable universal quantifier and overt if it has a computable existential quantifier. The (non-negative) integers $\mathbb{N}$ are overt because in order to semidecide whether $\exists n \in \mathbb{N} . \phi(n)$, with $\phi(n)$ semidecidable, we perform the parallel search by dovetailing. The Cantor space $2^\mathbb{N}$ is compact and overt, as explained by Paul Taylor's Abstract Stone Duality and Martin Escardo's "Synthetic Topology of Datatypes and Classical Spaces" (also see the related notion of searchable spaces).

Let us apply the principle to the example you mentioned. We view a language as a map from (finite) words over a fixed alphabet to boolean values. Since finite words are in computable bijective correspondence with integers we may view a language as a map from integers to boolean values. That is, the datatype of all languages is, up to computable isomorphism, precisely the Cantor space nat -> bool, or in mathematical notation $2^\mathbb{N}$, which is compact. A polynomial-time Turing machine is described by its program, which is a finite string, thus the space of all (representations of) Turing machines can be taken to be nat or $\mathbb{N}$, which is overt. Given a Turing machine $M$ and a language $c$, the statement $\mathsf{rejects}(M,c)$, which says "language $c$ is rejected by $M$", is semidecidable because it is in fact decidable: just run $M$ with input $c$ and see what it does. The conditions for our principle are satisfied! The statement "every oracle machine $M$ has a language $b$ such that $b$ is not accepted by $M^b$" is written symbolically as
$$\forall M : \mathbb{N} . \exists b : 2^\mathbb{N} . \mathsf{rejects}(M^b,b).$$
After inversion of quantifiers we get
$$\exists b_1, \ldots, b_n : 2^\mathbb{N} . \forall M : \mathbb{N} . \mathsf{rejects}(M^{b_1}, b_1) \lor \cdots \lor \mathsf{rejects}(M^{b_n},b_n).$$
Ok, so we are down to finitely many languages. Can we combine them into a single one? I will leave that as an exercise (for myself and you!). You might also be interested in the slightly more general question of how to transform $\forall x . \exists y . \phi(x,y)$ to an equivalent statement of the form $\exists u . \forall v . \psi(u,v)$, or vice versa. There are several ways of doing this, for example: Skolem normal form , Herbrand normal form , Gödel's functional interpretation . | {
"source": [
"https://cstheory.stackexchange.com/questions/4479",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/2016/"
]
} |
4,489 | It's well-known that there are tons of amateurs--myself included--who are interested in the P vs. NP problem. There are also many amateurs--myself still included--who have made attempts to resolve the problem. One problem that I think the TCS community suffers from is a relatively high interested-amateur-to-expert ratio; this leads to experts being inundated with proofs that P != NP, and I've read that they are frustrated and overwhelmed, quite understandably, by this situation. Oded Goldreich has written on this issue, and indicated his own refusal to check proofs. At the same time, speaking from the point of view of an amateur, I can assert that there are few things more frustrating for non-expert-level TCS enthusiasts of any level of ability than generating a proof that just seems right, but lacking both the ability to find the error in the proof yourself and the ability to talk to anyone who can spot errors in your proof. Recently, R. J. Lipton wrote on the problem of amateurs who try to get taken seriously. I have a proposal for resolving this problem, and my question is whether or not others think it reasonable, or if there are problems with it. I think experts should charge a significant but reasonable sum of money (say, 200 - 300 USD) in exchange for agreeing to read proofs in detail and find specific errors in them. This would accomplish three things:
- Amateurs would have a clear way to get their proofs evaluated and taken seriously.
- Experts would be compensated for their time and energy expended.
- There would be a sufficiently high cost imposed on proof-checking that the number of proofs that amateurs submit would go down dramatically.
Again, my question is whether or not this is a reasonable proposal. Obviously, I have no ability to cause experts to adopt what I suggest; however, I'm hoping that experts will read what I've written and decide that it's reasonable. | Let me respond to your suggestion with a counter-suggestion: Why don't you try setting up a business, acting as a middleman between amateurs and experts? Amateurs pay to have their proofs evaluated. You find an expert and pay the expert to evaluate the proof, taking a cut of the money for your middleman role. Trying to run such a business is the most reliable way of finding out whether your idea is a feasible one. | {
"source": [
"https://cstheory.stackexchange.com/questions/4489",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/-1/"
]
} |
4,769 | Arora and Barak 's book presents factoring as the following problem: $\text{FACTORING} = \{\langle L, U, N \rangle \;|\; (\exists \text{ a prime } p \in \{L, \ldots, U\})[p | N]\}$ They add, further in Chapter 2, that removing the fact that $p$ is prime makes this problem NP-complete, although this is not linked to the difficulty of factoring numbers. It looks there can be a reduction from SUBSETSUM, but I got stuck finding it. Any better luck around here? EDIT March 1st: The bounty is for $NP$-completeness proof using deterministic Karp (or Cook) reduction. | This is not quite an answer, but it's close. The following is a proof that the problem is NP-hard under randomized reductions. There's an obvious relation to subset sum which is: suppose you know the factors of $N$: $p_1$, $p_2$, $\ldots$, $p_k$. Now, you want to find a subset $S$ of $p_1$ $\ldots$ $p_k$ such that $$\displaystyle \log L \leq \sum_{p_i \in S} \log p_i \leq \log U.$$ The problem with trying to use this idea to show the problem is NP-hard is that if you have a subset-sum problem with numbers $t_1$, $t_2$, $\ldots$, $t_k$,
you can't necessarily find primes in polynomial time such that $\log p_i \propto t_i$ (where by $\propto$, I mean approximately proportional to). This is a real problem because, since subset-sum is not strongly NP-complete, you need to find these $\log p_i$ for large integers $t_i$. Now, suppose we require that all the integers $t_1$ $\ldots$ $t_k$ in a subset sum problem are between $x$ and $x(1+1/k)$, and that the sum is approximately $\frac{1}{2}\sum_i t_i$. The subset sum problem will still be NP-complete, and any solution will be the sum of $k/2$ integers. We can change the problem from integers to reals if we let $t'_i$ be between $t_i$ and $t_i+\frac{1}{10k}$, and instead of requiring the sum to be exactly $s$, we require it to be between $s$ and $s + \frac{1}{10}$. We only need to specify our numbers to around $4 \log k$ more bits of precision to do this. Thus, if we start with numbers with $B$ bits, and we can specify real numbers $\log p_i$ to approximately $B + 4 \log k$ bits of precision, we can carry out our reduction. Now, from wikipedia (via Hsien-Chih's comment below), the number of primes between $T$ and $T+ T^{5/8}$ is $\theta(T^{5/8}/\log T)$, so if you just choose numbers randomly in that range, and test them for primality, with high probability get a prime in polynomial time. Now, let's try the reduction. Let's say our $t_i$ are all $B$ bits long. If we take $T_i$ of length $3B$ bits, then we can find a prime $p_i$ near $T_i$ with $9/8B$ bits of precision. Thus, we can choose $T_i$ so that $\log T_i \propto t_i$ with precision $9/8\, B$ bits. This lets us find $p_i \approx T_i$ so that $\log p_i \propto t_i$ with precision $9/8\,B$ bits. If a subset of these primes multiplies to something close to the target value, a solution exists to the original subset sum problems. So we let $N=\Pi_i p_i$, choose $L$ and $U$ appropriately, and we have a randomized reduction from subset sum. | {
"source": [
"https://cstheory.stackexchange.com/questions/4769",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/88/"
]
} |
4,806 | I'm sorry if this question is a little vague, but I am curious how successful researchers get a "feel" for the results in TCS. For example, linear algebra can be understood geometrically, or in terms of its physical interpretations (eigenvectors can be thought of as "stable points" in a system), etc. It's also intuitive that there exists an IP protocol for TQBF (as the IP protocol can be visualized as a kind of a "game" between two entities of greatly differing computational power). However, I find that a lot of the results, even extremely basic ones in TCS, do not have such simple intuitions (MA $\subseteq$ AM). Worse still, occasionally, unrefined intuitions go awfully awry (2-SAT is in P while 3-SAT is not believed to be in P (in fact, is NP-complete)). Are there any "general principles" for developing an intuition in TCS? | Like many scientific fields, it can take years to build intuition, but it can take only one new idea to tear that intuition down (and hopefully something nice gets rebuilt in its place). There are some basic exercises you can use to try to build intuition for some paper you're reading and can't seem to penetrate. Here's one that I still do from time to time. Start with a proof that you don't understand but would really like to, which is very long. As you read each paragraph of the proof, try to write a sentence in your own words about what you think the paragraph is saying, in the margins. Hopefully the proof is written well enough that there are well-defined "parts" to the proof ("do X, then define a new function f, then apply X to f, ..."). If not, then from your sentences, separate the proof into your own parts. Now for each part, try to write a sentence (in your own words) about what each part is doing. At this point, it could be that you find your earlier sentences are not quite accurate or don't fit well together (your intuition was "off"), so you may refine them so they fit logically together. Now you have a few sentences summarizing the whole proof. Then (now this last part is from my advisor, Manuel Blum) try to think of one word or phrase for the whole thing. This phrase would be the key idea that, in your mind, is what gets the whole argument started. (For example, most existence proofs via the probabilistic method can be summed up by: "PICK RANDOM". In the case of $MA \subseteq AM$, I would say something like "MAKE ARTHUR SPEAK MORE". But maybe something else in the proof feels to be the "key" idea to you, which is perfectly fine. It's your intuition!) I guess my suggestion may be useful for most mathematics, but I found it very useful for TCS, where many proofs really do boil down to 1-2 really new ideas, and the rest is a synthesis of that idea with what was already known. | {
"source": [
"https://cstheory.stackexchange.com/questions/4806",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/1892/"
]
} |
4,816 | This question is inspired by a similar question about applied mathematics on mathoverflow, and that nagging thought that important questions of TCS such as P vs. NP might be independent of ZFC (or other systems). As a little background, reverse mathematics is the project of finding the axioms necessary to prove certain important theorems. In other words, we start at a set of theorems we expect to be true and try to derive the minimal set of 'natural' axioms that make them so. I was wondering if the reverse mathematics approach has been applied to any important theorems of TCS. In particular to complexity theory. With deadlock on many open questions in TCS it seems natural to ask "what axioms have we not tried using?". Alternatively, have any important questions in TCS been shown to be independent of certain simple subsystems of second-order arithmetic? | Yes, the topic has been studied in proof complexity. It is called Bounded Reverse Mathematics . You can find a table containing some reverse mathematics results on page 8 of Cook and Nguyen's book, " Logical Foundations of Proof Complexity ", 2010. Some of Steve Cook's previous students have worked on similar topics, e.g. Nguyen's thesis, " Bounded Reverse Mathematics ", University of Toronto, 2008. Alexander Razborov (also other proof complexity theorists) has some results on the weak theories needed to formalize the circuit complexity techniques and prove circuit complexity lowerbounds. He obtains some unprovability results for weak theories, but the theories are considered too weak. All of these results are provable in $RCA_0$ (Simpson's base theory for Reverse Mathematics), so AFAIK we don't have independence results from strong theories (and in fact such independence results would have strong consequences as Neel has mentioned, see Ben-David's work (and related results) on independence of $\mathbf{P} vs. \mathbf{NP}$ from $PA_1$ where $PA_1$ is an extension of $PA$). | {
"source": [
"https://cstheory.stackexchange.com/questions/4816",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/1037/"
]
} |
4,882 | Is a deterministic polynomial-time algorithm known for the following problem: Input: a natural number $n$ (in binary encoding) Output: a prime number $p > n$. (According to a list of open problems by Leonard Adleman, the problem was open in 1995.) | The current best unconditional result was given by Odlyzko, which finds a prime $p > N$ in $O(N^{1/2 + o(1)})$ time. The strong conjecture in the Polymath4 project seeks to resolve if this can be done in polynomial time, under reasonable number-theoretic assumptions like the GRH. http://michaelnielsen.org/polymath1/index.php?title=Finding_primes Currently the project seeks to answer the following question: Given a number $N$ and an interval between $N$ and $2N$, check in time $O(N^{1/2 - c})$ for some $c>0$ if the interval contains a prime. So far, they have a strategy which determines the parity of the number of primes in the interval. http://polymathprojects.org/2010/06/29/draft-version-of-polymath4-paper/
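For contrast with these theoretical bounds, the naive approach is easy to code and works well in practice: step upward from $n$ and test each candidate with Miller-Rabin. By the prime number theorem this is expected to try only about $\ln n$ candidates, but each test is either randomized or (as below, with a fixed base set) only guaranteed exact for bounded inputs, so it is not the deterministic polynomial-time algorithm the question asks about. A hedged Python sketch:

```python
def is_probable_prime(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    """Miller-Rabin test.  With this base set the answer is known to be exact
    for n < 3.3 * 10**24; for larger n treat it as 'probably prime'."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False      # base a witnesses compositeness
    return True

def next_prime(n):
    """Smallest (probable) prime p > n, found by linear search."""
    p = n + 1
    while not is_probable_prime(p):
        p += 1
    return p

print(next_prime(100))  # 101
```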
"source": [
"https://cstheory.stackexchange.com/questions/4882",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/392/"
]
} |
4,885 | In quantum information theory, the distance between two quantum channels is often measured using the diamond norm. There are also a number of ways to measure distance between two quantum states, such as the trace distance, fidelity, etc. The Jamiołkowski isomorphism provides a duality between quantum channels and quantum states. This is interesting, to me at least, because the diamond norm is notoriously hard to calculate, and the Jamiołkowski isomorphism would seem to imply some correlation between distance measures of quantum channels and quantum states. So, my question is this: Is there any known relation between the distance in the diamond norm and the distance between the associated states (in some measure)? | For a quantum channel $\Phi$, let us write $J(\Phi)$ to denote the associated state:
$$
J(\Phi) = \frac{1}{n} \sum_{1\leq i,j \leq n} \Phi(|i \rangle \langle j|) \otimes |i \rangle \langle j|.
$$
Here we are assuming that the channel maps $M_n(\mathbb{C})$ (i.e., $n\times n$ complex matrices) to $M_m(\mathbb{C})$ for whatever choice of positive integers $n$ and $m$ you like. The matrix $J(\Phi)$ is sometimes called the Choi matrix or Choi-Jamiolkowski representation of $\Phi$, but it is more frequent that those terms are used when the $\frac{1}{n}$ normalization is omitted. Now, suppose that $\Phi_0$ and $\Phi_1$ are quantum channels. We may define the "diamond norm distance" between them as
$$
\| \Phi_0 - \Phi_1 \|_{\Diamond} =
\sup_{\rho} \: \| (\Phi_0 \otimes \operatorname{Id}_k)(\rho) - (\Phi_1 \otimes \operatorname{Id}_k)(\rho) \|_1
$$
where $\operatorname{Id}_k$ denotes the identity channel from $M_k(\mathbb{C})$ to itself, $\| \cdot \|_1$ denotes the trace norm, and the supremum is taken over all $k \geq 1$ and all density matrices $\rho$ chosen from $M_{nk}(\mathbb{C}) = M_n(\mathbb{C}) \otimes M_{k}(\mathbb{C})$. The supremum always happens to be achieved for some choice of $k\leq n$ and some rank 1 density matrix $\rho$. (Note that the above definition does not work for arbitrary mappings, only those of the form $\Phi = \Phi_0 - \Phi_1$ for completely positive maps $\Phi_0$ and $\Phi_1$. For general mappings, the supremum is taken over all matrices with trace norm 1, as opposed to just density matrices.) If you don't have any additional assumptions on the channels, you cannot say too much about how these norms relate aside from these coarse bounds:
$$
\frac{1}{n} \| \Phi_0 - \Phi_1 \|_{\Diamond} \leq
\| J(\Phi_0) - J(\Phi_1) \|_1 \leq \| \Phi_0 - \Phi_1 \|_{\Diamond}.
$$
For the second inequality, one is essentially settling for the specific choice
$$
\rho =
\frac{1}{n} \sum_{1\leq i,j \leq n} |i \rangle \langle j| \otimes |i \rangle \langle j|
$$
rather than taking the supremum over all $\rho$. The first inequality is a bit tougher, but it would be a reasonable assignment question for a graduate course on quantum information. (At this point I should thank you for your question, because I fully intend to use this question in the Fall offering of my quantum information theory course.) You can achieve either inequality for an appropriate choice of channels $\Phi_0$ and $\Phi_1$, even under the additional assumption that the channels are perfectly distinguishable (meaning $\| \Phi_0 - \Phi_1 \|_{\Diamond} = 2$).
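If it helps to experiment with these quantities numerically, here is a hedged Python/NumPy sketch that builds the normalized Choi matrices $J(\Phi)$ of two single-qubit channels and prints the sandwich on the diamond-norm distance implied by the coarse bounds above. The Kraus-operator representation of the channels and the particular dephasing example are my own illustrative choices, not part of the answer.

```python
import numpy as np

def apply_channel(kraus, rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

def choi_state(kraus, n):
    """Normalized Choi matrix J(Phi) = (1/n) sum_{i,j} Phi(|i><j|) tensor |i><j|."""
    m = kraus[0].shape[0]
    J = np.zeros((m * n, m * n), dtype=complex)
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n), dtype=complex)
            E[i, j] = 1.0
            J += np.kron(apply_channel(kraus, E), E)
    return J / n

def trace_norm(A):
    return np.linalg.svd(A, compute_uv=False).sum()   # sum of singular values

n = 2
phi0 = [np.eye(2, dtype=complex)]                       # identity channel
p = 0.5
phi1 = [np.sqrt(1 - p) * np.eye(2, dtype=complex),      # dephasing channel
        np.sqrt(p) * np.diag([1.0 + 0j, -1.0])]

d_choi = trace_norm(choi_state(phi0, n) - choi_state(phi1, n))
print("|| J(Phi0) - J(Phi1) ||_1 =", d_choi)
print("so", d_choi, "<= diamond-norm distance <=", n * d_choi)
```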
"source": [
"https://cstheory.stackexchange.com/questions/4885",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/204/"
]
} |
5,003 | Suppose, transformation T is defined as given in the diagrams below.
Every vertex ( v ) is replaced by deg(v)-gon . And then graph is reconnected as shown. Those on the left are G s and on the right are T(G) s. It is easy to see that every vertex in T(G) has degree 3 . This paper claims that graph isomorphism of such graphs can be tested in polynomial time. Also, G can be converted to T(G) in polynomial time. Statement I: G1 and G2 are isomorphic iff T(G1) and T(G2) are isomorphic. EDIT: Specifications for G1,G2: G1 = (V1,E1) and G2=(V2,E2) |E1| = |E2| and |V1| = |V2| Sort[{deg(v)|v in V1}] = Sort[{deg(u)| u in V2}] If Statement I is True then do we have solution for GI problem? Note: I am n00b in this field. I invent funny techniques daily. | Along with the already-given answers stating the existence of two graphs G1≠G2 for which T(G1)=T(G2), there is also a different problem: there exist pairs G1=G2 for which T(G1)≠T(G2). This is because the transformation depends not just on the isomorphism class of G, but also on the cyclic ordering of the edges around each vertex of G. Edited to add an example based on Mark Reitblatt's comment: | {
"source": [
"https://cstheory.stackexchange.com/questions/5003",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/612/"
]
} |
5,004 | The question asked is whether the following question is decidable: Problem Given an integer $k$ and Turing machine $M$ promised to be in P, is the runtime of $M$ $O(n^k)$ with respect to input length $n$? A narrow answer of "yes", "no", or "open" is acceptable (with references, proof sketch, or a review of present knowledge), but broader answers too are very welcome. Answer Emanuele Viola has posted a proof that the question is undecidable (see below). Background For me, this question arose naturally in parsing Luca Trevisan's answer to the question Do runtimes for P require EXP resources to upper-bound? … are concrete examples known? The question relates also to a MathOverflow question: What are the most attractive Turing undecidable problems in mathematics? , in a variation in which the word "mathematics" is changed to "engineering," in recognition that runtime estimation is a ubiquitous engineering problem associated to (for example) control theory and circuit design. Thus, the broad objective in asking this question is to gain a better appreciation/intuition regarding which practical aspects of runtime estimation in the complexity class P are feasible (that is, require computational resources in P to estimate), versus infeasible (that is, require computational resources in EXP to estimate), versus formally undecidable. --- edit (post-answer) --- I have added Viola's proof to MathOverflow's community wiki "Attractive Turing-undecidable problems". It is that wiki's first contribution associated to the complexity class P; this attests to the novelty, naturality, and broad scope of Viola's proof (and IMHO its beauty too). --- edit (post-answer) --- Juris Hartmanis' monograph Feasible computations and provable complexity properties (1978) covers much of the same material as Emanuele Viola's proof. | The problem is undecidable. Specifically, you can reduce the halting problem to it as follows. Given an instance $(M,x)$ of the halting problem, construct a new machine $M'$ that works as follows: on inputs of length $n$, it simulates $M$ on $x$ for $n$ steps. If $M$ accepts, loop for $n^2$ steps and stop; otherwise loop for $n^3$ steps and stop. If $M$ halts on $x$ it does so in $t=O(1)$ steps, so the run time of $M'$ would be $O(n^2)$. If $M$ never halts then the run time of $M'$ is at least $n^3$. Hence you can decide if $M$ accepts $x$ by deciding if the run time of $M'$ is $O(n^2)$ or $O(n^3)$.
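To make the construction in this reduction concrete, here is a hedged Python sketch of the machine $M'$. The helper `simulate(M, x, steps)` is an assumed step-bounded universal simulator (returning True iff $M$ accepts $x$ within the given number of steps); it is not part of the proof, only a stand-in so the control flow of $M'$ is visible.

```python
def make_M_prime(M, x, simulate):
    """Return the machine M' built from the halting-problem instance (M, x).

    simulate(M, x, steps) is an assumed step-bounded simulator: it returns
    True iff M accepts x within `steps` steps.
    """
    def M_prime(w):
        n = len(w)
        if simulate(M, x, n):          # M accepted x within n steps
            for _ in range(n ** 2):    # idle for ~n^2 steps, then stop
                pass
        else:
            for _ in range(n ** 3):    # idle for ~n^3 steps, then stop
                pass
    return M_prime
```

Deciding whether the runtime of `M_prime` is $O(n^2)$ or $O(n^3)$ would therefore decide whether $M$ accepts $x$, which is exactly the undecidability argument above.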
"source": [
"https://cstheory.stackexchange.com/questions/5004",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/1519/"
]
} |
5,018 | My question today is (as usual) a bit silly; but I would request you to kindly consider it. I wanted to know about the genesis and/or motivation behind the treewidth concept. I sure understand that it is used in FPT algorithms, but I do not think that that was the reason why this notion was defined. I have written up the scribe notes on this topic in the class of Prof Robin Thomas . I think I understand some of the applications of this concept (as in it transfers separation properties of the tree to the graph decomposed), but for some reason I am not really convinced that the reason this concept was developed was to measure closeness of a graph to a tree. I will try to make myself more clear (I am not sure if I can, please let me know if the question is not clear). I would like to know if similar notions existed elsewhere in some other branch of mathematics from where this notion was supposedly "borrowed". My guess will be topology -- but owing to my lack of background, I cannot say anything. The primary reason as to why I am curious about this would be -- the first time I read its definition, I was not sure why and how would anyone conceive of it and to what end. If the question is not still clear I would finally try stating it this way - Let us pretend the notion of treewidth did not exist. What natural questions (or extensions of some mathematical theorems/concepts) to discrete settings will lead one to conceive of a definition (let me use the word involved) as treewidth's. | If you really want to know what led Neil Robertson and me to tree-width, it wasn't algorithms at all. We were trying to solve Wagner's conjecture that in any infinite set of graphs, one of them is a minor of another, and we were right at the beginning. We knew it was true if we restricted to graphs with no k-vertex path; let me explain why. We knew all such graphs had a simple structure (more exactly, every graph with no k-vertex path has this structure, and every graph with this structure has no 2^k-vertex path); and we knew that in every infinite set of graphs all with this structure, one of them was a minor of another. So Wagner's conjecture was true for graphs with a bound on their maximum path length. We also knew it was true for graphs with no k-star as a minor, again because we had a structure theorem for such graphs. We tried to look for more general minors that had corresponding structure theorems that we could use to prove Wagner's conjecture, and that led us to path-width; exclude ANY tree as a minor and you get bounded path-width, and if you have bounded path-width then there are trees you can't have as a minor. (That was a hard theorem for us; we had a tremendously hard proof in the first Graph Minors paper, don't read it, it can be made much easier.) But we could prove Wagner's conjecture for graphs with bounded path-width, and that meant it was true for graphs not containing any fixed tree as a minor; a big generalization of the path and star cases I mentioned earlier. Anyway, with that done we tried to get further. We couldn't do general graphs, so we thought about planar graphs. We found a structure theorem for the planar graphs that did not contain any fixed planar graph as a minor (this was easy);
it was bounded tree-width. We proved that for any fixed planar graph, all the planar graphs that did not contain it as a minor had bounded tree-width. As you can imagine, that was really exciting; by coincidence, the structure theorem for excluding planar graphs (inside bigger planar graphs) was a natural twist on the structure theorem for excluding trees (inside general graphs). We felt we were doing something right. And that let us prove Wagner's conjecture for all planar graphs, because we had this structure theorem. Since tree-width worked for excluding planar graphs inside bigger planar graphs, it was a natural question whether it worked for excluding planar graphs inside non-planar graphs -- was it true that for every fixed planar graph, all graphs not containing it as a minor had bounded tree-width? This we couldn't prove for a long time, but that's how we got to thinking about tree-width of general graphs. And once we had the concept of tree-width, it was pretty clear that it was good for algorithms. (And yes, we had no idea that Halin had thought about tree-width already.) | {
"source": [
"https://cstheory.stackexchange.com/questions/5018",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/55/"
]
} |
5,080 | I'm new to the CS field and I have noticed that in many of the papers that I read, there are no empirical results (no code, just lemmas and proofs). Why is that? Considering that Computer Science is a science, shouldn't it follow the scientific method? | Mathematics is a science also, and you would have to search for a long time to find published empirical results in this field (although I guess there must be some). There are other scientific domains where "lemmas and proofs" are valued over experience, such as quantum physics. That said, most sciences mix theory and practice (with various ratios), and Computer Science is no exception. Computer Science has its roots in Mathematics (see Turing's biography for instance http://en.wikipedia.org/wiki/Alan_Turing ), and as such many results (generally dubbed as in the field of "theoretical computer science") consist in proofs that computers in some computational model can solve some problem in a given amount of operations (e.g. conferences such as FOCS, STOC, SODA, SoCG, etc..). Nevertheless, many other results of computer science are concerned with the applicability of those theories to practical life, through the analysis of experimental results (e.g. conferences such as WADS, ALENEX, etc...). It is often suggested that the ideal is a good balance between theory and practice, as in "Natural Science", where the observation of experiments prompts the generation of new theories, which in turn suggest new experiments to confirm or infirm those: as such many conferences attempts to accept both experimental and theoretic results (e.g. ESA, ICALP, LATIN, CPM, ISAAC, etc...). The subfield of "Algorithms and Data Structures" in computer science might suffer of an imbalance in the sense that "Theoretical" conferences are generally more highly ranked than experimental ones. I believe that this is not true in other subfields of computer science, such as HCI or AI. Hope it helps? | {
"source": [
"https://cstheory.stackexchange.com/questions/5080",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/3953/"
]
} |
5,096 | Factoring is not known to be NP-complete. This question asked for consequences of Factoring being NP-complete. Curiously, no one asked for consequences of Factoring being in P (maybe because such a question is trivial). So my questions are: Which would be the theoretical consequences of Factoring being in P? How the overall picture of complexity classes would be affected by such a fact? Which would be the practical consequences of Factoring being in P? Please do not say that banking transactions could be in jeopardy, I already know this trivial consequence. | There are pretty much no complexity-theoretic consequences of Factoring being in P. This means that there are no good justifications for factoring being hard, other than that nobody has been able to crack it so far. Polynomial-time factoring would make it possible to take square roots over $Z_n$ (and also over a much more general class of rings as well), and give polynomial-time algorithms for a number of other number-theoretic problems for which the bottleneck in the algorithm is currently factoring. As for practical consequences, banking transactions are probably not that much of a problem -- as soon as it was known that factoring was in P, the banks would switch to some other system, probably causing only a brief period of delays while this was being implemented. Decoding past banking transactions would probably not cause serious problems for the banks. A much more serious problem is that all the communication which was previously protected by RSA would now be in danger of being read. | {
"source": [
"https://cstheory.stackexchange.com/questions/5096",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/947/"
]
} |
5,110 | I think that a size hierarchy theorem for circuit complexity can be a major breakthrough in the area. Is it an interesting approach to class separation? The motivation for the question is that we have to say there is some function that cannot be computed by size $f(n)$ circuits and can be computed by a size $g(n)$ circuit where $f(n)<o(g(n))$. (and possibly something regarding the depth) so, if $f(m)g(n) \leq n^{O(1)}$, the property seem to be unnatural (it violates the largeness condition). Clearly we can't use diagonalization, because we aren't in a uniform setting. Is there a result in this direction? | In fact it is possible to show that, for every $f$ sufficiently small (less than $2^n/n$), there are functions computable by circuits of size $f(n)$ but not by circuits of size $f(n)-O(1)$, or even $f(n)-1$, depending on the type of gates that you allow. Here is a simple argument that shows that there are functions computable in size $f(n)$ but not size$ f(n)-O(n)$. We know that: there is a function $g$ that requires circuit complexity at least $2^n/O(n)$, and, in particular, circuit complexity more than $f(n)$. the function $z$ such that $z(x)=0$ for every input $x$ is computable by a constant-size circuit. if two functions $g_1$ and $g_2$ differ only in one input, then their circuit complexity differs by at most $O(n)$ Suppose that $g$ is nonzero on $N$ inputs. Call such inputs $x_1,\ldots,x_N$. We can consider, for each $i$, the function $g_i(x)$ which is the indicator function of the set $\{ x_1,\ldots,x_i \}$; thus $g_0=0$ and $g_N=g$. Clearly there is some $i$ such that $g_{i+1}$ has circuit complexity more than $f(n)$ and $g_i$ has circuit complexity less than $f(n)$. But then $g_{i}$ has circuit complexity less than $f(n)$ but more than $f(n) - O(n)$. | {
"source": [
"https://cstheory.stackexchange.com/questions/5110",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/3847/"
]
} |
5,120 | It is impossible to write a programming language that allows all machines that halt on all inputs and no others. However, it seems to be easy to define such a programming language for any standard complexity class. In particular, we can define a language in which we can express all efficient computations and only efficient computations. For instance, for something like $P$: take your favorite programming language, and after you write your program (corresponding to Turing Machine $M'$), add three values to the header: an integer $c$, an integer $k$, and a default output $d$. When the program is compiled, output a Turing machine $M$ that given input $x$ of size $n$ runs $M'$ on $x$ for $c n^k$ steps. If $M'$ does not halt before the $c n^k$ steps are up, output the default output $d$. Unless I am mistaken, this programming language will allow us to express all computations in $P$ and nothing more. However, this proposed language is inherently non-interesting. My question: are there programming languages that capture subsets of computable functions (such as all efficiently computable functions) in a non-trivial way? If there are not, is there a reason for this? | One language attempting to express only polynomial time computations is the soft lambda calculus. Its type system is rooted in linear logic. A recent thesis addresses polynomial time calculi, and provides a good summary of recent developments based on this approach. Martin Hofmann has been working on the topic for quite some time. An older list of relevant papers can be found here; many of his papers continue in this direction. Other work takes the approach of verifying that the program uses a certain amount of resources, using Dependent Types or Typed Assembly Language. Yet other approaches are based on resource-bounded formal calculi, such as variants of the ambient calculus. These approaches have the property that well-typed programs satisfy some pre-specified resource bounds. The resource bound could be time or space, and generally can depend upon the size of the inputs. Early work in this area is on strongly normalising calculi, meaning that all well-typed programs halt. System F, aka the polymorphic lambda calculus, is strongly normalising. It has no fixed point operator, but is nonetheless quite expressive, though I don't think it is known what complexity class it corresponds to. By definition, any strongly normalising calculus expresses some class of terminating computations. The programming language Charity is a quite expressive functional language that halts on all inputs. I don't know what complexity class it can express, but the Ackermann function can be written in Charity. | {
"source": [
"https://cstheory.stackexchange.com/questions/5120",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/1037/"
]
} |
5,228 | I've been reading some articles on dependent types and programming contracts. From the majority of what I've read, it seems that contracts are dynamically checked constraints and dependent types are statically checked. There have been some papers that have made me think that it's possible to have contracts that are partially statically checked: Hybrid Type Checking (C. Flanagan - 2006) Unifying Hybrid Types and Contracts (J. Gronski, C. Flanagan - 2007) With this, there seems to be a significant amount of overlap and my categorisation of contracts vs dependent types starts to disappear. Is there something deeper in either concepts that I'm missing? Or are these really just fuzzy categories of representing the same underlying concept? | On a practical level, contracts are assertions. They let you check (quantifier-free) properties of individual executions of a program. The key idea at the heart of contract checking is the idea of blame -- basically, you want to know who is at fault for a contract violation. This can either be an implementation (which does not compute the value it promised) or the caller (who passed a function the wrong sort of value). The key insight is that you can track blame using the same machinery as embedding-projection pairs in the inverse limit construction of domain theory. Basically, you switch from working with assertions to working with pairs of assertions, one of which blames the program context and the other of which blames the program. Then this lets you wrap higher-order functions with contracts, because you can model the contravariance of the function space by swapping the pair of assertions. (See Nick Benton's paper "Undoing Dynamic Typing" , for example.) Dependent types are types. Types specify rules for asserting whether or not certain programs are acceptable or not. As a result, they do not include things like the notion of blame, since their function is to prevent ill-behaved programs from existing in the first place. There is nothing to blamed since only well-formed programs are even grammatical utterances. Pragmatically, this means that it is very easy to use dependent types to speak of properties of terms with quantifiers (eg., that a function works for all inputs). These two views are not the same, but they are related. Basically, the point is that with contracts, we start with a universal domain of values, and use contracts to cut things down. But when we use types, we try to specify smaller domains of values (with a desired property) up front. So we can connect the two via type-directed families of relations (ie logical relations). For example, see Ahmed, Findler, Siek and Wadler's recent "Blame for All" , or Reynolds' "The Meaning of Types: from Intrinsic to Extrinsic Semantics" . | {
"source": [
"https://cstheory.stackexchange.com/questions/5228",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/4052/"
]
} |
5,245 | This is a follow up question to What is the difference between proofs and programs (or between propositions and types)? What program would correspond to a non-constructive (classical) proof of the form $\forall k \ T(e,k) \lor \lnot \forall k \ T(e,k)$? (Assume that $T$ is some interesting decidable relation e.g. $e$-th TM does not halt in $k$ steps.) (ps: I am posting this question partly because I am interested in learning more about what Neel means by " the Godel-Gentzen translation is a continuation-passing transformation" in his comment .) | This an interesting question. Obviously one can't expect to have a program that decides for each $e$ whether $\forall k T(e, k)$ holds or not, as this would decide the Halting Problem. As mentioned already, there are several ways of interpreting proofs computationally: extensions of Curry-Howard, realizability, dialectica, and so on. But they would all computationally interpret the theorem you mentioned more or less in the following way. For simplicity consider the equivalent classical theorem (1) $\exists i \forall j (\neg T(e, j) \to \neg T(e, i))$ This is (constructively) equivalent to the one mentioned because given $i$ we can decide whether $\forall k T(e, k)$ holds or not by simply checking the value of $\neg T(e, i)$. If $\neg T(e, i)$ holds then $\exists i \neg T(e, i)$ and hence $\neg \forall i T(e, i)$. If on the other hand $\neg T(e, i)$ does not hold then by (1) we have $\forall j (\neg T(e, j) \to \bot)$ which implies $\forall j T(e, j)$. Now, again we can't compute $i$ in (1) for each given $e$ because we would again solve the Halting Problem. What all interpretations mentioned above would do is to look at the equivalent theorem (2) $\forall f \exists i' (\neg T(e, f(i')) \to \neg T(e, i'))$ The function $f$ is called the Herbrand function. It tries to compute a counter example $j$ for each given potential witness $i$. It is clear that (1) and (2) are equivalent. From left to right this is constructive, simply take $i' = i$ in (2), where $i$ is the assumed witness of (1). From right to left one has to reason classically. Assume (1) was not true. Then, (3) $\forall i \exists j \neg (\neg T(e, j) \to \neg T(e, i))$ Let $f'$ be a function witnessing this, i.e. (4) $\forall i \neg (\neg T(e, f'(i)) \to \neg T(e, i))$ Now, take $f = f'$ in (2) and we have $(\neg T(e, f'(i')) \to \neg T(e, i'))$, for some $i'$. But taking $i = i'$ in (4) we obtain the negation of that, contradiction. Hence (2) implies (1). So, we have that (1) and (2) are classically equivalent. But the interesting thing is that (2) has now a very simple constructive witness. Simply take $i' = f(0)$ if $T(e, f(0))$ does not hold, because then the conclusion of (2) is true; or else take $i' = 0$ if $T(e, f(0))$ holds, because then $\neg T(e, f(0))$ does not hold and the premise of (2) is false, making it again true. Hence, the way to computationally interpret a classical theorem like (1) is to look at a (classically) equivalent formulation which can be proven constructively, in our case (2). The different interpretations mentioned above only diverge on the way the function $f$ pops up. In the case of realizability and the dialectica interpretation this is explicitly given by the interpretation, when combined with some form of negative translation (like Goedel-Gentzen's). 
In the case of Curry-Howard extensions with call-cc and continuation operators the function $f$ arises from the fact that the program is allowed to "know" how a certain value (in our case $i$) will be used, so $f$ is the continuation of the program around the point where $i$ is computed. Another important point is that you want the passage from (1) to (2) to be "modular", i.e. if (1) is used to prove (1'), then its interpretation (2) should be used in a similar way to prove the interpretation of (1'), say (2'). All the interpretations mentioned above do that, including the Goedel-Gentzen negative translation.
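To tie this back to the original question ("what program corresponds to the proof?"), the constructive witness for the reformulation (2) described above is a genuinely tiny program. Here it is as a hedged Python sketch; representing the decidable relation $T$ as a two-argument Boolean function and the Herbrand function $f$ as an ordinary function are my own modelling choices, not part of the answer.

```python
def witness(T, e, f):
    """Witness i' for (2):   not T(e, f(i'))  ->  not T(e, i').

    T(e, k) is the decidable relation; f is the Herbrand function supplied
    by the environment (or, in the call-cc reading, by the continuation).
    """
    if not T(e, f(0)):
        return f(0)   # the conclusion not T(e, i') holds outright
    else:
        return 0      # the premise not T(e, f(0)) is false, so the implication is vacuous
```

All the work of the classical proof is hidden in how $f$ is produced; the extracted program itself only has to inspect $T(e, f(0))$ once.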
"source": [
"https://cstheory.stackexchange.com/questions/5245",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/186/"
]
} |
5,251 | A number of geometric problems are easy when considered in $R^1$, but are NP-complete in $R^d$ for $d\geq2$ (including one of my favourite problems, unit disk cover). Does anyone know of a problem which is polytime solvable for $R^1$ and $R^2$, but NP-complete for $R^d,d\geq3$? More generally, do problems exist which are NP-complete for $R^k$ but polytime solvable for $R^{k-1}$, where $k\geq3$? | Set cover by half-spaces. Given a set of points in the plane, and a set of halfplanes computing the minimum number of halfplanes covering the point sets can be solved in polynomial time in the plane. The problem however is NP hard in 3d (it is harder than finding a min cover by subset of disks of points in 2d). In 3d you are given a subset of halfspaces and points, and you are looking for min number of halfspaces covering the points. The polytime algorithm in 2d is described here: http://valis.cs.uiuc.edu/~sariel/papers/08/expand_cover/ | {
"source": [
"https://cstheory.stackexchange.com/questions/5251",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/1092/"
]
} |
5,323 | Currently, solving either a $NP$-complete problem or a $PSPACE$-complete problem is infeasible in the general case for large inputs. However, both are solvable in exponential time and polynomial space. Since we are unable to build nondeterministic or 'lucky' computers, does it make any difference to us if a problem is $NP$-complete or $PSPACE$-complete? | This is a very nice question that I have thought about a lot: Does the fact that a problem is $NP$-complete or $PSPACE$-complete actually affect the worst-case time complexity of the problem? More fuzzily, does such a distinction really affect the "typical case" complexity of the problem in practice? Intuition says that the $PSPACE$-complete problem is harder than the $NP$-complete one, regardless of what complexity measure you use. But the situation is subtle. It could be, for example, that $QBF$ (Quantified Boolean Formulas, the canonical $PSPACE$-complete problem) is in subexponential time if and only if $SAT$ (Satisfiability, the canonical $NP$-complete problem) is in subexponential time. (One direction is obvious; the other direction would be a major result!) If this is true, then maybe from the "I just want to solve this problem" point of view, it's not a big deal whether the problem is $PSPACE$-complete or $NP$-complete: either way, a subexponential algorithm for one implies a subexponential algorithm for the other. Let me be a devil's advocate, and give you an example where one problem happens to be "harder" than the other, but yet turns out to be "more tractable" than the other as well. Let $F(x_1,\ldots,x_{n})$ be a Boolean formula on $n$ variables, where $n$ is even. Suppose you have a choice between two formulas you want to decide: $\Phi_1 = (\exists x_1)(\exists x_2)\cdots (\exists x_{n-1})(\exists x_{n})F(x_1,\ldots,x_{n})$. $\Phi_2 = (\exists x_1)(\forall x_2)\cdots (\exists x_{n-1})(\forall x_{n})F(x_1,\ldots,x_{n})$ (That is, in $\Phi_2$, the quantifiers alternate.) Which one do you think is easier to solve? Formulas of type $\Phi_1$, or formulas of type $\Phi_2$? One would think that the obvious choice is $\Phi_1$, as it is only $NP$-complete to decide it, whereas $\Phi_2$ is a $PSPACE$-complete problem. But in fact, according to our best known algorithms, $\Phi_2$ is an easier problem. We have no idea how to solve $\Phi_1$ for general $F$ in less than $2^n$ steps. (If we could do this, we'd have new formula size lower bounds!) But $\Phi_2$ can be easily solved for any $F$ in randomized $O(2^{.793 n})$ time, using randomized game tree search! For a reference, see Chapter 2, Section 2.1, in Motwani and Raghavan. The intuition is that adding universal quantifiers actually constrains the problem, making it easier to solve, rather than harder. The game tree search algorithm relies heavily on having alternating quantifiers, and cannot handle arbitrary quantifications. Still, the point remains that problems can sometimes get "simpler" under one complexity measure, even though they may look "harder" under another measure.
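For readers who want to see the shape of that game-tree algorithm, here is a hedged Python sketch of the recursion it is built on: evaluate the alternating formula as a two-player game, visiting the two values of each variable in random order and short-circuiting as soon as the current quantifier is settled. The sketch is only the skeleton; the $O(2^{.793 n})$ bound quoted above comes from the analysis of this random-order pruning on AND-OR game trees (Motwani and Raghavan, Section 2.1), and on an arbitrary $F$ the worst case is still exponential.

```python
import random

def eval_alternating(F, n, assignment=()):
    """Decide (exists x1)(forall x2)...(forall xn) F by randomized game-tree search.

    F takes a tuple of n booleans.  The answer is exact; the randomness
    only affects how quickly subtrees get pruned.
    """
    i = len(assignment)
    if i == n:
        return F(assignment)
    values = [False, True]
    random.shuffle(values)                      # random move order enables early pruning
    if i % 2 == 0:                              # x_{i+1} is existentially quantified
        return any(eval_alternating(F, n, assignment + (v,)) for v in values)
    else:                                       # x_{i+1} is universally quantified
        return all(eval_alternating(F, n, assignment + (v,)) for v in values)

# Tiny example with n = 4 variables.
F = lambda x: (x[0] or x[1]) and (x[1] or not x[2]) and (x[2] or x[3])
print(eval_alternating(F, 4))
```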
"source": [
"https://cstheory.stackexchange.com/questions/5323",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/988/"
]
} |
5,399 | It is well known that if $\mathbf{P}=\mathbf{NP}$ then the polynomial hierarchy collapses and $\mathbf{P}=\mathbf{PH}$. This can easily be understood inductively using oracle machines.
The question is - why can't we continue the inductive process beyond a constant level of alternations and prove $\mathbf{P}=\mathbf{AltTime}(n^{O(1)})$ (aka $\mathbf{AP}=\mathbf{PSPACE}$)? I am looking for an intuitive answer. | The proof for $\mathbf{P}=\mathbf{AltTime}(O(1))$ ($=\mathbf{PH}$) is an induction using $\mathbf{P}=\mathbf{NP}$. The induction shows that for any natural number $k$, $\mathbf{P}=\mathbf{AltTime}(k)$ (and $\mathbf{AltTime}(O(1))$ is just their union). The induction does not work when the number of alternation can change with the input size (i.e. when the number of possible alternations of the machine is not a number but a function of the input size, i.e. we are not showing that an execution of the machine on a single input can be reduced to no alternation, we are showing that the executions of the machine on all inputs can be "uniformly" reduced to no alternation). Let's look at a similar but simpler statement. We want to show that the identity function $id(n)=n$ eventually dominates all constant functions ($f \ll g$ iff for all but finitely many $n$ $f(n) \leq g(n)$). It can be proven say by induction. For all $k$, $k \ll n$ (i.e. $f_k \ll id$ where $f_k(n)=k$), but we don't have this for non-constant functions like $n^2$, $n^2 \not \ll n$. | {
"source": [
"https://cstheory.stackexchange.com/questions/5399",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/3808/"
]
} |
5,619 | Stable Marriage Problem: http://en.wikipedia.org/wiki/Stable_marriage_problem I am aware that for an instance of SMP, many other stable marriages are possible apart from the one returned by the Gale-Shapley algorithm. However, if we are given only $n$, the number of men/women, we ask the following question: can we construct a preference list that gives the maximum number of stable marriages? What is the upper bound on such a number? | For an instance with $n$ men and $n$ women, the trivial upper bound is $n!$, and nothing better is known. For a lower bound, Knuth (1976) gives an infinite family of instances with $\Omega(2.28^n)$ stable matchings, and Thurber (2002) extends this family to all $n$.
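For small $n$ one can simply count stable matchings by brute force over all $n!$ perfect matchings, which is a handy way to experiment with candidate preference profiles. A hedged Python sketch (the encoding of preference lists as rank-ordered lists of indices is my own convention):

```python
from itertools import permutations

def count_stable_matchings(men_pref, women_pref):
    """Count stable matchings by brute force.

    men_pref[m] and women_pref[w] are preference lists (most preferred
    first), given as lists of indices.
    """
    n = len(men_pref)
    mrank = [{w: r for r, w in enumerate(men_pref[m])} for m in range(n)]
    wrank = [{m: r for r, m in enumerate(women_pref[w])} for w in range(n)]
    count = 0
    for wife_of in permutations(range(n)):       # wife_of[m] = woman matched to man m
        husband_of = {w: m for m, w in enumerate(wife_of)}
        stable = True
        for m in range(n):
            for w in range(n):
                if w == wife_of[m]:
                    continue
                # (m, w) block the matching if both prefer each other to their partners
                if mrank[m][w] < mrank[m][wife_of[m]] and wrank[w][m] < wrank[w][husband_of[w]]:
                    stable = False
                    break
            if not stable:
                break
        count += stable
    return count

# A 2x2 instance in which both perfect matchings are stable.
men = [[0, 1], [1, 0]]      # man 0: woman 0 first;  man 1: woman 1 first
women = [[1, 0], [0, 1]]    # woman 0: man 1 first;  woman 1: man 0 first
print(count_stable_matchings(men, women))   # 2
```

This is only usable for very small $n$, but it is enough to sanity-check small candidate constructions.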
"source": [
"https://cstheory.stackexchange.com/questions/5619",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/3777/"
]
} |
5,635 | If you could rename dynamic programming, what would you call it? | Richard Bellman's autobiography suggests that he chose the term “dynamic programming” to be intentionally distracting. The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was secretary of Defense, and he actually had a pathological fear and hatred of the word ‘research’. I'm not using the term lightly; I'm using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term ‘research’ in his presence. You can imagine how he felt, then, about the term ‘mathematical’. The RAND Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose? In the first place I was interested in planning, in decision making, in thinking. But planning, is not a good word for various reasons. I decided therefore to use the word ‘programming’. I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying—I thought, let’s kill two birds with one stone. Let’s take a word that has an absolutely precise meaning, namely ‘dynamic’, in the classical physical sense. It also has a very interesting property as an adjective, and that is it’s impossible to use the word ‘dynamic’ in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It’s impossible. Thus, I thought “dynamic programming’ was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities. (As Russell and Norvig point out in their AI textbook, however, this story must be a creative embellishment of the truth. Bellman first used the phrase "dynamic programming" in 1952 , and Charles Erwin Wilson did not become Secretary of Defense until 1953.) Anyway, Bellman's original motivation suggests multistage planning , but at least for algorithmic purposes, I'd prefer something like frugal bottom-up recursion , only with fewer syllables. | {
"source": [
"https://cstheory.stackexchange.com/questions/5635",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/4377/"
]
} |
5,675 | I was wondering if there is a good bibliography of attempts to investigate the Collatz conjecture as a formal grammar? (or any other attempts in the CS community to deal with this class of generative phenomena & their "halting" properties). | I guess these papers by Jeffrey C. Lagarias could help: The 3x+1 problem: An annotated bibliography (1963--1999) (sorted by author). The 3x+1 Problem: An Annotated Bibliography, II (2000-2009). Another good source is the recent book "The Ultimate Challenge". In it, the chapter "Generalized $3x+1$ functions and the theory of computation", section 8, may also be of interest. | {
"source": [
"https://cstheory.stackexchange.com/questions/5675",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/4406/"
]
} |
5,696 | Question: How do 'tactics' work in proof assistants? They seem to be ways of specifying how to rewrite a term into an equivalent term (for some definition of 'equivalent'). Presumably there are formal rules for this, how can I learn what they are and how they work? Do they involve more than choice of order for Beta-reduction? Background about my interest: Several months ago, I decided I wanted to learn formal math. I decided to go with type theory because from preliminary research it seems like The Right Way To Do Things (tm) and because it seems to unify programming and mathematics which is fascinating . I think my eventual goal is to be able to use and understand a proof assistant like Coq (I think especially being able to use dependent types as I am curious about things like representing the types of matrixes). I started off knowing very little, not even rudimentary functional programming, but I'm making slow progress. I've read and understood large chunks of Types and Programming Languages (Pierce) and learned some Haskell and ML. | The basic tactics either run an inference rule forwards or backwards (for example, convert hypotheses $A$ and $B$ into $A\wedge B$ or convert goal $A\wedge B$ into two goals $A$ and $B$ with same hypotheses), apply a lemma (~ function application), split up a lemma about some inductive type into a case for each constructor, and so on. Basic tactics may succeed or fail depending upon the context in which they are applied. More advanced tactics are like little functional programs that run the basic tactics, perform pattern matching over the terms in the goal and/or assumptions, make choices based on the success or failure of tactics, and so forth. More advanced tactics deal with arithmetic and other specific kinds of theories. The key paper on Coq's tactic language is the following: D. Delahaye. A Tactic Language for the System Coq . In Proceedings of Logic for Programming and Automated Reasoning (LPAR), Reunion Island, volume 1955 of Lecture Notes in Computer Science, pages 85–95. Springer-Verlag, November 2000. The formal rules which form the essence of the basic tactics are defined in the Coq users guide here or in Chapter 4 of the pdf . A quite instructive paper on implementing tactics and tacticals (essentially tactics that take other tactics as arguments) is: Amy Felty. Implementing Tactics and Tacticals in a Higher-order Programming Language Journal of Automated Reasoning, 11(1):43-81, August 1993. Coq's tactic language has the limitation that the proofs written using it hardly resemble proofs one does by hand. Several attempts have been made to enable clearer proofs. These include Isar (for Isabelle/HOL) and Mizar 's proof language. Aside: Did you also know that the programming language ML was originally designed to implement tactics for the LCF theorem prover? Many ideas developed for ML, such as type inference, have influenced modern programming languages. | {
"source": [
"https://cstheory.stackexchange.com/questions/5696",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/4425/"
]
} |
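To make the answer above concrete, here is what a tiny tactic proof looks like in practice. The question is about Coq, but the sketch below is written in Lean, a closely related proof assistant, purely for illustration; Coq's split and exact tactics behave analogously, and the comments indicate which inference rule each step runs backwards.

-- A minimal tactic proof (Lean syntax, shown only to illustrate the general mechanism).
example (A B : Prop) (ha : A) (hb : B) : A ∧ B := by
  constructor      -- run ∧-introduction backwards: the goal A ∧ B splits into goals A and B
  · exact ha       -- close the first subgoal with the hypothesis ha
  · exact hb       -- close the second subgoal with the hypothesis hb

-- A "tactical" combines tactics: <;> applies the next tactic to every generated subgoal.
example (A B : Prop) (ha : A) (hb : B) : A ∧ B := by
  constructor <;> assumption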
5,834 | Can anyone explain briefly (if thats possible!) or refer me to a reference, summarizing the differences between untyped lambda calculus and the more common typed lambda calculi? I'm particularly looking for statements of their expressive power, equivalences to logic/arithmetic systems or computation methods, and analogies to programming languages if applicable. While I certainly intend to read, something like a reference table outlining the calculi and their equivalences/differences/place in the hierarchy would be a HUGE reference for helping me sort them out. Not saying the below is correct, just trying to sketch together some of the impressions i have to see if they at least serve as a starting point (or something to correct!) Untyped lambda calculus - eq. to first order logic - cannot do X Simply typed lambda calculus - eq to ... logic, related to Lisp? 'Polymorphic' lambda calc -
etc. Calculus of Constructions - intutionist logic? Combinatory Logic - comparable to ??? typed lambda calculus, related to APL/J kind of languages If this ties into the lambda cube and its three axes all the better. While I'm familiar with the basics of lambda calculus and programming with functional languages, I have never wrapped my head around, or made any significant connections to, the type systems involved and different flavors of lambda (and maybe pi?) calculi. When I attempt to research this i cant help but find myself sidetracked, opening up many browser tabs and branching in so many directions I never get into any of them with any depth! I'm not sure if what I'm asking for is reasonable, but hopefully at the very least I've painted enough of a picture to suggest some reading that can explain what im looking for? | Your table is a bit confused; here's a better one. Untyped lambda calculus -- no logical interpretation, as Andrej notes Simply typed lambda calculus -- intuitionistic propositional logic Polymorphic lambda calculus -- pure second-order logic (ie, without first-order quantifiers) Dependent types -- generalization of first-order logic Calculus of constructions -- generalization of higher-order logic Type dependency is more general than first-order quantification, since it turns proofs into objects you can quantify over. Lambda calculi corresponding to ordinary intuitionistic FOL exist, but are not widely used enough to have a special name -- people tend to go straight to dependent types. You can also relate the syntactic form of a calculus to logical systems, as well. Combinator calculi (eg, SKI combinators) -- Hilbert-style systems A-normal form -- sequent calculus Ordinary typed lambda calculus -- natural deduction | {
"source": [
"https://cstheory.stackexchange.com/questions/5834",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/4509/"
]
} |
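Since the question mentions wanting dependent types for things like matrices, here is a minimal sketch (in Lean; the names Vec and Matrix are mine, and an inductive family would be the more idiomatic encoding) of what "the type may depend on a value" means one rung up the table above.

-- A length-indexed vector: the type of a value depends on a natural number.
def Vec (α : Type) : Nat → Type
  | 0     => Unit
  | n + 1 => α × Vec α n

-- An m×n matrix is then a length-m vector of length-n rows, so dimension
-- mismatches become type errors rather than runtime errors.
def Matrix (α : Type) (m n : Nat) : Type := Vec (Vec α n) m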
5,836 | I'm going over the course notes at CIS 500: Software Foundations and the exercises are a lot of fun. I'm only at the third exercise set but I would like to know more about what's happening when I use tactics to prove things like forall (n m : nat), n + n = m + m -> n = m. | One place to start is the Coq reference manual (pdf). Chapter 4 describes the underlying logic of Coq, and ultimately everything is based on this. It's called the calculus of (co)inductive constructions, and many papers describe it. Getting your hands on the Coq'Art book Interactive Theorem Proving and Program Development provides a more leisurely, but not cheap, introduction to Coq. To learn about how tactics work, have a look at this earlier question: How do 'tactics' work in proof assistants? To build up the required theory, you need to learn about Type Theory. Most closely related to the theory underlying a proof assistant is probably Per Martin-Löf's Intuitionistic Type Theory notes (or book) or the book Programming in Martin-Löf Type Theory, which is really about writing and proving theorems in type theory. A programming language perspective on type theory can be obtained from Pierce's Types and Programming Languages. Girard et al's Proofs and Types, which also addresses the importance of the Curry-Howard Correspondence, is another excellent reference.
Then you are probably well and truly ready to read Coquand and Huet's The Calculus of Constructions . Finally chase up some of the references in the back of the Coq manual. There are other proof assistants , HOL, NuPRL, Mizar, Twelf, etc., and they have their theory too, so you can learn a lot too by reading in that direction. Finally, for an overview of the history and future of proof assistants, check out the recent article by Herman Geuvers. | {
"source": [
"https://cstheory.stackexchange.com/questions/5836",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/-1/"
]
} |
5,907 | A few years ago, there was some work by Joel Friedman relating lower circuit bounds to Grothendieck cohomology (see papers: http://arxiv.org/abs/cs/0512008 , http://arxiv.org/abs/cs/0604024 ). Has this line of thought brought any new insights into boolean complexity, or does it remains rather a mathematical curiosity? | I corresponded with Joel Friedman about 3 years ago on this topic. At the time he said that his approach had not led to any significant new insights into complexity theory, though he still thought it was a promising tack. Basically, Friedman tries to rephrase the problems of circuit complexity in the language of sheaves on a Grothendieck topology. The hope is that this process will allow geometric intuition to be applied to the problem of finding circuit lower bounds. While it's certainly worth checking to see if this path leads anywhere, there are heuristic reasons to be skeptical. Geometric intuition works best in the context of smooth varieties, or things that are sufficiently similar to smooth varieties that the intuition doesn't totally break down. In other words, you need some structure in order for geometric intuition to gain a foothold. But circuit lower bounds by their very nature must confront arbitrary computations , which are difficult to analyze precisely because they seem to be so structureless. Friedman admits right up front that the Grothendieck topologies he considers are highly combinatorial, and far removed from the usual objects of study in algebraic geometry. As a side comment, I'd say that it's important not to get too excited about an idea just because it uses unfamiliar, high-powered machinery. The machinery might be very effective at solving the problems that it was designed for, but for it to be useful for attacking a known hard problem in another domain, there needs to be some compelling argument why the foreign machinery is well adapted to address the fundamental obstacle in the problem of interest. | {
"source": [
"https://cstheory.stackexchange.com/questions/5907",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/2083/"
]
} |
6,054 | I have come to a rather startling discovery in regards to a question that has been posed to me a number of times through my now almost 1 year of CS studies. Solving this problem is of great interest to many people, and if I have indeed done so... What do I do? I see a few options, but none seem to ensure that my work gets recognized as my own: I could stash away the notes till the time comes for my masters thesis. This would force me to share both work and credit with my supervisor, though. I could attempt to publish it... But who would want to take work from a 9 months old computer scientist? I am at a loss - in attempting to solve a difficult problem I have only given rise to an even more difficult one! | Worrying about someone taking credit for your work is very common among amateurs and people just starting out, but in my experience is far less common with more experienced researchers. I'm not really sure exactly what the reason for this is, but I have encountered this phenomenon numerous times. I would venture that it is perhaps because those with more experience in the field know how uncommon it is for one person to essentially steal another's results. Most people are relatively good, or at worst benign, and wouldn't dream of taking credit for work they haven't contributed to. Couple that with the fact that being caught plagiarizing someone else's work would be career suicide, and you can be pretty confident that no one you talk to is likely to write up your idea and pass it off as their own. As Suresh says, you could always upload it to the arxiv, but I would be inclined not to do this, as if it turns out that the work is not novel or there is a major error, then you cannot actually delete the paper, but rather will have to post a retraction, and the original will still be there for all to see. Even now, I still ask colleagues to check any paper I consider relatively important before uploading a manuscript. If I were you, I would be inclined to talk to one of my lecturers in the first instance, as they can probably give you a feel for whether your solution is novel (and correct), and if it meets both of these criteria, they should be able to give you an idea of how best to proceed. Alternatively, you could just post it as a question here, asking if the result was previously known, or if anybody could point you to a good survey paper on the problem. You'd have a time stamp from the post time, so you wouldn't have to worry about that. If you got an encouraging response here (i.e. that it was not currently known), then you could consider how to proceed from there. If on the other hand, it was already known, or there was a flaw, then it wouldn't really matter. Technically, you could also simply write it up and submit it to a journal. As Dave says, no one involved in the process will actually know your level (the first time, it's kind of a kick when you get the correspondence addressed to you as either Dr. or Professor), unless one of the reviewers happens to know you personally. However, I would not suggest doing so. You need to search the literature to make sure your idea is actually novel, and as Dave mentions, the writing of papers is an art in and of itself. You would probably need to read a lot of journal articles to even get the style right. | {
"source": [
"https://cstheory.stackexchange.com/questions/6054",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/4666/"
]
} |
6,201 | This question is targeted at people who assign problems: teachers, student assistants, tutors, etc. This has happened to me a handful of times in my 12-year career as a professor: I hurriedly assigned some problem from the text thinking "this looks good." Then later realized I couldn't solve it. Few things are more embarrassing. Here's a recent example: "Give a linear-time algorithm that determines if digraph $G$ has an odd-length cycle." I assigned this thinking it was trivial, only to later realize my approach wasn't going to work. My question: what do you think is the "professional" thing to do: Obsess on the problem until you solve it, then say nothing to your students. Cancel the problem without explanation and move on with your life. Ask for help on cstheory.SE (and suffer the response, "is this a homework problem?") Note: I'm looking for practical and level-headed suggestions that I perhaps haven't thought of. I realize my question has a strong subjective element since handling this situation involves one's own tastes to a large extent, so I understand if readers would prefer to see it not discussed. | Yes, sadly, I've done this several times, as well as the slightly more forgivable sin of assigning a problem that I can solve, but only later realizing that the solution requires tools that the students haven't seen. I think the following is the most professional response (at least, it's the response I've settled on after several false starts): Immediately and publicly admit the mistake. Explain steps 2 and 3. Give every student full credit for the problem. Yes, even if they submit nothing. Grade all submitted solutions normally, but award the resulting points as extra credit. In particular, give the usual partial credit for partial solutions. The first point is both the hardest and the most important. If you try to cover your ass, you will lose the respect and attention of your students (who are not stupid), which means they won't try as hard, which means they won't learn as well, which means you haven't done your job. I don't think it's fair to let students twist in the wind with questions I honestly don't think they can answer without some advance warning. (I regularly include open questions as homework problems in my advanced grad classes, but I warn the students at the start of the semester.) Educational , sure, but not fair. It's sometimes useful to give hints or an outline (as @james and @Martin suggest) to make the problem more approachable; otherwise, almost nobody will even try. Obviously, this is only possible if you figure out the solution first. On the other hand, sometimes it's appropriate for nobody to even try. (For example, "Describe a polynomial-time algorithm for X" when X is NP-hard, or if the setting is a timed exam.) If you still can't solve the problem yourself after sweating buckets over it, relax. Probably none of the students will solve it either, but if you're lucky, you'll owe someone a LOT of extra credit and a recommendation letter. And if you later realize the solution is easy after all, well, I guess you screwed up twice. Go to step 1. | {
"source": [
"https://cstheory.stackexchange.com/questions/6201",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/3866/"
]
} |
6,256 | As there was no response at Lambda the Ultimate I try it here again: term rewriting systems are used for instance in automated theorem proving a symbolic calculation, and of course to define formal grammars. There are some programming languages based in term rewriting, but as far as I understand the concept is more known as pattern matching . Pattern matching is used a lot in
functional languages. Barry Jay has created a whole theory called pattern calculus , but he only mentions term rewriting in brief. I have the feeling that they all refer to the same basic idea, so can you use term rewriting and pattern matching synonymously? | One way of looking at these two concepts is to say that pattern matching is a feature of programming languages for combining discrimination on constructors and destructing terms (while at the same time selecting and locally naming term fragments) safely, compactly and efficiently. Research on pattern matching typically focusses on implementation efficiency, e.g. on how to minimise the number of comparisons the matching mechanism has to do. In contrast, term rewriting is a general model of computation that investigates a wide range of (potentially non-deterministic) methods of replacing subterms of syntactic expressions (more precisely an element of a term-algebra over some set of variables) with other terms. Research on term rewriting systems is usually about abstract properties of rewriting systems such as confluence, determinism and termination, and more specifically about how such properties are or are not preserved by algebraic operations on rewrite systems, i.e. to what extent these properties are compositional. Clearly there are conceptual overlaps between both, and the distinction is to a degree traditional, rather than technical. A technical difference is that term rewriting happens under arbitrary contexts (i.e. a rule $(l, r)$ induces rewrites $C[l\sigma] \rightarrow C[r\sigma]$ for arbitrary contexts $C[.]$ and substitutions $\sigma$), while pattern matching in modern languages like Haskell, OCaml or Scala provides only for rewriting 'at the top' of a term. This restriction is also, I think, imposed in Jay's pattern calculus.
Let me explain what I mean by this restriction. With pattern matching in the OCaml, Haskell, or Scala sense, you cannot say something like

match M with
| C[ x :: _ ] -> printf "%i ...\n" x
| C[ [] ] -> printf "[]"

What is C[.] here? It is supposed to be a variable that ranges over one-holed contexts. But languages like OCaml, Haskell or Scala don't give programmers variables that range over arbitrary (one-holed) contexts, only variables that range over values. In other words, in such languages you cannot pattern match at an arbitrary position in a term. You always have to specify the path from the root of the pattern to the parts that you are interested in. I guess the key reason for imposing this restriction is that otherwise pattern matching would be non-deterministic, because a term might match a pattern in more than one way. For example, the term (true, [9,7,4], "hello", 7) matches the pattern C[7] in two ways, assuming C[.] ranged over such contexts. | {
"source": [
"https://cstheory.stackexchange.com/questions/6256",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/758/"
]
} |
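To make the last technical point of the answer above concrete, here is a small sketch (Python, with terms represented naively as nested tuples; the representation and all function names are made up for illustration). ML-style pattern matching tries a rule only at the root of a term, whereas term rewriting also tries it inside an arbitrary one-holed context, i.e. at any subterm.

# Terms are ("op", arg1, arg2, ...) tuples or atoms (ints/strings).
# A rewrite rule is a pair of functions: one tries to match a term and returns
# a substitution (or None), the other builds the replacement from it.

def rewrite_at_root(term, rule):
    """ML-style pattern matching: the rule is tried at the top of the term only."""
    matcher, build = rule
    s = matcher(term)
    return build(s) if s is not None else None

def rewrite_anywhere(term, rule):
    """Term rewriting: try the rule at the root, else recurse into subterms,
    i.e. rewrite under an arbitrary one-holed context."""
    result = rewrite_at_root(term, rule)
    if result is not None:
        return result
    if isinstance(term, tuple):
        for i in range(1, len(term)):
            sub = rewrite_anywhere(term[i], rule)
            if sub is not None:
                return term[:i] + (sub,) + term[i + 1:]
    return None  # no redex found anywhere

# Example rule: x + 0 -> x
plus_zero = (
    lambda t: {"x": t[1]} if isinstance(t, tuple) and len(t) == 3 and t[0] == "+" and t[2] == 0 else None,
    lambda s: s["x"],
)

print(rewrite_at_root(("*", ("+", "y", 0), 2), plus_zero))   # None: the rule does not apply at the root
print(rewrite_anywhere(("*", ("+", "y", 0), 2), plus_zero))  # ('*', 'y', 2): rewritten under the context (* [.] 2)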
6,448 | Often, when we take part in TCS conferences, we notice some little details that we wish the conference organisers would have taken care of. And when we are organising conferences, we have already forgotten it. Hence the question: Which small steps we could easily take to improve TCS conferences ? Hopefully, this question could become a resource that we could double-check whenever we are organising conferences, to make sure that we do not repeat the same mistakes again and again... I am here interested in relatively small and inexpensive details – something that conference organisers could have easily done if only they had thought about it in time. For example, it might be a useful piece of information that could be put on the conference web page well in advance; a five-dollar gadget that may save the day; something to consider when choosing the restaurant for the banquet; the best timing of the coffee breaks; or your ideal design of the conference badges. We can cover here all aspects of conference arrangements (including paper submissions, program committees, reviews, local arrangements, etc.). This is a community wiki question. Please post one idea per answer, and please vote other answers up or down depending on how important they are in your opinion. | Keep prices down : you can get lovely hors d'oeuvres at a $750 -per-seat hotel-hosted conference, it is true, but it tends to detract from the intellectual atmosphere you can get with a $100 -per-seat university-hosted conference. Also, you don't just get the people attending who deliver talks. Allow students at the university to attend for free; Space parallel sessions , so that people can really move between them; But also encourage interaction between speakers and attendees at parallel sessions beforehand, so that parallel sessions are also autonomous communities. This encourages interaction, and the sense that parallel sessions are moving the discussion in their subfield forward. Some conferences (e.g., the German linguistics conference, DGfS) have taken this to the conclusion of having all parallel sessions be independent workshops. | {
"source": [
"https://cstheory.stackexchange.com/questions/6448",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/74/"
]
} |
6,563 | The basic idea of backwards induction is to start with all the possible final positions of a game in which player X wins. So for chess, look at all the ways White can checkmate Black. Now work backwards to all the possible moves/positions that would allow White to move in to one of those positions. If White ever found herself in such a position she could win by moving to the relevant checkmating move. Now we work backwards another step and so on. Eventually we get back to all the possible first moves White could make. The point is, once we've done this, we know that we have White's best response to any move Black makes. Recently (last five years or so) Checkers was "solved" in this way. Obviously Noughts and Crosses (what the colonials might call "Tic-Tac-Toe") has been solved for ages. At the very least since this xkcd but presumably long before. So the question is: what factors does this sort of procedure depend on? The number of possible legal positions, presumably. But also perhaps the number of legal moves at any given node... And given this, how complex is this sort of problem? Bonus question: how long before a $2000 PC can solve checkers in a day? Chess? Go? (Of course for this you also have to take into account increasing speed of home computers...) I've added the graph-algorithms tag because you can represent these games as trees, but if I'm abusing the tag please add something more appropriate | As @Joe points out, chess is trivial to solve in $O(1)$ time using a lookup table. (An actual implementation of this algorithm would require a universe significantly larger than the one we live in, but this is a site for theoretical computer science. The size of the constant is irrelevant.) There is obviously no canonical $n\times n$ generalization of chess, but several variants have been considered; their complexity depends on how the rules about moves without captures and repeating positions are generalized. If a draw is declared after a polynomial number of capture-free moves, or after any position repeats a polynomial number of times, then any $n\times n$ chess game ends after a polynomial number of moves, so the problem is clearly in PSPACE. Storer proved that this variant is PSPACE-hard. For the variant with no limits on repeated positions or capture-free moves, the number of legal $n\times n$ chess positions is exponential in $n$, so the problem is clearly in EXPTIME. Fraenkel and Lichtenstein proved that this variant is EXPTIME-hard. | {
"source": [
"https://cstheory.stackexchange.com/questions/6563",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/1622/"
]
} |
6,596 | So, Bloom filters are pretty cool -- they are sets that support membership checking with no false negatives, but a small chance of a false positive. Recently though, I've been wanting a "Bloom filter" that guarantees the opposite: no false positives, but potentially false negatives. My motivation is simple: given a huge stream of items to process (with duplicates), we'd like to avoid processing items we've seen before. It doesn't hurt to process a duplicate, it is just a waste of time. Yet, if we neglected to process an element, it would be catastrophic. With a "reverse Bloom filter", one could store the items seen with little space overhead, and avoid processing duplicates with high probability by testing for membership in the set. Yet I can't seem to find anything of the sort. The closest I've found are " retouched Bloom filters ", which allow one to trade selected false positives for a higher false negative rate. I don't know how well their data structure performs when one wants to remove all false positives, however. Anyone seen anything like this? :) | One answer is to use a big hash table and when it fills up start replacing elements in it rather than finding (nonexistent) empty slots elsewhere for them. You don't get the nice fixed-rate of false answers that you do with Bloom filters, but it's better than nothing. I believe this is standard e.g. in chess software for keeping track of positions that have already been searched. | {
"source": [
"https://cstheory.stackexchange.com/questions/6596",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/3062/"
]
} |
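A minimal sketch of the data structure suggested above (Python; the class name and table size are my own choices, and real transposition tables add refinements such as replacement policies). It never gives a false positive, because it answers "seen" only when the stored element equals the query, but it can give false negatives once collisions start overwriting entries.

class ApproximateSeenSet:
    """Fixed-size table: membership answers have no false positives,
    but may have false negatives (overwritten entries are forgotten)."""

    def __init__(self, size=1 << 20):
        self.slots = [None] * size

    def _index(self, item):
        return hash(item) % len(self.slots)

    def seen(self, item):
        # No false positives: say yes only if the slot holds this exact item.
        return self.slots[self._index(item)] == item

    def add(self, item):
        # On collision, simply replace whatever was there (possible false negative later).
        self.slots[self._index(item)] = item

# Stream-deduplication use: skip work only when we are sure we saw the item.
seen = ApproximateSeenSet()
for item in ["a", "b", "a", "c"]:
    if not seen.seen(item):
        seen.add(item)
        print("processing", item)   # the duplicate "a" is skipped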
6,607 | Other than ACM, IEEE computer Society, Google Scholar which is the best site to get bibtex entries for computer science related articles ? | You don't get correct TCS Bibtex entries from anywhere. CiteSeer, Google Scholar, etc.: the Bibtex entries are garbage, worse than useless. Examples: Many conference papers in Google Scholar are exported as an @article , with (some version of) the title of the book in the journal field. Google Scholar abbreviates the first names of the authors. And then of course we have ridiculous things like author = {Submission, H.C.F.} – Google Scholar populates the fields by picking some words from the cover page of the paper. Publishers: the entries are a bit better, but you cannot rely on them – you must check every single field manually anyway. IEEE tends to be worst, ACM and Springer are little bit better, but even with the latter, you need to do manual editing and cross-checking. Springer has a strange idea of what is the title of a proceedings book. ACM gives book titles in a strange mixture of upper-case and lower-case letters. And, as usual, if there are accents in the authors' names, or any math in the title, all bets are off. Examples of booktitle fields for conference papers: Springer might produce something like booktitle = {Distributed Computing} for a proceedings volume – it requires a lot of imagination to figure out that it actually means "Proc. 23rd International Symposium on Distributed Computing (DISC 2009)". IEEE exports unreadable titles such as booktitle = {Sensor, Mesh and Ad Hoc Communications and Networks, 2007. SECON '07. 4th Annual IEEE Communications Society Conference on} . ACM is usually fairly good, but you need to fix the mixture of upper and lower case letters: booktitle = {Proceedings of the twenty-first annual symposium on Parallelism in algorithms and architectures} . Examples of titles with math: ACM might produce (\&\#948;+1) instead of {$(\Delta+1)$} . IEEE might produce Otilde(radic(log n)) instead of {$\tilde{O}(\sqrt{\log n})$} . I am not kidding you. MathSciNet: high-quality Bibtex entries for journal articles, but the coverage of TCS is poor, and conference papers are not necessarily that well indexed. "Core TCS" conferences such as FOCS, STOC, and SODA seem to be covered fairly well, but anything else is more patchy. For example, there seem to be few papers indexed from PODC or SPAA. The entries of the conference papers are not perfect. You can find something like @incollection instead of @inproceedings , or proceeding books such as BOOKTITLE = {Distributed computing} . DBLP: reasonably good, but once again, a lot of data comes from the publishers, and you need to double-check it anyway (beware of accents). Examples of accents: Michal Hanckowiak instead of Micha{\l} Ha{\'n}{\'c}kowiak . As JɛffE pointed out in the comments, the correct title of a conference volume is a matter of taste (and a matter of interpretation). For example, LNCS volumes may have useless main titles and ridiculously long subtitles; therefore even if you had pedantically correct bibliographic entries, you most likely would like to edit some of them slightly, for readability and for consistency. But as soon as you start to tweak the titles of the conference volumes, it becomes obvious that even for your own purposes , there are many possible right answers. When you are running out of space, you might prefer "Proc. STOC 2010" to "Proceedings of the 42th ACM Symposium on Theory of Computing (STOC, Cambridge, MA, USA, June 2010)". 
This answer at the TeX site gives one example of how to deal with multiple versions of the titles, so that you can easily switch between different variants. | {
"source": [
"https://cstheory.stackexchange.com/questions/6607",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/3162/"
]
} |
6,660 | Do you know sensible algorithms that run in polynomial time in (Input length + Output length), but whose asymptotic running time in the same measure has a really huge exponent/constant (at least, where the proven upper bound on the running time is in such a way)? | Algorithms based on the regularity lemma are good examples for polynomial-time algorithms with terrible constants (either in the exponent or as leading coefficients). The regularity lemma of Szemeredi tells you that in any graph on $n$ vertices you can partition the vertices into sets where the edges between pairs of sets are "pseudo-random" (i.e., densities of sufficiently large subsets look like densities in a random graph). This is a structure that is very nice to work with, and as a consequence there are algorithms that use the partition.
The catch is that the number of sets in the partition is an exponential tower in the parameter of pseudo-randomness (See here: http://en.wikipedia.org/wiki/Szemer%C3%A9di_regularity_lemma ). For some links to algorithms that rely on the regularity lemma, see, e.g.: http://www.cs.cmu.edu/~ryanw/regularity-journ.pdf | {
"source": [
"https://cstheory.stackexchange.com/questions/6660",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/4290/"
]
} |
6,735 | I already read examples of formulas in CTL but not in LTL and vice-versa, but I'm having trouble gaining a mental grasp on LTL formulas and really what, at the heart, is the difference. | To really understand the difference between LTL and CTL you have to study the semantics of both languages. LTL formulae denote properties that will be interpreted on each execution of a program. For each possible execution (a run), which can be see as a sequence of events or states on a line — and this is why it is named "linear time" — satisfiability is checked on the run with no possibility of switching to another run during the checking. On the other hand, CTL semantics checks a formula on all possible runs and will try either all possible runs ( A operator) or only one run ( E operator) when facing a branch. In practice this means that some formulae of each language cannot be stated in the other language. For example, the reset property (an important reachability property for circuit design) states that there is always a possibility that a state can be reached during a run, even if it is never actually reached ( AG EF reset ). LTL can only state that the reset state is actually reached and not that it can be reached. On the other hand, the LTL formula $\Diamond\Box s$ cannot be translated into CTL. This formula denotes the property of stability : in each execution of the program, s will finally be true until the end of the program (or forever if the program never stops). CTL can only provide a formula that is too strict ( AF AG s ) or too permissive ( AF EG s ). The second one is clearly wrong. It is not so straightforward for the first. But AF AG s is erroneous. Consider a system that loops on A1 , can go from A1 to B and then will go to A2 on the next move. Then the system will stay in A2 state forever. Then "the system will finally stay in a A state" is a property of the type $\Diamond\Box s$. It is obvious that this property holds on the system. However, AF AG s cannot capture this property since the opposite is true : there is a run in which the system will always be in the state from which a run finally goes in a non A state. I don't know if this answers to your question, but I would like to add some comments. There is a lot of discussion of the best logic to express properties for software verification... but the real debate is somewhere else. LTL can express important properties for software system modelling (fairness) when the CTL must have a new semantics (a new satisfiability relation) to express them. But CTL algorithms are usually more efficient and can use BDD-based algorithms. So... there is no best solution. Only two different approaches, so far. One of the commenters suggests Vardi's paper "Branching versus Linear Time: Final Showdown" . | {
"source": [
"https://cstheory.stackexchange.com/questions/6735",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/7104/"
]
} |
6,748 | In one sentence: would the existence of a hierarchy for $\mathsf{BPTIME}$ imply any derandomization results? A related but vaguer question is: does the existence of a hierarchy for $\mathsf{BPTIME}$ imply any difficult lower bounds? Does the resolution of this problem hit against a known barrier in complexity theory? My motivation for this question is to understand the relative difficulty (with respect to other major open problems in complexity theory) of showing a hierarchy for $\mathsf{BPTIME}$. I am assuming that everyone believes that such a hierarchy exists, but please correct me if you think otherwise. Some background : $\mathsf{BPTIME}(f(n))$ contains those languages whose membership can be decided by a probabilistic Turning machine in time $f(n)$ with bounded probability of error. More precisely, a language $L \in \mathsf{BPTIME}(f(n))$ if there exists a probabilistic Turing machine $T$ such that for any $x \in L$ the machine $T$ runs in time $O(f(|x|))$ and accepts with probability at least $2/3$, and for any $x \not \in L$, $T$ runs in time $O(f(|x|))$ and rejects with probability at least $2/3$. Unconditionally, it is open whether $\mathsf{BPTIME}(n^c) \subseteq \mathsf{BPTIME}(n)$ for all $c > 1$. Barak showed that there exists a strict hierarchy for $\mathsf{BPTIME}$ for machines with $O(\log n)$ advice. Fortnow and Santhanam improved this to 1 bit of advice. This leads me to think that a proving the existence of a probabilistic time hierarchy is not that far off. On the other hand, the result is still open and I cannot find any progress after 2004. References, as usual, can be found in the Zoo . The relation to derandomization comes from Impagliazzo and Wigderson's results: they showed that under a plausible complexity assumption, $\mathsf{BPTIME}(n^d) \subseteq \mathsf{DTIME}(n^c)$ for any constant $d$ and some constant $c$. By the classical time-hierarchy theorems for deterministic time, this implies a time hierarchy for probabilistic time. I am asking the converse question: does a probabilistic hiearchy hit against a barrier related to proving derandomization results? EDIT: I am accepting Ryan's answer as a more complete solution. If anyone has observations about what stands between us and proving the existence of a hierarchy for probabilistic time, feel free to answer/comment. Of course, the obvious answer is that $\mathsf{BPTIME}$ has a semantic definition that defies classical techniques. I am interested in less obvious observations. | Let PTH be the hypothesis that there exists a probabilistic time hierarchy. Suppose the answer to your question is true, i.e., "PTH implies $BPP \subseteq TIME[2^{n^{c}}]$" for some fixed $c$. Then, $EXP \neq BPP$ would be unconditionally true. Consider two cases: If PTH is false, then $EXP \neq BPP$. This is the contrapositive of what Lance noted. If PTH is true, then "PTH implies $BPP \subseteq TIME[2^{n^{c}}]$" so again $EXP \neq BPP$. In fact, even an infinitely-often derandomization of BPP under PTH would entail $EXP \neq BPP$ unconditionally. So whatever barriers apply to proving $EXP \neq BPP$, they apply to proving statements of the kind "PTH implies derandomization". | {
"source": [
"https://cstheory.stackexchange.com/questions/6748",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/4896/"
]
} |
6,753 | $\mathsf{NC}$ captures the idea of efficiently parallelizable, and one interpretation of it is problems that are solvable in time $O(\log^c n)$ using $O(n^k)$ parallel processors for some constants $c$, $k$. My question is if there is an analogous complexity class where time is $n^c$ and number of processors is $2^{n^k}$. As a fill-in-the-blank question: $\mathsf{NC}$ is to $\mathsf{P}$ as _ _ is to $\mathsf{EXP}$ In particular, I am interested in a model where we have an exponential number of computers arranged in a network with polynomially bounded degree (lets say the network is independent of the input/problem or atleast somehow easy to construct, or any other reasonable uniformity assumption). At each time step: Every computer reads the polynomial number of polynomial sized messages it received in the previous time step. Every computer runs some polytime computation that can depend on these messages. Every computer passes a message (of polylength) to each of its neighbours. What is the name of a complexity class corresponding to these sort of models? What is a good place to read about such complexity classes? Are there any complete-problems for such a class? | I believe the class you are looking for is $PSPACE$. Suppose you have $exp(n^k) = 2^{O(n^k)}$ processors fitting the requirements: Every computer reads the polynomial number of polynomial sized messages it received in the previous time step. Every computer runs some polytime computation that can depend on these messages. Every computer passes a message (of polylength) to each of its neighbours. This can be modeled by having a circuit with $poly(n)$ layers, where each layer has $exp(n^k)$ "gates", and each "gate" does a polynomial time computation (satisfying part 2) with polynomial fan-in (satisfying part 1), and has polynomial fan-out (satisfying part 3). Since each gate computes a polynomial time function, they each can be replaced by a polynomial size circuit (with AND/OR/NOT) in the usual way. Note the polynomial fan-ins and fan-outs can be made to be 2, by only increasing the depth by a $O(\log n)$ factor. What remains is a $poly(n)$ depth uniform circuit with $exp(n^k)$ AND/OR/NOT gates. This is precisely alternating polynomial time, which is precisely $PSPACE$. | {
"source": [
"https://cstheory.stackexchange.com/questions/6753",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/1037/"
]
} |
6,755 | Scott Aaronson's blog post today gave a list of interesting open problems/tasks in complexity. One in particular caught my attention: Build a public library of 3SAT instances, with as few variables and clauses as possible, that would have noteworthy consequences if solved. (For example, instances encoding the RSA factoring challenges.) Investigate the performance of the best current SAT-solvers on this library. This triggered my question: What's the standard technique for reducing RSA/factoring problems to SAT, and how fast is it? Is there such a standard reduction? Just to be clear, by "fast" I don't mean polynomial time. I'm wondering whether we have tighter upper bounds on the reduction's complexity. For example, is there a known cubic reduction? | One approach to encode Factoring (RSA) to SAT is to use multiplicator circuits (every circuit can be encoded as CNF). Let's assume we are given an integer $C$ with $2n$ bits, $C=(c_1,c_2,\cdots,c_{2n})_2$ . We are interested in finding two $n$ -bit integers $A=(a_1,\cdots,a_n)$ and $B=(b_1,\cdots,b_n)$ whose product is $C=A*B$ . The most naive encoding can be something like this; we know that: $$c_{2n}= a_n \land b_n$$ $$c_{2n-1}= (a_n\land b_{n-1}) xor (a_{n-1}\land b_n)$$ $$Carry:d_{2n-1}= (a_n\land b_{n-1}) \land (a_{n-1}\land b_n)$$ $$c_{2n-2}= (a_n\land b_{n-2}) xor (a_{n-1}\land b_{n-1}) xor (a_{n-2}\land b_{n}) xor d_{2n-1}$$ ... Then using Tseitin transformation, the above encoding can be translated into CNF. This approach produces a relatively small CNF. But this encoding does not support "Unit Propagation" and so, the performance of SAT Solvers are really bad. There are other circuit for multiplication which can be used for this purpose, but they produce a larger CNF. | {
"source": [
"https://cstheory.stackexchange.com/questions/6755",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/969/"
]
} |
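To illustrate the "every circuit can be encoded as CNF" step in the answer above, here is a minimal sketch (Python, DIMACS-style integer literals; all helper names and the toy 4-bit setup are my own, and a real encoder for RSA-sized instances is considerably more careful). It Tseitin-encodes the basic gates and a full adder, the building blocks from which the school-book multiplier circuit is wired together.

from itertools import count

_fresh = count(1)
clauses = []                        # the CNF, as a list of integer-literal tuples

def new_var():
    return next(_fresh)

def gate_and(a, b):                 # Tseitin clauses for o <-> (a AND b)
    o = new_var()
    clauses.extend([(-a, -b, o), (a, -o), (b, -o)])
    return o

def gate_or(a, b):                  # o <-> (a OR b)
    o = new_var()
    clauses.extend([(a, b, -o), (-a, o), (-b, o)])
    return o

def gate_xor(a, b):                 # o <-> (a XOR b)
    o = new_var()
    clauses.extend([(a, b, -o), (-a, -b, -o), (a, -b, o), (-a, b, o)])
    return o

def full_adder(x, y, cin):          # one column of the addition: returns (sum, carry-out)
    t = gate_xor(x, y)
    return gate_xor(t, cin), gate_or(gate_and(x, y), gate_and(t, cin))

# Sketch of the factoring instance: variables for the bits of A and B, partial
# products a_i AND b_j, chains of full adders to sum the columns, and finally
# unit clauses fixing each output bit to the corresponding bit of the known C.
a_bits = [new_var() for _ in range(4)]
b_bits = [new_var() for _ in range(4)]
partial_products = [[gate_and(ai, bj) for bj in b_bits] for ai in a_bits]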
6,864 | I've come across the polynomial algorithm that solves 2SAT. I've found it boggling that 2SAT is in P where all (or many others) of the SAT instances are NP-Complete. What makes this problem different? What makes it so easy (NL-Complete - even easier than P)? | Here is a further intuitive and unpretentious explanation along the lines of MGwynne's answer. With $2$-SAT, you can only express implications of the form $a \Rightarrow b$, where $a$ and $b$ are literals. More precisely, every $2$-clause $l_1 \lor l_2$ can be understood as a pair of implications: $\lnot l_1 \Rightarrow l_2$ and $\lnot l_2 \Rightarrow l_1$. If you set $a$ to true, $b$ must be true as well. If you set $b$ to false, $a$ must be false as well. Such implications are straightforward: there is no choice, you have only $1$ possibility, there is no room for case-multiplication. You can just follow every possible implication chain, and see if you ever derive both $\lnot l$ from $l$ and $l$ from $\lnot l$: if you do for some $l$, then the 2-SAT formula is unsatisfiable, otherwise it is satisfiable. It is the case that the number of possible implication chains is polynomially bounded in the size of the input formula. With $3$-SAT, you can express implications of the form $a \Rightarrow b \lor c$, where $a$, $b$ and $c$ are literals. Now you are in trouble: if you set $a$ to true, then either $b$ or $c$ must be true, but which one? You have to make a choice: you have 2 possibilities. Here is where case-multiplication becomes possible, and where the combinatorial explosion arises. In other words, $3$-SAT is able to express the presence of more than one possibility, while $2$-SAT doesn't have such ability. It is precisely such presence of more than one possibility ($2$ possibilities in case of $3$-SAT, $k-1$ possibilities in case of $k$-SAT) that causes the typical combinatorial explosion of NP-complete problems. | {
"source": [
"https://cstheory.stackexchange.com/questions/6864",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/1536/"
]
} |
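The implication-chain argument above translates almost directly into code. Here is a small sketch (Python; it uses plain reachability, so it is quadratic rather than the optimal linear-time strongly-connected-components algorithm, and the function names are mine): the formula is unsatisfiable exactly when some literal and its negation imply each other.

from collections import defaultdict

def two_sat_satisfiable(clauses, n_vars):
    """clauses: list of 2-literal tuples; literals are +i / -i for variable i."""
    # Each clause (l1 v l2) contributes the implications -l1 -> l2 and -l2 -> l1.
    implies = defaultdict(set)
    for l1, l2 in clauses:
        implies[-l1].add(l2)
        implies[-l2].add(l1)

    def reaches(src, dst):            # follow implication chains by depth-first search
        stack, visited = [src], {src}
        while stack:
            u = stack.pop()
            if u == dst:
                return True
            for v in implies[u] - visited:
                visited.add(v)
                stack.append(v)
        return False

    # Unsatisfiable iff some x and -x imply each other.
    for x in range(1, n_vars + 1):
        if reaches(x, -x) and reaches(-x, x):
            return False
    return True

print(two_sat_satisfiable([(1, 2), (-1, 2), (1, -2), (-1, -2)], 2))  # False: x1 and -x1 are forced
print(two_sat_satisfiable([(1, 2), (-1, 2)], 2))                     # True (e.g. set x2 = true)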
6,959 | It is sometimes claimed that Ketan Mulmuley's Geometric Complexity Theory is the only plausible program for settling the open questions of complexity theory like P vs. NP question. There has been several positive commentaries from famous complexity theorists about the program. According to Mulmuley it will take a long time to achieve the desired results. Entering the area is not easy for general complexity theorists and needs considerable efforts to get a handle on algebraic geometry and representation theory. Why is GCT considered to be capable of settling P vs. NP? What is the value of the claim if it is expected to take more than 100 years to reach there? What are its advantages to other current approaches and those that may rise in the next 100 years? What is the current state of the program? What is the next target of the program? Has there been any fundamental criticism of the program? I would prefer answers that are understandable by a general complexity theorist with the minimum background from algebraic geometry and representation theory assumed. | As pointed out by many others, much has already been said on many of these questions by Mulmuley, Regan, and others. I will offer here just a brief summary of what I think are some key points that haven't yet been mentioned in the comments. As to why GCT is considered plausibly capable of showing $P \neq NP$ many answers have already been given elsewhere and in the comments above, though I think no one has yet mentioned that it appears to avoid the known barriers (relativization, algebrization, natural proofs). As to its value - I think even if it takes us 100 years, we will learn something new about complexity along the way by studying it from this angle. Some progress is being made on understanding the algebraic varieties, the representations, and the algorithmic questions that arise in GCT. The principal researchers I know of who have done work on this are (in no particular order): P. Burgisser, C. Ikenmeyer, M. Christandl, J. M. Landsberg, K. V. Subrahmanyan, J. Blasiak, L. Manivel, N. Ressayre, J. Weyman, V. Popov, N. Kayal, S. Kumar, and of course K. Mulmuley and M. Sohoni. More concretely, Burgisser and Ikenmeyer just presented (STOC 2011) some modest lower bounds on matrix multiplication using the GCT approach ($n^2 + 2$, compared to the currently best known $\frac{3}{2}n^2 +O(n)$). Although these lower bounds are not new bounds, they at least give some proof-of-concept, in that the representation-theoretic objects hypothesized to exist in GCT do exist for these modest lower bounds on this model problem. N. Kayal has a couple papers on the algorithmic question of testing when one polynomial is in the orbit of another or is a projection of another. He shows that in general these problems are NP-hard but that for special functions like permanent, determinant, and elementary symmetric polynomials, these problems are decidable in P. This is a step towards some of Mulmuley's conjectures (that certain harder problems - deciding orbit closure - are in P for special functions such as determinant). I don't have much more specific to say on this than the answer to 2. As far as I know there has not been fundamental criticism, in the sense that I have not seen any criticism which really discredits the program in any way. 
There has certainly been discussion about why such techniques should be necessary, the value of the program given the long time horizons expected, etc., but I would characterize these more as healthy discussion than fundamental criticism. | {
"source": [
"https://cstheory.stackexchange.com/questions/6959",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/14197/"
]
} |
7,027 | Lately I've been dealing with compression-related algorithms, and I was wondering what the best compression ratio achievable by lossless data compression is. So far, the only source I could find on this topic was Wikipedia: Lossless compression of digitized data
such as video, digitized film, and
audio preserves all the information,
but can rarely do much better than 1:2
compression because of the intrinsic
entropy of the data. Unfortunately, Wikipedia's article doesn't contain a reference or citation to support this claim. I'm not a data-compression expert, so I'd appreciate any information you can provide on this subject, or if you could point me to a more reliable source than Wikipedia. | I am not sure if anyone has yet explained why the magical number seems to be exactly 1:2 and not, for example, 1:1.1 or 1:20. One reason is that in many typical cases almost half of the digitised data is noise , and noise (by definition) cannot be compressed. I did a very simple experiment: I took a grey card . To a human eye, it looks like a plain, neutral piece of grey cardboard. In particular, there is no information . And then I took a normal scanner – exactly the kind of device that people might use to digitise their photos. I scanned the grey card. (Actually, I scanned the grey card together with a postcard. The postcard was there for sanity-checking so that I could make sure the scanner software does not do anything strange, such as automatically add contrast when it sees the featureless grey card.) I cropped a 1000x1000 pixel part of the grey card, and converted it to greyscale (8 bits per pixel). What we have now should be a fairly good example of what happens when you study a featureless part of a scanned black & white photo , for example, clear sky. In principle, there should be exactly nothing to see. However, with a larger magnification, it actually looks like this: There is no clearly visible pattern, but it does not have a uniform grey colour. Part of it is most likely caused by the imperfections of the grey card, but I would assume that most of it is simply noise produced by the scanner (thermal noise in the sensor cell, amplifier, A/D converter, etc.). Looks pretty much like Gaussian noise; here is the histogram (in logarithmic scale): Now if we assume that each pixel has its shade picked i.i.d. from this distribution, how much entropy do we have? My Python script told me that we have as much as 3.3 bits of entropy per pixel . And that's a lot of noise. If this really was the case, it would imply that no matter which compression algorithm we use, the 1000x1000 pixel bitmap would be compressed, in the best case, into a 412500-byte file. And what happens in practice: I got a 432018-byte PNG file, pretty close. If we over-generalise slightly, it seems that no matter which black & white photos I scan with this scanner, I will get the sum of the following: "useful" information (if any), noise, approx. 3 bits per pixel. Now even if your compression algorithm squeezes the useful information into << 1 bits per pixel, you will still have as much as 3 bits per pixel of incompressible noise. And the uncompressed version is 8 bits per pixel. So the compression ratio will be in the ballpark of 1:2, no matter what you do. Another example, with an attempt to find over-idealised conditions: A modern DSLR camera, using the lowest sensitivity setting (least noise). An out-of-focus shot of a grey card (even if there was some visible information in the grey card, it would be blurred away). Conversion of the RAW file into a 8-bit greyscale image, without adding any contrast. I used typical settings in a commercial RAW converter. The converter tries to reduce noise by default. Moreover, we are saving the end result as an 8-bit file – we are, in essence, throwing away the lowest-order bits of the raw sensor readings! And what was the end result? 
It looks much better than what I got from the scanner; the noise is less pronounced, and there is exactly nothing to be seen. Nevertheless, the Gaussian noise is there: And the entropy? 2.7 bits per pixel . File size in practice? 344923 bytes for 1M pixels. In a truly best-case scenario, with some cheating, we pushed the compression ratio to 1:3. Of course all of this has exactly nothing to do with TCS research, but I think it is good to keep in mind what really limits the compression of real-world digitised data. Advances in the design of fancier compression algorithms and raw CPU power is not going to help; if you want to save all the noise losslessly, you cannot do much better than 1:2. | {
"source": [
"https://cstheory.stackexchange.com/questions/7027",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/5616/"
]
} |
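The "my Python script told me" step in the answer refers to nothing more exotic than the Shannon entropy of the empirical pixel histogram. Here is a minimal sketch (the file name is a placeholder, and the i.i.d.-pixels assumption is the same one the answer makes explicit):

import math
from collections import Counter

def entropy_bits_per_symbol(samples):
    """Shannon entropy H = -sum p log2 p of the empirical distribution."""
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# With 8-bit greyscale pixels read from a scan (any image library will do), e.g.:
#   pixels = list(Image.open("grey_card.png").convert("L").getdata())
#   print(entropy_bits_per_symbol(pixels))                       # ~3.3 bits/pixel for the noisy scan
#   print(len(pixels) * entropy_bits_per_symbol(pixels) / 8)     # size lower bound in bytes, under the i.i.d. model

print(entropy_bits_per_symbol([0, 0, 1, 1]))   # 1.0 bit per symbol, as for a fair coin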
7,129 | This may be considered a stupid question. I am not a computer science major (and I'm not a mathematics major yet, either), so please excuse me if you think that the following questions display some major erroneous assumptions. While there are plans to formalize Fermat's Last Theorem (see this presentation ), I have never read or heard that a computer can prove even a "simple" theorem like Pythagoras'. Why not? What is (/are) the main difficulty(/ies) behind establishing a fully autonomous proof by a computer, aided only by some "built-in axioms"? A second question I would like to ask is the following: Why are we able to formalize many proofs, while it is currently impossible for a computer to prove a theorem on its own? Why is that "harder" ? | While there are plans to formalize Fermat's Last Theorem (see this presentation), I have never read or heard that a computer can prove even a "simple" theorem like Pythagoras'. In 1949 Tarski proved that almost everything in The Elements lies within a decidable fragment of logic, when he showed the decidability of the first-order theory of real closed fields. So the Pythagorean theorem in particular is not talked about much because it's not especially hard. In general, the thing that makes theorem proving hard is induction. First-order logic without induction has a very useful property called the subformula property: true formulas $A$ have proofs involving only the subterms of $A$. This means that it's possible to build theorem provers which can decide what to prove next based on an analysis of the theorem they are instructed to prove. (Quantifier instantiation can make the right notion of subformula a bit more subtle, but we have reasonable techniques to cope with this.) However, the addition of the induction schema to the axioms breaks this property. The only proof of a true formula $A$ may require doing a proof $B$ which is not syntactically a subformula of $A$. When we run into this in a paper proof, we say we have to "strengthen the induction hypothesis". This is quite hard for a computer to do, because the appropriate strengthening can require both significant domain-specific information, and an understanding of why you're proving a particular theorem. Without this information, truly relevant generalizations can get lost in a forest of irrelevant ones. | {
"source": [
"https://cstheory.stackexchange.com/questions/7129",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/5712/"
]
} |
7,186 | This is related to a math.SE question I asked, but I've realized I want a slightly different form and it's probably more of an algorithms question. Fix an alphabet $\Sigma$. Let $P(n)$ be all length-$n$ strings over $\Sigma$. What is the shortest string $S$ such that every $s \in P(n)$ is a substring of $S$? For example, let $\Sigma = \{a\}$ and $n=1$; then clearly $S=a$ is minimal. If instead $\Sigma = \{a,b\}$ and $n=2$, then $P(2) = \{aa, ab, ba, bb\}$ and $S = aabba$ is minimal. Clearly we are solving "shortest common superstring" over $P(n)$, and that problem is known to be MaxSNP-hard. I'm wondering if this special form, with $P(n)$ containing all length-$n$ strings over $\Sigma$, might be easy?! | This is called a de Bruijn sequence ( http://en.wikipedia.org/wiki/De_Bruijn_sequence ). You can generate it by taking an Euler tour of a de Bruijn graph, but there are also other ways. You can use de Bruijn sequences to break into 1990s-era cars efficiently ( http://everything2.com/title/Weak+security+in+our+daily+lives ) among many other applications. | {
"source": [
"https://cstheory.stackexchange.com/questions/7186",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/3866/"
]
} |
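One of the "other ways" mentioned in the answer is the classical construction by concatenating Lyndon words, which is short enough to sketch here (Python; this is the textbook algorithm, not anything specific to the linked page). For the question's own example, alphabet {a,b} and n = 2, it reproduces the minimal string aabba once the first n-1 symbols are appended to unroll the cycle.

def de_bruijn(alphabet, n):
    """Cyclic sequence containing every length-n word over the alphabet exactly once."""
    k = len(alphabet)
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(alphabet[i] for i in seq)

cyclic = de_bruijn("ab", 2)
print(cyclic)                   # aabb (the cyclic form)
print(cyclic + cyclic[:2 - 1])  # aabba: every length-2 word appears as a substring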
7,199 | I'm considering graph classes that can be characterized by forbidden subgraphs. If a graph class has a finite set of forbidden subgraphs, then there is a trivial polynomial time recognition algorithm (one can just use brute force). But an infinite family of forbidden subgraphs does not imply hardness: there are some classes with infinite list of forbidden subgraphs such that the recognition can also be tested in polynomial time. Chordal and Perfect graphs are examples but, in those cases, there is a "nice" structure on the forbidden family. Is there any know relation between the hardness of the recognition of a class and the "bad behavior" of the forbidden family? Such a relation should exist? This "bad behavior" has been formalized somewhere? | Although it seems intuitive that the list of forbidden (induced) subgraphs for a class $\mathscr{C}$ of graphs which has NP-hard recognition should possess some "intrinsic" complexity, I have recently found some striking negative evidence to this intuition in the literature. Perhaps the simplest to describe is the following, taken from an article by B. Lévêque, D. Lin, F. Maffray and N. Trotignon . Let $F$ be the family of graphs which are composed of a cycle of length at least four, plus three vertices: two adjacent to the same vertex $u$ of the cycle, and one adjacent to a vertex $v$ of the cycle, where $u$ and $v$ are not consecutive in the cycle (and no other edges). Now let $F'$ be the family of graphs which are composed exactly the same way, except that you add four vertices: two adjacent to the same vertex $u$ of the cycle (as before), but now two adjacent to the same vertex $v$ of the cycle, where again $u$ and $v$ are not consecutive. Then the class of graphs which has $F$ as the forbidden induced subgraphs has polynomial-time recognition, whereas the recognition of the class which has $F'$ as the forbidden induced subgraphs is NP-hard. Therefore, I find it hard to conceive of any general condition that a list of forbidden induced subgraphs has to satisfy when it results in a class with (NP-) hard recognition, considering that such a condition will have to separate the "very similar" $F$ and $F'$ above. | {
"source": [
"https://cstheory.stackexchange.com/questions/7199",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/282/"
]
} |
7,213 | Here the goal is to reduce an arbitrary SAT problem to 3-SAT in polynomial time using the fewest number of clauses and variables. My question is motivated by curiosity. Less formally, I would like to know: "What is the 'most natural' reduction from SAT to 3-SAT?" Now the reduction that I've always seen in text books goes something like this: First take your instance of SAT and apply the Cook-Levin theorem to reduce it to circuit SAT. Then you finish the job by the standard reduction of circuit SAT to 3-SAT by replacing gates with clauses. While this works, the resulting 3-SAT clauses end up looking almost nothing like the SAT clauses you started with, due to the initial application of the Cook-Levin theorem. Can anyone see how to do the reduction more directly, skipping the intermediate circuit step and going directly to 3-SAT? I would even be happy with a direct reduction in the special case of n-SAT. (I would guess that there are some trade-offs between computation time and the size of the output. Clearly a degenerate -- though fortunately inadmissible unless P=NP -- solution would be to just solve the SAT problem, then emit a trivial 3-SAT instance...) EDIT: Based on ratchet's answer it is clear now that the reduction to n-SAT is somewhat trivial (and that I really should have thought that one through a bit more carefully before posting). I'm leaving this question open for a bit in case someone knows the answer to the more general situation, otherwise I will simply accept ratchet's answer. | Each SAT clause has 1, 2, 3 or more variables. The 3 variable clause can be copied with no issue The 1 and 2 variable clauses {a1} and {a1,a2} can be expanded to {a1,a1,a1} and {a1,a2,a1} respectively. The clause with more than 3 variables {a1,a2,a3,a4,a5} can be expanded to {a1,a2,s1}{!s1,a3,s2}{!s2,a4,a5} with s1 and s2 new variables whose value will depend on which variable in the original clause is true | {
"source": [
"https://cstheory.stackexchange.com/questions/7213",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/559/"
]
} |
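A small executable sketch of the clause-splitting reduction described in the answer to 7,213 above. The encoding (literals as nonzero integers, negative meaning negated, auxiliary variables numbered after the original ones) and the function name are my own choices for illustration; they are not from the original post.

    def to_3sat(clauses, num_vars):
        # clauses: list of clauses, each a list of nonzero ints (negative = negated literal)
        next_var = num_vars                     # fresh auxiliary variables start here
        out = []
        for c in clauses:
            if len(c) == 1:                     # {a1} -> {a1,a1,a1}
                out.append([c[0], c[0], c[0]])
            elif len(c) == 2:                   # {a1,a2} -> {a1,a2,a1}
                out.append([c[0], c[1], c[0]])
            elif len(c) == 3:                   # copy unchanged
                out.append(list(c))
            else:                               # chain long clauses with fresh variables
                next_var += 1
                out.append([c[0], c[1], next_var])
                for lit in c[2:-2]:
                    out.append([-next_var, lit, next_var + 1])
                    next_var += 1
                out.append([-next_var, c[-2], c[-1]])
        return out, next_var

    # {a1,a2,a3,a4,a5} becomes {a1,a2,s1}{!s1,a3,s2}{!s2,a4,a5}, as in the answer above.
    print(to_3sat([[1, 2, 3, 4, 5]], 5))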
7,396 | If you ask a question about parsing HTML with regex, you will certainly be referred to this famous rant. Though there is not a canonical rant for it, I've also been told that regex aren't powerful enough to parse SQL. I'm a self-taught programmer, so I don't know much about languages from a theoretical perspective. Practically speaking, what are examples of languages or grammars that regex can always parse successfully? Edit: To clarify, I'd really like a few examples of languages that are used in the real world that fit in the category of regular languages, rather than some axioms or equivalent conditions, etc. | Practically speaking, what are examples of languages or grammars that regex can always parse successfully? A short answer is: probably nothing that you would call a language. In theoretical computer science (TCS), a language simply means a set of words. But in most cases, what people call a “language” outside TCS has some recursive structure. “Recursive structure” is ambiguous here, but intuitively regular expressions cannot parse such languages because regular expressions cannot even parse balanced parentheses. Many compilers use regular expressions for lexical analysis before parsing. For example, you can decide whether a certain string is a valid identifier in C++ or not by using a regular expression. This is possible because the language consisting of valid identifiers in C++ is a regular language. But the set of valid C++ identifiers is usually not called a language outside TCS. Disclaimers: Some people distinguish “regular expression” and “regex.” In this answer, I am talking about regular expressions, not regexes, if we use this convention. Actual C++ compilers probably do not use a regular expression for valid identifiers because excluding keywords makes the regular expression unmanageable. They use a different technique to cope with this, but that is not the main point here. | {
"source": [
"https://cstheory.stackexchange.com/questions/7396",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/4473/"
]
} |
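To make the lexical-analysis point in the answer to 7,396 above concrete, here is a minimal sketch using Python's re module. The identifier pattern and the tiny keyword set are simplified assumptions for illustration; they do not reflect the full C++ grammar.

    import re

    # Simplified identifier pattern: a letter or underscore followed by letters,
    # digits, or underscores (real C++ also allows universal character names).
    IDENTIFIER = re.compile(r'[A-Za-z_][A-Za-z0-9_]*\Z')

    # Deliberately incomplete keyword set, just to show the "excluding keywords" caveat.
    KEYWORDS = {'if', 'else', 'while', 'return', 'int', 'class'}

    def is_identifier(token):
        return bool(IDENTIFIER.match(token)) and token not in KEYWORDS

    for t in ['x1', '_tmp', '2fast', 'while', 'foo_bar']:
        print(t, is_identifier(t))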
7,469 | SAT solvers are very important in algebraic attacks , for example walksat and minisat . However, when solving the benchmark problems available here there is an enormous performance difference between the two - Walksat is much faster than minisat for these problems. Why is this? This implementation of walksat appears to have some performance improvements - is there any reason it wasn't included in the international SAT Competitions ? | Yes, there is a major difference between MiniSAT and WalkSAT. First, let's clarify - MiniSAT is a specific implementation of the generic class of DPLL /CDCL algorithms which use backtracking and clause learning, whereas WalkSAT is the general name for an algorithm which alternates between greedy steps and random steps. In general DPLL/CDCL is much faster on structured SAT instances while WalkSAT is faster on random k-SAT. Industrial and applied SAT instances tend to have a lot of structure, so DPLL/CDCL is dominant in most modern SAT solvers. Instance to instance one technique may win out, though, which is one reason why portfolio solvers have become popular. I take a lot of issue with your claim that WalkSAT is much faster than MiniSAT on the instances on that page. For one thing, there are gigabytes of SAT instances there - how many did you try comparing them on? WalkSAT is not at all competitive on most structured instances which is why it's not often seen in competitions. On a side note - Vijay is right that MiniSAT is still relevant. Actually, because it's open source and well-written, MiniSAT is the solver to beat in order to show that a given optimization has promise. Many people tweak MiniSAT itself to showcase their optimizations - take a look at the "MiniSAT hack" category in the recent SAT competitions. | {
"source": [
"https://cstheory.stackexchange.com/questions/7469",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/5762/"
]
} |
7,528 | One of the most discussed questions on the site has been What it Would Mean to Disprove the Church-Turing Thesis. This is partly because Dershowitz and Gurevich published a proof of the Church-Turing Thesis in the Bulletin of Symbolic Logic in 2008. (I won't discuss that here, but for a link and extensive comments, please see the original question, or -- shameless self-promotion -- a blog entry I wrote.) This question is about the Extended Church-Turing Thesis, which, as formulated by Ian Parberry, is: Time on all "reasonable" machine models is related by a polynomial. Thanks to Giorgio Marinelli, I learned that one of the co-authors of the previous paper, Dershowitz, and a PhD student of his, Falkovich, have published a proof of the Extended Church-Turing Thesis, which just appeared at the workshop Developments of Computational Models 2011. I just printed out the paper this morning, and I have skimmed it, nothing more. The authors claim that Turing machines can simulate any sequential computational device with at most polynomial overhead. Quantum computation and large-scale parallel computation are explicitly not covered. My question relates to the following statement in the paper. We have shown -- as has been conjectured and is widely believed -- that every effective implementation, regardless of what data structures it uses, can be simulated by a Turing machine, with at most polynomial overhead in time complexity. So, my question: is this really "widely believed," even in the case of "truly" sequential computation with no randomization? What if things are random? Quantum computing would be a likely counterexample, if in fact it can be instantiated, but are there possibilities "weaker" than quantum that would be counterexamples as well? | Preparatory Rant I've gotta tell you, I don't see how talking about "proofs" of the CT or ECT adds any light to this discussion. Such "proofs" tend to be exactly as good as the assumptions they rest on---in other words, as what they take words like "computation" or "efficient computation" to mean. So then why not proceed right away to a discussion of the assumptions, and dispense with the word "proof"? That much was clear already with the original CT, but it's even clearer with ECT---since not only is the ECT "philosophically unprovable," but today it's widely believed to be false! To me, quantum computing is the huge, glaring counterexample that ought to be the starting point for any modern discussion about the ECT, not something shunted off to the side. Yet the paper by Dershowitz and Falkovich doesn't even touch on QC until the last paragraph: The above result does not cover large-scale parallel computation, such as quantum computation, as it posits that there is a fixed bound on the degree of parallelism, with the number of critical terms fixed by the algorithm. The question of relatively [sic] complexity of parallel models will be pursued in the near future. I found the above highly misleading: QC is not a "parallel model" in any conventional sense. In quantum mechanics, there's no direct communication between the "parallel processes"---only interference of amplitudes---but it's also easy to generate an exponential number of "parallel processes." (Indeed, one could think of every physical system in the universe as doing so as we speak!) In any case, whatever you think about the interpretation of quantum mechanics (or even its truth or falsehood), it's clear that it requires a separate discussion! Now, on to your (interesting) questions!
No, I don't know of any convincing counterexample to the ECT other than quantum computing. In other words, if quantum mechanics had been false (in a way that still kept the universe more "digital" than "analog" at the Planck scale---see below), then the ECT as I understand it still wouldn't be "provable" (since it would still depend on empirical facts about what's efficiently computable in the physical world), but it would be a good working hypothesis. Randomization probably doesn't challenge the ECT as it's conventionally understood, because of the strong evidence today that P=BPP. (Though note that, if you're interested in settings other than language decision problems---for example, relational problems, decision trees, or communication complexity---then randomization provably can make a huge difference. And those settings are perfectly reasonable ones to talk about; they're just not the ones people typically have in mind when they discuss the ECT.) The other class of "counterexamples" to the ECT that's often brought up involves analog or "hyper" computing. My own view is that, on our best current understanding of physics, analog computing and hypercomputing cannot scale, and the reason why they can't, ironically, is quantum mechanics! In particular, while we don't yet have a quantum theory of gravity, what's known today suggests that there are fundamental obstacles to running more than about $10^{43}$ computation steps per second, or resolving distances smaller than about $10^{-33}$ cm. Finally, if you want to assume out of discussion anything that might be a plausible or interesting challenge to the ECT, and only allow serial, discrete, deterministic computation, then I agree with Dershowitz and Falkovich that the ECT holds! :-) But even there, it's hard to imagine a "formal proof" increasing my confidence in that statement -- the real issue, again, is just what we take words like "serial", "discrete", and "deterministic" to mean. As for your last question: Quantum computing would be a likely counterexample, if in fact it can be instantiated, but are there possibilities "weaker" than quantum that would be counterexamples as well? Today, there are lots of interesting examples of physical systems that seem able to implement some of quantum computing, but not all of it (yielding complexity classes that might be intermediate between BPP and BQP). Furthermore, many of these systems might be easier to realize than a full universal QC. See for example this paper by Bremner, Jozsa, and Shepherd, or this one by Arkhipov and myself. | {
"source": [
"https://cstheory.stackexchange.com/questions/7528",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/30/"
]
} |
7,552 | The Valiant-Vazirani theorem says that if there is a polynomial time algorithm (deterministic or randomized) for distinguishing between a SAT formula that has exactly one satisfying assignment, and an unsatisfiable formula - then NP=RP . This theorem is proved by showing that UNIQUE-SAT is NP -hard under randomized reductions. Subject to plausible derandomization conjectures, the Theorem can be strengthened to "an efficient solution to UNIQUE-SAT implies NP = P ". My first instinct was to think that implied there exists a deterministic reduction from 3SAT to UNIQUE-SAT, but it's not clear to me how this particular reduction can be derandomized. My question is: what is believed or known about "derandomizing reductions"? Is it/should it be possible? What about in the case of V-V? Since UNIQUE-SAT is complete for PromiseNP under randomized reductions, can we use a derandomization tool to show that "a deterministic polynomial time solution to UNIQUE-SAT implies that PromiseNP = PromiseP ? | Under the right derandomization assumptions (see Klivans-van Melkebeek ) you get the following: There is a polytime computable $f(\phi)=(\psi_1,\ldots,\psi_k)$ s.t. for all $\phi$, If $\phi$ is satisfiable then at least one of the $\psi_i$ has exactly one satisfying assignment. If $\phi$ is not satisfiable then all of the $\psi_i$ are unsatisfiable. You need k polynomial in then length of $\phi$. Probably can't be done for $k=1$. | {
"source": [
"https://cstheory.stackexchange.com/questions/7552",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/170/"
]
} |
7,574 | In essence, the question is: What is the least publishable unit for the ArXiv? Of particular interest are fields that use the ArXiv extensively such as quantum computing. But comments on other fields and preprint services (like, ECCC & ePrint) are also welcome. Detailed question This is based on the following two questions: When should you say what you know? How do you decide when you have enough research results to write a paper and to which journal you submit the paper In particular, on Jukka Suomela's comment on this answer : I think ArXiving your results ASAP is a good idea. Please keep in mind that an ArXiv manuscript does not need to constitute a minimum publishable unit. I think it is perfectly ok to submit a 2-page proof to ArXiv, even though it would be obviously too short as a conference or journal paper. Resolving an open problem that other people would like to solve is more than enough. In my field (quantum computing) it seems that every preprint I see on the ArXiv is a publication-level paper, released early so that we don't have to wait for conference proceedings or journal turnaround. It is intimidating to submit something that is not at publication-level. Is it alright to put up results which are partial or only slight extensions of existing work? Is it alright to put up results that are potentially interesting (i.e. you've given some talks on them and not everybody fell asleep) but you doubt would get into a top-conference or journal? Do you have advice on when to share results on ArXiv or similar preprint-servers? Can sharing results early hurt you? Some specific background Just to make the question more personal, I'll include a further motivation. However, I am hoping to receive answers that give more general guidelines that I (and others) could follow in the future. I did some work on unitary t-designs in which I extended an existing theorem (in a way that is kind of useful, but the proof of the original just needs to be modified slightly -- so no new idea; i.e. when I talked to the author of the earlier paper his comment was along the lines of "oh cool, didn't think about that", and for the proof I had to say about a sentence and then he was like "okay, I see how you would prove that"), proved some easy results, and provided an alternative proof of a lower bound. I wrote up a pretty verbose paper that I keep on my website, but unfortunately I am not well read enough in the field to really understand how it fits in the bigger picture (and I think that is the biggest weak-point, that I doubt I could overcome easily). I keep the text around mostly as a sort of "I worked on this" note and since I give talks on the topic sometimes. It has also come in useful once to a friend since I make a pretty gentle intro and so he used it as a basic starting point to relate some of his work to designs (although he didn't use any of the results in the paper, just like a lecture note on definitions). Would this be an example of something that I should put up on the ArXiv? or is the appropriate measure to keep it in on my website? | ArXiv papers still need to be recognizable as papers. I'd only put something on the arXiv if I'd feel comfortable publishing it as a letter in a journal (like, say, Information Processing Letters). For stuff that's even smaller than that, but that I still want to put on some sort of public record, I'll just make a blog post. 
But in your case, if you've written it up as a preprint anyway, and you clearly state in it how much or how little is new, then why not? ArXiv papers don't actually have to have any new research content — survey papers are also welcome — so a paper that's mostly a survey but that extends the problem a small step in some direction doesn't sound problematic to me. | {
"source": [
"https://cstheory.stackexchange.com/questions/7574",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/1037/"
]
} |
7,619 | I was confused by Wikipedia's definitions of " chordal graph ", " interval graph ", " string graph ", " comparability graph ", "incomparability graph" and the complements of these.
Wikipedia says "The complement of any interval graph is a comparability graph". So is any chordal graph an incomparability graph? Is there a book or survey that contains detailed description of these sorts of different perfect graphs? | I believe the answer to your question, and to most questions like this, is to be found on http://graphclasses.org/ There's also a book that has much of this (including an appendix at the back with some of the main subset relations between graph classes): Brandstädt, Andreas; Le, Van Bang; Spinrad, Jeremy (1999), Graph Classes: A Survey, SIAM Monographs on Discrete Mathematics and Applications, ISBN 0-89871-432-X. The answer to your specific question is no. The graph shown in http://commons.wikimedia.org/wiki/File:SubdividedTriangle.png (a central triangle with three more triangle attached to its edges) is chordal, but its complement http://en.wikipedia.org/wiki/File:Forbidden_interval_subgraph.svg is not a comparability graph. | {
"source": [
"https://cstheory.stackexchange.com/questions/7619",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/6108/"
]
} |
7,623 | Context: As I understand, in geometric complexity theory, the existence of obstructions serves as a proof-certificate, so to speak, for the nonexistence of an efficient computational circuit for the explicit hard function in the lower bound problem under consideration. Now there are some other assumptions for obstructions that they must be short, easy to verify and easy to construct. Question: My question is that say I have a problem that I conjecture to be solvable in polynomial time. Then how can I show that there exist no obstruction for this problem, i.e. if no obstructions exist then the problem can be computed efficiently and it is indeed in polynomial time. Approach: I think, and I may be wrong in this assertion, that showing no obstructions exist can be equivalent to standard reduction of NP problems to other problems whose complexity is yet unknown, in the proof that they themselves are in NP. So then in that case one can, if possible, show that obstructions exist as one tries to reduce an NP problem to the problem under consideration, that way, the reduction is intractable. Also what role does postselection play in all of this? Is it possible to simply postselect on the nonexistence of obstructions? Thanks and pardon the lack of precise statements in my approach and questions. Just an another example, consider a problem X that we know to be in P. Now let's say we didn't know about that problem being solvable in polynomial time, then is it possible, that one can make the following assertion: Since no obstructions exist in the computation of X we can say that it is in the class P From there on, the problem is the easy (computationally) discovery of those obstructions, if even one exists, would show that X is not in polynomial time. However going the other way, i.e. finding that no obstructions exist is a difficult task. | I believe the answer to your question, and to most questions like this, is to be found on http://graphclasses.org/ There's also a book that has much of this (including an appendix at the back with some of the main subset relations between graph classes): Brandstädt, Andreas; Le, Van Bang; Spinrad, Jeremy (1999), Graph Classes: A Survey, SIAM Monographs on Discrete Mathematics and Applications, ISBN 0-89871-432-X. The answer to your specific question is no. The graph shown in http://commons.wikimedia.org/wiki/File:SubdividedTriangle.png (a central triangle with three more triangle attached to its edges) is chordal, but its complement http://en.wikipedia.org/wiki/File:Forbidden_interval_subgraph.svg is not a comparability graph. | {
"source": [
"https://cstheory.stackexchange.com/questions/7623",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/5898/"
]
} |
7,642 | I haven't managed to find this data structure, but I'm not an expert in the field. The structure implements a set, and is basically an array of comparable elements with an invariant. The invariant is the following (defined recursively): An array of length 1 is a merge-array. An array of length 2^n (for n > 0) is a merge-array iff: the first half is a merge-array and the second half is empty, or the first half is full and sorted, and the second half is a merge-array. Note that if the array is full, it is sorted. To insert an element, we have two cases: If the first half is not full, insert recursively in the first half. If the first half is full, insert recursively in the second half. After the recursive step, if the whole array is full, merge the halves (which are sorted), and resize it to double its original length. To find an element, recurse in both halves, using binary search when the array is full. (This should be efficient since there are at most $O(\log(n))$ ascending fragments.) The structure can be thought of as a static version of mergesort. It's not clear what one should do to erase an element. Edit: revised after improving my understanding of the structure. | You're describing the classical Bentley-Saxe logarithmic method, applied to static sorted arrays. The same idea can be used to add support for insertions to any static data structure (no insertions or deletions) for any decomposable searching problem. (A search problem is decomposable if the answer for any union $A\cup B$ can be computed easily from the answers for the sets $A$ and $B$.) The transformation increases the amortized query time by a factor of $O(\log n)$ (unless it was already bigger than some polynomial in $n$), but increases the space by only a constant factor. Yes, it can be deamortized, thanks to Overmars and van Leeuwen, but you really don't want to do that if you don't have to. These notes cover the basics. Cache-oblivious lookahead arrays are the mutant offspring of Bentley-Saxe and van Emde Boas trees on steroids. | {
"source": [
"https://cstheory.stackexchange.com/questions/7642",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/5796/"
]
} |
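For readers who want to see the idea from 7,642 above in executable form, here is a rough sketch of the logarithmic method: elements live in $O(\log n)$ sorted blocks of doubling sizes, an insertion merges equal-sized blocks much like incrementing a binary counter, and a membership query binary-searches each block. The class and method names are mine; this illustrates the technique and is not code from the cited papers.

    from bisect import bisect_left

    class LogarithmicSet:
        # blocks[i] is either empty or a sorted list of exactly 2**i elements
        def __init__(self):
            self.blocks = []

        def insert(self, x):
            carry = [x]
            i = 0
            while i < len(self.blocks) and self.blocks[i]:
                carry = self._merge(carry, self.blocks[i])   # like a binary-addition carry
                self.blocks[i] = []
                i += 1
            if i == len(self.blocks):
                self.blocks.append([])
            self.blocks[i] = carry

        @staticmethod
        def _merge(a, b):
            out, i, j = [], 0, 0
            while i < len(a) and j < len(b):
                if a[i] <= b[j]:
                    out.append(a[i]); i += 1
                else:
                    out.append(b[j]); j += 1
            return out + a[i:] + b[j:]

        def __contains__(self, x):
            for block in self.blocks:
                k = bisect_left(block, x)
                if k < len(block) and block[k] == x:
                    return True
            return False

    s = LogarithmicSet()
    for v in [5, 3, 9, 1, 7]:
        s.insert(v)
    print(3 in s, 4 in s)   # True False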
7,645 | Is there something known about the class of graphs with the property that all maximal independent sets have the same cardinality and are therefore maximum ISs? For example, take a set of points in the plane and consider the graph of intersections among all segments between pairs of points in the set. (segments->vertices, intersections->edges). This graph will have the above property, as all maximal ISs correspond to triangulations of the original point set. Are there other categories of graphs known to have this property? Can this property be easily tested? | Such graphs are called well-covered graphs. Here is a recent paper on the subject that lists several useful references. As Suresh mentioned, the recognition problem is co-NP-complete. Note that the independent sets of a graph form an abstract simplicial complex. Simplicial complexes that arise in this way are called "independence complexes" or "flag complexes." A simplicial complex is said to be pure if every maximal simplex has the same cardinality. So you may find some relevant papers by searching for "pure independence complex" or "pure flag complex." | {
"source": [
"https://cstheory.stackexchange.com/questions/7645",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/1609/"
]
} |
7,664 | I am looking for a reference (not a proof, that I can do) to the following extension of Chernoff. Let $X_1,..,X_n$ be Boolean random variables, not necessarily independent .
Instead, it is guaranteed that $Pr(X_i=1|C)<p$ for each $i$ and every event $C$ that only depends on $\{X_j|j\neq i\}$. Naturally, I want an upper bound on $\Pr\left(\sum_{i\in[n]}X_i>(1+\lambda)np\right)$. Thanks in advance! | What you want is the generalized Chernoff bound, which only assumes $P(\bigwedge_{i\in S} X_{i}) \leq p^{|S|}$ for any subset S of variable indices. The latter follows from your assumption, since for $S=\{i_1,\ldots,i_{|S|}\}$, $$P(\bigwedge_{i\in S} X_{i}) = P(X_{i_1} = 1)P(X_{i_2}=1|X_{i_1}=1)\cdots P(X_{i_{|S|}}=1|X_{i_1},...,X_{i_{|S|-1}}=1)\leq p^{|S|}$$
Impagliazzo and Kabanets recently gave an alternative proof of the Chernoff bound, including the generalized one. In their paper you can find all the appropriate references to previous work: http://www.cs.sfu.ca/~kabanets/papers/RANDOM2010.pdf | {
"source": [
"https://cstheory.stackexchange.com/questions/7664",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/6136/"
]
} |
7,715 | I'm preparing some course material on heuristics for optimization, and have been looking at coordinate descent methods. The setting is here a multivariate function $f$ that you wish to optimize. $f$ has the property that restricted to any single variable, it is easy to optimize. So coordinate descent proceeds by cycling through the coordinates, fixing all but the chosen one and minimizing along that coordinate. Eventually, improvements slow to a halt, and you terminate. My question is: is there any theoretical study of coordinate descent methods that talks about convergence rates, and properties of $f$ that make the method work well, and so on ? Obviously, I'm not expecting fully general answers, but answers that illuminate cases where the heuristic does well would be helpful. Aside: the alternating optimization technique used for $k$-means can be seen as an example of coordinate descent, and the Frank-Wolfe algorithm seems related (but is not a direct example of the framework) | (Edit notes: I reorganized this after freaking out at its length.) Literature on coordinate descent can be a little hard to track down. Here are some reasons for this. Many of the known properties of coordinate methods are captured in umbrella theorems for more general descent methods. Two examples of this, given below, are the fast convergence under strong convexity (hold for any $l^p$ steepest descent), and the general convergence of these methods (usually attributed to Zoutendijk). Naming is not standard. Even the term "steepest descent" is not standard. You may have success googling any of the terms "cyclic coordinate descent", "coordinate descent", "Gauss-Seidel", "Gauss-Southwell". usage is not consistent. The cyclic variant rarely receives special mention. Instead, usually only the best single choice of coordinate is discussed. But this almost always gives the cyclic guarantee, albeit with an extra factor $n$ (number of variables): this is because most convergence analyses proceed by lower bounding the improvement of a single step,and you can ignore the extra coordinates. It also seems difficult to say anything general about what cyclic buys you, so people just do the best coordinate and the $n$ factor can usually be verified. Rate under strong convexity. The simplest case is that your objective function is strongly convex. Here, all gradient descent variants have the rate $\mathcal O(\ln (1/\epsilon))$. This is proved in Boyd & Vandenberghe's book. The proof first gives the result for gradient descent, and then uses norm equivalence to give the result for general $l^p$ steepest descent. Constraints. Without strong convexity, you have to start being a little bit careful. You didn't say anything about constraints, and thus in general, the infimum may not be attainable. I'll say briefly on the topic of constraints that the standard approach (with descent methods) is to project onto your constraint set each iteration to maintain feasibility, or to use barriers to roll the constraints into your objective function. In the case of the former, I don't know how it plays with coordinate descent; in the case of the latter, it works fine with coordinate descent, and these barriers can be strongly convex. More specifically to coordinates methods, rather than projecting, many people simply make the coordinate update maintain feasibility: this for instance is exactly the case with the Frank-Wolfe algorithm and its variants (i.e., using it to solve SDPs). 
I'll also note briefly that the SMO algorithm for SVMs can be viewed as a coordinate descent method, where you are updating two variables at once, and maintaining a feasibility constraint while you do so. The choice of variables is heuristic in this method, and so the guarantees are really just the cyclic guarantees. I'm not sure if this connection appears in standard literature; I learned about the SMO method from Andrew Ng's course notes, and found them to be quite clean. General convergence guarantee. What I know in this more general setting (for coordinate descent) is much weaker. First, there is an ancient result, due to Zoutendijk, that all these gradient variants have guaranteed convergence; you can find this in the book by Nocedal & Wright, and it also appears in some of Bertsekas's books (at the very least, "nonlinear programming" has it). These results are again for something more general than coordinate descent, but you can specialize them to coordinate descent, and then get the cyclic part by multiplying by $n$. More specifically to cyclic coordinate descent, there's a paper by Luo & Tseng titled "On the convergence of the coordinate descent method for convex differentiable minimization". These results require the infimum to be attainable. There are no rates here, only convergence guarantees, but these results have been applied to some more specialized settings to get rates; for instance, in boosting (in the special case that the infimum is attainable), Warmuth, Mika, Raetsch, and Warmuth ("on the convergence of leveraging") were able to show rates of $\mathcal O(\ln(1/\epsilon))$. There are some more recent results on coordinate descent, I've seen stuff on arXiv. Also, luo&tseng have some newer papers. but this is the main stuff. More convergence rates in the special case of boosting. Due to its importance, there has been other specialization in the case of boosting. This is a pretty severe special case because your objective can be written $\sum_{i=1}^m g(\langle a_i, \lambda\rangle)$ where $g$ is a (convex) univariate function and the $(a_i)_1^m$ are fixed vectors ($\lambda$ is the optimization variable). Bickel, Ritov, and Zakai ("some theory for generalized boosting algorithms") showed you can get $\exp(1/\epsilon^2)$ in general, and there are more recent results by other people showing $\mathcal O(1/\epsilon)$. The difficulty in these is that the infimum is not assumed attainable. The issue with exact updates. Also, it is very often the case that you do not have a closed form single coordinate update. Or the exact solution may simply not exist. But fortunately, there are lots and lots of line search methods that get basically the same guarantees as an exact solution. This material can be found in standard nonlinear programming texts, for instance in the Bertsekas or Nocedal&Wright books mentioned above. Vis a vis your second paragraph: when these work well. First, many of the above mentioned analyses for gradient work for coordinate descent. So why not always use coordinate descent? The answer is that for many problems where gradient descent is applicable, you can also use Newton methods, for which superior convergence can be proved. I don't know of a way to get the Newton advantage with coordinate descent. Also, the high cost of Newton methods can be mitigated with Quasinewton updates (see for instance LBFGS). Second, the place where these methods shine is where the presumed solution is sparse (in the $l^0$ sense). 
Of course, there are NP-hardness issues with this kind of sparsity, but the point is that if you run $k$ iterations, you have $k$ nonzero entries. These facts generalize to, say, using coordinate methods with SDP solvers, where each iteration you throw in a rank 1 matrix, thus with $k$ iterations you have a rank $k$ iterate. There is a great paper on this topic, by Shalev-Shwartz, Srebro, and Zhang, titled "trading accuracy for sparsity in optimization problems with sparsity constraints". Most specifically to the second paragraph of your question, this paper gives further properties on $f$ that allow fast convergence and good sparsity (true to its title). | {
"source": [
"https://cstheory.stackexchange.com/questions/7715",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/80/"
]
} |
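As a concrete companion to the answer to 7,715 above, here is a minimal sketch of cyclic coordinate descent on a smooth convex quadratic, where each single-coordinate minimization has a closed form. The particular objective $f(x) = \tfrac{1}{2}x^T A x - b^T x$ and the stopping rule are assumptions chosen for illustration, not taken from the references cited there.

    import numpy as np

    def coordinate_descent(A, b, x0, sweeps=100, tol=1e-10):
        # Minimize 0.5 * x^T A x - b^T x for symmetric positive definite A
        # by cycling through coordinates; each 1-D subproblem is solved exactly.
        x = x0.astype(float).copy()
        for _ in range(sweeps):
            x_old = x.copy()
            for i in range(len(x)):
                # Gradient in coordinate i is (A x)_i - b_i; zero it with the others fixed.
                x[i] += (b[i] - A[i] @ x) / A[i, i]
            if np.linalg.norm(x - x_old) < tol:
                break
        return x

    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    print(coordinate_descent(A, b, np.zeros(2)))   # close to np.linalg.solve(A, b)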
7,753 | This is my first question on the cstheory stack, so don't be too rude if I'm violating etiquette somehow :) As we know, in mathematics even famous mathematicians, superstars, and geniuses make serious mistakes from time to time. For example, both the 4-color theorem and Fermat's Last Theorem provide dramatic cases of how even the brightest minds can be misled. It can even take years to show that some flawed proofs are incorrect. My question is: can you provide some outstanding examples of such mistakes in computer science? I don't know, something like "Dr. X proved in 1972 that it is impossible to do Y in less than O(log n) time, but in 1995 it turned out that he actually was wrong". | An infamous example in computational geometry is the incorrect proof of the Zone Theorem for hyperplane arrangements published by Edelsbrunner, O'Rourke, and Seidel [FOCS 1983, SICOMP 1986]. The proof also appears in Edelsbrunner's 1987 computational geometry textbook. Zone Theorem: In any arrangement of $n$ hyperplanes in $\mathbb{R}^d$, the total complexity of all cells intersecting any hyperplane is $O(n^{d-1})$. The Zone Theorem is a key step in the proof that the standard recursive incremental algorithm to build an arrangement of $n$ hyperplanes in $\mathbb{R}^d$ runs in $O(n^d)$ time. In 1990, Raimund Seidel discovered that the published proof was incorrect, after being challenged on a subtle technical point by a student in his computational geometry class. Meanwhile, a huge literature on hyperplane/halfspace/simplex/semialgebraic range searching had been developed, all of which relied on the $O(n^d)$ construction time for arrangements, which in turn relied on the Zone Theorem. (None of those authors noticed the bug. Raimund had taught the published "proof" in detail for several years before he was challenged.) Fortunately, Edelsbrunner, Seidel, and Sharir almost immediately found a correct (and much simpler!) proof of the Zone Theorem [New Results and New Trends in CS 1991, SICOMP 1993]. | {
"source": [
"https://cstheory.stackexchange.com/questions/7753",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/6172/"
]
} |
7,841 | Is there a data structure that takes an unordered array of $n$ items, performs preprocessing in $O(n)$ and answers queries: is there some element $x$ on the list, each query in worst time $O(\log n)$? I really think there isn't, so a proof that there is none is also welcomed. | Here's a proof that it's impossible. Suppose you could build such a data structure. Build it. Then choose $n/\log n$ items at random from the list, add $\epsilon$ to each of them, where $\epsilon$ is smaller than the difference between any two items on the list, and perform the queries to check whether any of the resulting items is in the list. You've performed $O(n)$ queries so far. I would like to claim that the comparisons you have done are sufficient to tell whether an item $a$ on the original list is smaller than or larger than any new item $b$. Suppose you couldn't tell. Then, because this is a comparison-based model, you wouldn't know whether $a$ was equal to $b$ or not, a contradiction of the assumption that your data structure works. Now, since the $n/\log n$ items you chose were random, your comparisons have with high probability given enough information to divide the original list into $n/\log n$ lists each of size $O(\log n)$. By sorting each of these lists, you get a randomized $O(n \log \log n)$-time sorting algorithm based solely on comparisons, a contradiction. | {
"source": [
"https://cstheory.stackexchange.com/questions/7841",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/6241/"
]
} |
7,875 | I have the following problem: We are given an instance of the 3-SAT problem.
Is there a satisfying assignment s.t. at least two literals are satisfied in each clause? The question is: Is the problem NP-complete? The question might sound stupid, but I couldn't figure it out by myself. I searched the web and in books but didn't find anything. I also tried to reduce 3-SAT to it, but without success. (I have to admit that I didn't spent much time to do it since it is not my main research focus; this is just a question that came to my mind while working on another problem. I am interested in the answer because if it turns out to be NP-complete it could help me in a future problem.) Thanks in advance for your answers! Every answer or comment is welcome. | At least two of the literals $x$, $y$, $z$ are satisfied iff at least one literal in each pair $(x,y)$, $(x,z)$, $(y,z)$ is satisfied. Therefore it is a special case of 2SAT , and there is a polynomial-time algorithm for solving it. | {
"source": [
"https://cstheory.stackexchange.com/questions/7875",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/1657/"
]
} |
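The answer to 7,875 above reduces the problem to 2SAT; here is a small sketch of that transformation with literals encoded as nonzero integers (negative = negated). The helper names are mine, and for brevity the resulting 2SAT instance is checked by brute force on a tiny example rather than by the usual linear-time implication-graph algorithm.

    from itertools import product

    def two_of_three_to_2sat(clauses):
        # "At least two of (x, y, z)" holds iff each of the pairs
        # (x, y), (x, z), (y, z) contains at least one true literal.
        pairs = []
        for x, y, z in clauses:
            pairs += [(x, y), (x, z), (y, z)]
        return pairs

    def brute_force_satisfiable(two_clauses, num_vars):
        # Exponential check, only for tiny illustrative instances.
        for bits in product([False, True], repeat=num_vars):
            value = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
            if all(value(a) or value(b) for a, b in two_clauses):
                return True
        return False

    clauses = [(1, 2, 3), (-1, 2, -3)]
    print(brute_force_satisfiable(two_of_three_to_2sat(clauses), 3))   # True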
7,900 | I would like to ask for help in compiling a list of as many TCS-related conferences and workshops as possible. My main motivation for doing this is to plan possible blog coverage of more theory venues -- finding correspondents attending these events who would be willing to write either brief or in-depth blog entries about events they are attending. Beyond that, I hope a list like this would give everyone a better sense of the lay of the theory land. I'll seed the question with an answer containing a few "obvious" conferences. Please feel free to edit my answer and/or post additional answers of your own. Standard abbreviation of conference, name of conference, subject matter, any additional notes. Intended as community wiki. | GENERAL : STOC, ACM Symposium on the Theory of Computing FOCS, IEEE Symposium on Foundations of Computer Science ICALP EATCS International Colloquium on Automata, Languages and Programming (A: algorithms, complexity, B: logic, semantics, automata) FOSSACS, Foundations of Software Science and Computation Structures STACS, Symposium on Theoretical Aspects of Computer Science MFCS, Mathematical Foundations of Computer Science FSTTCS, Foundations of Software Technology and Theoretical Computer Science COCOON, Computing and Combinatorics Conference ITCS, Innovations in Theoretical Computer Science CSR, Computer Science in Russia ISAAC, International Symposium on Algorithms and Computation TAMC, Theory and Applications of Models of Computation COCOA, Conference on Combinatorial Optimization and Applications FM, Formal Methods FCT, Fundamentals of Computation Theory LATIN, Latin American Symposium on Theoretical Informatics SOFSEM, Conference on Current Trends in Theory and Practice of Computer Science TASE, Theoretical Aspects of Software engineering CC: COMPLEXITY CCC, IEEE Conference on Computational Complexity SIROCCO, International Colloquium on Structural Information and Communication Complexity CG: COMPUTATIONAL GEOMETRY SOCG, Symposium on Computational Geometry CCCG, Canadian Conference on Computational Geometry EuroCG, European Workshop on Computational Geometry CR: CRYPTOGRAPHY AND SECURITY CRYPTO, International Cryptology Conference EUROCRYPT, Conference on the Theory and Applications of Cryptographic Techniques ASIACRYPT, Conference on the Theory and Application of Cryptology LATINCRYPT, International Conference on Cryptology and Information Security in Latin America AFRICACRYPT, International Conference on Cryptology in Africa PQCRYPTO, International Conference on Post-Quantum Cryptography TCC, Theory of Cryptography Conference PKC, International Conference on Practice and Theory in Public Key Cryptography FSE, Conference on Fast Software Encryption CHES, Conference on Cryptographic Hardware and Embedded Systems IEEE S&P, IEEE Symposium on Security and Privacy CCS, ACM Conference on Computer and Communication Security POST, Principles of Security and Trust CSF, Computer Security Foundations Symposium ITC, Information Theoretic Cryptography DB: DATABASE THEORY SIGMOD/PODS, ACM Symposium on Principles of Database Systems (both accept theory, but SIGMOD has broader scope) ICDT, The international Conference on Database Theory VLDB, Very Large Data Bases AMW, Alberto Mendelzon International Workshop on Foundations of Data Management DC: DISTRIBUTED, PARALLEL, AND CLUSTER COMPUTING PODC, ACM Symposium on Principles of Distributed Computing DISC, International Symposium on Distributed Computing SPAA, ACM Symposium on Parallelism in Algorithms and Architectures 
IPDPS, IEEE International Parallel and Distributed Processing Symposium ICDCN, International Conference on Distributed Computing and Networking OPODIS, International Conference on Principles of Distributed Systems SSS, International Symposium on Stabilization, Safety, and Security of Distributed Systems Algosensors, International Symposium on Algorithms for Sensor Systems, Wireless Ad Hoc Networks and Autonomous Mobile Entities DM: DISCRETE MATHEMATICS AND COMBINATORICS WG, International Workshop on Graph-Theoretic Concepts in Computer Science LAGOS, Latin-American Algorithms, Graphs and Optimization Symposium DS: DATA STRUCTURES AND ALGORITHMS SODA, ACM-SIAM Symposium on Discrete Algorithms ESA, European Symposium on Algorithms (track A is theoretical) WADS, The Algorithms and Data Structures Symposium SAT, Theory and Applications of Satisfiability Testing SWAT, Scandinavian Symposium and Workshops on Algorithm Theory ALENEX, Algorithm Engineering and Experimentation SOSA, Symposium on Simplicity in Algorithms IPCO, Integer Programming and Combinatorial Optimization APPROX/RANDOM, Workshop on Approximation Algorithms for Optimization Problems / Workshop on Randomization and Computation WAOA, Workshop on Approximation and Online Algorithms IPEC, International Symposium on Parameterized and Exact Computation IWOCA, International Workshop on Combinatorial Algorithms WAW, Workshop on Algorithms and Models for the Web-Graph CPM, Combinatorial Pattern Matching CP, Principles and Practice of Constraint Programming FL: AUTOMATA THEORY AND FORMAL LANGUAGES DLT, International Conference on Developments in Language Theory LATA, Language and Automata Theory and Applications AFL, Automata and Formal Languages NCMA, Non-Classical Models of Automata and Applications CIAA, International Conference on Implementation and Application of Automata DFCS, Descriptional Complexity of Formal Systems GT: ALGORITHMIC GAME THEORY EC, Electronic Commerce SAGT, International Symposium on Algorithmic Game Theory WINE, Workshop on Internet and Network Economics LG: LEARNING THEORY COLT, Conference on Learning Theory ALT, Algorithmic Learning Theory LO: LOGIC IN COMPUTER SCIENCE LICS, IEEE Symposium on Logic in Computer Science CONCUR, International Conference on Concurrency Theory CSL, Computer Science Logic CiE, Computablility in Europe LCC, An International Workshop on Logic and Computational Complexity WoLLIC, Workshop on Logic, Language, Information and Computation Highlights of logic, games and automata PL: PROGRAMMING LANGUAGES POPL, Principles of Programming Languages ICFP, International Conference on Functional Programming ETAPS, European Joint Conferences on Theory and Practice of Software (includes FOSSACS, ESOP and POST, see separate entries) ESOP, European Symposium On Programming MSFP, Mathematically Structured Functional Programming MFPS, Mathematical Foundations of Programming Semantics SC: SYMBOLIC COMPUTATION ISSAC: International Symposium on Symbolic and Algebraic Computation FPSAC: Formal Power Series and Algebraic Combinatorics CASC: Computer Algebra in Scientific Computing SNC: Symbolic Numeric Computation THEOREM PROVING CADE, International Conference on Automated Deduction ITP, Interactive Theorem Proving CPP, Certified Proofs and Programs QUANTUM QIP, Workshop on Quantum Information Processing QCMC, International Conference on Quantum Communication, Information and Computing TQC, Theory of Quantum Computation, Communication and Cryptography AQIS, Asian Quantum Information Science Conference 
QCRYPT, Conference on Quantum Cryptography QEC, International Conference on Quantum Error Correction CEQIP, Central European Quantum Information Processing Workshop RO: Robotics WAFR, Workshop on the Algorithmic Foundation of Robotics . COMPUTATIONAL BIOLOGY RECOMB: Research in Computational Molecular Biology ISMB: Intelligent Systems for Molecular Biology WABI: Workshop on Algorithms in Bioinformatics OTHER CAV, Computer Aided Verification GD, International Symposium on Graph Drawing FUN, International Conference on Fun With Algorithms DNA, DNA Computing and Molecular Programming (DNA computing, Track A is theoretical, track B is experimental) DCM, Developments in Computational Models RTA, Rewriting techniques and applications TLCA, Typed lambda calculi and applications UCNC, Unconventional Computation & Natural Computation | {
"source": [
"https://cstheory.stackexchange.com/questions/7900",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/30/"
]
} |
8,200 | Given a regular language (NFA, DFA, grammar, or regex), how can the number of accepting words in a given language be counted? Both "with exactly n letters" and "with at most n letters" are of interest. Margareta Ackerman has two papers on the related subject of enumerating words accepted by an NFA, but I wasn't able to modify them to count efficiently. It seems like the restricted nature of regular languages should make counting them relatively easy -- I almost expect a formula more than an algorithm Unfortunately my searches so far haven't turned up anything, so I must be using the wrong terms. | For a DFA, in which the initial state is state $0$, the number of words of length $k$ that end up in state $i$ is $A^k[0,i]$, where $A$ is the transfer matrix of the DFA (a matrix in which the number in row $i$ and column $j$ is the number of different input symbols that cause a transition from state $i$ to state $j$). So you can count accepting words of length exactly $k$ easily, even when $k$ is moderately large, just by calculating a matrix power and adding the entries corresponding to accepting states. The same thing works for accepting words of length at most $k$, with a slightly different matrix. Add an extra row and column of the matrix, with a one in the cell that's both in the row and the column, a one in the new row and the column of the initial state, and a zero in all the other cells. The effect of this change to the matrix is to add one more path to the initial state at each power. This doesn't work for NFAs. I suspect the best thing to do is just convert to a DFA and then apply the matrix powering algorithm. | {
"source": [
"https://cstheory.stackexchange.com/questions/8200",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/962/"
]
} |
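Following the transfer-matrix idea in the answer to 8,200 above, here is a short sketch with exact integer arithmetic. The DFA encoding (a transition dict, initial state 0, a set of accepting states) is an assumption made for the example, and taking the (k+1)-st power of the augmented matrix for the "at most k" count is my own working-out of the extra-row-and-column trick sketched there.

    def mat_mult(X, Y):
        n, m, p = len(X), len(Y), len(Y[0])
        return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)]
                for i in range(n)]

    def mat_power(X, k):
        n = len(X)
        R = [[int(i == j) for j in range(n)] for i in range(n)]   # identity
        while k:
            if k & 1:
                R = mat_mult(R, X)
            X = mat_mult(X, X)
            k >>= 1
        return R

    def transfer_matrix(delta, num_states, alphabet):
        A = [[0] * num_states for _ in range(num_states)]
        for q in range(num_states):
            for a in alphabet:
                A[q][delta[(q, a)]] += 1
        return A

    def count_exactly(delta, num_states, accepting, alphabet, k):
        Ak = mat_power(transfer_matrix(delta, num_states, alphabet), k)
        return sum(Ak[0][q] for q in accepting)       # state 0 is the initial state

    def count_at_most(delta, num_states, accepting, alphabet, k):
        A = transfer_matrix(delta, num_states, alphabet)
        s = num_states                                # index of the extra state
        B = [row + [0] for row in A] + [[0] * (num_states + 1)]
        B[s][s] = 1                                   # self-loop on the extra state
        B[s][0] = 1                                   # one fresh path into the start state per step
        Bk = mat_power(B, k + 1)
        return sum(Bk[s][q] for q in accepting)

    # Example: binary strings with an even number of 1s.
    delta = {(0, '0'): 0, (0, '1'): 1, (1, '0'): 1, (1, '1'): 0}
    print(count_exactly(delta, 2, {0}, '01', 3))      # 4
    print(count_at_most(delta, 2, {0}, '01', 3))      # 8 (lengths 0 through 3)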
8,234 | Inspired by the extensive hierarchies present in complexity theory, I wondered if such hierarchies were also present for type systems. However, the two examples I've found so far are both more like checklists (with orthogonal features) rather than hierarchies (with successively more and more expressive type systems). The two examples I have found are the Lambda cube and the concept of k-ranked polymorphism . The first one is a checklist with three options, the second is a real hierarchy (though k-ranked for specific values of k is uncommon I believe). All other type system features I know of are mostly orthogonal. I'm interested in these concepts because I'm designing my own language and I'm very curious how it ranks among the currently existing type systems (my type system is somewhat unconventional, as far as I know). I realise that the concept of 'expressiveness' might be a bit vague, which may explain why type systems seem like checklists to me. | There are several senses of "expressiveness" that you might want for a type system. What mathematical functions can you express in a particular type system. For example, in the simply typed lambda calculus, not all computable functions can be expressed. The same is true of System $F$, but strictly more functions can be expressed. This is not very interesting once you get to type systems for Turing-complete languages. Can system $A$ typecheck every program written in system $B$. This is basically what cody's first notion of strength is about for PTSs. Again, System $F$ is stronger than the STLC in this ordering, since every STLC program types in System $F$. Similarly, a system with subtyping will be stronger than a system without. Are there local transformations (in the sense of Felleisen's paper On the expressive power of programming languages ) that allow a program that types in system $A$ to type in system $B$, but not vice versa. Does one type system guarantee stronger properties than another. For example, linear type systems just reject more programs, but that allows them to make stronger statements about the programs they do accept. Unfortunately, I don't believe that there's been work on categorizing or formalizing these notions, with the exception of Barendregt's lambda-cube, as @cody discusses. | {
"source": [
"https://cstheory.stackexchange.com/questions/8234",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/988/"
]
} |
8,259 | I think I'm not understanding it, but $\eta$-conversion looks to me as a $\beta$-conversion that does nothing, a special case of $\beta$-conversion where the result is just the term in the lambda abstraction because there is nothing to do, kind of a pointless $\beta$-conversion. So maybe $\eta$-conversion is something really deep and different from this, but, if it is, I don't get it, and I hope you can help me with it. (Thank you and sorry, I know this is part of the very basics in lambda calculus) | Update [2011-09-20]: I expanded the paragraph about $\eta$-expansion and extensionality. Thanks to Anton Salikhmetov for pointing out a good reference. $\eta$-conversion $(\lambda x . f x) = f$ is a special case of $\beta$- conversion only in the special case when $f$ is itself an abstraction, e.g., if $f = \lambda y . y y$ then $$(\lambda x . f x) = (\lambda x . (\lambda y . y y) x) =_\beta (\lambda x . x x) =_\alpha f.$$ But what if $f$ is a variable, or an application which does not reduce to an abstraction? In a way $\eta$-rule is like a special kind of extensionality, but we have to be a bit careful about how that is stated. We can state extensionality as: for all $\lambda$-terms $M$ and $N$, if $M x = N x$ then $M = N$, or for all $f, g$ if $\forall x . f x = g x$ then $f = g$. The first one is a meta-statement about the terms of the $\lambda$-calculus. In it $x$ appears as a formal variable, i.e., it is part of the $\lambda$-calculus. It can be proved from $\beta\eta$-rules, see for example Theorem 2.1.29 in "Lambda Calculus: its Syntax and Semantics" by Barendregt (1985). It can be understood as a statement about all the definable functions, i.e., those which are denotations of $\lambda$-terms. The second statement is how mathematicians usually understand mathematical statements. The theory of $\lambda$-calculus describes a certain kind of structures, let us call them " $\lambda$-models ". A $\lambda$-model might be uncountable, so there is no guarantee that every element of it corresponds to a $\lambda$-term (just like there are more real numbers than there are expressions describing reals). Extensionality then says: if we take any two things $f$ and $g$ in a $\lambda$-model, if $f x = g x$ for all $x$ in the model, then $f = g$. Now even if the model satisfies the $\eta$-rule, it need not satisfy extensionality in this sense. (Reference needed here, and I think we need to be careful how equality is interpreted.) There are several ways in which we can motivate $\beta$- and $\eta$-conversions. I will randomly pick the category-theoretic one, disguised as $\lambda$-calculus, and someone else can explain other reasons. Let us consider the typed $\lambda$-calculus (because it is less confusing, but more or less the same reasoning works for the untyped $\lambda$-calculus). One of the basic laws that should holds is the exponential law $$C^{A \times B} \cong (C^B)^A.$$ (I am using notations $A \to B$ and $B^A$ interchangably, picking whichever seems to look better.) What do the isomorphisms $i : C^{A \times B} \to (C^B)^A$ and $j : (C^B)^A \to C^{A \times B}$ look like, written in $\lambda$-calculus? Presumably they would be $$i = \lambda f : C^{A \times B} . \lambda a : A . \lambda b : B . f \langle a, b \rangle$$ and $$j = \lambda g : (C^B)^A . \lambda p : A \times B . g (\pi_1 p) (\pi_2 p).$$
A short calculation with a couple of $\beta$-reductions (including the $\beta$-reductions $\pi_1 \langle a, b \rangle = a$ and $\pi_2 \langle a, b \rangle = b$ for products) tells us that, for every $g : (C^B)^A$ we have $$i (j g) = \lambda a : A . \lambda b : B . g a b.$$
Since $i$ and $j$ are inverses of each other, we expect $i (j g) = g$, but to actually prove this we need to use $\eta$-reduction twice: $$i(j g) = (\lambda a : A . \lambda b : B . g a b) =_\eta (\lambda a : A . g a) =_\eta g.$$
So this is one reason for having $\eta$-reductions. Exercise: which $\eta$-rule is needed to show that $j (i f) = f$? | {
"source": [
"https://cstheory.stackexchange.com/questions/8259",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/6577/"
]
} |
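To connect the currying discussion in 8,259 above with something executable, here is a tiny sketch of the maps $i$ and $j$ written as Python lambdas, representing the product $A \times B$ as a tuple; the sample function g is an arbitrary choice for the illustration.

    # i : C^(A x B) -> (C^B)^A   (curry)
    i = lambda f: lambda a: lambda b: f((a, b))

    # j : (C^B)^A -> C^(A x B)   (uncurry)
    j = lambda g: lambda p: g(p[0])(p[1])

    g = lambda a: lambda b: a + 2 * b       # a sample inhabitant of (C^B)^A

    # i(j(g)) beta-reduces to  lambda a: lambda b: g(a)(b);  only eta-conversion
    # (applied twice) identifies that term with g itself, but extensionally they agree:
    print(i(j(g))(3)(4), g(3)(4))           # 11 11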
8,539 | There is always a way for application in topics related to theoretical computer science. But textbooks and undergraduate courses usually don't explain the reason that automata theory is an important topic and whether it still has applications in practice. Therefore undergraduate students might have trouble in understanding the importance of automata theory and might think it is not of any practical use anymore. Is automata theory still useful in practice? Should it be part of undergraduate CS curriculum? | Ever used a tool like grep/awk/sed? Regular expressions form the heart of these tools. You'll be surprised how much coding you can avoid by principled use of regular expressions - in "practical projects", like an email server. If you're a CS major, you'll definitely be writing a compiler/interpreter for a (at least a small) language. If you've ever tried this task before and got stuck, you'll appreciate how much a little theory (aka context free grammars) can help you. This theory has made a once impossible task into something that can be completed over a weekend. (And it won the inventor a Turing award - google BNF). If you're a CS major, at some point, you need to sit back and think about the philosophical foundations of computing, and not just about how cool the next version of the Android API is. On a related note, it is the job of the university not to prepare you for the next 5 years of your life, but to prepare you for the next 50. The only thing they can do in this regard is to help you think - think of automata theory as one of those courses. | {
"source": [
"https://cstheory.stackexchange.com/questions/8539",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/6830/"
]
} |
8,805 | For instance, in programming languages it's common to write an X-in-X compiler/interpreter, but on a more general level many known Turing-complete systems can simulate themselves in impressive ways (e.g. simulating Conway's Game of Life in Conway's Game of Life). So my question is: is a system being able to simulate itself sufficient to prove it's Turing complete? It certainly is a necessary condition. | Not necessarily. For instance, the two-dimensional block cellular automaton with two states, in which a cell becomes live only when its four predecessors have exactly two adjacent live cells, can simulate itself with a factor of two slowdown and a factor of two size blowup, but is not known to be Turing complete. See The B36/S125 “2x2” Life-Like Cellular Automaton by Nathaniel Johnston for more on this block automaton and on the B36/S125 rule for the Moore neighborhood which is also capable of simulating this block automaton. | {
"source": [
"https://cstheory.stackexchange.com/questions/8805",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/7091/"
]
} |
8,851 | Coming from a more mathematical background, I never really learned how to code.
I am starting a PhD in TCS and many people were surprised by how little I knew about programming (and about computers in general). I can write algorithms in pseudo-code, but I don't really know any programming language. I can imagine that someday I may have to implement some algorithms for my work, but if so, can I wait until that moment to learn? Or is there something more? How important is knowing how to code in TCS (in fields where programming is not directly involved)?
Are there reasons which could bring a CS theorist (for example) to know how to code? Is it worth spending a lot of time learning how to code? And if there are, is there a category (functional, imperative, object-oriented...) of programming language that would be more suited? | Theoretical computer science is a broad field and the importance of programming depends on what you do in TCS. I will mention two ways in which programming can help you, without implying that these are the only ways. First, if you design algorithms for problems of practical importance, implementing your algorithms and making the code available to others can be a big plus. For example, the convex hull problem arises in many fields, and people use software packages such as cdd by Komei Fukuda and lrs by David Avis to solve this problem. If they had published their algorithms only in papers, probably less people would have used their algorithms. More users mean more feedback and probably also more opportunities to collaborate, which is invaluable. Second, even if you do not work in algorithms, writing a one-time code helps you to test a simple conjecture when the conjecture is suitable to numerical calculation. For example, if you wonder whether the product of three positive definite matrices always has a positive trace, it is easy to write a code to test it for some random choices of 2×2 or 3×3 positive definite matrices and find a counterexample. Although you do not advertise that you wrote any program to test the conjecture, programming can save the time which would have been spent in vain trying to prove a false statement. The programming language to choose depends on what you want to do with programming, and it can be a topic for a whole book in my opinion. But if you design algorithms and want to implement your algorithms so that other people can use the implementation, then one important factor is availability. Although you can expect that most potential users of your code have access to a C compiler, you cannot expect that the same people have access to a Haskell compiler. For one-time programs, the choice is more based on available libraries, and includes the environments such as Matlab. By the way, programming can also be fun. | {
"source": [
"https://cstheory.stackexchange.com/questions/8851",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/4873/"
]
} |
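The answer above mentions testing, by a quick numerical experiment, whether the product of three positive definite matrices always has a positive trace. A throwaway script of that kind might look as follows (assuming NumPy is available); it is exactly the sort of one-time code the answer describes, and of course proves nothing either way.

# Random search for a counterexample to the conjecture mentioned above.
import numpy as np

rng = np.random.default_rng(0)

def random_pd(n):
    # A A^T + I is symmetric positive definite for any real matrix A.
    a = rng.standard_normal((n, n))
    return a @ a.T + np.eye(n)

for trial in range(20000):
    p, q, r = random_pd(3), random_pd(3), random_pd(3)
    t = np.trace(p @ q @ r)
    if t <= 0:
        print("counterexample found, trace =", t)
        break
else:
    print("no counterexample in 20000 random trials")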
8,893 | This is about how effectively we can express an algorithm at hand. I need this for my undergraduate teaching. I understand there is no such thing as standard way of writing a pseudo code. Different authors follow different conventions. It would be helpful if people here point out, the way they follow and think the best one. Is there any book that deals with this in a good detail? | Writing pseudocode is like writing code: It's not particularly important which standard you follow, as long as you (and the people you write with) actually follow some standard. But for the record, here's the idiosyncratic standard I use in my lecture notes, research papers, and upcoming book. Use standard imperative syntax for control flow and memory access — if, while, for, return, array[index], function(arguments). Spell out "else if". But use $field(record)$ instead of record.field or record->field Use standard mathematical notation for math — Write $xy$ instead of x*y , $a\bmod b$ instead of a%b , $s\le t$ instead of s <= t , $\lnot p$ instead of !p , $\sqrt{x}$ instead of sqrt(x) , $\pi$ instead of PI , $\infty$ instead of MAX_INT , etc. But use $x\gets y$ for assignment, to avoid the == problem. But avoid notation (and pseudocode!) entirely if English is clearer. Symmetrically, avoid English if notation is clearer! Minimize syntactic sugar — Indicate block structure by consistent indentation (à la Python). Omit sugary keywords like "begin/end" or "do/od" or "fi". Omit line numbers. Do not emphasize keywords like "for" or "while" or "if" by setting them in a different typeface or style . Ever. Just don't. But typeset algorithm names and constants in \textsc{Small Caps}, variable names in italic , and literal strings in sans serif. But add a small amount of vertical "breathing" space ( \\[0.5ex] ) between meaningful code chunks. Don't specify unimportant details. If it doesn't matter what order you visit the vertices, just say "for all vertices". For example, here is a recursive formulation of Borůvka's minimum spanning tree algorithm . I've previously defined $G / L$ as the graph obtained from $G$ by contracting all edges in the set $L$, and Flatten as a subroutine that removes loops and parallel edges. I use my own lightweight algorithm LaTeX environment to typeset pseudocode. (It's just a tabbing environment inside an \fbox .) Here's my source code for Borůvka's algorithm: \begin{algorithm}
\textul{$\textsc{Borůvka}(G)$:}\+
\\ if $G$ has no edges\+
\\ return $\varnothing$\-
\\[0.5ex]
$L \gets \varnothing$
\\ for each vertex $v$ of $G$\+
\\ add the lightest edge incident to $v$ to $L$\-
\\[0.5ex]
return $L \cup \textsc{Borůvka}(\textsc{Flatten}(G / L))$
\end{algorithm} | {
"source": [
"https://cstheory.stackexchange.com/questions/8893",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/3162/"
]
} |
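For readers who want to run the Borůvka example from the answer above rather than typeset it, here is a rough Python version. It keeps the same structure (every component grabs its lightest incident edge, then the chosen edges are contracted), but the contraction Flatten(G / L) is handled implicitly with a union-find structure over the original vertices instead of rebuilding the graph, and edge weights are assumed distinct.

# A rough Python version of the Boruvka pseudocode above. Edges are
# (weight, u, v) tuples with distinct weights.

def boruvka(vertices, edges):
    parent = {v: v for v in vertices}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst = set()
    while True:
        lightest = {}   # component root -> lightest outgoing edge
        for w, u, v in edges:
            a, b = find(u), find(v)
            if a == b:
                continue   # a loop after contraction, removed by Flatten
            for root in (a, b):
                if root not in lightest or w < lightest[root][0]:
                    lightest[root] = (w, u, v)
        if not lightest:   # no edges left between components
            return mst
        for w, u, v in lightest.values():   # the set L in the pseudocode
            a, b = find(u), find(v)
            if a != b:
                parent[a] = b               # contract the edge
                mst.add((w, u, v))

edges = [(4, 'a', 'b'), (8, 'b', 'c'), (2, 'a', 'c'), (9, 'c', 'd')]
print(sorted(boruvka({'a', 'b', 'c', 'd'}, edges)))
# [(2, 'a', 'c'), (4, 'a', 'b'), (9, 'c', 'd')]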
8,918 | The state of our knowledge about general arithmetic circuits seems to be similar to the state of our knowledge about Boolean circuits, i.e. we don't have good lower-bounds. On the other hand we have exponential size lower-bounds for monotone Boolean circuits . What do we know about monotone arithmetic circuits?
Do we have similar good lower-bounds for them?
If not, what is the essential difference that doesn't allow us to get similar lower-bounds for monotone arithmetic circuits? The question is inspired by comments on this question . | Lower bounds for monotone arithmetic circuits come easier because they forbid cancellations. On the other hand, we can prove exponential lower bounds for circuits computing boolean functions even if any monotone real-valued functions $g:R\times R\to R$ are allowed as gates (see e.g. Sect. 9.6 in the book ). Even though monotone arithmetic circuits are weaker than monotone boolean circuits (in the latter we have cancellations $a\land a=a$ and $a\lor (a\land b)=a$ ), these circuits are interesting because of their relation to dynamic programming (DP) algorithms. Most of such algorithms can be simulated by circuits over semirings $(+,\min)$ or $(+,\max)$ . Gates then correspond to subproblems used by the algorithm. What Jerrum and Snir (in the paper by V Vinay) actually prove is that any DP algorithm for the Min Weight Perfect Matching (as well as for the TSP problem) must produce exponentially many subproblems. But the Perfect Mathching problem is not of "DP flawor" (it does not satisfy Bellman's Principle of Optimality ). Linear programming (not DP) is much more suited for this problem. So what about optimization problems that can be solved by reasonably small DP algorithms - can we prove lower bounds also for them? Very interesting in this respect is an old result of Kerr (Theorem 6.1 in his phd ). It implies that the classical Floyd-Warshall DP algorithm for the All-Pairs Shortest Paths problem (APSP) is optimal : $\Omega(n^3)$ subproblems are necessary. Even more interesting is that Kerr's argument is very simple (much simpler than that Jerrum and Snir used): it just uses the distributivity axiom $a+\min(b,c)=\min(a,b)+\min(a,c)$ , and the possibility to "kill" min-gates by setting one of its arguments to $0$ .This way he proves that $n^3$ plus-gates are necessary to multiply two $n\times n$ matrices over the semiring $(+,\min)$ . In Sect. 5.9 of the book by Aho, Hopcroft and Ullman it is shown that this problem is equivalent to APSP problem. A next question could be: what about the Single-Source Shortest Paths (SSSP) problem? Bellman-Ford DP algorithm for this (seemingly "simpler") problem also uses $O(n^3)$ gates. Is this optimal? So far, no separation between these two versions of the shortest path problem are known; see an interesting paper of Virginia and Ryan Williams along these lines. So, an $\Omega(n^3)$ lower bound in $(+,\min)$ -circuits for SSSP would be a great result. Next question could be: what about lower bounds for Knapsack? In this draft lower bounds for Knapsack are proved in weaker model of $(+,\max)$ circuits where the usage of $+$ -gates is restricted; in Appendix Kerr's proof is reproduced. | {
"source": [
"https://cstheory.stackexchange.com/questions/8918",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/186/"
]
} |
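To make Kerr's setting from the answer above concrete, below is the straightforward cubic algorithm for the (min,+) matrix product, the model in which his $\Omega(n^3)$ bound on plus-gates applies, together with its use for All-Pairs Shortest Paths by repeated products. This only illustrates the circuit being discussed, not the lower-bound argument itself.

# The (min,+) matrix product: the straightforward n^3 algorithm uses exactly
# the n^3 additions that Kerr's argument shows are necessary. Repeated
# (min,+) products of a graph's weight matrix solve APSP.
INF = float('inf')

def min_plus_product(A, B):
    n = len(A)
    C = [[INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] = min(C[i][j], A[i][k] + B[k][j])
    return C

# weight matrix of a small digraph (INF = no edge, 0 on the diagonal)
W = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]

D = W
for _ in range(2):   # after k products, D covers paths with at most k+1 edges
    D = min_plus_product(D, W)
print(D)   # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]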
8,991 | Shiva Kintali has just announced a (cool!) result that graph isomorphism for bounded treewidth graphs of width $\geq 4$ is $\oplus L$-hard . Informally, my question is, "How hard is that?" We know that nonuniformly $NL \subseteq \oplus L$, see the answers to this question . We also know that it is unlikely that $\oplus L = P$, see the answers to this question . How surprising would it be if $L=\oplus L$? I have heard many people say that $L=NL$ would not be shocking the way $P=NP$ would. What are the consequences of $L=\oplus L$? Definition: $\oplus L$ is the set of languages recognized by a non-deterministic Turing machine which can only distinguish between an even number or odd number of "acceptance" paths (rather than a zero or non-zero number of acceptance paths), and which is further restricted to work in logarithmic space. | Wigderson proved that $NL/poly \subseteq \oplus L/poly$. By standard arguments, $L = \oplus L$ would imply $L/poly = NL/poly$. (Take a machine $M$ in $NL/poly$. It has an equivalent machine $M'$ in $\oplus L/poly$. Take the $\oplus L$ language of instance-advice pairs $S = \{(x,a)~|~M'(x,a)~\textrm{accepts}\}$. If this language is in $L$, then by hardcoding the appropriate advice $a$ we get an $L/poly$ machine equivalent to $M$.) I think that would be surprising: nondeterministic branching programs would be equivalent to deterministic branching programs (up to polynomial factors). | {
"source": [
"https://cstheory.stackexchange.com/questions/8991",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/30/"
]
} |
9,031 | Given $m, n, k$, how many of $k$-DNFs with $n$ variables and $m$ clauses are tautology? (or how many $k$-CNFs are unsatisfiable?) | The answer depends on $k$, $m$, and $n$. Exact counts are generally not known, but there is a "threshold" phenomenon that for most settings of $k$, $m$, $n$, either nearly all $k$-SAT instances are satisfiable, or nearly all instances are unsatisfiable. For example, when $k=3$, it has been empirically observed that when $m < 4.27 n$, all but a $o(1)$ fraction of 3-SAT instances are satisfiable, and when $m > 4.27n$, all but a $o(1)$ fraction are unsatisfiable. (There are also rigorous proofs of bounds known.) One starting point is "The Asymptotic Order of the k-SAT Threshold" . Amin Coja-Oghlan has also done a lot of work on these satisfiability threshold problems. | {
"source": [
"https://cstheory.stackexchange.com/questions/9031",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/14197/"
]
} |
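The threshold behaviour described in the answer above is easy to observe empirically for very small n. The brute-force sketch below (n, the number of trials and the sampled ratios are arbitrary choices) estimates the fraction of satisfiable random 3-CNF formulas as the clause/variable ratio m/n grows past roughly 4.27; at such small n the transition is blurred but still visible.

# Empirical satisfiability fraction of random 3-CNF formulas, brute force.
import random
from itertools import product

def random_3cnf(n, m):
    clauses = []
    for _ in range(m):
        vars_ = random.sample(range(n), 3)
        clauses.append([(v, random.choice([True, False])) for v in vars_])
    return clauses

def satisfiable(n, clauses):
    for assignment in product([False, True], repeat=n):
        if all(any(assignment[v] == sign for v, sign in clause)
               for clause in clauses):
            return True
    return False

n, trials = 12, 100
for ratio in (3.0, 4.0, 4.27, 5.0, 6.0):
    m = round(ratio * n)
    sat = sum(satisfiable(n, random_3cnf(n, m)) for _ in range(trials))
    print(f"m/n = {ratio:4.2f}: {sat / trials:.2f} of formulas satisfiable")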
9,088 | Are there any algorithms for SAT solving which are not DPLL based?
Or are all algorithms used by SAT solvers DPLL-based? | Resolution Search (just applying the resolution rule with some good heuristics) is another possible strategy for SAT solvers. Theoretically it's exponentially more powerful (i.e. there exist problems for which it has exponentially shorter proofs) than DPLL (which just does tree resolution, though you can augment it with nogood learning to increase its power - whether that makes it as powerful as general resolution is still open as far as I know) but I don't know of an actual implementation that performs better. If you don't limit yourself to complete search, then WalkSat is a local search solver which can be used to find satisfiable solutions and outperforms DPLL-based search in many cases. One can't use it to prove unsatisfiability though unless one caches all the assignments that have failed which would mean exponential memory requirements. Edit: Forgot to add - Cutting planes can also be used (by reducing SAT to an integer program). In particular Gomory cuts suffice to solve any integer program to optimality. Again in the worst case, an exponential number may be needed. I think Arora & Barak's Computational Complexity book has a few more examples of proof systems that one could in theory use for something like SAT solving. Again, I haven't really seen a fast implementation of anything apart from DPLL-based or local search based methods. | {
"source": [
"https://cstheory.stackexchange.com/questions/9088",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/14197/"
]
} |
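As an illustration of the WalkSat-style local search mentioned in the answer above, here is a minimal Python sketch: pick a random unsatisfied clause and flip one of its variables, either at random or greedily. The scoring rule and parameters below are simplified choices rather than the exact heuristics of the real solver, and as the answer notes, such a search can find satisfying assignments but never certifies unsatisfiability.

# Minimal WalkSAT-style local search. Literals are signed integers:
# +i means x_i, -i means not x_i.
import random

def walksat(n, clauses, max_flips=100000, noise=0.5):
    assign = {i: random.choice([True, False]) for i in range(1, n + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign
        clause = random.choice(unsat)
        if random.random() < noise:
            var = abs(random.choice(clause))       # random "noise" flip
        else:
            # greedy: flip the variable leaving the fewest unsatisfied clauses
            def unsat_after(v):
                assign[v] = not assign[v]
                count = sum(not any(sat(l) for l in c) for c in clauses)
                assign[v] = not assign[v]
                return count
            var = min((abs(l) for l in clause), key=unsat_after)
        assign[var] = not assign[var]
    return None   # gave up; this says nothing about unsatisfiability

clauses = [[1, 2, -3], [-1, 3], [2, 3], [-2, -3, 1]]
print(walksat(3, clauses))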
9,091 | I will be attending my first computer science conference and after reading the advice for how to improve conferences I noticed the several suggestions were about grad students attending their first conference. What advice do you have for a grad student attending his first conference and what should his focus be. | Talk to people, even if they are scary big names. Attend all the keynote/invited presentations. Attend the talks most relevant to you. Don't be afraid to ask questions. Attend the social events, meet other graduate students, have fun. Talk enthusiastically about your research. Make sure you have a 1 minute pitch describing your work, plus a 5 minute description, and also be prepared to enter into a more detailed discussion. Ask people about their research. Simply asking what are you working on? will get the conversation started. Be open to possible collaborations, and follow up after the conference. | {
"source": [
"https://cstheory.stackexchange.com/questions/9091",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/6679/"
]
} |
9,173 | This question may not be technical. As a non-native speaker and a TA for algorithm class, I always wondered what gadget means in 'clause gadget' or 'variable gadget'. The dictionary says a gadget is a machine or a device, but I'm not sure what colloquial meaning it has in the context of NP-complete proof. | A "gadget" is a small specialized device for some particular task. In NP-hardness proofs, when doing a reduction from problem A to problem B, the colloquial term "gadget" refers to small (partial) instances of problem B that are used to "simulate" certain objects in problem A. For example, when reducing 3SAT to 3-COLORING, clause gadgets are small graphs that are used to represent the clauses of the original formula and variable gadgets are small graphs that are used to represent the variables of the original formula. In order to ensure that the reduction is correct, the gadgets have to be graphs that can be 3-colored in very specific ways. Hence we think of these small graphs as devices that perform a specialized task. In many cases, the main difficulty of proving NP-hardness is constructing appropriate gadgets. Sometimes these gadgets are complicated and moderately large. The creative process of creating such gadgets is sometimes called "gadgeteering." | {
"source": [
"https://cstheory.stackexchange.com/questions/9173",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/7411/"
]
} |
9,196 | Practically, for a language that can eventually be compiled/transformed into system level instructions, is it necessary that it be a context free grammar? ex: Are all programming/scripting languages context free grammars? Java is based on CFGs, but is it actually the case that all programming languages are based on CFGs? It does not seem mandatory, but there are gaps in my understanding. Some context for the question: I was looking at Java language specification, which also provides the grammar rules . This made me think about this question. | Two times no. First, most HPLs are not context free. While they usually have syntax based on a CFG, they also have what people call static semantics (which is also often included in the term syntax). This can include names and types which have to check out for a correct program. For instance, class A {
String a = "a";
int b = a + d;
} is a syntactically correct Java program but will not compile because d is not defined and a does not have a fitting type. Secondly, you can parse languages that are not context-free (as obviously proven by the existence of compilers). It is only that CFGs can be parsed efficiently, while CSGs can not, in general. However, you can add certain non-context-free features while remaining efficient. Compilers often run in phases: first tokenization (regular), then context-free parsing, then name and type analysis (context-sensitive, sometimes even harder). You can observe that behaviour by the kind of error messages you get. | {
"source": [
"https://cstheory.stackexchange.com/questions/9196",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/5873/"
]
} |
9,241 | It is known that metric TSP can be approximated within $1.5$ and cannot be approximated better than $123\over 122$ in polynomial time.
Is anything known about finding approximate solutions in exponential time (for example, less than $2^n$ steps with only polynomial space)?
E.g. in what time and space we can find a tour whose distance is at most $1.1\times OPT$? | I've studied the problem and I found the best known algorithms for TSP. $n$ is the number of vertices, $M$ is the maximal edge weight.
All bounds are given up to a polynomial factor of the input size ( $poly(n, \log M)$ ).
We denote Asymmetric TSP by ATSP. 1. Exact Algorithms for TSP 1.1. General ATSP $M2^{n-\Omega(\sqrt{n/\log (Mn)})}$ time and $exp$ -space ( Björklund ). $2^n$ time and $2^n$ space ( Bellman ; Held, Karp ). $4^n n^{\log n}$ time and $poly$ -space ( Gurevich, Shelah ; Björklund, Husfeldt ). $2^{2n-t} n^{\log(n-t)}$ time and $2^t$ space for $t=n,n/2,n/4,\ldots$ ( Koivisto, Parviainen ). $O^*(T^n)$ time and $O^*(S^n)$ space for any $\sqrt2<S<2$ with $TS<4$ ( Koivisto, Parviainen ). $2^n\times M$ time and poly-space ( Lokshtanov, Nederlof ). $2^n\times M$ time and space $M$ ( Kohn, Gottlieb, Kohn ; Karp ; Bax, Franklin ). Even for Metric TSP nothing better is known than algorithms above. It is a big challenge to develop $2^n$ -time algorithm for TSP with polynomial space (see Open Problem 2.2.b, Woeginger ). 1.2. Special Cases of TSP $1.657^n\times M$ time and exponentially small probability of error( Björklund ) for Undirected TSP. $(2-\epsilon)^n$ and exponential space for TSP in graphs with bounded average degree, $\epsilon$ depends only on degree of graph ( Cygan, Pilipczuk ; Björklund, Kaski, Koutis ). $(2-\epsilon)^n$ and $poly$ -space for TSP in graphs with bounded maximal degree and bounded integer weights, $\epsilon$ depends only on degree of graph ( Björklund, Husfeldt, Kaski, Koivisto ). $1.251^n$ and $poly$ -space for TSP in cubic graphs ( Iwama, Nakashima ). $1.890^n$ and $poly$ -space for TSP in graphs of degree $4$ ( Eppstein ). $1.733^n$ and exponential space for TSP in graphs of degree $4$ ( Gebauer ). $1.657^n$ time and $poly$ -space for Undirected Hamiltomian Cycle ( Björklund ). $(2-\epsilon)^n$ and exponential space for TSP in graphs with at most $d^n$ Hamiltonian cycles (for any constant $d$ ) ( Björklund, Kaski, Koutis ). 2. Approximation Algorithms for TSP 2.1. General TSP Cannot be approximated within any polynomial time computable function unless P=NP ( Sahni, Gonzalez ). 2.2. Metric TSP $3 \over 2$ -approximation ( Christofides ). Cannot be approximated with a ratio better than $123\over 122$ unless P=NP ( Karpinski, Lampis, Schmied ). 2.3. Graphic TSP $7\over5$ -approximation ( Sebo, Vygen ). 2.4. (1,2)-TSP MAX-SNP hard ( Papadimitriou, Yannakakis ). $8 \over 7$ -approximation ( Berman, Karpinski ). 2.5. TSP in Metrics with Bounded Dimension PTAS for TSP in a fixed-dimensional Euclidean space ( Arora ; Mitchell ). TSP is APX-hard in a $\log{n}$ -dimensional Euclidean space ( Trevisan ). PTAS for TSP in metrics with bounded doubling dimension ( Bartal, Gottlieb, Krauthgamer ). 2.6. ATSP with Directed Triangle Inequality $O(1)$ -approximation ( Svensson, Tarnawski, Végh ) Cannot be approximated with a ratio better than $75\over 74$ unless P=NP ( Karpinski, Lampis, Schmied ). 2.7. TSP in Graphs with Forbidden Minors Linear time PTAS ( Klein ) for TSP in Planar Graphs. PTAS for minor-free graphs ( Demaine, Hajiaghayi, Kawarabayashi ). $22\frac{1}{2}$ -approximation for ATSP in planar graphs ( Gharan, Saberi ). $O(\frac{\log g}{\log\log g})$ -approximation for ATSP in genus- $g$ graphs ( Erickson, Sidiropoulos ). 2.8. MAX-TSP $7\over9$ -approximation for MAX-TSP ( Paluch, Mucha, Madry ). $7\over8$ -approximation for MAX-Metric-TSP ( Kowalik, Mucha ). $3\over4$ -approximation for MAX-ATSP ( Paluch ). $35\over44$ -approximation for MAX-Metric-ATSP ( Kowalik, Mucha ). 2.9. Exponential-Time Approximations It is possible to compute $(1+\epsilon)$ -approximation for MIN-Metric-TSP in time $2^{(1-\epsilon/2)n}$ with exponential space
for any $\epsilon\le \frac{2}{5}$ , or in time $4^{(1-\epsilon/2)n} n^{\log n}$ with polynomial space for any $\epsilon \leq \frac{2}{3}$ ( Boria, Bourgeois, Escoffier, Paschos ). I would be grateful for any additions and suggestions. | {
"source": [
"https://cstheory.stackexchange.com/questions/9241",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/7454/"
]
} |
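For reference, here is a sketch of the classical Bellman / Held-Karp dynamic program from item 1.1 of the list above: exact TSP in $O(2^n n^2)$ time and exponential space. The small distance matrix is just a made-up test instance.

# Held-Karp dynamic program for exact TSP.
from itertools import combinations

def held_karp(dist):
    n = len(dist)
    # best[(S, j)] = length of a shortest path starting at vertex 0, visiting
    # exactly the vertices in frozenset S (0 not in S), and ending at j.
    best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                best[(S, j)] = min(best[(S - {j}, k)] + dist[k][j]
                                   for k in subset if k != j)
    full = frozenset(range(1, n))
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(held_karp(dist))   # 21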
9,298 | We "know" that $\mathsf{SC}$ is named for Steve Cook and $\mathsf{NC}$ is named for Nick Pippenger. If I'm not mistaken, Steve Cook named NC in honor of Nick Pippenger, and I was told that the reverse is true as well. However, I wasn't able to find any evidence of this latter fact in either Steve Cook's paper on DCFLs or Nisan's proof that $\mathsf{RL} \subseteq \mathsf{SC}$. Is there any documented evidence of the latter claim, or is this merely "in the air" ? p.s I'm asking because I was browsing examples of Stigler's Law of Eponymy , and was wondering about what I'll call "Stigler Reciprocity": where something invented by A is named after B and vice versa. An example of this is Cartan Matrices and Killing forms. | The following is according to Nick Pippenger: The relevant references are as follows. Steve described NC as Nick's Class in his paper "Deterministic CFL's Are Accepted Simultaneously in Polynomial Time and Log Squared Space" (ACM STOC, 11 (1979) 338-345) on SC, and I described SC as Steve's Class in my paper "Simultaneous Resource Bounds'' (IEEE FOCS, 20 (1979) 307-311) on NC. But the names originated about a year-and-a-half earlier, when I visited the University of Toronto CS Department (January through June, 1978). That's when the study of the two classes began, with Steve defining SC and me defining NC, and various people in the department (I think Allan Borodin was the first) using the two names. The following fall, Steve submitted the paper cited above. I was on the program committee for that STOC, and not allowed to submit papers to it, so my paper appeared in the following FOCS conference. Best wishes, Nick | {
"source": [
"https://cstheory.stackexchange.com/questions/9298",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/80/"
]
} |
9,350 | The PCP theorem states that there is no polynomial time algorithm for MAX 3SAT to find an assignment satisfying $7/8+ \epsilon$ clauses of a satisfiable 3SAT formula unless $P = NP$. There is a trivial polynomial time algorithm that satisfies $7/8$ of the clauses. So, Can we do better than $7/8+ \epsilon $ if we allow super-polynomial algorithms? What approximation ratios can we achieve with quasi-polynomial time algorithms ( $n^{O(\log n)}$) or with sub-exponential time algorithms($2^{o(n)}$)? I'm looking for references to any such algorithms. | One can get a $7/8+\varepsilon/8$ approximation for MAX3SAT that runs in $2^{O(\varepsilon n)}$ time without too much trouble. Here is the idea. Divide the set of variables into $O(1/\varepsilon)$ groups of $\varepsilon n$ variables each. For each group, try all $2^{\varepsilon n}$ ways to assign the variables in the group. For each reduced formula, run the Karloff and Zwick $7/8$-approximation. Output the assignment satisfying a maximum number of clauses, out of all these trials. The point is that there is some variable block such that the optimal assignment (restricted to that block) already satisfies a $\varepsilon$-fraction of the maximum number of satisfied clauses. You'll get those extra clauses exactly correct, and you'll get $7/8$ of the the remaining fraction of the optimum using Karloff and Zwick. It is an interesting question if one can get $2^{O(\varepsilon^2 n)}$ time for the same type of approximation. There is a "Linear PCP Conjecture" that 3SAT can be reduced in polynomial time to MAX3SAT, such that: if the 3SAT instance is satisfiable then the MAX3SAT instance is completely satisfiable, if the 3SAT instance is unsatisfiable then the MAX3SAT instance isn't $7/8+\varepsilon$ satisfiable, and the reduction increases the formula size by only a $poly(1/\varepsilon)$ factor. Assuming this Linear PCP Conjecture, a $2^{O(\varepsilon^c m)}$-time $7/8+\varepsilon$ approximation, for all $c$ and $\varepsilon$, would entail that 3SAT is in $2^{\varepsilon n}$ time, for all $\varepsilon$. (Here $m$ is the number of clauses.) The proof uses the Sparsification Lemma of Impagliazzo, Paturi, and Zane. | {
"source": [
"https://cstheory.stackexchange.com/questions/9350",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/495/"
]
} |
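For completeness, here is the "trivial" polynomial-time 7/8 guarantee mentioned in the question above, derandomized by the method of conditional expectations: a uniformly random assignment satisfies each clause with three distinct literals with probability 7/8, and fixing variables greedily preserves that expectation. This is only the baseline; it is neither the Karloff-Zwick rounding nor the exponential-time scheme described in the answer.

# Method of conditional expectations for the 7/8 baseline on MAX-3SAT.
# Clauses are lists of signed integers; variables within a clause are assumed distinct.

def expected_sat(clauses, fixed):
    # expected number of satisfied clauses if the unfixed variables are uniform
    total = 0.0
    for clause in clauses:
        p_unsat = 1.0
        for lit in clause:
            v = abs(lit)
            if v in fixed:
                if fixed[v] == (lit > 0):
                    p_unsat = 0.0
                    break
            else:
                p_unsat *= 0.5
        total += 1.0 - p_unsat
    return total

def seven_eighths(n, clauses):
    fixed = {}
    for v in range(1, n + 1):
        fixed[v] = max([True, False],
                       key=lambda b: expected_sat(clauses, {**fixed, v: b}))
    return fixed

clauses = [[1, 2, 3], [-1, -2, 3], [1, -2, -3], [-1, 2, -3]]
assignment = seven_eighths(3, clauses)
satisfied = sum(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)
print(assignment, satisfied, "of", len(clauses))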
9,366 | I'm trying to figure out how the Path Graph $P(G)$ according to Eppstein's Algorithm in this paper works
and how I can reconstruct the $k$ shortest paths from $s$ to $t$ with the corresponding heap construction $H(G)$. So far: $out(v)$ contains all edges leaving a vertex $v$ in a graph $G$ which are not part of a shortest path in $G$. They are heap-ordered by the "waste of time" called $\delta(e)$ when using this edge instead of the one on a shortest paths. By applying Dijkstra I find the shortest paths to every vertex from $t$. I can calculate this by taking the length of the edge + (the value of the head vertex (where the directed edge is pointing to) - the value of the tail vertex (where the directed edge is starting). If this is $> 0$ it is not on a shortest path, if it is $= 0$ it is on a shortest path. Now I build a 2-Min-Heap $H_{out}(v)$ by heapifying the set of edges $out(v)$ according to their $\delta(e)$ for any $v \in V$, where the root $outroot(v)$ has only one child (= subtree). In order to build $H_T(v)$ I insert $outroot(v)$ in $H_T(next_T(v))$ beginning at the terminal vertex $t$. Everytime a vertex is somehow touched while inserting it is marked with a $*$. Now I can build $H_G(v)$ by inserting the rest of $H_{out}(w)$ in $H_T(v)$. Every vertex in $H_G(v)$ contains either $2$ children from $H_T(v)$ and $1$ from $H_{out}(w)$ or $0$ from the first and $2$ from the second and is a 3-heap. With $H_G(v)$ I can build a DAG called $D(G)$ containing a vertex for each $*$-marked vertex from $H_T(v)$ and for each non-root vertex from $H_{out}(v)$. The roots of $H_G(v)$ in $D(G)$ are called $h(v)$ and they are connected to the vertices they belong to according to $out(v)$ by a "mapping". So far, so good. The paper says I can build $P(G)$ by inserting a root $r = r(s)$ and connecting this to $h(s)$ by an inital edge with $\delta(h(s))$. The vertices of $D(G)$ are the same in $P(G)$ but they are not weighted. The edges have lengths. Then for each directed edge $(u,v) \in D(G)$ the corresponding edges in $P(G)$ are created and weighted by $\delta(v) - \delta(u)$. They are called Heap Edges. Then for each vertex $v \in P(G)$, which represents an edge not in a shortest path connecting a pair of vertices $u$ and $w$, "cross edges" are created from $v$ to $h(w)$ in $P(G)$ having a length $\delta(h(w))$. Every vertex in $P(G)$ only has a out going degree of $4$ max. $P(G)$'s paths starting from $r$ are supposed to be a one-to-one length correspondence between $s$-$t$-paths in $G$. In the end a new heap ordered 4-Heap $H(G)$ is build. Each vertex corresponds to a path in $P(G)$ rooted at $r$. The parent of any vertex has one fewer edge. The weight of a vertex is the lenght of the corresponding path. To find the $k$ shortest paths I use BFS to $P(G)$ and "translate" the search result to paths by using $H(G)$. Unfortunately, I don't understand how I can "read" $P(G)$ and then "translate" it through $H(G)$ to receive the $k$ shortest paths. | It's been long enough since I wrote that, that by now my interpretation of what's in there is probably not much more informed than any other reader's. Nevertheless: I believe that the description you're looking for is the last paragraph of the proof of Lemma 5. Basically, some of the edges in P(G) (the "cross edges") correspond to sidetracks in G (that is, edges that diverge from the shortest path tree). The path in G is formed by following the shortest path tree to the starting vertex of the first sidetrack, following the sidetrack edge itself, following the shortest path tree again to the starting vertex of the next sidetrack, etc. | {
"source": [
"https://cstheory.stackexchange.com/questions/9366",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/7502/"
]
} |
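The last paragraph of the answer above describes how a sequence of sidetrack edges is translated back into an s-t path: follow the shortest-path tree to the tail of the next sidetrack, take the sidetrack edge, repeat, and finish along the tree. Below is a small sketch of just that translation step; it assumes the next-hop map of the shortest-path tree towards t and the sidetrack sequence are already available, and says nothing about how the sequences themselves are enumerated via H(G) and P(G), which is the hard part of the algorithm.

# Recover the s-t path from a sequence of sidetrack edges.

def path_from_sidetracks(s, t, next_hop, sidetracks):
    path, v = [s], s
    for (u, w) in sidetracks:          # sidetrack edge u -> w, off the tree
        while v != u:                  # follow the tree to the sidetrack's tail
            v = next_hop[v]
            path.append(v)
        v = w                          # take the sidetrack edge itself
        path.append(v)
    while v != t:                      # finish along the tree
        v = next_hop[v]
        path.append(v)
    return path

# toy example: tree next-hops toward t on vertices s, a, b, t
next_hop = {'s': 'a', 'a': 't', 'b': 't'}
print(path_from_sidetracks('s', 't', next_hop, []))            # ['s', 'a', 't']
print(path_from_sidetracks('s', 't', next_hop, [('a', 'b')]))  # ['s', 'a', 'b', 't']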
9,381 | In light of the announcement of the world's first programmable quantum photonic chip , I was wondering just what software for a computer that uses quantum entanglement would be like. One of the first programs I ever wrote was something like for i = 1 to 10
print i
next i Can anybody give an example of code of comparable simplicity that would utilize quantum photonic chips (or similar hardware), in pseudocode or high level language? I am having difficulty making the conceptual jump from traditional programming to entanglement, etc. | Caveat Emptor: the following is heavily biased on my own research and view on the field of QC. This does not constitute the general consensus of the field and might even contain some self-promotion. The problem of showing a 'hello world' of quantum computing is that we're basically still as far from quantum computers as Leibnitz or Babbage were from your current computer. While we know how they should operate theoretically, there is no standard way of actually building a physical quantum computer. A side-effect of that is that there is no single programming model of quantum computing. Textbooks such as Nielsen et al. will show you a 'quantum circuit' diagram, but those are far from formal programming languages: they get a little 'hand-waving' on the details such as classical control or dealing with input/output/measurement results. What has suited me best in my research as a programming language computer scientist, and to get the jist of QC across to other computer scientist, is to use the simplest QC model I've come across that does everything. The simplest quantum computing program I have seen that contains all essential elements is a small three-instruction program in the simplest quantum programming model I've come across. I use it as you would a 'hello world' to get the basics across. Allow me to give quick simplified summary of the The Measurement Calculus by Danos et al. 1 that is based on is based on the one-way quantum computer 2 : a qubit is destroyed when measured, but measuring it affects all other qubits that were entangled with it. It has some theoretical and practical benefits over the 'circuit-based' quantum computers as realized by the photonic chip, but that is a different discussion. Consider a quantum computer that has only five instructions: N, E, M, X and Z. Its "assembly language" is similar to your regular computer, after executing one instruction it goes to the next instruction in the sequence. Each instruction takes a target qubit identifier, we use just a number here, and other arguments. N 2 # create a new quantum bit and identify it as '2'
E 1 2 # entangle qubits '1' and '2', qubit 1 already exists and is considered input
M 1 0 # measure qubit '1' with an angle of zero (angle can be anything in [0,2pi]
# qubit '1' is destroyed and the result is either True or False
# operations beyond this point can be dependent on the signal of '1'
X 2 1 # if the signal of qubit '1' is True, execute the Pauli-X operation on qubit '2' The above program thus creates an ancilla, entangles it with the input qubit, measures the input and depending on the measurement outcome performs an operation on the ancilla. The result is that qubit 2 now contains the state of qubit 1 after Hadamard operation. The above is naturally at such low level that you wouldn't want to hand-code it. The benefit of the measurement calculus is that it introduces 'patterns', some sort of composable macros that allow you to compose larger algorithms as you would with subroutines. You start off with 1-instruction patterns and grow larger patterns from there. Instead of an assembler-like instruction sequence, it is also common to write the program down as a graph: input .........
\--> ( E ) ---> (M:0) v
(N) ---> ( ) ------------> (X) ---> output where full arrows are qubit dependencies and the dotted arrow is a 'signal' dependency. The following is the same Hadamard example expressed in a little programming tool as I would imagine a 'quantum programmer' would use. edit: (adding relation with 'classical' computers) Classical computers are still really efficient in what they do best, and so the vision is that quantum computers will be used to off-load certain algorithms, analogous to how current computer offloads graphics to a GPU. As you have seen above, the CPU would control the quantum computer by sending it an instruction stream and read back the measurement results from the boolean 'signals'. This way you have a strict separation of classical control by the CPU and quantum state and effects on the quantum computer. For example, I'm going to use my quantum co-processor to calculate a random boolean or cointoss. Classical computers are deterministic, so its bad at returning a good random number. Quantum computers are inherently probabilistic though, all I have to do to get a random 0 or 1 is to measure out a equally-balanced qubit. The communication between the CPU and 'QPU' would look something like this: qrand() N 1; M 1 0;
==> | CPU | ------------> | QPU | ==> { q1 } , []
start()
| | ------------> | | ==> { } , [q1: 0]
read(q1)
| | ------------> | |
q1: 0
0 | | <----------- | |
<== Where { ... } is the QPU's quantum memory containing qubits and [...] is its classical (signal) memory containing booleans.
1. Danos et al. The Measurement Calculus. arXiv, quant-ph (2007).
2. Raussendorf and Briegel. A one-way quantum computer. Physical Review Letters (2001) vol. 86 (22) pp. 5188-5191. | {
"source": [
"https://cstheory.stackexchange.com/questions/9381",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/6365/"
]
} |
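The four-instruction example in the answer above (N 2; E 1 2; M 1 0; X 2 1) can be checked numerically with a few lines of NumPy, under the usual measurement-calculus conventions that N prepares a qubit in |+> and E is a controlled-Z. This is only a sanity check of the stated effect, namely that qubit 2 ends up in the Hadamard of the input, not an implementation of the calculus.

# Numerical check: N 2; E 1 2; M 1 0; X 2 1 applies a Hadamard to qubit 1's state.
import numpy as np

rng = np.random.default_rng(1)

# random input state a|0> + b|1> for qubit 1
psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
psi /= np.linalg.norm(psi)

plus = np.array([1, 1]) / np.sqrt(2)          # N 2: fresh qubit in |+>
state = np.kron(psi, plus)                    # joint state, qubit 1 is the left factor
CZ = np.diag([1, 1, 1, -1])
state = CZ @ state                            # E 1 2

# M 1 0: measure qubit 1 in the |+>, |-> basis (angle zero)
bra_plus = np.array([1, 1]) / np.sqrt(2)
bra_minus = np.array([1, -1]) / np.sqrt(2)
branch_plus = np.kron(bra_plus, np.eye(2)) @ state    # qubit-2 state, signal 0
branch_minus = np.kron(bra_minus, np.eye(2)) @ state  # qubit-2 state, signal 1

X = np.array([[0, 1], [1, 0]])
out0 = branch_plus / np.linalg.norm(branch_plus)        # signal 0: no correction
out1 = X @ branch_minus / np.linalg.norm(branch_minus)  # signal 1: X 2 1

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
target = H @ psi
# both measurement branches agree with H|psi> up to a global phase
for out in (out0, out1):
    print(round(abs(np.vdot(target, out)), 6))   # 1.0 in both cases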
9,500 | The definition of Ramsey numbers is the following: Let $R(a,b)$ be a positive number such that every graph of order at least $R(a,b)$ contains either a clique on $a$ vertices or a stable set on $b$ vertices. I am working on some extension of Ramsey Numbers. While the study has some theoretical interest, it would be important to know the motivation of these numbers. More specifically I am wondering the (theoretical or practical) applications of Ramsey numbers. For instance, are there any solution methodology for a real life problem that uses Ramsey numbers? Or similarly, are there any proofs of some theorems based on Ramsey numbers? | Applications of Ramsey theory to CS , Gasarch | {
"source": [
"https://cstheory.stackexchange.com/questions/9500",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/2296/"
]
} |
9,673 | I would like to know why for the recognition of context-free languages only non-deterministic push-down automata (DPA=NPDA) work. Why do deterministic push-down automata (DPDA) not recognize such languages? | I'm not quite sure which flavour of "why" you are looking for. One reason for the increase in power when allowing nondeterminism can be seen in the following example: Let $L$ be the set of palindromes $w\bar{w}$ over some alphabet (of at least two symbols), where $\bar{w}$ is the reverse of $w$. An NPDA for this language can just keep pushing symbols onto its stack, and then at some point guess that it has reached the middle of the input and gradually empty the stack. Note that the acceptance condition is purely existential - it is enough that there is a correct guess for the word to be accepted. A deterministic PDA would have to choose the position it considers the middle in some way that only depends on the current prefix. Assume $A$ is such a DPDA. For any $k\in\mathbb{N}$, let $u_k=ab^{2k}a$; let $v_0$ be the empty word, and $v_{k+1} = v_ku_kv_k$. This is a sequence of palindromes, each a prefix of the next, so that $A$ must be in an accepting state $q_k$, with the stack empty, after reading $v_k$. By the pigeon hole principle, there must be some $k,l$ such that $k\neq l$ and $q_k=q_l$ (there is a finite number of states, and so some must be 'reused' as there are an infinite number of $k$s). But then $A$ cannot distinguish $v_ku_kv_k$, which is a palindrome, from $v_lu_kv_k$, which isn't. | {
"source": [
"https://cstheory.stackexchange.com/questions/9673",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/7885/"
]
} |
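The "guess the middle" nondeterminism in the answer above can be simulated by brute force: each split position below corresponds to one branch of the NPDA, which pushes the first part of the word and then pops while comparing, and the word is accepted iff some branch succeeds, which is exactly the existential acceptance condition the answer emphasizes.

# Simulating the NPDA for { w w^R } by trying every guess for the middle.

def accepts(word):
    return any(word[:i] == word[i:][::-1] for i in range(len(word) + 1))

print(accepts("abba"))    # True: the branch that guesses the middle at i = 2 accepts
print(accepts("abab"))    # False: no branch accepts
print(accepts("abccba"))  # True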
9,731 | I don't think I understand type classes. I'd read somewhere that thinking of type classes as "interfaces" (from OO) that a type implements is wrong and misleading.
The problem is that I'm having trouble seeing them as something different, and understanding why that view is wrong. For example, if I have a type class (in Haskell syntax) class Functor f where
fmap :: (a -> b) -> f a -> f b How is that different than the interface [1] (in Java syntax) interface Functor<A> {
<B> Functor<B> fmap(Function<B, A> fn)
}
interface Function<Return, Argument> {
Return apply(Argument arg);
} One possible difference I can think of is that the type class implementation used at a certain invocation is not specified but rather determined from the environment -- say, examining available modules for an implementation for this type. That seems to be an implementation artifact that could be addressed in an OO language; like the compiler (or runtime) could scan for a wrapper/extender/monkey-patcher that exposes the necessary interface on the type. What am I missing? [1] Note the f a argument has been removed from fmap since given it's an OO language, you'd be calling this method on an object. This interface assumes the f a argument has been fixed. | In their basic form, type classes are somewhat similar to object interfaces. However, in many respects, they are much more general. Dispatch is on types, not values. No value is required to perform it. For example, it is possible to do dispatch on the result type of function, as with Haskell's Read class: class Read a where
readsPrec :: Int -> String -> [(a, String)]
... Such dispatch is clearly impossible in conventional OO. Type classes naturally extend to multiple dispatch, simply by providing multiple parameters: class Mul a b c where
(*) :: a -> b -> c
instance Mul Int Int Int where ...
instance Mul Int Vec Vec where ...
instance Mul Vec Vec Int where ... Instance definitions are independent from both class and type definitions, which makes them more modular. A type T from module A can be retrofitted to a class C from module M2 without modifying the definition of either, simply by providing an instance in module M3. In OO, this requires more esoteric (and less OO-ish) language features like extension methods. Type classes are based on parametric polymorphism, not subtyping. That enables more accurate typing. Consider e.g. pick :: Enum a => a -> a -> a
pick x y = if fromEnum x == 0 then y else x vs. pick(x : Enum, y : Enum) : Enum = if x.fromEnum() == 0 then y else x In the former case, applying pick '\0' 'x' has type Char , whereas in the latter case, all you'd know about the result would be that it's an Enum. (This is also the reason why most OO languages these days integrate parametric polymorphism.) Closely related is the issue of binary methods. They are completely natural with type classes: class Ord a where
(<) :: a -> a -> Bool
...
min :: Ord a => a -> a -> a
min x y = if x < y then x else y With subtyping alone, the Ord interface is impossible to express. You need a more complicated, recursive form or parametric polymorphism called "F-bounded quantification" to do it accurately. Compare Java's Comparable and its use: interface Comparable<T> {
int compareTo(T y);
};
<T extends Comparable<T>> T min(T x, T y) {
if (x.compareTo(y) < 0)
return x;
else
return y;
} On the other hand, subtyping-based interfaces naturally allow the formation of heterogeneous collections, e.g. a list of type List<C> can contain members that have various subtypes of C (although it is not possible to recover their exact type, except by using downcasts). To do the same based on type classes, you need existential types as an additional feature. | {
"source": [
"https://cstheory.stackexchange.com/questions/9731",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/1682/"
]
} |
9,896 | Given $n$ subsets $S_1,\ldots,S_n$ of $\{1,\ldots,d\}$. Check whether there are sets $S_i,S_j$ with $S_i \subsetneq S_j$. (If so, find an example, if not, simply say "no") The trivial solution to this problem goes through all pairs of sets and checks inclusion for a pair in time $O(d)$, so the overall runtime is $O(n^2 d)$. Can this problem be solved faster? Is there a name for it in the literature? | You cannot solve it in $O(n^{2-\epsilon})$ time for any constant $\epsilon>0$ unless the Strong Exponential Time Hypothesis is false. That is, if we had such an algorithm, we could solve $n$-variable CNF Satisfiability in $O((2-\epsilon')^{n})$ time for some $\epsilon'>0$.
The reason is that we could divide the variables into two equal parts $P_1$ and $P_2$ of $n/2$ variables each. For each part we construct
families $F_1$ and $F_2$, respectively, of subsets of the clauses in the following way. For each assignment we add a subset consisting of the
clauses not satisfied by the assignment. This construction runs in $poly(n)2^{n/2}$ time. To finish the construction, we note that the original CNF instance has a solution iff there is a subset in $F_1$ which is disjoint from some subset in $F_2$. Adding some extra elements to your ground set in addition to the ones for each clause, it is not too hard to embed this disjointness problem as a question of
set inclusion. You basically take the complements of the subsets in $F_1$. To make sure no two sets in $F_1$ are counted as an inclusion, you add a code from an anti-chain
on the extra elements. Another anti-chain code (on other extra elements of the ground set) is used on the subsets of $F_2$ to make sure no pair of subsets from $F_2$ forms an inclusion.
Finally, all sets formed from $F_1$ include all elements of $F_2$'s anti-chain codes. This is a set inclusion question on $2^{n/2+1}$ subsets on a $d=poly(n)$ ground set. The argument basically goes back to some early paper of Ryan Williams (can't remember which). | {
"source": [
"https://cstheory.stackexchange.com/questions/9896",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/8096/"
]
} |
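For comparison with the conditional lower bound in the answer above, here is the trivial $O(n^2 d)$ algorithm from the question, with each set stored as an integer bitmask so that a single pairwise test costs only a few word operations.

# Brute-force search for a proper inclusion S_i ⊊ S_j among the given sets.

def find_proper_inclusion(sets, d):
    masks = [sum(1 << (e - 1) for e in s) for s in sets]
    for i in range(len(masks)):
        for j in range(len(masks)):
            if i != j and masks[i] != masks[j] and (masks[i] & ~masks[j]) == 0:
                return i, j   # sets[i] is properly contained in sets[j]
    return None

family = [{1, 2}, {2, 3}, {1, 2, 4}, {3}]
print(find_proper_inclusion(family, 4))   # (0, 2): {1, 2} is a proper subset of {1, 2, 4}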
9,969 | I'm working on a problem set for a class, and thought of a question related to what I was working on. Is there a minimum number of states that a finite automaton must have in order to accept binary strings that represent numbers divisible by an integer n? In an earlier problem set, I was able to construct a DFA that accepted binary strings divisible by 3 with 3 states. Is this a coincidence, or is there something inherent to the general problem of detecting strings divisible by n that suggests a minimum number of states? I promise this will not answer a homework question for me! :) | There is a known formula for minimum number of states for such a finite automaton. This depends on $n$ as well as the radix $R$ of the underlying positional representation. If $n$ is coprime to $R$, then the minimal number of states is $n$. However, when $n$ shares a factor with the radix then the situation is rather complicated. See Mathematica Journal Vol 3 Issue 11. "Divisibility and State Complexity" by Klaus Sutner. | {
"source": [
"https://cstheory.stackexchange.com/questions/9969",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/8146/"
]
} |
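Concretely, the n-state automaton behind the answer above just tracks the remainder of the prefix read so far (most significant digit first): the transition is $(q, d) \mapsto (qR + d) \bmod n$. Below is a small sketch, with a check against ordinary arithmetic for divisibility by 3 in binary; by the cited result this automaton is already minimal whenever $\gcd(n, R) = 1$.

# Divisibility-by-n DFA over base-R digit strings, read most significant digit first.

def make_divisibility_dfa(n, R=2):
    delta = {(q, d): (q * R + d) % n for q in range(n) for d in range(R)}
    return delta   # start state 0, single accepting state 0

def accepts(delta, word):
    q = 0
    for ch in word:
        q = delta[(q, int(ch))]
    return q == 0

dfa = make_divisibility_dfa(3)
for value in range(10):
    word = format(value, 'b')
    assert accepts(dfa, word) == (value % 3 == 0)
print("the 3-state binary DFA agrees with arithmetic on 0..9")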
10,045 | Inspired by this question , what are the major problems and existing solutions which needs improvement in (theoretical) distributed systems domain. Something like membership protocols, data consistency? | See, for instance, Eight open problems in distributed computing . | {
"source": [
"https://cstheory.stackexchange.com/questions/10045",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/1203/"
]
} |
10,104 | I was just recently having a discussion about Turing Machines when I was asked, "Is the Turing Machine derived from automata, or is it the other way around"? I didn't know the answer of course, but I'm curious to find out. The Turing Machine is basically a slightly more sophisticated version of a Push-Down Automata. From that I would assume that the Turing Machine was derived from automata, however I have no definitive proof or explanation. I might just be plain wrong... perhaps they were developed in isolation. Please! Free this mind from everlasting tangents of entanglement. | Neither! The best way to see this independence is to read the original papers . Turing's 1936 paper introducing Turing machines does not refer to any simpler type of (abstract) finite automaton. McCulloch and Pitts' 1943 paper introducing "nerve-nets", the precursors of modern-day finite-state machines, proposed them as simplified models of neural activity, not computation per se. For an interesting early perspective, see the 1953 survey by Claude Shannon , which has an entire section on Turing machines, but says nothing about finite automata as we would recognize them today (even though he cites Kleene's 1951 report). Modern finite automata arguably start with a 1956 paper of Kleene , originally published as a RAND technical report in 1951, which defined regular expressions. Kleene was certainly aware of Turing's results, having published similar results himself (in the language of primitive recursive functions) at almost the same time. Nevertheless, Kleene's only reference to Turing is an explanation that Turing machines are not finite automata, because of their unbounded tapes. It's of course possible that Kleene's thinking was influenced by Turing's abstraction, but Kleene's definitions appear (to me) to be independent. In the 1956 survey volume edited by Shannon and McCarthy , in which both Kleene's paper on regular experssions and Moore's paper on finite-state transducers were finally published, finite automata and Turing machines were discussed side by side, but almost completely independently. Moore also cites Turing, but only in a footnote stating that Turing machines aren't finite automata. ( A recent paper of Kline recounts the rather stormy history of this volume and the associated Dartmouth conference, sometimes called the "birthplace of AI".) (An even earlier version of neural nets is found in Turing's work on "type B machines", as reprinted in the book "The essential Turing", from about 1937 I think. It seems likely that many people were playing with the idea at the time, as even today many CS undergrads think they have "invented" it at some point in their studies before discovering its history.) | {
"source": [
"https://cstheory.stackexchange.com/questions/10104",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/8290/"
]
} |
10,365 | Guessing it's unlikely a common question, but wondering if anyone has seen material that was clearly made to address this audience in a meaningful way. | Computer Science Unplugged addresses kids (and teachers) in primary school. | {
"source": [
"https://cstheory.stackexchange.com/questions/10365",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/1734/"
]
} |
10,407 | I'm looking for an algorithm to merge two binary search trees of arbitrary size and range. The obvious way I would go about implementing this would be to find entire subtrees whose range can fit into an arbitrary external node in the other tree. However, the worst case running time for this type of algorithm seems to be on the order of O(n+m) where n and m are the size of each tree respectively. However, I've been told that this could be done in O(h) , where h is the height of the tree with the larger height. And I'm completely lost on how this is possible. I've tried experimenting with rotating one the trees first, but rotating a tree into a spine is already O(h). | In ArXiv:1002.4248 , John Iacono and Özgür Özkan describe a relatively simple algorithm to merge two binary search trees in $O(\log^2 n)$ amortized time; the analysis is the hard part. [ Update: As Joe correctly observes in his answer, this algorithm is due to Brown and Tarjan.] They also describe a more complicated dictionary data structure, based on biased skip lists, that supports merges in $O(\log n)$ amortized time. On the other hand, a worst-case bound of $O(\log n)$ is impossible. Consider two binary search trees with $n$ nodes, one storing the even integers between $2$ and $2n$, the other storing the odd integers between $1$ and $2n-1$. Merging the two trees creates a new binary search tree storing all integers between $1$ and $2n$. In any such tree, a constant fraction of the nodes have different parity than their parents. (Proof: The parent of an odd leaf must be even.) Thus, merging the even and odd trees requires changing $\Omega(n)$ pointers. | {
"source": [
"https://cstheory.stackexchange.com/questions/10407",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/8555/"
]
} |
10,594 | Might anyone be able to explain the difference between: Algebraic Datatypes (which I am fairly familiar with) Generalized Algebraic Datatypes (what makes them generalized?) Inductive Types (e.g. Coq) (Especially inductive types.) Thank you. | Algebraic data types let you define types recursively. Concretely, suppose we have the datatype $$
\mathsf{data\;list = Nil \;\;|\;\; Cons\;of\;\mathbb{N} \times list}
$$ What this means is that $\mathsf{list}$ is the smallest set generated by the $\mathsf{Nil}$ and $\mathsf{Cons}$ operators. We can formalize this by defining the operator $F(X)$ $$
F(X) == \{ \mathsf{Nil} \} \cup \{ \mathsf{Cons}(n, x) \;|\; n \in \mathbb{N} \land x \in X \}
$$ and then defining $\mathsf{list}$ as $$
\mathsf{list} = \bigcup_{i \in \mathbb{N}} F^i(\emptyset)
$$ A generalized ADT is what we get when we define a type operator recursively. For example, we might define the following type constructor: $$
\mathsf{bush}\;a = \mathsf{Leaf\;of\;}a \;\;|\;\; \mathsf{Nest\;of\;bush}(a \times a)
$$ This type means that an element of $\mathsf{bush\;}a$ is a tuple of $a$s of length $2^n$ for some $n$, since each time we go into the $\mathsf{Nest}$ constructor the type argument is paired with itself. So we can define the operator we want to take a fixed point of as: $$
F(R) = \lambda X.\; \{ \mathsf{Leaf}(x) \;|\; x \in X\} \cup \{ \mathsf{Nest}(v) \;|\; v \in R(X) \}
$$ An inductive type in Coq is essentially a GADT, where the indexes of the type operator are not restricted to other types (as in, for example, Haskell), but can also be indexed by values of the type theory. This lets you give types for length-indexed lists, and so on. | {
"source": [
"https://cstheory.stackexchange.com/questions/10594",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/6696/"
]
} |
10,728 | Consider the following model: an n-bit string r=r 1 ...r n is chosen uniformly at random. Next, each index i∈{1,...,n} is put into a set A with independent probability 1/2. Finally, an adversary is allowed, for each i∈A separately, to flip r i if it wants to. My question is this: can the resulting string (call it r') be used by an RP or BPP algorithm as its only source of randomness? Assume that the adversary knows in advance the entire BPP algorithm, the string r, and the set A, and that it has unlimited computation time. Also assume (obviously) that the BPP algorithm knows neither the adversary's flip decisions nor A. I'm well-aware that there's a long line of work on precisely this sort of question, from Umesh Vazirani's work on semi-random sources (a different but related model), to more recent work on extractors, mergers, and condensers. So my question is simply whether any of that work yields the thing I want! The literature on weak random sources is so large, with so many subtly-different models, that someone who knows that literature can probably save me a lot of time. Thanks in advance! | What you need is a "seeded extractor" with the following parameters: seed of length $O(\log n)$, crude randomness $n/2$, and output length $n^{\Omega(1)}$. These are known. While I'm not up to date with the most recent surveys, I believe that section 3 of Ronen's survey is enough. The only thing you will need to show is that your source has sufficient "min-entropy", i.e. no n-bit string gets a probability of more than $2^{-n/2}$, which I think is clear in your setting. | {
"source": [
"https://cstheory.stackexchange.com/questions/10728",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/1575/"
]
} |
10,829 | Many years ago I heard that computing the minimal NFA (nondeterministic finite automaton) from a DFA (deterministic) was an open question, as opposed to the vice versa direction which has been known for decades and is well researched with an efficient $O(n \lg n)$ algorithm. Has anyone come up with an algorithm? A quick search gave me this paper that proves that its definitely a hard problem. Apparently, no algorithm is given. [1] Minimal NFA problems are hard / Tao Jiang and B. Ravikumar I was reminded of this problem by the following question on the CS.SE site for which a DFA->NFA minimization algorithm would be closely related. This following question seems to me to be research level. I suggested migrating it to TCS and I wrote an answer suggesting a statistical/empirical attack. [2] What are the conditions for a NFA for its equivalent DFA to be maximal in size? | This is really a stubborn -- and well-studied -- problem. Regarding positive results, an exact algorithm by Kameda and Weiner, a heuristic approach by Polák, and a recent approach using SAT solvers by Geldenhuys et al. come to mind. But there seem to be far more negative results ruling out other possible approaches (e.g. approximation algorithms, special cases, less powerful models of NFAs, ...) See below for some references. T. Kameda and P. Weiner. On the state minimization of nondeterministic finite automata. IEEE Transactions on Computers, C-19(7):617–627, 1970. A. Malcher. Minimizing finite automata is computationally hard. Theoretical Computer Science 327:375-390, 2004. L. Polák. Minimalizations of NFA using the universal automaton. International Journal of Foundations of Computer Science, 16(5):999–1010, 2005. G. Gramlich and G. Schnitger. Minimizing NFAs and Regular Expressions. Symposium on Theoretical Aspects of Computer Science (STACS 2005), LNCS 3404, pp. 399–411. H. Gruber and M. Holzer. Inapproximability of nondeterministic state
and transition complexity assuming P <> NP. Developments in Language Theory (DLT 2007), LNCS 4588, pp. 205–216. H. Gruber and M. Holzer. Computational complexity of NFA minimization for finite and unary languages. Language and Automata Theory and Applications (LATA 2007), pp. 261–272. H. Björklund and W. Martens. The tractability frontier for NFA minimization. International Colloquium on Automata, Languages and Programming (ICALP 2008), LNCS 5126, pp. 27–38. J. Geldenhuys, B. van der Merwe, L. van Zijl: Reducing Nondeterministic Finite Automata with SAT Solvers. Finite-State Methods and Natural Language Processing (FSMNLP 2009), LNCS 6062, 81–92. EDIT (June 8, 2015) Update: The following paper presents a heuristic algorithm for reducing the size of nondeterministic Büchi automata, along with experiments on random automata. As they state in the conclusion, their method applies to NFAs as well: "While we presented our methods in the context of Büchi automata, most of them trivially carry over to the simpler case of automata over finite words." Richard Mayr, Lorenzo Clemente. Advanced Automata Minimization. POPL 2013. Extended Technical Report EDI-INF-RR-1414. Their command-line tool Reduce v1.2 can be invoked with the option "-finite" for reducing a given NFA. The implementation is open source and released under the GNU General Public License. | {
"source": [
"https://cstheory.stackexchange.com/questions/10829",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/7884/"
]
} |
10,837 | The AND&OR gate is a gate which is given two inputs and returns their AND and their OR. Are circuits made only out of the AND&OR gate, without fanout, capable of doing arbitrary computations? More precisely, is polynomial time computation logspace reducible to AND&OR circuits? My motivation for this problem is rather strange. As described here, this question is important for computation inside the computer game Dwarf Fortress. | If I don't misunderstand what you mean by AND&OR gate, it is basically a comparator gate which takes two input bits $x$ and $y$ and produces two output bits $x\wedge y$ and $x\vee y$. The two output bits $x\wedge y$ and $x\vee y$ are basically $\min(x,y)$ and $\max(x,y)$. Comparator circuits are built by composing these comparator gates together, allowing no fan-outs other than the two outputs produced by each gate. Thus, we can draw comparator circuits with wire-and-gate notation similar to that used for sorting networks. We can define the comparator circuit value problem (CCV) as follows: given a comparator circuit with specified Boolean inputs, determine the output value of a designated wire. By taking the closure of this CCV problem under logspace reductions, we get the complexity class CC, whose complete problems include natural problems like lex-first maximal matching, stable marriage, and stable roommate. In this recent paper, Steve Cook, Yuval Filmus and I showed that even when we use AC$^0$ many-one closure, we still get the same class CC. To the best of our knowledge at this point, NL $\subseteq$ CC $\subseteq$ P. In our paper, we provided evidence that CC and NC are incomparable (so that CC is a proper subset of P), by giving oracle settings where relativized CC and relativized NC are incomparable. We also gave evidence that CC and SC are incomparable.
"source": [
"https://cstheory.stackexchange.com/questions/10837",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/1858/"
]
} |
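For concreteness, a minimal sketch of evaluating a comparator circuit as described in the answer above: each gate overwrites its two wires with their AND and OR, so no value is ever fanned out, and the evaluator answers the CCV question for a designated wire. The three-wire example circuit is made up for illustration.

```python
# Evaluate a comparator circuit gate by gate (illustrative sketch).

def apply_comparator(wires, i, j):
    """Replace (wires[i], wires[j]) by (wires[i] AND wires[j], wires[i] OR wires[j])."""
    a, b = wires[i], wires[j]
    wires[i], wires[j] = a & b, a | b   # min(a, b), max(a, b)

def evaluate(inputs, gates, target):
    """inputs: list of 0/1 wire values; gates: (i, j) pairs applied left to right;
    target: index of the designated output wire (the CCV question)."""
    wires = list(inputs)
    for i, j in gates:
        apply_comparator(wires, i, j)
    return wires[target]

# Example: three wires, two gates; ask for the final value of wire 2.
print(evaluate([1, 0, 1], [(0, 1), (1, 2)], target=2))  # -> 1
```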
10,916 | I'm a software practitioner and I'm writing a survey on algebraic structures for personal research and am trying to produce examples of how these structures are used in theoretical computer science (and to a lesser degree, other sub-fields of computer science). Under group theory I've come across syntactic monoids for formal languages and trace and history monoids for parallel/concurrent computing. From a ring theory standpoint, I've come across semiring frameworks for graph processing and semiring based parsing. I have yet to find any uses of algebraic structures from module theory in my research (and would like to). I'm assuming that there are further examples and that I'm just not looking in the right place to find them. What are some other examples of algebraic structures from the domains listed above that are commonly found in theoretical computer science (and other sub-fields of computer science)? Alternatively, what journals or other resources can you recommend that might cover these topics? | My impression is that, by and large, traditional algebra is rather too specific for use in Computer Science. So Computer Scientists either use weaker (and, hence, more general) structures, or generalize the traditional structures so that they can fit them to their needs. We also use category theory a lot , which mathematicians don't think of as being part of algebra, but we don't see why not. We find the regimentation of traditional mathematics into "algebra" and "topology" as separate branches inconvenient, even pointless, because algebra is generally first-order whereas topology has a chance of dealing with higher-order aspects. So, the structures used in Computer Science have algebra and topology mixed in. In fact, I would say they tend more towards topology than algebra. Regimentation of reasoning into "algebra" and "logic" is another pointless division from our point of view, because algebra deals with equational properties whereas logic deals with all other kinds of properties as well. Coming back to your question, semigroups and monoids are used quite intensely in automata theory. Eilenberg has written a 2-volume collection , the second of which is almost entirely algebra. I am told that he was planning four volumes but his age did not allow the project to be finished. Jean-Eric Pin has a modernized version of a lot of this content in an online book . Automata are "monoid modules" (also called monoid actions or "acts"), which are at the right level of generality for Computer Science. Traditional ring modules are probably too specific. Lattice theory was a major force in the development of denotational semantics. Topology was mixed into lattice theory when Computer Scientists, jointly with mathematicians, developed continuous lattices and then generalized them to domains . I would say that domain theory is Computer Scientists' own mathematics, which traditional mathematics has no knowledge of. Universal algebra is used for defining algebraic specifications of data types . Having gotten there, Computer Scientists immediately found the need to deal with more general properties: conditional equations (also called equational Horn clauses) and first-order logic properties, still using the same ideas of universal algebra. As you would note, algebra now merges into model theory. Category theory is the foundation for type theory. 
As Computer Scientists keep inventing new structures to deal with various computational phenomena, category theory is a very comforting framework in which to place all these ideas. We also use structures that are enabled by category theory, which don't have existence in "traditional" mathematics, such as functor categories. Also, algebra comes back into the picture from a categorical point of view in the use of monads and algebraic theories of effects . Coalgebras , which are the duals of algebras, also find a lot of application. So, there is a wide-ranging application of "algebra" in Computer Science, but it is not the kind of algebra found in traditional algebra textbooks. Additional note : There is a concrete sense in which category theory is algebra. Monoid is a fundamental structure in algebra. It consists of a binary "multiplication" operator that is associative and has an identity. Category theory generalizes this by associating "types" to the elements of the monoid, $a : X \rightarrow Y$. You can "multiply" the elements only when the types match: if $a : X \rightarrow Y$ and $b : Y \to Z$ then $ab : X \to Z$. For example, $n \times n$ matrices have a multiplication operation making them a monoid. However, $m \times n$ matrices (where $m$ and $n$ could be different) form a category. Monoids are thus special cases of categories that have a single type. Rings are special cases of additive categories that have a single type. Modules are special cases of functors where the source and target categories have a single type. So on. Category theory is typed algebra whose types make it infinitely more applicable than traditional algebra. | {
"source": [
"https://cstheory.stackexchange.com/questions/10916",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/5069/"
]
} |
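A tiny sketch of the remark above that automata are monoid actions ("monoid modules"): words over the alphabet form a monoid under concatenation and act on the state set via the extended transition function. The two-state parity automaton here is a made-up example, not taken from any of the cited texts.

```python
# Words act on states: the extended transition function of a DFA.

delta = {          # single-letter transitions of a 2-state DFA (parity of 'a's)
    ("even", "a"): "odd", ("odd", "a"): "even",
    ("even", "b"): "even", ("odd", "b"): "odd",
}

def act(state, word):
    """The action of the word monoid on the state set."""
    for letter in word:
        state = delta[(state, letter)]
    return state

# Action laws: the empty word acts as the identity, and concatenation of
# words corresponds to composition of their actions.
q, u, v = "even", "ab", "ba"
assert act(q, "") == q
assert act(q, u + v) == act(act(q, u), v)
```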
10,983 | I wonder how to find the girth of a sparse undirected graph. By sparse I mean $|E|=O(|V|)$. By optimum I mean the lowest time complexity. I thought about some modification on Tarjan's algorithm for undirected graphs, but I didn't find good results. Actually I thought that if I could find a 2-connected components in $O(|V|)$, then I can find the girth, by some sort of induction which can be achieved from the first part. I may be on the wrong track, though. Any algorithm asymptotically better than $\Theta(|V|^2)$ (i.e. $o(|V|^2)$) is welcome. | Here's what I know about the girth problem in undirected unweighted graphs.
First of all, if the girth is even, you can determine it in $O(n^2)$ time -- this is an old result of Itai and Rodeh (A. Itai and M. Rodeh. Finding a minimum circuit in a graph. SIAM J. Computing, 7(4):413–423, 1978.). The idea there is: for each vertex in the graph, start a BFS until the first cycle is closed (then stop and move on to the next vertex); return the shortest cycle found. If the girth is even, the shortest cycle found will be the shortest cycle. In particular, if your graph is bipartite this will always compute the girth. If the girth $g$ is odd, however, you'll find a cycle of length $g$ or $g+1$, so you may be off by $1$. Now, the real problem with odd girth is that inevitably your algorithm would have to be able to detect if the graph has a triangle. The best algorithms for that use matrix multiplication: $O(\min\{n^{2.38}, m^{1.41}\})$ time for graphs on $n$ nodes and $m$ edges.
Itai and Rodeh also showed that any algorithm that can find a triangle in dense graphs can also compute the girth, so we have an $O(n^{2.38})$ time girth algorithm. However, the runtime for the girth in sparse graphs is not as good as that for finding triangles. The best we know in general is $O(mn)$. In particular, what seems to be the hardest is to find a $o(n^2)$ time algorithm for graphs with $m=O(n)$. If you happen to care about approximation algorithms, Liam Roditty and I have a recent paper in SODA'12 on that: Liam Roditty, V. Vassilevska Williams: Subquadratic time approximation algorithms for the girth. SODA 2012: 833-845.
There we show that a $2$-approximation can be found in subquadratic time, and some other results concerning additive approximations and extensions. Generally speaking, because of a theorem of Bondy and Simonovits, when you have densish graphs, say on $n^{1+1/k}$ edges, they already contain short even cycles, say roughly $2k$. So the denser the graph is, the easier it is to find a good approximation to the girth. When the graph is very sparse, the girth can be essentially arbitrarily large. | {
"source": [
"https://cstheory.stackexchange.com/questions/10983",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/1846/"
]
} |
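A minimal sketch of the BFS scan attributed to Itai and Rodeh in the answer above, assuming the graph is given as an adjacency-list dict (the implementation details are mine, not the paper's). As the answer notes, the minimum over all start vertices equals the girth when the girth is even and may overshoot by 1 when it is odd, so the function is named as an upper bound.

```python
from collections import deque

def girth_upper_bound(adj):
    """adj: dict mapping each vertex to an iterable of neighbours (simple graph).
    Returns float('inf') if the graph is acyclic."""
    best = float("inf")
    for s in adj:
        dist, parent = {s: 0}, {s: None}
        queue = deque([s])
        closed = False
        while queue and not closed:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    queue.append(w)
                elif w != parent[u]:            # this edge closes a cycle
                    best = min(best, dist[u] + dist[w] + 1)
                    closed = True               # stop and move on to the next start vertex
                    break
    return best

# 5-cycle: girth 5 (odd), so the scan may report 5 or 6.
c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(girth_upper_bound(c5))
```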
11,363 | In his celebrated paper "Conjugate Coding" (written around 1970), Stephen Wiesner proposed a scheme for quantum money that is unconditionally impossible to counterfeit, assuming that the issuing bank has access to a giant table of random numbers, and that banknotes can be brought back to the bank for verification. In Wiesner's scheme, each banknote consists of a classical "serial number" $s$, together with a quantum money state $|\psi_s\rangle$ consisting of $n$ unentangled qubits, each one either $$|0\rangle,\ |1\rangle,\ |+\rangle=(|0\rangle+|1\rangle)/\sqrt{2},\ \text{or}\ |-\rangle=(|0\rangle-|1\rangle)/\sqrt{2}.$$ The bank remembers a classical description of $|\psi_s\rangle$ for every $s$. And therefore, when $|\psi_s\rangle$ is brought back to the bank for verification, the bank can measure each qubit of $|\psi_s\rangle$ in the correct basis (either $\{|0\rangle,|1\rangle\}$ or ${|+\rangle,|-\rangle}$), and check that it gets the correct outcomes. On the other hand, because of the uncertainty relation (or alternatively, the No-Cloning Theorem), it's "intuitively obvious" that, if a counterfeiter who doesn't know the correct bases tries to copy $|\psi_s\rangle$, then the probability that both of the counterfeiter's output states pass the bank's verification test can be at most $c^n$, for some constant $c<1$. Furthermore, this should be true regardless of what strategy the counterfeiter uses, consistent with quantum mechanics (e.g., even if the counterfeiter uses fancy entangled measurements on $|\psi_s\rangle$). However, while writing a paper about other quantum money schemes, my coauthor and I realized that we'd never seen a rigorous proof of the above claim anywhere, or an explicit upper bound on $c$: neither in Wiesner's original paper nor in any later one. So, has such a proof (with an upper bound on $c$) been published? If not, then can one derive such a proof in a more-or-less straightforward way from (say) approximate versions of the No-Cloning Theorem, or results about the security of the BB84 quantum key distribution scheme? Update: In light of the discussion with Joe Fitzsimons below, I should clarify that I'm looking for more than just a reduction from the security of BB84. Rather, I'm looking for an explicit upper bound on the probability of successful counterfeiting (i.e., on $c$)---and ideally, also some understanding of what the optimal counterfeiting strategy looks like. I.e., does the optimal strategy simply measure each qubit of $|\psi_s\rangle$ independently, say in the basis $$\{ \cos(\pi/8)|0\rangle+\sin(\pi/8)|1\rangle, \sin(\pi/8)|0\rangle-\cos(\pi/8)|1\rangle \}?$$ Or is there an entangled counterfeiting strategy that does better? Update 2: Right now, the best counterfeiting strategies that I know are (a) the strategy above, and (b) the strategy that simply measures each qubit in the $\{|0\rangle,|1\rangle\}$ basis and "hopes for the best." Interestingly, both of these strategies turn out to achieve a success probability of (5/8) n . So, my conjecture of the moment is that (5/8) n might be the right answer. In any case, the fact that 5/8 is a lower bound on c rules out any security argument for Wiesner's scheme that's "too" simple (for example, any argument to the effect that there's nothing nontrivial that a counterfeiter can do, and therefore the right answer is c=1/2). Update 3: Nope, the right answer is (3/4) n ! See the discussion thread below Abel Molina's answer. 
| It seems like this interaction can be modeled in the following way: Alice prepares one of the states $|000\rangle$, $|101\rangle$, $(|0\rangle+|1\rangle)|10\rangle/\sqrt{2}$, $(|0\rangle-|1\rangle)|11\rangle/\sqrt{2}$, according to a certain probability distribution, and sends the first qubit to Bob. Bob performs an arbitrary quantum channel that sends his qubit to two qubits, which are then returned to Alice. Alice performs a projective measurement on the four qubits in her possession. If I am not wrong about this (and sorry if I am), this falls within the formalism from Gutoski and Watrous presented here and here, which implies the following: From Theorem 4.9 in the second of those, it is optimal for Bob to act independently when Alice repeats this process with several qubits in an independent way, if the objective of Bob is to always fool Alice. It is also possible to obtain the value of $c$ from a small semidefinite program. You can find more details of how to obtain this program in Section 3 here. See the comments for the cvx code for the program and its value.
"source": [
"https://cstheory.stackexchange.com/questions/11363",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/1575/"
]
} |
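As a sanity check on the numbers quoted in Update 2 of the question above, here is a small sketch (not part of the answer, and not the semidefinite program it refers to) that computes the per-qubit success probability of the naive strategy that measures each qubit in the computational basis and resends two copies of the outcome; it prints 0.625 = 5/8.

```python
import numpy as np

ket0 = np.array([1.0, 0.0]); ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2); minus = (ket0 - ket1) / np.sqrt(2)
states = [ket0, ket1, plus, minus]          # Wiesner's four single-qubit states

def both_pass_prob(psi):
    # Counterfeiter measures in the computational basis, getting outcome b
    # with probability |<b|psi>|^2, then outputs |b>|b>. Both copies pass
    # iff both project back onto psi under the bank's verification.
    total = 0.0
    for b in (ket0, ket1):
        p_b = abs(np.dot(b, psi)) ** 2           # probability of outcome b
        p_pass_one = abs(np.dot(psi, b)) ** 2    # one copy passes verification
        total += p_b * p_pass_one ** 2           # both independent copies pass
    return total

print(np.mean([both_pass_prob(s) for s in states]))  # 0.625, i.e. (5/8)^n overall
```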
11,425 | Is there an example of a natural problem that's in BPP but that's not known to be in RP or co-RP? | Moved my comment here after Suresh's request. An example of a natural problem for which we only know algorithms that require error on both sides is the following: given three algebraic circuits, decide whether exactly two of them are identical. This comes from the fact that deciding whether two algebraic circuits are identical is in co-RP. Reference: see the post How Many Sides to Your Error? (Dec 2, 2008) about the very same question on Lance Fortnow's blog and the comments below his post for a discussion about the naturalness of the problem. | {
"source": [
"https://cstheory.stackexchange.com/questions/11425",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/15/"
]
} |
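The answer above rests on the fact that testing whether two algebraic circuits are identical is in co-RP. A minimal sketch of the underlying one-sided-error test (random evaluation modulo a large prime, i.e. Schwartz–Zippel); the two example "circuits" are made up for illustration, and the prime and trial count are arbitrary choices.

```python
import random

P = (1 << 61) - 1          # a large prime modulus

def circuit_a(x, y):       # computes (x + y)^2
    s = (x + y) % P
    return (s * s) % P

def circuit_b(x, y):       # computes x^2 + 2xy + y^2
    return (x * x + 2 * x * y + y * y) % P

def probably_identical(c1, c2, trials=20):
    # Identical circuits always agree; distinct low-degree polynomials
    # disagree at a random point with high probability, so a "False" answer
    # is always correct -- the one-sided error that puts the test in co-RP.
    for _ in range(trials):
        x, y = random.randrange(P), random.randrange(P)
        if c1(x, y) != c2(x, y):
            return False    # a witness: definitely not identical
    return True

print(probably_identical(circuit_a, circuit_b))   # True: same polynomial
```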
11,611 | I am interested in the "nearest" (and "most complex") problem to the Collatz conjecture that has been successfully solved (about which Erdos famously said "mathematics is not yet ripe for such problems"). It has been proven that a class of "Collatz-like" problems is undecidable. However, problems that are vaguely similar, such as Hofstadter's MIU game (resolved, but admittedly more of a toy problem), are indeed decidable or have been solved. Related questions Collatz Conjecture & Grammars / Automata | An extended comment: Collatz-like sequences can be computed by small Turing machines having few symbols and states. In "Small Turing machines and generalized busy beaver competition" by P. Michel (2004) (doi and closely related 2019 preprint update), there is a nice table that positions Collatz-like problems between decidable TMs (for which the halting problem is decidable) and Universal TMs. There are TMs that compute Collatz-like sequences for which decidability is still an open problem: $TM(5,2)$, $TM(3,3)$ and $TM(2,4)$ (where $TM(k,l)$ is the set of Turing machines with $k$ states and $l$ symbols). I don't know if the results have been improved. From the conclusion of the paper: ... The present Collatz-like line is already on its lowest possible level, with the possible exception of $TM(4,2)$, but we conjecture that all machines in this set can be proved to be decidable... See also "The complexity of small universal Turing machines: a survey" by D. Woods and T. Neary (2007) (doi). Another example of a Collatz-like problem for which decidability is an open problem is Post's tag system: $\mu = 2$, $v=3$, $0\rightarrow 00$, $1 \rightarrow 1101$; for a recent analysis see "On the boundaries of solvability and unsolvability in tag systems. Theoretical and Experimental Results" by L. De Mol (2009).
"source": [
"https://cstheory.stackexchange.com/questions/11611",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/7884/"
]
} |
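A small simulator for the tag system mentioned at the end of the answer above, reading $\mu = 2$ as the alphabet size and $v = 3$ as the deletion number, which matches Post's classic example with rules $0\rightarrow 00$, $1\rightarrow 1101$ (this reading of the parameters is my assumption). Whether every starting word eventually halts or becomes periodic is the open, Collatz-like question.

```python
# One step: read the first symbol, append its production to the end,
# then delete the first three symbols; halt when the word is too short.

RULES = {"0": "00", "1": "1101"}
DELETION_NUMBER = 3

def run_tag_system(word, max_steps=50):
    trace = [word]
    for _ in range(max_steps):
        if len(word) < DELETION_NUMBER:   # halting condition
            break
        word = word[DELETION_NUMBER:] + RULES[word[0]]
        trace.append(word)
    return trace

for w in run_tag_system("10010"):
    print(w)
```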
11,855 | A lot of pathfinding algorithms have been developed in recent years which can calculate the best path in response to graph changes much faster than A* - what are they, and how do they differ? Are they for different situations, or do some obsolete others? These are the ones I've been able to find so far: D* (1994) Focused D* (1995) DynamicSWSF-FP (1996) LPA (1997) LPA*/Incremental A* (2001) D* Lite (2002) SetA* (2002) HPA* (2004) Anytime D* (2005) PRA* (2005) Field D* (2007) Theta* (2007) HAA* (2008) GAA* (2008) LEARCH (2009) BDDD* (2009 - I cannot access this paper :|) Incremental Phi* (2009) GFRA* (2010) MTD*-Lite (2010) Tree-AA* (2011) I'm not sure which of these apply to my specific problem - I'll read them all if necessary, but it would save me a lot of time if someone could write up a summary. My specific problem: I have a grid with a start, a finish, and some walls. I'm currently using A* to find the best path from the start to the finish. The user will then move one wall, and I have to recalculate the entire path again. The "move-wall/recalculate-path" step happens many times in a row, so I'm looking for an algorithm that will be able to quickly recalculate the best path without having to run a full iteration of A*. Though, I am not necessarily looking for an alteration to A* - it could be a completely separate algorithm. | So, I skimmed through the papers, and this is what I gleaned. If there is anyone more knowledgeable in the subject matter, please correct me if I'm wrong (or add your own answer, and I will accept it instead!). Links to each paper can be found in the question-post, above. Simple recalculations D* (aka Dynamic A*) (1994): On the initial run, D* runs very similarly to A*, finding the best path from start to finish very quickly. However, as the unit moves from start to finish, if the graph changes, D* is able to very quickly recalculate the best path from that unit's position to the finish, much faster than simply running A* from that unit's position again. D*, however, has a reputation for being extremely complex, and has been completely obsoleted by the much simpler D*-Lite. Focused D* (1995): An improvement to D* to make it faster/"more realtime." I can't find any comparisons to D*-Lite, but given that this is older and D*-Lite is talked about a lot more, I assume that D*-Lite is somehow better. DynamicSWSF-FP (1996): Stores the distance from every node to the finish-node. Has a large initial setup to calculate all the distances. After changes to the graph, it's able to update only the nodes whose distances have changed. Unrelated to both A* and D*. Useful when you want to find the distance from multiple nodes to the finish after each change; otherwise, LPA* or D*-Lite are typically more useful. LPA*/Incremental A* (2001): LPA* (Lifelong Planning A*), also known as Incremental A* (and sometimes, confusingly, as "LPA," though it has no relation to the other algorithm named LPA) is a combination of DynamicSWSF-FP and A*. On the first run, it is exactly the same as A*. After minor changes to the graph, however, subsequent searches from the same start/finish pair are able to use the information from previous runs to drastically reduce the number of nodes which need to be examined, compared to A*. This is exactly my problem, so it sounds like LPA* will be my best fit.
LPA* differs from D* in that it always finds the best path from the same start to the same finish; it is not used when the start point is moving (such as units moving along the initial best path) . However... D*-Lite (2002): This algorithm uses LPA* to mimic D*; that is, it uses LPA* to find the new best path for a unit as it moves along the initial best path and the graph changes. D*-Lite is considered much simpler than D*, and since it always runs at least as fast as D*, it has completely obsoleted D*. Thus, there is never any reason to use D*; use D*-Lite instead. Any-angle movement Field D* (2007): A variant of D*-Lite which does not constrain movement to a grid; that is, the best path can have the unit moving along any angle, not just 45- (or 90-)degrees between grid-points. Was used by NASA to pathfind for the Mars rovers. Theta* (2007): A variant of A* that gives better (shorter) paths than Field D*. However, because it is based on A* rather than D*-Lite, it does not have the fast-replanning capabilities that Field D* does. See also . Incremental Phi* (2009): The best of both worlds. A version of Theta* that is incremental (aka allows fast-replanning) Moving Target Points GAA* (2008): GAA* (Generalized Adaptive A*) is a variant of A* that handles moving target points. It's a generalization of an even earlier algorithm called "Moving Target Adaptive A*" GRFA* (2010): GFRA* (Generalized Fringe-Retrieving A*) appears (?) to be a generalization of GAA* to arbitrary graphs (ie. not restricted to 2D) using techniques from another algorithm called FRA*. MTD*-Lite (2010): MTD*-Lite (Moving Target D*-Lite) is "an extension of D* Lite that uses the principle behind Generalized Fringe-Retrieving A*" to do fast-replanning moving-target searches. Tree-AA* (2011): (???) Appears to be an algorithm for searching unknown terrain, but is based on Adaptive A*, like all other algorithms in this section, so I put it here. Not sure how it compares to the others in this section. Fast/Sub-optimal Anytime D* (2005): This is an "Anytime" variant of D*-Lite, done by combining D*-Lite with an algorithm called Anytime Repairing A* . An "Anytime" algorithm is one which can run under any time constraints - it will find a very suboptimal path very quickly to begin with, then improve upon that path the more time it is given. HPA* (2004): HPA* (Hierarchical Path-Finding A*) is for path-finding a large number of units on a large graph, such as in RTS (real-time strategy) video games. They will all have different starting locations, and potentially different ending locations. HPA* breaks the graph into a hierarchy in order to quickly find "near-optimal" paths for all these units much more quickly than running A* on each of them individually. See also PRA* (2005): From what I understand, PRA* (Partial Refinement A*) solves the same problem as HPA*, but in a different way. They both have "similar performance characteristics." HAA* (2008): HAA* (Hierarchical Annotated A*) is a generalization of HPA* that allows for restricted traversal of some units over some terrains (ex. a small pathway that some units can walk through but larger ones can't; or a hole that only flying units can cross; etc.) Other/Unknown LPA (1997): LPA (Loop-free path-finding algorithm) appears to be a routing-algorithm only marginally related to the problems the other algorithms here solve. I only mention it because this paper is confusingly (and incorrectly) referenced on several places on the Internet as the paper introducing LPA*, which it is not. 
LEARCH (2009): LEARCH is a combination of machine-learning algorithms, used to teach robots how to find near-optimal paths on their own. The authors suggest combining LEARCH with Field D* for better results. BDDD* (2009): ??? I cannot access the paper. SetA* (2002): ??? This is, apparently, a variant of A* that searches over the "binary decision diagram" (BDD) model of the graph? They claim that it runs "several orders of magnitude faster than A*" in some cases. However, if I'm understanding correctly, those cases are when each node on the graph has many edges? Given all this, it appears that LPA* is the best fit for my problem. | {
"source": [
"https://cstheory.stackexchange.com/questions/11855",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/8532/"
]
} |
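For reference, a compact sketch of the plain A*-on-a-grid baseline that the question above re-runs after every wall move (4-connected grid, Manhattan heuristic; the grid encoding and names are made up). LPA*, the answer's recommendation, keeps the same overall structure but reuses cost information between runs instead of starting from scratch.

```python
import heapq

def astar(walls, start, goal, width, height):
    """Plain A* on a 4-connected grid with unit edge costs."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start, None)]
    parents, g_cost = {}, {start: 0}
    while open_heap:
        _, g, node, parent = heapq.heappop(open_heap)
        if node in parents:
            continue                      # already expanded with a better g
        parents[node] = parent
        if node == goal:                  # reconstruct the path backwards
            path = [node]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in walls:
                if g + 1 < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + 1
                    heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt, node))
    return None                           # no path

walls = {(1, 1), (1, 2)}
print(astar(walls, (0, 0), (3, 3), width=4, height=4))
```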
12,162 | I remember I might have encountered references to problems that have been proven to be solvable with a particular complexity, but with no known algorithm to actually reach this complexity. I struggle wrapping my mind around how this can be the case, or what a non-constructive proof for the existence of an algorithm would look like. Do there actually exist such problems? Do they have a lot of practical value? | Consider the function (taken from here) $\qquad \displaystyle f(n) = \begin{cases} 1 & 0^n \text{ occurs in the decimal representation of } \pi \\ 0 & \text{else}\end{cases}$ Despite appearances, $f$ is computable by the following argument. Either $0^n$ occurs for every $n$ or there is a $k$ so that $0^k$ occurs but $0^{k+1}$ does not. We do not know which it is (yet), but we know that $f \in F = \{f_\infty, f_0, f_1, \dots \}$ with $f_\infty(n) = 1$ and $f_k(n) = [n \leq k]$. Since $F \subset \mathsf{RE}$, $f$ is computable -- but we cannot say which of these functions $f$ is.
"source": [
"https://cstheory.stackexchange.com/questions/12162",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/326/"
]
} |
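To underline the argument above: every candidate for $f$ is a trivially programmable function; the non-constructive part is only that we do not know which of these programs computes $f$. A throwaway sketch:

```python
# Each candidate in F is easy to implement; we just cannot point to the one
# that equals f.

def f_infinity(n):          # candidate if every run 0^n occurs in pi
    return 1

def make_f_k(k):            # candidate if 0^k occurs but 0^(k+1) does not
    return lambda n: 1 if n <= k else 0

candidates = [f_infinity] + [make_f_k(k) for k in range(5)]
print([c(3) for c in candidates])   # every candidate is easy to evaluate
```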
12,377 | Paul Wegner and Dina Goldin have for over a decade been publishing papers and books arguing primarily that the Church-Turing thesis is often misrepresented in the CS Theory community and elsewhere. That is, it is presented as encompassing all computation when in fact it applies only to computation of functions, which is a very small subset of all computation. Instead they suggest we should seek to model interactive computation, where communication with the outside world happens during the computation. The only critique I have seen of this work is in the Lambda the Ultimate forum, where somebody lamented these authors for continually publishing what is obviously known. My question then is, is there any more critique into this line of thinking, and in particular their Persistent Turing Machines. And if not, why then is it seemingly studied very little (I could be mistaken). Lastly, how does the notion of universality translate to an interactive domain. | Here's my favorite analogy. Suppose I spent a decade publishing books and papers arguing that, contrary to theoretical computer science's dogma, the Church-Turing Thesis fails to capture all of computation, because Turing machines can't toast bread . Therefore, you need my revolutionary new model, the Toaster-Enhanced Turing Machine (TETM), which allows bread as a possible input and includes toasting it as a primitive operation. You might say: sure, I have a "point", but it's a totally uninteresting one. No one ever claimed that a Turing machine could handle every possible interaction with the external world, without first hooking it up to suitable peripherals. If you want a TM to toast bread, you need to connect it to a toaster; then the TM can easily handle the toaster's internal logic (unless this particular toaster requires solving the halting problem or something like that to determine how brown the bread should be!). In exactly the same way, if you want a TM to handle interactive communication, then you need to hook it up to suitable communication devices, as Neel discussed in his answer. In neither case are we saying anything that wouldn't have been obvious to Turing himself. So, I'd say the reason why there's been no "followup" to Wegner and Goldin's diatribes is that TCS has known how to model interactivity whenever needed, and has happily done so, since the very beginning of the field. Update (8/30): A related point is as follows. Does it ever give the critics pause that, here inside the Elite Church-Turing Ivory Tower (the ECTIT), the major research themes for the past two decades have included interactive proofs, multiparty cryptographic protocols, codes for interactive communication, asynchronous protocols for routing, consensus, rumor-spreading, leader-election, etc., and the price of anarchy in economic networks? If putting Turing's notion of computation at the center of the field makes it so hard to discuss interaction, how is it that so few of us have noticed? Another Update: To the people who keep banging the drum about higher-level formalisms being vastly more intuitive than TMs, and no one thinking in terms of TMs as a practical matter, let me ask an extremely simple question. What is it that lets all those high-level languages exist in the first place, that ensures they can always be compiled down to machine code? Could it be ... err ... THE CHURCH-TURING THESIS , the very same one you've been ragging on? To clarify, the Church-Turing Thesis is not the claim that "TURING MACHINEZ RULE!!" 
Rather, it's the claim that any reasonable programming language will be equivalent in expressive power to Turing machines -- and as a consequence , that you might as well think in terms of the higher-level languages if it's more convenient to do so. This, of course, was a radical new insight 60-75 years ago. Final Update: I've created a blog post for further discussion of this answer. | {
"source": [
"https://cstheory.stackexchange.com/questions/12377",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/4501/"
]
} |