Columns: source_id (int64, values 1–4.64M), question (string, lengths 0–28.4k), response (string, lengths 0–28.8k), metadata (dict).
1,662
I asked a question on CS Stack Exchange, "How to solve the following problem using segment trees?". I really want to improve, so I want to know how I can improve my post. 1. I copied the question statement directly from the website; I think that website's explanation of the problem was the easiest, which is why I copied the statement (but I have given credit). 2. I described my approach. My question is on hold and it's unlikely I will get an answer. I want to know what mistakes I made in my post.
Thank you for all of your enormous contributions to the site over the years, Gilles! I admire your acts of service and your spirit of giving to the world anonymously, and I will miss having you on the moderation team. Best wishes in all your future endeavours.
{ "source": [ "https://cs.meta.stackexchange.com/questions/1662", "https://cs.meta.stackexchange.com", "https://cs.meta.stackexchange.com/users/-1/" ] }
3
In a standard algorithms course we are taught that quicksort is $O(n \log n)$ on average and $O(n^2)$ in the worst case. At the same time, other sorting algorithms are studied which are $O(n \log n)$ in the worst case (like mergesort and heapsort), and even linear time in the best case (like bubblesort), but with some additional memory requirements. After a quick glance at some more running times it is natural to say that quicksort should not be as efficient as the others. Also, consider that students learn in basic programming courses that recursion is not really good in general because it could use too much memory, etc. Therefore (and even though this is not a real argument), this gives the idea that quicksort might not be really good because it is a recursive algorithm. Why, then, does quicksort outperform other sorting algorithms in practice? Does it have to do with the structure of real-world data? Does it have to do with the way memory works in computers? I know that some kinds of memory are way faster than others, but I don't know if that's the real reason for this counter-intuitive performance (when compared to theoretical estimates). Update 1: a canonical answer is saying that the constants involved in the $O(n\log n)$ of the average case are smaller than the constants involved in other $O(n\log n)$ algorithms. However, I have yet to see a proper justification of this, with precise calculations instead of intuitive ideas only. In any case, it seems like the real difference occurs, as some answers suggest, at the memory level, where implementations take advantage of the internal structure of computers, using, for example, the fact that cache memory is faster than RAM. The discussion is already interesting, but I'd still like to see more detail with respect to memory management, since it appears that the answer has to do with it. Update 2: There are several web pages offering a comparison of sorting algorithms, some fancier than others (most notably sorting-algorithms.com). Other than presenting a nice visual aid, this approach does not answer my question.
Short Answer The cache efficiency argument has already been explained in detail. In addition, there is an intrinsic argument why Quicksort is fast. If it is implemented with two “crossing pointers”, e.g. as here, the inner loops have a very small body. As this is the code executed most often, this pays off. Long Answer First of all: the average case does not exist! As the best and worst cases often are extremes that rarely occur in practice, average case analysis is done. But any average case analysis assumes some distribution of inputs! For sorting, the typical choice is the random permutation model (tacitly assumed on Wikipedia). Why $O$-notation? Discarding constants in the analysis of algorithms is done for one main reason: if I am interested in exact running times, I need the (relative) costs of all involved basic operations (even while still ignoring caching issues and pipelining in modern processors ...). Mathematical analysis can count how often each instruction is executed, but the running times of single instructions depend on processor details, e.g. whether a 32-bit integer multiplication takes as much time as an addition. There are two ways out: Fix some machine model. This is done in Don Knuth's book series “The Art of Computer Programming” for an artificial “typical” computer invented by the author. In volume 3 you find exact average case results for many sorting algorithms, e.g. Quicksort: $11.667(n+1)\ln(n)-1.74n-18.74$; Mergesort: $12.5 n \ln(n)$; Heapsort: $16 n \ln(n) +0.01n$; Insertionsort: $2.25n^2+7.75n-3\ln(n)$ [source]. These results indicate that Quicksort is fastest. But it is only proved on Knuth's artificial machine; it does not necessarily imply anything for, say, your x86 PC. Note also that the algorithms relate differently for small inputs: [source] Analyse abstract basic operations. For comparison-based sorting, these typically are swaps and key comparisons. In Robert Sedgewick's books, e.g. “Algorithms”, this approach is pursued. You find there: Quicksort: $2n\ln(n)$ comparisons and $\frac13n\ln(n)$ swaps on average; Mergesort: $1.44n\ln(n)$ comparisons, but up to $8.66n\ln(n)$ array accesses (mergesort is not swap-based, so we cannot count that); Insertionsort: $\frac14n^2$ comparisons and $\frac14n^2$ swaps on average. As you can see, this does not allow comparing algorithms as readily as exact runtime analysis does, but the results are independent of machine details. Other input distributions As noted above, average cases are always defined with respect to some input distribution, so one might consider distributions other than random permutations. E.g. research has been done for Quicksort with equal elements, and there is a nice article on the standard sort function in Java.
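To see the Sedgewick-style operation counts above in action, here is a small Python sketch (my addition, not part of the original answer) that counts key comparisons of a plain quicksort and a top-down mergesort on a random permutation. The quicksort uses a simple last-element pivot rather than the crossing-pointer partition mentioned above, but its expected comparison count is still about $2n\ln(n)$, versus roughly $1.44n\ln(n)$ for mergesort.

```python
import random
from math import log

comparisons = 0

def quicksort(a, lo, hi):
    """Plain quicksort (last element as pivot); counts key comparisons."""
    global comparisons
    if hi - lo <= 1:
        return
    pivot = a[hi - 1]
    i = lo
    for j in range(lo, hi - 1):
        comparisons += 1
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi - 1] = a[hi - 1], a[i]
    quicksort(a, lo, i)
    quicksort(a, i + 1, hi)

def mergesort(a):
    """Top-down mergesort; counts key comparisons."""
    global comparisons
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = mergesort(a[:mid]), mergesort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

n = 100_000
data = list(range(n))
random.shuffle(data)

comparisons = 0
quicksort(data[:], 0, n)
print("quicksort comparisons:", comparisons, "  ~2 n ln n =", round(2 * n * log(n)))

comparisons = 0
mergesort(data[:])
print("mergesort comparisons:", comparisons, "  ~1.44 n ln n =", round(1.44 * n * log(n)))
```

The measured counts land somewhat below the printed leading-order formulas because of lower-order terms, but the relative ordering of the two algorithms' comparison counts matches the average-case analysis quoted above.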
{ "source": [ "https://cs.stackexchange.com/questions/3", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/24/" ] }
57
In most introductory algorithm classes, notations like $O$ (Big O) and $\Theta$ are introduced, and a student would typically learn to use one of these to find the time complexity. However, there are other notations, such as $o$, $\Omega$ and $\omega$. Are there any specific scenarios where one notation would be preferable to another?
You are referring to Landau notation. These are not different symbols for the same thing but have entirely different meanings. Which one is "preferable" depends entirely on the desired statement. $f \in \cal{O}(g)$ means that $f$ grows at most as fast as $g$, asymptotically and up to a constant factor; think of it as a $\leq$. $f \in o(g)$ is the stricter form, i.e. $<$. $f \in \Omega(g)$ has the symmetric meaning: $f$ grows at least as fast as $g$. $\omega$ is its stricter cousin. You can see that $f \in \Omega(g)$ is equivalent to $g \in \cal{O}(f)$. $f \in \Theta(g)$ means that $f$ grows about as fast as $g$; formally $f \in \cal{O}(g) \cap \Omega(g)$. $f \sim g$ (asymptotic equality) is its stronger form. We often mean $\Theta$ when we use $\cal{O}$. Note how $\cal{O}(g)$ and its siblings are function classes. It is important to be very aware of this and of their precise definitions -- which can differ depending on who is talking -- when doing "arithmetic" with them. When proving things, take care to work with your precise definition. There are many definitions for Landau symbols around (all with the same basic intuition), some of which are equivalent on some sets of functions but not on others. Suggested reading: What are the rules for equals signs with big-O and little-o? Sorting functions by asymptotic growth How do O and Ω relate to worst and best case? Nested Big O-notation Definition of $\Theta$ for negative functions What is the meaning of $O(m+n)$? Is O(mn) considered "linear" or "quadratic" growth? Sums of Landau terms revisited What does big O mean as a term of an approximation ratio? Any other question about asymptotics and landau-notation as exercise. If you are interested in using Landau notation in a rigorous and sound manner, you may be interested in recent work by Rutanen et al. [1]. They formulate necessary and sufficient criteria for asymptotic notation as we use it in algorithmics, show that the common definition fails to meet them, and provide a (in fact, the) workable definition. [1] A general definition of the O-notation for algorithm analysis by K. Rutanen et al. (2015)
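As a quick numerical illustration of the $\leq$ / $<$ / about-equal analogies above, the following small Python sketch (my addition, not part of the original answer) tracks the ratio $f(n)/g(n)$ for growing $n$: a ratio that stays bounded corresponds to $\cal{O}$, a ratio tending to $0$ corresponds to $o$, and a ratio bounded away from both $0$ and infinity corresponds to $\Theta$.

```python
from math import log

f = lambda n: n * log(n)          # n log n
g = lambda n: n * n               # n^2
h = lambda n: 3 * n * n + 7 * n   # 3n^2 + 7n

for n in (10, 10**3, 10**6, 10**9):
    print(n, f(n) / g(n), h(n) / g(n))

# f/g -> 0 : n log n is in o(n^2), hence also in O(n^2), but not in Theta(n^2)
# h/g -> 3 : 3n^2 + 7n is in Theta(n^2)
```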
{ "source": [ "https://cs.stackexchange.com/questions/57", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/110/" ] }
109
Let us call a context-free language deterministic if and only if it can be accepted by a deterministic push-down automaton, and nondeterministic otherwise. Let us call a context-free language inherently ambiguous if and only if all context-free grammars which generate the language are ambiguous, and unambiguous otherwise. An example of a deterministic, unambiguous language is the language: $$\{a^{n}b^{n} \in \{a, b\}^{*} | n \ge 0\}$$ An example of a nondeterministic, unambiguous language is the language: $$\{w \in \{a, b\}^{*} | w = w^{R}\}$$ From Wikipedia , an example of an inherently ambiguous context-free language is the following union of context-free languages, which must also be context-free: $$L = \{a^{n}b^{m}c^{m}d^{n} \in \{a, b, c, d\}^{*} | n, m \ge 0\} \cup \{a^{n}b^{n}c^{m}d^{m} \in \{a, b, c, d\}^{*} | n, m \ge 0\}$$ Now for the questions: Is it known whether there exists a deterministic, inherently ambiguous context-free language? If so, is there an (easy) example? Is it known whether there exists a nondeterministic, inherently ambiguous context-free language? If so, is there an (easy) example? Clearly, since an inherently ambiguous context-free language exists ($L$ is an example), the answer to one of these questions is easy, if it is known whether $L$ is deterministic or nondeterministic. I also assume that it's true that if there's a deterministic one, there's bound to be a nondeterministic one as well... but I've been surprised before. References are appreciated, and apologies in advance if this is a well-known, celebrated result (in which case, I'm completely unaware of it).
If a language $L$ is deterministic, it is accepted by some deterministic push-down automaton, which in turn means there is some LR(1) grammar describing the language, and as every LR(1) grammar is unambiguous, this means that $L$ cannot be inherently ambiguous. Knuth proved this in his paper in which he introduced LR(1) (On the Translation of Languages from Left to Right). A language can be described by some context-free grammar if and only if it can be recognized by some nondeterministic push-down automaton. As a special case of this, inherently ambiguous context-free languages can be parsed by some nondeterministic push-down automaton. On a final note, any deterministic push-down automaton is also nondeterministic (this is the case for just about anything that can be nondeterministic, for a reasonable definition of nondeterminism).
{ "source": [ "https://cs.stackexchange.com/questions/109", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/69/" ] }
110
See the end of this post for some clarification on the definition(s) of min-heap automata. One can imagine using a variety of data structures for storing information for use by state machines. For instance, push-down automata store information in a stack, and Turing machines use a tape. State machines using queues, and ones using two or more stacks or tapes, have been shown to be equivalent in power to Turing machines. Imagine a min-heap machine. It works exactly like a push-down automaton, with the following exceptions: Instead of getting to look at the last thing you added to the heap, you only get to look at the smallest element (with the ordering defined on a per-machine basis) currently on the heap. Instead of getting to remove the last thing you added to the heap, you only get to remove one of the smallest elements (with the ordering defined on a per-machine basis) currently on the heap. Instead of getting to add an element to the top of the heap, you can only add an element to the heap, with its position being determined according to the other elements in the heap (with the ordering defined on a per-machine basis). This machine can accept all regular languages, simply by not using the heap. It can also accept the language $\displaystyle \{a^{n}b^{n} \in \{a, b\}^{*} \mid n \ge 0\}$ by adding $a$'s to the heap, and removing $a$'s from the heap when it reads $b$'s. It can accept a variety of other context-free languages. However, it cannot accept, for instance, $\displaystyle \{w \in \{a, b\}^{*} \mid w = w^{R}\}$ (stated without proof). EDIT: or can it? I don't think it can, but I've been surprised before, and I'm sure I'll keep being surprised when my assumptions keep making of me an... well. Can it accept any context-sensitive or Turing-complete languages? More generally, what research, if any, has been pursued in this direction? What results are there, if any? I am also interested in other varieties of exotic state machines, possibly those using other data structures for storage or various kinds of restrictions on access (e.g., how LBAs are restricted TMs). References are appreciated. I apologize in advance if this question is demonstrating ignorance. Formal Definition: I provide some more detailed definitions of min-heap automata here in order to clarify further discussion in questions which reference this material. We define a type-1 nondeterministic min-heap automaton as a 7-tuple $$(Q, q_0, A, \Sigma, \Gamma, Z_0, \delta)$$ where... $Q$ is a finite, non-empty set of states; $q_0 \in Q$ is the initial state; $A \subseteq Q$ is the set of accepting states; $\Sigma$ is a finite, non-empty input alphabet; $\Gamma$ is a finite, non-empty heap alphabet, where the weight of a symbol $\gamma \in \Gamma$, $w(\gamma) \in \mathbb{N}$, is such that $w(\gamma_1) = w(\gamma_2) \iff \gamma_1 = \gamma_2$; $Z_0 \notin \Gamma$ is the special bottom-of-the-heap symbol; $\delta : Q \times (\Sigma \cup \{\epsilon\}) \times (\Gamma \cup \{Z_0\}) \rightarrow \mathcal{P}({Q \times \Gamma^*})$ is the transition function. The transition function works by assuming an initially empty heap consisting of only $Z_0$. The transition function may add to the heap an arbitrary collection (finite, but possibly empty or with repeats) of elements $\gamma_1, \gamma_2, ..., \gamma_k \in \Gamma$. Alternatively, the transition function may remove an instance of the element $\gamma$ with the lowest weight $w(\gamma)$ of all elements remaining on the heap (i.e., the element on top of the heap).
The transition function may only use the top-most (i.e., of minimal weight) symbol instance in determining any given transition. Further, define a type-1 deterministic min-heap automaton to be a type-1 nondeterministic min-heap automaton which satisfies the following property: for all strings $x{\sigma}y \in \Sigma^*$ such that $|x| = n$ and $\sigma \in \Sigma$, $|\delta^{n+1}(q_0, x{\sigma}y, Z_0)| \leq 1$. Define also a type-2 nondeterministic min-heap automaton exactly the same as a type-1 nondeterministic min-heap automaton, except for the following changes: $\Gamma$ is a finite, non-empty heap alphabet, where the weight of a symbol $\gamma \in \Gamma$, $w(\gamma) \in \mathbb{N}$, is such that $w(\gamma_1) = w(\gamma_2)$ does not necessarily imply $\gamma_1 = \gamma_2$; in other words, different heap symbols can have the same weight. When instances of distinct heap symbols with the same weight are added to the heap, their relative order is preserved according to a last-in, first-out (LIFO) stack-like ordering. Thanks to Raphael for pointing out this more natural definition, which captures (and extends) the context-free languages. Some results demonstrated so far: Type-1 min-heap automata recognize a set of languages which is neither a subset nor a superset of the context-free languages. [1, 2] Type-2 min-heap automata, by their definition, recognize a set of languages which is a proper superset of the context-free languages, as well as a proper superset of the languages accepted by type-1 min-heap automata. Languages accepted by type-1 min-heap automata appear to be closed under union, concatenation, and Kleene star, but not under complementation [1], intersection, or difference; Languages accepted by type-1 nondeterministic min-heap automata appear to be a proper superset of languages accepted by type-1 deterministic min-heap automata. There may be a few other results I have missed. More results are (possibly) on the way. Follow-up Questions Closure under reversal? -- Open Closure under complementation? -- No! Does nondeterminism increase power? -- Yes? Is $HAL \subsetneq CSL$ for type-2? -- Open Does adding heaps increase power for type-1? -- $HAL^1 \subsetneq HAL^2 = HAL^k$ for $k > 2$ (?) Does adding a stack increase power for type-1? -- Open
You can recognize the canonical non-context-free (but context-sensitive) language $\{ a^n b^n c^n\ |\ n \geq 1 \}$ with this type of state machine. The crux is that you add tokens to the heap for every $a$ character, and while parsing the $b$ characters, you add 'larger' tokens to the heap, so they only end up at the bottom of the heap when you have parsed all the $b$ characters. Heap symbols are $a$ and $b$ , where $a < b$ . We consume all the $a$ symbols on the input and add $a$ symbols to the heap. If we encounter a $b$ , we switch strategies: for every $b$ we encounter subsequently we remove an $a$ from the heap and add a $b$ to the heap. When we encounter a $c$ we should have run out of $a$ s to remove, and then for every $c$ in the remaining input we remove a $b$ from the heap. If the heap is empty at the end, the string is in the language. Obviously, we reject if something goes wrong. Update: The language $EPAL = \{ ww^R | w \in \{a, b\}^* \}$ can not be recognized by min-heap automata. Suppose that we do have a min-heap automaton that can recognize $EPAL$ . We look at the 'state' the automaton is in after reading $w$ (the first part of the input, so $w^R$ is next). The only state we have are the contents of the heap and the particular state of the automaton it is in. This means that after recognizing $w$ , this 'state' needs to hold enough information to match $w^R$ . In particular, in order to do this, there must be $2^n$ possible different 'state's (where $n = |w|$ ), as there are $2^n$ possible words consisting of $a$ and $b$ characters. As there are only a finite number of states and only a finite number of heap characters, this implies that there exists some word $w$ for which the heap contains an exponential number of some heap character, say $x$ . We first prove the theorem for deterministic min-heap automata, and then extend this proof to non-deterministic min-heap automata. In particular, deterministic automata that recognize some language will not put themselves in an infinite loop, which is a useful property. We shall prove that the heap can only contain at most a number of heap tokens that is linear in the number of characters read from the input. This immediately rules out that $x$ appears an exponential number of times on the heap, which completes the proof that $EPAL$ can not be recognized by min-heap automata. Because we only have a finite number of states in our automaton and because a deterministic automaton will not put itself into an infinite loop, on reading an input signal it will add at most a constant number of heap characters onto the heap. Similarly, on consuming some heap symbol $y$ , it can only add at most a constant number of heap characters that are strictly larger than $y$ and it can only decrease the number of $y$ symbols on the stack (otherwise we get an infinite loop). Consuming heap symbols may therefore cause an (enormous) buildup of larger heap symbols, but as there are only a constant number of different types of heap symbols, this is only a constant number not dependent on $n$ . This implies that the number of heap symbols is at most some (large) constant times the number of input symbols read so far. This completes the proof for the deterministic case. In the non-deterministic case, the proof is similar, but a bit trickier: instead of adding at most some constant number of heap tokens to the heap, it adds some arbitrary number of heap tokens to the heap. However, the crucial point is that this number does not depend on $n$ . 
In particular, if we can non-deterministically get exactly the right heap symbols on the heap after recognizing $w$ (right for recognizing $w^R$ ), we can also non-deterministically choose the heap symbols that match some other word $w'$ , and thereby recognize $w w'^R$ , thus contradicting that the min-heap automaton recognizes exactly $EPAL$ . Update 3: I'll make the last argument (about non-determinism) rigorous. By the above argument, there must exist an infinite set of words $W \subseteq \{a,b\}^*$ such that for every $w \in W$ , after recognizing $w$ , the heap contains $\omega(|w|)$ elements (note that we can talk about $O(f(|w|))$ as we have an infinite set of words). As we cannot get that many elements on the heap through deterministic means, we must have had some form of a loop in which we first non-deterministically chose to add more elements to the heap (without consuming input), and later chose to exit this loop, and we must have traversed this loop $\omega(1)$ times. Take the set of all such loops used by $W$ . As there are only $O(1)$ states, the size of this set is $O(1)$ , and the set of all its subsets is also $O(1)$ . Now note that the 'deterministic' part of the execution paths can only contribute to $O(|w|)$ of the tokens, which means that a lot of the exponential number of different words must have execution paths whose 'deterministic' parts contribute the same tokens to the heap. In particular, the only way to get more tokens is to take the loops we identified above. Combining these observations, this means that there must be two distinct words in $W$ , $w_1$ and $w_2$ say, whose 'deterministic' part of the execution paths contribute the same tokens to the heap, and that are differentiated by taking some subset of the loops above a different number of times, but that use the same subset of loops (remember there are only $O(1)$ of these loops). We can now show that $w_1 w_2$ can also be recognized by the min-heap automaton: we follow the execution path for $w_1$ as above, but we traverse the loops the same number of times the execution path for $w_2$ traverses them. This fills the min-heap with tokens such that $w_2$ is accepted as suffix, thus completing the proof. Update 2: It just occurred to me that the above means that we can simulate a deterministic min-heap automaton using only logarithmic space: we keep a counter for every type of character in the min-heap. As shown above, this counter will at most be $O(n)$ , and hence can be stored using only $O(\log n)$ space (as there are only a constant number of these counters). This gives us: $\mathrm{DHAL} \subset \mathrm{L}$ $\mathrm{HAL} \subset \mathrm{NL}$ where $\mathrm{DHAL}$ is the set of languages recognized by some deterministic min-heap automaton.
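As an illustration of the strategy in the first paragraph of this answer, here is a plain Python simulation (an informal sketch of my own, not a formal type-1 automaton): it recognizes $\{ a^n b^n c^n \mid n \ge 1 \}$ while only ever inspecting or removing the minimum element of a heap, using weight-1 tokens for $a$'s and weight-2 tokens for $b$'s, as described above.

```python
import heapq

def accepts_anbncn(word):
    """Simulate the strategy above: push a weight-1 token per 'a', trade it for
    a weight-2 token per 'b', pop a weight-2 token per 'c'.  Only the minimum
    of the heap is ever inspected or removed, as the machine model requires."""
    heap, phase = [], 'a'
    for ch in word:
        if ch == 'a' and phase == 'a':
            heapq.heappush(heap, 1)
        elif ch == 'b' and phase in ('a', 'b'):
            phase = 'b'
            if not heap or heapq.heappop(heap) != 1:
                return False          # more b's than a's
            heapq.heappush(heap, 2)
        elif ch == 'c' and phase in ('b', 'c'):
            phase = 'c'
            if heap and heap[0] == 1:
                return False          # unmatched a's left over
            if not heap:
                return False          # more c's than b's
            heapq.heappop(heap)       # the minimum is necessarily a 2-token
        else:
            return False              # symbols out of order
    return phase == 'c' and not heap  # everything matched, and n >= 1

print(accepts_anbncn('aaabbbccc'), accepts_anbncn('aabbbccc'), accepts_anbncn('abcabc'))
# True False False
```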
{ "source": [ "https://cs.stackexchange.com/questions/110", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/69/" ] }
125
In quantum computation, what is the equivalent model of a Turing machine? It is quite clear to me how quantum circuits can be constructed out of quantum gates, but how can we define a quantum Turing machine (QTM) that can actually benefit from quantum effects, namely, perform on high-dimensional systems?
(Note: the full description is a bit complex and has several subtleties which I preferred to ignore. The following is merely the high-level idea of the QTM model.) When defining a Quantum Turing machine (QTM), one would like to have a simple model, similar to the classical TM (that is, a finite state machine plus an infinite tape), but allow the new model the advantage of quantum mechanics. Similarly to the classical model, a QTM has: $Q=\{q_0,q_1,..\}$ - a finite set of states. Let $q_0$ be the initial state. $\Sigma=\{\sigma_0,\sigma_1,...\}$, $\Gamma=\{\gamma_0,..\}$ - the input and working alphabets; an infinite tape and a single "head". However, when defining the transition function, one should recall that any quantum computation must be reversible. Recall that a configuration of a TM is the tuple $C=(q,T,i)$ denoting that the TM is in state $q\in Q$, the tape contains $T\in \Gamma^*$ and the head points to the $i$th cell of the tape. Since, at any given time, the tape contains only a finite number of non-blank cells, we define the (quantum) state of the QTM as a unit vector in the Hilbert space $\mathcal{H}$ generated by the configuration space $Q\times\Gamma^*\times \mathbb{Z}$. The specific configuration $C=(q,T,i)$ is represented as the state $$|C\rangle = |q\rangle |T\rangle |i\rangle.$$ (Remark: therefore, every cell of the tape is a $|\Gamma|$-dimensional Hilbert space.) The QTM is initialized to the state $|\psi(0)\rangle = |q_0\rangle |T_0\rangle |1\rangle$, where $T_0\in \Gamma^*$ is the concatenation of the input $x\in\Sigma^*$ with as many "blanks" as needed (there is a subtlety here in determining the maximal length, but I ignore it). At each time step, the state of the QTM evolves according to some unitary $U$: $$|\psi(i+1)\rangle = U|\psi(i)\rangle$$ Note that the state at any time $n$ is given by $|\psi(n)\rangle = U^n|\psi(0)\rangle$. $U$ can be any unitary that "changes" the tape only where the head is located and moves the head one step to the right or left. That is, $\langle q',T',i'|U|q,T,i\rangle$ is zero unless $i'= i \pm 1$ and $T'$ differs from $T$ only at position $i$. At the end of the computation (when the QTM reaches a state $q_f$) the tape is measured (using, say, the computational basis). The interesting thing to notice is that at each "step" the QTM's state is a superposition of possible configurations, which gives the QTM its "quantum" advantage. The answer is based on Masanao Ozawa, On the Halting Problem for Quantum Turing Machines. See also David Deutsch, Quantum theory, the Church-Turing principle and the universal quantum computer.
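To make the superposition-of-configurations picture concrete, here is a toy Python sketch (my own illustration, not part of the original answer; the tape is made finite and cyclic so the configuration space stays finite). The chosen unitary is a Hadamard on a two-element state set combined with a one-step shift of the head: it leaves the tape untouched and moves the head, so it respects the locality condition above, and the norm of the state is preserved at every step.

```python
from collections import defaultdict
from math import sqrt

# Amplitudes of a Hadamard acting on the two internal states q0, q1.
H = {('q0', 'q0'): 1 / sqrt(2), ('q0', 'q1'): 1 / sqrt(2),
     ('q1', 'q0'): 1 / sqrt(2), ('q1', 'q1'): -1 / sqrt(2)}

def step(psi, tape_len):
    """One application of U = (Hadamard on the state) tensor (move head right).
    psi maps configurations (state, tape, head) to real amplitudes."""
    out = defaultdict(float)
    for (q, tape, head), amp in psi.items():
        for q2 in ('q0', 'q1'):
            out[(q2, tape, (head + 1) % tape_len)] += amp * H[(q, q2)]
    return {c: a for c, a in out.items() if a != 0.0}

tape = ('1', '0', '1')                     # the tape contents never change here
psi = {('q0', tape, 0): 1.0}               # initial configuration |q0>|101>|0>
for t in range(1, 4):
    psi = step(psi, len(tape))
    norm = sum(a * a for a in psi.values())
    print(t, len(psi), round(norm, 12))    # configurations in superposition; norm stays 1
```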
{ "source": [ "https://cs.stackexchange.com/questions/125", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/157/" ] }
130
We know that DFAs are equivalent to NFAs in expressive power; there is also a known algorithm for converting NFAs to DFAs (unfortunately I do not know the inventor of that algorithm), which in the worst case gives us $2^S$ states, if our NFA had $S$ states. My question is: what determines the worst-case scenario? Here's a transcription of the algorithm in case of ambiguity: Let $A = (Q,\Sigma,\delta,q_0,F)$ be an NFA. We construct a DFA $A' = (Q',\Sigma,\delta',q'_0,F')$ where $Q' = \mathcal{P}(Q)$, $F' = \{S \in Q' | F \cap S \neq \emptyset \}$, $\delta'(S,a) =\bigcup_{s \in S} (\delta(s,a) \cup \hat \delta(s,\varepsilon))$, and $q'_0 = \{q_0\} \cup \hat \delta(q_0, \varepsilon)$, where $\hat\delta$ is the extended transition function of $A$.
The algorithm you refer to is called the powerset construction, and was first published by Michael Rabin and Dana Scott in 1959. To answer your question as stated in the title, there is no maximal DFA for a regular language, since you can always take a DFA and add as many states as you want with transitions between them, but with no transitions between one of the original states and one of the new ones. Thus, the new states will not be reachable from the initial state $q_0$, so the language accepted by the automaton will not change (since $\hat\delta(q_0,w)$ will remain the same for all $w\in\Sigma^*$). That said, it is clear that there can be no conditions on an NFA for its equivalent DFA to be maximal, since there is no unique equivalent DFA. In contrast, the minimal DFA is unique up to isomorphism. A canonical example of a language accepted by an NFA with $n+1$ states whose equivalent DFA has $2^n$ states is $$L=\{w\in\{0,1\}^*:|w|\geq n\text{ and the \(n\)-th symbol from the last one is 1}\}.$$ An NFA for $L$ is $A=\langle Q,\{0,1\},\delta,q_0,\{q_{n}\}\rangle$ with $Q=\{q_0,q_1,\ldots,q_n\}$, $\delta(q_0,0)=\{q_0\}$, $\delta(q_0,1)=\{q_0,q_1\}$ and $\delta(q_i,0)=\delta(q_i,1)=\{q_{i+1}\}$ for $i\in\{1,\ldots,n-1\}$. The DFA resulting from applying the powerset construction to this NFA will have $2^n$ states, because you need to represent all $2^n$ words of length $n$ as suffixes of a word in $L$.
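Here is a short Python sketch of the powerset construction (my own helper, without $\varepsilon$-moves, unlike the transcription in the question), applied to the NFA for $L$ described above; it confirms that all $2^n$ reachable subsets appear as DFA states.

```python
def nfa_to_dfa(alphabet, delta, start, accepting):
    """Subset (powerset) construction for an NFA without epsilon moves;
    delta maps (state, symbol) to a set of states.  Only subsets reachable
    from the start set are generated, so the result can be much smaller
    than the full 2^|Q| in friendly cases."""
    start_set = frozenset([start])
    dfa_states, worklist, dfa_delta = {start_set}, [start_set], {}
    while worklist:
        S = worklist.pop()
        for a in alphabet:
            T = frozenset(q for s in S for q in delta.get((s, a), ()))
            dfa_delta[(S, a)] = T
            if T not in dfa_states:
                dfa_states.add(T)
                worklist.append(T)
    dfa_accepting = {S for S in dfa_states if S & accepting}
    return dfa_states, dfa_delta, start_set, dfa_accepting

# The NFA from the answer: "the n-th symbol from the end is 1" (states 0..n).
n = 5
delta = {(0, '0'): {0}, (0, '1'): {0, 1}}
for i in range(1, n):
    delta[(i, '0')] = {i + 1}
    delta[(i, '1')] = {i + 1}

dfa_states, _, _, _ = nfa_to_dfa('01', delta, 0, {n})
print(len(dfa_states))   # 2**n = 32: one subset per combination of the last n symbols
```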
{ "source": [ "https://cs.stackexchange.com/questions/130", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/147/" ] }
178
Is there any "natural" language which is undecidable? by "natural" I mean a language defined directly by properties of strings, and not via machines and their equivalent. In other words, if the language looks like $$ L = \{ \langle M \rangle \mid \ldots \}$$ where $M$ is a TM, DFA (or regular-exp), PDA (or grammar), etc.., then $L$ is not natural. However $L = \{xy \ldots \mid x \text{ is a prefix of y} \ldots \}$ is natural.
Since you wanted "strings", I mention the classic one: Post Correspondence Problem.
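For concreteness: an instance of the Post Correspondence Problem is a list of pairs of strings, and the question is whether some nonempty sequence of indices makes the concatenation of the top strings equal the concatenation of the bottom strings. The sketch below (my illustration, not part of the original answer) is a brute-force breadth-first search; since the problem is undecidable, this can only be a semi-decision procedure, and the depth bound is an arbitrary cut-off.

```python
from collections import deque

def pcp_search(pairs, max_depth=12):
    """Breadth-first search for a match: a nonempty index sequence i1..ik with
    top[i1]+...+top[ik] == bottom[i1]+...+bottom[ik].  Returns the sequence,
    or None if no solution is found within max_depth tiles."""
    queue = deque([((), '', '')])          # (indices, top so far, bottom so far)
    while queue:
        seq, top, bot = queue.popleft()
        if len(seq) >= max_depth:
            continue
        for i, (t, b) in enumerate(pairs):
            nt, nb = top + t, bot + b
            # keep only branches where one string is still a prefix of the other
            if nt.startswith(nb) or nb.startswith(nt):
                if nt == nb:
                    return seq + (i,)
                queue.append((seq + (i,), nt, nb))
    return None

# A classic solvable instance: dominoes given as (top, bottom).
pairs = [('a', 'baa'), ('ab', 'aa'), ('bba', 'bb')]
print(pcp_search(pairs))   # (2, 1, 2, 0): bba+ab+bba+a == bb+aa+bb+baa
```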
{ "source": [ "https://cs.stackexchange.com/questions/178", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/157/" ] }
210
Why is any complexity which is at most polynomial considered efficient in computer science? For any practical application (a), algorithms with complexity $n^{\log n}$ are way faster than algorithms that run in time, say, $n^{80}$, but the first is considered inefficient while the latter is efficient. Where's the logic?! (a) Assume, for instance, that the number of atoms in the universe is approximately $10^{80}$.
Another perspective on "efficiency" is that polynomial time allows us to define a notion of "efficiency" that doesn't depend on machine models. Specifically, there's a variant of the Church-Turing thesis called the "effective Church-Turing thesis" that says that any problem that runs in polynomial time on one kind of machine model will also run in polynomial time on another equally powerful machine model. This is a weaker statement than the general C-T thesis, and is 'sort of' violated by both randomized algorithms and quantum algorithms, but has not been violated in the sense of being able to solve an NP-hard problem in polynomial time by changing the machine model. This is ultimately the reason why polynomial time is a popular notion in theoryCS. However, most people realize that this does not reflect "practical efficiency". For more on this, Dick Lipton's post on 'galactic algorithms' is a great read.
{ "source": [ "https://cs.stackexchange.com/questions/210", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/157/" ] }
265
We learned about the class of context-free languages $\mathrm{CFL}$. It is characterised by both context-free grammars and pushdown automata so it is easy to show that a given language is context-free. How do I show the opposite, though? My TA has been adamant that in order to do so, we would have to show for all grammars (or automata) that they can not describe the language at hand. This seems like a big task! I have read about some pumping lemma but it looks really complicated.
To my knowledge the pumping lemma is by far the simplest and most-used technique. If you find it hard, try the regular version first, it's not that bad. There are some other means for languages that are far from context free. For example undecidable languages are trivially not context free. That said, I am also interested in other techniques than the pumping lemma if there are any. EDIT: Here is an example for the pumping lemma: suppose the language $L=\{ a^k \mid k ∈ P\}$ is context free ($P$ is the set of prime numbers). The pumping lemma has a lot of $∃/∀$ quantifiers, so I will make this a bit like a game: The pumping lemma gives you a $p$. You give a word $s$ of the language of length at least $p$. The pumping lemma rewrites it like this: $s=uvxyz$ with some conditions ($|vxy|≤p$ and $|vy|≥1$). You give an integer $n≥0$. If $uv^nxy^nz$ is not in $L$, you win: $L$ is not context free. For this particular language, any $a^k$ (with $k≥p$ and $k$ a prime number) will do the trick for $s$. Then the pumping lemma gives you $uvxyz$ with $|vy|≥1$. To disprove the context-freeness, you need to find $n$ such that $|uv^nxy^nz|$ is not a prime number. $$|uv^nxy^nz|=|s|+(n-1)|vy|=k+(n-1)|vy|$$ And then $n=k+1$ will do: $k+k|vy|=k(1+|vy|)$ is not prime so $uv^nxy^nz\not\in L$. The pumping lemma can't be applied so $L$ is not context free. A second example is the language $\{ww \mid w \in \{a,b\}^{\ast}\}$. We (of course) have to choose a string and show that there's no possible way it can be broken into those five parts and have every derived pumped string remain in the language. The string $s=a^{p}b^{p}a^{p}b^{p}$ is a suitable choice for this proof. Now we just have to look at where $v$ and $y$ can be. The key parts are that $v$ or $y$ has to have something in it (perhaps both), and that both $v$ and $y$ (and $x$) are contained in a length $p$ substring - so they can't be too far apart. This string has a number of possibilities for where $v$ and $y$ might be, but it turns out that several of the cases actually look pretty similar. $vy \in a^{\ast}$ or $vy \in b^{\ast}$. So then they're both contained in one of the sections of contiguous $a$s or $b$s. This is the relatively easy case to argue, as it kind of doesn't matter which they're in. Assume that $|vy| = k \leq p$. If they're in the first section of $a$s, then when we pump, the first half of the new string is $a^{p+k}b^{p-k/2}$, and the second is $b^{k/2}a^{p}b^{p}$. Obviously this is not of the form $ww$. The argument for any of the three other sections runs pretty much the same, it's just where the $k$ and $k/2$ end up in the indices. $vxy$ straddles two of the sections. In this case pumping down is your friend. Again there are several places where this can happen (3 to be exact), but I'll just do one illustrative one, and the rest should be easy to figure out from there. Assume that $vxy$ straddles the border between the first $a$ section and the first $b$ section. Let $vy = a^{k_{1}}b^{k_{2}}$ (it doesn't matter precisely where the $a$s and $b$s are in $v$ and $y$, but we know that they're in order). Then when we pump down (i.e. the $i=0$ case), we get the new string $s'=a^{p-k_{1}}b^{p-k_{2}}a^{p}b^{p}$, but then if $s'$ could be split into $ww$, the midpoint must be somewhere in the second $a$ section, so the first half is $a^{p-k_{1}}b^{p-k_{2}}a^{(k_{1}+k_{2})/2}$, and the second half is $a^{p-(k_{1}+k_{2})/2}b^{p}$. Clearly these are not the same string, so we can't put $v$ and $y$ there.
The remaining cases should be fairly transparent from there - they're the same ideas, just putting $v$ and $y$ in the other 3 spots in the first instance, and 2 spots in the second instance. In all cases though, you can pump it in such a way that the ordering is clearly messed up when you split the string in half.
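Since the argument above is a finite case analysis for one concrete string, it can be checked mechanically for a small $p$. The following Python sketch (my addition, not part of the original answer) enumerates every admissible split $s = uvxyz$ of $a^p b^p a^p b^p$ with $|vxy| \le p$ and $|vy| \ge 1$ and verifies that pumping with $i = 0$ or $i = 2$ always leaves $\{ww \mid w \in \{a,b\}^*\}$. Of course this does not replace the proof, which must hold for every $p$.

```python
def in_L(s):
    """Membership in {ww : w in {a,b}*}."""
    return len(s) % 2 == 0 and s[:len(s) // 2] == s[len(s) // 2:]

def check_pumping(s, p):
    """For every split s = uvxyz with |vxy| <= p and |vy| >= 1, verify that
    some pumping exponent i in {0, 2} pushes the string out of the language."""
    n = len(s)
    for start in range(n):                       # where vxy begins
        for vxy_len in range(1, p + 1):
            if start + vxy_len > n:
                break
            for v_len in range(vxy_len + 1):
                for x_len in range(vxy_len - v_len + 1):
                    y_len = vxy_len - v_len - x_len
                    if v_len + y_len == 0:
                        continue                 # |vy| >= 1 is required
                    u = s[:start]
                    v = s[start:start + v_len]
                    x = s[start + v_len:start + v_len + x_len]
                    y = s[start + v_len + x_len:start + vxy_len]
                    z = s[start + vxy_len:]
                    if all(in_L(u + v * i + x + y * i + z) for i in (0, 2)):
                        return False             # this split would survive pumping
    return True

p = 4
print(check_pumping('a' * p + 'b' * p + 'a' * p + 'b' * p, p))   # True
```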
{ "source": [ "https://cs.stackexchange.com/questions/265", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/98/" ] }
266
We learned about the concept of enumerations of functions. In practice, they correspond to programming languages. In a passing remark, the professor mentioned that the class of all total functions (i.e. the functions that always terminate for every input) is not enumerable. That would mean that we can not devise a programming language that allows us to write all total functions but no others---which would be nice to have! So how is it that we (apparently) have to accept the potential for non-termination if we want decent computational power?
Because of diagonalization. If $(f_e: e \in \mathbb{N})$ were a computable enumeration of all total computable functions from $\mathbb{N}$ to $\mathbb{N}$, such that every $f_e$ was total, then $g(i) = f_i(i)+ 1$ would also be a total computable function, but it would not be in the enumeration. That would contradict the assumptions about the sequence. Thus no computable enumeration of functions can consist of exactly the total computable functions. Suppose we think of a universal computable function $h(e,i)$, where "universal" means $h$ is a computable binary function and that for every total computable unary function $f(n)$ there is some $e$ such that $f(i) = h(e,i)$ for all $i$. Then there must also be some $e$ such that $g(n) = h(e,n)$ is not a total function, because of the previous paragraph. Otherwise $h$ would give a computable enumeration of total computable unary functions that includes all the total computable unary functions. Thus the requirement that every function in a system of functions is total is incompatible with the existence of a universal function in that system. For some weak systems, such as the primitive recursive functions, every function is total but there are no universal functions. Stronger systems that have universal functions, such as Turing computability, simply must have partial functions in order to allow the universal function to exist.
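A finite toy version of the diagonal step may help: given any explicit list of total functions, $g(i) = f_i(i) + 1$ disagrees with the $i$-th listed function at input $i$, so it cannot appear in the list. The sketch below (my illustration) just checks this for a hard-coded list.

```python
# A finite analogue of the diagonal argument: g differs from every listed
# function on at least one input, so g is not in the list.
fs = [lambda n: 0, lambda n: n, lambda n: n * n, lambda n: 2 ** n]

def g(i):
    return fs[i](i) + 1

for i, f in enumerate(fs):
    assert g(i) != f(i)                    # g disagrees with f_i at input i

print([g(i) for i in range(len(fs))])      # [1, 2, 5, 9]
```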
{ "source": [ "https://cs.stackexchange.com/questions/266", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/98/" ] }
269
In our computer systems lecture we were introduced to the MIPS processor. It was (re)developed over the course of the term and has in fact been quite easy to understand. It uses a RISC design, that is, its elementary commands are regularly encoded and there are only a few of them in order to keep the wiring simple. It was mentioned that CISC follows a different philosophy. I looked briefly at the x86 instruction set and was shocked. I cannot imagine how anyone would want to build a processor that uses so complex a command set! So I figure there have to be good arguments why large portions of the processor market use CISC architectures. What are they?
There is a general historical trend. In the olden days, memories were small, and so programs were perforce small. Also, compilers were not very smart, and many programs were written in assembler, so it was considered a good thing to be able to write a program using few instructions. Instruction pipelines were simple, and processors grabbed one instruction at a time to execute it. The machinery inside the processor was quite complex anyway; decoding instructions was not felt to be much of a burden. In the 1970s, CPU and compiler designers realized that having such complex instructions was not so helpful after all. It was difficult to design processors in which those instructions were really efficient, and it was difficult to design compilers that really took advantage of these instructions. Chip area and compiler complexity was better spent on more generic pursuits such as more general-purpose registers. The Wikipedia article on RISC explains this in more detail. MIPS is the ultimate RISC architecture, which is why it's taught so often. The x86 family is a bit different. It was originally a CISC architecture meant for systems with very small memory (no room for large instructions), and has undergone many successive versions. Today's x86 instruction set is not only complicated because it's CISC, but because it's really a 8088 with a 80386 with a Pentium possibly with an x86_64 processor. In today's world, RISC and CISC are no longer the black-and-white distinction they might have been once. Most CPU architectures have evolved to different shades of grey. On the RISC side, some modern MIPS variants have added multiplication and division instructions, with a non-uniform encoding. ARM processors have become more complex: many of them have a 16-bit instruction set called Thumb in addition to the “original” 32-bit instructions, not to mention Jazelle to execute JVM instructions on the CPU. Modern ARM processors also have SIMD instructions for multimedia applications: some complex instructions do pay after all. On the CISC side, all recent processors are to some extent RISC inside. They have microcode to define all these complex macro instructions. The sheer complexity of the processor makes the design of each model take several years, even with a RISC design, what with the large number of components, with pipelining and predictive execution and whatnot. So why do the fastest processors remain CISC outside? Part of it, in the case of the x86 (32-bit and 64-bit) family, is historical compatibility. But that's not the whole of it. In the early 2000s, Intel tried pushing the Itanium architecture. Itanium is an extreme case of complex instructions (not really CISC, though: its design has been dubbed EPIC ). It even does away with the old-fashioned idea of executing instructions in sequence: all instructions are executed in parallel until the next barrier. One of the reasons Itanium didn't take is that nobody, whether at Intel or elsewhere, could write a decent compiler for it. Now a good old mostly-sequential processor like x86_64, that's something we understand.
{ "source": [ "https://cs.stackexchange.com/questions/269", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/98/" ] }
270
When implementing a dictionary ('I want to look up customer data by their customer IDs'), the typical data structures used are hash tables and binary search trees. I know for instance that the C++ STL library implements dictionaries (they call them maps) using (balanced) binary search trees, and the .NET framework uses hash tables under the hood. What are the advantages and disadvantages of these data structures? Is there some other option that is reasonable in certain situations? Note that I'm not particularly interested in cases where the keys have a strong underlying structure, say, they are all integers between 1 and n or something.
A whole treatise could be written on this topic; I'm just going to cover some salient points, and I'll keep the discussion of other data structures to a minimum (there are many variants indeed). Throughout this answer, $n$ is the number of keys in the dictionary. The short answer is that hash tables are faster in most cases , but can be very bad at their worst. Search trees have many advantages, including tame worst-case behavior , but are somewhat slower in typical cases. Balanced binary search trees have a fairly uniform complexity: each element takes one node in the tree (typically 4 words of memory), and the basic operations (lookup, insertion, deletion) take $O(\mathrm{lg}(n))$ time (guaranteed asymptotic upper bound). More precisely, an access in the tree takes about $\mathrm{log}_2(n)$ comparisons. Hash tables are a bit more variable. They require an array of around $2n$ pointers. Access to one element depends on the quality of the hash function. The purpose of a hash function is to disperse the elements. A hash table “works” if all the elements you want to store in it have different hashes. If this is the case, then the basic operations (lookup, insertion, deletion) take $O(1)$ time, with a fairly small constant (one hash calculation plus one pointer lookup). This makes hash tables very fast in many typical cases. A general problem with hash tables is that the $O(1)$ complexity is not guaranteed. For addition, there's a point where the table becomes full; when that happens (or, better, a little before that happens), the table needs to be enlarged, which requires moving all of its elements, for an $O(n)$ cost. This can introduce “jerky” behavior when a lot of elements are added. It's possible for the input to collide over a few hash values. This rarely happens naturally, but it can be a security problem if the inputs are chosen by an attacker: it's a way to considerably slow down some servers. This issue has led some programming language implementations (such as Perl and Python) to switch from a plain old hash table to a hash function involving a random number chosen when the hash table is built, together with a hash function that spreads this random datum well (which increases the multiplicative constant in the $O(1)$), or to a binary search tree. While you can avoid collisions by using a cryptographic hash, this is not done in practice because cryptographic hashes are comparatively very slow to compute. When you throw data locality into the mix, hash tables do poorly. They work precisely because they store related elements far apart, which means that if the application looks up elements sharing a prefix in sequence, it will not benefit from cache effects. This is not relevant if the application makes essentially random lookups. Another factor in favor of search trees is that they're an immutable data structure: if you need to take a copy of a tree and change a few elements in it, you can share most of the data structure. If you take a copy of a hash table, you need to copy the whole array of pointers. Also, if you're working in a purely functional languages, hash tables are often not an option. When you go beyond strings, hash tables and binary search trees make different requirements on the data type of the key: hash tables require a hash function (a function from the keys to the integers such that $k_1 \equiv k_2 \implies h(k_1) = h(k_2)$, while binary search trees require a total order. 
Hashes can sometimes be cached, if there is enough room in the data structure where the key is stored; caching the result of comparisons (a binary operation) is often impractical. On the other hand, comparisons can benefit from shortcutting: if keys often differ within the first few bytes, a negative comparison can be very fast. In particular, if you're going to need the order on the keys, for example if you want to be able to list the keys in alphabetical order, then hash tables are no help (you'll need to sort them), whereas you can straightforwardly traverse a search tree in order. You can combine binary search trees and hash tables in the form of hash trees . A hash tree stores keys in a search tree according to their hash. This is useful, for example, in a purely functional programming language where you want to work on data that does not have an easy-to-compute order relation. When the keys are strings (or integers), a trie can be another option. A trie is a tree, but indexed differently from a search tree: you write the key in binary, and go left for a 0 and right for a 1. The cost of an access is thus proportional to the length of the key. Tries can be compressed to remove intermediate nodes; this is known as a patricia trie or radix tree . Radix trees can outperform balanced trees, particularly when many keys share a common prefix.
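As a concrete reference point for the hash-table side of the comparison, here is a minimal separate-chaining table in Python (a sketch of the general idea only, not how any particular standard library implements its dictionary): lookups and insertions are expected $O(1)$, and the occasional rehash is the $O(n)$ "jerky" step mentioned above.

```python
class ChainedHashTable:
    """Minimal separate-chaining hash table: O(1) expected lookup/insert,
    with an O(n) rehash whenever the load factor grows too large."""

    def __init__(self):
        self.buckets = [[] for _ in range(8)]
        self.size = 0

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)     # overwrite existing key
                return
        bucket.append((key, value))
        self.size += 1
        if self.size > 2 * len(self.buckets):    # load factor bound
            self._rehash()

    def _rehash(self):
        old = self.buckets
        self.buckets = [[] for _ in range(2 * len(old))]   # the O(n) move
        for bucket in old:
            for k, v in bucket:
                self._bucket(k).append((k, v))

t = ChainedHashTable()
for i in range(1000):
    t.put(f"customer-{i}", i)
print(t.get("customer-123"))   # 123
```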
{ "source": [ "https://cs.stackexchange.com/questions/270", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/92/" ] }
298
When searching graphs, there are two easy algorithms: breadth-first and depth-first (usually done by adding all adjacent graph nodes to a queue (breadth-first) or stack (depth-first)). Now, are there any advantages of one over the other? The ones I could think of: If you expect your data to be pretty far down inside the graph, depth-first might find it earlier, as you are going down into the deeper parts of the graph very fast. Conversely, if you expect your data to be pretty far up in the graph, breadth-first might give the result earlier. Is there anything I have missed or does it mostly come down to personal preference?
I'd like to quote an answer from Stack Overflow by hstoerr which covers the problem nicely: That heavily depends on the structure of the search tree and the number and location of solutions. If you know a solution is not far from the root of the tree, a breadth first search (BFS) might be better. If the tree is very deep and solutions are rare, depth first search (DFS) might rootle around forever, but BFS could be faster. If the tree is very wide, a BFS might need too much memory, so it might be completely impractical. If solutions are frequent but located deep in the tree, BFS could be impractical. If the search tree is very deep you will need to restrict the search depth for depth first search (DFS) anyway (for example with iterative deepening). But these are just rules of thumb; you'll probably need to experiment. Rafał Dowgird also remarks: Some algorithms depend on particular properties of DFS (or BFS) to work. For example the Hopcroft and Tarjan algorithm for finding 2-connected components takes advantage of the fact that each already visited node encountered by DFS is on the path from the root to the currently explored node.
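A minimal sketch of both traversals (my illustration, not from the quoted answers): the only structural difference is the FIFO queue versus the LIFO stack, which is exactly what produces the "near the root first" versus "deep first" behaviour discussed above.

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first order: a FIFO queue visits nodes closest to start first."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

def dfs(graph, start):
    """Depth-first order: a LIFO stack dives down one branch before backtracking."""
    seen, order, stack = set(), [], [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        for nxt in reversed(graph.get(node, [])):   # keep left-to-right order
            stack.append(nxt)
    return order

g = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(bfs(g, 'a'))   # ['a', 'b', 'c', 'd']
print(dfs(g, 'a'))   # ['a', 'b', 'd', 'c']
```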
{ "source": [ "https://cs.stackexchange.com/questions/298", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/101/" ] }
332
You are given an array of $2n$ elements $$a_1, a_2, \dots, a_n, b_1, b_2, \dots b_n$$ The task is to interleave the array, using an in-place algorithm, such that the resulting array looks like $$b_1, a_1, b_2, a_2, \dots , b_n, a_n$$ If the in-place requirement weren't there, we could easily create a new array and copy elements, giving an $\mathcal{O}(n)$ time algorithm. With the in-place requirement, a divide and conquer algorithm bumps the complexity up to $\Theta(n \log n)$. So the question is: Is there an $\mathcal{O}(n)$ time algorithm which is also in-place? (Note: You can assume the uniform-cost WORD RAM model, so in-place translates to an $\mathcal{O}(1)$ space restriction.)
Here is the answer, which elaborates upon the algorithm from the paper linked by Joe: http://arxiv.org/abs/0805.1598 First let us consider a $\Theta(n \log n)$ algorithm which uses divide and conquer. 1) Divide and Conquer We are given $$a_1, a_2, \dots, a_n, b_1, b_2, \dots, b_n$$ Now to use divide and conquer, for some $m = \Theta(n)$, we try to get the array $$ [a_1, a_2, \dots , a_m, b_1, b_2, \dots, b_m], [a_{m+1}, \dots, a_n, b_{m+1}, \dots b_n]$$ and recurse. Notice that the portion $$ b_1 , b_2, \dots b_m, a_{m+1}, \dots a_n$$ is a cyclic shift of $$ a_{m+1}, \dots a_n, b_1 , \dots b_m$$ by $m$ places. This is a classic and can be done in-place by three reversals in $\mathcal{O}(n)$ time. Thus the divide and conquer gives you a $\Theta(n \log n)$ algorithm, with a recursion similar to $T(n) = 2T(n/2) + \Theta(n)$. 2) Permutation Cycles Now, another approach to the problem is to consider the permutation as a set of disjoint cycles. The permutation is given by (assuming we start at $1$) $$ j \mapsto 2j \mod 2n+1$$ If we somehow knew exactly what the cycles were, using constant extra space, we could realize the permutation by picking an element $A$, determining where that element goes (using the above formula), putting the element at the target location into temporary space, putting the element $A$ into that target location and continuing along the cycle. Once we are done with one cycle we move on to an element of the next cycle and follow that cycle, and so on. This would give us an $\mathcal{O}(n)$ time algorithm, but it assumes that we "somehow knew what the exact cycles were", and trying to do this book-keeping within the $\mathcal{O}(1)$ space limitation is what makes this problem hard. This is where the paper uses number theory. It can be shown that, in the case when $2n + 1 = 3^k$, the elements at positions $1$, $3, 3^2, \dots, 3^{k-1}$ are in different cycles and every cycle contains an element at a position $3^m, m \ge 0$. This uses the fact that $2$ is a generator of $(\mathbb{Z}/3^k)^*$. Thus when $2n+1 = 3^k$, the follow-the-cycle approach gives us an $\mathcal{O}(n)$ time algorithm, as for each cycle we know exactly where to begin: the powers of $3$ (including $1$), and those can be computed in $\mathcal{O}(1)$ space. 3) Final Algorithm Now we combine the above two: Divide and Conquer + Permutation Cycles. We do a divide and conquer, but pick $m$ so that $2m+1$ is a power of $3$ and $m = \Theta(n)$. So instead of recursing on both "halves", we recurse on only one and do $\Theta(n)$ extra work. This gives us the recurrence $T(n) = T(cn) + \Theta(n)$ (for some $0 \lt c \lt 1$) and thus gives us an $\mathcal{O}(n)$ time, $\mathcal{O}(1)$ space algorithm!
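Here is a runnable Python sketch of the combined algorithm described above (my own transcription of the idea, kept simple rather than optimized): a three-reversal rotation brings $b_1,\dots,b_m$ next to $a_1,\dots,a_m$, the cycle-leader step shuffles that $2m$-element block using the leaders $1, 3, 9, \dots$, and the loop then continues on the remaining suffix.

```python
def interleave_in_place(A):
    """In-place inshuffle: [a1..an, b1..bn] -> [b1, a1, b2, a2, ..., bn, an]
    in O(n) time and O(1) extra space, following the divide-and-conquer +
    cycle-leader scheme described above (cycle leaders at powers of 3)."""

    def reverse(i, j):                 # reverse A[i:j]
        j -= 1
        while i < j:
            A[i], A[j] = A[j], A[i]
            i += 1
            j -= 1

    def rotate_left(i, j, d):          # rotate A[i:j] left by d, via 3 reversals
        reverse(i, i + d)
        reverse(i + d, j)
        reverse(i, j)

    def cycle_leader(start, m):        # inshuffle A[start:start+2m], 2m+1 = 3^k
        mod = 2 * m + 1
        leader = 1
        while leader < mod:            # leaders are 1, 3, 9, ..., one per cycle
            i, tmp = leader, A[start + leader - 1]
            while True:
                j = (2 * i) % mod      # element currently at i belongs at j
                A[start + j - 1], tmp = tmp, A[start + j - 1]
                i = j
                if i == leader:
                    break
            leader *= 3

    n = len(A) // 2
    start = 0
    while n > 0:
        p3 = 1                         # largest m <= n with 2m+1 a power of 3
        while p3 * 3 <= 2 * n + 1:
            p3 *= 3
        m = (p3 - 1) // 2
        # rotate the middle block so that b1..bm comes right after a1..am
        rotate_left(start + m, start + n + m, n - m)
        cycle_leader(start, m)         # shuffle the leading 2m elements
        start += 2 * m                 # the rest is a smaller instance
        n -= m

A = ['a1', 'a2', 'a3', 'a4', 'a5', 'b1', 'b2', 'b3', 'b4', 'b5']
interleave_in_place(A)
print(A)   # ['b1', 'a1', 'b2', 'a2', 'b3', 'a3', 'b4', 'a4', 'b5', 'a5']
```

Because the chosen $m$ is at least roughly a third of the current $n$, the $\Theta(n)$ work per iteration forms a geometric series, giving $\mathcal{O}(n)$ total time with $\mathcal{O}(1)$ extra space.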
{ "source": [ "https://cs.stackexchange.com/questions/332", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/139/" ] }
342
Intuitively, "balanced trees" should be trees where left and right sub-trees at each node must have "approximately the same" number of nodes. Of course, when we talk about red-black trees*(see definition at the end) being balanced, we actually mean that they are height balanced and in that sense, they are balanced. Suppose we try to formalize the above intuition as follows: Definition: A Binary Tree is called $\mu$ -balanced, with $0 \le \mu \leq \frac{1}{2}$ , if for every node $N$ , the inequality $$ \mu \le \frac{|N_L| + 1}{|N| + 1} \le 1 - \mu$$ holds and for every $\mu' \gt \mu$ , there is some node for which the above statement fails. $|N_L|$ is the number of nodes in the left sub-tree of $N$ and $|N|$ is the number of nodes under the tree with $N$ as root (including the root). I believe, these are called weight-balanced trees in some of the literature on this topic. One can show that if a binary tree with $n$ nodes is $\mu$ -balanced (for a constant $\mu \gt 0$ ), then the height of the tree is $\mathcal{O}(\log n)$ , thus maintaining the nice search properties. So the question is: Is there some $\mu \gt 0$ such that every big enough red-black tree is $\mu$ -balanced? The definition of Red-Black trees we use (from Introduction to Algorithms by Cormen et al): A binary search tree, where each node is coloured either red or black and The root is black All NULL nodes are black If a node is red, then both its children are black. For each node, all paths from that node to descendant NULL nodes have the same number of black nodes. Note: we don't count the NULL nodes in the definition of $\mu$ -balanced above. (Though I believe it does not matter if we do).
Claim : Red-black trees can be arbitrarily un- $\mu$ -balanced. Proof Idea : Fill the right subtree with as many nodes as possible and the left with as few nodes as possible for a given number $k$ of black nodes on every root-leaf path. Proof : Define a sequence $T_k$ of red-black trees so that $T_k$ has $k$ black nodes on every path from the root to any (virtual) leaf. Define $T_k = B(L_k, R_k)$ with $R_k$ the complete tree of height $2k - 1$ with the first, third, ... level colored red, the others black, and $L_k$ the complete tree of height $k-1$ with all nodes colored black. Clearly, all $T_k$ are red-black trees. For example, these are $T_1$ , $T_2$ and $T_3$ , respectively: [ source ] [ source ] [ source ] Now let us verify the visual impression of the right side being huge compared to the left. I will not count virtual leaves; they do not impact the result. The left subtree of $T_k$ is complete and always has height $k-1$ and therefore contains $2^k - 1$ nodes. The right subtree, on the other hand, is complete and has height $2k - 1$ and thusly contains $2^{2k}-1$ nodes. Now the $\mu$ -balance value for the root is $\qquad \displaystyle \frac{2^k}{2^k + 2^{2k}} = \frac{1}{1 + 2^k} \underset{k\to\infty}{\to} 0$ which proves that there is no $\mu > 0$ as requested.
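The construction can also be checked mechanically. The short Python script below (my addition, not part of the original answer) builds $T_k$ as described, verifies the red-black conditions (black root, no red node with a red child, equal black counts on all root-to-NULL paths), and prints the root's balance value $(|N_L|+1)/(|N|+1)$, which indeed tends to $0$.

```python
from collections import namedtuple

Node = namedtuple('Node', 'color left right')   # NULL leaves are None

def complete(height, colors):
    """Complete tree with `height` levels; colors[d] is used on depth d,
    counted from this subtree's root."""
    if height == 0:
        return None
    return Node(colors[0], complete(height - 1, colors[1:]),
                           complete(height - 1, colors[1:]))

def T(k):
    """T_k from the answer: black root, all-black complete left subtree of
    height k-1, red/black alternating complete right subtree of height 2k-1."""
    left = complete(k - 1, ['black'] * k)
    right = complete(2 * k - 1, ['red', 'black'] * (2 * k))
    return Node('black', left, right)

def size(t):
    return 0 if t is None else 1 + size(t.left) + size(t.right)

def black_height(t):
    """Black nodes on every path to a NULL leaf; -1 if paths disagree."""
    if t is None:
        return 1                      # NULL leaves count as black
    hl, hr = black_height(t.left), black_height(t.right)
    if hl != hr or hl < 0:
        return -1
    return hl + (1 if t.color == 'black' else 0)

def no_red_red(t):
    if t is None:
        return True
    for child in (t.left, t.right):
        if t.color == 'red' and child is not None and child.color == 'red':
            return False
    return no_red_red(t.left) and no_red_red(t.right)

for k in range(1, 8):
    t = T(k)
    assert t.color == 'black' and no_red_red(t) and black_height(t) > 0
    mu = (size(t.left) + 1) / (size(t) + 1)
    print(k, size(t.left), size(t.right), round(mu, 6))   # mu -> 0
```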
{ "source": [ "https://cs.stackexchange.com/questions/342", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/139/" ] }
356
Most of today's encryption, such as RSA, relies on integer factorization, which is not believed to be an NP-hard problem, but it belongs to BQP, which makes it vulnerable to quantum computers. I wonder why there has not been an encryption algorithm which is based on a known NP-hard problem. It sounds (at least in theory) like it would make a better encryption algorithm than one which is not proven to be NP-hard.
Worst-case Hardness of NP-complete problems is not sufficient for cryptography. Even if NP-complete problems are hard in the worst-case ($P \ne NP$), they still could be efficiently solvable in the average-case. Cryptography assumes the existence of average-case intractable problems in NP. Also, proving the existence of hard-on-average problems in NP using the $P \ne NP$ assumption is a major open problem. An excellent read is the classic by Russell Impagliazzo, A Personal View of Average-Case Complexity , 1995. An excellent survey is Average-Case Complexity by Bogdanov and Trevisan, Foundations and Trends in Theoretical Computer Science Vol. 2, No 1 (2006) 1–106
{ "source": [ "https://cs.stackexchange.com/questions/356", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/5/" ] }
358
You need to check that your friend, Bob, has your correct phone number, but you cannot ask him directly. You must write the question on a card and give it to Eve, who will take the card to Bob and return the answer to you. What must you write on the card, besides the question, to ensure Bob can encode the message so that Eve cannot read your phone number? Note: This question is on a list of "google interview questions". As a result, there are tons of versions of this question on the web, and many of them don't have clear, or even correct, answers. Note 2: The snarky answer to this question is that Bob should write "call me". Yes, that's very clever, 'outside the box' and everything, but doesn't use any techniques from that field of CS where we call our hero "Bob" and his eavesdropping adversary "Eve". Update: Bonus points for an algorithm that you and Bob could both reasonably complete by hand. Update 2: Note that Bob doesn't have to send you any arbitrary message, but only confirm that he has your correct phone number without Eve being able to decode it, which may or may not lead to simpler solutions.
First we must assume that Eve is only passive. By this, I mean that she truthfully sends the card to Bob, and whatever she brings back to Alice is indeed Bob's response. If Eve can alter the data in either or both directions (and her action remains undetected) then anything goes. (To honour long-standing traditions, the two honest parties involved in the conversation are called Alice and Bob. In your text, you said "you". My real name is not "Alice", but I will respond just as if you wrote that Alice wants to verify Bob's phone number.) The simple (but weak) answer is to use a hash function. Alice writes on the card: "return to me the SHA-256 hash of your phone number". SHA-256 is a cryptographic hash function which is believed to be secure, as far as hash functions go. Computing it by hand would be tedious but still doable (that's about 2500 32-bit operations, where each operation is an addition, a word shift or rotate, or a bitwise combination of bits; Bob should be able to do it in a day or so). Now what's weak about that? SHA-256, being a cryptographic hash function, is resistant to "preimages": this means that given a hash output, it is very hard to recover a corresponding input (that's the problem that Eve faces). However, "very hard" means "the easiest method is brute force: trying possible inputs until a match is found". Trouble is that brute force is easy here: there are not so many possible phone numbers (in North America, that's 10 digits, i.e. a mere 10 billion). Bob wants to do things by hand, but we cannot assume that Eve is so limited. A basic PC can try a few million SHA-256 hashes per second so Eve will be done in less than one hour (less than 5 minutes if she uses a GPU). This is a generic issue: if Bob is deterministic (i.e. for a given message from Alice, he would always return the same response), Eve can simulate him. Namely, Eve knows everything about Bob except the phone number, so she virtually runs 10 billion Bobs, who differ only by their assumed phone number; and she waits for one of the virtual Bobs to return whatever the real Bob actually returned. The flaw affects many kinds of "smart" solutions involving random nonces and symmetric encryption and whatnot. It is a strong flaw, and its root lies in the huge difference in computing power between Eve and Bob (now, if Bob also had a computer as big as Eve's, then he could use a slow hash function through the use of many iterations; that's more or less what password hashing is about, with the phone number in lieu of the password; see bcrypt and also this answer ). Hence, a non-weak solution must involve some randomness on Bob's part: Bob must flip a coin or throw dice repeatedly, and inject the values in his computations. Moreover, Eve must not be able to unravel what Bob did, but Alice must be able to, so some information is confidentially conveyed from Bob to Alice. This is called asymmetric encryption or, at least, asymmetric key agreement. The simplest algorithm of that class to compute, but still reasonably secure, is then RSA with the PKCS#1 v1.5 padding. RSA can use $e = 3$ as public exponent. So the protocol goes thus: Alice generates a big integer $n = pq$ where $p$ and $q$ are similarly-sized prime integers, such that the size of $n$ is sufficient to ensure security (i.e. at least 1024 bits, as of 2012). Also, Alice must arrange for $p-1$ and $q-1$ not to be multiples of 3. Alice writes $n$ on the card.
Bob first pads his phone number into a byte sequence as long as $n$, as described by PKCS#1 (this means: 00 02 xx xx ... xx 00 bb bb .. bb, where 'bb' are the ten bytes which encode the phone number, and the 'xx' are random non-zero byte values, for a total length of 128 bytes if $n$ is a 1024-bit integer). Bob interprets his byte sequence as a big integer value $m$ (big-endian encoding) and computes $m^3 \mathrm{\ mod\ } n$ (so that's a couple of multiplications with very big integers, then a division, the result being the remainder of the division). That's still doable by hand (but, there again, it will probably take the better part of a day). The result is what Bob sends back to Alice. Alice uses her knowledge of $p$ and $q$ to recover $m$ from the $m^3 \mathrm{\ mod\ } n$ sent by Bob. The Wikipedia page on RSA has some reasonably clear explanations on that process. Once Alice has $m$, she can remove the padding (the 'xx' are non-zero, so the first 'bb' byte can be unambiguously located) and she then has the phone number, which she can compare with the one she had. Alice's computation will require a computer (what a computer does is always elementary and doable by hand, but a computer is devilishly fast at it, so the "doable" might take too much time to do in practice; RSA decryption by hand would take many weeks). (Actually we could have faster by-hand computation by using McEliece encryption , but then the public key -- what Alice writes on the card -- would be huge, and a card would simply not do; Eve would have to transport a full book of digits.)
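To see the whole card protocol end to end, here is a hedged Python sketch of my own (not from the answer). The toy key size, the prime-picking helper and the 5-byte phone encoding are all made-up illustration choices; a real deployment needs a modulus of at least 1024 bits, exactly as stated above.

```python
import secrets

def is_prime(m: int) -> bool:
    if m < 2:
        return False
    f = 2
    while f * f <= m:
        if m % f == 0:
            return False
        f += 1
    return True

def pick_prime(start: int) -> int:
    """Smallest prime >= start with p % 3 == 2, so that e = 3 is a usable exponent."""
    p = start
    while not (is_prime(p) and p % 3 == 2):
        p += 1
    return p

# Alice's key generation (toy sizes only; the answer asks for a >= 1024-bit n).
p = pick_prime(2**34)
q = pick_prime(2**34 + 10**6)
n, e = p * q, 3
d = pow(e, -1, (p - 1) * (q - 1))       # Alice's private exponent

def bob_encode(phone: int) -> int:
    """Bob pads his number with random non-zero bytes, then cubes the value mod n."""
    size = (n.bit_length() - 1) // 8     # keep the padded value below n
    body = phone.to_bytes(5, "big")      # a 10-digit number fits in 5 bytes
    pad = bytes(secrets.choice(range(1, 256)) for _ in range(size - len(body) - 1))
    return pow(int.from_bytes(pad + b"\x00" + body, "big"), e, n)

def alice_decode(c: int) -> int:
    """Alice inverts the cube with d and strips everything up to the 0x00 marker."""
    blob = pow(c, d, n).to_bytes((n.bit_length() + 7) // 8, "big")
    return int.from_bytes(blob[blob.index(b"\x00", 1) + 1:], "big")

assert alice_decode(bob_encode(5551234567)) == 5551234567
```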
{ "source": [ "https://cs.stackexchange.com/questions/358", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/71/" ] }
367
We were given the following exercise. Let $\qquad \displaystyle f(n) = \begin{cases} 1 & 0^n \text{ occurs in the decimal representation of } \pi \\ 0 & \text{else}\end{cases}$ Prove that $f$ is computable. How is this possible? As far as I know, we do not know whether $\pi$ contains every sequence of digits (or which ones it does), and an algorithm can certainly not decide that some sequence is not occurring. Therefore I think $f$ is not computable, because the underlying problem is only semi-decidable.
There are only two possibilities to consider. For every positive integer $n$ , the string $0^n$ appears in the decimal representation of $\pi$ . In this case, the algorithm that always returns 1 is always correct. There is a largest integer $N$ such that $0^N$ appears in the decimal representation of $\pi$ . In this case the following algorithm (with the value $N$ hard-coded) is always correct: Zeros-in-pi(n): if (n > N) then return 0 else return 1 We have no idea which of these possibilities is correct, or what value of $N$ is the right one in the second case. Nevertheless, one of these algorithms is guaranteed to be correct. Thus, there is an algorithm to decide whether a string of $n$ zeros appears in $\pi$ ; the problem is decidable. Note the subtle difference with the following proof sketch proposed by gallais : Take a random Turing machine and a random input. Either the computation will go on for ever or it will stop at some point and there is a (constant) computable function describing each one of these behaviors. ??? Profit! Alex ten Brink explains: watch out what the Halting theorem states: it says that there exists no single program that can decide whether a given program halts. You can easily make two programs such that either one computes whether a given program halts: the first always says 'it halts', the second 'it doesn't halt' - one program is always right, we just can't compute which one of them is! sepp2k adds: In the case of Alex's example neither of the algorithms will return the right result for all inputs. In the case of this question one of them will. You can claim that the problem is decidable because you know that there is an algorithm that produces the right result for all inputs. It doesn't matter whether you know which one that algorithm is.
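To restate the case analysis in code: both candidate deciders are trivial to write down, and the only non-constructive part is that we do not know which of them (or which value of $N$) is the correct one. The sketch below is my own illustration, not part of the answer.

```python
def always_one(n: int) -> int:
    """Correct decider if 0^n occurs in pi for every n."""
    return 1

def threshold_decider(N: int):
    """Family of deciders, one of which is correct if a largest such N exists."""
    def f(n: int) -> int:
        return 1 if n <= N else 0
    return f

# One of always_one, threshold_decider(0), threshold_decider(1), ... is the genuine f;
# each runs in constant time, so f is computable either way.
candidates = [always_one] + [threshold_decider(N) for N in range(5)]
```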
{ "source": [ "https://cs.stackexchange.com/questions/367", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/98/" ] }
368
(I'm a student with some mathematical background and I'd like to know how to count the number of binary trees of a specific kind.) Looking at the Wikipedia page for Binary Trees , I've noticed the assertion that the number of rooted binary trees of size $n$ is the Catalan number: $$C_n = \dfrac{1}{n+1}{2n \choose n}$$ But I don't understand how I could come up with such a result by myself. Is there a method to find this result? Now, what if the order of sub-trees (which is left, which is right) is not considered? For example, from my point of view, I consider that these two trees are the same: /\ /\ /\ /\ Would it be possible to apply a similar method to count how many of these objects have exactly $n$ nodes?
For counting many types of combinatorial objects, like trees in this case, there are powerful mathematical tools (the symbolic method) that allow you to mechanically derive such counts from a description of how the combinatorial objects are constructed. This involves generating functions. An excellent reference is Analytic Combinatorics by the late Philippe Flajolet and Robert Sedgewick. It is available from the link above. The late Herbert Wilf’s book generatingfunctionology is another free source. And of course Concrete Mathematics by GKP is a treasure trove. For binary trees it goes like this: First you need a clear definition of the tree. A binary tree is a rooted tree in which every non-leaf node has degree 2 exactly. Next we have to agree what we want to call the size of a tree. On the left all nodes are equal. In the middle we distinguish the leaves and non-leaves. On the right we have a pruned binary tree where the leaves have been removed. Notice that it has unary branches of two types (left and right)! Now we have to derive a description of how these combinatorial objects are built. In the case of binary trees a recursive decomposition is possible. Let $\mathcal{A}$ be the set of all binary trees of the first type; then, symbolically, we have a decomposition that reads as: “An object of the class of binary trees is either a node or a node followed by two binary trees.” This can be written as an equation of sets: $$\mathcal{A}=\{\bullet\}\cup\bigl(\{\bullet\}\times\mathcal{A}\times\mathcal{A}\bigr)$$ By introducing the generating function $A(z)$ that enumerates this class of combinatorial objects we can translate the set equation into an equation involving the generating function. $$A(z)=z+zA^2(z)$$ Our choice of treating all nodes equally and taking the number of nodes in the tree as the notion of its size is expressed by “marking” the nodes with the variable $z$. We can now solve the quadratic equation $zA^2(z)-A(z)+z=0$ for $A(z)$ and get, as usual, two solutions, the explicit closed form of the generating function: $$A(z)=\frac{1\pm\sqrt{1-4z^2}}{2z}$$ Now we simply need Newton’s (generalized) Binomial Theorem: $$(1+x)^a=\sum_{k=0}^\infty\binom{a}{k}x^k$$ with $a=1/2$ and $x=-4z^2$ to expand the closed form of the generating function back into a power series. We do this because the coefficient at $z^n$ is just the number of combinatorial objects of size $n$, typically written as $[z^n]A(z)$. But here our notion of “the size” of the tree forces us to look for the coefficient at $z^{2n+1}$. After a little bit of juggling with binomials and factorials we get: $$[z^{2n+1}]A(z)=\frac{1}{n+1}\binom{2n}{n}.$$ If we start with the second notion of size, the recursive decomposition gives a different class of combinatorial objects $\mathcal{B}$. It reads: “An object of the class of binary trees is either a leaf or an internal node followed by two binary trees.” We can use the same approach and turn $\mathcal{B}=\{\square\}\cup\bigl(\{\bullet\}\times\mathcal{B}\times\mathcal{B}\bigr)$ into $B(z)=1+zB^2(z)$. Only this time the variable $z$ marks only the internal nodes, not the leaves, because the definition of “the size” is different here. We get a different generating function as well: $$B(z)=\frac{1-\sqrt{1-4z}}{2z}$$ Extracting the coefficient yields $$[z^n]B(z)=\frac{1}{n+1}\binom{2n}{n}.$$ Classes $\mathcal{A}$ and $\mathcal{B}$ agree on the counts, because a binary tree with $n$ internal nodes has $n+1$ leaves, thus $2n+1$ nodes in total.
In the last case we have to work a little harder: the recursive decomposition describes the non-empty pruned binary trees. We extend this to $$\begin{align}\mathcal{C}&=\{\bullet\}\cup\bigl(\{\bullet\}\times\mathcal{C}\bigr)\cup\bigl(\{\bullet\}\times\mathcal{C}\bigr)\cup\bigl(\{\bullet\}\times\mathcal{C}\times\mathcal{C}\bigr)\\\mathcal{D}&=\{\epsilon\}\cup\bigl(\{\bullet\}\times\mathcal{D}\times\mathcal{D}\bigr)\end{align}$$ and rewrite it with generating functions $$\begin{align}C(z)&=z+2zC(z)+zC^2(z)\\D(z)&=1+zD^2(z)\end{align}$$ solve the quadratic equations $$\begin{align}C(z)&=\frac{1-2z-\sqrt{1-4z}}{2z}\\D(z)&=\frac{1-\sqrt{1-4z}}{2z}\end{align}$$ and get yet again $$[z^n]C(z)=\frac{1}{n+1}\binom{2n}{n}\quad n\ge1 \qquad [z^n]D(z)=\frac{1}{n+1}\binom{2n}{n} \quad n\ge0$$ Note that the Catalan generating function is $$E(z)=\frac{1-\sqrt{1-4z}}{2}$$ it enumerates the class of general trees . That is the trees with no restriction on the node degree. $$\mathcal{E}=\{\bullet\}\times\mathrm{SEQ}(\mathcal{E})$$ It reads as: “An object of the class of general trees is a node followed by a possibly empty sequence of general trees.” $$E(z)=\frac{z}{1-E(z)}$$ With the Lagrange-Bürmann Inversion Formula we get $$[z^n]E(z)=\frac{1}{n}\binom{2n-2}{n-1}$$ So there are as many general trees with $n+1$ nodes as there are binary trees with $n$ internal nodes. No wonder there is a bijection between the general and binary trees. The bijection is known as the rotation correspondence (explained at the end of the linked article), which allows us to store every general tree as a binary tree. Note that if we do not distinguish the left and right sibling in class $\mathcal{C}$ we get yet another class of trees $\mathcal{T}$: the unary-binary trees. $$\mathcal{T}=\{\bullet\}\times\mathrm{SEQ}_{\le2}(\mathcal{T})$$ They have a generating function too $$T(z)=\frac{1-z-\sqrt{1-2z-3z^2}}{2z}$$ however their coefficients are different. You get the Motzkin numbers $$[z^n]T(z)=\frac{1}{n}\sum_k\binom{n}{k}\binom{n-k}{k-1}.$$ Oh and if you don’t like generating functions there are plenty of other proofs too. See here ; there is one where you could use the encoding of binary trees as Dyck words and derive a recurrence from their recursive definition. Then solving that recurrence gives the answer too. However the symbolic method saves you from coming up with the recurrence in the first place, as it works directly with the blueprints of the combinatorial objects.
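As a quick sanity check on these counts, here is a small Python sketch (my addition, not part of the answer) that counts binary trees by the standard Catalan recurrence and compares the result with the closed form derived above.

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def binary_trees(n: int) -> int:
    """Number of binary trees with n internal nodes: split the remaining n-1
    internal nodes between the left and right subtree of the root."""
    if n == 0:
        return 1
    return sum(binary_trees(k) * binary_trees(n - 1 - k) for k in range(n))

for n in range(10):
    assert binary_trees(n) == comb(2 * n, n) // (n + 1)  # the Catalan number C_n
```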
{ "source": [ "https://cs.stackexchange.com/questions/368", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/68/" ] }
407
Given an instance of SAT, I would like to be able to estimate how difficult it will be to solve the instance. One way is to run existing solvers, but that kind of defeats the purpose of estimating difficulty. A second way might be looking at the ratio of clauses to variables, as is done for phase transitions in random-SAT, but I am sure better methods exist. Given an instance of SAT, are there some fast heuristics to measure the difficulty? The only condition is that these heuristics be faster than actually running existing SAT solvers on the instance. Related question Which SAT problems are easy? on cstheory.SE. This question asks about tractable sets of instances. This is a similar question, but not exactly the same. I am really interested in a heuristic that, given a single instance, makes some sort of semi-intelligent guess as to whether the instance will be a hard one to solve.
In general, this is a very relevant and interesting research question. "One way is to run existing solvers..." and what would this even tell us exactly? We could see empirically that an instance seems hard for a specific solver or a specific algorithm/heuristic, but what does it really tell about the hardness of the instance? One way that has been pursued is the identification of various structural properties of instances that lead to efficient algorithms. These properties should ideally be "easily" identifiable. An example is the topology of the underlying constraint graph, measured using various graph width parameters. For example, it is known that an instance is solvable in polynomial time if the treewidth of the underlying constraint graph is bounded by a constant. Another approach has focused on the role of hidden structure of instances. One example is the backdoor set , meaning the set of variables such that when they are instantiated, the remaining problem simplifies to a tractable class. For example, Williams et al., 2003 [1] show that even when taking into account the cost of searching for backdoor variables, one can still obtain an overall computational advantage by focusing on a backdoor set, provided the set is sufficiently small. Furthermore, Dilkina et al., 2007 [2] note that a solver called Satz-Rand is remarkably good at finding small strong backdoors on a range of experimental domains. More recently, Ansotegui et al., 2008 [3] propose the use of the tree-like space complexity as a measure for DPLL-based solvers. They prove that constant-bounded space also implies the existence of a polynomial time decision algorithm with space being the degree of the polynomial (Theorem 6 in the paper). Moreover, they show the space is smaller than the size of cycle-cutsets. In fact, under certain assumptions, the space is also smaller than the size of backdoors. They also formalize what I think you are after, that is: Find a measure $\psi$, and an algorithm that given a formula $\Gamma$ decides satisfiability in time $O(n^{\psi ( \Gamma )})$. The smaller the measure is, the better it characterizes the hardness of a formula . [1] Williams, Ryan, Carla P. Gomes, and Bart Selman. "Backdoors to typical case complexity." International Joint Conference on Artificial Intelligence. Vol. 18, 2003. [2] Dilkina, Bistra, Carla Gomes, and Ashish Sabharwal. "Tradeoffs in the Complexity of Backdoor Detection." Principles and Practice of Constraint Programming (CP 2007), pp. 256-270, 2007. [3] Ansótegui, Carlos, Maria Luisa Bonet, Jordi Levy, and Felip Manya. "Measuring the Hardness of SAT Instances." In Proceedings of the 23rd National Conference on Artificial Intelligence (AAAI’08), pp. 222-228, 2008.
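As a minimal illustration of how cheap such structural features are (my own sketch, not from the answer, and assuming one clause per line in the DIMACS file): the clause-to-variable ratio mentioned in the question, along with clause lengths and variable occurrence counts, can be collected in a single linear pass.

```python
def cnf_features(path: str) -> dict:
    """Collect a few cheap structural statistics of a DIMACS CNF instance."""
    n_vars = n_clauses = 0
    clause_lengths, occurrences = [], {}
    with open(path) as f:
        for line in f:
            if line.startswith(("c", "p", "%")) or not line.split():
                continue  # skip comments, the header and blank lines
            lits = [int(x) for x in line.split() if int(x) != 0]
            clause_lengths.append(len(lits))
            n_clauses += 1
            for lit in lits:
                v = abs(lit)
                n_vars = max(n_vars, v)
                occurrences[v] = occurrences.get(v, 0) + 1
    return {
        # around 4.26 is the hard phase-transition region for random 3-SAT
        "clause_var_ratio": n_clauses / max(n_vars, 1),
        "avg_clause_len": sum(clause_lengths) / max(n_clauses, 1),
        "max_var_occurrence": max(occurrences.values(), default=0),
    }
```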
{ "source": [ "https://cs.stackexchange.com/questions/407", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/55/" ] }
419
I've always thought vaguely that the answer to the above question was affirmative along the following lines. Gödel's incompleteness theorem and the undecidability of the halting problem are both negative results about decidability established by diagonal arguments (and in the 1930's), so they must somehow be two ways to view the same matters. And I thought that Turing used a universal Turing machine to show that the halting problem is unsolvable. (See also this math.SE question.) But now that (teaching a course in computability) I look closer into these matters, I am rather bewildered by what I find. So I would like some help with straightening out my thoughts. I realise that on one hand Gödel's diagonal argument is very subtle: it needs a lot of work to construct an arithmetic statement that can be interpreted as saying something about its own derivability. On the other hand the proof of the undecidability of the halting problem I found here is extremely simple, and doesn't even explicitly mention Turing machines, let alone the existence of universal Turing machines. A practical question about universal Turing machines is whether it is of any importance that the alphabet of a universal Turing machine be the same as that of the Turing machines that it simulates. I thought that would be necessary in order to concoct a proper diagonal argument (having the machine simulate itself), but I haven't found any attention to this question in the bewildering collection of descriptions of universal machines that I found on the net. If not for the halting problem, are universal Turing machines useful in any diagonal argument? Finally I am confused by this further section of the same WP article, which says that a weaker form of Gödel's incompleteness follows from the halting problem: "a complete, consistent and sound axiomatisation of all statements about natural numbers is unachievable" where "sound" is supposed to be the weakening. I know a theory is consistent if one cannot derive a contradiction, and a complete theory about natural numbers would seem to mean that all true statements about natural numbers can be derived in it; I know Gödel says such a theory does not exist, but I fail to see how such a hypothetical beast could possibly fail to be sound, i.e., also derive statements which are false for the natural numbers: the negation of such a statement would be true, and therefore by completeness also derivable, which would contradict consistency. I would appreciate any clarification on one of these points.
I recommend checking Scott Aaronson's blog post on a proof of the Incompleteness Theorem via Turing machines and Rosser's Theorem. His proof of the incompleteness theorem is extremely simple and easy to follow.
{ "source": [ "https://cs.stackexchange.com/questions/419", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/136/" ] }
451
I'm pretty fluent in C/C++, and can make my way around the various scripting languages (awk/sed/perl). I've started using Python a lot more because it combines some of the nifty aspects of C++ with the scripting capabilities of awk/sed/perl. But why are there so many different programming languages? I'm guessing all these languages can do the same things, so why not just stick to one language and use that for programming computers? In particular, is there any reason I should know a functional language as a computer programmer? Some related reading: Why new programming languages succeed -- or fail? is there still research to be done in programming languages?
Programming languages evolve and are improved with time (innovation). People take ideas from different languages and combine them into new languages. Some features are improved (inheritance mechanisms, type systems), some are added (garbage collection, exception handling), some are removed ( goto statements, low-level pointer manipulations). Programmers start using a language in a particular way that is not supported by any language constructs. Language designers identify such usage patterns and introduce new abstractions/language constructs to support such usage patterns. There were no procedures in assembly language. No classes in C. No exception handling in (early) C++. No safe way of loading new modules in early languages (easy in Java). No built-in threads (easy-peasy in Java). Researchers think about alternative ways of expressing computations. This led to Lisp and the functional language branch of the language tree, Prolog and the logic programming branch, Erlang and other actor-based programming models, among others. Over time, language designers/researchers come to better understand all of these constructs, and how they interact, and design languages to include many of the popular constructs, all designed to work seamlessly together. This results in wonderful languages such as Scala, which has objects and classes (expressed using traits instead of single or multiple inheritance), functional programming features, algebraic data types integrated nicely with the class system and pattern matching, and actor-based concurrency. Researchers who believe in static type systems strive to improve their expressiveness, allowing things such as typed generic classes in Java (and all of the wonderful things in Haskell), so that a programmer gets more guarantees before running a program that things are not going to go wrong. Static type systems often impose a large burden on the programmer (typing in the types), so research has gone into alleviating that burden. Languages such as Haskell and ML allow the programmer to omit all of the type annotations (unless they are doing something tricky). Scala allows the programmer to omit the types within the body of methods, to simplify the programmer's job. The compiler infers all the missing types and informs the programmer of possible errors. Finally, some languages are designed to support particular domains. Examples include SQL, R, Makefiles, the Graphviz input language, Mathematica, LaTeX. Integrating these languages' functionality into general-purpose languages (directly) would be quite cumbersome. These languages are based on abstractions specific to their particular domain. Without evolution in programming language design, we'd all still be using assembly language or C++. As for knowing a functional programming language : functional languages allow you to express computations differently, often more concisely than using other programming languages. Consider the difference between C++ and Python and multiply it by 4. More seriously, as already mentioned in another answer, functional programming gives you a different way of thinking about problems. This applies to all other paradigms; some are better suited to some problems, and some are not. This is why multi-paradigm languages are becoming more popular: you can use constructs from a different paradigm if you need to, without changing language, and, more challengingly, you can mix paradigms within one piece of software.
{ "source": [ "https://cs.stackexchange.com/questions/451", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/45/" ] }
473
I read in many places that some problems are difficult to approximate (it is NP-hard to approximate them). But approximation is not a decision problem: the answer is a real number and not Yes or No. Also for each desired approximation factor, there are many answers that are correct and many that are wrong, and this changes with the desired approximation factor! So how can one say that this problem is NP-hard? (inspired by the second bullet in How hard is counting the number of simple paths between two nodes in a directed graph? )
As you said, there is no decision to make, so new complexity classes and new types of reductions are needed to arrive at a suitable definition of NP-hardness for optimization-problems . One way of doing this is to have two new classes NPO and PO that contain optimization problems, and they mimic of course the classes NP and P for decision problems. New reductions are needed as well. Then we can recreate a version of NP-hardness for optimization problems along the lines that were successful for decision problems. But first we have to agree on what an optimization-problem is. Definition: Let $O=(X,L,f,opt)$ be an optimization-problem . $X$ is the set of inputs or instances suitably encoded as strings. $L$ is a function that maps each instance $x\in X$ onto a set of strings, the feasible solutions of instance $x$. It is a set because there can be many solutions to an optimization-problem. Thus we have an objective function $f$ that tells us, for every pair $(x, y)$ with $y\in L(x)$ of instance and solution, its cost or value . $opt$ tells us whether we are maximizing or minimizing. This allows us to define what an optimal solution is: Let $y_{opt}\in L(x)$ be the optimal solution of an instance $x\in X$ of an optimization-problem $O=(X,L,f,opt)$ with $$f(x,y_{opt})=opt\{f(x,y')\mid y'\in L(x)\}.$$ The optimal solution is often denoted by $y^*$. Now we can define the class NPO : Let $NPO$ be the set of all optimization-problems $O=(X,L,f,opt)$ with: $X\in P$ There is a polynomial $p$ with $|y|\le p(|x|)$ for all instances $x\in X$ and all feasible solutions $y\in L(x)$. Furthermore there is a deterministic algorithm that decides in polynomial time whether $y\in L(x)$. $f$ can be evaluated in polynomial time. The intuition behind it is: We can verify efficiently if $x$ is actually a valid instance of our optimization problem. The size of the feasible solutions is bounded polynomially in the size of the inputs, and we can verify efficiently if $y\in L(x)$ is a feasible solution of the instance $x$. The value of a solution $y\in L(x)$ can be determined efficiently. This mirrors how $NP$ is defined, now for PO : Let $PO$ be the set of all problems from $NPO$ that can be solved by a deterministic algorithm in polynomial time. Now we are able to define what we want to call an approximation-algorithm : An approximation-algorithm of an optimization-problem $O=(X,L,f,opt)$ is an algorithm that computes a feasible solution $y\in L(x)$ for an instance $x\in X$. Note that we don’t ask for an optimal solution; we only want a feasible one. Now we have two types of errors: The absolute error of a feasible solution $y\in L(x)$ of an instance $x\in X$ of the optimization-problem $O=(X,L,f,opt)$ is $|f(x,y)-f(x,y^*)|$. We call the absolute error of an approximation-algorithm $A$ for the optimization-problem $O$ bounded by $k$ if the algorithm $A$ computes for every instance $x\in X$ a feasible solution with an absolute error bounded by $k$. Example: According to the Theorem of Vizing the chromatic index of a graph (the number of colours in the edge coloring with the fewest number of colors used) is either $\Delta$ or $\Delta+1$, where $\Delta$ is the maximal node degree. From the proof of the theorem an approximation-algorithm can be devised that computes an edge coloring with $\Delta+1$ colours. Accordingly we have an approximation-algorithm for the $\mathsf{Minimum-EdgeColoring}$-Problem where the absolute error is bounded by $1$.
This example is an exception; small absolute errors are rare, thus we define the relative error $\epsilon_A(x)$ of the approximation-algorithm $A$ on instance $x$ of the optimization-problem $O=(X,L,f,opt)$ with $f(x,y)>0$ for all $x\in X$ and $y\in L(x)$ to be $$\epsilon_A(x):=\begin{cases}0&f(x,A(x))=f(x,y^*)\\\frac{|f(x,A(x))-f(x,y^*)|}{\max\{f(x,A(x)),f(x,y^*)\}}&f(x,A(x))\ne f(x,y^*)\end{cases}$$ where $A(x)=y\in L(x)$ is the feasible solution computed by the approximation-algorithm $A$. We can now define an approximation-algorithm $A$ for the optimization-problem $O=(X,L,f,opt)$ to be a $\delta$-approximation-algorithm for $O$ if the relative error $\epsilon_A(x)$ is bounded by $\delta\ge 0$ for every instance $x\in X$, thus $$\epsilon_A(x)\le \delta\qquad \forall x\in X.$$ The choice of $\max\{f(x,A(x)),f(x,y^*)\}$ in the denominator of the definition of the relative error was selected to make the definition symmetric for maximizing and minimizing. The value of the relative error $\epsilon_A(x)\in[0,1]$. In case of a maximizing problem the value of the solution is never less than $(1-\epsilon_A(x))\cdot f(x,y^*)$ and never larger than $1/(1-\epsilon_A(x))\cdot f(x,y^*)$ for a minimizing problem. Now we can call an optimization-problem $\delta$-approximable if there is a $\delta$-approximation-algorithm $A$ for $O$ that runs in polynomial time. We do not want to look at the error for every instance $x$; we look only at the worst-case. Thus we define $\epsilon_A(n)$, the maximal relative error of the approximation-algorithm $A$ for the optimization-problem $O$, to be $$\epsilon_A(n)=\sup\{\epsilon_A(x)\mid |x|\le n\}.$$ Here $|x|$ is the size of the instance. Example: A maximal matching in a graph can be transformed into a vertex cover $C$ by adding all nodes incident to the matching to the cover. Thus the matching consists of $1/2\cdot |C|$ edges. As each vertex cover, including the optimal one, must contain at least one endpoint of each matched edge (otherwise some edge would not be covered), we have $1/2\cdot |C|\le f(x,y^*)$. It follows that $$\frac{|C|-f(x,y^*)}{|C|}\le\frac{1}{2}$$ Thus the greedy algorithm for a maximal matching is a $1/2$-approximation-algorithm for $\mathsf{Minimal-VertexCover}$. Hence $\mathsf{Minimal-VertexCover}$ is $1/2$-approximable. Unfortunately the relative error is not always the best notion of quality for an approximation as the following example demonstrates: Example: A simple greedy-algorithm can approximate $\mathsf{Minimum-SetCover}$. An analysis shows that $$\frac{|C|}{|C^*|}\le H_n\le 1+\ln(n)$$ and thus $\mathsf{Minimum-SetCover}$ would be $\frac{\ln(n)}{1+\ln(n)}$-approximable. If the relative error is close to $1$ the following definition is advantageous. Let $O=(X,L,f,opt)$ be an optimization-problem with $f(x, y)>0$ for all $x\in X$ and $y\in L(x)$ and $A$ an approximation-algorithm for $O$. The approximation-ratio $r_A(x)$ of feasible solution $A(x)=y\in L(x)$ of the instance $x\in X$ is $$r_A(x)=\begin{cases}1&f(x,A(x))=f(x,y^*)\\\max\left\{ \frac{f(x,A(x))}{f(x, y^*)},\frac{f(x, y^*)}{f(x, A(x))}\right\}&f(x,A(x))\ne f(x,y^*)\end{cases}$$ As before we call an approximation-algorithm $A$ an $r$-approximation-algorithm for the optimization-problem $O$ if the approximation-ratio $r_A(x)$ is bounded by $r\ge1$ for every input $x\in X$. $$r_A(x)\le r$$ And yet again if we have an $r$-approximation-algorithm $A$ for the optimization-problem $O$ then $O$ is called $r$-approximable .
Again we only care about the worst-case and define the maximal approximation-ratio $r_A(n)$ to be $$r_A(n)=\sup\{r_A(x)\mid |x|\le n\}.$$ Accordingly the approximation-ratio is larger than $1$ for suboptimal solutions. Thus better solutions have smaller ratios. For $\mathsf{Minimum-SetCover}$ we can now write that it is $(1+\ln(n))$-approximable. And in case of $\mathsf{Minimum-VertexCover}$ we know from the previous example that it is $2$-approximable. Between relative error and approximation-ratio we have simple relations: $$r_A(x)=\frac{1}{1-\epsilon_A(x)}\qquad \epsilon_A(x)=1-\frac{1}{r_A(x)}.$$ For small deviations from the optimum $\epsilon<1/2$ and $r<2$ the relative error is advantageous over the approximation-ratio, which shows its strengths for large deviations $\epsilon\ge 1/2$ and $r\ge 2$. The two versions of $\alpha$-approximable don’t overlap as one version always has $\alpha\le 1$ and the other $\alpha\ge 1$. The case $\alpha=1$ is not problematic as this is only reached by algorithms that produce an exact solution and consequently need not be treated as approximation-algorithms. Another class that appears often is APX . It is defined as the set of all optimization-problems $O$ from $NPO$ that have an $r$-approximation-algorithm for some constant $r\ge1$ that runs in polynomial time. We are almost through. We would like to copy the successful ideas of reductions and completeness from complexity theory. The observation is that many NP-hard decision variants of optimization-problems are reducible to each other while their optimization variants have different properties regarding their approximability. This is due to the polynomial-time Karp-reduction used in NP-completeness reductions, which does not preserve the objective function. And even if the objective function is preserved the polynomial-time Karp-reduction may change the quality of the solution. What we need is a stronger version of the reduction, which not only maps instances from optimization-problem $O_1$ to instances of $O_2$, but also good solutions from $O_2$ back to good solutions from $O_1$. Hence we define the approximation-preserving-reduction for two optimization-problems $O_1=(X_1,L_1,f_1,opt_1)$ and $O_2=(X_2,L_2,f_2,opt_2)$ from $NPO$. We call $O_1$ $AP$-reducible to $O_2$, written as $O_1\le_{AP} O_2$, if there are two functions $g$ and $h$ and a constant $c$ with: $g(x_1, r)\in X_2$ for all $x_1\in X_1$ and rational $r>1$ $L_2(g(x_1, r))\ne\emptyset$ if $L_1(x_1)\ne\emptyset$ for all $x_1\in X_1$ and rational $r>1$ $h(x_1, y_2, r)\in L_1(x_1)$ for all $x_1\in X_1$ and rational $r>1$ and for all $y_2\in L_2(g(x_1,r))$ For fixed $r$ both functions $g$ and $h$ can be computed by two algorithms in polynomial time in the length of their inputs. If the solution $y_2$ has approximation-ratio at most $r$ for the instance $g(x_1,r)$ of $O_2$, then the solution $h(x_1,y_2,r)$ has approximation-ratio at most $1+c\cdot(r-1)$ for the instance $x_1$ of $O_1$; that is, $$R(g(x_1,r),y_2)\le r \Rightarrow R(x_1,h(x_1,y_2,r))\le 1+c\cdot(r-1) $$ for all $x_1\in X_1$ and rational $r>1$ and for all $y_2\in L_2(g(x_1,r))$, where $R(x,y)$ denotes the approximation-ratio of the solution $y$ for the instance $x$ in the respective problem. In this definition $g$ and $h$ depend on the quality of the solution $r$. Thus for different qualities the functions can differ. This generality is not always needed and often we just work with $g(x_1)$ and $h(x_1, y_2)$. Now that we have a notion of a reduction for optimization-problems we finally can transfer many things we know from complexity theory. For example if we know that $O_2\in APX$ and we show that $O_1\le_{AP} O_2$ it follows that $O_1\in APX$ too.
Finally we can define what we mean by $\mathcal{C}$-hard and $\mathcal{C}$-complete for optimization-problems: Let $O$ be an optimization-problem from $NPO$ and $\mathcal{C}$ a class of optimization-problems from $NPO$; then $O$ is called $\mathcal{C}$-hard with respect to $\le_{AP}$ if for all $O'\in\mathcal{C}$ $O'\le_{AP} O$ holds. Thus once more we have a notion of a hardest problem in the class. Not surprisingly, a $\mathcal{C}$-hard problem is called $\mathcal{C}$-complete with respect to $\le_{AP}$ if it is an element of $\mathcal{C}$. Thus we can now talk about $NPO$-completeness and $APX$-completeness etc. And of course we are now asked to exhibit a first $NPO$-complete problem that takes over the role of $\mathsf{SAT}$. It comes almost naturally that $\mathsf{Weighted-Satisfiability}$ can be shown to be $NPO$-complete. With the help of the PCP-Theorem one can even show that $\mathsf{Maximum-3SAT}$ is $APX$-complete.
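To make the vertex cover example from earlier in this answer concrete, here is a hedged Python sketch of my own (not part of the answer) of the greedy maximal-matching routine; it returns a cover of size at most twice the optimum, i.e. a $2$-approximation in the approximation-ratio sense, or relative error $1/2$.

```python
def matching_vertex_cover(edges):
    """Greedy maximal matching; both endpoints of every matched edge form the cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge still unmatched
            cover.update((u, v))
    return cover

# Example: a star plus an extra edge; the optimal cover is {0, 3} (size 2),
# while the sketch may return a cover of size up to 4.
edges = [(0, 1), (0, 2), (0, 3), (3, 4)]
print(matching_vertex_cover(edges))
```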
{ "source": [ "https://cs.stackexchange.com/questions/473", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/157/" ] }
524
There are a great many data structures that implement the priority-queue interface: Insert: insert an element into the structure Get-Min: return the smallest element in the structure Extract-Min: remove the smallest element in the structure Common data structures implementing this interface are (min) heaps . Usually, the (amortized) running times of these operations are: Insert: $\mathcal{O}(1)$ (sometimes $\mathcal{O}(\log n)$) Get-Min: $\mathcal{O}(1)$ Extract-Min: $\mathcal{O}(\log n)$ The Fibonacci heap achieves these running times for example. Now, my question is the following: Is there a data structure with the following (amortized) running times? Insert: $\mathcal{O}(\log n)$ Get-Min: $\mathcal{O}(1)$ Extract-Min: $\mathcal{O}(1)$ If we can construct such a structure in $\mathcal{O}(n)$ time given sorted input, then we can for instance find line intersections on pre-sorted inputs with $o\left(\frac{n}{\log n}\right)$ intersections strictly faster than if we use the 'usual' priority queues.
Our idea is to use threaded splay trees . In contrast to the Wikipedia article, we will thread the trees so that every node has a pointer next to its successor in the in-order traversal; we also hold a pointer start to the smallest element in the tree. It is easy to see that extracting the smallest element is possible in (worst case) time $\mathcal{O}(1)$: just follow the start pointer, remove the minimum and change the pointer to the minimum's next . The minimum can never have a left child; if it has a right child, we put it in the minimum's place in the tree. We do not perform the splay operation that splay trees usually would do. The result is a search tree that is still reasonably balanced: because we only remove nodes on the left flank, we know that when the number of nodes (in an affected subtree) drops to about half the original number because of deletions, the (sub)tree's height is reduced by one. Insertions are possible in $\mathcal{O}(\log n)$ amortised time; the zig-zag (and what not) operations will here also rebalance the tree nicely. This is a rough sketch at best. Credits go to F. Weinberg, who puzzled over the question with me, and our advisor M. Nebel, who mentioned splay trees, about the only tree variant we had not tried.
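The following Python sketch is my own reading of the idea, not the authors' code: each node keeps a next pointer along the in-order thread and the structure keeps a start pointer, so get-min and extract-min only touch a constant number of pointers. The insert shown here is a plain unbalanced BST insert that also fixes the thread; the actual proposal would splay on insert to obtain the $\mathcal{O}(\log n)$ amortised bound.

```python
class Node:
    __slots__ = ("key", "left", "right", "parent", "next")
    def __init__(self, key):
        self.key = key
        self.left = self.right = self.parent = self.next = None

class ThreadedMinTree:
    def __init__(self):
        self.root = None
        self.start = None                      # current minimum

    def get_min(self):                         # O(1)
        return None if self.start is None else self.start.key

    def extract_min(self):                     # O(1) pointer updates
        node = self.start
        if node is None:
            return None
        child = node.right                     # the minimum never has a left child
        if node.parent is None:
            self.root = child
        else:
            node.parent.left = child
        if child is not None:
            child.parent = node.parent
        self.start = node.next                 # the thread gives the successor directly
        return node.key

    def insert(self, key):                     # plain BST insert; the real version splays
        new, cur, parent, pred = Node(key), self.root, None, None
        while cur is not None:
            parent = cur
            if key < cur.key:
                cur = cur.left
            else:
                pred, cur = cur, cur.right     # last node we went right from = predecessor
        new.parent = parent
        if parent is None:
            self.root = new
        elif key < parent.key:
            parent.left = new
        else:
            parent.right = new
        if pred is None:                       # new minimum
            new.next, self.start = self.start, new
        else:                                  # splice into the thread after its predecessor
            new.next, pred.next = pred.next, new
```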
{ "source": [ "https://cs.stackexchange.com/questions/524", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/92/" ] }
525
I've heard of (structural) induction. It allows you to build up finite structures from smaller ones and gives you proof principles for reasoning about such structures. The idea is clear enough. But what about coinduction? How does it work? How can one say anything conclusive about an infinite structure? There are (at least) two angles to address, namely, coinduction as a way of defining things and as a proof technique. Regarding coinduction as a proof technique, what is the relationship between coinduction and bisimulation?
First, to dispel a possible cognitive dissonance: reasoning about infinite structures is not a problem, we do it all the time. As long as the structure is finitely describable, that's not a problem. Here are a few common types of infinite structures: languages (sets of strings over some alphabet, which may be finite); tree languages (sets of trees over some alphabet); execution traces of a non-deterministic system; real numbers; sets of integers; sets of functions from integers to integers; … Coinductivity as the largest fixpoint Where inductive definitions build a structure from elementary building blocks, coinductive definitions shape structures from how they can be deconstructed. For example, the type of lists whose elements are in a set A is defined as follows in Coq: Inductive list (A:Set) : Set := | nil : list A | cons : A -> list A -> list A. Informally, the list type is the smallest type that contains all values built from the nil and cons constructors, with the axiom that $\forall x \, y, \: \mathtt{nil} \ne \mathtt{cons} \: x \: y$. Conversely, we can define the largest type that contains all values built from these constructors, keeping the discrimination axiom: CoInductive colist (A:Set) : Set := | conil : colist A | cocons : A -> colist A -> colist A. list is isomorphic to a subset of colist . In addition, colist contains infinite lists: lists with cocons upon cocons . CoFixpoint flipflop : colist ℕ := cocons 1 (cocons 2 flipflop). CoFixpoint from (n:ℕ) : colist ℕ := cocons n (from (1 + n)). flipflop is the infinite (circular list) $1::2::1::2::\ldots$; from 0 is the infinite list of natural numbers $0::1::2::\ldots$. A recursive definition is well-formed if the result is built from smaller blocks: recursive calls must work on smaller inputs. A corecursive definition is well-formed if the result builds larger objects. Induction looks at constructors, coinduction looks at destructors. Note how the duality not only changes smaller to larger but also inputs to outputs. For example, the reason the flipflop and from definitions above are well-formed is that the corecursive call is guarded by a call to the cocons constructor in both cases. Where statements about inductive objects have inductive proofs, statements about coinductive objects have coinductive proofs. For example, let's define the infinite predicate on colists; intuitively, the infinite colists are the ones that don't end with conil . CoInductive Infinite A : colist A -> Prop := | Inf : forall x l, Infinite l -> Infinite (cocons x l). To prove that colists of the form from n are infinite, we can reason by coinduction. from n is equal to cocons n (from (1 + n)) . This shows that from n is larger than from (1 + n) , which is infinite by the coinduction hypothesis, hence from n is infinite. Bisimilarity, a coinductive property Coinduction as a proof technique also applies to finitary objects. Intuitively speaking, inductive proofs about an object are based on how the object is built. Coinductive proofs are based on how the object can be decomposed. When studying deterministic systems, it is common to define equivalence through inductive rules: two systems are equivalent if you can get from one to the other by a series of transformations. Such definitions tend to fail to capture the many different ways non-deterministic systems can end up having the same (observable) behavior in spite of having different internal structure. 
(Coinduction is also useful to describe non-terminating systems, even when they're deterministic, but this isn't what I'll focus on here.) Nondeterministic systems such as concurrent systems are often modeled by labeled transition systems . An LTS is a directed graph in which the edges are labeled. Each edge represents a possible transition of the system. A trace of an LTS is the sequence of edge labels over a path in the graph. Two LTS can behave identically, in that they have the same possible traces, even if their internal structure is different. Graph isomorphism is too strong to define their equivalence. Instead, an LTS $\mathscr{A}$ is said to simulate another LTS $\mathscr{B}$ if every transition of the second LTS admits a corresponding transition in the first. Formally, let $S$ be the disjoint union of the states of the two LTS, $L$ the (common) set of labels and $\rightarrow$ the transition relation. The relation $R \subseteq S \times S$ is a simulation if $$ \forall (p,q)\in R, \ \forall p'\in S, \ \forall\alpha\in L, \text{ if } p \stackrel\alpha\rightarrow p' \text{ then } \exists q', \; q \stackrel\alpha\rightarrow q' \text{ and } (p',q')\in R $$ $\mathscr{A}$ simulates $\mathscr{B}$ if there is a simulation in which all the states of $\mathscr{B}$ are related to a state in $\mathscr{A}$. If $R$ is a simulation in both directions, it is called a bisimulation . Simulation is a coinductive property: any observation on one side must have a match on the other side. There are potentially many bisimulations in an LTS. Different bisimulations might identify different states. Given two bisimulations $R_1$ and $R_2$, the relation given by taking the union of the relation graphs $R_1 \cup R_2$ is itself a bisimulation, since related states give rise to related states for both relations. (This holds for infinite unions as well. The empty relation is an uninteresting bisimulation, as is the identity relation.) In particular, the union of all bisimulations is itself a bisimulation, called bisimilarity. Bisimilarity is the coarsest way to observe a system that does not distinguish between distinct states. Bisimilarity is a coinductive property. It can be defined as the largest fixpoint of an operator: it is the largest relation which, when extended to identify equivalent states, remains the same. References Coq and the calculus of inductive constructions Yves Bertot and Pierre Castéran. Interactive Theorem Proving and Program Development — Coq'Art: The Calculus of Inductive Constructions . Springer, 2004. Ch. 13. [ website ] [ Amazon ] Eduardo Giménez. An application of co-inductive types in coq: verification of the alternating bit protocol . In Workshop on Types for Proofs and Programs , number 1158 in Lecture Notes in Computer Science , pages 135–152. Springer-Verlag, 1995. [ Google Books ] Eduardo Giménez and Pierre Castéran. A Tutorial on [Co-]Inductive Types in Coq. 2007. [ PDF ] Labeled transition systems and bisimulations Robin Milner. Communication and Concurrency . Prentice Hall, 1989. Davide Sangiorgi. On the origins of bisimulation and coinduction . ACM Transactions on Programming Languages and Systems (TOPLAS), volume 31 issue 4, May 2009. [ PDF ] [ ACM ] Associated course slides: [ PDF ] [ CiteSeer ] Davide Sangiorgi. The Pi-Calculus: A Theory of Mobile Processes . Cambridge University Press, 2003. [ Amazon ] More references suggested by Anton Trunov A chapter in Certified Programming with Dependent Types by A. Chlipala D. Sangiorgi. "Introduction to Bisimulation and Coinduction".
2011. [ PDF ] D. Sangiorgi and J. Rutten. Advanced Topics in Bisimulation and Coinduction . Cambridge University Press, 2012. [ CUP ]
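To complement the greatest-fixpoint view of bisimilarity described above, here is a small Python sketch of my own (not part of the answer) that computes the bisimilarity of a finite LTS by the naive fixpoint iteration: start from the full relation and repeatedly discard pairs that violate the transfer condition.

```python
def bisimilarity(states, labels, step):
    """Greatest fixpoint iteration.
    step(p, a) must return the set of states reachable from p via label a."""
    R = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(R):
            ok = all(
                any((p2, q2) in R for q2 in step(q, a))
                for a in labels for p2 in step(p, a)
            ) and all(
                any((p2, q2) in R for p2 in step(p, a))
                for a in labels for q2 in step(q, a)
            )
            if not ok:
                R.discard((p, q))
                changed = True
    return R

# Toy LTS: states 0 and 1 both loop on 'a'; state 2 has no transitions.
trans = {(0, "a"): {0}, (1, "a"): {1}}
step = lambda p, a: trans.get((p, a), set())
print(bisimilarity({0, 1, 2}, {"a"}, step))   # 0 and 1 are bisimilar, 2 is not
```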
{ "source": [ "https://cs.stackexchange.com/questions/525", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/31/" ] }
539
Most of us learned programming using "textual" programming languages like Basic, C/C++, and Java. I believe it is more natural and efficient for humans to think visually. Visual programming allows developers to write programs by manipulating graphical elements. I guess using visual programming should improve the quality of code and reduce programming bugs. I'm aware of a few visual languages such as App Inventor , Scratch , and LabView . Why are there no mainstream, general-purpose visual languages for developers? What are the advantages and disadvantages of visual programming?
In general, there is a trade-off in programming language design between ease of use and expressiveness (power). Writing a simple "Hello, world" program in a beginner language, such as Scratch or App Inventor, is generally easier than writing it in a general-purpose programming language such as Java or C++, where you might have a choice of several streams to output to, different character sets, the opportunity to change the syntax, dynamic types, etc. During the creation of App Inventor (which I was part of), our design philosophy was to make programming simple for the beginner. A trivial example was basing our array indices at 1, rather than 0, even though that makes calculations likely to be performed by advanced programmers slightly more complex. The main way, however, that visual programming languages tend to be designed for beginners is by eliminating the possibility of syntax errors by making it impossible to create syntactically invalid programs. For example, the block languages don't let you make an rvalue the destination of an assignment statement. This philosophy tends to yield simpler grammars and languages. When users start building more complex programs in a blocks language, they find that dragging and dropping blocks is slower than typing would be. Would you rather type "a*x^2+b*x+c" or create it with blocks? Justice can't be given to this topic (at least by me) in a few paragraphs, but some of the main reasons are: Block languages tend to be designed for beginners so are not as powerful by design. There is no nice visual way of expressing some complex concepts, such as type systems, that you find in general-purpose programming languages. Using blocks is unwieldy for complex programs.
{ "source": [ "https://cs.stackexchange.com/questions/539", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/96/" ] }
561
While discussing some intro-level topics today, including the use of genetic algorithms, I was told that research has really slowed in this field. The reason given was that most people are focusing on machine learning and data mining. Update: Is this accurate? And if so, what advantages does ML/DM have when compared with GA?
Well, machine learning in the sense of statistical pattern recognition and data mining are definitely hotter areas, but I wouldn't say research in evolutionary algorithms has particularly slowed. The two areas aren't generally applied to the same types of problems. It's not immediately clear how a data driven approach helps you, for instance, figure out how to best schedule worker shifts or route packages more efficiently. Evolutionary methods are most often used on hard optimization problems rather than pattern recognition. The most direct competitors are operations research approaches, basically mathematical programming, and other forms of heuristic search like tabu search, simulated annealing, and dozens of other algorithms collectively known as "metaheuristics". There are two very large annual conferences on evolutionary computation (GECCO and CEC), a slew of smaller conferences like PPSN, EMO, FOGA, and Evostar, and at least two major high-quality journals (IEEE Transactions on Evolutionary Computation and the MIT Press journal Evolution Computation) as well as a number of smaller ones that include EC part of their broader focus. All that said, there are several advantages the field more generally thought of as "machine learning" has in any comparison of "hotness". One, it tends to be on much firmer theoretical ground, which the mathematicians always like. Two, we're in something of a golden age for data, and lots of the cutting edge machine learning methods really only start to shine when given tons of data and tons of compute power, and in both respects, the time is in a sense "right".
{ "source": [ "https://cs.stackexchange.com/questions/561", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/258/" ] }
581
I believe I have a reasonable grasp of complexities like $\mathcal{O}(1)$ , $\Theta(n)$ and $\Theta(n^2)$ . In terms of a list, $\mathcal{O}(1)$ is a constant lookup, so it's just getting the head of the list. $\Theta(n)$ is where I'd walk the entire list once, and $\Theta(n^2)$ is walking the list once for each element in the list. Is there a similarly intuitive way to grasp $\Theta(\log n)$ other than just knowing it lies somewhere between $\mathcal{O}(1)$ and $\Theta(n)$ ?
The $\Theta(\log n)$ complexity is usually connected with subdivision. When using lists as an example, imagine a list whose elements are sorted. You can search in this list in $\mathcal{O}(\log n)$ time - you do not actually need to look at each element because of the sorted nature of the list. If you look at the element in the middle of the list and compare it to the element you search for, you can immediately say whether it lies in the left or right half of the list. Then you can just take this one half and repeat the procedure until you find it or reach a list with 1 item which you trivially compare. You can see that the list effectively halves at each step. That means if you get a list of length $32$, the maximum number of steps you need to reach a one-item list is $5$. If you have a list of $128 = 2^7$ items, you need only $7$ steps, for a list of $1024 = 2^{10}$ you need only $10$ steps etc. As you can see, the exponent $n$ in $2^n$ always shows the number of steps necessary. The logarithm is used to "extract" exactly this exponent, for example $\log_2 2^{10} = 10$. It also generalizes to list lengths that are not powers of two.
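Here is a short Python sketch of that halving argument (my addition, for illustration): each iteration of the loop discards half of the remaining range, so it runs at most about $\log_2 n$ times.

```python
def binary_search(sorted_list, target):
    """Return an index of target in sorted_list, or None; O(log n) comparisons."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2           # look at the middle element
        if sorted_list[mid] == target:
            return mid
        if sorted_list[mid] < target:  # target can only be in the right half
            lo = mid + 1
        else:                          # ... or in the left half
            hi = mid - 1
    return None

assert binary_search(list(range(0, 1024, 2)), 512) == 256
```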
{ "source": [ "https://cs.stackexchange.com/questions/581", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/385/" ] }
645
I have used the technique of dynamic programming multiple times; however, when a friend asked me today how I go about defining my sub-problems, I realized I had no way of providing an objective, formal answer. How do you formally define a sub-problem for a problem that you would solve using dynamic programming?
The principle of dynamic programming is to think top-down (i.e. recursively) but solve bottom-up. So a good strategy for designing a DP is to formulate the problem recursively and generate sub-problems that way.
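As a small illustration of that strategy (my own example, not part of the answer): for the coin-change problem below, the recursive formulation directly names the sub-problems, namely "fewest coins summing to amount $a$", and the bottom-up loop then fills them in increasing order of $a$.

```python
def min_coins(coins, amount):
    """Top-down formulation: best(a) = 1 + min over c of best(a - c);
    solved bottom-up by filling the sub-problems best(0..amount) in order."""
    INF = float("inf")
    best = [0] + [INF] * amount          # best[a] = fewest coins summing to a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] != INF else None

assert min_coins([1, 5, 10, 25], 63) == 6   # 25 + 25 + 10 + 1 + 1 + 1
```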
{ "source": [ "https://cs.stackexchange.com/questions/645", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/106/" ] }
820
I am learning Automated Theorem Proving / SMT solvers / Proof Assistants by myself and am posting a series of questions about the process, starting here. Note that these topics are not easily digested without a background in (mathematical) logic. If you have problems with basic terms, please read up on those, for instance Logic in Computer Science by M. Huth and M. Ryan (in particular chapters one, two and four) or An Introduction to Mathematical Logic and Type Theory by P. Andrews. For a short introduction to higher-order logic (HOL) see here . I looked at Coq and read the first chapter of the introduction to Isabelle amongst others; Types of Automated Theorem Provers I have known Prolog for a few decades and am now learning F#, so ML, OCaml and LISP are a bonus. Haskell is a different beast. I have the following books "Handbook of Automated Reasoning" edited by Alan Robinson and Andrei Voronkov "Handbook of Practical Logic and Automated Reasoning" by John Harrison "Term Rewriting and All That" by Franz Baader and Tobias Nipkow What are the differences between Coq and Isabelle? Should I learn either Isabelle or Coq, or both? Is there an advantage to learning either Isabelle or Coq first? Find the series' next question here .
My preference is for Coq, but I imagine that others prefer Isabelle. One of the strange things I found about Isabelle is that there is a two-level syntax, where some of your definitions need to be inside double quote. No such nonsense is present in Coq. Ultimately, the one that is most suitable for you may depend on what you want to prove. Both languages have a lot of library support and active communities doing all sorts of development and example theories. If one language provides adequate library (or other) support for the kinds of theory you wish to develop, then I'd select that language. One strategy is to do a simple tutorial in both languages and follow up the one that feels the best. For example, Coq in a Hurry by Yves Bertot, or the first part of A Proof Assistant for Higher-Order Logic by Tobias Nipkow, Lawrence C. Paulson and Markus Wenzel. Here is a blog post briefly comparing the two by someone who ultimately prefers Isabelle. Make sure you use a proper IDE (such as ProofGeneral ), rather than doing things on the command line. Another way to to get into Coq is to try the online book Software Foundations by Benjamin Pierce et al. It provides an excellent tutorial with loads of details provided. The focus is mostly on programming language semantics, but a lot of the basics (and beyond) of Coq and semi-automated theorem proving are covered along the way.
{ "source": [ "https://cs.stackexchange.com/questions/820", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/268/" ] }
824
Assume I have a list of functions, for example $\qquad n^{\log \log(n)}, 2^n, n!, n^3, n \ln n, \dots$ How do I sort them asymptotically, i.e. according to the relation defined by $\qquad f \leq_O g \iff f \in O(g)$, assuming they are indeed pairwise comparable (see also here)? Using the definition of $O$ seems awkward, and it is often hard to prove the existence of suitable constants $c$ and $n_0$. This is about measures of complexity, so we're interested in asymptotic behavior as $n \to +\infty$, and we assume that all the functions take only non-negative values ($\forall n, f(n) \ge 0$).
If you want a rigorous proof, the following lemma is often more convenient to work with than the definitions. If $c = \lim_{n\to\infty} \frac{f(n)}{g(n)}$ exists, then
- $c=0 \iff f \in o(g)$,
- $c \in (0,\infty) \iff f \in \Theta(g)$, and
- $c=\infty \iff f \in \omega(g)$.

With this, you should be able to order most of the functions coming up in algorithm analysis¹. As an exercise, prove it!

Of course you have to be able to calculate the limits accordingly. Some useful tricks to break complicated functions down to basic ones are:
- Express both functions as $e^{\dots}$ and compare the exponents; if their ratio tends to $0$ or $\infty$, so does the original quotient.
- More generally: if you have a convex, continuously differentiable and strictly increasing function $h$ so that you can re-write your quotient as $\frac{f(n)}{g(n)} = \frac{h(f^*(n))}{h(g^*(n))}$, with $g^* \in \Omega(1)$ and $\lim_{n \to \infty} \frac{f^*(n)}{g^*(n)} = \infty$, then $\lim_{n \to \infty} \frac{f(n)}{g(n)} = \infty$. See here for a rigorous proof of this rule (in German).
- Consider continuations of your functions over the reals. You can now use L'Hôpital's rule; be mindful of its conditions²! Have a look at the discrete equivalent, Stolz–Cesàro.
- When factorials pop up, use Stirling's formula: $n! \sim \sqrt{2 \pi n} \left(\frac{n}{e}\right)^n$.

It is also useful to keep a pool of basic relations you prove once and use often, such as:
- logarithms grow slower than polynomials, i.e. $(\log n)^\alpha \in o(n^\beta)$ for all $\alpha, \beta > 0$;
- order of polynomials: $n^\alpha \in o(n^\beta)$ for all $\alpha < \beta$;
- polynomials grow slower than exponentials: $n^\alpha \in o(c^n)$ for all $\alpha$ and $c > 1$.

It can happen that the above lemma is not applicable because the limit does not exist (e.g. when functions oscillate). In this case, consider the following characterisation of Landau classes using limit superior/inferior:
- With $c_s := \limsup_{n \to \infty} \frac{f(n)}{g(n)}$ we have $0 \leq c_s < \infty \iff f \in O(g)$ and $c_s = 0 \iff f \in o(g)$.
- With $c_i := \liminf_{n \to \infty} \frac{f(n)}{g(n)}$ we have $0 < c_i \leq \infty \iff f \in \Omega(g)$ and $c_i = \infty \iff f \in \omega(g)$.

Furthermore, $0 < c_i,c_s < \infty \iff f \in \Theta(g) \iff g \in \Theta(f)$ and $c_i = c_s = 1 \iff f \sim g$. Check here and here if you are confused by my notation.

¹ Nota bene: My colleague wrote a Mathematica function that does this successfully for many functions, so the lemma really reduces the task to mechanical computation.

² See also here.
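As a quick sanity check (not a proof), a computer algebra system can evaluate such limits for you. A minimal sketch, assuming SymPy is available; the helper compare is purely illustrative and assumes the limit exists:

```python
import sympy as sp

n = sp.symbols('n', positive=True)

def compare(f, g):
    """Classify f against g via the limit lemma above."""
    c = sp.limit(f / g, n, sp.oo)
    if c == 0:
        return "f in o(g)"
    if c == sp.oo:
        return "f in omega(g)"
    return "f in Theta(g)"

print(compare(n * sp.log(n), n**2))    # f in o(g)
print(compare(2**n, n**100))           # f in omega(g)
print(compare(3 * n**2 + n, n**2))     # f in Theta(g)
```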
{ "source": [ "https://cs.stackexchange.com/questions/824", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/776/" ] }
842
I wonder whether it is possible to build compilers for dynamic languages like Ruby that achieve performance comparable to C/C++. From what I understand about compilers, take Ruby for instance: compiling Ruby code can't ever be efficient, because the way Ruby handles reflection, features such as automatic type conversion from integer to big integer, and the lack of static typing make building an efficient compiler for Ruby extremely difficult. Is it possible to build a compiler that can compile Ruby or any other dynamic language to a binary that performs very close to C/C++? Is there a fundamental reason why JIT compilers, such as PyPy/Rubinius, will eventually match C/C++ in performance, or never will? Note: I do understand that "performance" can be vague, so to clear that up I mean: if you can do X in C/C++ with performance Y, can you do X in Ruby/Python with performance close to Y? Where X is everything from device drivers and OS code, to web applications.
To all those who said “yes” I’ll offer a counter-point that the answer is “no”, by design. Those languages will never be able to match the performance of statically compiled languages.

Kos offered the (very valid) point that dynamic languages have more information about the system at runtime which can be used to optimise code. However, there’s another side of the coin: this additional information needs to be kept track of. On modern architectures, this is a performance killer. William Edwards offers a nice overview of the argument.

In particular, the optimisations mentioned by Kos can’t be applied beyond a very limited scope unless you limit the expressive power of your languages quite drastically, as mentioned by Devin. This is of course a viable trade-off but for the sake of the discussion, you then end up with a static language, not a dynamic one. Those languages differ fundamentally from Python or Ruby as most people would understand them.

William cites some interesting IBM slides:
- Every variable can be dynamically-typed: Need type checks
- Every statement can potentially throw exceptions due to type mismatch and so on: Need exception checks
- Every field and symbol can be added, deleted, and changed at runtime: Need access checks
- The type of every object and its class hierarchy can be changed at runtime: Need class hierarchy checks

Some of those checks can be eliminated after analysis (N.B.: this analysis also takes time – at runtime).

Furthermore, Kos argues that dynamic languages could even surpass C++ performance. The JIT can indeed analyse the program’s behaviour and apply suitable optimisations. But C++ compilers can do the same! Modern compilers offer so-called profile-guided optimisation which, if they are given suitable input, can model program runtime behaviour and apply the same optimisations that a JIT would apply. Of course, this all hinges on the existence of realistic training data and furthermore the program cannot adapt its runtime characteristics if the usage pattern changes mid-run. JITs can theoretically handle this. I’d be interested to see how this fares in practice, since, in order to switch optimisations, the JIT would continually have to collect usage data which once again slows down execution.

In summary, I’m not convinced that runtime hot-spot optimisations outweigh the overhead of tracking runtime information in the long run, compared to static analysis and optimisation.
{ "source": [ "https://cs.stackexchange.com/questions/842", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/124/" ] }
857
Over here, Dave Clarke proposed that in order to compare asymptotic growth you should plot the functions at hand. As a theoretically inclined computer scientist, I call(ed) this voodoo, since a plot is never a proof. On second thought, I have to agree that this is a very useful approach that is even sometimes underused; a plot is an efficient way to get first ideas, and sometimes that is all you need. When teaching TCS, there is always the student who asks: "What do I need formal proof for if I can just do X which always works?" It is up to his teacher(s) to point out and illustrate the fallacy. There is a brilliant set of examples of apparent patterns that eventually fail over at math.SE, but those are fairly mathematical scenarios. So, how do you fool the plot inspection heuristic? There are some cases where differences are hard to tell apart, e.g. the plots given here [ source ]: make a guess, and then check the source for the real functions. But those are not as spectacular as I would hope for, in particular because the real relations are easy to spot from the functions alone, even for a beginner. Are there examples of (relative) asymptotic growth where the truth is not obvious from the function definition, and plot inspection for reasonably large $n$ gives you a completely wrong idea? Mathematical functions and real data sets (e.g. runtime of a specific algorithm) are both welcome; please refrain from piecewise defined functions, though.
Speaking from experience, when trying to figure out the growth rate for some observed function (say, Markov chain mixing time or algorithm running time), it is very difficult to tell factors of $(\log n)^a$ from $n^b$. For example, $O(\sqrt{n} \log n)$ looks a lot like $O(n^{0.6})$: [ source ] For example, in "Some unexpected expected behavior results for bin packing" by Bentley et al., the growth rate of empty space for the Best Fit and First Fit bin packing algorithms when packing items uniform on $[0,1]$ was estimated empirically as $n^{0.6}$ and $n^{0.7}$, respectively. The correct expressions are $n^{1/2}\log^{3/4}n$ and $n^{2/3}$.
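To see just how slowly the difference shows up, here is a small illustrative sketch: the ratio $(\sqrt{n}\log n)/n^{0.6} = \log(n)/n^{0.1}$ does tend to $0$, but it barely moves over many orders of magnitude, which is why the plots are so deceptive.

```python
import math

# The ratio drifts extremely slowly, so the two curves look almost parallel on any
# plot drawn over a realistic range of n.
for n in (10**3, 10**6, 10**9, 10**12, 10**100):
    ratio = (math.sqrt(n) * math.log(n)) / n**0.6
    print(f"n = 1e{round(math.log10(n))}: ratio = {ratio:.3g}")
```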
{ "source": [ "https://cs.stackexchange.com/questions/857", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/98/" ] }
909
Knapsack problems are easily solved by dynamic programming. Dynamic programming runs in polynomial time; that is why we do it, right? I have read that it is actually an NP-complete problem, though, which would mean that solving the problem in polynomial time is probably impossible. Where is my mistake?
The knapsack problem is $\sf{NP\text{-}complete}$ when the numbers are given as binary numbers. In this case, the dynamic programming will take exponentially many steps (in the size of the input, i.e. the number of bits in the input) to finish $\dagger$. On the other hand, if the numbers in the input are given in unary, the dynamic programming will work in polynomial time (in the size of the input). This kind of problem is called weakly $\sf{NP\text{-}complete}$. $\dagger$: Another good example to understand the importance of the encoding used to give the input is to consider the usual algorithms to see if a number is prime that go from $2$ up to $\sqrt{n}$ and check if any of them divides $n$. This is polynomial in $n$ but not necessarily in the input size. If $n$ is given in binary, the size of the input is $\lg n$ and the algorithm runs in time $O(\sqrt{n}) = O(2^{\lg n/2})$, which is exponential in the input size. And the usual computational complexity of a problem is w.r.t. the size of the input. This kind of algorithm, i.e. polynomial in the largest number that is part of the input but exponential in the input length, is called pseudo-polynomial.
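For concreteness, here is the standard dynamic program sketched in Python (function and variable names are just illustrative). Its $O(n \cdot \text{capacity})$ running time is polynomial in the numeric value of the capacity, but exponential in the number of bits needed to write that capacity down -- which is exactly the pseudo-polynomial behaviour described above.

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack by dynamic programming over capacities 0..capacity."""
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Traverse capacities downwards so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```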
{ "source": [ "https://cs.stackexchange.com/questions/909", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/848/" ] }
930
What would be the fastest way of doing this (from an algorithmic perspective, as well as a practical matter)? I was thinking something along the following lines. I could add to the end of an array and then use bubblesort as it has a best case (totally sorted array at start) that is close to this, and has linear running time (in the best case). On the other hand, if I know that I start out with a sorted array, I can use a binary search to find out the insertion point for a given element. My hunch is that the second way is nearly optimal, but curious to see what is out there. How can this best be done?
We count the number of array element reads and writes. To do bubble sort, you need $1 + 4n$ accesses (the initial write to the end, then, in the worst case, two reads and two writes to do $n$ swaps). To do the binary search, we need $2\log n + 2n + 1$ ($2\log n$ for binary search, then, in the worst case, $2n$ to shift the array elements to the right, then 1 to write the array element to its proper position). So both methods have the same complexity for array implementations, but the binary search method requires fewer array accesses in the long run... asymptotically, half as many. There are other factors at play, naturally. Actually, you could use better implementations and only count actual array accesses (not accesses to the element to be inserted). You could do $2n + 1$ for bubble sort, and $\log n + 2n + 1$ for binary search... so if register/cache access is cheap and array access is expensive, searching from the end and shifting along the way (smarter bubble sort for insertion) could be better, though not asymptotically so. A better solution might involve using a different data structure. Arrays give you O(1) accesses (random access), but insertions and deletions might cost. A hash table could have O(1) insertions & deletions, accesses would cost. Other options include BSTs and heaps, etc. It could be worth considering your application's usage needs for insertion, deletion and access, and choose a more specialized structure. Note also that if you want to add $m$ elements to a sorted array of $n$ elements, a good idea might be to efficiently sort the $m$ items, then merge the two arrays; also, sorted arrays can be built efficiently using e.g. heaps (heap sort).
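As a small practical sketch (Python's bisect module does the binary search; the names are otherwise illustrative): the search costs $O(\log n)$ comparisons, but the insert itself still shifts up to $n$ elements, as discussed above.

```python
import bisect

def insert_sorted(arr, x):
    """Insert x into the already-sorted list arr, keeping it sorted."""
    pos = bisect.bisect_left(arr, x)   # O(log n) comparisons
    arr.insert(pos, x)                 # still shifts up to n elements

a = [1, 3, 4, 7, 9]
insert_sorted(a, 5)
print(a)  # [1, 3, 4, 5, 7, 9]
```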
{ "source": [ "https://cs.stackexchange.com/questions/930", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/863/" ] }
934
I have asked a series of questions concerning capabilities of a certain class of exotic automata which I have called min-heap automata ; the original question, and links to others, can be found here . This question concerns the power of type-1 min-heap automata, which represent my initial idea for how these machines would operate. The class of languages which can be accepted by such automata is incomparable (i.e., neither a proper subset nor a proper superset) of the set of context-free languages. Push down automata, which possess a single stack for data storage, accept the set of context-free languages, in the same way that min-heap automata, which possess a single heap for data storage, accept the set $HAL_1$ of languages accepted by nondeterministic type-1 min-heap automata. Push-down automata with two stacks are equivalent to Turing machines in computational power; they can simulate Turing machines, and vice versa; which leads me to my question: Does adding another heap to non-deterministic type-1 min-heap automata make them equivalent in terms of computing ability to Turing machines, in the sense that they are able to simulate Turing machines? If not, does it increase their computational power at all, in the sense that nondeterministic type-1 min-heap automata can accept a set of languages which is a proper subset of $HAL_1$? If so, does adding additional heaps increase computational power, i.e., can nondeterministic min-heap automata with $k+1$ heaps accept more languages than automata with $k$ heaps, for any $k$? This is one of the last questions I plan to ask about these automata; if good answers can be had for these (and other) questions, my curiosity will be completely satisfied. Thanks in advance and for all the hard work so far.
{ "source": [ "https://cs.stackexchange.com/questions/934", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/69/" ] }
991
Is there a specific set of constructs that a programming language needs in order for it to be considered Turing complete? From what I can tell from Wikipedia, the language needs to support recursion, or, seemingly, must be able to run without halting. Is this all there is to it?
I always thought that $\mu$-recursive functions nailed it. Here is what defines the whole set of computable functions; it is the smallest set of functions containing the following basic functions and closed under the following operations:
- The constant $0$ function
- The successor function
- Selecting parameters (projections)
- Function composition
- Primitive recursion
- The $\mu$-operator (look for the smallest $x$ such that...)

Check the above link for details; you see that it makes for a very compact programming language. It is also horrible to program in -- no free lunch. If you drop any of those, you will lose full power, so it is a minimal set of axioms.

You can translate those quite literally into basic syntactical elements for WHILE programs, namely
- The constant 0
- Incrementation _ + 1
- Variable access x
- Program/statement concatenation _; _
- Countdown loops for ( x to 0 ) do _ end
- While loops while ( x != 0 ) do _ end
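As a hypothetical illustration of how far those constructs already get you, here are addition and multiplication rendered in Python using only the constant 0, incrementation, sequencing and countdown loops. This fragment corresponds to LOOP programs, i.e. exactly the primitive recursive functions; the unbounded while loop is what adds the remaining power.

```python
def add(x, y):
    result = 0
    for _ in range(x):        # "for ( x to 0 ) do ... end": run the body x times
        result = result + 1   # incrementation
    for _ in range(y):
        result = result + 1
    return result

def mul(x, y):
    result = 0
    for _ in range(x):        # repeat "add y to the result" x times
        for _ in range(y):
            result = result + 1
    return result

assert add(2, 3) == 5 and mul(3, 4) == 12
```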
{ "source": [ "https://cs.stackexchange.com/questions/991", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/385/" ] }
1,031
We learned about the class of regular languages $\mathrm{REG}$. It is characterised by any one concept among regular expressions, finite automata and left-linear grammars, so it is easy to show that a given language is regular. How do I show the opposite, though? My TA has been adamant that in order to do so, we would have to show for all regular expressions (or for all finite automata, or for all left-linear grammars) that they can not describe the language at hand. This seems like a big task! I have read about some pumping lemma but it looks really complicated. This is intended to be a reference question collecting usual proof methods and application examples. See here for the same question on context-free languages.
Proof by contradiction is often used to show that a language is not regular: let $P$ be a property true for all regular languages; if your specific language does not satisfy $P$, then it's not regular. The following properties can be used:
- the pumping lemma, as exemplified in Dave's answer;
- closure properties of regular languages (set operations, concatenation, Kleene star, mirror, homomorphisms);
- the fact that a regular language has a finite number of prefix equivalence classes (Myhill–Nerode theorem).

To prove that a language $L$ is not regular using closure properties, the technique is to combine $L$ with regular languages by operations that preserve regularity in order to obtain a language known to be not regular, e.g., the archetypical language $I= \{ a^n b^n \mid n \in \mathbb{N} \}$. For instance, let $L= \{a^p b^q \mid p \neq q \}$. Assume $L$ is regular; as regular languages are closed under complementation, so is $L$'s complement $L^c$. Now take the intersection of $L^c$ and $a^\star b^\star$, which is regular: we obtain $I$, which is not regular -- a contradiction.

The Myhill–Nerode theorem can be used to prove that $I$ is not regular. For $p \geq 0$, $I/a^p= \{ a^{r}b^rb^p\mid r \in \mathbb{N} \}=I.\{b^p\}$, where $I/a^p$ denotes the residual language $\{w \mid a^p w \in I\}$. All these classes are different, and there is a countable infinity of such classes. As a regular language must have a finite number of classes, $I$ is not regular.
{ "source": [ "https://cs.stackexchange.com/questions/1031", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/98/" ] }
1,065
After reading several sources I'm still confused about user- and kernel-level threads. In particular: "Threads can exist at both the user level and the kernel level." What is the difference between the user level and the kernel level?
One of the roles of a multitasking operating system kernel is scheduling : determining which thread of execution to execute when. So such a kernel has some notion of thread or process . A thread is a sequential piece of code that is executing, and has its own stack and sometimes other data. In an operating system context, people usually use process to mean a thread that has its own memory space, and thread to mean a thread that shares its memory space with other threads. A process can have one or more threads. Some operating systems, for example older unix systems, only provide processes: every thread that the kernel manages has its own memory space. Other operating systems, for example most modern unix systems, allow processes to contain multiple threads of execution: they provide a kernel-level notion of threads. It's also possible for a process to manage its own threading. In cooperative multithreading, the code of each thread contains instructions to switch to another thread. In preemptive multithreading, the process requests periodic asynchronous notifications from the kernel, and reacts to these notifications by switching to a different thread. This way, multithreading is implemented with no kernel cooperation, at the user level, in a library. A system can offer both kernel-level and user-level threads; this is known as hybrid threading . User- and kernel-level threads each have their benefits and downsides. Switching between user-level threads is often faster, because it doesn't require resetting memory protections to switch to the in-kernel scheduler and again to switch back to the process. This mostly matters for massively concurrent systems that use a large number of very short-lived threads, such as some high-level languages ( Erlang in particular) and their green threads . User-level threads require less kernel support, which can make the kernel simpler. Kernel-level threads allow a thread to run while another thread in the same process is blocked in a system call ; processes with user-level threads must take care not to make blocking system calls, as these block all the threads of the process. Kernel-level threads can run simultaneously on multiprocessor machines, which purely user-level threads cannot achieve.
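As a toy sketch of purely user-level, cooperative threading (the "green threads" idea), here is a round-robin scheduler built from Python generators; the names are illustrative, and the kernel never sees these "threads" -- which is also why a single blocking system call would stall all of them, as noted above.

```python
from collections import deque

def worker(name, steps):
    """A 'thread': a generator that yields control back to the scheduler."""
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                       # explicit, cooperative switch point

def run(threads):
    """A minimal round-robin scheduler living entirely in user space."""
    ready = deque(threads)
    while ready:
        t = ready.popleft()
        try:
            next(t)                 # resume the thread until its next yield
            ready.append(t)
        except StopIteration:
            pass                    # this thread has finished

run([worker("A", 3), worker("B", 2)])   # interleaves A and B without kernel help
```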
{ "source": [ "https://cs.stackexchange.com/questions/1065", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/935/" ] }
1,088
In a multicore processor, what happens to the contents of a core's cache (say L1) when a context switch occurs on that core? Is the behaviour dependent on the architecture, or is it a general behaviour followed by all chip manufacturers?
That depends both on the processor (not just the processor series, it can vary from model to model) and the operating system, but there are general principles. Whether a processor is multicore has no direct impact on this aspect; the same process could be executing on multiple cores simultaneously (if it's multithreaded), and memory can be shared between processes, so cache synchronization is unavoidable regardless of what happens on a context switch.

When a processor looks up a memory location in the cache, if there is an MMU, it can use either the physical or the virtual address of that location (sometimes even a combination of both, but that's not really relevant here).

With physical addresses, it doesn't matter which process is accessing the address, the contents can be shared. So there is no need to invalidate the cache content during a context switch. If the two processes map the same physical page with different attributes, this is handled by the MMU (acting as an MPU (memory protection unit)). The downside of a physically addressed cache is that the MMU has to sit between the processor and the cache, so the cache lookup is slow. L1 caches are almost never physically addressed; higher-level caches may be.

The same virtual address can denote different memory locations in different processes. Hence, with a virtually addressed cache, the processor and the operating system must cooperate to ensure that a process will find the right memory. There are several common techniques:
- The context-switching code provided by the operating system can invalidate the whole cache; this is correct but very costly.
- Some CPU architectures have room in their cache line for an ASID (address space identifier), the hardware version of a process ID, which is also used by the MMU. This effectively separates cache entries from different processes, and means that two processes that map the same page will have incoherent views of the same physical page (there is usually a special ASID value indicating a shared page, but these need to be flushed if they are not mapped to the same address in all processes where they are mapped).
- If the operating system takes care that different processes use non-overlapping address spaces (which defeats some of the purpose of using virtual memory, but can be done sometimes), then cache lines remain valid.

Most processors that have an MMU also have a TLB. The TLB is a cache of mappings from virtual addresses to physical addresses. The TLB is consulted before lookups in physically-addressed caches, to determine the physical address quickly when possible; the processor may start the cache lookup before the TLB lookup is complete, as often candidate cache lines can be identified from the middle bits of the address, between the bits that determine the offset in a cache line and the bits that determine the page. Virtually-addressed caches bypass the TLB if there is a cache hit, although the processor may initiate the TLB lookup while it is querying the cache, in case of a miss.

The TLB itself must be managed during a context switch. If the TLB entries contain an ASID, they can remain in place; the operating system only needs to flush TLB entries if their ASID has changed meaning (e.g. because a process has exited). If the TLB entries are global, they must be invalidated when switching to a different context.
{ "source": [ "https://cs.stackexchange.com/questions/1088", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/59/" ] }
1,240
I am taking a complexity course and I am having trouble with coming up with reductions between NPC problems. How can I find reductions between problems? Is there a general trick that I can use? How should I approach a problem that asks me to prove a problem is NPC?
There is no magic bullet; NP-hardness proofs are hard. However, there is a general framework for all such proofs. Many students who struggle with NP-hardness proofs are confused about what they're supposed to be doing, which obviously makes it impossible to figure out how to do it. So here is what to do to prove a problem NP-hard.

First, unless you're just doing homework, you have to decide which NP-hard problem to reduce to your problem. This is largely a question of "smell". If the number 3 appears anywhere in the problem statement, try reducing from $\mathsf{3SAT}$ or $\mathsf{3Color}$ or $\mathsf{3Partition}$. (Yes, I'm serious.) If your problem involves finding an optimal subsequence or permutation or path, try reducing from $\mathsf{HamiltonianCycle}$ or $\mathsf{HamiltonianPath}$. If your problem asks for the smallest subset with a certain property, try $\mathsf{Clique}$; if it asks for the largest subset with a certain property, try $\mathsf{IndependentSet}$. If your problem involves doing something in the plane, try $\mathsf{PlanarCircuitSAT}$ or $\mathsf{PlanarTSP}$. And so on. If your problem doesn't "smell" like anything, $\mathsf{3SAT}$ or $\mathsf{CircuitSAT}$ is probably your best bet. Obviously, you need to already know precisely how all these problems are defined, and the simpler the problem you reduce from, the better. So as cool as the result might look in the end, I don't recommend reducing from $\mathsf{Minesweeper}$ or $\mathsf{Tetris}$ or $\mathsf{OneCheckersMove}$ or $\mathsf{SuperMarioBros}$.

Second, the actual reduction. To reduce problem X (the one you know is NP-hard) to problem Y (the one you're trying to prove is NP-hard), you need to describe an algorithm that transforms an arbitrary instance of X into a legal instance of Y. The reduction algorithm needs to do something specific with each "feature" of the X-instance; the portion of the output for each "feature" is usually called a gadget. But what's a feature? That depends on problem X. For example:
- To transform an instance of $\mathsf{3SAT}$, you'll need a gadget for each variable and for each clause in the input formula. Each variable gadget should have two "states" that correspond to "true" and "false". Each clause gadget should also have multiple "states", each of which somehow forces at least one of the literals in that clause to be true. (The states are not part of the output of the reduction algorithm.)
- To transform an instance of $\mathsf{3Color}$, you'll need a gadget for each vertex and each edge of the input graph, and another gadget to define the three colors.
- To transform an instance of $\mathsf{PlanarCircuitSat}$, you'll need a gadget for each input, for each wire, and for each gate in the input circuit.

The actual form of the gadget depends on problem Y, the one you're reducing to. For example, if you're reducing to a problem about graphs, your gadgets will be small subgraphs; see the Wikipedia article. If you're reducing to a problem about scheduling, each gadget will be a set of jobs to be scheduled. If you're reducing to a problem about Mario, each gadget will be a set of blocks and bricks and Koopas.

This can get confusing if both problems involve the same kind of object. For example, if both X and Y are problems about graphs, your algorithm is going to transform one graph (an instance of X) into a different graph (an instance of Y). Choose your notation wisely, so that you don't confuse these two graphs. I also strongly recommend using multiple colors of ink.
Finally, your reduction algorithm must satisfy three properties:
- It runs in polynomial time. (This is usually easy.)
- If your reduction algorithm is given a positive instance of X as input, it produces a positive instance of Y as output.
- If your reduction algorithm produces a positive instance of Y as output, it must have been given a positive instance of X as input.

There's an important subtlety here. Your reduction algorithm only works in one direction, from instances of X to instances of Y, but proving the algorithm correct requires reasoning about the transformation in both directions. You must also remember that your reduction algorithm cannot tell whether a given instance of X is positive or negative—that would require solving an NP-hard problem in polynomial time!

That's the what. The how just comes with practice.
{ "source": [ "https://cs.stackexchange.com/questions/1240", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/1068/" ] }
1,243
In many textbooks NP problems are defined as: Set of all decision problems solvable by non deterministic algorithms in polynomial time I couldn't understand the part "solvable by non deterministic algorithms". Could anyone please explain that?
Adding to Shitikanth's answer, a nondeterministic algorithm is one that has multiple choices at some points during its control flow. The actual choice made when the program runs is not determined by the input or values in registers, or, if we are talking about Turing machines, by the input value and the state; instead an arbitrary choice among the possibilities can be made in a given run of the program. Thus multiple runs of the same algorithm on the same input can result in different outputs. The point of using a non-deterministic algorithm is that it can make certain guesses at certain points during its computation. Such algorithms are designed so that if they make the right guesses at all the choice points, then they can solve the problem at hand. A simple example is primality testing. To decide whether a number $N$ is not prime, one simply selects non-deterministically a number $1 < n\le\sqrt{N}$ and checks whether $N$ is divisible by $n$. For any composite number, this algorithm finds a factor of the number by making the right guess. The polynomial time part means that if the nondeterministic algorithm makes all the right guesses, then the amount of time it takes is bounded by a polynomial.
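To make the "guess and verify" view concrete, here is a small illustrative sketch: the verifier checks a single guess in polynomial time, and a deterministic simulation has to try every possible guess, whereas the nondeterministic machine is credited with picking the right one immediately.

```python
def is_composite_certificate(N, n):
    """Deterministic polynomial-time check of one guess n for 'N is composite'."""
    return 1 < n and n * n <= N and N % n == 0

def is_composite(N):
    """Deterministic simulation: try all guesses up to sqrt(N)."""
    return any(is_composite_certificate(N, n) for n in range(2, int(N**0.5) + 1))

print(is_composite_certificate(91, 7))  # True: 7 divides 91
print(is_composite(97))                 # False: 97 is prime
```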
{ "source": [ "https://cs.stackexchange.com/questions/1243", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/947/" ] }
1,271
When I was explaining the Baker-Gill-Solovay proof that there exists an oracle with which we can have, $\mathsf{P} = \mathsf{NP}$, and an oracle with which we can have $\mathsf{P} \neq \mathsf{NP}$ to a friend, a question came up as to why such techniques are ill-suited for proving the $\mathsf{P} \neq \mathsf{NP}$ problem, and I couldn't give a satisfactory answer. To put it more concretely, if I have an approach to prove $\mathsf{P} \neq \mathsf{NP}$ and if I could construct oracles to make a situation like above happen, why does it make my method invalid? Any exposition/thoughts on this topic?
To put it more concretely, if I have an approach to prove P≠NP and if I could construct oracles to make a situation like above happen, why does it make my method invalid? Note that the latter “if” is not a condition, because Baker, Gill, and Solovay already constructed such an oracle. It is just a mathematical truth that (1) there exists an oracle relative to which P=NP, and that (2) there exists an oracle relative to which P≠NP. This means that if you have an approach to prove P≠NP and the same proof would equally prove a stronger result “P A ≠NP A for all oracles A ,” then your approach is doomed to fail because it would contradict (1). In other words, there is some fundamental difference between proving P≠NP and proving e.g. the time hierarchy theorem, because the proof of the latter just uses diagonalization and is equally applicable to any relativized world. Of course, this does not mean that there is no proof for P≠NP. Such a proof (if one exists) must fail to prove the stronger result mentioned above. In other words, some part of the proof must distinguish the nonrelativizing world from arbitrary relativized worlds.
{ "source": [ "https://cs.stackexchange.com/questions/1271", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/639/" ] }
1,331
There are many methods to prove that a language is not regular , but what do I need to do to prove that some language is regular? For instance, if I am given that $L$ is regular, how can I prove that the following $L'$ is regular, too? $\qquad \displaystyle L' := \{w \in L: uv = w \text{ for } u \in \Sigma^* \setminus L \text{ and } v \in \Sigma^+ \}$ Can I draw a nondeterministic finite automaton to prove this?
Yes, if you can come up with any of the following: a deterministic finite automaton (DFA), a nondeterministic finite automaton (NFA), a regular expression (in the formal-language sense) or a regular grammar for some language $L$, then $L$ is regular. There are more equivalent models, but the above are the most common.

There are also useful properties outside of the "computational" world. $L$ is also regular if
- it is finite,
- you can construct it from regular languages by operations under which regular languages are closed, such as intersection, complement, homomorphism, reversal, left- or right-quotient, regular transduction and more, or
- the number of equivalence classes for $L$ is finite (Myhill–Nerode theorem).

In the given example, we have some (regular) language $L$ as basis and want to say something about a language $L'$ derived from it. Following the first approach -- construct a suitable model for $L'$ -- we can assume whichever equivalent model for $L$ we so desire; it will remain abstract, of course, since $L$ is unknown. In the second approach, we can use $L$ directly and apply closure properties to it in order to arrive at a description for $L'$.
{ "source": [ "https://cs.stackexchange.com/questions/1331", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/1133/" ] }
1,447
I know that Euclid's algorithm is the best algorithm for getting the GCD (greatest common divisor) of a list of positive integers. But in practice you can code this algorithm in various ways. (In my case, I decided to use Java, but C/C++ may be another option). I need to use the most efficient code possible in my program.

In recursive mode, you can write:

```java
static long gcd(long a, long b) {
    a = Math.abs(a);
    b = Math.abs(b);
    return (b == 0) ? a : gcd(b, a % b);
}
```

And in iterative mode, it looks like this:

```java
static long gcd(long a, long b) {
    long r;
    while (b != 0) {
        r = a % b;
        a = b;
        b = r;
    }
    return a;
}
```

There is also the binary algorithm for the GCD, which may be coded simply like this:

```c
int gcd(int a, int b) {
    while (b)
        b ^= a ^= b ^= a %= b;
    return a;
}
```
Your two algorithms are equivalent (at least for positive integers, what happens with negative integers in the imperative version depends on Java's semantics for % which I don't know by heart). In the recursive version, let $a_i$ and $b_i$ be the argument of the $i$th recursive call: $$\begin{gather*} a_{i+1} = b_i \\ b_{i+1} = a_i \mathbin{\mathrm{mod}} b_i \\ \end{gather*}$$ In the imperative version, let $a'_i$ and $b'_i$ be the values of the variables a and b at the beginning of the $i$th iteration of the loop. $$\begin{gather*} a'_{i+1} = b'_i \\ b'_{i+1} = a'_i \mathbin{\mathrm{mod}} b'_i \\ \end{gather*}$$ Notice a resemblance? Your imperative version and your recursive version are calculating exactly the same values. Furthermore, they both end at the same time, when $a_i=0$ (resp. $a'_i=0$), so they perform the same number of iterations. So algorithmically speaking, there is no difference between the two. Any difference will be a matter of implementation, highly dependent on the compiler, the hardware it runs on, and quite possibly the operating system and what other programs are running concurrently. The recursive version makes only tail recursive calls . Most compilers for imperative languages do not optimize these, and so it is likely that the code they generate will waste a little time and memory constructing a stack frame at each iteration. With a compiler that optimizes tail calls (compilers for functional languages almost always do), the generated machine code may well be the same for both (assuming you harmonize those calls to abs ).
{ "source": [ "https://cs.stackexchange.com/questions/1447", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/1152/" ] }
1,477
Assume that I am a programmer and I have an NP-complete problem that I need to solve. What methods are available to deal with NPC problems? Is there a survey or something similar on this topic?
There are a number of well-studied strategies; which is best in your application depends on circumstance.

Improve worst case runtime
Using problem-specific insight, you can often improve the naive algorithm. For instance, there are $O(c^n)$ algorithms for Vertex Cover with $c < 1.3$ [1]; this is a huge improvement over the naive $\Omega(2^n)$ and might make instance sizes relevant to you tractable.

Improve expected runtime
Using heuristics, you can often devise algorithms that are fast on many instances. If those include most that you meet in practice, you are golden. Examples are SAT, for which quite involved solvers exist, and the Simplex algorithm (which solves a polynomial problem, but still). One basic technique that is often helpful is branch and bound.

Restrict the problem
If you can make more assumptions on your inputs, the problem may become easy.
- Structural properties: Your inputs may have properties that simplify solving the problem, e.g. planarity, bipartiteness or missing a minor for graphs. See here for some examples of graph classes for which CLIQUE is easy.
- Bounding functions of the input: Another thing to look at is parameterised complexity; some problems are solvable in time $O(2^kn^m)$ for $k$ some instance parameter (maximum node degree, maximum edge weight, ...) and $m$ constant. If you can bound $k$ by a polylogarithmic function in $n$ in your setting, you get polynomial algorithms. Saeed Amiri gives details in his answer.
- Bounding input quantities: Furthermore, some problems admit algorithms that run in pseudo-polynomial time, that is their runtime is bounded by a polynomial function in a number that is part of the input; the naive primality check is an example. This means that if the quantities encoded in your instances have reasonable size, you might have simple algorithms that behave well for you.

Weaken the result
This means that you tolerate erroneous or incomplete results. There are two main flavors:
- Probabilistic algorithms: You only get the correct result with some probability. There are some variants, most notably Monte Carlo and Las Vegas algorithms. A famous example is the Miller-Rabin primality test.
- Approximation algorithms: You no longer look for optimal solutions but almost optimal ones. Some algorithms admit relative ("no worse than double the optimum"), others absolute ("no worse than $5$ plus the optimum") bounds on the error. For many problems it is open how well they can be approximated. There are some that can be approximated arbitrarily well in polynomial time, while others are known to not allow that; check the theory of polynomial-time approximation schemes.

Refer to Algorithmics for Hard Problems by Hromkovič for a thorough treatment.

[1] Simplicity is beauty: Improved upper bounds for vertex cover by Chen Jianer, Iyad A. Kanj, Ge Xia (2005)
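To make the approximation idea concrete, here is a minimal sketch (names illustrative) of the classic maximal-matching heuristic for Vertex Cover: greedily take both endpoints of any still-uncovered edge. The result is always a valid cover and at most twice the size of an optimal one.

```python
def vertex_cover_2approx(edges):
    """Return a vertex cover at most twice the optimal size."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))    # take both endpoints of an uncovered edge
    return cover

edges = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
print(vertex_cover_2approx(edges))  # e.g. {1, 2, 3, 4}; the optimum {2, 4} has size 2
```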
{ "source": [ "https://cs.stackexchange.com/questions/1477", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/1219/" ] }
1,580
I often hear people talking about parallel computing and distributed computing, but I'm under the impression that there is no clear boundary between the two, and people tend to confuse them pretty easily, while I believe they are very different: Parallel computing is more tightly coupled to multi-threading, or how to make full use of a single CPU. Distributed computing refers to the notion of divide and conquer, executing sub-tasks on different machines and then merging the results. However, since we stepped into the Big Data era, it seems the distinction is indeed melting, and most systems today use a combination of parallel and distributed computing. An example I use in my day-to-day job is Hadoop with the Map/Reduce paradigm, a clearly distributed system with workers executing tasks on different machines, but also taking full advantage of each machine with some parallel computing. I would like to get some advice on how exactly to make the distinction in today's world, and on whether we can still talk about parallel computing or whether there is no longer a clear distinction. To me it seems distributed computing has grown a lot over the past years, while parallel computing seems to stagnate, which could probably explain why I hear much more talk about distributing computations than about parallelizing them.
This is partly a matter of terminology, and as such, only requires that you and the person you're talking to clarify it beforehand. However, there are different topics that are more strongly associated with parallelism , concurrency , or distributed systems . Parallelism is generally concerned with accomplishing a particular computation as fast as possible, exploiting multiple processors. The scale of the processors may range from multiple arithmetical units inside a single processor, to multiple processors sharing memory, to distributing the computation on many computers. On the side of models of computation, parallelism is generally about using multiple simultaneous threads of computation internally, in order to compute a final result. Parallelism is also sometimes used for real-time reactive systems , which contain many processors that share a single master clock; such systems are fully deterministic . Concurrency is the study of computations with multiple threads of computation. Concurrency tends to come from the architecture of the software rather than from the architecture of the hardware. Software may be written to use concurrency in order to exploit hardware parallelism, but often the need is inherent in the software's behavior, to react to different asynchronous events (e.g. a computation thread that works independently of a user interface thread, or a program that reacts to hardware interrupts by switching to an interrupt handler thread). Distributed computing studies separate processors connected by communication links. Whereas parallel processing models often (but not always) assume shared memory, distributed systems rely fundamentally on message passing. Distributed systems are inherently concurrent. Like concurrency, distribution is often part of the goal, not solely part of the solution: if resources are in geographically distinct locations, the system is inherently distributed. Systems in which partial failures (of processor nodes or of communication links) are possible fall under this domain.
{ "source": [ "https://cs.stackexchange.com/questions/1580", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/1307/" ] }
1,636
I can understand the importance, for computer scientists or any software development related engineers, of having studied basic logic as a foundation. But are there any tasks/jobs that explicitly require knowledge of these topics, other than tasks that require some kind of knowledge representation using a knowledge base? I want to hear about types of tasks, rather than conceptual responses. The reason I ask is simply curiosity. While CS students have to spend a certain amount of time on this subject, some practicality-intensive courses (e.g. AI-Class) skipped the topic entirely. And I just wonder whether, for example, knowing predicate logic might help in drawing ER diagrams but might not be a requirement. Update (5/27/2012): Thanks for the answers. Now I think I totally understand and agree with the importance of logic in CS and its vast range of applications. I picked the best answer simply because I was impressed by the solution to Windows' blue screen issue.
I tend to like Unification and anything related to it. If you don't know propositional & predicate logic, then you are skipping the basics of logic. If you have an interest in anything listed, then skipping logic would be like having an interest in math and skipping addition and multiplication. Logic is not just for AI. As a practical answer: remember the Intel floating point problem and how you never see such problems anymore? Thanks to the use of theorem provers they are a thing of the past. Remember the Microsoft blue screen of death? Thanks to SAT solvers, model checking and other logic-based solutions, they are an endangered species.
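Since unification is singled out above, here is a minimal, purely illustrative sketch of syntactic first-order unification (variables are capitalised strings, compound terms are tuples; the occurs-check is omitted to keep it short):

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(x, y, subst=None):
    """Return a substitution making x and y equal, or None if none exists."""
    if subst is None:
        subst = {}
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return {**subst, x: y}
    if is_var(y):
        return {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

print(unify(('f', 'X', ('g', 'a')), ('f', 'b', ('g', 'Y'))))  # {'X': 'b', 'Y': 'a'}
```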
{ "source": [ "https://cs.stackexchange.com/questions/1636", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/152/" ] }
1,643
Normally in algorithms we do not care about comparison, addition, or subtraction of numbers -- we assume they run in time $O(1)$. For example, we assume this when we say that comparison-based sorting is $O(n\log n)$, but when numbers are too big to fit into registers, we normally represent them as arrays, so basic operations require extra calculations per element. Is there a proof showing that comparison of two numbers (or other primitive arithmetic functions) can be done in $O(1)$? If not, why do we say that comparison-based sorting is $O(n\log n)$? I encountered this problem when I answered a SO question and realized that my algorithm is not $O(n)$ because sooner or later I would have to deal with big integers; it also wasn't a pseudo-polynomial time algorithm -- it was supposed to be in $P$.
For people like me who study algorithms for a living, the 21st-century standard model of computation is the integer RAM . The model is intended to reflect the behavior of real computers more accurately than the Turing machine model. Real-world computers process multiple-bit integers in constant time using parallel hardware; not arbitrary integers, but (because word sizes grow steadily over time) not fixed size integers, either. The model depends on a single parameter $w$, called the word size . Each memory address holds a single $w$-bit integer, or word . In this model, the input size $n$ is the number of words in the input, and the running time of an algorithm is the number of operations on words . Standard arithmetic operations (addition, subtraction, multiplication, integer division, remainder, comparison) and boolean operations (bitwise and, or, xor, shift, rotate) on words require $O(1)$ time by definition . Formally, the word size $w$ is NOT a constant for purposes of analyzing algorithms in this model. To make the model consistent with intuition, we require $w \ge \log_2 n$, since otherwise we cannot even store the integer $n$ in a single word. Nevertheless, for most non-numerical algorithms, the running time is actually independent of $w$, because those algorithms don't care about the underlying binary representation of their input. Mergesort and heapsort both run in $O(n\log n)$ time; median-of-3-quicksort runs in $O(n^2)$ time in the worst case. One notable exception is binary radix sort, which runs in $O(nw)$ time. Setting $w = \Theta(\log n)$ gives us the traditional logarithmic-cost RAM model. But some integer RAM algorithms are designed for larger word sizes, like the linear-time integer sorting algorithm of Andersson et al. , which requires $w = \Omega(\log^{2+\varepsilon} n)$. For many algorithms that arise in practice, the word size $w$ is simply not an issue, and we can (and do) fall back on the far simpler uniform-cost RAM model. The only serious difficulty comes from nested multiplication, which can be used to build very large integers very quickly. If we could perform arithmetic on arbitrary integers in constant time, we could solve any problem in PSPACE in polynomial time . Update: I should also mention that there are exceptions to the "standard model", like Fürer's integer multiplication algorithm , which uses multitape Turing machines (or equivalently, the "bit RAM"), and most geometric algorithms, which are analyzed in a theoretically clean but idealized "real RAM" model . Yes, this is a can of worms.
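As a rough, machine-dependent experiment illustrating why arbitrary-precision integers fall outside the unit-cost assumption: comparing two word-sized integers is effectively constant time, while comparing two integers of about a million bits each has to scan many words. The exact numbers depend on the interpreter and hardware; this is only a sketch.

```python
import timeit

small = (2**60, 2**60 + 1)                 # fit in one or two machine words
big = (2**1_000_000, 2**1_000_000 + 1)     # about a million bits each

for name, (a, b) in (("small", small), ("big", big)):
    t = timeit.timeit(lambda: a < b, number=10_000)
    print(f"{name}: {t:.4f} s for 10,000 comparisons")
```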
{ "source": [ "https://cs.stackexchange.com/questions/1643", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/-1/" ] }
1,771
I have just completed the first chapter of the Introduction to the Theory of Computation by Michael Sipser which explains the basics of finite automata. He defines a regular language as anything that can be described by a finite automata. But I could not find where he explains why a regular language is called "regular?" What is the origin of the term "regular" in this context? NOTE: I am a novice so please try to explain in simple terms!
As Kaveh says in a comment, Kleene bestowed the name way back when he kicked off automata theory and formal languages. I believe the term was arbitrary, though it has been many years since I read his original paper. Mathematicians have a habit of hijacking common nouns and adjectives for mathematical objects and properties, sometimes with good reasons such as geometric or other analogies or metaphors, and sometimes arbitrarily. Just look at "group", "ring", "space", "sheaf", "atlas", "manifold", "field" and so on.

In fact, the term "regular" for finite-state languages, while still prevalent in automata theory, is not used very much in its algebraic cousin, finite semigroup theory, or abstract algebra in general. Why? Because the term was already taken for a semigroup that is close to a group in a specific technical sense, so you couldn't match up a regular language in Kleene's sense with a corresponding regular semigroup. Third, Kleene defined another kind of event called "definite", which was much studied for a while, but has turned out to be not particularly fruitful. Today, finite sets of strings play the role of definite events as the basis for regular events.

The preferred term in algebra is "rational" for both Kleene's class of languages and the more general semigroups and monoids. That usage also reflects an important analogy between the term "rational" in algebra as the solution of a linear equation with integer coefficients and the concept of rational power series in automata and formal language theory.

Additional information. Kleene's original paper of 1951, entitled "Representation of events in nerve nets and finite automata" may be found here. On p. 46 it settles the arbitrariness of the term "regular" with this statement: We shall presently describe a class of events which we will call "regular events". (We would welcome any suggestions as to a more descriptive term.) Apparently, nobody came up with a more descriptive term. ;-)

As is often the case with seminal papers which lead to the intensive development of whole new areas, the terminology and concepts are almost unrecognizable in today's terms. First, the paper was about models of neurons, hence the use of "events" instead of "languages" or "sets". The term "events" persisted well into the 60's and 70's, even after the importance of Kleene's concepts for automata and formal languages vastly outweighed any value for neuroscience.

Second, there are some mathematical differences, such as defining what came to be called "Kleene Closure" as a binary operation, equivalent to $a^*b$, instead of the simpler unary operation $a^*$ or $a^+$ that we use today. Kleene's motivation was to avoid the empty string (or event with duration zero in his terms). That was a remarkably prescient intuition since subsequent theory has shown how crucial the choice is to include or exclude the empty string from definitions in many contexts.

Third, Kleene defined a concept called "definite events" and developed regular events from them, but nowadays we use finite sets for the purpose. Definite events were studied for a while, but have turned out to be far less important than regular events/sets/languages.

Anyway, a complete reading of this paper is probably not worth anyone's time today, except for historical purposes. I just skimmed it for the crucial definitions and ideas, and that was fun.
{ "source": [ "https://cs.stackexchange.com/questions/1771", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/1434/" ] }
1,810
Are there any known problems in $\mathsf{NP}$ (and not in $\mathsf{P}$) that aren't $\mathsf{NP}$-complete? My understanding is that there are no currently known problems where this is the case, but it hasn't been ruled out as a possibility. If there is a problem that is in $\mathsf{NP}$ (and not in $\mathsf{P}$) but not $\mathsf{NP\text{-}complete}$, would this be a result of there being no isomorphism between instances of that problem and the $\mathsf{NP\text{-}complete}$ set? If this is the case, how would we know that the $\mathsf{NP}$ problem isn't 'harder' than what we currently identify as the $\mathsf{NP\text{-}complete}$ set?
Are there any known problems in NP (and not in P) that aren't NP Complete? My understanding is that there are no currently known problems where this is the case, but it hasn't been ruled out as a possibility. No, this is unknown (with the exception of the trivial languages $\emptyset$ and $\Sigma^*$, these two are not complete because of the definition of many-one reductions, typically these two are ignored when considering many-one reductions). Existence of an $\mathsf{NP}$ problem which is not complete for $\mathsf{NP}$ w.r.t. many-one polynomial time reductions would imply that $\mathsf{P}\neq\mathsf{NP}$ which is not known (although widely believed). If the two classes are different then we know that there are problems in $\mathsf{NP}$ which are not complete for it, take any problem in $\mathsf{P}$. If there is a problem that is NP (and not P) but not NP Complete, would this be a result of no existing isomorphism between instances of that problem and the NP Complete set? If the two complexity classes are different then by Ladner's theorem there are problems which are $\mathsf{NP}$-intermediate, i.e. they are between $\mathsf{P}$ and $\mathsf{NP\text{-}complete}$. If this case, how would we know that the NP problem isn't 'harder' than what we currently identify as the NP Complete set? They are still polynomial time reducible to $\mathsf{NP\text{-}complete}$ problems so they cannot be harder than $\mathsf{NP\text{-}complete}$ problems.
{ "source": [ "https://cs.stackexchange.com/questions/1810", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/1472/" ] }
1,877
There are lots of attempts at proving either $\mathsf{P} = \mathsf{NP} $ or $\mathsf{P} \neq \mathsf{NP}$, and naturally many people think about the question, having ideas for proving either direction. I know that there are approaches that have been proven to not work, and there are probably more that have a history of failing. There also seem to be so-called barriers that many proof attempts fail to overcome. We want to avoid investigating dead ends, so what are they?
Note: I haven't checked the answer carefully yet and there are missing parts to be written, consider it a first draft. This answer is meant mainly for people who are not researchers in complexity theory or related fields. If you are a complexity theorist and have read the answer please let me know if you notice any issue or have an idea about how to improve the answer. Where you can find claimed solutions of P vs. NP There is The P-versus-NP page which has a list of such claims. Articles claiming to resolve the question are regularly posted on arXiv . Other lists of how not to solve P vs. NP Lance Fortnow, So You Think You Settled P versus NP , 2009 Scott Aaronson, Eight Signs A Claimed P≠NP Proof Is Wrong , 2010 Polymath page for Deolalikar's paper , where the further readings section has a nice list of references about the problem. How not to approach P vs. NP Let me discuss "how not to approach P vs. NP" not in the sense of ideas that will not work but in a more general sense. P vs. NP is an easy to state problem (see also my answer here ): NP = P : For every decision problem with a polynomial time verifier algorithm there is a polynomial time algorithm. or equivalently There is a polynomial time algorithm for SAT. SAT can be replaced with any other NP-complete problem . Often people oversimplify and overphilosophize the problem and exaggerate its practical importance (as stated above). Such statements are often meant to give intuition, but they are not in any way a replacement for the actual mathematical statement of the problem. Theoretical efficiency is not the same as feasibility in practice. Let me first deal with the exaggerated practical consequences. I. It is possible that P=NP but it does not help for any problem in practice! Say for example that SAT is in P but the fastest algorithm for it has running time $2^{2^{64}} n^{65536} + 2^{2^{128}}$ . This algorithm is of no practical use. II. It is possible that P $\neq$ NP and we can solve NP-complete problems efficiently . Say for example that SAT is not in P but has an algorithm with running time $n^{\lg^*\lg^* n}$ . To give an input that would make $\lg^* n > 6$ you have to use more electrons than there are thought to be in the universe. So the exponent is essentially $2$ . The main point here is that P is an abstract simple model of efficient computation, worst-case complexity is an abstract simple model of estimating the cost of a computation, etc. All of these are abstractions, but no one in practice would really consider an algorithm like the one in (I) above an efficient algorithm. P is a nice abstract model, it has nice properties, it makes technical issues easy, and it is a useful one. However, like all mathematical abstractions it hides details that in practice we may care about. There are various more refined models but the more complicated the model becomes the less nice it is to argue about. What people care about in practice is to compute an answer to the problem for the instances that they care about using a reasonable amount of resources. These considerations are task dependent and should be taken into account. Trying to find better algorithms for practical instances of NP-hard problems is an interesting and worthy endeavor. There are SAT-solver heuristic algorithms that are used in industry and can solve practical instances of SAT with millions of variables. There is even an International SAT Competition .
(But there are also small concrete instances on which all these algorithms fail, and fail quite badly; we can actually prove that all state-of-the-art modern SAT-solvers take exponential time to solve simple instances like the propositional Pigeonhole Principle .) Keep in mind that correctness and running time of programs cannot be established just by running the program on instances. It does not matter how many instances you try, no amount is sufficient. There are infinitely many possible inputs and you have to show correctness and efficiency (i.e. running time is polynomial) of the program for all of them. In short, you need a mathematical proof of correctness and efficiency. If you do not know what a mathematical proof is then you should first learn some basic mathematics (read a textbook on discrete math/combinatorics/graph theory; these are good topics for learning what is considered a mathematical proof). Also be careful about other claims about P vs. NP and the consequences of its answer. Such claims are often based on similar simplifications. Complexity theorists do not really care about an answer to P vs. NP! I exaggerated a bit. Of course we do care about an answer to P vs. NP. But we care about it in a context. P vs. NP is our flagship problem but it is not the ultimate goal. It is an easy to state problem, it involves many fundamental ideas, it is useful for explaining the kind of questions we are interested in to people who are not familiar with the topic. But we do not seek a one bit Yes/No answer to the question. We seek a better understanding of the nature of efficient computation . We believe that resolving the question will come with such understanding and that is the real reason we care about it. It is part of a huge body of research. If you want to have a taste of what we do, have a look at a good complexity theory textbook, e.g. Arora and Barak's " Computational Complexity: A Modern Approach " ( draft version ). Let us assume that someone comes up with an encrypted, completely formal proof of P $\neq$ NP and we can verify its correctness to a very high degree of confidence by selecting and decrypting a few bits of the proof (see Zero-Knowledge Proof and PCP theorem ). So we can verify the claim with probability of error less than that of a meteor hitting our house; we are quite sure the proof is correct and P $\neq$ NP, but we do not know the proof. It will not be very satisfying or exciting for us. The formal proof itself will also not be that satisfying. What we seek is not a formal proof, what we seek is understanding. In short, from a complexity theorist's perspective P vs. NP is not a puzzle with a Yes/No answer. We seek an answer to P vs. NP because we think it will come with a better understanding of the nature of efficient computation. An answer without a major advancement in our understanding is not very interesting. There have been too many occasions where non-experts have claimed solutions for P vs. NP, and those claims typically suffer from mistakes that they would not have made if they had just read a standard textbook on complexity theory. Common problems with P=NP claims The claims of P=NP seem to be more common. I think the following is the most common type. Someone has an idea and writes a program and tests it on a few instances and thinks it is polynomial time and correctly solves an NP-complete problem. As I explained above no amount of testing will show P=NP. P=NP needs a mathematical proof , not just a program that seems to solve an NP-complete problem in polynomial time.
These attempts typically suffer from one of two issues: I. the algorithm is not really polynomial time. II. the algorithm does not solve all instances correctly. Signs that a P=NP argument is not correct [to be written] How to check that your algorithm does not really work You cannot show that your algorithm works correctly by testing. But you can show that it does not work correctly by testing! So here is how you can find out that your algorithm is not correct, if you are willing to do some work. First, write a program to convert instances of SAT (in the standard CNF format) to the NP-hard problem that you are solving. SAT is one of the most studied NP-hard problems and reductions between SAT and other NP-hard problems are typically easy. Second, take the examples that state-of-the-art SAT-solvers struggle with (e.g. take the examples from the SAT competition) and feed them to your algorithm and see how your algorithm performs. Try known hard instances like the propositional Pigeonhole Principle (and don't cheat by hard-coding them as special cases), cryptographic instances (like RSA Factoring Challenges ), random k-SAT instances near the threshold , etc. Similarly you can check that your algorithm is not efficient. E.g. if you think your algorithm's running time is $10 n^2$ but it takes days to solve an instance of, say, size 1000, then something is off. Fix the polynomial worst-case running-time upper bound that you think your algorithm has. Take the instances, estimate the time your algorithm will take to solve them, and check whether it matches your estimates. (A small sketch of this kind of testing harness is given after this answer.) How to check your algorithmic P=NP idea cannot work If you do these you will be pretty sure that your algorithm does not work (if it works better than the state of the art SAT-solvers then compete in the next competition and lots of people would be interested in studying your algorithm and ideas). Now you know it does not really work but that is not enough. You want to know why: is the reason my algorithm does not work a small issue that can be fixed or is there a fundamental reason why it cannot work? Sometimes the problem with the algorithm is simple and one can identify what was wrong conceptually. The best outcome is that you understand the reason your idea cannot work. Often that is not the case: your idea does not work but you cannot figure out why. In that case keep in mind: understanding why some idea cannot work can be more difficult than solving P vs. NP! If you can formalize your idea enough you might be able to prove limitations of particular ideas (e.g. there are results that say particular formalizations of greedy algorithms cannot solve NP-complete problems). However, it is even more difficult, and you do not have much chance if you have not read a standard complexity theory textbook. Sometimes there is not even a clear conceptual idea why the algorithm should work, i.e. it is based on some not well-understood heuristics . If you do not have a clear conceptual idea of why your algorithm should work then you might not have much chance of understanding why it does not! Common problems with P $\neq$ NP claims Although most experts think P $\neq$ NP is more likely than P=NP, such claims seem to be less common. The reason is that proving lower-bounds seems to be a harder task than designing algorithms (but often proving lower-bounds and upper-bounds are intrinsically related ). Issue 1: the author does not know the definition of P and NP, or even worse does not understand what a mathematical proof is.
Because the author lacks basic mathematical training he does not understand when he is told that what he is presenting is not a proof (e.g. the steps do not follow from previous ones). Issue 2: the author confuses "we don't know how" with "mathematical impossibility". For example they make various unjustified assumptions and when asked "why is this statement true?" they reply "how can it be false?". One common one is to assume that any program solving the problem has to go through particular steps, e.g. it has to compute particular intermediate values, because the author cannot think of an alternative way of solving the problem. [to be completed] Signs that a P $\neq$ NP argument is not correct [to be written] How to check your P $\neq$ NP idea cannot work If a claim does not suffer from these basic issues then rejecting it becomes more difficult. On the first level one can find an incorrect step in the argument. The typical response from the author is "I can fix it", and this back and forth can go on. Similar to P=NP solutions, it is often very difficult to find a fundamental issue with an idea that shows it cannot work, particularly when the idea itself is informal. In the best case, if we can formalize the idea and identify the obstacle that shows the idea cannot work, we have proven a new barrier result (this is how attempts to prove P $\neq$ NP using circuit lower-bounds led to the Natural Proofs barrier).
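To make the testing advice in this answer concrete, here is a minimal Python sketch of the kind of harness one could use to sanity-check a claimed polynomial-time SAT algorithm: it reads a DIMACS CNF instance, verifies a claimed satisfying assignment, and compares measured running time against a claimed bound. The functions `solve` and `claimed_bound` are hypothetical placeholders for the claimant's own code; this is an illustration of the workflow, not a statement about any particular solver.

```python
import time

def parse_dimacs(path):
    """Read a CNF formula in DIMACS format as a list of clauses (lists of signed ints)."""
    clauses = []
    with open(path) as f:
        for line in f:
            if line.startswith(('c', 'p', '%')) or not line.strip():
                continue
            lits = [int(x) for x in line.split()]
            if lits and lits[-1] == 0:       # clauses are terminated by 0
                lits = lits[:-1]
            if lits:
                clauses.append(lits)
    return clauses

def check_assignment(clauses, assignment):
    """assignment maps variable -> True/False; returns True iff every clause is satisfied."""
    return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)

def time_solver(solve, clauses, claimed_bound):
    """Time a claimed solver and compare against a claimed running-time bound.
    Both `solve(clauses) -> assignment or None` and `claimed_bound(n)` are assumptions
    supplied by whoever claims the algorithm."""
    n = max(abs(l) for c in clauses for l in c)
    start = time.perf_counter()
    result = solve(clauses)
    elapsed = time.perf_counter() - start
    if result is not None and not check_assignment(clauses, result):
        print("Solver returned an incorrect assignment!")
    print(f"n = {n}, elapsed = {elapsed:.2f}s, claimed bound = {claimed_bound(n)} basic steps")
```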
{ "source": [ "https://cs.stackexchange.com/questions/1877", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/98/" ] }
1,914
To find the median of an unsorted array, we can make a min-heap in $O(n\log n)$ time for $n$ elements, and then we can extract one by one $n/2$ elements to get the median. But this approach would take $O(n \log n)$ time. Can we do the same by some method in $O(n)$ time? If we can, then how?
This is a special case of a selection algorithm that can find the $k$th smallest element of an array, with $k$ equal to half the size of the array. There is an implementation that is linear in the worst case. Generic selection algorithm First let's see an algorithm find-kth that finds the $k$th smallest element of an array: find-kth(A, k) pivot = random element of A (L, R) = split(A, pivot) if k = |L|+1, return pivot if k ≤ |L| , return find-kth(L, k) if k > |L|+1, return find-kth(R, k-(|L|+1)) The function split(A, pivot) returns L,R such that all elements in R are greater than pivot and L contains all the others (minus one occurrence of pivot ). Then all is done recursively. This is $O(n)$ on average but $O(n^2)$ in the worst case. Linear worst case: the median-of-medians algorithm A better pivot is the median of all the medians of subarrays of A of size 5, obtained by calling the procedure recursively on the array of these medians. find-kth(A, k) B = [median(A[1], .., A[5]), median(A[6], .., A[10]), ..] pivot = find-kth(B, |B|/2) ... This guarantees $O(n)$ in all cases. It is not that obvious. These powerpoint slides are helpful both for explaining the algorithm and its complexity. Note that most of the time using a random pivot is faster.
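A direct Python rendering of the pseudocode above might look as follows. It is only a sketch: it copies sublists instead of partitioning in place, and it deviates slightly from the pseudocode by handling all duplicates of the pivot at once rather than removing a single occurrence.

```python
import random

def find_kth(A, k):
    """Return the k-th smallest element of A (1-indexed); expected O(n) time."""
    pivot = random.choice(A)
    L = [x for x in A if x < pivot]
    R = [x for x in A if x > pivot]
    count_pivot = len(A) - len(L) - len(R)        # occurrences of the pivot value
    if k <= len(L):
        return find_kth(L, k)
    if k <= len(L) + count_pivot:
        return pivot
    return find_kth(R, k - len(L) - count_pivot)

def find_kth_mom(A, k):
    """Worst-case O(n) selection using the median-of-medians pivot rule."""
    if len(A) <= 5:
        return sorted(A)[k - 1]
    chunks = [A[i:i + 5] for i in range(0, len(A), 5)]
    medians = [sorted(c)[len(c) // 2] for c in chunks]
    pivot = find_kth_mom(medians, (len(medians) + 1) // 2)
    L = [x for x in A if x < pivot]
    R = [x for x in A if x > pivot]
    count_pivot = len(A) - len(L) - len(R)
    if k <= len(L):
        return find_kth_mom(L, k)
    if k <= len(L) + count_pivot:
        return pivot
    return find_kth_mom(R, k - len(L) - count_pivot)

# Median of an unsorted array:
A = [7, 1, 5, 9, 3, 8, 2]
print(find_kth_mom(A, (len(A) + 1) // 2))   # -> 5
```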
{ "source": [ "https://cs.stackexchange.com/questions/1914", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/1545/" ] }
1,919
In terms of references and their implementation on the heap and the stack, how is equality testing for arrays different from that for integers? This is to do with Java programming, if you have a stack and a heap, would equality testing for example j == i be the same for arrays and for integers? I understand that arrays, are stored in the heap and the stack, as it holds bulks of data, but integers are only stored in the stack and referenced in the heap. I understand for equality testing j==i (variables) the stack pointer will point to the same location. I'm confused on how j==i would be different for array and integers. Could someone explain?
{ "source": [ "https://cs.stackexchange.com/questions/1919", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/1376/" ] }
2,016
Converting regular expressions into (minimal) NFA that accept the same language is easy with standard algorithms, e.g. Thompson's algorithm . The other direction seems to be more tedious, though, and sometimes the resulting expressions are messy. What algorithms are there for converting NFA into equivalent regular expressions? Are there advantages regarding time complexity or result size? This is supposed to be a reference question. Please include a general decription of your method as well as a non-trivial example.
There are several methods to do the conversion from finite automata to regular expressions. Here I will describe the one usually taught in school which is very visual. I believe it is the most used in practice. However, writing the algorithm is not such a good idea. State removal method This method works on the graph of the automaton and is thus not very suitable for an implementation, since it needs graph primitives such as ... state removal. I will describe it using higher-level primitives. The key idea The idea is to consider regular expressions on edges and then removing intermediate states while keeping the edges labels consistent. The main pattern can be seen in the following two figures. The first has labels between $p,q,r$ that are regular expressions $e,f,g,h,i$ and we want to remove $q$. Once removed, we compose $e,f,g,h,i$ together (while preserving the other edges between $p$ and $r$ but this is not displayed here): Example Using the same example as in Raphael's answer : we successively remove $q_2$: and then $q_3$: then we still have to apply a star on the expression from $q_1$ to $q_1$. In this case, the final state is also initial so we really just need to add a star: $$ (ab+(b+aa)(ba)^*(a+bb))^* $$ Algorithm L[i,j] is the regexp of the language from $q_i$ to $q_j$. First, we remove all multi-edges: for i = 1 to n: for j = 1 to n: if i == j then: L[i,j] := ε else: L[i,j] := ∅ for a in Σ: if trans(i, a, j): L[i,j] := L[i,j] + a Now, the state removal. Suppose we want to remove the state $q_k$: remove(k): for i = 1 to n: for j = 1 to n: L[i,i] += L[i,k] . star(L[k,k]) . L[k,i] L[j,j] += L[j,k] . star(L[k,k]) . L[k,j] L[i,j] += L[i,k] . star(L[k,k]) . L[k,j] L[j,i] += L[j,k] . star(L[k,k]) . L[k,i] Note that both with pencil and paper and with an algorithm you should simplify expressions like star(ε)=ε , e.ε=e , ∅+e=e , ∅.e=∅ (By hand you just don't write the edge when it's $∅$, or when it's $ε$ for a self-loop, and you ignore the case when there is no transition between $q_i$ and $q_k$ or $q_j$ and $q_k$) Now, how to use remove(k) ? You should not remove final or initial states lightly, otherwise you will miss parts of the language. for i = 1 to n: if not(final(i)) and not(initial(i)): remove(i) If you have only one final state $q_f$ and one initial state $q_s$ then the final expression is: e := star(L[s,s]) . L[s,f] . star(L[f,s] . star(L[s,s]) . L[s,f] + L[f,f]) If you have several final states (or even initial states) then there is no simple way of merging these ones, other than applying the transitive closure method. Usually this is not a problem by hand but this is awkward when writing the algorithm. A much simpler workaround is to enumerate all pairs $(s,f)$ and run the algorithm on the (already state-removed) graph to get all expressions $e_{s,f}$ supposing $s$ is the only initial state and $f$ is the only final state, then doing the union of all $e_{s,f}$. This, and the fact that this is modifying languages more dynamically than the first method make it more error-prone when programming. I suggest using any other method. Cons There are a lot of cases in this algorithm, for example for choosing which node we should remove, the number of final states at the end, the fact that a final state can be initial, too etc. Note that now that the algorithm is written, this is a lot like the transitive closure method. Only the context of the usage is different. I do not recommend implementing the algorithm, but using the method to do that by hand is a good idea.
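For readers who want to experiment despite the warning above, here is a rough Python sketch of the same bookkeeping, treating the labels L[i][j] as regular-expression strings. It assumes a single initial and a single final state, does only trivial simplification (so the output is correct but not pretty), and is meant as an illustration of the idea rather than production code.

```python
EMPTY, EPS = None, "ε"          # ∅ is represented by None, ε by the string "ε"

def union(a, b):
    if a is EMPTY: return b
    if b is EMPTY: return a
    if a == b: return a
    return f"({a}+{b})"

def concat(a, b):
    if a is EMPTY or b is EMPTY: return EMPTY
    if a == EPS: return b
    if b == EPS: return a
    return a + b

def star(a):
    if a is EMPTY or a == EPS: return EPS
    return f"({a})*"

def nfa_to_regex(n, transitions, start, final):
    """transitions: iterable of (i, symbol, j) with states 1..n; start/final: single states."""
    L = [[EMPTY] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        L[i][i] = EPS
    for (i, a, j) in transitions:
        L[i][j] = union(L[i][j], a)
    alive = set(range(1, n + 1))
    def remove(k):
        alive.discard(k)
        for i in alive:
            for j in alive:
                L[i][j] = union(L[i][j], concat(L[i][k], concat(star(L[k][k]), L[k][j])))
    for k in range(1, n + 1):
        if k != start and k != final:
            remove(k)
    s, f = start, final
    if s == f:
        return star(L[s][s])
    return concat(star(L[s][s]),
                  concat(L[s][f],
                         star(union(L[f][f], concat(L[f][s], concat(star(L[s][s]), L[s][f]))))))

# A tiny automaton for (ab)*: state 1 is both initial and final.
print(nfa_to_regex(2, [(1, 'a', 2), (2, 'b', 1)], 1, 1))   # ((ε+ab))*  i.e. (ab)*
```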
{ "source": [ "https://cs.stackexchange.com/questions/2016", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/98/" ] }
2,155
I started reading more and more language research papers. I find it very interesting and a good way to learn more about programming in general. However, there usually comes a section where I always struggle with (take for instance part three of this ) since I lack the theoretical background in computer science: Type Rules. Are there any good books or online resources available to get started in this area? Wikipedia is incredibly vague and doesn't really help a beginner.
In most type systems, the type rules work together to define judgements of the form: $$\Gamma\vdash e:\tau$$ This states that in context $\Gamma$ the expression $e$ has type $\tau$ . $\Gamma$ is a mapping of the free variables of $e$ to their types. A type system will consist of a set of axioms and rules (a formal system of rules of inference , as Raphael points out). An axiom is of the form $$\dfrac{}{\Gamma \vdash e:\tau}$$ This states that the judgement $\Gamma \vdash e:\tau$ holds (always). An example is $$\dfrac{}{x:\tau\vdash x:\tau}$$ which states that under the assumption that the type of variable $x$ is $\tau$ , then the expression $x$ has type $\tau$ . Inference rules take facts that have already been determined and build larger facts from them. For instance the inference rule $$\dfrac{\Gamma\vdash e_1:\tau\to\tau' \quad \Gamma\vdash e_2:\tau}{\Gamma\vdash e_1\ e_2:\tau'}$$ says that if I have a derivation of the fact $\Gamma\vdash e_1:\tau\to\tau'$ and a derivation of the fact $\Gamma\vdash e_2:\tau$ , then I can obtain a derivation of the fact $\Gamma\vdash e_1\ e_2:\tau'$ . In this case, this is the rule for typing function application. There are two ways of reading this rule: top-down - given two expressions (a function and another expression) and some constraints on their type, we can construct another expression (the application of the function to the expression) with the given type. bottom-up - given an expression that is, in this case, the application of a function to some expression, the way this is typed is by first typing the two expressions, ensuring that their types satisfy some constraints, namely that the first has a function type and that the second has the argument type of the function. Some inference rules also manipulate $\Gamma$ by adding new ingredients into it (viewed bottom-up). Here is the rule for $\lambda$ -abstraction: $$\dfrac{\Gamma, x:\tau\vdash e:\tau'}{\Gamma\vdash \lambda x.e:\tau\to \tau'}$$ The inference rules are applied inductively based on the syntax of the expression being considered to form a derivation tree. At the leaves of the tree (at the top) will be axioms, and branches will be formed by applying inference rules. At the very bottom of the tree is the expression you are interested in typing. For example, a derivation of the typing of expression $\lambda f.\lambda x.f\ x$ is $$\dfrac{\dfrac{}{f:\tau\to\tau',x:\tau\vdash f:\tau\to\tau'} \qquad \dfrac{}{f:\tau\to\tau',x:\tau\vdash x:\tau}} {\dfrac{f:\tau\to\tau',x:\tau\vdash f\ x:\tau'}{ \dfrac{f:\tau\to\tau'\vdash \lambda x.f\ x:\tau\to\tau'}{\vdash \lambda f.\lambda x.f\ x:(\tau\to\tau')\to\tau\to\tau'}}}$$ Two very good books for learning about type systems are: Types and Programming Languages by Benjamin Pierce Practical Foundations for Programming Languages by Robert Harper Both books are very comprehensive, yet they start slowly, building a solid foundation.
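To see the three rules above in executable form, here is a small Python sketch of a type checker for the simply typed λ-calculus. The term and type encodings are ad hoc choices made for the example (λ-bound variables are annotated with their type so that checking is a single bottom-up pass over the derivation tree).

```python
# Types: "int" (a base type) or ("->", t1, t2)
# Terms: ("var", x), ("app", e1, e2), ("lam", x, annotated_type, body)

def type_of(e, ctx):
    """ctx is the context Γ: a dict mapping variable names to types."""
    tag = e[0]
    if tag == "var":                          # axiom:  Γ, x:τ ⊢ x : τ
        return ctx[e[1]]
    if tag == "lam":                          # abstraction rule
        _, x, t, body = e
        t_body = type_of(body, {**ctx, x: t})
        return ("->", t, t_body)
    if tag == "app":                          # application rule
        t_fun = type_of(e[1], ctx)
        t_arg = type_of(e[2], ctx)
        if t_fun[0] == "->" and t_fun[1] == t_arg:
            return t_fun[2]
        raise TypeError("ill-typed application")
    raise ValueError("unknown term")

# λf. λx. f x   with f : int -> int and x : int
term = ("lam", "f", ("->", "int", "int"),
        ("lam", "x", "int",
         ("app", ("var", "f"), ("var", "x"))))
print(type_of(term, {}))
# ('->', ('->', 'int', 'int'), ('->', 'int', 'int'))  -- i.e. (int->int) -> int -> int
```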
{ "source": [ "https://cs.stackexchange.com/questions/2155", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/1745/" ] }
2,557
I created a simple regular expression lexer and parser to take a regular expression and generate its parse tree. Creating a non-deterministic finite state automaton from this parse tree is relatively simple for basic regular expressions. However I can't seem to wrap my head around how to simulate backreferences, lookaheads, and lookbehinds. From what I read in the purple dragon book I understood that to simulate a lookahead $r/s$ where the regular expression $r$ is matched if and only if the match is followed by a match of the regular expression $s$, you create a non-deterministic finite state automaton in which $/$ is replaced by $\varepsilon$. Is it possible to create a deterministic finite state automaton that does the same? What about simulating negative lookaheads and lookbehinds? I would really appreciate it if you would link me to a resource which describes how to do this in detail.
First of all, backreferences cannot be simulated by finite automata as they allow you to describe non-regular languages. For example, ([ab]^*)\1 matches $\{ww \mid w \in \{a,b\}^*\}$, which is not even context-free. Look-ahead and look-behind are nothing special in the world of finite automata as we only match whole inputs here. Therefore, the special semantics of "just check but don't consume" is meaningless; you just concatenate and/or intersect checking and consuming expressions and use the resulting automata. The idea is to check the look-ahead or look-behind expressions while you "consume" the input and store the result in a state. When implementing regexps, you want to run the input through an automaton and get back start and end indices of matches. That is a very different task, so there is not really a construction for finite automata. You build your automaton as if the look-ahead or look-behind expression were consuming, and change your index storing resp. reporting accordingly. Take, for instance, look-behinds. We can mimic the regexp semantics by executing the checking regexp concurrently to the implicitly consuming "match-all" regexp: only from states where the look-behind expression's automaton is in a final state can the automaton of the guarded expression be entered. For example, the regexp /(?<=c)[ab]+/ (assuming $\{a,b,c\}$ is the full alphabet) -- note that it translates to the regular expression $\{a,b,c\}^*c\{a,b\}^+\{a,b,c\}^*$ -- could be matched by [ source ] and you would have to store the current index as $i$ whenever you enter $q_2$ (initially or from $q_2$) and report a (maximum) match from $i$ to the current index ($-1$) whenever you hit (leave) $q_2$. Note how the left part of the automaton is the parallel automaton of the canonical automata for [abc]* and c (iterated), respectively. Look-aheads can be dealt with similarly; you have to remember the index $i$ when you enter the "main" automaton, the index $j$ when you leave the main automaton and enter the look-ahead automaton and report a match from $i$ to $j$ only when you hit the look-ahead automaton's final state. Note that non-determinism is inherent to this: main and look-ahead/-behind automaton might overlap, so you have to store all transitions between them in order to report the matching ones later, or backtrack.
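As a small, generic illustration of "intersect the checking and consuming expressions", here is a Python sketch of the product construction for two DFAs. The DFA encoding (transition dicts keyed by state and symbol) is just one convenient choice, and the example languages are arbitrary.

```python
def product_dfa(d1, d2):
    """Intersection of two DFAs, each given as (start, accepting_set, delta);
    delta maps (state, symbol) -> state.  Product states are pairs of states."""
    s1, acc1, t1 = d1
    s2, acc2, t2 = d2
    start = (s1, s2)
    trans, accept = {}, set()
    todo, seen = [start], {start}
    while todo:
        p, q = todo.pop()
        if p in acc1 and q in acc2:
            accept.add((p, q))
        for (state, sym), dest in t1.items():
            if state == p and (q, sym) in t2:
                nxt = (dest, t2[(q, sym)])
                trans[((p, q), sym)] = nxt
                if nxt not in seen:
                    seen.add(nxt)
                    todo.append(nxt)
    return start, accept, trans

def run(dfa, word):
    start, accept, trans = dfa
    state = start
    for a in word:
        state = trans.get((state, a))
        if state is None:
            return False
    return state in accept

# D1: contains at least one 'c';  D2: ends with 'a' or 'b'
d1 = (0, {1}, {(0,'a'):0,(0,'b'):0,(0,'c'):1,(1,'a'):1,(1,'b'):1,(1,'c'):1})
d2 = (0, {1}, {(0,'a'):1,(0,'b'):1,(0,'c'):0,(1,'a'):1,(1,'b'):1,(1,'c'):0})
both = product_dfa(d1, d2)
print(run(both, "acab"), run(both, "abab"), run(both, "abc"))   # True False False
```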
{ "source": [ "https://cs.stackexchange.com/questions/2557", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/2023/" ] }
2,615
If a graph $G$ is connected and has no path with a length greater than $k$, prove that every two paths in $G$ of length $k$ have at least one vertex in common. I think that that common vertex should be in the middle of both the paths. Because if this is not the case then we can have a path of length $>k$. Am I right?
Assume for contradiction that $P_{1} = \langle v_{0},\ldots,v_{k}\rangle$ and $P_{2} = \langle u_{0},\ldots,u_{k}\rangle$ are two paths in $G$ of length $k$ with no shared vertices. As $G$ is connected, there is a path $P'$ connecting $v_{i}$ to $u_{j}$ for some $i,j \in [1,k]$ such that $P'$ shares no vertices with $P_{1} \cup P_{2}$ other than $v_{i}$ and $u_{j}$ . Say $P' = \langle v_{i},x_{0},\ldots,x_{b},u_{j}\rangle$ (note that there may be no $x_{i}$ vertices, i.e., $b$ may be $0$ - the notation is a bit deficient though). Without loss of generality we may assume that $i,j \geq \lceil\frac{k}{2}\rceil$ (we can always reverse the numbering). Then we can construct a new path $P^{*} = \langle v_{0},\ldots,v_{i},x_{1},\ldots,x_{b},u_{j},\ldots,u_{0}\rangle$ (by going along $P_{1}$ to $v_{i}$ , then across the bridge formed by $P'$ to $u_{j}$ , then along $P_{2}$ to $u_{0}$ ). Obviously $P^{*}$ has length at least $k+1$ , but this contradicts the assumption that $G$ has no paths of length greater than $k$ . So then any two paths of length $k$ must intersect at at least one vertex and your observation that it must be in the middle (if there's only one) follows as you reasoned.
{ "source": [ "https://cs.stackexchange.com/questions/2615", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/778/" ] }
2,658
The discrete logarithm is the same as finding $b$ in $a^b=c \bmod N$, given $a$, $c$, and $N$. I wonder what complexity groups (e.g. for classical and quantum computers) this is in, and what approaches (i.e. algorithms) are the best for accomplishing this task. The wikipedia link above doesn't really give very concrete runtimes. I'm hoping for something more like what the best known methods are for finding such.
Short answer. If we formulate an appropriate decision problem version of the Discrete Logarithm problem, we can show that it belongs to the intersection of the complexity classes NP , coNP , and BQP . A decision problem version of Discrete Log. The discrete logarithm problem is most often formulated as a function problem , mapping tuples of integers to another integer. That formulation of the problem is incompatible with the complexity classes P , BPP , NP , and so forth which people prefer to consider, which concern only decision (yes/no) problems. We may consider a decision problem version of the discrete log problem which is effectively equivalent: Discrete Log (Decision Problem). Given a prime $N$, a generator $a \in \mathbb Z_N^\times$ of the multiplicative units modulo $N$, an integer $0 < c < N$, and an upper bound $b \in \mathbb N$, determine whether there exists $1 \leqslant L \leqslant b$ such that $a^L \equiv c \pmod{N}$. This would allow us to actually compute log a ( c ) modulo N by binary search, if we could efficiently solve it. We may then ask to which complexity classes this problem belongs. Note that we've phrased it as a promise problem: we can extend it to a decision problem by suspending the requirements that $N$ be prime and $a \in \mathbb Z_N^\times$ a generator, but adding the condition that these restrictions hold for any 'YES' instance of the problem. Discrete Log is in  BQP. Using Shor's algorithm for computing the discrete logarithm ( Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer ), we may easily contain Discrete Log in BQP . (To test whether or not $a \in \mathbb Z_N^\times$ actually is a generator, we may use Shor's order-finding algorithm in the same paper, which is the basis for the discrete logarithm algorithm, to find the order of $a$ and compare it against $N-1$.) Discrete Log is in  NP ∩ coNP. If it is actually the case that $N$ is prime and $a \in \mathbb Z_N^\times$ is a generator, a sufficient certificate either for a 'YES' or a 'NO' instance of the decision problem is the unique integer $0 \leqslant L < N-1$ such that $a^L \equiv c \pmod{N}$. So it suffices to show that we can certify whether or not the conditions on $a$ and $N$ hold. Following Brassard's A note on the complexity of cryptography , if it is both the case that $N$ is prime and $a \in \mathbb Z_N^\times$ is a generator, then it is the case that $$ r^{N-1} \equiv 1 \!\!\!\!\pmod{N} \qquad\text{and}\qquad r^{(N-1)/q} \not\equiv 1 \!\!\!\!\pmod{N} ~~\text{for primes $q$ dividing $N-1$} $$ by definition (using the fact that $\mathbb Z_N^\times$ has order $N-1$). A certificate that the constraints on $N$ and $a$ both hold would be a list of the prime factors $q_1, q_2, \ldots$ dividing $N-1$, which will allow us to test the above congruence constraints. (We can test whether each $q_j$ is prime using AKS test if we wish, and test that these are all of the prime factors of $N-1$ by attempting to find the prime-power factorization of $N-1$ with only those primes.) A certificate that one of the constraints on $N$ or $a$ fail would be an integer $q$ which divides $N-1$, such that $a^{(N-1)/q} \equiv 1 \pmod{N}$. It isn't necessary to test $q$ for primeness in this case; it immediately implies that the order of $a$ is less than $N-1$, and so it is a generator of the multiplicative group only if $N$ fails to be prime.
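A hedged Python sketch of the certificate checking described above: given the distinct prime factors of $N-1$ as part of the certificate, verifying that $a$ is a generator and that $a^L \equiv c \pmod N$ only needs modular exponentiation, which Python's built-in `pow` provides.

```python
def is_generator(a, N, prime_factors_of_N_minus_1):
    """Check that a generates the multiplicative group mod a prime N,
    given the distinct prime factors q of N-1 (supplied with the certificate)."""
    if pow(a, N - 1, N) != 1:
        return False
    return all(pow(a, (N - 1) // q, N) != 1 for q in prime_factors_of_N_minus_1)

def verify_dlog_certificate(a, c, N, b, L, prime_factors_of_N_minus_1):
    """Accept a 'YES' certificate: an exponent L with 1 <= L <= b and a^L = c (mod N)."""
    assert is_generator(a, N, prime_factors_of_N_minus_1)
    return 1 <= L <= b and pow(a, L, N) == c % N

# Example: N = 23 is prime, a = 5 is a generator (N-1 = 22 = 2 * 11), and 5^7 ≡ 17 (mod 23).
print(verify_dlog_certificate(5, 17, 23, 10, 7, [2, 11]))   # True
```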
{ "source": [ "https://cs.stackexchange.com/questions/2658", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/1667/" ] }
2,718
I was going through the discussion on the question How to define quantum Turing machines? and I feel that quantum TM and nondetermistic TM are one and the same. The answers to the other question do not touch on that. Are these two models one and the same? If no, What are the differences between quantum TM and NDTM? Is there any computation which a NDTM would do quicker than Quantum TM? If this is the case then quantum TM is a DTM, then why is there so much fuzz about this technology, we already have so many DTM. Why to design a new DTM in the end?
As a general preamble, QTMs, TMs and NTMs are all different things (taking huge liberties with a bunch of unspoken assumptions). I'll assume you know what a Turing Machine is. A NTM is a TM where, at any state, with any symbol, the transition function is allowed to have a number of choices of action that is not precisely $1$, i.e. $0$ or more than $1$ (a deterministic TM must have exactly one action for each symbol at each state, though the $0$ case is easy to deal with). When faced with a situation where there are several choices of transition a NTM will make the choice that will ultimately take it to an accepting state, if such an option exists. In contrast a QTM is a model of quantum computation, as detailed in the thread you linked. It is not nondeterministic, not all. Probably the key high level differences between a QTM and a TM is that a QTM has as its state a linear combination of the basis states (again, it's all in that other thread) and that it's probabilistic, that is, the accuracy of its ouput is bounded by some probability less than $1$ (broadly speaking). Just to be really really clear on a point that catches many people, nondeterminism is not randomness, it's not parallelism, it's a theoretical construct that has nothing to do with either of those. The full answer to this depends on some complexity theoretic assumptions. Taking a particular standpoint (that $QMA \supset BQP$ and $NP \supset P$), the answer is yes. $NP$-complete problems can be solved by a NTM in polynomial time, and it also seems that $NP\text{-complete} \cap BQP = \emptyset$, so they can't be solved by a QTM in polynomial time. Again, this is all dependent on which way the cards fall with a variety of complexity classes. If it turns out that $QMA = BQP$ then the answer is no, for example. The first thing to say here is to be careful about confusing TMs (of any kind) and computers. A TM is not a computer, a QTM is not a quantum computer. TMs (of any kind) model computation. What a given computer can do is governed by this, but this is quite different to saying that the thing I'm typing this on is a TM. Having said that, if we speak loosely and lazily identify QTMs with quantum computers and TMs with standard computers, then (again under certain complexity assumptions) it seems that quantum computers can quickly do certain tasks that seem hard for standard computers (factoring, discrete logs, a really particular type of searching, and a couple of others). However these problems aren't known to be hard in the $NP$-complete sense either, it seems quantum computers offer capabilities that extend a standard computer, but in a different direction to what would be needed to solve $NP$-complete problems quickly. Again just to be really clear, I've glossed over a lot of computational complexity here, if you really want to understand how everything fits together, you'll need start digging in to the literature.
{ "source": [ "https://cs.stackexchange.com/questions/2718", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/2137/" ] }
2,811
Summary: According to Rice's theorem, everything is impossible. And yet, I do this supposedly impossible stuff all the time! Of course, Rice's theorem doesn't simply say "everything is impossible". It says something rather more specific: "Every property of a computer program is non-computable." (If you want to split hairs, every "non-trivial" property. That is, properties which all programs possess or no programs possess are trivially computable. But any other property is non-computable.) That's what the theorem says, or appears to say. And presumably a great number of very smart people have carefully verified the correctness of this theorem. But it seems to completely defy logic! There are numerous properties of programs which are trivial to compute!! For example: How many steps does a program execute before halting? To decide whether this number is finite or infinite is precisely the Halting Problem, which is non-computable. To decide whether this number is greater or less than some finite $n$ is trivial! Just run the program for up to $n$ steps and see if it halts or not. Easy! Similarly, does the program use more or less than $n$ units of memory in its first $m$ execution steps? Trivially computable. Does the program text mention a variable named $k$? A trivial textual analysis will reveal the answer. Does the program invoke command $\sigma$? Again, scan the program text looking for that command name. I can see plenty of properties that do look non-computable as well; e.g., how many additions does a complete run of the program perform? Well, that's nearly the same as asking how many steps the program performs, which is virtually the Halting Problem. But it looks like there are boat-loads of program properties which are really, really easy to compute. And yet, Rice's theorem insists that none of them are computable. What am I missing here?
For the purposes of this discussion, a "program" is a piece of code which always takes an integer as an input, and either runs forever or returns an integer. We say that two programs $f$ and $g$ are extensionally equal if they compute the same function, i.e., for every number $n$ either both $f(n)$ and $g(n)$ run forever, or they both terminate and output the same number. An extensional property of programs is a property $P$ that respects extensional equality, i.e., if $f$ and $g$ are extensionally equal then they either both have the property $P$ or both do not have it. Here are some examples of non -extensional properties: The program halts within $n$ steps. (We can always modify a program to an extensionally equal one that runs longer.) The program uses fewer than $n$ memory cells within the first $m$ steps of execution. (We can always modify a program to an extensionally equal one so that it uses up some memory for no good reason.) The program text mentions a variable named k . (We can rename variables.) Does the program invoke command $\sigma$. This may depend a bit on what $\sigma$ is, but if it is something that can be simulated in some way, then we can evade $\sigma$ and still have a program which is extensionally equal to the original one. I am sure you have noticed that I listed precisely your alleged counter-examples to Rice's theorem, which says: Theorem (Rice): A computable extensional property of programs either holds of all programs or of none. There is another way to explain this: you have to distinguish between a program and the function it computes. Many different programs compute the same function (they are all extensionally equal). Rice's theorem is about properties of functions, not about properties of programs that compute them.
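A tiny Python illustration of the program/function distinction made above: the two functions below are extensionally equal, yet they disagree on a syntactic (non-extensional, and trivially decidable) property such as "the source code mentions a variable named k".

```python
import inspect

def f(n):
    return n + n

def g(n):
    k = 2            # a different program: uses a variable named k
    return k * n

# Extensionally equal (here checked on a few inputs; in fact on all integers):
print(all(f(n) == g(n) for n in range(100)))              # True

# A non-extensional, purely syntactic property is easy to decide from the text...
mentions_k = lambda fn: "k" in inspect.getsource(fn)
print(mentions_k(f), mentions_k(g))                       # False True

# ...whereas Rice's theorem is about extensional properties, e.g.
# "the computed function is identically zero", which no algorithm can decide
# for arbitrary programs.
```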
{ "source": [ "https://cs.stackexchange.com/questions/2811", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/1951/" ] }
2,832
In this answer it is mentioned A regular language can be recognized by a finite automaton. A context-free language requires a stack, and a context sensitive language requires two stacks (which is equivalent to saying it requires a full Turing machine) . I wanted to know regarding the truth of the bold part above. Is it in fact true or not? What is a good way to reach at an answer to this?
Two bits to this answer; Firstly, the class of languages recognised by Turing Machines is not context sensitive , it's recursively enumerable (context sensitive is the class of languages you get from linear bounded automata ). The second part, assuming we adjust the question, is that yes, a two-stack PDA is as powerful as a TM. It's mildly simpler to assume that we're using the model of TMs that has a tape that's infinite in one direction only (though both directions is not much harder, and equivalent). To see the equivalence, just think of the first stack as the contents of the tape to the left of the current position, and the second as the contents to the right. You start off like so: Push the normal "bottom of stack" markers on both stacks. Push the input to the left stack (use non-determinism to "guess" the end of the input). Move everything to the right stack (to keep things in the proper order). Now you can ignore the input and do everything on the contents of the stacks (which is simulating the tape). You pop to read and push to write (so you can change the "tape" by pushing something different to what you read). Then we can simulate the TM by popping from the right stack and pushing to the left to move right, and vice versa to move left. If we hit the bottom of the left stack we behave accordingly (halt and reject, or stay where you are, depending on the model), if we hit the bottom of the right stack, we just push a blank symbol onto the left. For a full formal proof, see an answer to another question . The relationship the other way should be even more obvious, i.e. that we can simulate a two-stack PDA with a TM.
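The tape-as-two-stacks idea is easy to prototype. Below is a rough Python sketch that runs a one-tape TM using two Python lists as the stacks (cells left of the head, and the head cell plus cells to its right); for simplicity it extends the tape with blanks on demand on both sides. The TM encoding and the toy machine are arbitrary choices for the example.

```python
BLANK = "_"

def run_tm_with_two_stacks(delta, start, accept, reject, word):
    """delta: dict (state, symbol) -> (new_state, written_symbol, 'L' or 'R')."""
    left = []                       # cells to the left of the head (top = cell just left of head)
    right = list(reversed(word))    # head cell on top, then the rest of the tape to the right
    state = start
    while state not in (accept, reject):
        sym = right.pop() if right else BLANK
        state, write, move = delta[(state, sym)]
        if move == "R":
            left.append(write)                              # written cell goes to the left stack
        else:                                               # move == "L"
            right.append(write)                             # put the written cell back...
            right.append(left.pop() if left else BLANK)     # ...with the cell left of it on top
    return state == accept

# A toy TM over {0,1} that accepts exactly the words ending in 1:
delta = {
    ("scan", "0"): ("scan", "0", "R"),
    ("scan", "1"): ("scan", "1", "R"),
    ("scan", BLANK): ("back", BLANK, "L"),
    ("back", "1"): ("acc", "1", "R"),
    ("back", "0"): ("rej", "0", "R"),
    ("back", BLANK): ("rej", BLANK, "R"),
}
print(run_tm_with_two_stacks(delta, "scan", "acc", "rej", "1011"))  # True
print(run_tm_with_two_stacks(delta, "scan", "acc", "rej", "10"))    # False
```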
{ "source": [ "https://cs.stackexchange.com/questions/2832", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/1558/" ] }
2,834
So, TSP (Travelling salesman problem) decision problem is NP complete . But I do not understand how I can verify that a given solution to TSP is in fact optimal in polynomial time, given that there is no way to find the optimal solution in polynomial time (which is because the problem is not in P)? Anything that might help me see that the verification can in fact be done in polynomial time?
To be more precise, we do not know if TSP is in $\mathsf{P}$. It is possible that it can be solved in polynomial time, even though perhaps the common belief is that $\mathsf{P} \neq \mathsf{NP}$. Now, recall what it means for a problem to be $\mathsf{NP}$-hard and $\mathsf{NP}$-complete, see for example my answer here . I believe your source of confusion stems from the definitions: an $\mathsf{NP}$-hard problem is not necessarily in $\mathsf{NP}$. As you and the Wikipedia page you link state, the decision problem is $\mathsf{NP}$-complete: given the costs and an integer $x$, decide whether there is a tour cheaper than $x$ . One way of seeing that the problem is in $\mathsf{NP}$ is to note that given a solution, it is easy to verify in polynomial time whether the solution is cheaper than $x$. How can you do this? Just follow the tour given, record its total cost and finally compare the total cost to $x$.
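A minimal Python sketch of that verification step: given a cost matrix, a bound $x$ and a proposed tour, checking the certificate is a single linear pass, i.e. clearly polynomial time. The cost matrix used here is made up for the example.

```python
def verify_tsp_certificate(cost, x, tour):
    """cost: n x n matrix of edge costs; tour: a permutation of range(n).
    Accept iff the tour visits every city exactly once and its total cost is < x."""
    n = len(cost)
    if sorted(tour) != list(range(n)):
        return False
    total = sum(cost[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return total < x

cost = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(verify_tsp_certificate(cost, 22, [0, 1, 3, 2]))   # tour costs 33 -> False
print(verify_tsp_certificate(cost, 22, [0, 1, 2, 3]))   # tour costs 22, not < 22 -> False
print(verify_tsp_certificate(cost, 25, [0, 1, 2, 3]))   # True
```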
{ "source": [ "https://cs.stackexchange.com/questions/2834", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/1558/" ] }
2,869
In a recent CACM article [1], the authors present an implementation for staged functions . They use the term as if it was well-known, and none of the references looks like an obvious introduction. They give a short explanation (emphasis mine and reference number changed; it's 22 in the original) In the context of program generation, multistage programming (MSP, staging for short) as established by Taha and Sheard [2] allows programmers to explicitly delay evaluation of a program expression to a later stage (thus, staging an expression). The present stage effectively acts as a code generator that composes (and possibly executes) the program of the next stage. However, Taha and Sheard write (emphasis mine): A multi-stage program is one that involves the generation, compilation, and execution of code, all inside the same process. Multi-stage languages express multi-stage programs. Staging, and consequently multi-stage programming, address the need for general purpose solutions which do not pay run-time interpretive overheads. They then go on to several references to older work allegedly showing that staging is effective, which suggests that the concept is even older. They don't give a reference for the term itself. These statements seem to be orthogonal, if not contradictory; maybe what Rompf and Odersky write is an application of what Taha and Sheard propose, but maybe it is another perspective on the same thing. They seem to agree that an important point is that programs (re)write parts of themselves at runtime, but I do not know whether that is a necessary and/or sufficient ability. So, what is staging, and what are the interpretations of staging in this context? Where does the term come from? Lightweight Modular Staging: A Pragmatic Approach to Runtime Code Generation and Compiled DSLs by T. Rompf and M. Odersky (2012) MetaML and multi-stage programming with explicit annotations by W. Taha and T. Sheard (2000)
To the best of my knowledge, the term staged computation was first used by Bill Scherlis in this paper . Prior to that, the term " partial evaluation " was used for much the same concept, but the idea of staged computation is subtly different. Both the ideas are related to Kleene's S-m-n theorem . If you have a function $\phi(m,n)$ of two arguments, but you know one argument, say $m$, then you can perform some of the computation of the function right away using the knowledge of the first argument. What you are then left with is a function $\phi_m(n)$ whose computations only depend on the second, unknown, argument. The idea of partial evaluation is to compute the specialized function $\phi_m(n)$ automatically . Given the code for the original function $\phi$, partial evaluation does static analysis to determine which bits of the code depend on $m$ and which bits depend on $n$, and transforms it to a function $\phi'$ which, given $m$, constructs $\phi_m$. The second argument $n$ can then be fed to this specialized function. The idea of staged computation is to think about the function $\phi'$ first. It is called a "staged" function because it works in multiple stages. Once we give it the first argument $m$, it constructs the code for the specialized function $\phi_m$. This is the "first stage." In the second stage, the second argument is provided to $\phi_m$ which does the rest of the job. So, the job of partial evaluation is to transform the code for an ordinary function $\phi$ to a staged function $\phi'$. Scherlis envisaged that this transformation could be done by more general mechanisms than the earlier partial evaluation methods. The subject of "staged computation" now deals with issues such as: How to define staged functions? What programming languages and type systems should be used for defining staged functions? What is the semantics of such languages? How do we ensure the coherence and correctness of staged functions? What techniques are useful for automatically or semi-automatically constructing staged functions? How do we prove the correctness of such techniques? Staged computation can be very important in practice. In fact, every compiler is in effect a staged computation. Given a source program, it constructs a translated and optimized target program, which can then take the actual input and calculate the result. It is hard to write staged computation programs in practice because we have to juggle the multiple stages and make sure that the right things are done at the right time. Everybody who has written a compiler has struggled with such issues. It is also hard to write programs that write other programs, may they be machine language programs (compilers), SQL queries (database manipulations) or HTML/Server Pages/Javascript code (web applications) and myriads of other applications. The researchers in staged computation aim to create good languages and tools that make it easier and safer to create such applications.
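The classic small example of staging in this sense, sketched here in Python rather than a proper multi-stage language: the two-argument power function $\phi(m, n) = n^m$ is specialized on its first argument, so the first stage generates (and compiles, via `exec`) the residual function $\phi_m$ that only takes $n$. The generated function names are of course arbitrary.

```python
def power(m, n):                  # the unstaged φ(m, n) = n ** m
    result = 1
    for _ in range(m):
        result *= n
    return result

def power_staged(m):
    """First stage: given m, generate source code for the specialized function φ_m."""
    body = " * ".join(["n"] * m) if m > 0 else "1"
    src = f"def power_{m}(n):\n    return {body}\n"
    namespace = {}
    exec(src, namespace)           # 'compile' the generated program within the same process
    return src, namespace[f"power_{m}"]

src, power_3 = power_staged(3)
print(src)                         # def power_3(n): return n * n * n
print(power_3(5), power(3, 5))     # 125 125  -- the second stage agrees with the unstaged φ
```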
{ "source": [ "https://cs.stackexchange.com/questions/2869", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/98/" ] }
2,907
I am trying to understand clustering methods. What I I think I understood: In supervised learning, the categories/labels data is assigned to are known before computation. So, the labels, classes or categories are being used in order to "learn" the parameters that are really significant for those clusters. In unsupervised learning, datasets are assigned to segments, without the clusters being known. Does that mean that, if I don't even know which parameters are crucial for a segmentation, I should prefer supervised learning?
The difference is that in supervised learning the "categories", "classes" or "labels" are known. In unsupervised learning, they are not, and the learning process attempts to find appropriate "categories". In both kinds of learning all parameters are considered to determine which are most appropriate to perform the classification. Whether you choose supervised or unsupervised should be based on whether or not you know what the "categories" of your data are. If you know, use supervised learning. If you do not know, then use unsupervised. As you have a large number of parameters and you do not know which ones are relevant, you could use something like principal component analysis to help determine the relevant ones.
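If scikit-learn happens to be available, the contrast can be shown in a few lines: the classifier needs the labels, the clustering algorithm does not, and PCA reports how much variance each principal direction explains, which helps judge which (combinations of) parameters matter. This is only a sketch of the workflow, not a recommendation of these particular models.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)

# Supervised: the labels y are known and are used during training.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Unsupervised: only X is given; the algorithm invents its own 3 "categories".
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X)

# PCA: how much variance each principal direction carries.
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)
```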
{ "source": [ "https://cs.stackexchange.com/questions/2907", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/2266/" ] }
2,973
The 3SUM problem tries to identify 3 integers $a,b,c$ from a set $S$ of size $n$ such that $a + b + c = 0$. It is conjectured that there is no better solution than quadratic, i.e. $\mathcal{o}(n^2)$. Or to put it differently: $\mathcal{o}(n \log(n) + n^2)$. So I was wondering if this would apply to the generalised problem: Find integers $a_i$ for $i \in [1..k]$ in a set $S$ of size $n$ such that $\sum_{i \in [1..k]} a_i = 0$. I think you can do this in $\mathcal{o}(n \log(n) + n^{k-1})$ for $k \geq 2$ (it's trivial to generalise the simple $k=3$ algorithm). But are there better algorithms for other values of $k$?
$k$ -SUM can be solved more quickly as follows. For even $k$ : Compute a sorted list $S$ of all sums of $k/2$ input elements. Check whether $S$ contains both some number $x$ and its negation $-x$ . The algorithm runs in $O(n^{k/2}\log n)$ time. For odd $k$ : Compute the sorted list $S$ of all sums of $(k-1)/2$ input elements. For each input element $a$ , check whether $S$ contains both $x$ and $-a-x$ , for some number $x$ . (The second step is essentially the $O(n^2)$ -time algorithm for 3SUM.) The algorithm runs in $O(n^{(k+1)/2})$ time. Both algorithms are optimal (except possibly for the log factor when $k$ is even and bigger than $2$ ) for any constant $k$ in a certain weak but natural restriction of the linear decision tree model of computation. For more details, see: Nir Ailon and Bernard Chazelle. Lower bounds for linear degeneracy testing . JACM 2005. Jeff Erickson. Lower bounds for linear satisfiability problems . CJTCS 1999.
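An unoptimized Python rendering of the meet-in-the-middle idea: enumerate the sums of roughly half-size index subsets, store them in a hash table, and look for two disjoint halves whose sums cancel. It uses a dictionary instead of the sorted list in the answer, and stores index sets so that an input element is not reused in both halves.

```python
from itertools import combinations

def k_sum(S, k):
    """Return a tuple of k indices into S whose elements sum to 0, or None."""
    idx = range(len(S))
    half = {}
    for c in combinations(idx, k // 2):
        half.setdefault(sum(S[i] for i in c), []).append(set(c))
    for c in combinations(idx, k - k // 2):
        target = -sum(S[i] for i in c)
        for other in half.get(target, []):
            if other.isdisjoint(c):
                return tuple(sorted(other | set(c)))
    return None

S = [8, -25, 4, 10, 3, -7, 12]
print(k_sum(S, 3))   # (2, 4, 5): S[2] + S[4] + S[5] = 4 + 3 - 7 = 0
```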
{ "source": [ "https://cs.stackexchange.com/questions/2973", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/26/" ] }
3,019
A few years ago, MapReduce was hailed as revolution of distributed programming. There have also been critics but by and large there was an enthusiastic hype. It even got patented! [1] The name is reminiscent of map and reduce in functional programming, but when I read (Wikipedia) Map step: The master node takes the input, divides it into smaller sub-problems, and distributes them to worker nodes. A worker node may do this again in turn, leading to a multi-level tree structure. The worker node processes the smaller problem, and passes the answer back to its master node. Reduce step: The master node then collects the answers to all the sub-problems and combines them in some way to form the output – the answer to the problem it was originally trying to solve. or [2] Internals of MAP: [...] MAP splits up the input value into words. [...] MAP is meant to associate each given key/value pair of the input with potentially many intermediate key/value pairs. Internals of REDUCE: [...] [REDUCE] performs imperative aggregation (say, reduction): take many values, and reduce them to a single value. I can not help but think: this is divide & conquer (in the sense of Mergesort), plain and simple! So, is there (conceptual) novelty in MapReduce somewhere, or is it just a new implementation of old ideas useful in certain scenarios? US Patent 7,650,331: "System and method for efficient large-scale data processing " (2010) Google’s MapReduce programming model — Revisited by R. Lämmel (2007)
I can not help but think: this is divide & conquer, plain and simple! M/R is not divide & conquer. It does not involve the repeated application of an algorithm to a smaller subset of the previous input. It's a pipeline (a function specified as a composition of simpler functions) where pipeline stages are alternating map and reduce operations. Different stages can perform different operations. So, is there (conceptual) novelty in MapReduce somewhere, or is it just a new implementation of old ideas useful in certain scenarios? MapReduce does not break new ground in the theory of computation -- it does not show a new way of decomposing a problem into simpler operations. It does show that particular simpler operations are practical for a particular class of problem. The MapReduce paper's contribution was evaluating a pipeline of two well understood orthogonal operators that can be distributed efficiently and fault-tolerantly on a particular problem: creating a text index of large corpus benchmarking map-reduce on that problem to show how much data is transferred between nodes and how latency differences in stages affect overall latency showing how to make the system fault tolerant so machine failures during computation can be compensated for automatically identifying specific useful implementation choices and optimizations Some of the critiques fall into these classes: "Map/reduce does not break new ground in theory of computation." True. The original paper's contribution was that these well-understood operators with a specific set of optimizations had been successfully used to solve real problems more easily and fault-tolerantly than one-off solutions. "This distributed computation doesn't easily decompose into map & reduce operations". Fair enough, but many do. "A pipeline of n map/reduce stages require latency proportional to the number of reduce steps of the pipeline before any results are produced." Probably true. The reduce operator does have to receive all its input before it can produce a complete output. "Map/reduce is overkill for this use-case." Maybe. When engineers find a shiny new hammer, they tend to go looking for anything that looks like a nail. That doesn't mean that the hammer isn't a well-made tool for a certain niche. "Map/reduce is a poor replacement for a relational DB." True. If a relational DB scales to your data-set then wonderful for you -- you have options.
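A toy word count in Python showing just the map/reduce decomposition of the pipeline, without the distribution, shuffling and fault tolerance that the actual systems provide (which is where the real contribution lies):

```python
from collections import defaultdict

def map_phase(documents):
    """map: turn each document into intermediate (key, value) pairs -- here (word, 1)."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    """Group intermediate pairs by key (done by the framework between the two phases)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups.items()

def reduce_phase(groups):
    """reduce: aggregate all values of one key into a single value."""
    return {key: sum(values) for key, values in groups}

docs = ["to be or not to be", "to see or not to see"]
print(reduce_phase(shuffle(map_phase(docs))))
# {'to': 4, 'be': 2, 'or': 2, 'not': 2, 'see': 2}
```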
{ "source": [ "https://cs.stackexchange.com/questions/3019", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/98/" ] }
3,028
I'm learning Haskell and I'm fascinated by the language. However I have no serious math or CS background. But I am an experienced software programmer. I want to learn category theory so I can become better at Haskell. Which topics in category theory should I learn to provide a good basis for understanding Haskell?
In a previous answer in the Theoretical Computer Science site , I said that category theory is the "foundation" for type theory. Here, I would like to say something stronger. Category theory is type theory . Conversely, type theory is category theory . Let me expand on these points. Category theory is type theory In any typed formal language, and even in normal mathematics using informal notation, we end up declaring functions with types $f : A \to B$. Implicit in writing that is the idea that $A$ and $B$ are some things called "types" and $f$ is a "function" from one type to another. Category theory is the algebraic theory of such "types" and "functions". (Officially, category theory calls them "objects" and "morphisms" so as to avoid treading on the set-theoretic toes of the traditionalists, but increasingly I see category theorists throwing such caution to the wind and using the more intuitive terms: "type" and "function". But, be prepared for protests from the traditionalists when you do so.) We have all been brought up on set theory from high school onwards. So, we are used to thinking of types such as $A$ and $B$ as sets, and functions such as $f$ as set-theoretic mappings. If you never thought of them that way, you are in good shape. You have escaped set-theoretic brain-washing. Category theory says that there are many kinds of types and many kinds of functions. So, the idea of types as sets is limiting. Instead, category theory axiomatizes types and functions in an algebraic way. Basically, that is what category theory is. A theory of types and functions. It does get quite sophisticated, involving high levels of abstraction. But, if you can learn it, you will acquire a deep understanding of types and functions. Type theory is category theory By "type theory," I mean any kind of typed formal language, based on rigid rules of term-formation which make sure that everything type checks. It turns out that, whenever we work in such a language, we are working in a category-theoretic structure. Even if we use set-theoretic notations and think set-theoretically, still we end up writing stuff that makes sense categorically. That is an amazing fact . Historically, Dana Scott may have been the first to realize this. He worked on producing semantic models of programming languages based on typed (and untyped) lambda calculus. The traditional set-theoretic models were inadequate for this purpose, because programming languages involve unrestricted recursion which set theory lacks. Scott invented a series of semantic models that captured programming phenomena, and came to the realization that typed lambda calculus exactly represented a class of categories called cartesian closed categories . There are plenty of cartesian closed categories that are not "set-theoretic". But typed lambda calculus applies to all of them equally. Scott wrote a nice essay called " Relating theories of lambda calculus " explaining what is going on, parts of which seem to be available on the web. The original article was published in a volume called "To H. B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism", Academic Press, 1980. Berry and Curien came to the same realization, probably independently. They defined a categorical abstract machine (CAM) to use these ideas in implementing functional languages, and the language they implemented was called "CAML" which is the underlying framework of Microsoft's F# . Standard type constructors like $\times$, $\to$, $List$ etc. are functors . 
That means that they not only map types to types, but also functions between types to functions between types. Polymorphic functions preserve all such functions resulting from functor actions. Category theory was invented in 1950's by Eilenberg and MacLane precisely to formalize the concept of polymorphic functions. They called them "natural transformations", "natural" because they are the only ones that you can write in a type-correct way using type variables. So, one might say that category theory was invented precisely to formalize polymorphic programming languages, even before programming languages came into being! A set-theoretic traditionalist has no knowledge of the functors and natural transformations that are going on under the surface when he uses set-theoretic notations. But, as long as he is using the type system faithfully, he is really doing categorical constructions without being aware of them. All said and done, category theory is the quintessential mathematical theory of types and functions. So, all programmers can benefit from learning a bit of category theory, especially functional programmers. Unfortunately, there do not seem to be any text books on category theory targeted at programmers specifically. The "category theory for computer science" books are typically targeted at theoretical computer science students/researchers. The book by Benjamin Pierce, Basic category theory for computer scientists is perhaps the most readable of them. However, there are plenty of resources on the web, which are targeted at programmers. The Haskellwiki page can be a good starting point. At the Midlands Graduate School , we have lectures on category theory (among others). Graham Hutton's course was pegged as a "beginner" course, and mine was pegged as an "advanced" course. But both of them cover essentially the same content, going to different depths. University of Chalmers has a nice resource page on books and lecture notes from around the world. The enthusiastic blog site of "sigfpe" also provides a lot of good intuitions from a programmer's point of view. The basic topics you would want to learn are: definition of categories, and some examples of categories functors, and examples of them natural transformations, and examples of them definitions of products, coproducts and exponents (function spaces), initial and terminal objects. adjunctions monads, algebras and Kleisli categories My own lecture notes in the Midlands Graduate School covers all these topics except for the last one (monads). There are plenty of other resources available for monads these days. So that is not a big loss. The more mathematics you know, the easier it would be to learn category theory. Because category theory is a general theory of mathematical structures, it is helpful to know some examples to appreciate what the definitions mean. (When I learnt category theory, I had to make up my own examples using my knowledge of programming language semantics, because the standard text books only had mathematical examples, which I didn't know anything about.) Then came the brilliant book by Lambek and Scott called " Introduction to categorical logic " which related category theory to type systems (what they call "logic"). It is now possible to understand category theory just by relating it to type systems even without knowing a lot of examples. A lot of the resources I mentioned above use this approach to explain category theory.
{ "source": [ "https://cs.stackexchange.com/questions/3028", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/-1/" ] }
3,078
Can anyone suggest a linear-time algorithm that takes as input a directed acyclic graph $G=(V,E)$ and two vertices $s$ and $t$, and returns the number of simple paths from $s$ to $t$ in $G$? I have an algorithm in which I run a DFS (depth-first search), but if the DFS finds $t$ then it does not change the color (from white to grey) of any of the nodes on the path $s \rightsquigarrow t$, so that if this is a subpath of any other path then the DFS can go through this subpath again. For example, consider the adjacency list where we need to find the number of paths from $p$ to $v$. $$\begin{array}{|c|c c c|} \hline p &o &s &z \\ \hline o &r &s &v\\ \hline s &r \\ \hline r &y \\ \hline y &v \\ \hline v &w \\ \hline z & \\ \hline w &z \\ \hline \end{array}$$ Here DFS will start with $p$, and let's say it first goes $p \rightsquigarrow z$; since it does not encounter $v$, DFS runs normally. The second path is $psryv$; since it encounters $v$, we do not change the color of the vertices $s,r,y,v$ to grey. Then comes the path $pov$, since the color of $v$ is still white; then the path $posryv$, since the color of $s$ is white; and similarly the path $poryv$. A counter is maintained which gets incremented whenever $v$ is encountered. Is my algorithm correct? If not, what modifications are needed to make it correct? Any other approaches will be greatly appreciated. Note: I am using the DFS algorithm given in the book "Introduction to Algorithms" by Cormen et al., which colors the nodes according to their status: if a node is unvisited, discovered (but not fully explored), or fully explored, then its color is white, grey or black respectively. All other things are standard.
Your current implementation will compute the correct number of paths in a DAG. However, by not marking paths it will take exponential time. For example, in the illustration from the original answer (not reproduced here), each stage of the DAG increases the total number of paths by a multiple of 3. This exponential growth can be handled with dynamic programming. Computing the number of $s$-$t$ paths in a DAG is given by the recurrence, $$\text{Paths}(u) = \begin{cases} 1 & \text{if } u = t \\ \sum_{(u,v) \in E} \text{Paths}(v) & \text{otherwise.}\\ \end{cases}$$ A simple modification of DFS will compute this, given as

    def dfs(u, t):
        if u == t:
            return 1
        else:
            if not u.npaths:
                # assume sum returns 0 if u has no children
                u.npaths = sum(dfs(c, t) for c in u.children)
            return u.npaths

It is not difficult to see that each edge is looked at only once, hence a runtime of $O(V + E)$.
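In case a self-contained, runnable version is useful, here is a sketch of the same memoised DFS applied to the adjacency list from the question; the dictionary-based graph encoding, the memo dictionary and the function name are my own choices, not part of the original answer.

    # Sketch: count s-t paths in a DAG with memoised DFS, O(V + E).
    # The graph below is the adjacency list from the question.
    graph = {
        'p': ['o', 's', 'z'], 'o': ['r', 's', 'v'], 's': ['r'], 'r': ['y'],
        'y': ['v'], 'v': ['w'], 'z': [], 'w': ['z'],
    }

    def count_paths(u, t, memo=None):
        if memo is None:
            memo = {}
        if u == t:
            return 1
        if u not in memo:
            memo[u] = sum(count_paths(c, t, memo) for c in graph[u])
        return memo[u]

    print(count_paths('p', 'v'))  # 4: pov, poryv, posryv, psryv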
{ "source": [ "https://cs.stackexchange.com/questions/3078", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/778/" ] }
3,101
Let $F = \{⟨M⟩:\text{M is a TM which stops for every input in at most 50 steps}\}$. I need to decide whether F is decidable or recursively enumerable. I think it's decidable, but I don't know how to prove it. My thoughts This "50 steps" part immediate turns the R sign for me. If it was for specific input it would be decidable. However, here it's for every input. Checking it for infinite inputs makes me think that the problem is co-RE , i.e. its complement is acceptable. Perhaps, I can check the configurations and see that all configurations after 50 steps don't lead to accept state- how do I do that?
Let's consider the more general problem of machines which stop after at most $N$ steps, for some $N \geqslant 1$. (The following is a substantial simplification of a previous version of this answer, but is effectively equivalent.) As swegi remarks in an earlier response, if the machine stops after at most $N$ steps, then only the cells $0,1,\ldots,N-1$ on the tape are significant. Then it suffices to simulate the machine $M$ on all input strings of the form $x \in \Sigma^N$, of which there are a finite number. If any of these simulations fails to enter a halting state by the $N^{\text{th}}\:\!$ transition, this indicates that any input string starting with $x$ is one for which the machine does not stop within the first $N$ steps. If all of these simulations halt by the $N^{\text{th}}\:\!$ transition, then $M$ halts within $N$ steps on all inputs of any length (of which the substring of length $N$ is all that it ever acts on).
{ "source": [ "https://cs.stackexchange.com/questions/3101", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/1183/" ] }
3,149
This is a basic question, but I'm thinking that $O(m+n)$ is the same as $O(\max(m,n))$, since the larger term should dominate as we go to infinity? Also, that would be different from $O(\min(m,n))$. Is that right? I keep seeing this notation, especially when discussing graph algorithms. For example, you routinely see: $O(|V| + |E|)$ (e.g. see here ).
You are right. Notice that the term $O(n+m)$ slightly abuses the classical big-O Notation , which is defined for functions in one variable. However there is a natural extension for multiple variables. Simply speaking, since $$ \frac{1}{2}(m+n) \le \max\{m,n\} \le m+n \le 2 \max\{m,n\},$$ you can deduce that $O(n+m)$ and $O(\max\{m,n\})$ are equivalent asymptotic upper bounds. On the other hand $O(n+m)$ is different from $O(\min\{n,m\})$, since if you set $n=2^m$, you get $$O(2^m+m)=O(2^m) \supsetneq O(m)=O(\min\{2^m,m\}).$$
{ "source": [ "https://cs.stackexchange.com/questions/3149", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/9667/" ] }
3,209
What is the difference between a programming language and a scripting language? For example, consider C versus Perl. Is the only difference that scripting languages require only an interpreter and don't require compilation and linking?
I think the difference has a lot more to do with the intended use of the language. For example, Python is interpreted, and doesn't require compiling and linking, as is Prolog. I would classify both of these as programming languages. Programming languages are meant for writing software. They are designed to manage large projects. They can probably call programs, read files, etc., but might not be quite as good at that as a scripting language. Scripting languages aren't meant for large-scale software development. Their syntax, features, libraries, etc. are focused more around accomplishing small tasks quickly. This means they are sometimes more "hackish" than programming languages, and might not have all of the same nice features. They're designed to make commonly performed tasks, like iterating through a bunch of files or performing sysadmin tasks, easy to automate. For example, Bash doesn't do arithmetic nicely, which would probably make writing large-scale software in it a nightmare. As a kind of benchmark: I would never write a music player in Perl, even though I probably could. Likewise, I would never try to use C++ to rename all the files in a given folder. This line is becoming blurrier and blurrier. JavaScript, by definition a "scripting" language, is increasingly used to develop "web apps", which are more in the realm of software. Likewise, Python initially fit many of the traits of a scripting language but is seeing more and more software developed using Python as the primary platform.
{ "source": [ "https://cs.stackexchange.com/questions/3209", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/2502/" ] }
3,227
I am looking for an algorithm to generate an array of N random numbers, such that the sum of the N numbers is 1, and all numbers lie within 0 and 1. For example, N=3, the random point (x, y, z) should lie within the triangle: x + y + z = 1 0 < x < 1 0 < y < 1 0 < z < 1 Ideally I want each point within the area to have equal probability. If it's too hard, I can drop the requirement. Thanks.
Let us first assume that you want to sample within x + y + z = 1, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, 0 ≤ z ≤ 1. This hardly makes a difference, since the sample point will still lie in your requested (open) region with probability 1. Now you are left with sampling a point from a simplex . In the 3d example you get a 2d simplex (triangle) realized in 3d. How to pick a point uniformly at random was discussed in this blog post (see the comments). For your problem it would mean that you take $n-1$ random numbers from the interval $(0,1)$, then you add a $0$ and $1$ to get a list of $n+1$ numbers. You sort the list and then you record the differences between two consecutive elements. This gives you a list of $n$ numbers that will sum up to $1$. Moreover this sampling is uniform. This idea can be found in Donald B. Rubin, The Bayesian bootstrap, Ann. Statist. 9, 1981, 130-134. For example ($n=4$), if you have the three random numbers 0.4 0.2 0.1 then you obtain the sorted sequence 0 0.1 0.2 0.4 1 and this gives the differences 0.1 0.1 0.2 0.6, and by construction these four numbers sum up to 1. Another approach is the following: first sample from the hypercube (that is, you forget about x+y+z=1) and then normalize the sample point. The normalization is a projection from the $d$-hypercube to the $d-1$-simplex. It should be intuitively clear that the points at the center of the simplex have more "pre-image points" than the points at the outside. Hence, if you sample uniformly from the hypercube, this won't give you a uniform sampling in the simplex. However, if you sample from the hypercube with an appropriate exponential distribution, then this effect cancels out. The figure in the original post (not reproduced here) gives you an idea of how both methods sample. However, I prefer the "sorting" method due to its simple form. It's also easier to implement.
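For concreteness, here is a small Python sketch of both approaches described above; the function names are mine, and the second function uses the standard exponential-and-normalise trick as my reading of the "appropriate exponential distribution" remark.

    import random

    def sample_simplex_sorting(n):
        """Uniform point on {x >= 0, sum(x) = 1} via the 'sorting' method."""
        cuts = sorted([0.0, 1.0] + [random.random() for _ in range(n - 1)])
        return [b - a for a, b in zip(cuts, cuts[1:])]

    def sample_simplex_exponential(n):
        """Same distribution: draw n exponentials and normalise their sum to 1."""
        e = [random.expovariate(1.0) for _ in range(n)]
        s = sum(e)
        return [x / s for x in e]

    p = sample_simplex_sorting(3)
    print(p, sum(p))  # three non-negative numbers summing to 1 (up to rounding)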
{ "source": [ "https://cs.stackexchange.com/questions/3227", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/2553/" ] }
3,251
I'm stuck on the following question: "Regular languages are precisely those accepted by finite automata. Given this fact, show that if the language $L$ is accepted by some finite automaton, then $L^{R}$ is also accepted by some finite automaton; $L^{R}$ consists of all words of $L$ reversed."
So given a regular language $L$ , we know (essentially by definition) that it is accepted by some finite automaton, so there's a finite set of states with appropriate transitions that take us from the starting state to the accepting state if and only if the input is a string in $L$ . We can even insist that there's only one accepting state, to simplify things. Then, to accept the reverse language, all we need to do is reverse the direction of the transitions, change the start state to an accept state, and the accept state to the start state. Then we have a machine that is "backwards" compared to the original, and accepts the language $L^{R}$ .
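If it helps to see the construction concretely, here is a rough Python sketch; the dictionary representation of the automaton is my own choice, and note that reversing the arrows in general produces a nondeterministic automaton even if you start from a deterministic one.

    def reverse_automaton(transitions, start, accept):
        """Reverse every transition and swap the start and (single) accept state.

        transitions: dict mapping (state, symbol) -> set of successor states.
        Returns the reversed transition relation and the new start/accept states.
        """
        reversed_t = {}
        for (state, symbol), successors in transitions.items():
            for succ in successors:
                reversed_t.setdefault((succ, symbol), set()).add(state)
        return reversed_t, accept, start  # old accept becomes start, old start becomes accept

    # Example: an automaton accepting 'ab'; its reversal accepts 'ba'.
    t = {('q0', 'a'): {'q1'}, ('q1', 'b'): {'q2'}}
    rt, new_start, new_accept = reverse_automaton(t, 'q0', 'q2')
    print(rt)  # {('q1', 'a'): {'q0'}, ('q2', 'b'): {'q1'}}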
{ "source": [ "https://cs.stackexchange.com/questions/3251", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/2576/" ] }
3,390
I have been a computer nerd for many, many years. I can program in quite a few languages, and I can even build them. I sat down with a buddy the other day and asked how a computer actually takes electricity and does something with it, and we just couldn't figure it out, and Google wasn't much help either. I mean, how does a computer take a constant flow of electricity and turn it into 1's and 0's and then actually do something with those 1's and 0's like turn a light on for 15 seconds? I understand gates (AND, OR, NOR, NAND, NOT) and a little about diodes , resistors and transistors , but I figured this would be the perfect place to have it explained in true layman's terms! Can anybody point me in the right direction or give me a brief explanation?
This is a broad question that does not have an easy answer; it's a long way from electrons skittering along copper wires to rendering a website in Firefox. I will attempt to give you an overview from bottom to top and point you towards the right things to look up. Encoding Numbers The basic motivation is to compute things, as in doing arithmetic¹. The first thing to look at is how to represent numbers. There have been many approaches, using decimal or ternary and I think even octal systems, but in the end, binary won out. Now we know we have to build devices that deal with two values -- let's call them $0$ and $1$. Note that there are also multiple ways to encode numbers in binary. After you build up your first processor, you realise advantages of doing things in certain ways. Popular examples are two's complement and IEEE floats . For starters, restrict yourself to plain natural numbers. Gates Assume we use binary encoding. Think of how you learned adding in primary school and write down the same for binary numbers. As it turns out, the building blocks of Boolean algebra are already there for you; it is easy to build a basic adder (and other arithmetic functions) using logic gates . How to build such gates is outside of the scope of computer science; electrical engineering has provided multiple solutions using e.g. tubes or transistors . Head over to Electrical Engineering Stack Exchange for questions on this. Clock and State Not all gates are equally fast and not all parts of a computation have the same number of gates. Therefore, we have to take extra care that individual operations do not overtake each other. It has proven useful to use a global clock ; the result of a given network of gates is the state of the output wires at the end of the cycle (which may change wildly while the gates cascade towards their individual final states). That means that results of one cycle may have to be stored until the next cycle starts, e.g. if you wire up loops. There are a number of basic elements you can use to varying effect, all built up from gates; some are called flip-flops . Those are also used to build registers , elements that store numbers for as many clock cycles as needed. Architecture and Commands Now you have a myriad of design choices to make. What arithmetic operations does your processor provide? What do your commands look like? It may be educational to look at the MIPS architecture whose early forms are easy compared to other designs. Have a look at the plans (datapath figure omitted here; original from http://ube.ege.edu.tr/~erciyes/CENG311 ). Essentially, it's fetching and disassembling commands, a set of registers, an ALU and control. Commands encode which ALU operation to perform on which operands (by the number of the register they are held in), how to manipulate the program counter² or which register to load/store from/to memory³. Further Considerations By now you have a working processor in the modern sense, assuming you figured out how to build a memory and a way to feed it commands. On its way to a modern machine, many choices have to be made. Here are some: Do you want a von Neumann architecture or would you rather keep commands and data separate? Do you want regularity and simplicity or abstraction and space economy in your instructions? Depending on your memory implementation, you might want to have caches . Do you want your processor to handle multiple operations at the same time using a pipeline ? 
Are all operations (and memory addresses) available to all programs, or do you have an operating system ? How do you implement I/O ? Alternatives The above is heavily influenced by how history turned out. In a different world with different minds, computers may work differently. In fact, there are plenty of models of computation , some of which have advantages that make them useful as abstractions for real machines in many cases. There are also attempts to imitate the way our brains work, that is to enable neural computing , or more generally to exploit problem-solving and information-storing strategies observed in nature , most prominently DNA and quantum computing. So maybe (hopefully?) the information above is all ancient history another 50 or 100 years from now. ¹ All the fancy things we do with computers today are broken down into many small arithmetic tasks which the processor executes one by one. ² If your model allows the program to manipulate control flow, this would be the memory address the processor gets the next instruction from. You can also conceive machines that only read a fixed set of instructions from, say, a tape. In fact, early implementations did that. No jumps meant no loops; a program was a completely unrolled/unfolded series of instructions depending on the data. Obviously, being able to use the same program for multiple input data is more powerful. ³ Assuming you have memory; your processor works fine without, but can then only deal with so many values at once. Early computers did read in all their data from tapes and kept them in registers. There was no memory, let alone writable, persistent storage as we know it today.
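To make the "Gates" part of the answer above a bit more concrete, here is a toy Python sketch (entirely mine, not from the answer) of a one-bit full adder built only from AND/OR/XOR and chained into a ripple-carry adder. Real hardware is described in hardware description languages and built from transistors, so treat this purely as an illustration of the logic, not as how a CPU is actually specified.

    # Toy model: a 1-bit full adder from AND/OR/XOR, chained into a ripple-carry adder.
    AND = lambda a, b: a & b
    OR  = lambda a, b: a | b
    XOR = lambda a, b: a ^ b

    def full_adder(a, b, carry_in):
        s1 = XOR(a, b)
        total = XOR(s1, carry_in)
        carry_out = OR(AND(a, b), AND(s1, carry_in))
        return total, carry_out

    def ripple_add(x_bits, y_bits):
        """Add two little-endian bit lists of equal length."""
        result, carry = [], 0
        for a, b in zip(x_bits, y_bits):
            s, carry = full_adder(a, b, carry)
            result.append(s)
        return result + [carry]

    print(ripple_add([1, 1, 0], [1, 0, 1]))  # 3 + 5 = 8 -> [0, 0, 0, 1]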
{ "source": [ "https://cs.stackexchange.com/questions/3390", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/2703/" ] }
4,619
Assuming P $\neq$ NP, NP-complete problems are "hard to solve, but have answers that are easy to check." Does it make any sense to consider the opposite, that is, problems for which it's easy to compute a correct answer, but hard to verify an arbitrary purported solution? I think such a problem would imply either: Exponentially many "correct" answers for any given input, because otherwise verification could be carried out by simply computing all of the correct answers. Some "correct" answers are easy to compute, but others are difficult to find.
If you are fine with artificial problems, you can make plenty of them. Here are a few: Given a positive integer n in unary, answer a satisfiable 3CNF formula in n Boolean variables. Giving one satisfiable 3CNF formula is easy, but deciding whether a given 3CNF formula is satisfiable or not is 3SAT, a well-known NP-complete problem. There is no input. Just answer a Turing machine which halts (when run with an empty input tape). Giving one such Turing machine is easy, but whether a given Turing machine halts or not is undecidable. Added : By the way, I do not think that what you wrote in the last paragraph holds: I think such a problem would imply exponentially many "correct" answers for any given input, because otherwise verification could be carried out by simply computing all of the correct answers. If the problem has one solution, then indeed checking an answer is no harder than computing the correct solution. However, if the problem has one easy solution and one difficult solution, then you cannot compute all the solutions efficiently. Here is one such problem (which is very artificial): Given a Turing machine M , answer one of the following statements that is true: “ M halts on empty input tape,” “ M does not halt on empty input tape,” and “ M is a Turing machine.” Giving one solution is easy: you can always choose “ M is a Turing machine.” However, whether a given answer is correct or not is undecidable. Note that in this problem, there are only two solutions for each instance.
{ "source": [ "https://cs.stackexchange.com/questions/4619", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/903/" ] }
4,678
I just had this interesting question. What is the fastest growing function known to man? Is it busy beaver ? We know functions such as $x^2$, but this function grows slower than $2^x$, which in turn grows slower than $x!$, which in turn grows slower than $x^x$. We can then combine functions, to have $(x^x)!$ that grows faster than $x^x$, and so on. Then we arrive at recursive functions such as Ackermann's function $A(x,x)$ that grows much faster than $(x^x)!$. Then people thought about the busy beaver function $B(x)$, which grows even faster than Ackermann's function. At this point I haven't heard of any other functions that grow faster than busy beaver. Does it mean that there are no other functions that can possibly grow quicker than busy beaver? (Aside from the factorial of $B(x)$, things like $A(B(x), B(x))$, etc.)
The busy beaver function grows faster than any computable function . However, it can be computed by a Turing machine which has been given access to an oracle for solving the halting problem. You can then define a "second order" busy beaver function, that grows faster than any function that can be computed even by any Turing machine with an oracle for the halting problem. You can keep doing this forever, building up a hierarchy of ever faster growing busy beaver functions. See Scott Aaronson's excellent essay on this topic, Who Can Name the Bigger Number? .
{ "source": [ "https://cs.stackexchange.com/questions/4678", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/43180/" ] }
4,793
What is a good way to get an introduction to the theory of distributed systems? Are there any books or references, which topics should be covered first, and what are the prerequisites to start learning in this area?
Roger Wattenhofer's Principles of Distributed Computing lecture collection is also a good place to start. It is freely available online, it assumes no prior knowledge on the area, and the material is very well up-to-date — it even covers some results that were presented at conferences a couple of months ago.
{ "source": [ "https://cs.stackexchange.com/questions/4793", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/687/" ] }
6,111
Maybe this is quite simple, but I am having some trouble getting this reduction. I want to reduce Subset Sum to Partition, but at this time I don't see the relation! Is it possible to reduce this problem using a Levin reduction ? If anything is unclear, please ask and I will clarify!
Let $(L,B)$ be an instance of subset sum, where $L$ is a list (multiset) of numbers, and $B$ is the target sum. Let $S = \sum L$. Let $L'$ be the list formed by adding $S+B,2S-B$ to $L$. (1) If there is a sublist $M \subseteq L$ summing to $B$, then $L'$ can be partitioned into two equal parts: $M \cup \{ 2S-B \}$ and $L\setminus M \cup \{ S+B \}$. Indeed, the first part sums to $B+(2S-B) = 2S$, and the second to $(S-B)+(S+B) = 2S$. (2) If $L'$ can be partitioned into two equal parts $P_1,P_2$, then there is a sublist of $L$ summing to $B$. Indeed, since $(S+B)+(2S-B) = 3S$ and each part sums to $2S$, the two elements belong to different parts. Without loss of generality, $2S-B \in P_1$. The rest of the elements in $P_1$ belong to $L$ and sum to $B$.
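If it helps, here is the reduction written out as a small Python sketch; the function name and the example instance are mine, not part of the answer above.

    def subset_sum_to_partition(numbers, target):
        """Reduction described above: (L, B) -> L' = L + [S + B, 2S - B]."""
        total = sum(numbers)
        return list(numbers) + [total + target, 2 * total - target]

    # Example: does some sublist of [1, 2, 3, 7] sum to B = 6?  (Yes: [1, 2, 3].)
    L_prime = subset_sum_to_partition([1, 2, 3, 7], 6)
    print(L_prime)            # [1, 2, 3, 7, 19, 20]; total 52, so each part must sum to 26
    print(sum(L_prime) // 2)  # 26 = 1 + 2 + 3 + 20 = 7 + 19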
{ "source": [ "https://cs.stackexchange.com/questions/6111", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/3048/" ] }
6,230
I know the general concept of recursion. I came across the concept of tail recursion while studying the quicksort algorithm. In this video of quick sort algorithm from MIT at 18:30 seconds the professor says that this is a tail recursive algorithm. It is not clear to me what tail recursion really means. Can someone explain the concept with a proper example? Some answers provided by the SO community here .
Tail recursion is a special case of recursion where the calling function does no more computation after making a recursive call. For example, the function int f(int x, int y) { if (y == 0) { return x; } return f(x*y, y-1); } is tail recursive (since the final instruction is a recursive call) whereas this function is not tail recursive: int g(int x) { if (x == 1) { return 1; } int y = g(x-1); return x*y; } since it does some computation after the recursive call has returned. Tail recursion is important because it can be implemented more efficiently than general recursion. When we make a normal recursive call, we have to push the return address onto the call stack then jump to the called function. This means that we need a call stack whose size is linear in the depth of the recursive calls. When we have tail recursion we know that as soon as we return from the recursive call we're going to immediately return as well, so we can skip the entire chain of recursive functions returning and return straight to the original caller. That means we don't need a call stack at all for all of the recursive calls, and can implement the final call as a simple jump, which saves us space.
{ "source": [ "https://cs.stackexchange.com/questions/6230", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/2223/" ] }
6,371
The well known SAT problem is defined here for reference sake. The DOUBLE-SAT problem is defined as $\qquad \mathsf{DOUBLE\text{-}SAT} = \{\langle\phi\rangle \mid \phi \text{ has at least two satisfying assignments}\}$ How do we prove it to be NP-complete? More than one way to prove will be appreciated.
Here is one solution: Clearly Double-SAT belongs to ${\sf NP}$, since an NTM can decide Double-SAT as follows: On a Boolean input formula $\phi(x_1,\ldots,x_n)$, nondeterministically guess 2 distinct assignments and verify whether both satisfy $\phi$. To show that Double-SAT is ${\sf NP}$-Complete, we give a reduction from SAT to Double-SAT, as follows: On input $\phi(x_1,\ldots,x_n)$: Introduce a new variable $y$. Output formula $\phi'(x_1,\ldots,x_n, y) = \phi(x_1,\ldots,x_n) \wedge (y \vee \bar y)$. If $\phi (x_1,\ldots,x_n)$ belongs to SAT, then $\phi$ has at least 1 satisfying assignment, and therefore $\phi'(x_1,\ldots,x_n, y)$ has at least 2 satisfying assignments, as we can satisfy the new clause $(y \vee \bar y)$ by assigning either $y = 1$ or $y = 0$ to the new variable $y$, so $\phi'(x_1,\ldots,x_n, y) \in \text{Double-SAT}$. On the other hand, if $\phi(x_1,\ldots,x_n)\notin \text{SAT}$, then clearly $\phi' (x_1,\ldots,x_n, y) = \phi (x_1,\ldots,x_n) \wedge (y \vee \bar y)$ has no satisfying assignment either, so $\phi'(x_1,\ldots,x_n,y) \notin \text{Double-SAT}$. Therefore, $\text{SAT} \leq_p \text{Double-SAT}$, and hence Double-SAT is ${\sf NP}$-Complete.
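For concreteness, here is a small Python sketch of this reduction; the DIMACS-style clause encoding (positive integer k for $x_k$, negative for its negation) is my own choice of representation.

    def sat_to_double_sat(clauses, num_vars):
        """Reduction above: add a fresh variable y = num_vars + 1 and the clause (y OR not y).

        clauses is a list of clauses, each a list of non-zero ints,
        where k means x_k and -k means the negation of x_k."""
        y = num_vars + 1
        return clauses + [[y, -y]], num_vars + 1

    phi = [[1, 2], [-1, 3]]               # (x1 v x2) & (~x1 v x3)
    phi2, n2 = sat_to_double_sat(phi, 3)
    print(phi2)                           # [[1, 2], [-1, 3], [4, -4]]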
{ "source": [ "https://cs.stackexchange.com/questions/6371", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/4190/" ] }
6,618
Is there a clear reference, with pseudo-code, on how to go about implementing a Prolog interpreter in a purely functional language? What I have found so far either deals only with imperative languages, is merely a demonstration of Prolog implemented in itself, or offers no concrete algorithm to use for interpretation. I would be very appreciative of an answer.
Since Prolog = Syntactic Unification + Backward chaining + REPL All three parts can be found in Artificial intelligence: structures and strategies for complex problem solving by George F. Luger. In the fourth edition of the book all three parts are implemented in LISP in Section 15.8, Logic Programming in LISP. He also puts the same code in his other books, but I don't have all of them for noting here. The code for his books can be found here . Another source with all three parts can be found in Paradigms of artificial intelligence programming: case studies in Common Lisp by Peter Norvig. See Chapters 11, Logic Programming and 12, Compiling Logic Programs. The code for his book can be found here . Another source is Structure and interpretation of computer programs by Hal Abelson, Jerry Sussman and Julie Sussman. See Section 4.4 Logic Programming. The site for the book is here and the code for the book is here . It is not uncommon to find the unification algorithm with back chaining implemented in many applications if you know where to look; it is especially prevalent in type inferencing in functional compilers. Using the keywords unification or occurs helps to spot the functions. Also most implementations use unif for the name of the unification function. For a version of Prolog, less the REPL, done in OCaml see Code and resources for "Handbook of Practical Logic and Automated Reasoning" - prolog.ml A translation of the book code to F# can be found here . A translation of the book code to Haskell can be found here . In terms of finding the code, the unification algorithm is easiest to find, then implementations with back chaining imbedded in applications. Finding a fully functional implementation of Prolog in a functional language with an REPL is the hardest. Most of the time the code is not in a format for direct use within PROLOG; it is heavily customized to enhance performance, so you may find the code but it will not be worth the price to tease out the parts you want. My advice would be to read Luger's book and build it up from scratch in your language of choice, even if it means installing and learning LISP and translating to do so. EDIT Since this is a duplicate question from StackOverflow and the OP is new and in the comments says: To give more context, I'm attempting to implement type inference, however the intricate features in the type system of my language (Dependent types, refinement types, linear typing to name a few of the less common ones) make me feel that it would be useful to base my type inference off of the algorithms driving Prolog as to obtain a very general algorithm. I will note that I'm entirely self taught, so my knowledge is lacking in large areas. I'll expand on this here, but realize the OP should ask a new question. For some intro stuff see implementing type inference . The best book I know on this is Types and programming languages by Benjamin C. Pierce. The book's site is here . The resources with links to OCaml code is here . And recently started but mostly complete translation of this to F# is here . Dependent types: pg. 462 Refinement types: pg. 207 Linear logic and type systems: pg. 109
{ "source": [ "https://cs.stackexchange.com/questions/6618", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/4555/" ] }
7,050
What are the differences between computer vision and image processing? For example, in object recognition, what are the roles of computer vision and image processing?
In image processing , an image is "processed", that is, transformations are applied to an input image and an output image is returned. The transformations can e.g. be "smoothing", "sharpening", "contrasting" and "stretching". The transformation used depends on the context and issue to be solved. In computer vision , an image or a video is taken as input, and the goal is to understand (including being able to infer something about it) the image and its contents. Computer vision uses image processing algorithms to solve some of its tasks. The main difference between these two approaches are the goals (not the methods used). For example, if the goal is to enhance an image for later use, then this may be called image processing. If the goal is to emulate human vision, like object recognition, defect detection or automatic driving, then it may be called computer vision.
{ "source": [ "https://cs.stackexchange.com/questions/7050", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/4813/" ] }
7,074
It might sound like a stupid question, but I'm really curious to know how a computer knows that $1<2$. Also, how does a computer know that the order of the integers is $1,2,3,4,5,\ldots$ and that the order of the alphabet is A,B,C,D,...? Is it stored somewhere in the hardware, or does the operating system provide this kind of information?
First your integer numbers are converted into binary numbers. For example, the integer 2 is converted to 0010. The CPU uses a digital comparator : A digital comparator or magnitude comparator is a hardware electronic device that takes two numbers as input in binary form and determines whether one number is greater than, less than or equal to the other number. Comparators are used in central processing units (CPUs) and microcontrollers. Source: https://en.wikipedia.org/wiki/Digital_comparator In comparator hardware some gates are used (AND, OR, NAND, NOR, XOR, etc.). These gates take binary inputs and give results in binary. The output can be seen from a truth table:
$$\begin{array}{|c c|c c c|} \hline A & B & A>B & A=B & A<B \\ \hline 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 \\ \hline \end{array}$$
Here 0 & 1 are electronic voltages for the gate: 1 represents a voltage above some threshold, and 0 represents a voltage below that threshold. E.g. suppose a comparator works on 5 volts (this value is only for the sake of explanation); then a voltage above 3 volts can be considered as binary 1, and a voltage below 3 volts as binary 0. If a gate gets one input of 3.5 volts and another input of 2 volts, then it treats the first input as binary 1 and the other input as binary 0. These sequences of 1's & 0's are produced very quickly by the switching circuit. The operation of a two-bit digital comparator can be expressed as a truth table:
$$\begin{array}{|c c c c|c c c|} \hline A_1 & A_0 & B_1 & B_0 & A<B & A=B & A>B \\ \hline 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 & 0 & 1 & 0 \\ 1 & 0 & 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 & 1 \\ 1 & 1 & 0 & 1 & 0 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 & 0 & 1 \\ 1 & 1 & 1 & 1 & 0 & 1 & 0 \\ \hline \end{array}$$
To quote from Wikipedia : Examples: Consider two 4-bit binary numbers $A$ and $B$ such that $$A = A_3A_2A_1A_0, \qquad B = B_3B_2B_1B_0.$$ Here each subscript represents one of the digits in the numbers. Equality: The binary numbers $A$ and $B$ will be equal if all the pairs of significant digits of both numbers are equal, i.e. $A_3 = B_3$, $A_2 = B_2$, $A_1 = B_1$ and $A_0 = B_0$. Since the numbers are binary, the digits are either 0 or 1, and the Boolean function for equality of any two digits $A_i$ and $B_i$ can be expressed as $x_i = A_iB_i + \overline{A_i}\,\overline{B_i}$; $x_i$ is 1 only if $A_i$ and $B_i$ are equal. For the equality of $A$ and $B$, all variables $x_i$ (for $i=0,1,2,3$) must be 1. So the equality condition of $A$ and $B$ can be implemented using the AND operation as $$(A=B) = x_3x_2x_1x_0.$$ The binary variable $(A=B)$ is 1 only if all pairs of digits of the two numbers are equal. Inequality: In order to manually determine the greater of two binary numbers, we inspect the relative magnitudes of pairs of significant digits, starting from the most significant bit, gradually proceeding towards lower significant bits until an inequality is found. When an inequality is found, if the corresponding bit of $A$ is 1 and that of $B$ is 0 then we conclude that $A>B$. This sequential comparison can be expressed logically as: $$(A>B) = A_3\overline{B_3} + x_3A_2\overline{B_2} + x_3x_2A_1\overline{B_1} + x_3x_2x_1A_0\overline{B_0}.$$
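If it helps to see these comparator equations in executable form, here is a small Python sketch (entirely mine and purely illustrative; real comparators are combinational circuits, not software):

    def compare_4bit(a, b):
        """Evaluate the 4-bit comparator equations above.

        a, b are tuples (A3, A2, A1, A0) of 0/1 values, most significant bit first.
        Returns (A>B, A=B, A<B) as 0/1 values."""
        x = [ai & bi | (1 - ai) & (1 - bi) for ai, bi in zip(a, b)]   # x_i = 1 iff A_i = B_i
        equal = x[0] & x[1] & x[2] & x[3]
        greater = (a[0] & (1 - b[0])
                   | x[0] & a[1] & (1 - b[1])
                   | x[0] & x[1] & a[2] & (1 - b[2])
                   | x[0] & x[1] & x[2] & a[3] & (1 - b[3]))
        less = (1 - equal) & (1 - greater)   # exactly one of the three outcomes holds
        return greater, equal, less

    print(compare_4bit((0, 0, 0, 1), (0, 0, 1, 0)))   # 1 < 2  ->  (0, 0, 1)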
{ "source": [ "https://cs.stackexchange.com/questions/7074", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/4824/" ] }
7,644
In an unweighted, undirected graph with $V$ vertices and $E$ edges such that $2V \gt E$, what is the fastest way to find all shortest paths in the graph? Can it be done faster than Floyd-Warshall, which is $O(V^3)$ but very fast per iteration? How about if the graph is weighted?
Since this is an unweighted graph, you could run a Breadth First Search (BFS) from every vertex $v$ in the graph. Each run of BFS gives you the shortest distances (and paths) from the starting vertex to every other vertex. Time complexity for one BFS is $O(V + E) = O(V)$ since $E = O(V)$ in your sparse graph. Running it $V$ times gives you an $O(V^2)$ time complexity. For a weighted directed graph, Johnson's algorithm as suggested by Yuval is the fastest for sparse graphs. It takes $O(V^2\log V + VE)$ which in your case turns out to be $O(V^2\log V)$. For a weighted undirected graph, you could either run Dijkstra's algorithm from each node, or replace each undirected edge with two opposite directed edges and run Johnson's algorithm. Both of these will give the same asymptotic times as Johnson's algorithm above for your sparse case. Also note that the BFS approach I mention above works for both directed and undirected graphs.
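For concreteness, here is a rough Python sketch of the BFS-from-every-vertex approach; the graph representation and names are my own.

    from collections import deque

    def all_pairs_shortest_paths(adj):
        """BFS from every vertex of an unweighted graph given as {v: [neighbours]}.
        Returns dist[u][v]; runs in O(V * (V + E)), i.e. O(V^2) when E = O(V)."""
        dist = {}
        for source in adj:
            d = {source: 0}
            queue = deque([source])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in d:
                        d[v] = d[u] + 1
                        queue.append(v)
            dist[source] = d
        return dist

    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a path 0-1-2-3
    print(all_pairs_shortest_paths(adj)[0][3])      # 3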
{ "source": [ "https://cs.stackexchange.com/questions/7644", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/5233/" ] }
7,665
Is it possible that $\mathsf{P} \not = \mathsf{NP}$ and the cardinality of $\mathsf{P}$ is the same as the cardinality of $\mathsf{NP}$? Or does $\mathsf{P} \not = \mathsf{NP}$ mean that $\mathsf{P}$ and $\mathsf{NP}$ must have different cardinalities?
It is known that P$\subseteq$NP$\subset$R, where R is the set of recursive languages. Since R is countable and P is infinite (e.g. the languages $\{n\}$ for $n \in \mathbb{N}$ are in P), we get that P and NP are both countable.
{ "source": [ "https://cs.stackexchange.com/questions/7665", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/4849/" ] }
7,759
It is said that computability theory is also called recursion theory. Why is it called that? Why does recursion have this much importance?
In the 1920's and 1930's people were trying to figure out what it means to "effectively compute a function" (remember, there were no general purpose computing machines around, and computing was something done by people). Several definitions of "computable" were proposed, of which three are best known: The $\lambda$-calculus Recursive functions Turing machines These turned out to define the same class of number-theoretic functions. Because recursive functions are older than Turing machines, and the even older $\lambda$-calculus was not immediately accepted as an adequate notion of computability, the adjective "recursive" was used widely (recursive functions, recursive sets, recursively enumerable sets, etc.) Later on, there was an effort, popularized by Robert Soare , to change "recursive" to "computable". Thus we nowadays speak of computable functions and computably enumerable sets. But many older textbooks, and many people, still prefer the "recursive" terminology. So much for the history. We can also ask whether recursion is important for computation from a purely mathematical point of view. The answer is a very definite "yes!". Recursion lies at the basis of general-purpose programming languages (even while loops are just a form of recursion because while p do c is the same as if p then (c; while p do c) ), and many fundamental data structures, such as lists and trees, are recursive. Recursion is simply unavoidable in computer science, and in computability theory specifically.
{ "source": [ "https://cs.stackexchange.com/questions/7759", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/947/" ] }
7,879
When doing mental arithmetic one can do the following: given an integer k, sum all the digits (in base 10), and if the result is a multiple of 3, then k is a multiple of 3. Do you know of any algorithm working similarly but operating on the digits (bits) of binary numbers? At first, I was thinking of using the ready-made functions of my language converting integers to ASCII to perform the conversion from base 2 to base 10, and then applying the mental-arithmetic trick. But of course I could also encode the base-2-to-base-10 conversion myself. I have not done it yet, but I'll give it a try. Then I thought of Euclidean division in base 2... However, I wonder if there are other means or algorithms.
Consider the following two observations (left as an exercise to the reader): The even powers of two are 1 modulo 3. The odd powers of two are -1 modulo 3. We conclude that that a number (in binary) is divisible by three if and only if the sum of the bits in the even positions equals the sum of the bits in the odd positions modulo 3.
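A small Python sketch of this test, in case it helps; the function name and the final check are mine.

    def divisible_by_3(n):
        """Check divisibility by 3 from the bits alone, using the observation above:
        even bit positions weigh +1 and odd bit positions weigh -1 modulo 3."""
        even_sum = odd_sum = 0
        position = 0
        while n:
            if n & 1:
                if position % 2 == 0:
                    even_sum += 1
                else:
                    odd_sum += 1
            n >>= 1
            position += 1
        return (even_sum - odd_sum) % 3 == 0

    print([k for k in range(20) if divisible_by_3(k)])   # [0, 3, 6, 9, 12, 15, 18]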
{ "source": [ "https://cs.stackexchange.com/questions/7879", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/2100/" ] }
9,063
I'm wondering if there is a good example of an easy-to-understand NP-hard problem that is not NP-complete and not undecidable? For example, the halting problem is NP-hard and not NP-complete, but it is undecidable. I believe this means that it is a problem for which a solution can be verified, but not in polynomial time. (Please correct this statement if this is not the case.)
By the nondeterministic version of the time-hierarchy theorem , we have $\mathsf{NP} \subsetneq \mathsf{NEXP}$, where $\mathsf{NEXP}$ is the class of problems solvable in non-deterministic exponential-time. Thus it suffices to consider any problem which is $\mathsf{NP}$-hard and in $\mathsf{NEXP}$, but not in $\mathsf{NP}$. For instance, we may consider any $\mathsf{NEXP}$-complete problem , such as 3-colourability of graphs described by succinct circuits — or any other NP-complete problem on graphs — where a "succinct circuit" is a format for representing very large graphs at the input: instead of explicit representation of a graph e.g. by adjacency lists, we instead provide a circuit computing some function $f: \{0,1\}^{n} \times \{0,1\}^n \to \{0,1\}$ which computes the coefficients of a $2^n \times 2^n$ adjacency matrix. (Non-)equivalence of two regular expressions, where the Kleene star is replaced by squaring (repeating a sub-pattern exactly twice, rather than zero or more times), and where we ask whether two such regular expressions represent different sets of strings. Note that in the latter case, if we take regular expressions as we are used to considering, including the Kleene star, the resulting problem is $\mathsf{EXPSPACE}$-complete: because we have the containments $\mathsf{NP} \subset \mathsf{NEXP} \subseteq \mathsf{EXPSPACE}$, this is still a decidable problem which is $\mathsf{NP}$-hard, and not in $\mathsf{NP}$.
{ "source": [ "https://cs.stackexchange.com/questions/9063", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/6495/" ] }
9,133
Given a directed acyclic graph $D = (V,A)$, a vertex $v \in V$ is a source if its indegree is zero, meaning that it has only outgoing arcs. Does there exist a linear time algorithm to find a source in a given directed acyclic graph? Follow-up question: Can one in linear time find all sources?
As Yuval mentions, the datastructure is important here. I'll try to give a solution for some of the types of adjacency lists: Incoming edge list : For each node, there is a list of vertices from which there is an incoming edge to this node. You can simply scan all vertices and check if the size of their adjacency list is $0$ or not. A size $0$ list means no incoming edges, so the node is either a source or disconnected. Assuming a connected graph, this scan of each vertex will give you a list of all sources (or you can stop after finding one) in $O(|V|)$ time - linear in the number of vertices . Outgoing edge list : For each node, there is a list of vertices to which there is a directed edge from this node. Keep a bit-string with each bit representing a vertex, initialized to 0. Starting from the first node, start scanning its list for vertices to which there is an outgoing edge from this. Every such node (neighbour) cannot be a source, so keep setting their corresponding bit in the bit-string. At the end, all vertices whose corresponding bits are still unset, are the source vertices. You can do this in time linear in the size of the graph - $O(|V| + |E|)$. Both lists together : For each vertex, there is a mixed list of vertices which have an edge to or from this vertex, with some other attribute indicating which of the two is actually the case. The approach is similar to 2 above, with the addition that any incoming edge immediately rules out the current vertex (and you can mark its bit set). Unlike in point 2 where you need to go through all vertices, here, you might find some source sooner. If you don't stop, you will have all sources. For both cases, time is again linear in the size of the graph - $O(|V| + |E|)$. Both lists separately : Just pick the incoming edge list and follow 1. As a side note, if choosing the datastructure is in your hands, you might want to analyze what all operations you intend to perform, and how frequently, and choose an appropriate datastructure. Edit: For case 1, if you have a dag where the number of sources is very small as compared to $|V|$ (eg, in a tree with one source), and where the average distance from any vertex to a source is small as compared to $|V|$ and you only want any one source, you can use a faster on average algorithm (although worst case asymptotic complexity will be the same). Select any vertex at random, and go to any of its parent (from the incoming edge list), and on to its parent and so on, till you reach a node which has no parent - a source. This small gain of efficiency is for very limited types of graphs with a slightly more complex algorithm.
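Here is a small Python sketch of case 2 above (outgoing adjacency lists); the dictionary representation of the graph is my own choice, and I use a boolean flag per vertex in place of the bit-string.

    def find_sources(adj):
        """Sources of a DAG given by outgoing adjacency lists {v: [successors]}.
        Mark every vertex that has an incoming edge; the unmarked ones are sources.
        Runs in O(V + E)."""
        has_incoming = {v: False for v in adj}
        for v in adj:
            for w in adj[v]:
                has_incoming[w] = True
        return [v for v in adj if not has_incoming[v]]

    adj = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': [], 'e': ['d']}
    print(find_sources(adj))   # ['a', 'e']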
{ "source": [ "https://cs.stackexchange.com/questions/9133", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/1108/" ] }
9,137
I have a multilayer perceptron. It has an input layer with two neurons, a hidden layer with an arbitrary number of neurons, and an output layer with two neurons. Given that randomboolean and targetboolean are random boolean values, the network operates as follows:

    input(randomboolean);     // Set the input neurons to reflect the random boolean
    propagateforwards();      // Perform standard forward propagation
    outputboolean = output(); // To get the network's output
    ideal(targetboolean);     // Performs connection updating via back-prop

Is it possible to get the network to map the randomboolean value to the targetboolean value in such a way that the outputboolean value will correctly match the targetboolean while running in an 'on-line' mode (where prediction occurs along with continued learning), after some arbitrary number of training cycles? I hear that the network needs to be recurrent to process this, as it may be temporal behaviour; however, the MLP is a universal computing platform and I assume it should be able to approximate the temporal behaviour needed for this task.
{ "source": [ "https://cs.stackexchange.com/questions/9137", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/814/" ] }
9,380
I am learning C++ and noticed that the running time for the push_back function for vectors is constant "amortized." The documentation further notes that "If a reallocation happens, the reallocation is itself up to linear in the entire size." Shouldn't this mean the push_back function is $O(n)$, where $n$ is the length of the vector? After all, we are interested in worst case analysis, right? I guess, crucially, I don't understand how the adjective "amortized" changes the running time.
The important word here is "amortized". Amortized analysis is an analysis technique that examines a sequence of $n$ operations. If the whole sequence runs in $T(n)$ time, then each operation in the sequence runs in $T(n)/n$. The idea is that while a few operations in the sequence might be costly, they can't happen often enough to weigh down the program. It's important to note that this is different from average case analysis over some input distribution or randomized analysis. Amortized analysis establishes a worst case bound for the performance of an algorithm irrespective of the inputs. It's most commonly used to analyse data structures, which have a persistent state throughout the program. One of the most common examples given is the analysis of a stack with a multipop operation that pops $k$ elements. A naive analysis of multipop would say that in the worst case multipop must take $O(n)$ time since it might have to pop off all the elements of the stack. However, if you look at a sequence of operations, you'll notice that the number of pops can not exceed the number of pushes. Thus over any sequence of $n$ operations the number of pops can't exceed $O(n)$, and so multipop runs in $O(1)$ amortized time even though occasionally a single call might take more time. Now how does this relate to C++ vectors? Vectors are implemented with arrays so to increase the size of a vector you must reallocate memory and copy the whole array over. Obviously we wouldn't want to do this very often. So if you perform a push_back operation and the vector needs to allocate more space, it will increase the size by a factor $m$. Now this takes more memory, which you may not use in full, but the next few push_back operations all run in constant time. Now if we do the amortized analysis of the push_back operation (which I found here ) we'll find that it runs in constant amortized time. Suppose you have $n$ items and your multiplication factor is $m$. Then the number of reallocations is roughly $\log_m(n)$. The $i$th reallocation will cost proportional to $m^i$, about the size of the current array. Thus the total time for $n$ push_back operations is $\sum_{i=1}^{\log_m(n)}m^i \approx \frac{nm}{m-1}$, since it's a geometric series. Divide this by $n$ operations and we get that each operation takes $\frac{m}{m-1}$, a constant. Lastly you have to be careful about choosing your factor $m$. If it's too close to $1$ then this constant gets too large for practical applications, but if $m$ is too large, say 2, then you start wasting a lot of memory. The ideal growth rate varies by application, but I think some implementations use $1.5$.
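If it helps to see the geometric-series argument play out numerically, here is a rough Python simulation of my own; it is only a model of the argument above, and actual std::vector growth policies differ between standard-library implementations.

    def simulate_push_backs(n, growth_factor=2.0):
        """Count element copies done by n push_backs under geometric capacity growth."""
        capacity, size, copies = 1, 0, 0
        for _ in range(n):
            if size == capacity:
                copies += size                              # reallocate: copy everything over
                capacity = int(capacity * growth_factor) + 1
            size += 1
        return copies

    for n in (10**3, 10**4, 10**5):
        c = simulate_push_backs(n)
        print(n, c, round(c / n, 2))   # copies per push_back stays bounded by a small constant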
{ "source": [ "https://cs.stackexchange.com/questions/9380", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/2860/" ] }
9,523
If I have some function whose time complexity is O( mn ), where m and n are the sizes of its two inputs, would we call its time complexity "linear" (since it's linear in both m and n ) or "quadratic" (since it's a product of two sizes)? Or something else? I feel calling it "linear" is confusing because O(m + n) is also linear but much faster, but I feel like calling it "quadratic" is also weird because it's linear in each variable separately.
In mathematics, functions like this are called multilinear functions. But computer scientists probably won't generally know this terminology. This function should definitely not be called linear, either in mathematics or computer science, unless you can reasonably consider one of $m$ and $n$ a constant.
{ "source": [ "https://cs.stackexchange.com/questions/9523", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/836/" ] }
9,556
I'm in a course about computing and complexity , and am unable to understand what these terms mean. All I know is that NP is a subset of NP-complete, which is a subset of NP-hard, but I have no idea what they actually mean. Wikipedia isn't much help either, as the explanations are still a bit too high level.
I think the Wikipedia articles $\mathsf{P}$ , $\mathsf{NP}$ , and $\mathsf{P}$ vs. $\mathsf{NP}$ are quite good. Still here is what I would say: Part I , Part II [I will use remarks inside brackets to discuss some technical details which you can skip if you want.] Part I Decision Problems There are various kinds of computational problems. However in an introduction to computational complexity theory course it is easier to focus on decision problem , i.e. problems where the answer is either YES or NO. There are other kinds of computational problems but most of the time questions about them can be reduced to similar questions about decision problems. Moreover decision problems are very simple. Therefore in an introduction to computational complexity theory course we focus our attention to the study of decision problems. We can identify a decision problem with the subset of inputs that have answer YES. This simplifies notation and allows us to write $x\in Q$ in place of $Q(x)=YES$ and $x \notin Q$ in place of $Q(x)=NO$ . Another perspective is that we are talking about membership queries in a set. Here is an example: Decision Problem: Input: A natural number $x$ , Question: Is $x$ an even number? Membership Problem: Input: A natural number $x$ , Question: Is $x$ in $Even = \{0,2,4,6,\cdots\}$ ? We refer to the YES answer on an input as accepting the input and to the NO answer on an input as rejecting the input. We will look at algorithms for decision problems and discuss how efficient those algorithms are in their usage of computable resources . I will rely on your intuition from programming in a language like C in place of formally defining what we mean by an algorithm and computational resources. [Remarks: If we wanted to do everything formally and precisely we would need to fix a model of computation like the standard Turing machine model to precisely define what we mean by an algorithm and its usage of computational resources. If we want to talk about computation over objects that the model cannot directly handle, we would need to encode them as objects that the machine model can handle, e.g. if we are using Turing machines we need to encode objects like natural numbers and graphs as binary strings.] $\mathsf{P}$ = Problems with Efficient Algorithms for Finding Solutions Assume that efficient algorithms means algorithms that use at most polynomial amount of computational resources. The main resource we care about is the worst-case running time of algorithms with respect to the input size, i.e. the number of basic steps an algorithm takes on an input of size $n$ . The size of an input $x$ is $n$ if it takes $n$ -bits of computer memory to store $x$ , in which case we write $|x| = n$ . So by efficient algorithms we mean algorithms that have polynomial worst-case running time . The assumption that polynomial-time algorithms capture the intuitive notion of efficient algorithms is known as Cobham's thesis . I will not discuss at this point whether $\mathsf{P}$ is the right model for efficiently solvable problems and whether $\mathsf{P}$ does or does not capture what can be computed efficiently in practice and related issues. For now there are good reasons to make this assumption so for our purpose we assume this is the case. If you do not accept Cobham's thesis it does not make what I write below incorrect, the only thing we will lose is the intuition about efficient computation in practice. I think it is a helpful assumption for someone who is starting to learn about complexity theory. 
$\mathsf{P}$ is the class of decision problems that can be solved efficiently, i.e. decision problems which have polynomial-time algorithms. More formally, we say a decision problem $Q$ is in $\mathsf{P}$ iff there is an efficient algorithm $A$ such that for all inputs $x$, if $Q(x)=YES$ then $A(x)=YES$, if $Q(x)=NO$ then $A(x)=NO$. I can simply write $A(x)=Q(x)$ but I write it this way so we can compare it to the definition of $\mathsf{NP}$. $\mathsf{NP}$ = Problems with Efficient Algorithms for Verifying Proofs/Certificates/Witnesses Sometimes we do not know any efficient way of finding the answer to a decision problem; however, if someone tells us the answer and gives us a proof, we can efficiently verify that the answer is correct by checking the proof to see if it is a valid proof. This is the idea behind the complexity class $\mathsf{NP}$. If the proof is too long it is not really useful: it can take too long just to read the proof, let alone check if it is valid. We want the time required for verification to be reasonable in the size of the original input, not the size of the given proof! This means what we really want is not arbitrarily long proofs but short proofs. Note that if the verifier's running time is polynomial in the size of the original input then it can only read a polynomial part of the proof. So by short we mean of polynomial size. From this point on whenever I use the word "proof" I mean "short proof". Here is an example of a problem which we do not know how to solve efficiently but for which we can efficiently verify proofs: Partition Input: a finite set of natural numbers $S$, Question: is it possible to partition $S$ into two sets $A$ and $B$ ($A \cup B = S$ and $A \cap B = \emptyset$) such that the sum of the numbers in $A$ is equal to the sum of the numbers in $B$ ($\sum_{x\in A}x=\sum_{x\in B}x$)? If I give you $S$ and ask you if we can partition it into two sets such that their sums are equal, you do not know any efficient algorithm to solve it. You will probably try all possible ways of partitioning the numbers into two sets until you find a partition where the sums are equal or until you have tried all possible partitions and none has worked. If any of them worked you would say YES, otherwise you would say NO. But there are exponentially many possible partitions, so it will take a lot of time to enumerate all the possibilities. However, if I give you two sets $A$ and $B$, you can easily check if the sums are equal and if $A$ and $B$ are a partition of $S$. Note that we can compute sums efficiently. Here the pair of $A$ and $B$ that I give you is a proof for a YES answer. You can efficiently verify my claim by looking at my proof and checking if it is a valid proof. If the answer is YES then there is a valid proof, and I can give it to you and you can verify it efficiently. If the answer is NO then there is no valid proof. So whatever I give you, you can check and see that it is not a valid proof. I cannot trick you with an invalid proof into thinking that the answer is YES. Recall that if the proof is too big it will take a lot of time to verify it; we do not want this to happen, so we only care about efficient proofs, i.e. proofs which have polynomial size. Sometimes people use "certificate" or "witness" in place of "proof". Note that I am giving you enough information about the answer for a given input $x$ so that you can find and verify the answer efficiently.
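To make the verification step concrete, here is one possible way such a checker could be written in Python. This is only an illustrative sketch: the function name and the choice to represent $S$, $A$, $B$ as lists of numbers are assumptions made here, not something fixed by the problem.

```python
def verify_partition(S, A, B):
    """Check a claimed proof (A, B) that S can be split into two parts
    with equal sums. Runs in time polynomial in the size of S."""
    # A and B together must contain exactly the elements of S and nothing else.
    if sorted(A + B) != sorted(S):
        return False
    # The two parts must have equal sums.
    return sum(A) == sum(B)

print(verify_partition([3, 1, 1, 2, 2, 1], [3, 2], [1, 1, 2, 1]))  # True:  5 = 5
print(verify_partition([3, 1, 1, 2, 2, 1], [3, 1], [1, 2, 2, 1]))  # False: 4 != 6
```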
For example, in our partition example I do not tell you the answer, I just give you a partition, and you can check if it is valid or not. Note that you have to verify the answer yourself, you cannot trust me about what I say. Moreover, you can only check the correctness of my proof. If my proof is valid it means the answer is YES. But if my proof is invalid it does not mean the answer is NO. You have seen that one proof was invalid, not that there are no valid proofs. We are talking about proofs for YES. We are not talking about proofs for NO. Let us look at an example: $A=\{2,4\}$ and $B=\{1,5\}$ is a proof that $S=\{1,2,4,5\}$ can be partitioned into two sets with equal sums. We just need to sum up the numbers in $A$ and the numbers in $B$ and see if the results are equal, and check if $A$, $B$ is a partition of $S$. If I gave you $A=\{2,5\}$ and $B=\{1,4\}$, you would check and see that my proof is invalid. It does not mean the answer is NO, it just means that this particular proof was invalid. Your task here is not to find the answer, but only to check if the proof you are given is valid. It is like a student solving a question in an exam and a professor checking if the answer is correct. :) (Unfortunately, students often do not give enough information to verify the correctness of their answer, and the professors have to guess the rest of their partial answer and decide how much credit they should give to the students for their partial answers, indeed quite a difficult task.) The amazing thing is that the same situation applies to many other natural problems that we want to solve: we can efficiently verify if a given short proof is valid, but we do not know any efficient way of finding the answer. This is the reason why the complexity class $\mathsf{NP}$ is extremely interesting (though this was not the original motivation for defining it). Whatever you do (not just in CS, but also in math, biology, physics, chemistry, economics, management, sociology, business, ...) you will face computational problems that fall in this class. To get an idea of how many problems turn out to be in $\mathsf{NP}$, check out a compendium of NP optimization problems. Indeed you will have a hard time finding natural problems which are not in $\mathsf{NP}$. It is simply amazing. $\mathsf{NP}$ is the class of problems which have efficient verifiers, i.e. there is a polynomial-time algorithm that can verify if a given solution is correct. More formally, we say a decision problem $Q$ is in $\mathsf{NP}$ iff there is an efficient algorithm $V$ called the verifier such that for all inputs $x$: if $Q(x)=YES$ then there is a proof $y$ such that $V(x,y)=YES$; if $Q(x)=NO$ then for all proofs $y$, $V(x,y)=NO$. We say a verifier is sound if it does not accept any proof when the answer is NO. In other words, a sound verifier cannot be tricked into accepting a proof if the answer is really NO. No false positives. Similarly, we say a verifier is complete if it accepts at least one proof when the answer is YES. In other words, a complete verifier can be convinced of the answer being YES. The terminology comes from logic and proof systems. We cannot use a sound proof system to prove any false statements. We can use a complete proof system to prove all true statements. The verifier $V$ gets two inputs, $x$: the original input for $Q$, and $y$: a suggested proof for $Q(x)=YES$. Note that we want $V$ to be efficient in the size of $x$. If $y$ is a big proof the verifier will be able to read only a polynomial part of $y$.
That is why we require the proofs to be short. If $y$ is short, saying that $V$ is efficient in $x$ is the same as saying that $V$ is efficient in $x$ and $y$ (because the size of $y$ is bounded by a fixed polynomial in the size of $x$). In summary, to show that a decision problem $Q$ is in $\mathsf{NP}$ we have to give an efficient verifier algorithm which is sound and complete. Historical Note: this is not the original definition of $\mathsf{NP}$. The original definition uses what are called non-deterministic Turing machines. These machines do not correspond to any actual machine model and are difficult to get used to (at least when you are starting to learn about complexity theory). I have read that many experts think that they would have used the verifier definition as the main definition and even would have named the class $\mathsf{VP}$ (for verifiable in polynomial time) in place of $\mathsf{NP}$ if they could go back to the dawn of computational complexity theory. The verifier definition is more natural, easier to understand conceptually, and easier to use to show problems are in $\mathsf{NP}$. $\mathsf{P}\subseteq \mathsf{NP}$ Therefore we have $\mathsf{P}$ = efficiently solvable and $\mathsf{NP}$ = efficiently verifiable. So $\mathsf{P}=\mathsf{NP}$ iff the problems that can be efficiently verified are the same as the problems that can be efficiently solved. Note that any problem in $\mathsf{P}$ is also in $\mathsf{NP}$, i.e. if you can solve the problem you can also verify if a given proof is correct: the verifier will just ignore the proof! That is because we do not need it: the verifier can compute the answer by itself, it can decide if the answer is YES or NO without any help. If the answer is NO we know there should be no proofs and our verifier will just reject every suggested proof. If the answer is YES, there should be a proof, and in fact we will just accept anything as a proof. [We could have made our verifier accept only some of them; that is also fine, since as long as our verifier accepts at least one proof, the verifier works correctly for the problem.] Here is an example: Sum Input: a list of $n+1$ natural numbers $a_1,\cdots,a_n$, and $s$, Question: is $\Sigma_{i=1}^n a_i = s$? The problem is in $\mathsf{P}$ because we can sum up the numbers and then compare the result with $s$: we return YES if they are equal, and NO if they are not. The problem is also in $\mathsf{NP}$. Consider a verifier $V$ that gets a proof plus the input for Sum. It acts the same way as the algorithm in $\mathsf{P}$ that we described above. This is an efficient verifier for Sum. Note that there are other efficient verifiers for Sum, and some of them might use the proof given to them. However, the one we designed does not, and that is also fine. Since we gave an efficient verifier for Sum, the problem is in $\mathsf{NP}$. The same trick works for all other problems in $\mathsf{P}$, so $\mathsf{P} \subseteq \mathsf{NP}$. Brute-Force/Exhaustive-Search Algorithms for $\mathsf{NP}$ and $\mathsf{NP}\subseteq \mathsf{ExpTime}$ The best algorithms we know of for solving an arbitrary problem in $\mathsf{NP}$ are brute-force/exhaustive-search algorithms. Pick an efficient verifier for the problem (it has an efficient verifier by our assumption that it is in $\mathsf{NP}$) and check all possible proofs one by one. If the verifier accepts one of them then the answer is YES. Otherwise the answer is NO.
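For Partition, one way such a brute-force search could be written in Python is the following sketch (purely illustrative; encoding a candidate proof as a bit-vector over $S$ is a choice made here):

```python
from itertools import product

def partition_brute_force(S):
    """Decide Partition by trying every possible proof: each bit-vector
    over S describes one candidate split (A, B)."""
    for bits in product([0, 1], repeat=len(S)):      # 2^n candidate proofs
        A = [x for x, b in zip(S, bits) if b == 1]
        B = [x for x, b in zip(S, bits) if b == 0]
        if sum(A) == sum(B):                         # the verifier accepts
            return True                              # some proof works: YES
    return False                                     # no proof works: NO

print(partition_brute_force([3, 1, 1, 2, 2, 1]))  # True
print(partition_brute_force([1, 2, 4]))           # False
```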
In our partition example, we try all possible partitions and check if the sums are equal in any of them. Note that the brute-force algorithm runs in worst-case exponential time. The size of the proofs is polynomial in the size of the input. If the size of the proofs is $m$ then there are $2^m$ possible proofs. Checking each of them will take polynomial time by the verifier. So in total the brute-force algorithm takes exponential time. This shows that any $\mathsf{NP}$ problem can be solved in exponential time, i.e. $\mathsf{NP}\subseteq \mathsf{ExpTime}$. (Moreover, the brute-force algorithm will use only a polynomial amount of space, i.e. $\mathsf{NP}\subseteq \mathsf{PSpace}$, but that is a story for another day.) A problem in $\mathsf{NP}$ can have much faster algorithms; for example, any problem in $\mathsf{P}$ has a polynomial-time algorithm. However, for an arbitrary problem in $\mathsf{NP}$ we do not know algorithms that can do much better. In other words, if you just tell me that your problem is in $\mathsf{NP}$ (and nothing else about the problem) then the fastest algorithm that we know of for solving it takes exponential time. However, it does not mean that there are no better algorithms; we simply do not know that. As far as we know it is still possible (though thought to be very unlikely by almost all complexity theorists) that $\mathsf{NP}=\mathsf{P}$ and all $\mathsf{NP}$ problems can be solved in polynomial time. Furthermore, some experts conjecture that we cannot do much better, i.e. there are problems in $\mathsf{NP}$ that cannot be solved much more efficiently than by brute-force search algorithms which take an exponential amount of time. See the Exponential Time Hypothesis for more information. But this is not proven, it is only a conjecture. It just shows how far we are from finding polynomial-time algorithms for arbitrary $\mathsf{NP}$ problems. This association with exponential time confuses some people: they think incorrectly that $\mathsf{NP}$ problems require exponential time to solve (or, even worse, that there are no algorithms for them at all). Stating that a problem is in $\mathsf{NP}$ does not mean the problem is difficult to solve, it just means that it is easy to verify; it is an upper bound on the difficulty of solving the problem, and many $\mathsf{NP}$ problems are easy to solve since $\mathsf{P}\subseteq\mathsf{NP}$. Nevertheless, there are $\mathsf{NP}$ problems which seem to be hard to solve. I will return to this when we discuss $\mathsf{NP}$-hardness. Lower Bounds Seem Difficult to Prove OK, so we now know that there are many natural problems that are in $\mathsf{NP}$, we do not know any efficient way of solving them, and we suspect that they really require exponential time to solve. Can we prove this? Unfortunately, the task of proving lower bounds is very difficult. We cannot even prove that these problems require more than linear time! Let alone requiring exponential time. Proving linear-time lower bounds is rather easy: the algorithm needs to read the input after all. Proving super-linear lower bounds is a completely different story. We can prove super-linear lower bounds with more restrictions on the kind of algorithms we are considering, e.g. comparison-based sorting algorithms, but we do not know lower bounds without those restrictions. To prove an upper bound for a problem we just need to design a good enough algorithm. It often needs knowledge, creative thinking, and even ingenuity to come up with such an algorithm.
However, the task is considerably simpler compared to proving a lower bound. We have to show that there are no good algorithms. Not that we do not know of any good enough algorithms right now, but that there does not exist any good algorithm, that no one will ever come up with a good algorithm. Think about it for a minute if you have not before: how can we show such an impossibility result? This is another place where people get confused. Here "impossibility" is a mathematical impossibility, i.e. it is not a shortcoming on our part that some genius could fix in the future. When we say impossible we mean it is absolutely impossible, as impossible as $1=0$. No scientific advance can make it possible. That is what we are doing when we are proving lower bounds. To prove a lower bound, i.e. to show that a problem requires some amount of time to solve, means that we have to prove that any algorithm, even very ingenious ones that we do not know of yet, cannot solve the problem faster. There are many intelligent ideas that we know of (greedy, matching, dynamic programming, linear programming, semidefinite programming, sum-of-squares programming, and many other intelligent ideas) and there are many, many more that we do not know of yet. Ruling out one algorithm or one particular idea of designing algorithms is not sufficient; we need to rule out all of them, even those we do not know about yet, even those we may never know about! And one can combine all of these in an algorithm, so we need to rule out their combinations also. There has been some progress towards showing that some ideas cannot solve difficult $\mathsf{NP}$ problems, e.g. greedy algorithms and their extensions cannot work, there is some work related to dynamic programming algorithms, and there is some work on particular ways of using linear programming. But these are not even close to ruling out the intelligent ideas that we know of (search for lower bounds in restricted models of computation if you are interested). Barriers: Lower Bounds Are Difficult to Prove On the other hand we have mathematical results called barriers that say that a lower-bound proof cannot be such and such, and such and such almost covers all techniques that we have used to prove lower bounds! In fact, many researchers gave up working on proving lower bounds after Alexander Razborov and Steven Rudich's natural proofs barrier result. It turns out that the existence of a particular kind of lower-bound proof would imply the insecurity of cryptographic pseudorandom number generators and many other cryptographic tools. I say almost because in recent years there has been some progress, mainly by Ryan Williams, that has been able to intelligently circumvent the barrier results; still, the results so far are for very weak models of computation and quite far from ruling out general polynomial-time algorithms. But I am digressing. The main point I wanted to make was that proving lower bounds is difficult and we do not have strong lower bounds for general algorithms solving $\mathsf{NP}$ problems. [On the other hand, Ryan Williams' work shows that there are close connections between proving lower bounds and proving upper bounds. See his talk at ICM 2014 if you are interested.] Reductions: Solving a Problem Using Another Problem as a Subroutine/Oracle/Black Box The idea of a reduction is very simple: to solve a problem, use an algorithm for another problem.
Here is a simple example: assume we want to compute the sum of a list of $n$ natural numbers and we have an algorithm $\operatorname{Sum}$ that returns the sum of two given numbers. Can we use $\operatorname{Sum}$ to add up the numbers in the list? Of course! Problem: Input: a list of $n$ natural numbers $x_1,\ldots,x_n$, Output: return $\sum_{i=1}^{n} x_i$. Reduction Algorithm:
1. $s = 0$
2. for $i$ from $1$ to $n$
2.1. $s = \operatorname{Sum}(s,x_i)$
3. return $s$
Here we are using $\operatorname{Sum}$ in our algorithm as a subroutine. Note that we do not care about how $\operatorname{Sum}$ works; it acts like a black box for us; we do not care what is going on inside $\operatorname{Sum}$. We often refer to the subroutine $\operatorname{Sum}$ as an oracle. It is like the oracle of Delphi in Greek mythology: we ask questions, the oracle answers them, and we use the answers. This is essentially what a reduction is: assume that we have an algorithm for a problem and use it as an oracle to solve another problem. Here efficient means efficient assuming that the oracle answers in a unit of time, i.e. we count each execution of the oracle as a single step. If the oracle returns a large answer we need to read it and that can take some time, so we should count the time it takes us to read the answer that the oracle has given to us. Similarly for writing down/asking our question to the oracle. But the oracle works instantly, i.e. as soon as we ask the oracle a question, it writes the answer for us in a single unit of time. All the work that the oracle does is counted as a single step, but this excludes the time it takes us to write the question and read the answer. Because we do not care how the oracle works, but only about the answers it returns, we can make a simplification and consider the oracle to be the problem itself in place of an algorithm for it. In other words, we do not care if the oracle is not an algorithm; we do not care how the oracle comes up with its replies. For example, $\operatorname{Sum}$ in the question above is the addition function itself (not an algorithm for computing addition). We can ask an oracle multiple questions, and the questions do not need to be predetermined: we can ask a question and, based on the answer the oracle returns, perform some computations by ourselves and then ask another question based on the answer we got for the previous question. Another way of looking at this is thinking about it as an interactive computation. Interactive computation is in itself a large topic, so I will not get into it here, but I think mentioning this perspective on reductions can be helpful. An algorithm $A$ that uses an oracle/black box $O$ is usually denoted as $A^O$. The reduction we discussed above is the most general form of a reduction and is known as a black-box reduction (a.k.a. oracle reduction, Turing reduction). More formally: We say that problem $Q$ is black-box reducible to problem $O$ and write $Q \leq_T O$ iff there is an algorithm $A$ such that for all inputs $x$, $Q(x) = A^O(x)$. In other words, if there is an algorithm $A$ which uses the oracle $O$ as a subroutine and solves problem $Q$. If our reduction algorithm $A$ runs in polynomial time we call it a polynomial-time black-box reduction or simply a Cook reduction (in honor of Stephen A. Cook) and write $Q\leq^\mathsf{P}_T O$. (The subscript $T$ stands for "Turing" in honor of Alan Turing.) However, we may want to put some restrictions on the way the reduction algorithm interacts with the oracle.
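As a toy illustration of this black-box view, the summation reduction above could be written in Python as follows, with the oracle passed in as a parameter to stress that we never look inside it (the function names are made up for illustration):

```python
def sum_two(a, b):
    """The oracle O: we only ever use it as a black box."""
    return a + b

def sum_list(xs, oracle=sum_two):
    """The reduction algorithm A^O: it solves 'sum of a list' by repeatedly
    querying the oracle; each query is counted as a single step."""
    s = 0
    for x in xs:
        s = oracle(s, x)   # one oracle query per element
    return s

print(sum_list([1, 2, 3, 4]))  # 10
```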
There are several restrictions that have been studied, but the most useful restriction is the one called many-one reductions (a.k.a. mapping reductions). The idea here is that on a given input $x$, we perform some polynomial-time computation and generate a $y$ that is an instance of the problem the oracle solves. We then ask the oracle and return the answer it returns to us. We are allowed to ask a single question of the oracle, and the oracle's answer is what will be returned. More formally, we say that problem $Q$ is many-one reducible to problem $O$ and write $Q \leq_m O$ iff there is an algorithm $A$ such that for all inputs $x$, $Q(x) = O(A(x))$. When the reduction algorithm is polynomial time we call it a polynomial-time many-one reduction or simply a Karp reduction (in honor of Richard M. Karp) and denote it by $Q \leq_m^\mathsf{P} O$. The main reason for the interest in this particular non-interactive reduction is that it preserves $\mathsf{NP}$ problems: if there is a polynomial-time many-one reduction from a problem $A$ to an $\mathsf{NP}$ problem $B$, then $A$ is also in $\mathsf{NP}$. The simple notion of reduction is one of the most fundamental notions in complexity theory along with $\mathsf{P}$, $\mathsf{NP}$, and $\mathsf{NP}$-complete (which we will discuss below). The post has become too long and exceeds the limit of an answer (30000 characters). I will continue the answer in Part II.
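To make the shape of a many-one reduction concrete, here is a toy sketch in Python (an illustration only; the problems and names are invented here): it reduces "does this list have an even sum?" to "is this number even?" by one mapping and a single oracle query whose answer is returned unchanged.

```python
def is_even(n):
    """The oracle O: membership in Even."""
    return n % 2 == 0

def has_even_sum(xs):
    """Q decided by a many-one reduction: compute y = A(x) in polynomial
    time, ask the oracle once, and return its answer verbatim, so that
    Q(x) = O(A(x))."""
    y = sum(xs)        # the mapping A
    return is_even(y)  # the single oracle query

print(has_even_sum([1, 3, 4]))  # True  (sum is 8)
print(has_even_sum([1, 3, 5]))  # False (sum is 9)
```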
{ "source": [ "https://cs.stackexchange.com/questions/9556", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/6569/" ] }
9,604
The fixed-point combinator FIX (aka the Y combinator) in the (untyped) lambda calculus ($\lambda$) is defined as: FIX $\triangleq \lambda f.(\lambda x. f~(\lambda y. x~x~y))~(\lambda x. f~(\lambda y. x~x~y))$ I understand its purpose and I can trace the execution of its application perfectly fine; I would like to understand how to derive FIX from first principles . Here is as far as I get when I try to derive it myself: FIX is a function: FIX $\triangleq \lambda_\ldots$ FIX takes another function, $f$, to make it recursive: FIX $\triangleq \lambda f._\ldots$ The first argument of the function $f$ is the "name" of the function, used where a recursive application is intended. Therefore, all appearances of the first argument to $f$ should be replaced by a function, and this function should expect the rest of the arguments of $f$ (let's just assume $f$ takes one argument): FIX $\triangleq \lambda f._\ldots f~(\lambda y. _\ldots y)$ This is where I do not know how to "take a step" in my reasoning. The small ellipses indicate where my FIX is missing something (although I am only able to know that by comparing it to the "real" FIX). I already have read Types and Programming Languages , which does not attempt to derive it directly, and instead refers the reader to The Little Schemer for a derivation. I have read that, too, and its "derivation" was not so helpful. Moreover, it is less of a direct derivation and more of a use of a very specific example and an ad-hoc attempt to write a suitable recursive function in $\lambda$.
I haven't read this anywhere, but this is how I believe $Y$ could have been derived: Let's have a recursive function $f$, perhaps a factorial or anything else like that. Informally, we define $f$ as a pseudo-lambda term where $f$ occurs in its own definition: $$f = \ldots f \ldots f \ldots $$ First, we realize that the recursive call can be factored out as a parameter: $$f = \underbrace{(\lambda r . (\ldots r \ldots r \ldots))}_{M} f$$ Now we could define $f$ if we only had a way to pass it as an argument to itself. This is not possible, of course, because we don't have $f$ at hand. What we have at hand is $M$. Since $M$ contains everything we need to define $f$, we can try to pass $M$ as the argument instead of $f$ and try to reconstruct $f$ from it later inside. Our first attempt looks like this: $$f = \underbrace{(\lambda r . (\ldots r \ldots r \ldots))}_{M} \underbrace{(\lambda r . (\ldots r \ldots r \ldots))}_{M}$$ However, this is not completely correct. Before, $f$ got substituted for $r$ inside $M$. But now we pass $M$ instead. We have to somehow fix all places where we use $r$ so that they reconstruct $f$ from $M$. Actually, this is not difficult at all: Now that we know that $f = M M$, everywhere we use $r$ we simply replace it by $(r r)$. $$f = \underbrace{(\lambda r . (\ldots (rr) \ldots (rr) \ldots))}_{M'} \underbrace{(\lambda r . (\ldots (rr) \ldots (rr) \ldots))}_{M'}$$ This solution is good, but we had to alter $M$ inside. This is not very convenient. We can do this more elegantly, without having to modify $M$, by introducing another $\lambda$ that passes to $M$ its argument applied to itself: By expressing $M'$ as $\lambda x.M(xx)$ we get $$f = (\lambda x.\underbrace{(\lambda r . (\ldots r \ldots r \ldots))}_{M}(xx)) (\lambda x.\underbrace{(\lambda r . (\ldots r \ldots r \ldots))}_{M}(xx))$$ This way, when $M$ is substituted for $x$, $MM$ is substituted for $r$, which is by definition equal to $f$. This gives us a non-recursive definition of $f$, expressed as a valid lambda term! The transition to $Y$ is now easy. We can take an arbitrary lambda term instead of $M$ and perform this procedure on it. So we can factor $M$ out and define $$Y = \lambda m . (\lambda x. m(xx)) (\lambda x.m(xx))$$ Indeed, $Y M$ reduces to $f$ as we defined it. Note: I've derived $Y$ as it is defined in the literature. The combinator you've described is a variant of $Y$ for call-by-value languages, sometimes also called $Z$. See this Wikipedia article.
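Since Python evaluates arguments eagerly (call-by-value), the $Z$ variant from the question can actually be run directly; here is a small illustrative sketch (the factorial step function and the names are just examples chosen here). The eta-expanded $\lambda y.\,x\,x\,y$ is exactly what delays the self-application; the plain $Y$ would loop forever under strict evaluation.

```python
# Z = λf.(λx. f (λy. x x y)) (λx. f (λy. x x y))
Z = lambda f: (lambda x: f(lambda y: x(x)(y)))(lambda x: f(lambda y: x(x)(y)))

# M = λr. <body of factorial, calling r for the recursive step>
fact_step = lambda r: lambda n: 1 if n == 0 else n * r(n - 1)

factorial = Z(fact_step)
print(factorial(5))  # 120
```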
{ "source": [ "https://cs.stackexchange.com/questions/9604", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/5291/" ] }
9,633
I'm currently reading a book on algorithms and complexity. At the moment I'm reading about computable and non-computable functions, and my book states that there are many more functions that are non-computable than computable; in fact, it says the majority are non-computable. In some sense I can intuitively accept that, but the book does not give a formal proof nor does it elaborate much on the topic. I just wanted to see a proof, or have someone here elaborate on it, so that I understand more rigorously why there are so many more non-computable functions than computable ones.
There are countably many computable functions: Each computable function has at least one algorithm. Each algorithm has a finite description using symbols from a finite set, e.g. finite binary strings using symbols $\{0,1\}$. The number of finite binary strings, denoted by $\{0,1\}^*$, is countable (i.e. the same as the number of natural numbers $\mathbb{N}$). Therefore there can be at most countably many computable functions. There are at least countably many computable functions since for each $c\in \{0,1\}^*$, the constant function $f(x)=c$ is computable. In other words, there is a correspondence between: the set of computable functions, the set of algorithms, $\{0,1\}^*$, the set of finite strings from $\{0,1\}$, and $\mathbb{N}$, the set of natural numbers. On the other hand, there are uncountably many functions over strings (or natural numbers). A function $f:\mathbb{N} \to \mathbb{N}$ (or $f:\{0,1\}^* \to \{0,1\}^*$) assigns a value to each input. Each of these values can be chosen independently of the others. So there are $\mathbb{N}^\mathbb{N}=2^\mathbb{N}$ possible functions. The number of functions over the natural numbers is equal to the number of real numbers. Since only countably many functions are computable, most of them are not. In fact the number of uncomputable functions is also $2^{\mathbb{N}}$. If you want to picture this intuitively, think about natural numbers and real numbers, or about finite binary strings and infinite binary strings. There are way more real numbers and infinite binary strings than natural numbers and finite strings. In other words $\mathbb{N} < 2^\mathbb{N}$ (for a proof of this fact see Cantor's diagonal argument and Cardinal arithmetic).
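To picture the countable half of the argument, here is a small illustrative Python sketch (not needed for the proof, just a concrete view of it): it lists every finite binary string exactly once, so program texts, which under some encoding are just such strings, can be matched up with the natural numbers.

```python
from itertools import count, product

def all_binary_strings():
    """Yield every finite binary string exactly once, shortest first:
    a bijection between {0,1}* and the natural numbers."""
    yield ""
    for n in count(1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

gen = all_binary_strings()
print([next(gen) for _ in range(7)])  # ['', '0', '1', '00', '01', '10', '11']
```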
{ "source": [ "https://cs.stackexchange.com/questions/9633", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/6834/" ] }
9,648
Why would a company like Twitter be interested in algebraic concepts like groups, monoids and rings? See their repository at github:twitter/algebird. All I could find is: Implementations of Monoids for interesting approximation algorithms, such as Bloom filter, HyperLogLog and CountMinSketch. These allow you to think of these sophisticated operations like you might numbers, and add them up in hadoop or online to produce powerful statistics and analytics. and in another part of the GitHub page: It was originally developed as part of Scalding's Matrix API, where Matrices had values which are elements of Monoids, Groups, or Rings. Subsequently, it was clear that the code had broader application within Scalding and on other projects within Twitter. What could this broader application be, within Twitter and for general interest? It seems like compositions of aggregations over databases have a monoid-like structure. Same question on Quora: What is Twitter's interest in abstract algebra (with algebird)? I have a math background but I'm not a computer scientist. It would be great to have "real-world" uses of monoids and semi-groups. These are normally considered useless theoretical constructs, and are ignored in many abstract algebra courses (for lack of anything interesting to say).
The main answer is that by exploiting semi-group structure, we can build systems that parallelize correctly without knowing the underlying operation (the user is promising associativity). By using Monoids, we can take advantage of sparsity (we deal with a lot of sparse matrices, where almost all values are a zero in some Monoid). By using Rings, we can do matrix multiplication over things other than numbers (which on occasion we have done). The algebird project itself (as well as the issue history) pretty clearly explains what is going on here: we are building a lot of algorithms for aggregation of large data sets, and leveraging the structure of the operations gives us a win on the systems side (which is usually the pain point when trying to productionize algorithms on 1000s of nodes). Solve the systems problems once for any Semigroup/Monoid/Group/Ring, and then you can plug in any algorithm without having to think about Memcache, Hadoop, Storm, etc...
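The parallelization point can be illustrated with a small sketch. Algebird itself is a Scala library, so the following Python is only a toy model of the idea (all names are invented here), but it shows why associativity is the only promise the user needs to make: contiguous chunks can be reduced independently, as they would be on separate machines, and the partial results combined afterwards.

```python
from functools import reduce

def chunked_reduce(op, identity, data, n_chunks=4):
    """Aggregate `data` with any associative `op` (a monoid with `identity`):
    reduce each contiguous chunk on its own, then combine the partial results.
    Associativity guarantees the same answer as a single left-to-right fold."""
    if not data:
        return identity
    size = (len(data) + n_chunks - 1) // n_chunks
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    partials = [reduce(op, chunk, identity) for chunk in chunks]
    return reduce(op, partials, identity)

print(chunked_reduce(lambda a, b: a + b, 0, list(range(1, 101))))  # 5050 (sum monoid)
print(chunked_reduce(max, float("-inf"), [7, 3, 9, 2]))            # 9    (max monoid)
print(chunked_reduce(lambda a, b: a + b, "", list("monoid")))      # 'monoid' (concatenation:
                                                                   #  associative, not commutative)
```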
{ "source": [ "https://cs.stackexchange.com/questions/9648", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/3131/" ] }
9,676
In many implementations of depth-first search that I have seen (for example: here), the code distinguishes between a grey vertex (discovered, but not all of its neighbours have been visited) and a black vertex (discovered, and all of its neighbours have been visited). What is the purpose of this distinction? It seems that the DFS algorithm will never visit an already-visited vertex, regardless of whether it is grey or black.
When doing a DFS, any node is in one of three states: before being visited, while its descendants are being recursively visited, and after all its descendants have been visited (i.e., when we return to its parent, the wrap-up phase). The three colors correspond to these three states. One of the reasons for mentioning colors and the times of visit and return is to make these distinctions explicit for better understanding. Of course, there are actual uses of these colors. Consider a directed graph $G$. Suppose you want to check $G$ for the existence of cycles. In an undirected graph, if the node under consideration has a black or grey neighbor, it indicates a cycle (and the DFS does not visit it, as you mention). However, in the case of a directed graph, a black neighbor does not mean a cycle. For example, consider a graph with 3 vertices $A$, $B$, and $C$, with directed edges $A \to B$, $B \to C$, $A \to C$. Suppose the DFS starts at $A$, then visits $B$, then $C$. When it has returned to $A$, it then checks that $C$ has already been visited and is black. But there is no cycle in the graph. In a directed graph, a cycle is present if and only if a node is seen again before all its descendants have been visited. In other words, if a node has a neighbor which is grey, then there is a cycle (not when the neighbor is black). A grey node means we are currently exploring its descendants, and if one such descendant has an edge to this grey node, then there is a cycle. So, for cycle detection in directed graphs, you need to have 3 colors. There could be other examples too, but you should get the idea.
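A minimal Python sketch of this idea (only an illustration; the colour encoding and the adjacency-list representation are choices made here) could look as follows:

```python
WHITE, GREY, BLACK = 0, 1, 2   # unvisited / on the current path / fully explored

def has_cycle(graph):
    """Detect a cycle in a directed graph given as an adjacency-list dict.
    A GREY neighbour means we reached a vertex whose descendants are still
    being explored, i.e. a back edge, hence a cycle; a BLACK neighbour is
    harmless in a directed graph."""
    colour = {v: WHITE for v in graph}

    def visit(u):
        colour[u] = GREY
        for v in graph[u]:
            if colour[v] == GREY:                # back edge: cycle found
                return True
            if colour[v] == WHITE and visit(v):
                return True
        colour[u] = BLACK                        # all descendants done
        return False

    return any(visit(v) for v in graph if colour[v] == WHITE)

# The example from the answer: A -> B, B -> C, A -> C has no cycle ...
print(has_cycle({'A': ['B', 'C'], 'B': ['C'], 'C': []}))     # False
# ... but adding the edge C -> A creates one.
print(has_cycle({'A': ['B', 'C'], 'B': ['C'], 'C': ['A']}))  # True
```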
{ "source": [ "https://cs.stackexchange.com/questions/9676", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/6805/" ] }
9,756
Basically I am aware of three foundations for math:
1. Set theory
2. Type theory
3. Category theory
So in what ways are programming languages and foundations of mathematics related? EDIT The original question was "Programming languages based on foundations of math" with the added paragraph: And implementations of the theories:
1. Type theory in Coq
2. Set theory in SETL
3. Category theory in Haskell
Based on a suggestion this was changed to "How are programming languages and foundations of mathematics related?" Since this is one of those questions where I did not know enough about what I was asking but wanted to learn something, I am modifying the question to make it more valuable for learning and for others, yet leaving the details in so as not to make the current answer by Andrej Bauer seem off topic. Thanks for all the comments and the answer so far; I am learning from them.
[Note: this paragraph is now outdated.] The title of your question contains an unwarranted assumption, namely that programming languages are "based on foundations of mathematics". This is generally not the case, although the two areas do have important relationships. A more accurate statement would be that (some) programming languages were designed using foundational techniques. A better question to ask would be "how are programming languages and foundations of mathematics related?" The most general connection is embodied in the slogan proofs-as-programs, which can be made to work in several ways. The Curry-Howard correspondence is the most obvious one. With it we relate at once type theory, logic, and programming. But it should be emphasized that the Curry-Howard correspondence does not work very well in the presence of general recursion (because every type becomes inhabited), which every general-purpose programming language supports. A subtler way of making the slogan proofs-as-programs work is to use realizability. Here too we relate proofs and programs, but now the direction goes from proofs to programs: every proof gives a program, but not every program is necessarily a proof. The main example of a programming language based on a foundation is Agda, which simply is an implementation of dependent type theory. However, Agda is not a general-purpose programming language because it does not support general recursion. Every function in Agda is total, and there are computable functions which cannot be implemented in Agda. In practice programmers won't notice this, but they will notice that Agda does not allow undefined values, for example infinite loops. Coq is not a programming language but rather a proof assistant. However, it too has extraction capabilities which give programs from proofs. Proof assistants and programming languages should not be confused with each other. We should not forget that Prolog and other logic programming languages take their inspiration from the idea of computation as proof search. This of course relates them closely to logic. Haskell is a general-purpose programming language which is based on domain theory. That is to say, its semantics is domain-theoretic because it has to account for partial functions and recursion. The Haskell community has developed a number of techniques inspired by category theory, of which monads are best known (though they should not be confused with monads in the category-theoretic sense). More generally, advanced programming features are usually treated with a combination of domain theory and category theory, but this is not something that the Haskell programmer in the street is adept at. The so-called "syntactic category" of Haskell types is a layman's view of how Haskell and category theory correspond to each other. Set theory (classical or constructive) seems to inspire ideas in programming languages to a lesser extent. Of course, constructive set theory has its connection to programming through constructive logic. One important application of intuitionistic set theory to programming languages was given by Alex Simpson, who used it to make synthetic domain theory work. But this is quite advanced stuff; maybe see these slides. Jean-Louis Krivine has developed a very interesting brand of realizability for classical set theory. This seems a good way to relate classical set theory and programming. In summary, the theory of programming languages uses foundational techniques. This is not surprising, as we consider computation to be a fundamental concept.
But it is too naive to say that programming languages are "based" on a certain foundation. In fact, the trichotomy of foundations "set theory -- type theory -- category theory" is again just a useful high-level observation that can be made mathematically precise in various ways, but there is nothing necessary about it. It is a historical accident.
{ "source": [ "https://cs.stackexchange.com/questions/9756", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/268/" ] }