id
int64
1
141k
title
stringlengths
15
150
body
stringlengths
43
35.6k
tags
stringlengths
1
118
label
int64
0
1
1,597
Maximise sum of "non-overlapping" numbers in square array - help with proof
<p>A <a href="https://stackoverflow.com/questions/10378738/maximise-sum-of-non-overlapping-numbers-from-matrix">question was posted on Stack Overflow</a> asking for an algorithm to solve this problem:</p>&#xA;&#xA;<blockquote>&#xA; <p>I have a matrix (call it A) which is nxn. I wish to select a subset&#xA; (call it B) of points from matrix A. The subset will consist of n&#xA; elements, where one and only one element is taken from each row and&#xA; from each column of A. The output should provide a solution (B) such&#xA; that the sum of the elements that make up B is the maximum possible&#xA; value, given these constraints (eg. 25 in the example below). If&#xA; multiple instances of B are found (ie. different solutions which give&#xA; the same maximum sum) the solution for B which has the largest minimum&#xA; element should be selected.</p>&#xA; &#xA; <p>B could also be a selection matrix which is nxn, but where only the n&#xA; desired elements are non-zero.</p>&#xA; &#xA; <p>For example: if A =</p>&#xA;&#xA;<pre><code>|5 4 3 2 1|&#xA;|4 3 2 1 5|&#xA;|3 2 1 5 4|&#xA;|2 1 5 4 3|&#xA;|1 5 4 3 2|&#xA;</code></pre>&#xA; &#xA; <p>=> B would be</p>&#xA;&#xA;<pre><code> |5 5 5 5 5|&#xA;</code></pre>&#xA;</blockquote>&#xA;&#xA;<p>I <a href="https://stackoverflow.com/a/10387455/1191425">proposed a dynamic programming solution</a> which I suspect is as efficient as any solution is going to get. 
I've copy-pasted my proposed algorithm below.</p>&#xA;&#xA;<hr>&#xA;&#xA;<ul>&#xA;<li>Let $A$ be a square array of $n$ by $n$ numbers.</li>&#xA;<li>Let $A_{i,j}$ denote the element of $A$ in the <code>i</code>th row and <code>j</code>th column.</li>&#xA;<li>Let $S( i_1:i_2, j_1:j_2 )$ denote the optimal sum of non-overlapping numbers for a square subarray of $A$ containing the intersection of rows $i_1$ to $i_2$ and columns $j_1$ to $j_2$.</li>&#xA;</ul>&#xA;&#xA;<p>Then the optimal sum of non-overlapping numbers is denoted <code>S( 1:n , 1:n )</code> and is given as follows:</p>&#xA;&#xA;<p>$$S( 1:n , 1:n ) = \max \left \{ \begin{array}{l} S( 2:n , 2:n ) + A_{1,1} \\&#xA; S( 2:n , 1:n-1 ) + A_{1,n} \\&#xA; S( 1:n-1 , 2:n ) + A_{n,1} \\&#xA; S( 1:n-1 , 1:n-1 ) + A_{n,n} \\&#xA; \end{array} \right.$$</p>&#xA;&#xA;<pre><code>Note that S( i:i, j:j ) is simply Aij.&#xA;</code></pre>&#xA;&#xA;<p>That is, the optimal sum for a square array of size <code>n</code> can be determined by separately computing the optimal sum for each of the four sub-arrays of size <code>n-1</code>, and then maximising the sum of the sub-array and the element that was "left out".</p>&#xA;&#xA;<pre><code>S for |# # # #|&#xA; |# # # #|&#xA; |# # # #|&#xA; |# # # #|&#xA;&#xA;Is the best of the sums S for:&#xA;&#xA;|# | | #| |# # # | | # # #|&#xA;| # # #| |# # # | |# # # | | # # #|&#xA;| # # #| |# # # | |# # # | | # # #|&#xA;| # # #| |# # # | | #| |# |&#xA;</code></pre>&#xA;&#xA;<hr>&#xA;&#xA;<p>This is a very elegant algorithm and I strongly suspect that it is correct, but I can't come up with a way to <strong>prove</strong> it is correct.</p>&#xA;&#xA;<p>The main difficulty I am having is proving that the problem displays optimal substructure. I believe that if the four potential choices in each calculation are the <em>only</em> four choices, then this is enough to show optimal substructure. 
That is, I need to prove that this:</p>&#xA;&#xA;<pre><code>| # |&#xA;| # # #|&#xA;| # # #| &#xA;| # # #|&#xA;</code></pre>&#xA;&#xA;<p>Is not a valid solution, either because it's impossible (i.e. proof by contradiction) or because this possibility is already accounted for by one of the four "<code>n-1</code> square" variations.</p>&#xA;&#xA;<p>Can anyone point out any flaws in my algorithm, or provide a proof that it really does work?</p>&#xA;
algorithms dynamic programming check my algorithm
1
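The recurrence from the question above transcribes directly into a memoized sketch (illustrative Python, 0-indexed, names mine; note that whether this four-corner recurrence is actually optimal is precisely what the question asks to prove, so this code is a transcription, not a verdict):

```python
from functools import lru_cache

def max_nonoverlapping_sum(A):
    """Memoized transcription of the proposed four-corner recurrence.

    S(i1..i2, j1..j2) considers exactly the four size-(n-1) square
    subarrays obtained by dropping one corner, as in the question.
    """
    n = len(A)

    @lru_cache(maxsize=None)
    def S(i1, i2, j1, j2):
        if i1 == i2:                      # 1x1 subarray: S(i:i, j:j) = A[i][j]
            return A[i1][j1]
        return max(
            S(i1 + 1, i2, j1 + 1, j2) + A[i1][j1],  # take top-left corner
            S(i1 + 1, i2, j1, j2 - 1) + A[i1][j2],  # take top-right corner
            S(i1, i2 - 1, j1 + 1, j2) + A[i2][j1],  # take bottom-left corner
            S(i1, i2 - 1, j1, j2 - 1) + A[i2][j2],  # take bottom-right corner
        )

    return S(0, n - 1, 0, n - 1)
```

On the 5x5 example from the question this returns 25, matching the stated optimum; the standard provably-correct approach to this selection problem is the assignment problem (e.g. the Hungarian algorithm).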
1,602
Distributed Storage for Access and Preservation
<p>My organization wants to maintain multiple copies of data in order to preserve access in the case of localized disasters as well as for the purpose of long term preservation. Are there accepted formal models for determining the appropriate variety of media (eg tape, disk) and their placement in the network? Are currently operating distributed solutions (eg LOCKSS) viable long term solutions for large collections of data?</p>&#xA;
digital preservation distributed systems storage
0
1,603
Mapping Reductions to Complement of A$_{TM}$
<p>I have a general question about mapping reductions. I have seen several examples of reducing languages to $A_{TM}$</p>&#xA;&#xA;<p>where $A_{TM} = \{\langle M, w \rangle : M \text{ is a Turing machine which accepts string } w\}$</p>&#xA;&#xA;<p>which is great for proving undecidability. But say I want to prove unrecognizability instead. That is, I want to use the corollary that given $A \le_{m} B$, if $A$ is unrecognizable then $B$ is unrecognizable.</p>&#xA;&#xA;<p>So for any arbitrary unrecognizable language $C$ which can be reduced to $\overline{A_{TM}}$ (any example language would suffice for sake of example), how can I reduce $\overline{A_{TM}} \le_{m} C$?</p>&#xA;&#xA;<p>For simplicity, it suffices to consider TMs in $\overline{A_{TM}}$.</p>&#xA;&#xA;<p><strong>EDIT</strong></p>&#xA;&#xA;<p>For clarification, $\overline{A_{TM}} = \{ \langle M, w \rangle : M \text{ is a Turing machine which does not accept string } w \}$</p>&#xA;
computability proof techniques reductions
1
1,606
Restricted version of the Clique problem?
<p>Consider the following version of the Clique problem where the input is of size $n$ and we're asked to find a clique of size $k$. The restriction is that the decision procedure cannot change the input graph into any other representation and cannot use any other representation to compute its answer, other than $\log(n^k)$ extra bits beyond the input graph. The extra bits can be used for example in the brute-force algorithm to keep track of the status of the exhaustive search for a clique, but the decision procedure is welcome to use them in any other way that still decides the problem.</p>&#xA;&#xA;<p>Is anything known at this point about the complexity of this? Has any work been done on other restrictions of Clique, and if so, could you direct me to such work?</p>&#xA;
complexity theory time complexity
1
1,607
Reduction rule for IF?
<p>I'm working through Simon Peyton Jones' "The Implementation of Functional Programming Languages" and on page 20 I see:</p>&#xA;&#xA;<pre>&#xA;IF TRUE ((&#955;p.p) 3) &#8596; IF TRUE 3 (per &#946; red) (1)&#xA; &#8596; (&#955;x.IF TRUE 3 x) (per &#951; red) (2)&#xA; &#8596; (&#955;x.3) (3)&#xA;</pre>&#xA;&#xA;<p>Step 1 to 2 is explained as &#951;-conversion. But from 2 to 3 it says "The final step is the reduction rule for IF." I'm not sure what this reduction rule is. </p>&#xA;
logic programming languages lambda calculus term rewriting operational semantics
0
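For context on the step being asked about: the book treats IF as a built-in constant with its own reduction ("delta") rules, of roughly this shape (my paraphrase from memory, not a quotation from the book):

```latex
% Reduction (delta) rules for the built-in IF constant -- paraphrase:
\mathtt{IF}\ \mathtt{TRUE}\  \ t\ e \;\to\; t
\qquad\qquad
\mathtt{IF}\ \mathtt{FALSE}\ \ t\ e \;\to\; e
```

Applying the first rule under the binder rewrites $\lambda x.\,\mathtt{IF}\ \mathtt{TRUE}\ 3\ x$ to $\lambda x.\,3$, which is exactly the step from (2) to (3) in the question.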
1,609
Example of Soundness & Completeness of Inference
<p>Is the following example correct about whether an <em>inference</em> algorithm is <em>sound</em> and <em>complete</em>? </p>&#xA;&#xA;<p>Suppose we have needles a, b, c in a haystack, and have also an inference algorithm that is designed to find needles.</p>&#xA;&#xA;<ul>&#xA;<li><p><em>sound</em> - Only needles a, b and c are obtained.</p></li>&#xA;<li><p><em>complete</em> - Needles a, b and c are obtained. Other hay may also be obtained.</p></li>&#xA;</ul>&#xA;
logic
1
1,616
Irregularity of $\{a^ib^jc^k \mid \text{if } i=1 \text{ then } j=k \}$
<p>I read <a href="https://cs.stackexchange.com/questions/1027/using-pumping-lemma-to-prove-language-is-not-regular">on the site</a> how to use the pumping lemma, but I still don't see what is wrong with the way I'm using it to prove that the following language is not regular:</p>&#xA;&#xA;<p>$L = \{a^ib^jc^k \mid \text{if } i=1 \text{ then } j=k \}$</p>&#xA;&#xA;<p>For $i\neq1$ the language is obviously regular, but in the case where $i=1$ we get that the language is $a^1b^nc^n$. Now for every division $w=xyz$ such that $|y|&gt;0 , |xy|&lt; p$, where $p$ is the pumping constant, I get that the word $a^1b^pc^p$ would be out of the language. Since $|xy|&lt; p$, $y$ may contain only $a's$ or $b's$ or both. If $x= \epsilon$ and $y=a$, pump it once and you're out of the language; if it contains only $b's$, pump it once and you're out of the language; and if it contains both, pump it and you're out of the language again.</p>&#xA;&#xA;<p>So why is this language considered non-regular, yet its irregularity cannot be proved by the pumping lemma? Please point out my mistake. </p>&#xA;
formal languages regular languages pumping lemma
1
1,625
Invariant For Nested Loop in Matrix Multiplication Program
<p>I'm writing a graduate thesis on proving the correctness of a program for multiplying two matrices using Hoare logic. To do this, I need to find the invariants for the nested loops in this program:</p>&#xA;&#xA;<pre><code>for i = 1:n&#xA; for j = 1:n&#xA; for k = 1:n&#xA; C(i,j) = A(i,k)*B(k,j) + C(i,j);&#xA; end&#xA; end&#xA;end&#xA;</code></pre>&#xA;&#xA;<p>I've tried to find the invariant for the innermost loop first, but so far I haven't found one that works. Can someone help me find the invariants for the program above?</p>&#xA;
algorithms loop invariants correctness proof
0
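One standard candidate invariant for the innermost loop is that C(i,j) already holds its initial value plus the first k products. A sketch in Python (0-indexed; C0 denotes the initial value of C; the invariants in the comments are candidates to be proved, not established facts):

```python
def matmul_accumulate(A, B, C):
    """Computes C(i,j) += sum_k A(i,k)*B(k,j), annotated with candidate
    Hoare-logic invariants. C0 denotes the value of C on entry."""
    n = len(A)
    for i in range(n):
        # Candidate outer invariant: rows 0..i-1 of C equal C0 + (A x B)
        # on those rows; rows i..n-1 are still untouched (equal to C0).
        for j in range(n):
            # Candidate middle invariant: in row i, entries 0..j-1 are
            # complete; entries j..n-1 of row i still equal C0.
            for k in range(n):
                # Candidate inner invariant:
                #   C[i][j] == C0[i][j] + sum(A[i][t] * B[t][j] for t < k)
                C[i][j] = A[i][k] * B[k][j] + C[i][j]
    return C
```

On loop exit (k = n) the inner invariant gives the full dot product, which discharges the middle invariant's next step, and so on outward.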
1,626
Efficient data structures for building a fast spell checker
<p>I'm trying to write a spell-checker which should work with a pretty large dictionary. I really want an efficient way to index my dictionary data so that it can be queried using a <a href="http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance" rel="noreferrer">Damerau-Levenshtein</a> distance to determine which words are closest to the misspelled word.</p>&#xA;&#xA;<p>I'm looking for a data structure that would give me the best compromise between space complexity and runtime complexity.</p>&#xA;&#xA;<p>Based on what I found on the internet, I have a few leads regarding what type of data structure to use:</p>&#xA;&#xA;<h2>Trie</h2>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/KhvoF.png" alt="trie-500px"></p>&#xA;&#xA;<p>This is my first thought and looks pretty easy to implement and should provide fast lookup/insertion. Approximate search using Damerau-Levenshtein should be simple to implement here as well. But it doesn't look very efficient in terms of space complexity since you most likely have a lot of overhead with pointers storage.</p>&#xA;&#xA;<h2>Patricia Trie</h2>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/EJYB0.png" alt="trie-500px"></p>&#xA;&#xA;<p>This seems to consume less space than a regular Trie since you're basically avoiding the cost of storing the pointers, but I'm a bit worried about data fragmentation in case of very large dictionaries like what I have.</p>&#xA;&#xA;<h2>Suffix Tree</h2>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/uXH1b.png" alt="suffix-500px"></p>&#xA;&#xA;<p>I'm not sure about this one, it seems like some people do find it useful in text mining, but I'm not really sure what it would give in terms of performance for a spell checker.</p>&#xA;&#xA;<h2>Ternary Search Tree</h2>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/X8hPY.png" alt="tst"></p>&#xA;&#xA;<p>These look pretty nice and in terms of complexity should be close (better?) 
to Patricia Tries, but I'm not sure whether fragmentation would be better or worse than with Patricia Tries.</p>&#xA;&#xA;<h2>Burst Tree</h2>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/9jn1m.png" alt="burst"></p>&#xA;&#xA;<p>This seems kind of hybrid and I'm not sure what advantage it would have over Tries and the like, but I've read several times that it's very efficient for text mining.</p>&#xA;&#xA;<hr>&#xA;&#xA;<p>I would like to get some feedback as to which data structure would be best to use in this context and what makes it better than the other ones. If I'm missing some data structures that would be even more appropriate for a spell-checker, I'm very interested as well.</p>&#xA;
data structures strings string metrics
0
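As a concrete starting point, the Trie option above can be combined with the classic row-by-row edit-distance walk (illustrative Python, names mine; it uses plain Levenshtein distance, omitting the transposition case of Damerau-Levenshtein, and the `'$'` end-of-word marker is my own convention):

```python
def build_trie(words):
    """Plain dict-of-dicts trie; '$' marks the end of a word."""
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node['$'] = True
    return root

def search(trie, word, max_dist):
    """All dictionary words within Levenshtein distance max_dist of word.

    Walks the trie depth-first, carrying one dynamic-programming row per
    node, and prunes any branch whose entire row already exceeds max_dist.
    """
    results = []
    first_row = list(range(len(word) + 1))  # distance from "" to each prefix

    def walk(node, ch, prev_row, prefix):
        cols = len(word) + 1
        row = [prev_row[0] + 1]
        for c in range(1, cols):
            insert_cost = row[c - 1] + 1
            delete_cost = prev_row[c] + 1
            replace_cost = prev_row[c - 1] + (word[c - 1] != ch)
            row.append(min(insert_cost, delete_cost, replace_cost))
        if row[-1] <= max_dist and '$' in node:
            results.append((prefix, row[-1]))
        if min(row) <= max_dist:            # prune: no completion can get closer
            for nxt, child in node.items():
                if nxt != '$':
                    walk(child, nxt, row, prefix + nxt)

    for ch, child in trie.items():
        if ch != '$':
            walk(child, ch, first_row, ch)
    return sorted(results)
```

The pruning on `min(row)` is what makes the trie pay off: whole subtrees of the dictionary are skipped once every cell of the current row exceeds the distance budget.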
1,632
All soldiers should shoot at the same time
<p>When I was a student, I saw a problem in a digital systems/logic design textbook, about N soldiers standing in a row, and want to shoot at the same time. A more difficult version of the problem was that the soldiers stand in a general network instead of a row. I am sure this is a classical problem, but I cannot remember its name. Can you remind me?</p>&#xA;
algorithms distributed systems clocks
1
1,634
Target-Value Search (& II)
<p>[<em>previously appearing in cstheory, it was closed there and introduced here instead</em>]</p>&#xA;&#xA;<p>Given an edge-weighted graph $G=(V,E)$ the problem of finding the shortest path is known to be in P ---and indeed a simple approach would be Dijkstra's algorithm which can solve this problem in $O(V^2)$. A similar problem is to find the maximum path in $G$ from a source node to a target node and this can be solved with Integer Programming so that, as far as I know, this is not known to be in P.</p>&#xA;&#xA;<p>Now, the problem of finding a path in $G$ such that it deviates the minimum from a given target value (typically larger than the optimal distance but less than the maximum distance that separates the source and target nodes) has been conjectured to be in EXPTIME (see section "Conventions" of <a href="http://search-conference.org/index.php/Main/SOCS09program" rel="nofollow">A depth-first approach to target-value search</a> in the proceedings of SoCS 2009). In particular, this paper addresses this particular problem for directed acyclic graphs (DAGs). A previous work is <a href="http://www.uwosh.edu/faculty_staff/furcyd/search_symposium_2008/schedule.html" rel="nofollow">Heuristic Search for Target-Value Path Problem</a>. 
There is even a US Patent on this algorithm <a href="http://www.google.es/patents?hl=es&amp;lr=&amp;vid=USPATAPP12497353&amp;id=gojwAAAAEBAJ&amp;oi=fnd&amp;dq=%22depth-first+search+for+target+value+problems%22&amp;printsec=abstract#v=onepage&amp;q=%22depth-first%20search%20for%20target%20value%20problems%22&amp;f=false" rel="nofollow">US 2011/0004625</a>.</p>&#xA;&#xA;<p>I've been searching for related problems in other fields of Computer Science and Mathematics and, strikingly, I have found none, though this problem is clearly relevant in practice ---there are tons of opportunities to look for a specific target value instead of the minimum or the maximum path.</p>&#xA;&#xA;<p>Do you know of problems related to this one, or additional bibliographical references on it? Any information on this problem, including studies of its complexity, would be very welcome.</p>&#xA;&#xA;<p><strong>Note</strong>: as already pointed out by Jeffe in cstheory, proving this problem to be in EXPTIME is trivial and the authors probably meant EXPTIME-complete.</p>&#xA;
algorithms complexity theory reference request search algorithms
0
1,636
Reason to learn propositional & predicate logic
<p>I can understand why computer scientists, and software engineers generally, should study basic logic as a foundation. </p>&#xA;&#xA;<p>But are there any tasks/jobs that explicitly require this knowledge, other than tasks that require some kind of knowledge representation using a <code>Knowledge Base</code>? I want to hear about concrete types of tasks, rather than conceptual responses.</p>&#xA;&#xA;<p>The reason I ask is just curiosity. While CS students have to spend a certain amount of time on this subject, some practicality-intensive courses (e.g. <a href="https://www.ai-class.com/">AI-Class</a>) skip this topic entirely. And I wonder whether, for example, knowing <code>predicate logic</code> might help with drawing an <code>ER diagram</code> but might not be a requirement.</p>&#xA;&#xA;<hr>&#xA;&#xA;<p>Update 5/27/2012: Thanks for the answers. Now I think I totally understand &amp; agree with the importance of <code>logic</code> in CS, with its vast range of applications. I picked the best answer mainly for the impressive solution to <code>Windows</code>' blue-screen issue.</p>&#xA;
logic
1
1,640
Approximate minimum-weighted tree decomposition on complete graphs
<p>Say I have a weighted undirected complete graph $G = (V, E)$. Each edge $e = (u, v, w)$ is assigned with a positive weight $w$. I want to calculate the minimum-weighted $(d, h)$-tree-decomposition. By $(d, h)$-tree-decomposition, I mean to divide the vertices $V$ into $k$ trees, such that the height of each tree is $h$, and each non-leaf node has $d$ children. </p>&#xA;&#xA;<p>I know it is definitely $\text{NP}$-Hard, since minimum $(1, |V|-1)$-tree-decomposition is the minimum Hamilton path. But are there any good approximation algorithms?</p>&#xA;
algorithms complexity theory graphs approximation
0
1,643
How can we assume that basic operations on numbers take constant time?
<p>Normally in algorithms we do not care about comparison, addition, or subtraction of numbers -- we assume they run in time $O(1)$. For example, we assume this when we say that comparison-based sorting is $O(n\log n)$, but when numbers are too big to fit into registers, we normally represent them as arrays so basic operations require extra calculations per element.</p>&#xA;&#xA;<p>Is there a proof showing that comparison of two numbers (or other primitive arithmetic functions) can be done in $O(1)$? If not why are we saying that comparison based sorting is $O(n\log n)$?</p>&#xA;&#xA;<hr>&#xA;&#xA;<p><em>I encountered this problem when I answered a SO question and I realized that my algorithm is not $O(n)$ because sooner or later I should deal with big-int, also it wasn't pseudo polynomial time algorithm, it was $P$.</em></p>&#xA;
algorithms complexity theory algorithm analysis time complexity reference question
1
1,647
How to scale down parallel complexity results to constantly many cores?
<p>I have had problems accepting the complexity theoretic view of "efficiently solved by parallel algorithm" which is given by the class <a href="https://en.wikipedia.org/wiki/NC_%28complexity%29">NC</a>:</p>&#xA;&#xA;<blockquote>&#xA; <p>NC is the class of problems that can be solved by a parallel algorithm in time $O(\log^cn)$ on $p(n) \in O(n^k)$ processors with $c,k \in \mathbb{N}$.</p>&#xA;</blockquote>&#xA;&#xA;<p>We can assume a <a href="https://en.wikipedia.org/wiki/Parallel_random_access_machine">PRAM</a>.</p>&#xA;&#xA;<p>My problem is that this does not seem to say much about "real" machines, that is machines with a finite number of processors. Now I am told that "it is known" that we can "efficiently" simulate an $O(n^k)$ processor algorithm on $p \in \mathbb{N}$ processors.</p>&#xA;&#xA;<p>What does "efficiently" mean here? Is this folklore or is there a rigorous theorem which quantifies the overhead caused by simulation?</p>&#xA;&#xA;<p>What I am afraid happens is that I have a problem which has a sequential $O(n^k)$ algorithm and also an "efficient" parallel algorithm which, when simulated on $p$ processors, also takes $O(n^k)$ time (which is all that can be expected on this granularity level of analysis if the sequential algorithm is asymptotically optimal). In this case, there is no speedup whatsoever as far as we can see; in fact, the simulated parallel algorithm may be <em>slower</em> than the sequential algorithm. That is, I am really looking for statements more precise than $O$-bounds (or a declaration of absence of such results).</p>&#xA;
complexity theory reference request parallel computing
1
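The "efficient simulation" folklore alluded to in the question is usually Brent's scheduling principle (my summary, not part of the question): a PRAM algorithm with total work $W(n)$ (operations summed over all processors) and parallel time $T_\infty(n)$ can be executed on $p$ processors in time

```latex
% Brent's scheduling principle (summary):
T_p(n) \;=\; O\!\left( \frac{W(n)}{p} \;+\; T_\infty(n) \right)
```

For fixed constant $p$ this says the simulation is work-preserving up to constant factors, but it promises no asymptotic speedup over a sequential algorithm running in $\Theta(W(n))$, which is consistent with the concern expressed in the question; sharper statements require constant-factor (not just $O$-bound) analyses.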
1,652
The operator $A(L)= \{w \mid ww \in L\}$
<p>Consider the operator $A(L)= \{w \mid ww \in L\}$. Apparently, the class of context free languages is not closed against $A$. Still, after a lot of thinking, I can't find any CFL for which $A(L)$ wouldn't be CFL. </p>&#xA;&#xA;<p>Does anyone have an idea for such a language?</p>&#xA;
formal languages context free closure properties
1
1,662
Why are lambda-abstractions the only terms that are values in the untyped lambda calculus?
<p>I am confused about the following claim: "The only values in the untyped lambda calculus are lambda-abstractions".</p>&#xA;&#xA;<p>Why are the other terms not values? What does it mean for a lambda-abstraction to be a value? The first thing that came to my mind was that maybe lambda-abstractions are the only possible normal forms, but this is not true of course, e.g. $(\lambda x.\; x)\;y \to y$.</p>&#xA;&#xA;<p>Can someone enlighten me?</p>&#xA;
logic lambda calculus
1
1,665
Frame Pointers in Assembler
<p>I am currently learning assembly programming on Wombat 4 and I am looking at frame pointers. I understand exactly what a frame pointer is: it is a register used to access parameters on the stack. But I'm confused about how frame pointers affect the program counter and why they are preferred over normal registers. </p>&#xA;&#xA;<p>Could someone explain, please? </p>&#xA;
computer architecture compilers
1
1,666
Please explain this formal definition of computation
<p>I am trying to attack TAOCP once again; given the sheer literal heaviness of the volumes I have trouble committing to it seriously. In TAOCP Vol. 1, page 8, Basic Concepts, Knuth writes:</p>&#xA;&#xA;<blockquote>&#xA; <p>Let $A$ be a finite set of letters. Let $A^*$ be the set of all strings in $A$ (the set of all ordered sequences $x_1$ $x_2$ ... $x_n$ where $n \ge 0$ and $x_j$ is in $A$ for $1 \le j \le n$). The idea is to encode the states of the computation so that they are represented by strings of $A^*$ . Now let $N$ be a non-negative integer and $Q$ (the state) be the set of all $(\sigma, j)$, where $\sigma$ is in $A^*$ and $j$ is an integer $0 \le j \le N$; let $I$ (the input) be the subset of $Q$ with $j=0$ and let $\Omega$ (the output) be the subset with $j = N$. If $\theta$ and $\sigma$ are strings in $A^*$, we say that $\theta$ occurs in $\sigma$ if $\sigma$ has the form $\alpha \theta \omega$ for strings $\alpha$ and $\omega$. To complete our definition, let $f$ be a function of the following type, defined by the strings $\theta_j$, $\phi_j$ and the integers $a_j$, $b_j$ for $0 \le j \le N$:</p>&#xA; &#xA; <ul>&#xA; <li>$f((\sigma, j)) = (\sigma, a_j)$ if $\theta_j$ does not occur in $\sigma$</li>&#xA; <li>$f((\sigma, j)) = (\alpha \phi_j \omega, b_j)$ if $\alpha$ is the shortest possible string for which $\sigma = \alpha \theta_j \omega$</li>&#xA; <li>$f((\sigma,N)) = (\sigma, N)$</li>&#xA; </ul>&#xA;</blockquote>&#xA;&#xA;<p>Not being a computer scientist, I have trouble grasping the whole passage. I kind of get the idea behind a system of opcodes, but I haven't progressed effectively in understanding. I think that the main problem is that I don't know how to read it effectively. </p>&#xA;&#xA;<p>Would it be possible to explain the passage above so that I can understand it, and give me a strategy for getting into the logic of interpreting these statements?</p>&#xA;
formal languages turing machines computation models
0
1,669
Connection between KMP prefix function and string matching automaton
<p>Let $A_P = (Q,\Sigma,\delta,0,\{m\})$ be the <em>string matching automaton</em> for pattern $P \in \Sigma^m$, that is </p>&#xA;&#xA;<ul>&#xA;<li>$Q = \{0,1,\dots,m\}$</li>&#xA;<li>$\delta(q,a) = \sigma_P(P_{0,q}\cdot a)$ for all $q\in Q$ and $a\in \Sigma$</li>&#xA;</ul>&#xA;&#xA;<p>with $\sigma_P(w)$ the length of the longest prefix of $P$ that is a suffix of $w$, that is</p>&#xA;&#xA;<p>$\qquad \displaystyle \sigma_P(w) = \max \left\{k \in \mathbb{N}_0 \mid P_{0,k} \sqsupset w \right\}$.</p>&#xA;&#xA;<p>Now, let $\pi$ be the <em>prefix function</em> from the <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Knuth%E2%80%93Morris%E2%80%93Pratt_algorithm" rel="nofollow">Knuth-Morris-Pratt algorithm</a>, that is</p>&#xA;&#xA;<p>$\qquad \displaystyle \pi_P(q)= \max \{k \mid k &lt; q \wedge P_{0,k} \sqsupset P_{0,q}\}$.</p>&#xA;&#xA;<p>As it turns out, one can use $\pi_P$ to compute $\delta$ quickly; the central observation is:</p>&#xA;&#xA;<blockquote>&#xA; <p>Assume the above notions and $a \in \Sigma$. For $q \in \{0,\dots,m\}$ with $q = m$ or $P_{q+1} \neq a$, it holds that</p>&#xA; &#xA; <p>$\qquad \displaystyle \delta(q,a) = \delta(\pi_P(q),a)$</p>&#xA;</blockquote>&#xA;&#xA;<p>But how can I prove this?</p>&#xA;&#xA;<hr>&#xA;&#xA;<p>For reference, this is how you compute $\pi_P$:</p>&#xA;&#xA;<pre><code>m ← length[P]&#xA;π[0] ← 0&#xA;k ← 0&#xA;for q ← 1 to m − 1 do&#xA;  while k &gt; 0 and P[k + 1] ≠ P[q] do&#xA;    k ← π[k]&#xA;  end while&#xA;  if P[k + 1] = P[q] then&#xA;    k ← k + 1&#xA;  end if&#xA;  π[q] ← k&#xA;end for&#xA;&#xA;return π&#xA;</code></pre>&#xA;
algorithms finite automata strings searching
1
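For experimentation with the claim in the quote, here is an illustrative 0-indexed Python version (names and indexing conventions mine): `prefix_function` computes $\pi$, `sigma` is the specification $\sigma_P$ as a brute-force check, and `delta` computes the transition by iterating exactly the identity $\delta(q,a) = \delta(\pi_P(q),a)$:

```python
def prefix_function(P):
    """pi[q] = length of the longest proper prefix of P[:q+1] that is
    also a suffix of it (0-indexed analogue of the question's pi)."""
    m = len(P)
    pi = [0] * m
    k = 0
    for q in range(1, m):
        while k > 0 and P[k] != P[q]:
            k = pi[k - 1]
        if P[k] == P[q]:
            k += 1
        pi[q] = k
    return pi

def sigma(P, w):
    """Length of the longest prefix of P that is a suffix of w
    (the specification, computed by brute force)."""
    for k in range(min(len(P), len(w)), -1, -1):
        if P[:k] == w[len(w) - k:]:
            return k

def delta(P, q, a, pi):
    """Automaton transition delta(q, a), computed by repeatedly applying
    the quoted identity: while q = m or P[q] != a (0-indexed P[q] is the
    question's P_{q+1}), replace q by pi(q)."""
    m = len(P)
    while q == m or P[q] != a:
        if q == 0:
            return 0          # no prefix of P matches a single mismatching char
        q = pi[q - 1]         # 0-indexed lookup of the question's pi_P(q)
    return q + 1              # one character extends the matched prefix
```

Checking `delta(P, q, a, pi) == sigma(P, P[:q] + a)` for all states and characters is a useful sanity check before attempting the proof.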
1,671
Connection between castability and convexity
<p>I am wondering if there is any connection between convex polygons and castable objects. What can we say about the castability of an object if we know that the object is a convex polygon, and vice versa?</p>&#xA;<p>Let's gather together a few basic things that we have to know.</p>&#xA;<blockquote>&#xA;<p>The object is castable if it can be removed from the mold.</p>&#xA;<p>The polyhedron P can be removed from its mold by a translation in direction <span class="math-container">$\vec{d}$</span> if and only if <span class="math-container">$\vec{d}$</span> makes an angle of at least <span class="math-container">$90^{\circ}$</span> with the outward normal of all ordinary facets of P.</p>&#xA;</blockquote>&#xA;<p>For an arbitrary object, testing for castability has time complexity <span class="math-container">$O(n^2)$</span>. In my opinion, for a convex polygon it could be improved to linear time, because for every new top facet we should test that the vector <span class="math-container">$\vec{d}$</span> makes an angle of at least <span class="math-container">$90^{\circ}$</span> with the outward normals not of all facets, but only of the two adjacent ordinary facets of P.</p>&#xA;<p>If this is true, at least we have an improvement in testing for castability in the case of a convex polygon.</p>&#xA;<p>What else can we state about castability and convexity? It is especially interesting to know whether castability tells us something about convexity.</p>&#xA;
complexity theory time complexity computational geometry
1
1,672
Line separates two sets of points
<p>Is there a way to identify whether two sets of points can be separated by a line?</p>&#xA;&#xA;<blockquote>&#xA; <p>We have two sets of points $A$ and $B$. Is there a line that separates $A$ and $B$, such that all points of $A$ and only $A$ are on one side of the line, and all points of $B$ and only $B$ are on the other side?</p>&#xA;</blockquote>&#xA;&#xA;<p>The most naive algorithm I came up with is building the convex polygons for $A$ and $B$ and testing them for intersection. It looks like the time complexity for this should be $O(n\log h)$, as for constructing a convex polygon. Actually I am not expecting any improvements in time complexity; I am not sure it can be improved at all. But at least there should be a more beautiful way to determine if there is such a line.</p>&#xA;
algorithms machine learning computational geometry
1
1,680
Is WPA2 with pre-shared key an example of a zero-knowledge proof?
<p>When setting up an access point and selecting WPA2, one must manually enter a pre-shared key (a password), PSK, into both the AP and the STA. </p>&#xA;&#xA;<p>Both parties, AP and STA, must authenticate each other. But they have to do so without revealing the PSK. Both have to prove to the other party that they know the PSK without actually sending it. </p>&#xA;&#xA;<p>Is that an example of a <a href="http://en.wikipedia.org/wiki/Zero-knowledge_proof">zero-knowledge proof</a>?</p>&#xA;&#xA;<p>I thought it was, but nothing legit shows up when I google for zero-knowledge proof and WPA2 or EAP-PSK (the authentication method used). </p>&#xA;
cryptography security authentication
0
1,681
Big-Endian/Little-Endian argument - paper by Danny Cohen
<p>Reading a book I was redirected to <a href="http://www.ietf.org/rfc/ien/ien137.txt" rel="nofollow">"On holy wars and a plea for peace"</a>, a paper by Danny Cohen which covers the "holy war" between big-endians and little-endians concerning byte order.</p>&#xA;&#xA;<p>Reaching the summary of the memory section I got confused, as the author says:</p>&#xA;&#xA;<blockquote>&#xA; <p>To the best of my knowledge only the Big-Endians of Blefuscu have&#xA; built systems with a consistent order which works across &#xA; chunk-boundaries, registers, instructions and memories. I<br>&#xA; failed to find a Little-Endians' system which is totally&#xA; consistent.</p>&#xA;</blockquote>&#xA;&#xA;<p>This seems to contradict his previous text sections covering little-endian:</p>&#xA;&#xA;<p>e.g.</p>&#xA;&#xA;<blockquote>&#xA; <p>When they add the bit order and the byte order they get:</p>&#xA;&#xA;<pre><code> ...|---word2---|---word1---|---word0---|&#xA; ....|C3,C2,C1,C0|C3,C2,C1,C0|C3,C2,C1,C0|&#xA; .....|B31......B0|B31......B0|B31......B0|&#xA;</code></pre>&#xA; &#xA; <p>In this regime, when word W(n) is shifted right, its LSB moves into&#xA; the MSB of word W(n-1).</p>&#xA; &#xA; <p>English text strings are stored in the same order, with the&#xA; first character in C0 of W0, the next in C1 of W0, and so on.</p>&#xA; &#xA; <p>This order is very consistent with itself, with the Hebrew language,&#xA; and (more importantly) with mathematics, because significance&#xA; increases with increasing item numbers (address).</p>&#xA;</blockquote>&#xA;&#xA;<p>He even later on says:</p>&#xA;&#xA;<blockquote>&#xA; <p>The Big-Endians struck again, and without any resistance got their&#xA; way. 
The decimal number 12345678 is stored in the VAX memory in this&#xA; order:</p>&#xA;&#xA;<pre><code> 7 8 5 6 3 4 1 2&#xA; ...|-------long0-------|&#xA; ....|--word1--|--word0--|&#xA; .....|-C1-|-C0-|-C1-|-C0-|&#xA; ......|B15....B0|B15....B0|&#xA;</code></pre>&#xA; &#xA; <p>This ugliness cannot be hidden even by the standard Chinese trick.</p>&#xA;</blockquote>&#xA;&#xA;<p><strong>How did the author get to this completely different conclusion on overall consistency?</strong> </p>&#xA;&#xA;<p>An answer does not have to be based only on the text; it may also draw on other sources that clarify how the statement is sound.</p>&#xA;
terminology computer architecture
1
1,682
Solving Recurrence Equations containing two Recursion Calls
<p>I am trying to find a $\Theta$ bound for the following recurrence equation:</p>&#xA;&#xA;<p>$$ T(n) = 2 T(n/2) + T(n/3) + 2n^2+ 5n + 42 $$ </p>&#xA;&#xA;<p>I figure the Master Theorem is inappropriate due to the differing numbers of subproblems and division factors. Also, recursion trees do not seem to work since there is no $T(1)$, or rather $T(0)$. </p>&#xA;
asymptotics recurrence relation master theorem
0
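For what it's worth, recurrences with unequal subproblem sizes like the one above are commonly handled with the Akra-Bazzi method rather than the Master Theorem. A sketch (my computation, not part of the question): choose $p$ with $2(1/2)^p + (1/3)^p = 1$, numerically $p \approx 1.365$; then

```latex
% Akra-Bazzi sketch: with p satisfying 2(1/2)^p + (1/3)^p = 1, p ~ 1.365,
T(n) \;=\; \Theta\!\left( n^{p} \left( 1 + \int_{1}^{n} \frac{2u^{2} + 5u + 42}{u^{p+1}} \, du \right) \right)
      \;=\; \Theta\!\left( n^{p} \cdot n^{2-p} \right)
      \;=\; \Theta\!\left( n^{2} \right),
% since the integrand is dominated by 2u^{1-p} and p < 2.
```

Intuitively, $2(n/2)^2 + (n/3)^2 = \tfrac{11}{18}n^2 < n^2$, so the work per level of the recursion shrinks geometrically and the top-level $\Theta(n^2)$ term dominates.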
1,689
Subset sum, pseudo-polynomial time dynamic programming solution?
<p>I found the P vs NP problem some time ago and I have recently worked on the subset sum problem. I have read the <a href="http://en.wikipedia.org/wiki/Subset_sum_problem" rel="nofollow noreferrer">Wikipedia article</a> on the Subset Sum problem as well as the question <a href="https://stackoverflow.com/questions/4355955/subset-sum-algorithm">Subset Sum Algorithm</a>. </p>&#xA;&#xA;<p>I have looked at the problem and found some solutions, but so far they seem to be NP; &#xA;I believe I can make a sufficiently fast algorithm in NP time.</p>&#xA;&#xA;<p>My problem is I am not good at theory, so it doesn't help me much to talk about the Cook-Levin Theorem or Non-Deterministic Turing Machines.</p>&#xA;&#xA;<p>What I would like is an explanation of the pseudo-polynomial time dynamic programming solution to subset sum that is on Wikipedia.</p>&#xA;&#xA;<p>I have read it and I believe I understand the general concept of why it is NP instead of P (related to the size of the input rather than the operations with it),&#xA;but I do not understand the algorithm.</p>&#xA;&#xA;<p>I would appreciate it if someone would provide an example with some numbers and how it works. It would help me a lot because it would:</p>&#xA;&#xA;<ul>&#xA;<li>Give me ideas to improve my future algorithm</li>&#xA;<li>Help me understand intuitively when an algorithm is pseudo-polynomial instead of NP.</li>&#xA;</ul>&#xA;
algorithms dynamic programming
0
1,692
Prove fingerprinting
<p>Let $a \neq b$ be two integers from the interval $[1, 2^n].$ Let $p$ be a random prime with $ 1 \le p \le n^c.$ Prove that&#xA;$$\text{Pr}_{p \in \mathsf{Primes}}\{a \equiv b \pmod{p}\} \le c \ln(n)/(n^{c-1}).$$</p>&#xA;&#xA;<p>Hint: As a consequence of the prime number theorem, exactly $n/ \ln(n) \pm o(n/\ln(n))$ many numbers from $\{ 1, \ldots, n \}$ are prime.</p>&#xA;&#xA;<p>Conclusion: we can compress $n$ bits to $O(\log(n))$ bits and get a quite small false-positive rate.</p>&#xA;&#xA;<p>My question is: how can I prove that $$\text{Pr}_{p \in \mathsf{Primes}}\{a \equiv b \pmod{p}\} \le c \ln(n)/(n^{c-1})$$?</p>&#xA;
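A hedged sketch of the standard argument: $a \equiv b \pmod{p}$ iff $p \mid (a-b)$, and since $0 < |a-b| < 2^n$, the difference has at most $n$ distinct prime factors. Applying the hint to $\{1, \ldots, n^c\}$, there are roughly $n^c/\ln(n^c) = n^c/(c \ln n)$ primes to draw from, so

```latex
\Pr_{p \in \mathsf{Primes}}\{a \equiv b \pmod{p}\}
  \le \frac{\#\{\text{prime divisors of } a-b\}}{\#\{\text{primes} \le n^c\}}
  \le \frac{n}{n^c/(c \ln n)}
  = \frac{c \ln(n)}{n^{c-1}}.
```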
probability theory information theory coding theory number theory
1
1,693
A faster, leaner JavaScript for scientific computing: what features should I keep?
<p>Here I'm really interested in lowering barriers to mathematical education.</p>&#xA;&#xA;<p>Target:</p>&#xA;&#xA;<p>I'd like to see created for the JavaScript community an equivalent of the Python-based/linked <strong>scientific and high-performance computing</strong> libraries (great lists of which are available through <a href="http://sagemath.org/download-packages.html" rel="noreferrer">Sage</a> and <a href="http://wiki.python.org/moin/NumericAndScientific" rel="noreferrer">otherwise</a>). And I want that because I'd like to make it easy for people who learn JavaScript to get into scientific and numerical computing without having to learn Python (&amp; company). (I know it's easy to learn Python, as I basically did it at some point, but this suggests that perhaps it'll be easy to compile some restricted subset of JavaScript to Python.)</p>&#xA;&#xA;<p>Hypothesised method:</p>&#xA;&#xA;<p>I'm primarily interested in a new language with minimal difference from JavaScript, because the market ("human compilers") I'm targeting is programmers who already know JavaScript. What I want to give those people is a minimally different language in which to write code that compiles to faster C, in the manner that RPython and Cython do for Python. I'm willing to throw out a lot of JavaScript features; I just want to be careful to add a minimum number of features back in. I'll definitely be looking at Lua, Dart, ECMA Harmony (which has no formal date of release, or am I mistaken?), etc., as these all closely resemble contemporary (2012) implementations of JavaScript.</p>&#xA;&#xA;<p>Questionable Motivations:</p>&#xA;&#xA;<p>I'm personally willing to learn any language/toolset that gets things done faster (I'm learning Erlang myself, for this), but here, I am specifically interested in lowering the bar (sorry) for other people who may not have such willingness. 
This is just one of those "want to have my cake, and eat it too, so I am putting some time into researching the problem" situations. I have very limited prior experience in computer language design, but so far from a hacking-the-ecosystem point of view, the problem seems interesting enough to study, so, I hope to be doing more of that soon.</p>&#xA;
programming languages compilers performance
0
1,694
Is it possible to use dynamic programming to factor numbers
<p>Let's say I am trying to break all the numbers from 1 to N down into their prime factors. Once I have the factors from 1 to N-1, is there an algorithm to give me the factors of 1 to N using dynamic programming?</p>&#xA;
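One concrete instance of this idea (an editor's sketch, not necessarily the algorithm the asker has in mind): a smallest-prime-factor sieve. Factoring $k$ reuses the already-stored factorisation of $k / \mathrm{spf}(k)$, which is exactly the bottom-up, reuse-smaller-results flavour the question asks about:

```python
def factorizations(n):
    """Return a list where result[k] is the prime factorization of k
    (as a list of primes), for 2 <= k <= n, built bottom-up.

    spf[k] holds the smallest prime factor of k, computed with a
    sieve-of-Eratosthenes variant. The factors of k are then spf[k]
    followed by the already-computed factors of k // spf[k].
    """
    spf = list(range(n + 1))              # spf[k] = k initially
    for p in range(2, int(n ** 0.5) + 1):
        if spf[p] == p:                   # p is prime
            for m in range(p * p, n + 1, p):
                if spf[m] == m:
                    spf[m] = p
    factors = [[] for _ in range(n + 1)]
    for k in range(2, n + 1):
        factors[k] = [spf[k]] + factors[k // spf[k]]
    return factors
```

For example, `factorizations(12)[12]` reuses the stored factors of 6, which in turn reuse those of 3, yielding `[2, 2, 3]`.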
algorithms dynamic programming factoring
0
1,697
Are there undecidable properties of non-turing-complete automata?
<p>Are there undecidable properties of linear bounded automata (avoiding the empty set language trick)? What about for a deterministic finite automaton (intractability aside)?</p>&#xA;&#xA;<p>I would like to get an example (if possible) of an undecidable problem that is defined <em>without using Turing machines</em> explicitly.</p>&#xA;&#xA;<p>Is Turing completeness of a model necessary to support uncomputable problems?</p>&#xA;
computability automata undecidability
1
1,698
Find the minimal number of runs to visit every edge of a directed graph
<p>I am looking for an algorithm to find a minimal traversal of a directed graph of the following type. Two vertices are given, a start vertex and a terminating vertex. The traversal consists of several runs; each run is a path from the start vertex to the terminating vertex. A run may visit a node more than once. The length of a traversal is the total number of vertices traversed by the runs, with multiplicity; in other words, the length of a traversal is the number of runs plus the sum of the lengths of the runs.</p>&#xA;&#xA;<p>If there are edges that are not reachable (i.e. the origin of the edge is not reachable from the start vertex, or the terminating vertex is not reachable from the target of the edge), they are ignored.</p>&#xA;&#xA;<p>To illustrate my needs, I give a simple graph and the result I would like the algorithm to produce (start vertex $1$, terminating vertex $4$):</p>&#xA;&#xA;<p>Graph edges:</p>&#xA;&#xA;<ul>&#xA;<li>$1 \to 2,3$</li>&#xA;<li>$2 \to 1,3,4$</li>&#xA;<li>$3 \to 4$</li>&#xA;</ul>&#xA;&#xA;<p>Result:</p>&#xA;&#xA;<ul>&#xA;<li>Run A: $1, 2, 1, 3, 4$</li>&#xA;<li>Run B: $1, 2, 4$</li>&#xA;<li>Run C: $1, 2, 3, 4$</li>&#xA;</ul>&#xA;&#xA;<p>Each edge has been covered. Each run begins with vertex $1$ and ends with vertex $4$. The minimum total number of visited vertices is sought. In the given example, the minimum number is $5+3+4=12$. There is no unreachable edge in this example.</p>&#xA;
algorithms graphs
0
1,700
What is meant by interrupts in the context of operating systems?
<p>I've decided to read <a href="http://rads.stackoverflow.com/amzn/click/0470128720" rel="noreferrer">Operating Systems Concepts</a> by Silberschatz, Galvin Gagne (8th edition) over the summer. I've gotten to a topic that's confusing me - interrupts and their role as it relates to operating systems. </p>&#xA;&#xA;<p>The text says that an operating system will begin a first process such as "init" and then wait for an "event" to occur and this event is usually signaled by an interrupt. The text also says that the interrupt can come from either the hardware or the software. How does this work, in a little more detail? Is the operating system driven by interrupts? </p>&#xA;&#xA;<p>I am just looking for some big picture understanding. </p>&#xA;
operating systems computer architecture process scheduling
1
1,706
Why is this example a regular language?
<p>Consider this example (taken from this document: <a href="http://www.cs.nott.ac.uk/~txa/g51mal/notes-3x.pdf" rel="nofollow">Showing that language is not regular</a>):</p>&#xA;&#xA;<p>$$L = \{1^n \mid n\text{ is even}\} $$</p>&#xA;&#xA;<p>According to the Pumping Lemma, a language $L$ is regular if:</p>&#xA;&#xA;<ul>&#xA;<li>$y \ne \varepsilon$</li>&#xA;<li>$|xy| \lt n$</li>&#xA;<li>$\forall k \in N, xy^kz \in L$</li>&#xA;</ul>&#xA;&#xA;<p>In the above example, $n$ must be even. Suppose we have $n = 4$, we can express: $$xy^kz$$ such that: $x = 1$, $z = 1$, and with $k = 2$, we have $y^k = y^2 = 11$, so we get the string $1111$. However, since all $k$ must be satisfied, if $k = 1$, the string is $111$, it does not belong to $L$. Yet, I was told that the above example is a regular language. How can it be?</p>&#xA;
formal languages regular languages proof techniques
1
1,710
Organisation and Architecture of Quantum Computers
<p>What are the devices and interconnections used along with quantum processors? Are they compatible with hardware devices like the caches, RAM, and disks of current computers?</p>&#xA;
computer architecture quantum computing
1
1,712
Randomized String Searching
<p>I need to detect whether a binary pattern $P$ of length $m$ occurs in a binary text $T$ of length $n$ where $m &lt; n$.</p>&#xA;&#xA;<p>I want to state an algorithm that runs in time $O(n)$ where we assume that arithmetic operations on $O(\log_2 n)$ bit numbers can be executed in constant time. The algorithm should accept with probability $1$ whenever $P$ is a substring of $T$ and reject with probability of at least $1 - \frac{1}{n}$ otherwise.</p>&#xA;&#xA;<p>I think fingerprinting could help here. But I can't get it.</p>&#xA;
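Fingerprinting is indeed the intended tool; the classic algorithm here is Karp-Rabin. A toy sketch (editor's illustration: it hard-codes a single prime, whereas the stated $1 - 1/n$ bound requires drawing the modulus at random from the primes with $\Theta(\log n)$ bits, as in the fingerprinting lemma of the companion question):

```python
def karp_rabin(pattern, text, p=1_000_000_007, base=2):
    """Report whether binary string `pattern` occurs in `text`.

    Compares rolling fingerprints hash(s) = sum s[i]*base^(m-1-i) mod p.
    Each shift updates the fingerprint in O(1), so the scan is
    O(len(text)) overall, assuming constant-time arithmetic mod p.
    """
    m, n = len(pattern), len(text)
    if m > n:
        return False
    hp = ht = 0
    for i in range(m):
        hp = (hp * base + int(pattern[i])) % p
        ht = (ht * base + int(text[i])) % p
    top = pow(base, m - 1, p)  # weight of the bit leaving the window
    for i in range(m, n + 1):
        if hp == ht:
            return True  # fingerprints agree (true match, or a rare collision)
        if i < n:
            ht = ((ht - int(text[i - m]) * top) * base + int(text[i])) % p
    return False
```

With a fixed prime the `True` branch can be a false positive in principle; choosing the prime at random per run is what drives the error probability below $1/n$.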
algorithms strings searching probabilistic algorithms
1
1,713
Expected space consumption of skip lists
<p>What is the expected space used by the skip list after inserting $n$ elements?</p>&#xA;&#xA;<p>I expect that in the worst case the space consumption may grow indefinitely.</p>&#xA;&#xA;<p><a href="http://en.wikipedia.org/wiki/Skip_list" rel="nofollow">Wikipedia</a> says space $O(n)$.</p>&#xA;&#xA;<p>How can this be proven one way or another?</p>&#xA;
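For the standard skip list with promotion probability $1/2$, a hedged sketch of the expectation argument: each of the $n$ elements appears at level $k$ with probability $2^{-k}$, so by linearity of expectation the total number of nodes over all levels is

```latex
\mathbb{E}[\text{nodes}] \;=\; \sum_{k=0}^{\infty} n \cdot 2^{-k} \;=\; 2n \;=\; O(n).
```

The worst case is indeed unbounded, since an element may be promoted arbitrarily often, but a tower of height $h$ occurs only with probability $2^{-h}$, so the expected space stays linear.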
data structures space complexity
0
1,726
how do you prove that SAT is NP-complete?
<p>As it is, how do you prove that SAT is NP-complete?</p>&#xA;&#xA;<p>I know what NP-complete means, so I do not need an explanation of that.</p>&#xA;&#xA;<p>What I want to know is how do you know that one problem, such as SAT, is NP-complete without resorting to reductions from other problems, such as the Hamiltonian cycle problem or whatever.</p>&#xA;
complexity theory satisfiability
1
1,727
How will broadcast behave with a certain capacity?
<p>I wanted to confirm something and will appreciate your help. Suppose we have three nodes called A, B and C. All are connected to a switch whose ports support 1 Gbps. Now suppose Node A's network card is 100 Mbps while the remaining ones are 1 Gbps. The constraints are:</p>&#xA;&#xA;<ol>&#xA;<li>A can send to B only with a maximum of 100 Mbps.</li>&#xA;<li>A can send to C only with a maximum of 80 Mbps.</li>&#xA;</ol>&#xA;&#xA;<p>Now if I were to broadcast a 2 GB file:</p>&#xA;&#xA;<ol>&#xA;<li>It would reach B after approx 2.73 minutes.</li>&#xA;<li>It would reach C after approx 3.41 minutes.</li>&#xA;</ol>&#xA;&#xA;<p>Now even if I replace node A's network card with a 1 Gbps one, with the same constraints, I would still get the same results. Have I got it right? </p>&#xA;
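The arithmetic behind those figures can be checked with a one-liner (editor's sketch; it assumes 2 GB = 2048 MB and that the stated per-receiver limits, not the sender's card, are the bottleneck):

```python
def transfer_minutes(size_mb, rate_mbps):
    """Minutes to push size_mb megabytes through an effective link
    rate of rate_mbps, assuming that rate is the only bottleneck."""
    return size_mb * 8 / rate_mbps / 60  # MB -> megabits, seconds -> minutes
```

`transfer_minutes(2048, 100)` gives about 2.73 and `transfer_minutes(2048, 80)` about 3.41, matching the question. Upgrading A's card changes nothing as long as the effective 100 and 80 Mbps limits still hold, since the transfer time depends only on the effective rate.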
computer networks
0
1,729
What techniques exist for energy-efficient computing and networking?
<p>I am currently reviewing the potential of cloud computing regarding energy efficiency and green IT. In connection with this review I am having a look at techniques for increasing energy-efficiency in data centers (computing), hardware, networking and storage devices.</p>&#xA;&#xA;<p>Specifically for computing/servers I have already found a few:</p>&#xA;&#xA;<ul>&#xA;<li>energy-aware scheduling techniques utilizing frequency and voltage scaling </li>&#xA;<li>virtualization to consolidate server resources</li>&#xA;<li>energy-saving hardware, e.g. ACPI, several processor techniques, especially for mobile devices etc.</li>&#xA;</ul>&#xA;&#xA;<p>However, for networking devices it is rather hard to get information about energy-saving technologies. I have read that people are thinking about new protocols and alternative routing methods to be able to switch off hardware if the network is under low load. Does anyone know of such examples? </p>&#xA;&#xA;<p>Which other points should be added, either for networking or computing?</p>&#xA;
reference request computer architecture operating systems computer networks power consumption
0
1,731
Proving that recursively enumerable languages are closed against taking prefixes
<p>Define $\mathrm{Prefix} (L) = \{x\mid \exists y .xy \in L \}$. I'd love your help with proving that $\mathsf{RE}$ languages are closed under $\mathrm{Prefix}$.</p>&#xA;&#xA;<p>I know that recursively enumerable languages are formal languages for which there exists a Turing machine that will halt and accept when presented with any string in the language as input, but may either halt and reject or loop forever when presented with a string not in the language.</p>&#xA;&#xA;<p>Any hints on how I should approach this kind of proof?</p>&#xA;
formal languages turing machines closure properties
1
1,739
Hashing using search trees instead of lists
<p>I am struggling with hashing and binary search tree material. I read that instead of using lists for storing entries with the same hash values, it is also possible to use binary search trees, and I am trying to understand the worst-case and average-case running times of the operations</p>&#xA;&#xA;<ol>&#xA;<li><code>insert</code>, </li>&#xA;<li><code>find</code> and</li>&#xA;<li><code>delete</code>.</li>&#xA;</ol>&#xA;&#xA;<p>Do they improve with respect to lists?</p>&#xA;
data structures time complexity runtime analysis search trees hash tables
0
1,740
DFA with limited states
<p>Let $L_z := \{ a^i b^i c^i : 0 \leq i &lt; z \}$ be a language over the alphabet $\Sigma = \{a,b,c\}$.</p>&#xA;&#xA;<p>There is a DFA with $\frac{z(z+1)}{2}+1$ states accepting $L_z$; how can I prove this?</p>&#xA;&#xA;<p>I also need the largest possible number $n_z$ for which I can prove that every NFA accepting $L_z$ has at least $n_z$ states.</p>&#xA;&#xA;<p>But first I need to show that $n_z = \frac{z(z+1)}{2}$, right?</p>&#xA;
formal languages regular languages automata finite automata
1
1,745
Recursion for runtime of divide and conquer algorithms
<p>A divide and conquer algorithm's work at a specific level can be simplified into the equation:</p>&#xA;&#xA;<p>$\qquad \displaystyle O\left(n^d\right) \cdot \left(\frac{a}{b^d}\right)^k$</p>&#xA;&#xA;<p>where $n$ is the size of the problem, $a$ is the number of sub problems, $b$ is the factor the size of the problem is broken down by at each recursion, $k$ is the level, and $d$ is the exponent for Big O notation (linear, exponential etc.).</p>&#xA;&#xA;<p>The book claims if the ratio is greater than one the sum of work is given by the last term on the last level, but if it is less than one the sum of work is given by the first term of the first level. Could someone explain why this is true?</p>&#xA;
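The claim is the behaviour of a geometric series in the ratio $r = a/b^d$, summed over the levels $k = 0, \ldots, \log_b n$:

```latex
\sum_{k=0}^{\log_b n} O\!\left(n^d\right) r^k
  \;=\; O\!\left(n^d\right) \cdot \frac{r^{\log_b n + 1} - 1}{r - 1}.
```

If $r < 1$ the series is bounded by the constant $\frac{1}{1-r}$ times its first term, so the first (top) level dominates and the total is $O(n^d)$. If $r > 1$ the series is within the constant factor $\frac{r}{r-1}$ of its largest, last term, $O(n^d)\, r^{\log_b n} = O(n^{\log_b a})$, so the bottom level dominates. In both cases the whole sum collapses to a single level up to a constant factor, which is exactly the book's claim.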
algorithm analysis asymptotics runtime analysis recursion mathematical analysis
1
1,748
Array median transformation using the minimum number of steps
<p>Let $A[1...N]$ be an array of size $N$ with maximum element $\max$.</p>&#xA;&#xA;<p>I want to transform array $A$ such that after transformations all elements of $A$ contain $\max$, i.e. after transformation $A = [\max,\max,\max,\max,\dots,\max]$.</p>&#xA;&#xA;<p>In one step, I can apply the following operation to any consecutive sub-array $A[x..y]$:</p>&#xA;&#xA;<blockquote>&#xA; <p>Assign to all $A[i]$ with $x \leq i \leq y$ the <a href="https://en.wikipedia.org/wiki/Median#The_sample_median" rel="nofollow">median</a> of subarray $A[x..y]$.</p>&#xA;</blockquote>&#xA;&#xA;<p>We consider as <em>median</em> always the $\left\lceil \frac{n+1}{2} \right\rceil$-th element in an increasingly sorted version of $A$.</p>&#xA;&#xA;<p>What is the minimum number of steps needed to transform $A$ as desired? If it helps, assume that $N\leq 30$.</p>&#xA;&#xA;<hr>&#xA;&#xA;<p><strong>Example 1:</strong></p>&#xA;&#xA;<p>Let $A = [1, 2, 3]$. We need to change it to $[3, 3, 3]$. The minimum number of steps is two: first apply the operation to subarray $A[2..3]$ (after which $A$ equals $[1, 3, 3]$), then to $A[1..3]$.</p>&#xA;&#xA;<p><strong>Example 2:</strong></p>&#xA;&#xA;<p>$A = [2,1,1,2]$. The minimum number of steps is one. The median of subarray $A[1..4]$ is $2$ (the 3rd element of $[1,1,2,2]$). Apply the operation to $A[1..4]$ once and we get $[2,2,2,2]$.</p>&#xA;
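To make the operation concrete, here is a small simulation of a single step (an editor's illustration using 0-based indices), which replays both examples:

```python
def apply_median_op(a, x, y):
    """Replace a[x..y] (0-based, inclusive) by the median of that
    subarray, where the median of n elements is the
    ceil((n+1)/2)-th smallest, as defined in the question."""
    sub = sorted(a[x:y + 1])
    med = sub[(len(sub) + 2) // 2 - 1]  # ceil((n+1)/2)-th element, 0-based
    return a[:x] + [med] * (y - x + 1) + a[y + 1:]
```

Replaying Example 1: `apply_median_op([1, 2, 3], 1, 2)` gives `[1, 3, 3]`, and applying the operation to the whole result gives `[3, 3, 3]`. For Example 2, the whole-array operation on `[2, 1, 1, 2]` already yields `[2, 2, 2, 2]` in a single step.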
algorithms arrays
0
1,749
Dijkstra's algorithm applied to travelling salesman problem
<p>I am a novice (a total newbie to computational complexity theory) and I have a question.</p>&#xA;&#xA;<p>Let's say we have the Traveling Salesman Problem; will the following application of Dijkstra's algorithm solve it?</p>&#xA;&#xA;<p>From a start point we compute the shortest distance between two points. We go to that point. We delete the source point. Then we compute the next shortest-distance point from the current point, and so on...</p>&#xA;&#xA;<p>Every step we make the graph smaller while we move to the next available shortest-distance point, until we have visited all the points.</p>&#xA;&#xA;<p>Will this solve the traveling salesman problem?</p>&#xA;
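What the question describes is the nearest-neighbour heuristic rather than Dijkstra's algorithm, and it does not solve TSP in general. A small sketch (editor's illustration) comparing it with brute-force enumeration on a 4-city instance where the greedy tour is badly suboptimal:

```python
from itertools import permutations

def tour_cost(dist, tour):
    """Cost of visiting tour in order and returning to the start."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def nearest_neighbour(dist, start=0):
    """Greedy: repeatedly move to the closest unvisited city."""
    tour = [start]
    unvisited = set(range(len(dist))) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[tour[-1]][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def brute_force(dist):
    """Optimal tour by enumeration (fine only for tiny instances)."""
    n = len(dist)
    return min((list((0,) + p) for p in permutations(range(1, n))),
               key=lambda t: tour_cost(dist, t))

# A 4-city instance where the greedy tour is far from optimal:
DIST = [[0,   1,   2, 100],
        [1,   0,   1,   2],
        [2,   1,   0,   1],
        [100, 2,   1,   0]]
```

On `DIST` the greedy tour `[0, 1, 2, 3]` costs 103 (it walks into the 100-cost edge), while the optimal tour `[0, 1, 3, 2]` costs 6; making the long edge longer makes the ratio arbitrarily bad.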
algorithms graphs
0
1,752
Optimizing order of graph reduction to minimize memory usage
<p>Having extracted the data-flow in some rather large programs as directed, acyclic graphs, I'd now like to optimize the order of evaluation to minimze the maximum amount of memory used.</p>&#xA;&#xA;<p>That is, given a graph {1 -> 3, 2 -> 3, 4 -> 5, 3 -> 5}, I'm looking for an algorithm that will decide the order of graph reduction to minimize the number of 'in-progress' nodes, in this particular case to decide that it should be reduced in the order 1-2-3-4-5; avoiding the alternative ordering, in this case 4-1-2-3-5, which would leave the output from node 4 hanging until 3 is also complete.</p>&#xA;&#xA;<p>Naturally, if there are two nodes using the output from a third, then it only counts once; data is not copied unnecessarily, though it does hang around until both of those nodes are reduced.</p>&#xA;&#xA;<p>I would also quite like to know what this problem is called, if it has a name. It looks similar to the graph bandwidth problem, only not quite; the problem statement may be defined in terms of path/treewidth, but I can't quite tell, and am unsure if I should prioritize learning that branch of graph theory right now.</p>&#xA;
algorithms graphs optimization software engineering program optimization
0
1,753
Non-regular Languages?
<blockquote>&#xA; <p><strong>Possible Duplicate:</strong><br>&#xA; <a href="https://cs.stackexchange.com/questions/1031/how-to-prove-that-a-language-is-not-regular">How to prove that a language is not regular?</a> </p>&#xA;</blockquote>&#xA;&#xA;&#xA;&#xA;<p>Why are $L_a$ and $L_b$ not regular?</p>&#xA;&#xA;<p>$L_a = \{ e^i f^{n-i} g^j h^{n-j} : n \in N, 1 \leq i, j \leq n \}$. </p>&#xA;&#xA;<p>$L_b = \{ nm^{i_1} nm^{i_2} \cdots nm^{i_z} : z \in N, (i_1,\ldots,i_z) \in N^z, \forall\, 1 \leq j \leq z,\ i_j \neq j \}$.</p>&#xA;
formal languages regular languages finite automata
1
1,754
Explain $\log_2(n)$ squared asymptotic run-time for naive nested parallel CREW PRAM mergesort
<p>On page 1 of <a href="http://www.inf.ed.ac.uk/teaching/courses/dapa/note3.pdf" rel="nofollow">these lecture notes</a> it is stated in the final paragraph of the section titled CREW Mergesort:</p>&#xA;&#xA;<blockquote>&#xA; <p>Each such step (in a sequence of $\Theta(\log_2\ n)$ steps) takes&#xA; time $\Theta(\log_2\ s)$ with a sequence length of $s$. Summing these, we&#xA; obtain an overall run time of $\Theta((\log_2\ n)^2)$ for $n$&#xA; processors, which is not quite (but almost!) cost-optimal.</p>&#xA;</blockquote>&#xA;&#xA;<p>Can anyone show explicitly how the sum mentioned is calculated and the squared log result arrived at?</p>&#xA;
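A hedged reconstruction of the sum: at step $k$ (for $k = 1, \ldots, \log_2 n$) the sequences being merged have length $s = 2^{k-1}$, so the step costs $\Theta(\log_2 s) = \Theta(k)$, and

```latex
\sum_{k=1}^{\log_2 n} \Theta(k)
  \;=\; \Theta\!\left( \frac{\log_2 n \,(\log_2 n + 1)}{2} \right)
  \;=\; \Theta\!\left( (\log_2 n)^2 \right).
```

The cost remark follows: $n$ processors times $\Theta((\log_2 n)^2)$ time is $\Theta(n (\log_2 n)^2)$ total work, a $\log_2 n$ factor above the sequential $\Theta(n \log_2 n)$ bound.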
algorithms complexity theory parallel computing
1
1,758
How many strings are close to a given set of strings?
<p>This question has been prompted by <a href="https://cs.stackexchange.com/questions/1626/efficient-data-structures-for-building-a-fast-spell-checker">Efficient data structures for building a fast spell checker</a>.</p>&#xA;<p>Given two strings <span class="math-container">$u,v$</span>, we say they are <em><span class="math-container">$k$</span>-close</em> if their <a href="http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance" rel="noreferrer">Damerau–Levenshtein distance</a>¹ is small, i.e. <span class="math-container">$\operatorname{LD}(u,v) \leq k$</span> for a fixed <span class="math-container">$k \in \mathbb{N}$</span>. Informally, <span class="math-container">$\operatorname{LD}(u,v)$</span> is the minimum number of deletion, insertion, substitution and (neighbour) swap operations needed to transform <span class="math-container">$u$</span> into <span class="math-container">$v$</span>. It can be computed in <span class="math-container">$\Theta(|u|\cdot|v|)$</span> by dynamic programming. Note that <span class="math-container">$\operatorname{LD}$</span> is a <a href="http://en.wikipedia.org/wiki/Metric_%28mathematics%29" rel="noreferrer">metric</a>, that is in particular symmetric.</p>&#xA;<p>The question of interest is:</p>&#xA;<blockquote>&#xA;<p>Given a set <span class="math-container">$S$</span> of <span class="math-container">$n$</span> strings over <span class="math-container">$\Sigma$</span> with lengths at most <span class="math-container">$m$</span>, what is the cardinality of</p>&#xA;<p><span class="math-container">$\qquad \displaystyle S_k := \{ w \in \Sigma^* \mid \exists v \in S.\ \operatorname{LD}(v,w) \leq k \}$</span>?</p>&#xA;</blockquote>&#xA;<p>As even two strings of the same length have different numbers of <span class="math-container">$k$</span>-close strings², a general formula/approach may be hard (impossible?) to find. 
Therefore, we might have to compute the number explicitly for every given <span class="math-container">$S$</span>, leading us to the main question:</p>&#xA;<blockquote>&#xA;<p>What is the (time) complexity of finding the cardinality of the set <span class="math-container">$\{w\}_k$</span> for (arbitrary) <span class="math-container">$w \in \Sigma^*$</span>?</p>&#xA;</blockquote>&#xA;<p>Note that the desired quantity is exponential in <span class="math-container">$|w|$</span>, so explicit enumeration is not desirable. An efficient algorithm would be great.</p>&#xA;<p>If it helps, it can be assumed that we have indeed a (large) set <span class="math-container">$S$</span> of strings, that is we solve the first highlighted question.</p>&#xA;<hr />&#xA;<ol>&#xA;<li>Possible variants include using the <a href="http://en.wikipedia.org/wiki/Levenshtein_distance" rel="noreferrer">Levenshtein distance</a> instead.</li>&#xA;<li>Consider <span class="math-container">$aa$</span> and <span class="math-container">$ab$</span>. The sets of <span class="math-container">$1$</span>-close strings over <span class="math-container">$\{a,b\}$</span> are <span class="math-container">$\{ a, aa,ab,ba,aaa,baa,aba,aab \}$</span> (8 words) and <span class="math-container">$\{a,b,aa,bb,ab,ba,aab,bab,abb,aba\}$</span> (10 words), respectively.</li>&#xA;</ol>&#xA;
algorithms time complexity strings word combinatorics string metrics
0
1,762
Counting trees (order matters)
<p>As a follow-up to this <a href="https://cs.stackexchange.com/questions/368/counting-binary-trees">question</a> (the number of rooted binary trees of size n), how many possible binary trees can you have if the nodes are now labeled, so that abc is different from bac, cab, etc.? In other words, order matters. Certainly it will be much more than the Catalan number.</p>&#xA;&#xA;<p>What would the problem be if you have n-ary trees instead of binary?</p>&#xA;&#xA;<p>Are these known problems? Any references?</p>&#xA;
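If "labeled" means assigning $n$ distinct labels to the nodes of each shape, the binary count follows directly from the linked question: there are $C_n$ shapes (the Catalan numbers) and $n!$ ways to label each, giving $n! \cdot C_n$ labeled binary trees. A quick sketch (editor's illustration):

```python
from math import comb, factorial

def catalan(n):
    """n-th Catalan number: the number of binary tree shapes on n nodes."""
    return comb(2 * n, n) // (n + 1)

def labeled_binary_trees(n):
    """Binary trees on n distinctly labeled nodes: shapes times labelings."""
    return factorial(n) * catalan(n)
```

For instance, `labeled_binary_trees(3)` is 6 * 5 = 30. The same shapes-times-labelings argument applies to any ordered-tree family whose shapes you can count; for m-ary trees the Catalan factor is replaced by the corresponding Fuss-Catalan number.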
binary trees combinatorics trees
1
1,771
Why is a regular language called 'regular'?
<p>I have just completed the first chapter of the <a href="http://www-math.mit.edu/~sipser/book.html"><em>Introduction to the Theory of Computation</em></a> by <em>Michael Sipser</em>, which explains the basics of finite automata. </p>&#xA;&#xA;<p>He defines a regular language as anything that can be described by a finite automaton. But I could not find where he explains why a regular language is called "regular". What is the origin of the term "regular" in this context?</p>&#xA;&#xA;<p>NOTE: I am a novice so please try to explain in simple terms!</p>&#xA;
formal languages regular languages terminology finite automata history
1
1,773
Recommendation algorithms based on a set of attributes
<p>I'm building an application which should suggest products to its users. I want to base my recommendations on different attributes, like location, weather, date, etc. Each of these attributes can have multiple values, so the feature space I need to consider is huge.&#xA;I was thinking about two approaches to solve this problem.&#xA;Firstly, using decision trees: I create tables with example decisions, e.g. </p>&#xA;&#xA;<pre><code>sunny; hot; France; summer; choose xyz&#xA;overcast, warm, Italy, spring, choose abc&#xA;</code></pre>&#xA;&#xA;<p>Based on this data I could learn the decision tree and use it in my application.</p>&#xA;&#xA;<p>Secondly, I could tag every recommendation item with the possible attributes to which it applies. For example:</p>&#xA;&#xA;<pre><code>xyz: {sunny} {hot} {France, Spain} {spring, summer}&#xA;abc: {overcast, raining} {cold, warm} {Italy} {spring, summer}&#xA;</code></pre>&#xA;&#xA;<p>Then, based on the actual values of the attributes from the user I could infer an item to recommend.</p>&#xA;&#xA;<p>The second option looks better to me, as it only requires describing the recommendation items, while the first approach requires describing many situations which might occur for the decision tree to be of high quality. Unfortunately, I don't know any algorithm for the second solution.</p>&#xA;&#xA;<p>Which approach would you use? If the second one, then what are the possible algorithms to have a look at?</p>&#xA;
artificial intelligence recommendation systems
0
1,774
How do I classify my emulator input optimization problem, and with which algorithm should I approach it?
<p>Due to the nature of the question, I have to include lots of background information (because my question is: how do I narrow this down?) That said, it can be summarized (to the best of my knowledge) as:</p>&#xA;&#xA;<p><strong>What methods exist to find local optimums on extremely large combinatorial search spaces?</strong></p>&#xA;&#xA;<h2>Background</h2>&#xA;&#xA;<p>In the tool-assisted superplay community we look to provide specially-crafted (not generated in real-time) input to a video game console or emulator in order to minimize some cost (usually time-to-completion). The way this is currently done is by playing the game frame-by-frame and specifying the input for each frame, often redoing parts of the run many times (for example, the <a href="http://tasvideos.org/2020M.html">recently published</a> run for <em>The Legend of Zelda: Ocarina of Time</em> has a total of 198,590 retries).</p>&#xA;&#xA;<p><strong>Making these runs obtain their goal usually comes down to two main factors: route-planning and traversal.</strong> The former is much more "creative" than the latter.</p>&#xA;&#xA;<p>Route-planning is determining which way the player should navigate overall to complete the game, and is often the most important part of the run. This is analogous to choosing which sorting method to use, for example. The best bubble sort in the world simply isn't going to outperform a quick-sort on 1 million elements.</p>&#xA;&#xA;<p>In the desire for perfection, however, traversal (how the route is carried out) is also a huge factor. Continuing the analogy, this is how the sorting algorithm is implemented. Some routes can't even be performed without very specific frames of input. This is the most tedious process of tool-assisting and is what makes the production of a completed run take months or even years. 
It's not a <em>difficult</em> process (to a human) because it comes down to trying different variations of the same idea until one is deemed best, but humans can only try so many variations in their attention-span. The application of machines to this task seems proper here.</p>&#xA;&#xA;<p><strong>My goal now is to try to automate the traversal process in general for the Nintendo 64 system</strong>. The search space for this problem is <em>far</em> too large to attack with a brute-force approach. An n-frame segment of an N64 run has 2<sup>30n</sup> possible inputs, meaning a mere 30 frames of input (a second at 30FPS) has 2<sup>900</sup> possible inputs; it would be impossible to test these potential solutions, let alone those for a full two-hour run.</p>&#xA;&#xA;<p>However, I'm not interested in attempting (or rather, am not going to even try to attempt) total global optimization of a full run. Rather, <strong>I would like to, given an initial input, approximate the <em>local</em> optimum for a particular <em>segment</em> of a run (or the nearest <em>n</em> local optimums, for a sort of semi-global optimization)</strong>. That is, given a route and an initial traversal of that route: search the neighbors of that traversal to minimize cost, but don't degenerate into trying all the cases that could solve the problem.</p>&#xA;&#xA;<p>My program should therefore take a starting state, an input stream, an evaluation function, and output the local optimum by minimizing the result of the evaluation.</p>&#xA;&#xA;<h2>Current State</h2>&#xA;&#xA;<p>Currently I have all the framework taken care of. This includes evaluating an input stream via manipulation of the emulator, setup and teardown, configuration, etc. And as a placeholder of sorts, the optimizer is a very basic genetic algorithm. It simply evaluates a population of input streams, stores/replaces the winner, and generates a new population by mutating the winner stream. 
This process continues until some arbitrary criterion is met, like time or generation number.</p>&#xA;&#xA;<p><strong>Note that the slowest part of this program will be, by far, the evaluation of an input stream</strong>. This is because this involves emulating the game for <em>n</em> frames. (If I had the time I'd write my own emulator that provided hooks into this kind of stuff, but for now I'm left with synthesizing messages and modifying memory for an existing emulator from another process.) On my main computer, which is fairly modern, evaluating 200 frames takes roughly 14 seconds. As such, I'd prefer an algorithm (given the choice) that minimizes the number of function evaluations.</p>&#xA;&#xA;<p>I've created a system in the framework that manages emulators concurrently. As such <strong>I can evaluate a number of streams at once</strong> with a linear performance scale, but practically speaking the number of running emulators can only be 8 to 32 (and 32 is really pushing it) before system performance deteriorates. This means (given the choice), an algorithm which can do processing while an evaluation is taking place would be highly beneficial, because the optimizer can do some heavy-lifting while it waits on an evaluation.</p>&#xA;&#xA;<p>As a test, my evaluation function (for the game <em>Banjo Kazooie</em>) was to sum, per frame, the distance from the player to a goal point. This meant the optimal solution was to get as close to that point as quickly as possible. Limiting mutation to the analog stick only, it took a day to get an <em>okay</em> solution. (This was before I implemented concurrency.)</p>&#xA;&#xA;<p>After adding concurrency, I enabled mutation of A button presses and did the same evaluation function at an area that required jumping. 
With 24 emulators running it took roughly 1 hour to reach the goal from an initially blank input stream, but would probably need to run for days to get to anything close to optimal.</p>&#xA;&#xA;<h2>Problem</h2>&#xA;&#xA;<p><strong>The issue I'm facing is that I don't know enough about the mathematical optimization field to know how to properly model my optimization problem</strong>! I can roughly follow the conceptual idea of many algorithms as described on Wikipedia, for example, but I don't know how to categorize my problem or select the state-of-the-art algorithm for that category.</p>&#xA;&#xA;<p><strong>From what I can tell, I have a combinatorial problem with an extremely large neighborhood</strong>. On top of that, <strong>the evaluation function is extremely discontinuous, has no gradient, and has many plateaus</strong>. Also, there aren't many constraints, though I'll gladly add the ability to express them if it helps solve the problem; I would like to allow specifying that the Start button should not be used, for example, but this is not the general case.</p>&#xA;&#xA;<h2>Question</h2>&#xA;&#xA;<p><strong>So my question is: how do I model this? What kind of optimization problem am I trying to solve? Which algorithm am I supposed to use?</strong> I'm not afraid of reading research papers so let me know what I should read!</p>&#xA;&#xA;<p>Intuitively, a genetic algorithm couldn't be the best, because it doesn't really seem to learn. For example, if pressing Start seems to <em>always</em> make the evaluation worse (because it pauses the game), there should be some sort of designer or brain that learns: "pressing Start at any point is useless." But even this goal isn't as trivial as it sounds, because sometimes pressing Start <em>is</em> optimal, such as in so-called "pause backward-long-jumps" in <em>Super Mario 64</em>! 
Here the brain would have to learn a much more complex pattern: "pressing Start is useless except when the player is in this very specific state <em>and will continue with some combination of button presses</em>." </p>&#xA;&#xA;<p>It seems like I should (or the machine could learn to) represent input in some other fashion more suited to modification. Per-frame input seems too granular, because what's really needed are "actions", which may span several frames...yet many discoveries are made on a frame-by-frame basis, so I can't totally rule it out (the aforementioned pause backward-long-jump requires frame-level precision). It also seems like the fact that input is processed serially should be something that can be capitalized on, but I'm not sure how.</p>&#xA;&#xA;<p><strong>Currently I'm reading about (Reactive) Tabu Search, Very Large-scale Neighborhood Search, Teaching-learning-based Optimization, and Ant Colony Optimization.</strong></p>&#xA;&#xA;<p>Is this problem simply too hard to tackle with anything other than random genetic algorithms? Or is it actually a trivial problem that was solved long ago? Thanks for reading and thanks in advance for any responses.</p>&#xA;
reference request machine learning combinatorics optimization search problem
1
1,778
Regular expression for all strings with at least two 0s over alphabet {0,1}
<p>My answer : (0+1)* 0 (0+1)* 0 (0+1)*</p>&#xA;&#xA;<p>Why is this incorrect? Can somebody explain to me what the correct answer is and why?</p>&#xA;
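For what it's worth, the proposed expression does seem to denote exactly the strings with at least two 0s (reading <code>+</code> as union). A quick empirical check with the equivalent Python pattern:

```python
import re

# Python equivalent of (0+1)* 0 (0+1)* 0 (0+1)*, reading + as union:
pattern = re.compile(r"[01]*0[01]*0[01]*")

def has_two_zeros(w):
    return pattern.fullmatch(w) is not None

print(has_two_zeros("1010"))  # True: two 0s
print(has_two_zeros("01"))    # False: only one 0
```

If an exercise marked this wrong, the issue is more likely notational (e.g. <code>+</code> vs <code>|</code>, or the stray spaces) than semantic.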
formal languages regular languages regular expressions
0
1,779
$L(M) = L$ where $M$ is a $TM$ that moves only to the right side so $L$ is regular
<p>Suppose that $L(M) = L$ where $M$ is a TM that only moves to the right.</p>&#xA;&#xA;<p>I need to show that $L$ is regular.</p>&#xA;&#xA;<p>I'd really like some help; I tried to think of a way to prove it but didn't reach any useful conclusion. What is it about right-only moves that forces regularity? </p>&#xA;
formal languages computability turing machines regular languages computation models
1
1,780
Are the functions always asymptotically comparable?
<p>When we compare the complexity of two algorithms, it is usually the case that either $f(n) = O(g(n))$ or $g(n) = O(f(n))$ (possibly both), where $f$ and $g$ are the running times (for example) of the two algorithms.</p>&#xA;&#xA;<p>Is this always the case? That is, does at least one of the relationships $f(n) = O(g(n))$ and $g(n) = O(f(n))$ always hold, that is for general functions $f$,$g$? If not, which assumptions do we have to make, and (why) is it ok when we talk about algorithm running times?</p>&#xA;
asymptotics mathematical analysis
0
1,785
Turn one string into another with single-letter substitutions
<p>I want to turn one string into another using only single-letter substitutions. What is a good way to do this, passing through only valid words in between (<a href="http://www.wuzzlesandpuzzles.com/wordchange/" rel="nofollow">this</a> website has some examples)?</p>&#xA;&#xA;<p>Valid here means "a word in English", as this is the domain I consider.</p>&#xA;&#xA;<p>My current idea is that I could use a shortest path algorithm with the Hamming distance for edge weights. The problem is that it will take a long time to build the graph, and even then the weight is not so precise in terms of distance (though it will never underestimate it) unless the weight is one, so I would probably have to find a way to build a graph that only has weights of one.</p>&#xA;&#xA;<p>What would be the easiest way to build the graph? Am I taking entirely the wrong approach?</p>&#xA;
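One way to sidestep building the full Hamming-distance graph is to bucket words by wildcard patterns, which yields exactly the weight-one edges on the fly, and then run a plain BFS. A sketch (the word list here is made up for illustration):

```python
from collections import defaultdict, deque

def word_ladder(start, goal, words):
    """Shortest chain from start to goal, changing one letter per step
    and passing only through words in `words` (BFS, so all edges weigh 1)."""
    words = set(words) | {start, goal}
    # Bucket words by wildcard patterns: "c_t" holds "cat", "cot", ...
    # Two words share a bucket iff they differ in exactly that position.
    buckets = defaultdict(list)
    for w in words:
        for i in range(len(w)):
            buckets[w[:i] + "_" + w[i + 1:]].append(w)
    prev = {start: None}
    queue = deque([start])
    while queue:
        w = queue.popleft()
        if w == goal:  # reconstruct the chain
            chain = []
            while w is not None:
                chain.append(w)
                w = prev[w]
            return chain[::-1]
        for i in range(len(w)):
            for nxt in buckets[w[:i] + "_" + w[i + 1:]]:
                if nxt not in prev:
                    prev[nxt] = w
                    queue.append(nxt)
    return None

# Made-up dictionary, purely for illustration.
print(word_ladder("cold", "warm", ["cord", "card", "ward", "word", "worm"]))
```

Building the buckets is linear in the total number of letters, so the graph never has to be materialized explicitly.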
algorithms strings string metrics
0
1,789
Quicksort to find median?
<p>Why is the worst-case scenario $\mathcal{O}\left(n^2\right)$ when using quicksort to find the median of a set of numbers?</p>&#xA;&#xA;<ul>&#xA;<li><p>If your algorithm continually picks a number larger than or smaller than <em>all</em> numbers in the list, wouldn't your algorithm fail? For example if the list of numbers is:</p>&#xA;&#xA;<p>$S = (12,75,82,34,55,15,51)$</p>&#xA;&#xA;<p>and you keep picking numbers greater than $82$ or less than $12$ to create sublists with, wouldn't your set always remain the same size?</p></li>&#xA;<li><p>If your algorithm continually picks a number that creates sublists of $1$, why is the worst-case scenario $\mathcal{O}\left(n^2\right)$? Wouldn't efficiency be linear considering that according to the <a href="http://en.wikipedia.org/wiki/Master_theorem" rel="nofollow">Master Theorem</a>, $d&gt;\log_b a$?* (and therefore be $\mathcal{O}\left(n^d\right)$ or specifically in this case $\mathcal{O}\left(n\right)$)</p></li>&#xA;</ul>&#xA;&#xA;<p>*Where $d$ is the efficiency exponent (i.e. linear, exponential etc.), $b$ is the factor the size of problem is reduced by at each iteration, $a$ is the number of subproblems and $k$ is the level. Full ratio: $T(n) = \mathcal{O}\left(n^d\right) * (\frac{a}{b^d})^k$</p>&#xA;
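For reference, the partition-based selection the question alludes to (quickselect) can be sketched as follows; note that drawing the pivot from the list itself guarantees the sublists shrink, which speaks to the first bullet:

```python
import random

def quickselect(xs, k):
    """k-th smallest element (0-based) of xs, by repeated partitioning.
    Expected O(n); worst case O(n^2) when the pivot keeps landing at an
    extreme.  The pivot is always drawn from the list itself, so every
    round removes at least the pivot and the list must shrink."""
    xs = list(xs)
    while True:
        pivot = random.choice(xs)
        lo = [x for x in xs if x < pivot]
        hi = [x for x in xs if x > pivot]
        n_eq = len(xs) - len(lo) - len(hi)
        if k < len(lo):
            xs = lo
        elif k < len(lo) + n_eq:
            return pivot
        else:
            k -= len(lo) + n_eq
            xs = hi

S = [12, 75, 82, 34, 55, 15, 51]
print(quickselect(S, len(S) // 2))  # 51, the median
```

Even in the worst case each round costs only the current sublist's length, so the total is $n + (n-1) + \dots + 1 = \mathcal{O}(n^2)$, not a Master Theorem recurrence with a constant shrink factor.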
algorithms algorithm analysis search algorithms
1
1,790
subsets of infinite recursive sets
<p>A recent exam question went as follows:</p>&#xA;&#xA;<blockquote>&#xA; <ol>&#xA; <li>$A$ is an infinite recursively enumerable set. Prove that $A$ has an infinite recursive subset.</li>&#xA; <li>Let $C$ be an infinite recursive subset of $A$. Must $C$ have a subset that is <em>not</em> recursively enumerable?</li>&#xA; </ol>&#xA;</blockquote>&#xA;&#xA;<p>I answered 1. already. Regarding 2., I answered affirmatively and argued as follows. </p>&#xA;&#xA;<p>Suppose that all the subsets of $C$ were recursively enumerable. Since $C$ is infinite, the power set of $C$ is uncountable, so by assumption there would be uncountably many recursively enumerable sets. But the recursively enumerable sets are in one-to-one correspondence with the Turing machines that recognize them, and Turing machines are enumerable. Contradiction. So $C$ must have a subset that is not recursively enumerable.</p>&#xA;&#xA;<p>Is this correct?</p>&#xA;
computability check my proof
0
1,792
Cyclic coordinate method: how does it differ from Hook & Jeeves and Rosenbrock?
<p>I have trouble understanding the cyclic coordinate method. How does it differ with the <a href="http://en.wikipedia.org/wiki/Pattern_search_%28optimization%29" rel="nofollow">Hook and Jeeves method</a> and the <a href="http://en.wikipedia.org/wiki/Rosenbrock_methods" rel="nofollow">Rosenbrock method</a>?</p>&#xA;&#xA;<p>From a past exam text:</p>&#xA;&#xA;<blockquote>&#xA; <p>Describe the cyclic coordinate method and outline the similarities and the &#xA; differences between the Cyclic Coordinate method, the Hooke and Jeeves &#xA; method, and the Rosenbrock method.</p>&#xA;</blockquote>&#xA;&#xA;<p>I would appreciate a good reference, I'm having trouble finding any.</p>&#xA;
algorithms reference request optimization numerical analysis
0
1,796
Fast Poisson quantile computation
<p>I am seeking a fast algorithm to compute the following function, a quantile of the <a href="http://en.wikipedia.org/wiki/Poisson_distribution" rel="nofollow">Poisson distribution</a>:&#xA;$$f(n, \lambda) = e^{-\lambda} \sum_{k=0}^{n} \frac{\lambda^k}{k!} $$</p>&#xA;&#xA;<p>I can think of an algorithm in $O(n)$, but considering the structure of the series, there is probably a $O(1)$ solution (or at least a good $O(1)$ approximation). Any take?</p>&#xA;
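A numerically safer version of the $O(n)$ evaluation computes each term of this sum (the Poisson CDF at $n$) from the previous one, avoiding factorials and large powers. For an $O(1)$-style route, the sum equals the regularized upper incomplete gamma function $Q(n+1, \lambda)$, for which library implementations exist (e.g. <code>scipy.special.gammaincc(n + 1, lam)</code> should give the same value). A sketch of the iterative form:

```python
import math

def poisson_cdf(n, lam):
    """P(X <= n) for X ~ Poisson(lam), in O(n) time and O(1) space.
    Each term lam**k / k! is obtained from the previous one by a single
    multiplication, avoiding overflow-prone powers and factorials."""
    term = math.exp(-lam)  # k = 0 term: e^{-lam}
    total = term
    for k in range(1, n + 1):
        term *= lam / k
        total += term
    return total

print(poisson_cdf(2, 1.0))  # e^{-1} * (1 + 1 + 1/2) ~= 0.9197
```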
algorithms numerical analysis
1
1,797
Key secrecy vs Algorithm secrecy
<p>It's a well-known statement that </p>&#xA;&#xA;<p>"<em>Cryptographic security must rely on a secret key instead of a secret algorithm</em>."</p>&#xA;&#xA;<p>I would like to ask about some details of it. <em>And what are the differences?</em></p>&#xA;&#xA;<p>I see the obvious thing that for a multi-user system, generating a key is overwhelmingly easier than generating a distinct algorithm for every user pair (and even for a single pair of users one could argue that updating the key is easier).</p>&#xA;&#xA;<p>But is it the only argument? </p>&#xA;&#xA;<p>I mean, if we define </p>&#xA;&#xA;<pre><code>AlgorithmA = AlgorithmX + key A&#xA;AlgorithmB = AlgorithmX + key B&#xA;</code></pre>&#xA;&#xA;<p>then a change of the key is no different from a change of the algorithm.</p>&#xA;&#xA;<p>The only difference I see is that for a new pair of users/keys</p>&#xA;&#xA;<ul>&#xA;<li><p><em>most of</em> the algorithm structure <strong>remains constant</strong> in the case of a secret key,</p></li>&#xA;<li><p><em>most of</em> the algorithm structure <strong>needs to change</strong> in the case of a secret algorithm.</p></li>&#xA;</ul>&#xA;&#xA;<p>But where is the limit? What does "most of" mean?</p>&#xA;&#xA;<p>I would like to have more views and clues to understand why this distinction is usually mentioned.</p>&#xA;
cryptography security encryption
1
1,801
From FACTOR To KNAPSACK
<ol>&#xA;<li><p>If there were an algorithm that factored in polynomial time by means of examining each possible factor of a complex number efficiently, could one not also use this algorithm to solve unbounded knapsack problems since two factors can be viewed as one value, say within the set for the knapsack problem, and the other being the number of copies of the first factor?</p>&#xA;&#xA;<p>FACTOR 15; 3, 5</p>&#xA;&#xA;<p>Unbounded KNAPSACK with value of 15 and the set of all integers; {5,5,5} andor {3,3,3,3,3}</p></li>&#xA;<li><p>Would this mean FACTOR was NP-Complete?</p></li>&#xA;<li><p>Would solving unbounded knapsack problems in polynomial time in this way prove P=NP?</p></li>&#xA;</ol>&#xA;
complexity theory np complete integers knapsack problems
1
1,803
Extracting non-duplicate cells in a particular matrix with repeated entries
<p>Consider a board of $n \times n$ cells, where $n = 2k, k \geq 2$. Each of the numbers from $S = \left\{1,...,\frac{n^2}{2}\right\}$ is written to two cells so that each cell contains exactly one number.</p>&#xA;&#xA;<p>How can I show that $n$ cells $c_{i, j}$ can be chosen with one cell per row and one cell per column such that no two chosen cells contain the same number?</p>&#xA;&#xA;<p>This was an example problem for an exam I'm studying for. I have been trying for several hours but can't get it right. I think random permutations can help here but I am not sure.</p>&#xA;
combinatorics probability theory
1
1,806
Weighted Maximum 3-DIMENSIONAL-MATCHING with restricted weights (Approx Algo)
<p>If the weights of the weighted 3-DIMENSIONAL-MATCHING problem are restricted to, let's say, 1 and 2, is it possible to reduce this case to the unweighted 3-DIMENSIONAL-MATCHING problem?&#xA;(For the unweighted version there is a (1.5+$\epsilon$)-approximation<sup>1</sup> algorithm, while for the weighted version there is only a 2-approximation<sup>2,3</sup> algorithm.)</p>&#xA;&#xA;<hr>&#xA;&#xA;<p>References:</p>&#xA;&#xA;<ol>&#xA;<li><p><a href="http://www.nada.kth.se/~viggo/wwwcompendium/node275.html#HurSch89" rel="nofollow">unweighted ($1.5+\epsilon$-approx)</a> </p></li>&#xA;<li><p><a href="http://www.nada.kth.se/~viggo/wwwcompendium/node275.html#ArkHas97" rel="nofollow">weighted ($2+\epsilon$-approx)</a> </p></li>&#xA;<li><p><a href="http://www.cs.umd.edu/~yhchan/thesis.pdf" rel="nofollow">weighted ($2$-approx)</a> by CHAN, Yuk Hei, 2009</p></li>&#xA;</ol>&#xA;
algorithms approximation
0
1,809
Probabilistic test of matrix multiplication with one-sided error
<p>Given three matrices $A, B, C \in \mathbb{Z}^{n \times n}$ we want to test whether $AB \neq C$. Assume that the arithmetic operations $+$ and $-$ take constant time when applied to numbers from $\mathbb{Z}$.</p>&#xA;&#xA;<p>How can I state an algorithm with one-sided error that runs in $O(n^2)$ time and prove its correctness?</p>&#xA;&#xA;<p>I have been trying for several hours but can't get it right. I think I have to use the fact that for any nonzero $x \in \mathbb{Z}^n$ at most half of the vectors $s \in S = \left\{1, 0\right\}^n$ satisfy $x \cdot s = 0$, where $x \cdot s$ denotes the scalar product $\sum_{i=1}^{n} x_is_i$.</p>&#xA;
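The fact quoted at the end is exactly the engine behind Freivalds' algorithm: pick a random $s \in \{0,1\}^n$ and compare $A(Bs)$ with $Cs$, which costs only matrix–vector products. A sketch:

```python
import random

def freivalds_differs(A, B, C, rounds=20):
    """One-sided test of AB != C in O(rounds * n^2) time.
    True  -> AB != C with certainty (a witness vector s was found).
    False -> AB == C with probability >= 1 - 2**(-rounds)."""
    n = len(A)
    for _ in range(rounds):
        s = [random.randint(0, 1) for _ in range(n)]
        # Three matrix-vector products, each O(n^2); never form AB itself.
        Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
        ABs = [sum(A[i][j] * Bs[j] for j in range(n)) for i in range(n)]
        Cs = [sum(C[i][j] * s[j] for j in range(n)) for i in range(n)]
        if ABs != Cs:
            return True  # certain: (AB - C)s != 0, so AB != C
    return False

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]  # the true product AB
print(freivalds_differs(A, B, C))  # False: no witness exists
```

If $AB \neq C$, some row of $AB - C$ is a nonzero vector $x$, and by the quoted fact each round detects it with probability at least $1/2$.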
algorithms probabilistic algorithms matrices linear algebra
1
1,810
Are there NP problems, not in P and not NP Complete?
<p>Are there any known problems in $\mathsf{NP}$ (and not in $\mathsf{P}$) that aren't $\mathsf{NP}$ Complete? My understanding is that there are no currently known problems where this is the case, but it hasn't been ruled out as a possibility. </p>&#xA;&#xA;<p>If there is a problem that is $\mathsf{NP}$ (and not $\mathsf{P}$) but not $\mathsf{NP\text{-}complete}$, would this be a result of no existing isomorphism between instances of that problem and the $\mathsf{NP\text{-}complete}$ set? If this case, how would we know that the $\mathsf{NP}$ problem isn't 'harder' than what we currently identify as the $\mathsf{NP\text{-}complete}$ set?</p>&#xA;
complexity theory np complete p vs np
1
1,814
An argument for error accumulation during complex DFT
<p>I am doing FFT-based multiplication of polynomials with integer coefficients (long integers, in fact). The coefficients have a maximum value of $BASE-1, \quad BASE \in \mathbb{N},\quad BASE &gt; 1$. </p>&#xA;&#xA;<p>I would like to put forward a formal argument that if we use complex DFT for computing a convolution on a physical machine, it will yield incorrect results at some transform length $n\in \mathbb{N}$. </p>&#xA;&#xA;<p>What was easy to prove was the fact that at some big $n$ computing the convolution with DFT will not be possible at all, since, for example, the following difference of primitive $n$-th roots of unity tends to zero: $\omega_n^1 - \omega_n^2 \rightarrow 0$ when $n \rightarrow \infty$, and if we are restricted by some machine epsilon $\epsilon$, at some $n$ this will make the values indistinguishable and interpolation impossible.</p>&#xA;&#xA;<p>But the boundary I obtained from such an argument was way too big: only at $n=2^{60}$ did I get a difference $\omega_n^1 - \omega_n^2$ whose components $Re$ and $Im$ were both smaller than what is representable in $double$ precision. This certainly is a boundary, but not a very practical one.</p>&#xA;&#xA;<p>What I would like to show (if it is possible) is that much earlier than interpolation becomes theoretically impossible, the round-off errors will start to give wrong coefficients in the convolution, so that</p>&#xA;&#xA;<p>$$a\cdot b \neq IDFT(DFT(a)\times DFT(b)),$$</p>&#xA;&#xA;<p>where $DFT$ and $IDFT$ are the algorithm implementations that I use to calculate the Fourier transform. </p>&#xA;&#xA;<p>Maybe it is possible to make use of the fact that the primitive $n$-th root of unity, $\omega_n = \exp(-2\pi i / n)$, has irrational real and imaginary parts for the majority of $n$'s. It will thereby be computed with inevitable error $\psi$, defined as the value needed to "round off" everything that's less than the machine epsilon $\epsilon$. 
Thus all the values used for DFT,</p>&#xA;&#xA;<p>$$\omega_n^0, \omega_n^1, ..., \omega_n^{n-1},$$</p>&#xA;&#xA;<p>except for $\omega_n^0$ will also be computed with errors. </p>&#xA;&#xA;<p>Since I'm not a good mathematician at all, I don't know if and how I could use this fact to prove that the situation is going to worsen with increasing $n$ and that eventually the convolution is going to be computed incorrectly. </p>&#xA;&#xA;<p>I would also like to have an argument for OR against the following claim: for fixed $n$, the maximal error will be produced when all the coefficients of both polynomials are $BASE-1$.</p>&#xA;&#xA;<p>Thank you very much in advance!</p>&#xA;
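One practical complement to a formal argument is to measure the round-off directly: implement the convolution with a (slow but simple) complex DFT, compare against exact integer results, and watch how the maximal error grows with $n$ and $BASE$. A minimal pure-Python sketch of the computation under test:

```python
import cmath

def dft(xs, sign=-1):
    """Naive O(n^2) DFT; sign=+1 gives the (unnormalized) inverse."""
    n = len(xs)
    return [sum(x * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for j, x in enumerate(xs))
            for k in range(n)]

def convolve_via_dft(a, b):
    """Multiply polynomials a and b (coefficient lists) via the DFT,
    returning floating-point coefficients before any rounding."""
    n = len(a) + len(b) - 1
    fa = dft(a + [0] * (n - len(a)))
    fb = dft(b + [0] * (n - len(b)))
    prod = [x * y for x, y in zip(fa, fb)]
    return [z.real / n for z in dft(prod, sign=+1)]

# (1 + 2x + 3x^2)(4 + 5x) = 4 + 13x + 22x^2 + 15x^3
print([round(c) for c in convolve_via_dft([1, 2, 3], [4, 5])])  # [4, 13, 22, 15]
```

Convolution fails exactly when some <code>max(abs(c - round(c)))</code> reaches $0.5$, so plotting that quantity for all-$(BASE-1)$ inputs gives an empirical version of the claim in the last paragraph.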
algorithms proof techniques numerical analysis
0
1,816
Is there a typed SKI calculus?
<p>Most of us know the correspondence between <a href="http://en.wikipedia.org/wiki/Combinatory_logic">combinatory logic</a> and <a href="http://en.wikipedia.org/wiki/Lambda_calculus">lambda calculus</a>. But I've never seen (maybe I haven't looked deep enough) the equivalent of "typed combinators", corresponding to the simply typed lambda calculus. Does such thing exist? Where could one find information about it?</p>&#xA;
reference request logic lambda calculus type theory combinatory logic
1
1,822
dynamic programming exercise on cutting strings
<p>I have been working on the following problem from this <a href="http://www.cs.berkeley.edu/~vazirani/algorithms/chap6.pdf">book</a>.</p>&#xA;&#xA;<blockquote>&#xA; <p>A certain string-processing language offers a primitive operation which splits a string into two&#xA; pieces. Since this operation involves copying the original string, it takes n units of time for a&#xA; string of length n, regardless of the location of the cut. Suppose, now, that you want to break a&#xA; string into many pieces. The order in which the breaks are made can affect the total running&#xA; time. For example, if you want to cut a 20-character string at positions $3$ and $10$, then making&#xA; the first cut at position $3$ incurs a total cost of $20 + 17 = 37$, while doing position 10 first has a&#xA; better cost of $20 + 10 = 30$.</p>&#xA;</blockquote>&#xA;&#xA;<p>I need a dynamic programming algorithm that given $m$ cuts, finds the minimum cost of cutting a string into $m +1$ pieces.</p>&#xA;
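One standard formulation keys the DP on pairs of cut positions, with sentinel boundaries at $0$ and $n$: the cost of a piece is its length, plus the best way to finish the cuts inside each half. A sketch:

```python
from functools import lru_cache

def min_cut_cost(n, cuts):
    """Minimum total cost of cutting a length-n string at the given
    positions, where cutting a piece of length L costs L."""
    pts = [0] + sorted(cuts) + [n]  # sentinel boundaries at both ends

    @lru_cache(maxsize=None)
    def best(i, j):
        # Cheapest way to make every cut strictly between pts[i], pts[j].
        if j - i < 2:
            return 0  # no cut points left inside this piece
        return (pts[j] - pts[i]  # cost of the first cut in this piece
                + min(best(i, k) + best(k, j) for k in range(i + 1, j)))

    return best(0, len(pts) - 1)

print(min_cut_cost(20, [3, 10]))  # 30, matching the book's example
```

There are $O(m^2)$ subproblems and each takes $O(m)$ to combine, so the whole thing runs in $O(m^3)$ time, independent of the string length $n$.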
algorithms combinatorics strings dynamic programming
0
1,825
Maximum Enclosing Circle of a Given Radius
<p>I am trying to find an approach to the following problem:</p>&#xA;&#xA;<blockquote>&#xA; <p>Given the set of point $S$ and radius $r$, find the center point of circle, such that the circle contains the maximum number of points from the set. The running time should be $O(n^2)$.</p>&#xA;</blockquote>&#xA;&#xA;<p>At first it seemed similar to the smallest enclosing circle problem, which can easily be solved in $O(n^2)$. The idea was to set an arbitrary center and encircle all points of $S$, then, step by step, move the circle to touch the left/rightmost points and shrink it to the given radius; obviously, this is not going to work.</p>&#xA;
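As a baseline to check candidate $O(n^2)$ algorithms against, one can use the observation that some optimal circle may be assumed to pass through two of the points (or to cover a lone point), giving an $O(n^3)$ brute force over the two radius-$r$ circles through each close-enough pair:

```python
import math

def best_center(points, r, eps=1e-9):
    """O(n^3) brute force: some optimal radius-r circle can be assumed
    to pass through two of the points (or to cover a lone point), so it
    suffices to try the two radius-r circles through each close pair."""
    def covered(cx, cy):
        return sum(math.hypot(px - cx, py - cy) <= r + eps
                   for px, py in points)

    best_count, best_c = 1, points[0]  # a circle around any single point
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = points[i], points[j]
            d = math.hypot(x2 - x1, y2 - y1)
            if d == 0 or d > 2 * r:
                continue  # no radius-r circle passes through both points
            mx, my = (x1 + x2) / 2, (y1 + y2) / 2
            h = math.sqrt(r * r - (d / 2) ** 2)
            ux, uy = (y1 - y2) / d, (x2 - x1) / d  # unit perpendicular
            for cx, cy in ((mx + h * ux, my + h * uy),
                           (mx - h * ux, my - h * uy)):
                c = covered(cx, cy)
                if c > best_count:
                    best_count, best_c = c, (cx, cy)
    return best_count, best_c

print(best_center([(0, 0), (1, 0), (0, 1), (10, 10)], 1.0)[0])  # 3
```

The known $O(n^2)$ improvement replaces the inner counting step with an angular sweep of entry/exit intervals around each point, but the brute force above is handy for validating any faster implementation on small inputs.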
algorithms computational geometry
1
1,828
Polytime and polyspace algorithm for determining the leading intersection of n discrete monotonic functions
<p>Some frontmatter: I'm a recreational computer scientist and employed software engineer. So, pardon if this prompt seems somewhat out of left field -- I routinely play with mathematical simulcra and open problems when I have nothing better to do. </p>&#xA;&#xA;<p>While playing with the <a href="http://en.wikipedia.org/wiki/Riemann_hypothesis">Riemann hypothesis</a>, I determined that the <a href="http://en.wikipedia.org/wiki/Prime_gap">prime gap</a> can be reduced to a recurrence relation based on the intersection of all $n-1$ complementary functions formed by the multiples of each previous prime number (keen observers will note this is a generalization of the <a href="http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes">Sieve of Eratosthenes</a>). If this makes absolutely no sense to you, don't worry -- it's still frontmatter.</p>&#xA;&#xA;<p>Seeing how these functions related, I realized that the next instance of each prime can be reduced to the first intersection of these functions, recurring forward infinitely. However, I could not determine if this is tractable in polytime and polyspace. Thus: <strong>what I'm looking for is an algorithm that can determine the first intersection of $n$ discrete (and, if applicable, monotonic) functions in polynomial time and space. If no such algorithm currently exists or can exist, a terse proof or reference stating so is sufficient.</strong> </p>&#xA;&#xA;<p>The closest I can find so far is <a href="http://en.wikipedia.org/wiki/Dykstra%27s_projection_algorithm">Dykstra's projection algorithm</a> (yes, that's R. L. Dykstra, not <a href="http://en.wikipedia.org/wiki/Edsger_Dijkstra">Edsger Dijkstra</a>), which I believe reduces itself to a problem of <a href="http://en.wikipedia.org/wiki/Linear_programming#Integer_unknowns">integer programming</a> and is, therefore, NP-hard. 
Similarly, if one performs a transitive set intersection of all of the applicable points (as they're currently understood to be bounded), we must still constrain ourselves to exponential space for our recurrence due to the current weak bound of $\ln(m)$ primes for any real $m$ (and therefore, $e^n$ space for each prime $n$).</p>&#xA;&#xA;<p>Globally, I'm wondering if my understanding of the reduction of the problem is wrong. I don't expect to solve the Riemann hypothesis (or any deep, open problem in this space) any time soon. Rather, I'm seeking to learn more about it by playing with the problem, and I've hit a snag in my research.</p>&#xA;
algorithms reference request discrete mathematics
1
1,830
Why is $L= \{ 0^n 1^n | n \geq 1 \}$ not a regular language?
<p>I'm looking for intuition about when a language is regular and when it is not. For example, consider:</p>&#xA;&#xA;<p>$$ L = \{ 0^n 1^n \mid n \geq 1 \} = \{ 01, 0011, 000111, \ldots \}$$</p>&#xA;&#xA;<p>which is not a regular language. Intuitively it seems a very simple language, there doesn't seem to be anything complicated going on. What is the difference between $L$ and a regular language like:</p>&#xA;&#xA;<p>$$L' = \{ w \mid w \text{ does not contain } 11 \} = \{0,10\}^*\cdot (1 \mid \varepsilon).$$</p>&#xA;&#xA;<p>I know how to <a href="https://cs.stackexchange.com/questions/1031/how-to-prove-that-a-language-is-not-regular">prove that $L$ is not regular</a>, using the Pumping Lemma. Here I am looking for <strong>intuition</strong> about what makes a language regular.</p>&#xA;
formal languages regular languages intuition
0
1,836
What is complexity class $\oplus P^{\oplus P}$
<p>What does the complexity class $\oplus P^{\oplus P}$ mean? I know that $\oplus P$ is the complexity class which contains languages $A$ for which there is a polynomial time nondeterministic Turing machine $M$ such that $x \in A$ iff the number of accepting computation paths of the machine $M$ on the input $x$ is odd.</p>&#xA;&#xA;<p>But what does $\oplus P^{\oplus P}$ mean? I just can't follow what it actually does :)</p>&#xA;&#xA;<p>What are the practical consequences of such a complexity class, and how is it possible to show that $\oplus P^{\oplus P} = \oplus P$?</p>&#xA;
complexity theory terminology complexity classes
0
1,838
Counting different words in text using hashing
<p>I am still fighting with hashing, and I ask myself: what is the most efficient way to count the number of different words in a text using a hash table?</p>&#xA;&#xA;<p>My intuition says that if we apply the hash function to every word in the text, words with different hash values will end up in different buckets, while repeated words will hash to the same bucket; collisions between distinct words can then be resolved using the chaining method.</p>&#xA;&#xA;<p>Does it work like that?</p>&#xA;
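In practice this is exactly what a hash-based set gives you: expected constant time per word, with collisions handled internally. A sketch in Python, whose <code>set</code> is a hash table (CPython resolves collisions by open addressing rather than chaining, but the expected asymptotics are the same):

```python
def count_distinct_words(text):
    """Count different words in expected O(w) total time for w words:
    each word is hashed once and inserted into a hash-based set, and
    duplicate insertions leave the set unchanged."""
    seen = set()  # Python's set is a hash table under the hood
    for word in text.split():
        seen.add(word)
    return len(seen)

print(count_distinct_words("the quick brown fox jumps over the lazy dog the end"))  # 9
```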
algorithms strings hash tables
0
1,842
A context free grammar proof
<p>There is a problem which I cannot solve; I would be glad of any hint.</p>&#xA;&#xA;<p>Prove that the following language is <em>not</em> context free:</p>&#xA;&#xA;<p>$L= \{ a^nb^m | \gcd(n,m) = 1 \}$.</p>&#xA;&#xA;<p>It can be proven using the pumping lemma, but how?</p>&#xA;&#xA;<p>If I start with some prime numbers $m$ and $n$ where $m&gt;n&gt;2$ and pump it up from $uVxYz$, there are three possible outcomes: $a^{n + k} b^m$, $a^{n +k}b^{m +k}$, $a^n b^{m +k}$. Since I do not know whether $k$ is even or odd I cannot say anything. It is certain that the exponents $n$ and $m$ will be odd. However after adding $k$ to some of them, how can I say something about whether their gcd is 1 or not?</p>&#xA;
formal languages context free pumping lemma
0
1,843
Golden Section, Fibonacci and Dichotomic Searches
<p>I wonder if somebody could quickly and briefly outline some of the similarities and differences between the line search methods <a href="http://glossary.computing.society.informs.org/ver2/mpgwiki/index.php/Golden_section_search" rel="nofollow">Golden Section Search</a>, <a href="http://glossary.computing.society.informs.org/ver2/mpgwiki/index.php/Fibonacci_search" rel="nofollow">Fibonacci Search</a> and <a href="https://en.wikipedia.org/wiki/Dichotomic_search" rel="nofollow">Dichotomic Search</a>.</p>&#xA;&#xA;<p>I know dichotomic search needs two function evaluations per iteration whereas the other two need only one, and that Fibonacci search tends to golden section search as the number of function evaluations tends to infinity. I know also that you have to predetermine the number of function evaluations for Fibonacci search. Are there other, similar techniques?</p>&#xA;
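For comparison purposes, here is a minimal golden section search; the key property mentioned above is visible in the loop: one of the two interior points (and its function value) carries over to the next bracket, so each iteration costs a single new function evaluation:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Minimize a unimodal f on [a, b].  After the initial step, each
    iteration needs exactly one new function evaluation, because one
    interior point and its value are reused in the shrunken bracket."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.6180
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:          # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

print(golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0))  # ~2.0
```

A Fibonacci search would look the same except that the ratio $1/\varphi$ is replaced by ratios of consecutive Fibonacci numbers, which is why the iteration count must be fixed in advance.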
algorithms optimization
0
1,847
Pumping lemma for simple finite regular languages
<p><a href="http://en.wikipedia.org/wiki/Pumping_lemma_for_regular_languages">Wikipedia</a> has the following definition of the pumping lemma for regular langauges...</p>&#xA;&#xA;<blockquote>&#xA; <p>Let $L$ be a regular language. Then there exists an integer $p$ β‰₯ 1&#xA; depending only on $L$ such that every string $w$ in $L$ of length at&#xA; least $p$ ($p$ is called the "pumping length") can be written as $w$ =&#xA; $xyz$ (i.e., $w$ can be divided into three substrings), satisfying the&#xA; following conditions:</p>&#xA; &#xA; <ol>&#xA; <li>|$y$| β‰₯ 1</li>&#xA; <li>|$xy$| ≀ $p$</li>&#xA; <li>for all $i$ β‰₯ 0, $xy^iz$ ∈ $L$</li>&#xA; </ol>&#xA;</blockquote>&#xA;&#xA;<p>I do not see how this is satisfied for a simple finite regular language. If I have an alphabet of {$a,b$} and regular expression $ab$ then $L$ consists of just the one word which is $a$ followed by $b$. I now want to see if my regular language satisfies the pumping lemma...</p>&#xA;&#xA;<p>As nothing repeats in my regular expression the value of $y$ must be empty so that condition 3 is satisifed for all $i$. But if so then it fails condition 1 which says $y$ must be at least 1 in length! </p>&#xA;&#xA;<p>If instead I let $y$ be either $a$, $b$ or $ab$ then it will satisfy condition 1 but fail condition 3 because it never actually repeats itself.</p>&#xA;&#xA;<p>I am obviously missing something mind blowingly obvious. Which is?</p>&#xA;
formal languages regular languages pumping lemma finite sets
0
1,852
How to prove that a constrained version of 3SAT in which no literal can occur more than once, is solvable in polynomial time?
<p>I'm trying to work out an assignment (taken from the book <a href="http://www.cs.berkeley.edu/~vazirani/algorithms.html">Algorithms - by S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani</a>, Chap 8, problem 8.6a), and I'm paraphrasing what it states:</p>&#xA;&#xA;<blockquote>&#xA; <p>Given that 3SAT remains NP-complete even when restricted to formulas in which&#xA; each literal appears at most twice, show that if each literal appears at most once, then the problem is solvable in polynomial time.</p>&#xA;</blockquote>&#xA;&#xA;<p>I attempted to solve this by separating the clauses into multiple groups: </p>&#xA;&#xA;<ol>&#xA;<li>Clauses which did not have any variable in common with the rest of the clauses</li>&#xA;<li>Clauses which had only 1 variable in common</li>&#xA;<li>Clauses which had 2 variables in common</li>&#xA;<li>Clauses which had all 3 variables in common</li>&#xA;</ol>&#xA;&#xA;<p>My reasoning was attempted along the lines that the # of such groups is finite (due to the imposed restriction of no literal being present more than once), and we could try to satisfy the most restricted group first (group 4) and then substitute the result in the lesser restricted groups (3, 2 and then 1), but I realized that this wasn't quite getting me anywhere, as this doesn't differ much from the case for the constrained version of 3SAT in which each literal can appear at most twice, which has been proven to be NP-complete. 
</p>&#xA;&#xA;<p>I tried searching online for any hints/solutions, but all I could get was <a href="http://www.cs.rpi.edu/~moorthy/Courses/CSCI2300/lab2011-9.html">this link</a>, in which the stated hint didn't make sufficient sense to me, which I'm reproducing verbatim here:</p>&#xA;&#xA;<blockquote>&#xA; <p>Hint: Since each literal appears at most once, convert this problem to 2SAT problem - hence polynomial time, if a literal $x_i$ appears in clause $C_j$ and complement of $x_i$ (i.e., $\overline{x_i}$) in clause $C_k$, construct a new clause clause $C_j \lor \overline{C_k}$.</p>&#xA;</blockquote>&#xA;&#xA;<p>Both $C_j$ and $C_k$ have three literals each - I didn't get how I should go about converting it into 2SAT by doing $C_j \lor \overline{C_k}$ (or $\overline{C_j \lor C_k}$ if I read it incorrectly).</p>&#xA;&#xA;<p>Any help in either decrypting the hint, or providing a path I can explore would be really appreciated.</p>&#xA;
complexity theory satisfiability 3 sat
1
1,853
NP $\subsetneq$ EXP?
<p>I think I heard somewhere that it has been proven that $\mathsf{NP}$ is strictly contained in $\mathsf{EXP}$, that is, $\mathsf{NP} \subsetneq \mathsf{EXP}$. Is this right? Wikipedia and book resources do not seem to give me an answer.</p>&#xA;&#xA;<p>I just found a post similar to this, but I am not sure whether $\mathsf{NP}$ is <em>strictly</em> contained in $\mathsf{EXP}$.</p>&#xA;
complexity theory complexity classes np
0
1,859
Rule of thumb to know if a problem could be NP-complete
<p>This question was inspired by <a href="https://stackoverflow.com/questions/10589995/algorithm-have-a-set-of-points-g-that-can-see-other-points-c-need-an-al/10590173#comment13716914_10590173">a comment on StackOverflow</a>.</p>&#xA;&#xA;<p>Apart from knowing the NP-complete problems of the Garey and Johnson book, and many others, is there a rule of thumb to tell whether a problem looks like an NP-complete one?</p>&#xA;&#xA;<p>I am not looking for something rigorous, but for something that works in most cases.</p>&#xA;&#xA;<p>Of course, every time we have to prove that a problem is NP-complete, or a slight variant of an NP-complete one; but before rushing into the proof it would be great to have some confidence in the positive result of the proof.</p>&#xA;
complexity theory np complete intuition
1
1,860
Does every NP problem have a poly-sized ILP formulation?
<p>Since Integer Linear Programming is NP-complete, there is a Karp reduction from any problem in NP to it. I thought this implied that there is always a polynomial-sized ILP formulation for any problem in NP.</p>&#xA;&#xA;<p>But I've seen papers on specific NP problems where people write things like "this is the first poly-sized formulation" or "there is no known poly-sized formulation". That's why I'm puzzled.</p>&#xA;
complexity theory np complete reductions linear programming
0
1,864
Hoare triple for assignment P{x/E} x:=E {P}
<p>I am trying to understand Hoare logic presented at Wikipedia,&#xA;<a href="http://en.wikipedia.org/wiki/Hoare_logic" rel="nofollow">Hoare logic at Wikipedia</a>&#xA;Apparently, if I understand correctly, a Hoare triple $$\{P\}~ C ~\{Q\}$$ means</p>&#xA;&#xA;<blockquote>&#xA; <p>if P holds just before C, then Q holds immediately after C, as long as C terminates. (A)</p>&#xA;</blockquote>&#xA;&#xA;<p>However, the assignment axiom schema seems to be interpreted in a different way:</p>&#xA;&#xA;<p>$$\frac{}{\{P[x/E]\} ~~x:=E~~ \{P\}}$$</p>&#xA;&#xA;<p>Wikipedia says:</p>&#xA;&#xA;<p>The assignment axiom means that the truth of $\{P[x/E]\}$ is equivalent to the after-assignment truth of $\{P\}$. Thus were $\{P[x/E]\}$ true prior to the assignment, by the assignment axiom, then $\{P\}$ would be true subsequent to which. Conversely, were $\{P[x/E]\}$ false prior to the assignment statement, $\{P\}$ must then be false consequently.</p>&#xA;&#xA;<p>I think the Hoare triple only affirms that if P[x/E] holds before x:=E, then P(x) holds after x:=E. It DOES NOT affirm, by its definition, that if P(x) holds after x:=E, then P[x/E] holds before x:=E. </p>&#xA;&#xA;<p>My naive question is, how can $\{P[x/E]\}$ before the assignment be equivalent to $\{P\}$ after the assignment? Does this contradict point (A) at the beginning of my post?</p>&#xA;
logic programming languages semantics hoare logic software verification
1
1,868
Is a PDA as powerful as a CPU?
<p>This is a question I have stumbled upon in my exam revision and I find it intriguing:</p>&#xA;<p><strong>My computer is blue and it has a massive graphics card and a DVD and everything, so which is more powerful: my computer or a Pushdown Automaton?</strong></p>&#xA;<hr />&#xA;<h2>My Thoughts</h2>&#xA;<p>When we talk about power I have assumed this to be computational power. I believe that a PDA has the computational power to equal the computational power of a CPU (CPU in this case being the core elements of a computer, i.e. memory and processor). This is because a PDA utilises a stack, which is memory (RAM) in a computer. The PDA has states as does a CPU, and the PDA also calculates simple logic at each state. I realise that the PDA itself would be a complex series of states to emulate the computational power of the CPU, and there would have to be a series of PDAs to compute different functions.</p>&#xA;<p>My Question: I know that Turing machines are best used to simulate the logic of a CPU, but <strong>am I right in saying that a PDA (or PDAs) can be designed to be as powerful as a CPU?</strong></p>&#xA;
computability computation models computable analysis
0
1,872
Brzozowski's algorithm for DFA minimization
<p>Brzozowski's DFA minimization algorithm builds a minimal DFA for DFA $G$ by:</p>&#xA;&#xA;<ol>&#xA;<li>reversing all the edges in $G$, making the initial state an accept state, and the accept states initial, to get an NFA $N&#39;$ for the reverse language, </li>&#xA;<li>using the powerset construction to get a DFA $G&#39;$ for the reverse language, </li>&#xA;<li>reversing the edges (and doing the initial-accept swap) in $G&#39;$ to get an NFA $N$ for the original language, and</li>&#xA;<li>doing the powerset construction to get $G_{\min}$.</li>&#xA;</ol>&#xA;&#xA;<p>Of course, since some DFAs have an exponentially large reverse DFA, this algorithm runs in exponential time in the worst case in terms of the size of the input, so let's keep track of the size of the reverse DFA. </p>&#xA;&#xA;<p>If $N$ is the size of the input DFA, $n$ is the size of the minimal DFA, and $m$ the size of the minimal reverse DFA, then <strong>what is the run time of Brzozowski's algorithm in terms of $N$, $n$, and $m$?</strong></p>&#xA;&#xA;<p>In particular, <strong>under what relationship between $n$ and $m$ does Brzozowski's algorithm outperform Hopcroft's or Moore's algorithms?</strong></p>&#xA;&#xA;<p>I have heard that on typical examples in <em>practice/application</em>, Brzozowski's algorithm outperforms the others. <strong>Informally, what are these typical examples like?</strong></p>&#xA;
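The four steps can be sketched directly. The NFA encoding (a dict from `(state, symbol)` pairs to sets of successor states) and the toy example DFA are my own choices for illustration, not from the question:

```python
def determinize(alphabet, delta, starts, finals):
    # Powerset construction. delta: dict (state, symbol) -> set of states.
    start = frozenset(starts)
    trans, seen, todo = {}, {start}, [start]
    while todo:
        S = todo.pop()
        for a in alphabet:
            T = frozenset(r for q in S for r in delta.get((q, a), ()))
            trans[(S, a)] = {T}          # DFA stored in the same NFA shape
            if T not in seen:
                seen.add(T)
                todo.append(T)
    return trans, start, {S for S in seen if S & finals}, seen

def reverse(delta, starts, finals):
    # Flip every edge; old accept states become starts, the old starts accept.
    rdelta = {}
    for (q, a), T in delta.items():
        for r in T:
            rdelta.setdefault((r, a), set()).add(q)
    return rdelta, finals, starts

def brzozowski(alphabet, delta, start, finals):
    d, s, f = reverse(delta, {start}, finals)        # step 1: reverse
    t, s2, f2, _ = determinize(alphabet, d, s, f)    # step 2: determinize
    d, s, f = reverse(t, {s2}, f2)                   # step 3: reverse again
    return determinize(alphabet, d, s, f)            # step 4: determinize again

# A redundant 3-state DFA for "strings over {a,b} ending in a":
# states 1 and 2 are equivalent, so the minimal DFA has 2 states.
delta = {(0, "a"): {1}, (0, "b"): {0},
         (1, "a"): {2}, (1, "b"): {0},
         (2, "a"): {2}, (2, "b"): {0}}
_, _, _, states = brzozowski({"a", "b"}, delta, 0, {1, 2})
```

The intermediate DFA built in step 2 is exactly the reverse DFA whose size $m$ the question wants to track; the subset construction only ever explores reachable subsets.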
algorithms finite automata runtime analysis
0
1,877
How not to solve P=NP?
<p>There are lots of attempts at proving either $\mathsf{P} = \mathsf{NP} $ or $\mathsf{P} \neq \mathsf{NP}$, and naturally many people think about the question and have ideas for proving either direction.</p>&#xA;&#xA;<p>I know that there are approaches that have been proven not to work, and there are probably more that have a history of failing. There also seem to be so-called <em>barriers</em> that many proof attempts fail to overcome. </p>&#xA;&#xA;<p>We want to avoid investigating dead ends, so what are they?</p>&#xA;
complexity theory reference request history p vs np reference question
0
1,878
Prove fingerprinting
<blockquote>&#xA; <p><strong>Possible Duplicate:</strong><br>&#xA; <a href="https://cs.stackexchange.com/questions/1692/prove-fingerprinting">Prove fingerprinting</a> </p>&#xA;</blockquote>&#xA;&#xA;&#xA;&#xA;<p>Let $a \neq b$ be two integers from the interval $[1, 2^n].$ Let $p$ be a random prime with $ 1 \le p \le n^c.$ Prove that&#xA;$$\text{prob}(a \equiv b \pmod{p}) \le c \ln(n)/(n^{c-1}).$$</p>&#xA;&#xA;<p>Hint: As a consequence of the prime number theorem, exactly $n/ \ln(n) \pm O(n/\ln(n))$ many numbers from $\{ 1, \ldots, n \}$ are prime.</p>&#xA;&#xA;<p>Conclusion: we can compress $n$ bits to $O(\log(n))$ bits and get a quite small false-positive rate.</p>&#xA;
randomness
0
1,883
Reconstructing Graphs from Degree Distribution
<p>Given a degree distribution, how fast can we construct a graph that follows the given degree distribution? A link or algorithm sketch would be good. The algorithm should report a "no" in case no graph can be constructed, and any one example if multiple graphs can be constructed.</p>&#xA;
algorithms graphs
1
1,886
If $L$ is context-free and $R$ is regular, then $L / R$ is context-free?
<p>I am stuck solving the following exercise:</p>&#xA;&#xA;<p>Argue that if $L$ is context-free and $R$ is regular, then $L / R = \{ w \mid \exists x \in R \;\text{s.t}\; wx \in L\} $ (i.e. the <a href="https://en.wikipedia.org/wiki/Right_quotient" rel="noreferrer">right quotient</a>) is context-free.</p>&#xA;&#xA;<p>I know that there exist a PDA that accepts $L$ and a DFA that accepts $R$. I am now trying to combine these automata into a PDA that accepts the right quotient. If I can build that, I have proved that $L/R$ is context-free. But I am stuck building this PDA.</p>&#xA;&#xA;<p>This is how far I have gotten: </p>&#xA;&#xA;<blockquote>&#xA;  <p>In the combined PDA, the states are the Cartesian product of the states of the separate automata. The edges are the edges of the DFA, but only the ones from which, in the future, a final state of the original PDA of $L$ can be reached. But I don't know how to write it down formally.</p>&#xA;</blockquote>&#xA;
formal languages context free finite automata closure properties pushdown automata
0
1,887
Why isn't this undecidable problem in NP?
<p>Clearly there aren't any undecidable problems in NP. However, according to <a href="http://en.wikipedia.org/wiki/NP_%28complexity%29">Wikipedia</a>:</p>&#xA;&#xA;<blockquote>&#xA; <p>NP is the set of all decision problems for which the instances where the answer is "yes" have [.. proofs that are] verifiable in polynomial time by a deterministic Turing machine.</p>&#xA; &#xA; <p>[...]</p>&#xA; &#xA; <p>A problem is said to be in NP if and only if there exists a verifier for the problem that executes in polynomial time.</p>&#xA;</blockquote>&#xA;&#xA;<p>Now consider the following problem:</p>&#xA;&#xA;<blockquote>&#xA; <p>Given a <a href="http://en.wikipedia.org/wiki/Diophantine_equation">Diophantine equation</a>, does it have any integer solutions?</p>&#xA;</blockquote>&#xA;&#xA;<p>Given a solution, it's easy to verify in polynomial time that it really <em>is</em> a solution: just plug the numbers into the equation. Thus, the problem is in NP. However, <em>solving</em> this problem is famously <a href="http://en.wikipedia.org/wiki/Hilbert%27s_tenth_problem">known to be undecidable</a>!</p>&#xA;&#xA;<p><em>(Similarly, it seems the halting problem should be in NP, since the "yes"-solution of "this program halts at the N-th step" can be verified in N steps.)</em></p>&#xA;&#xA;<p>Obviously there's something wrong with my understanding, but what is it?</p>&#xA;
complexity theory computability undecidability decision problem
1
1,901
Common idea in Karatsuba, Gauss and Strassen multiplication
<p>The identities used in multiplication algorithms by</p>&#xA;&#xA;<ul>&#xA;<li><p><a href="http://en.wikipedia.org/wiki/Karatsuba_algorithm#The_basic_step">Karatsuba</a> (integers)</p></li>&#xA;<li><p><a href="http://en.wikipedia.org/wiki/Multiplication_algorithm#Gauss.27s_complex_multiplication_algorithm">Gauss</a> (complex numbers)</p></li>&#xA;<li><p><a href="http://en.wikipedia.org/wiki/Strassen_algorithm">Strassen</a> (matrices)</p></li>&#xA;</ul>&#xA;&#xA;<p>seem very closely related. Is there a common abstract framework/generalization?</p>&#xA;
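All three algorithms exploit the same identity, $(a+b)(c+d) - ac - bd = ad + bc$: one multiplication of sums buys both cross terms at the cost of extra additions. A sketch of one step of the integer and complex cases (function names are mine):

```python
def karatsuba_mult(x, y, base=10):
    """One Karatsuba step on x = a*base + b and y = c*base + d:
    three multiplications instead of four."""
    a, b = divmod(x, base)
    c, d = divmod(y, base)
    ac = a * c
    bd = b * d
    cross = (a + b) * (c + d) - ac - bd   # = ad + bc, with no 4th product
    return ac * base**2 + cross * base + bd

def gauss_complex_mult(a, b, c, d):
    """(a + bi)(c + di) with three real multiplications (Gauss's trick)."""
    ac = a * c
    bd = b * d
    cross = (a + b) * (c + d) - ac - bd   # = ad + bc
    return (ac - bd, cross)               # (real part, imaginary part)
```

Strassen's matrix algorithm is the same idea taken further: seven cleverly chosen products of sums of blocks replace the eight block products of the naive method. One common abstraction is that all three reduce a bilinear map to fewer multiplications, i.e. they are low-rank decompositions of the corresponding tensor.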
algorithms matrices
1
1,903
What is the growth rate of the world wide web?
<p>Is there any way to estimate how much data is added to the world wide web each second? Are there any studies about this? </p>&#xA;
computer networks
0
1,904
Nilsson's sequence score for 8-puzzle problem in A* algorithm
<p>I am learning the <a href="http://en.wikipedia.org/wiki/A*_search_algorithm" rel="nofollow noreferrer">A* search algorithm</a> on an 8-puzzle problem.</p>&#xA;&#xA;<p>I don't have questions about A* itself, but I have some about the heuristic score - Nilsson's sequence score.</p>&#xA;&#xA;<p><a href="http://www.heyes-jones.com/astar.html" rel="nofollow noreferrer">Justin Heyes-Jones web pages - A* Algorithm</a> explains A* very clearly. It has a picture for Nilsson's sequence scores.</p>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/Wbl63.jpg" alt="Nilsson&#39;s sequence scores"></p>&#xA;&#xA;<p>It explains:</p>&#xA;&#xA;<p><strong>Nilsson's sequence score</strong></p>&#xA;&#xA;<blockquote>&#xA;  <p>A tile in the center scores 1 (since it should be empty)</p>&#xA;  &#xA;  <p>For each tile not in the center, if the tile clockwise to it is not the one that should be clockwise to it then score 2. </p>&#xA;  &#xA;  <p>Multiply this sequence by three and finally add the total distance you need to move each tile back to its correct position. </p>&#xA;</blockquote>&#xA;&#xA;<p>I can't understand the steps above for calculating the scores.</p>&#xA;&#xA;<p>For example, for the start state below, why is h = 17?</p>&#xA;&#xA;<blockquote>&#xA;  <p>0 A C </p>&#xA;  &#xA;  <p>H B D </p>&#xA;  &#xA;  <p>G F E</p>&#xA;</blockquote>&#xA;&#xA;<p>Following the description, </p>&#xA;&#xA;<p><code>B</code> is in the center, so we have 1.</p>&#xA;&#xA;<p>Then: <code>for each tile not in the center, if the **tile** clockwise to **it** is not the one that should be clockwise to it then score 2.</code> I am not sure what this statement means. </p>&#xA;&#xA;<p>What does the double-starred <code>tile</code> refer to? </p>&#xA;&#xA;<p>What does the double-starred <code>it</code> refer to?</p>&#xA;&#xA;<p>Does <code>it</code> refer to the center tile (<code>B</code> in this example)? Or does it refer to each tile not in the center?</p>&#xA;&#xA;<p>Is the next step that we start from <code>A</code>? 
<code>C</code> is clockwise to <code>A</code> but should not be (the tile that should be clockwise to <code>A</code> is <code>B</code>), so we score 2 - and so on?</p>&#xA;
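A sketch of the score as I read it, assuming the goal state from the linked page (tiles A-H clockwise around the edge with the blank `0` in the centre; that goal is an assumption on my part). Here "it" is the edge tile being scored, and "the tile clockwise to it" is that tile's actual clockwise neighbour in the current state:

```python
GOAL  = ["A", "B", "C", "H", "0", "D", "G", "F", "E"]  # assumed goal layout
START = ["0", "A", "C", "H", "B", "D", "G", "F", "E"]  # start state from the question

# Edge cells of the flattened 3x3 board, in clockwise order:
# top-left, top-mid, top-right, mid-right, bottom-right, bottom-mid, bottom-left, mid-left
CLOCKWISE = [0, 1, 2, 5, 8, 7, 6, 3]

def nilsson_h(state, goal):
    # The correct clockwise successor of each tile in the goal: A->B, ..., H->A.
    succ = {goal[CLOCKWISE[i]]: goal[CLOCKWISE[(i + 1) % 8]] for i in range(8)}
    seq = 1 if state[4] != "0" else 0                  # a tile in the centre scores 1
    for i in range(8):
        tile = state[CLOCKWISE[i]]
        if tile == "0":
            continue                                   # the blank is not scored
        if succ[tile] != state[CLOCKWISE[(i + 1) % 8]]:
            seq += 2                                   # wrong clockwise neighbour scores 2
    dist = 0                                           # total Manhattan distance of the tiles
    for idx, tile in enumerate(state):
        if tile == "0":
            continue
        g = goal.index(tile)
        dist += abs(idx // 3 - g // 3) + abs(idx % 3 - g % 3)
    return dist + 3 * seq
```

Under this reading, the start state scores sequence 5 (2 for `A`, whose clockwise neighbour is `C` instead of `B`; 2 for `H`, followed by the blank instead of `A`; 1 for `B` sitting in the centre) and total distance 2 (`A` and `B` are each one move from home), so h = 2 + 3·5 = 17, matching the page.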
algorithms machine learning search algorithms heuristics
0
1,905
An oracle to separate NP from coNP
<p>How can one prove that $\mathsf{NP}^A \neq \mathsf{coNP}^A$ for some oracle $A$? I am just looking for such an oracle TM $M$ and a recursive language $L = L(M)$ for which this holds. </p>&#xA;&#xA;<p>I know the proof where you show that there is an oracle $A$ such that $\mathsf{P}^A \neq \mathsf{NP}^A$ and an oracle $A$ such that $\mathsf{P}^A = \mathsf{NP}^A$. I have a hint that I should find such an oracle $A$ by extending the proof of $\mathsf{P}^A \neq \mathsf{NP}^A$, but wherever I search and read, it is "obvious" or "straightforward" everywhere, and I just do not see how to prove it at all.</p>&#xA;
complexity theory relativization
1
1,907
Grammar in formal languages versus programming language theory
<p>Grammars seem to be used for different purposes. In formal languages, they are used to describe sequences of symbols. In programming language theory, they are used to describe objects in a term algebra (possibly enriched with some implicit, extra structure such as variable scoping rules). My question is, are these two kinds of grammars the <em>same notation reused for unrelated purposes</em>, or are they <em>describing the same things</em>? If they are unrelated, then do we have nomenclature to distinguish them?</p>&#xA;&#xA;<p>For instance, the grammar</p>&#xA;&#xA;<pre><code>e ::= 1 | e e&#xA;</code></pre>&#xA;&#xA;<p>could be describing a set of strings that includes "1", "1 1", and "1 1 1", or it could be describing a set of terms that includes "1", "1 1", "(1 1) 1", and "1 (1 1)".</p>&#xA;
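The two readings can be made concrete with a toy term type (the names `One` and `App` are my own): under the term-algebra reading, the two association orders are distinct objects, yet both flatten to the same string under the formal-language reading.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class One:
    """The terminal production  e ::= 1"""

@dataclass(frozen=True)
class App:
    """The binary production  e ::= e e"""
    left: "Term"
    right: "Term"

Term = Union[One, App]

def to_string(t: Term) -> str:
    # The string a term denotes: its frontier of terminals.
    return "1" if isinstance(t, One) else to_string(t.left) + " " + to_string(t.right)

left_assoc = App(App(One(), One()), One())   # the term  (1 1) 1
right_assoc = App(One(), App(One(), One()))  # the term  1 (1 1)

assert left_assoc != right_assoc                                   # different terms...
assert to_string(left_assoc) == to_string(right_assoc) == "1 1 1"  # ...same string
```

This is the usual way the two uses relate: the programming-language reading takes the grammar's derivation trees (abstract syntax) as the generated objects, while the formal-language reading keeps only their frontiers (strings), forgetting the tree structure.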
formal languages programming languages
0
1,913
Recurrences and Generating Functions in Algorithms
<p>Combinatorics plays an important role in computer science. We frequently utilize combinatorial methods in both the analysis and the design of algorithms. For example, one method for finding a $k$-vertex cover in a graph might just inspect all $\binom{n}{k}$ possible subsets. While the binomial function grows exponentially, if $k$ is some fixed constant, we end up with a polynomial-time algorithm by asymptotic analysis.</p>&#xA;&#xA;<p>Often, real-life problems require more complex combinatorial mechanisms, which we may define in terms of recurrences. One famous example is the <a href="http://en.wikipedia.org/wiki/Fibonacci_number">Fibonacci sequence</a>, (naively) defined as:</p>&#xA;&#xA;<p>$f(n) = \begin{cases}&#xA;  1 &amp; \text{if } n = 1 \\&#xA;  0 &amp; \text{if } n = 0 \\&#xA;  f(n-1) + f(n-2) &amp; \text{otherwise}&#xA; \end{cases}&#xA;$</p>&#xA;&#xA;<p>Computing the value of the $n$th term takes exponential time using this recurrence directly, but thanks to dynamic programming, we may compute it in linear time. Now, not all recurrences lend themselves to DP (offhand, the factorial function), but it is a potentially exploitable property when defining some count as a recurrence rather than a generating function.</p>&#xA;&#xA;<p>Generating functions are an elegant way to formalize some count for a given structure. Perhaps the most famous is the binomial generating function defined as:</p>&#xA;&#xA;<p>$(x + y)^\alpha = \sum_{k=0}^\infty \binom{\alpha}{k}x^{\alpha - k}y^k$</p>&#xA;&#xA;<p>Luckily this has a closed-form solution. Not all generating functions permit such a compact description. </p>&#xA;&#xA;<blockquote>&#xA;  <p>Now my question is this: how often are generating functions used in the <em>design</em> of algorithms? 
It is easy to see how they may be exploited to understand the rate of growth required by an algorithm via analysis, but what can they tell us about a problem when creating a method to solve it?</p>&#xA;</blockquote>&#xA;&#xA;<p>If the same count can be reformulated as a recurrence, it may lend itself to dynamic programming; but then again, perhaps the same generating function has a closed form. So it is not so clear-cut.</p>&#xA;
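The linear-time dynamic-programming computation mentioned for the Fibonacci recurrence can be sketched as:

```python
def fib(n):
    """Bottom-up DP: a linear number of additions, versus the
    exponential blow-up of the naive recursion f(n-1) + f(n-2)."""
    if n == 0:
        return 0
    prev, cur = 0, 1        # f(0), f(1)
    for _ in range(n - 1):
        prev, cur = cur, prev + cur
    return cur
```

Keeping only the last two values is the standard space optimisation of the full DP table; the same count also has a closed form (Binet's formula), illustrating the recurrence-versus-closed-form trade-off the question raises.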
algorithms algorithm analysis combinatorics
1
1,914
Find median of unsorted array in $O(n)$ time
<p>To find the median of an unsorted array, we can build a min-heap on the $n$ elements in $O(n)$ time and then extract the minimum $n/2$ times, at $O(\log n)$ per extraction, to reach the median. But this approach takes $O(n \log n)$ time overall.</p>&#xA;&#xA;<p>Can we do the same by some method in $O(n)$ time? If we can, then how?</p>&#xA;
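For context, the standard way to beat the heap bound is selection rather than sorting: quickselect finds the $k$-th smallest element in expected $O(n)$ time, and replacing the random pivot with the median-of-medians rule makes it worst-case $O(n)$. A sketch (not from the question):

```python
import random

def quickselect(a, k):
    """k-th smallest element (0-based) in expected O(n) time.
    For worst-case O(n), use the median-of-medians pivot rule instead
    of a random pivot."""
    a = list(a)
    while True:
        if len(a) == 1:
            return a[0]
        pivot = random.choice(a)
        lo = [x for x in a if x < pivot]   # strictly smaller than the pivot
        eq = [x for x in a if x == pivot]  # equal to the pivot
        hi = [x for x in a if x > pivot]   # strictly larger than the pivot
        if k < len(lo):
            a = lo                         # answer is among the smaller elements
        elif k < len(lo) + len(eq):
            return pivot                   # the pivot itself is the answer
        else:
            k -= len(lo) + len(eq)         # recurse into the larger elements
            a = hi

def median(a):
    # lower median for even-length input
    return quickselect(a, (len(a) - 1) // 2)
```

Each iteration discards a constant fraction of the elements in expectation, giving the geometric series $n + n/2 + n/4 + \dots = O(n)$.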
algorithms time complexity
1
1,915
What is postorder traversal on this simple tree?
<p>Given the following tree: </p>&#xA;&#xA;<p><img src="https://i.stack.imgur.com/GbJzO.png" alt="tree"></p>&#xA;&#xA;<p>Which traversal method would give as result the following output: CDBEA?</p>&#xA;&#xA;<p>The answer in my study guide is <em>Postorder</em>, but I think postorder would output: DEBCA. Am I wrong?</p>&#xA;
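The pictured tree is not reproduced in the text, so the shapes below are inferred, not given: one consistent with the study guide's CDBEA and one consistent with the asker's DEBCA. A generic postorder routine makes the disagreement concrete - both outputs are correct postorder traversals, just of different trees:

```python
# Nodes are (label, list of children) pairs.
guide_tree = ("A", [("B", [("C", []), ("D", [])]), ("E", [])])  # B has children C, D; A has children B, E
asker_tree = ("A", [("B", [("D", []), ("E", [])]), ("C", [])])  # B has children D, E; A has children B, C

def postorder(node):
    """Visit all children left to right, then the node itself."""
    label, children = node
    out = []
    for child in children:
        out.extend(postorder(child))
    out.append(label)
    return out

guide_result = "".join(postorder(guide_tree))  # CDBEA under the first shape
asker_result = "".join(postorder(asker_tree))  # DEBCA under the second shape
```

So whether the study guide or the asker is right comes down to which children B actually has in the picture.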
algorithms trees
1
1,917
Vapnik-Chervonenkis Dimension: why cannot four points on a line be shattered by rectangles?
<p>So I'm reading <em>"Introduction to Machine Learning"</em>, 2nd edition, by Bishop et al. On page 27 they discuss the Vapnik-Chervonenkis Dimension, which is</p>&#xA;&#xA;<blockquote>&#xA;  <p><em>"The maximum number of points that can be shattered by H [the hypothesis class] is called the Vapnik-Chervonenkis (VC) Dimension of H, is denoted VC(H) and measures the capacity of H."</em></p>&#xA;</blockquote>&#xA;&#xA;<p>Here "shatters" means that for every labeling of a set of N data points as positive or negative, there is a hypothesis $h \in H$ that separates the positive examples from the negative ones. In that case it is said that "H shatters N points".</p>&#xA;&#xA;<p>So far I think I understand this. However, the authors lose me with the following:</p>&#xA;&#xA;<blockquote>&#xA;  <p><em>"For example, four points on a line cannot be shattered by rectangles."</em></p>&#xA;</blockquote>&#xA;&#xA;<p>There must be some concept here I'm not fully understanding, because I cannot see why this is the case. Can anyone explain this to me? </p>&#xA;
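The claim can be checked by brute force. On a line, any axis-aligned rectangle picks out a contiguous run of the points (rectangles are convex), so an alternating labeling like + − + − is unrealizable. A small check over all 16 labelings (the point coordinates are my own choice):

```python
import itertools

points = [(1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)]  # four collinear points

def realizable(labels):
    """Can some axis-aligned rectangle contain exactly the positive points?
    If any rectangle works, the bounding box of the positives works; and if
    that box already swallows a negative point, no rectangle can avoid it."""
    pos = [p for p, l in zip(points, labels) if l]
    if not pos:
        return True  # a tiny rectangle away from all points realises all-negative
    xs = [p[0] for p in pos]
    ys = [p[1] for p in pos]
    box = (min(xs), max(xs), min(ys), max(ys))
    neg_inside = any(box[0] <= x <= box[1] and box[2] <= y <= box[3]
                     for (x, y), l in zip(points, labels) if not l)
    return not neg_inside

bad = [l for l in itertools.product([0, 1], repeat=4) if not realizable(l)]
```

`bad` is non-empty (it contains, e.g., `(1, 0, 1, 0)`), so the four collinear points are not shattered. Note this says nothing against VC(rectangles) = 4: shattering *some* set of 4 points (in general position) is enough for the lower bound; this particular set just isn't one of them.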
machine learning vc dimension
1
1,918
Preprocess an array for counting an element in a slice (reduction to RMQ?)
<p>Given an array $a_1,\ldots,a_n$ of natural numbers $\leq k$, where $k$ is a constant, I want to answer in $O(1)$ queries of the form: "how many times does $m$ appear in the array between indices $i$ and $j$"?</p>&#xA;&#xA;<p>The array should be preprocessed in linear time. In particular I'd like to know if there's a reduction to Range Minimum Query.</p>&#xA;&#xA;<hr>&#xA;&#xA;<p>This is equivalent to RMQ in the case where $k=1$ and you want to query the number of ones within an interval. So we can use <a href="http://en.wikipedia.org/wiki/Range_Queries#Statement_Of_The_Problem">it</a>.<br>&#xA;<sup>I couldn't answer my own question because of limits of SE.</sup></p>&#xA;
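Since $k$ is a constant, the RMQ reduction can also be sidestepped entirely with per-value prefix counts: $(k+1)$ prefix-sum arrays cost $O((k+1)\,n) = O(n)$ preprocessing and answer each query in $O(1)$. A sketch (the 0-based, inclusive interface is my own choice):

```python
def preprocess(a, k):
    """prefix[v][i] = number of occurrences of value v in a[0:i].
    O((k+1) * n) time and space, i.e. linear for constant k."""
    n = len(a)
    prefix = [[0] * (n + 1) for _ in range(k + 1)]
    for v in range(k + 1):
        for i, x in enumerate(a):
            prefix[v][i + 1] = prefix[v][i] + (x == v)
    return prefix

def count(prefix, m, i, j):
    """How many times m appears in a[i..j] (0-based, inclusive): O(1)."""
    return prefix[m][j + 1] - prefix[m][i]
```

Each query is a single subtraction of two precomputed counts, exactly as in the $k = 1$ count-the-ones case mentioned above.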
algorithms arrays algorithm design
1