source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---
128,604 | While reading Compilers by Alfred Aho , I came across this statement: The problem of generating the
optimal target code from a source program is undecidable in general. The Wikipedia entry on optimizing compilers reiterates the same without a proof. Here's my question: Is there a proof (formal or informal) of why this statement is true? If so, please provide it. | An optimized program must have the same behavior as the original program. Consider the following program: int main() {
f();
g();
} , where it's guaranteed that $f$ is a pure function. The only question is: does it finish its execution? If it does, then we can replace main() 's body with g() . Otherwise, we should replace it with an infinite loop. Unfortunately, verifying whether f() finishes its execution is undecidable . Another example is the program with body print(f(42)) , where f is pure. The optimal program would just replace f(42) with its value. However, there is no algorithm that does this. We may try to compute it at compile time, but the computation may never finish. Another example (now without infinite loops). Assume that your program defines a context-free grammar and $f(x)$ checks whether string $x$ belongs to the language defined by this grammar (for any CFG we can build such $f$ automatically). Then, if $f$ is constantly "true", if (f(x)) {
g()
} can be optimized to g() . Unfortunately, checking whether a grammar accepts all strings is the universality problem, which is known to be undecidable . | {
"source": [
"https://cs.stackexchange.com/questions/128604",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/-1/"
]
} |
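As an aside (not part of the answer above; perfect_optimize is a hypothetical oracle which, per the answer, cannot exist), a minimal Python sketch of why a perfect optimizer would decide halting:

def halts(f_source):
    # Build the program from the answer: run the pure function f, then g.
    program = f_source + "\nint main() { f(); g(); }"
    optimized = perfect_optimize(program)   # hypothetical: returns the optimal target code
    # If f() halts, the optimal body is just g(); if not, it is a bare infinite loop.
    return "g()" in optimized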
128,614 | For example, a valid number would be 6165156 and an invalid number would be 1566515. I have tried many times to construct a finite state machine for this with no success, which leads me to believe the language is not regular. However, I am unsure how to formally prove this if that is indeed the case. I tried applying the pumping lemma but I am not completely sure how to apply it to this particular language. Any help is appreciated! | Optimized program must have the same behavior as the original program. Consider the following program: int main() {
f();
g();
} , where it's guaranteed that $f$ is pure function. The only question is: does it finish its execution? If it does, then we can replace main() 's body with g() . Otherwise, we should replace it with an infinite loop. Unfortunately, verifying whether f() finishes its execution is undecidable . Another example is the program with body print(f(42)) , where f is pure. The optimal program would just replace f(42) with its value. However, there is no algorithm that does this. We may try to compute it in compile-time, but it may never finish. Another example (now without infinite loops). Assume that your program defines a context-free grammar and $f(x)$ checks whether string $x$ belongs to the language defined by this grammar (for any CFG we can build such $f$ automatically). Then if $f$ is a constant "true", then if (f(x)) {
g()
} can be optimized to g() . Unfortunately, checking that grammar accepts all strings is called a universality problem and is known to be undecidable . | {
"source": [
"https://cs.stackexchange.com/questions/128614",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/123745/"
]
} |
128,841 | GJ Woeginger lists 116 invalid proofs of P vs. NP problem . Scott Aaronson published " Eight Signs A Claimed P≠NP Proof Is Wrong " to reduce hype each time someone attempts to settle P vs. NP. Some researchers even refuse to proof-read papers settling the "P versus NP" question . I have 3 related questions: Why are people not using proof assistants that could verify whether a proof of P vs. NP is correct? How hard or how much effort would it be to state P vs. NP in a proof assistant in the first place? Is there currently any software that would be at least in principle capable of verifying a P vs. NP proof? | I'm going to disagree with DW. I think that it is possible (although difficult) for a P vs. NP result to be stated in a proof assistant, and moreover, I wouldn't trust any supposed proofs unless they were formalized in this way, unless they came from very reputable sources. In particular, none of the resources DW states are based on type theory, which is a very promising direction for proof assistants. Coq has been used to formalize the proof of the 4-color theorem among others, so it's clearly capable of some heavy mathematical lifting. To answer your specific questions: The main reason is that theorem provers aren't widely accepted in the mathematical community. Learning them takes effort, and mathematicians are often skeptical of the underlying techniques (type theory, constructive math, etc.)
But there are some fields where leading researchers are very comfortable with making large developments formalized in a proof assistant, like category theory, programming language theory, formal logic, etc. So I think there is as much of a cultural issue as an inherent feasibility issue. The other reason is that, so far, most of the purported "proofs" have been by cranks, who don't want to formalize their result because it will inevitably reveal the flaws. It is not hard at all to state P vs. NP in a proof assistant. One could use Turing Machines, but it would probably be easier to model a simple Turing-complete programming language using inductive families to model small-step semantics, and define the run-time as the number of steps a program takes. You could define $P$ as the languages accepted by programs halting in a polynomial number of steps, and $NP$ as languages that can be verified in polytime with a polynomial-length certificate. EDIT: It turns out there are existing techniques for showing that algorithms run in polynomial time in a theorem prover. So this could be used either to show a polytime algorithm for an NP-hard problem, or to derive a contradiction from the existence of such an algorithm. There is tons of software that is capable of verifying such a proof, provided the proof was written using that software . The two candidates I'd put the most stock in are Coq and Lean . Coq in particular has been used to verify several major results in mathematics. | {
"source": [
"https://cs.stackexchange.com/questions/128841",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/86549/"
]
} |
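As a side illustration of the phrase "verified in polytime with a polynomial-length certificate" (a sketch of my own in Python, independent of any proof assistant; the CNF encoding is invented for the example):

def verifies(cnf, assignment):
    # Polynomial-time check that a certificate (an assignment) satisfies a CNF formula.
    # cnf is a list of clauses; each clause is a list of nonzero ints, negative = negated.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in cnf
    )

cnf = [[1, -2], [2, 3], [-1, -3]]   # (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(verifies(cnf, {1: True, 2: True, 3: False}))   # True: this assignment is a valid certificate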
128,980 | I am an electrical engineer and trying to make a transition into machine learning. I read in multiple articles that I have to learn data structures and algorithms, before this I have to learn about mathematical proofs. I started studying it on my own using the material available on MIT's OCW, while I did grasp the concepts of induction and well ordering etc.. I've been struggling with the exercises for a very long time and it's really frustrating. I can easily deal with any type of proofs that I saw before (e.g. once I saw the proof of a recurrence question I became pretty good at proving them). My problems start when I face an unusual question. I feel like I am memorizing the proofs rather than learn how to prove. Is there any way (or any resources) that can improve my proving skills in a way that whenever I see an unusual question (like the checkers tiles and chess tiles type of questions) I don't have to stare at them for 2 hours before giving up? | I feel like i am memorizing the proofs rather than learn how to prove You can't learn "how to prove". "Proving" is not a mechanical process, but rather a creative one where you have to invent a new technique to solve a given problem. A professional mathematician could spend their entire life attempting to prove a given statement and never succeed. I can easily deal with any type of proofs that i saw before ( eg. once i saw the proof of a recurrence question i became pretty good at prooving them). My problems start when i face an unusual question. That is normal. Any mathematics "proofs" course isn't designed to teach you how to take an arbitrary problem you've never seen before and be able to solve it (since nobody, not even the best mathematics professors can do that). Rather, your learning goals are Learn how to "read" proofs and judge their correctness Learn how to "write" down a proof in the right mathematical language Learn about known proof "techniques" and how to apply them If you are working on a new, unknown problem, it is normal that you might not be able to solve it. However, knowing and having memorized other proof techniques may help you. Often proofs involve combining a new idea with existing known proof techniques. The more, and the more varied the proofs you already know are, the better your chance of being able to solve the given problem. You are on the right track. You should simply keep studying proof techniques. The exercises you are doing are good. Don't worry if you get stuck. As you get more experienced and your "toolbox" of techniques grows, you will be able to solve exercises that are less "alike" the previous ones you have seen. | {
"source": [
"https://cs.stackexchange.com/questions/128980",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/115699/"
]
} |
In programming language theory, people study the theory behind programming languages. But I have never heard any formal definition of programming languages themselves. What is the formal definition, not of a particular programming language like Python or C++, but of programming languages themselves? | To taper expectations a little bit, I will first note that the term "programming language" is deliberately broad: it is intended to be open to some interpretation. It means, no more and no less, any convention that is used for describing instructions for computers to execute. This includes, for example, not just C++ and Python, but also things like Nondeterministic programming , where we actually don't tell the computer exactly what to do, but give it several alternatives and allow it to choose any one of them; declarative logic languages like Datalog where we give the computer a set of logical axioms and ask it to deduce all the true statements from those axioms; and even very low-level descriptions like Turing machines and electrical circuits , where we give the program explicitly as electrical or mechanical components. All of these are ways of describing instructions to computers, so all are valid programming languages at very different levels of abstraction. However, programming languages researchers do generally agree on some common formal components of programming languages that should always be present, and these serve as a general definition. Namely:
every programming language is defined by a syntax and a semantics. Syntax. This is a formal grammar which gives the set of programs that can be written. Importantly, the formal grammar consists of finitely many syntax elements, which are described in terms of other syntax elements. For example a simple grammar is: Variable := x | y | z
Term := 0 | 1 | Term + Term | Variable
Program := set Variable = Term | return Term | Program; Program In this simple language, we have three syntax elements: Variables, Terms, and Programs. In a formal grammar, each syntax element has finitely many cases for how it can be constructed via other syntax elements. For example, a program is either an assignment (setting a variable to equal a term, e.g. set x = x + 1 ), a return statement, or a sequence of two programs which should be executed one after the other. Semantics. Syntax is just describing the set of valid programs; but it doesn't say anything about what those programs mean . Semantics is a way of assigning meaning to programs. Unlike syntax, which is almost always given as a formal grammar as above, semantics can be given in at least two different ways: these include "denotational semantics", where we assign a mathematical object such as a function to each program, or "operational semantics", where we describe the execution of a program in a more true-to-life way as a sequence of steps. To illustrate this, starting with denotational semantics: we would say that the term 3 + 5 + 8 is assigned the meaning of 16 . More interestingly, the program set x = x + 3 + 5 is assigned the meaning of the mathematical function mapping every integer to that integer plus 8. Operational semantics, on the other hand, is very different. We would say that the term 3 + 5 + 8 evaluates to 8 + 8 which in turn evaluates to 16 . We would also say that the program set x = x + 3 + 5 in a context where x = 5 evaluates to a context where x = 13. So, instead of giving a meaning to each term or program itself, we give a meaning between terms called "evaluates to": we give a formal definition of what it means for A to evaluate to B in the context C . In any case, the semantics of a language, whether denotational or operational (or something else) gives meaning to the symbols and allows us to make sense of what programs compute, not just what they look like. Putting these together, we get the following definition. Definition: A programming language consists of (1) a syntax, given as a formal grammar; and (2) a semantics, given either as denotational semantics which gives a meaning to each syntax element, or an operational semantics which says when two programs or program contexts relate. | {
"source": [
"https://cs.stackexchange.com/questions/129705",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/21753/"
]
} |
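To make the syntax/semantics split concrete, here is a small Python sketch of my own mirroring the answer's toy grammar, with an operational-style semantics given as an interpreter (the class names Var, Lit, Add, Assign, Ret and Seq are my own, not part of the answer):

from dataclasses import dataclass

@dataclass
class Var: name: str                       # Variable := x | y | z
@dataclass
class Lit: value: int                      # Term := 0 | 1 (any integer literal here)
@dataclass
class Add: left: object; right: object     # Term := Term + Term
@dataclass
class Assign: var: str; term: object       # Program := set Variable = Term
@dataclass
class Ret: term: object                    # Program := return Term
@dataclass
class Seq: first: object; second: object   # Program := Program ; Program

def eval_term(t, env):
    if isinstance(t, Lit): return t.value
    if isinstance(t, Var): return env[t.name]
    if isinstance(t, Add): return eval_term(t.left, env) + eval_term(t.right, env)

def run(p, env):
    # "evaluates to": a (program, context) pair steps to a new context or a result
    if isinstance(p, Assign): env[p.var] = eval_term(p.term, env)
    elif isinstance(p, Ret): return eval_term(p.term, env)
    elif isinstance(p, Seq):
        run(p.first, env)
        return run(p.second, env)

# set x = x + 1; return x, started in a context where x = 5
prog = Seq(Assign("x", Add(Var("x"), Lit(1))), Ret(Var("x")))
print(run(prog, {"x": 5}))   # 6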
129,708 | i, sq ← 1, 1
while sq < n
for j ← 1 to sq
k ← 1
while k ≤ j
k ← 2 ∗ k
i ← i + 1
sq ← i ∗ i I have expressed the running time of the "for" loop as a sum in this way : $$\sum_{j=1}^{i^2} \log(j)$$ In a similar way, how can I express the running time of the outer "while" loop with sigma in terms of $i$ ?
I have tried the following: $$\sum_{i=1}^\sqrt{n} i^2\log(i)$$ | To taper expectations a little bit, I will first note that the term "programming language" is deliberately broad: it is intended to be open to some interpretation. It means, no more and no less, any convention that is used for describing instructions for computers to execute. This includes, for example, not just C++ and Python, but also things like Nondeterministic programming , where we actually don't tell the computer exactly what to do, but give it several alternatives and allow it to choose any one of them; declarative logic languages like Datalog where we give the computer a set of logical axioms and ask it to deduce all the true statements from those axioms; and even very low-level descriptions like Turing machines and electrical circuits , where we give the program explicitly as electrical or mechanical components. All of these are ways of describing instructions to computers, so all are valid programming langauges at very different levels of abstraction. However, programming languages researchers do generally agree on some common formal components of programming languages that should always be present, and these serve as a general definition. Namely:
every programming language is defined by a syntax and a semantics. Syntax. This is a formal grammar which gives the set of programs that can be written. Importantly, the formal grammar consists of finitely many syntax elements, which are described in terms of other syntax elements. For example a simple grammar is: Variable := x | y | z
Term := 0 | 1 | Term + Term | Variable
Program := set Variable = Term | return Term | Program; Program In this simple language, we have three syntax elements: Variables, Terms, and Programs. In a formal grammar, each syntax element has finitely many cases for how it can be constructed via other syntax elements. For example, a program is either an assignment (setting a variable to equal a term, e.g. set x = x + 1 ), a return statement, or a sequence of two programs which should be executed one after the other. Semantics. Syntax is just describing the set of valid programs; but it doesn't say anything about what those programs mean . Semantics is a way of assigning meaning to programs. Unlike syntax, which is almost always given as a formal grammar as above, semantics can be given in at least two different ways: these include "denotational semantics", where we assign a mathematical object such as a function to each program, or "operational semantics", where we describe the execution of a program in a more true-to-life way as a sequence of steps. To illustrate this, starting with denotational semantics: we would say that the term 3 + 5 + 8 is assigned the meaning of 16 . More interestingly, the program set x = x + 3 + 5 is assigned the meaning of the mathematical function mapping every integer to that integer plus 8. Operational semantics, on the other hand, is very different. We would say that the term 3 + 5 + 8 evaluates to 8 + 8 which in turn evaluates to 16 . We would also say that the program set x = x + 3 + 5 in a context where x = 5 evaluates to a context where x = 13. So, instead of giving a meaning to each term or program itself, we give a meaning between terms called "evaluates to": we give a formal definition of what it means for A to evaluate to B in the context C . In any case, the semantics of a language, whether denotational or operational (or something else) gives meaning to the symbols and allows us to make sense of what programs compute, not just what they look like. Putting these together, we get the following definition. Definition: A programming language consists of (1) a syntax, given as a formal grammar; and (2) a semantics, given either as denotational semantics which gives a meaning to each syntax element, or an operational semantics which says when two programs or program contexts relate. | {
"source": [
"https://cs.stackexchange.com/questions/129708",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/125947/"
]
} |
129,727 | An odious number is defined as an integer that has odd binary Hamming weight. I need an implementation of algorithm that finds the nth odious number, preferably recursive. Any ideas? A python script is also wanted, but I can write it myself once the algorithm is found. More description can be found on formula section in OEIS: A000069 . | To taper expectations a little bit, I will first note that the term "programming language" is deliberately broad: it is intended to be open to some interpretation. It means, no more and no less, any convention that is used for describing instructions for computers to execute. This includes, for example, not just C++ and Python, but also things like Nondeterministic programming , where we actually don't tell the computer exactly what to do, but give it several alternatives and allow it to choose any one of them; declarative logic languages like Datalog where we give the computer a set of logical axioms and ask it to deduce all the true statements from those axioms; and even very low-level descriptions like Turing machines and electrical circuits , where we give the program explicitly as electrical or mechanical components. All of these are ways of describing instructions to computers, so all are valid programming langauges at very different levels of abstraction. However, programming languages researchers do generally agree on some common formal components of programming languages that should always be present, and these serve as a general definition. Namely:
every programming language is defined by a syntax and a semantics. Syntax. This is a formal grammar which gives the set of programs that can be written. Importantly, the formal grammar consists of finitely many syntax elements, which are described in terms of other syntax elements. For example a simple grammar is: Variable := x | y | z
Term := 0 | 1 | Term + Term | Variable
Program := set Variable = Term | return Term | Program; Program In this simple language, we have three syntax elements: Variables, Terms, and Programs. In a formal grammar, each syntax element has finitely many cases for how it can be constructed via other syntax elements. For example, a program is either an assignment (setting a variable to equal a term, e.g. set x = x + 1 ), a return statement, or a sequence of two programs which should be executed one after the other. Semantics. Syntax is just describing the set of valid programs; but it doesn't say anything about what those programs mean . Semantics is a way of assigning meaning to programs. Unlike syntax, which is almost always given as a formal grammar as above, semantics can be given in at least two different ways: these include "denotational semantics", where we assign a mathematical object such as a function to each program, or "operational semantics", where we describe the execution of a program in a more true-to-life way as a sequence of steps. To illustrate this, starting with denotational semantics: we would say that the term 3 + 5 + 8 is assigned the meaning of 16 . More interestingly, the program set x = x + 3 + 5 is assigned the meaning of the mathematical function mapping every integer to that integer plus 8. Operational semantics, on the other hand, is very different. We would say that the term 3 + 5 + 8 evaluates to 8 + 8 which in turn evaluates to 16 . We would also say that the program set x = x + 3 + 5 in a context where x = 5 evaluates to a context where x = 13. So, instead of giving a meaning to each term or program itself, we give a meaning between terms called "evaluates to": we give a formal definition of what it means for A to evaluate to B in the context C . In any case, the semantics of a language, whether denotational or operational (or something else) gives meaning to the symbols and allows us to make sense of what programs compute, not just what they look like. Putting these together, we get the following definition. Definition: A programming language consists of (1) a syntax, given as a formal grammar; and (2) a semantics, given either as denotational semantics which gives a meaning to each syntax element, or an operational semantics which says when two programs or program contexts relate. | {
"source": [
"https://cs.stackexchange.com/questions/129727",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/125977/"
]
} |
130,500 | Many computer languages have complex regular expressions tools. For example, in Javascript you have global flags, escape characters, whitespace character, assertions, character classes, groups and ranges etc. I'm wondering if using just the 3 basic regular expressions operators as defined in formal languages , that is concatenation, alternation and Kleene star can achieve the same result as any pattern described with more tools as for example in Javascript. Is there a theorem about this? | Regular expressions using only concatenation, alternation and Kleene star describe regular languages. In contrast, extended regular expressions available in modern programming languages can describe non-regular languages. For example, (.*)\1 describes the language $\{ ww : w \in \Sigma^* \}$ , which is not even context-free. | {
"source": [
"https://cs.stackexchange.com/questions/130500",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/70335/"
]
} |
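For instance (an illustration of my own using Python's re module), the backreference \1 lets an extended pattern recognise exactly the strings of the form ww, which no expression built from concatenation, alternation and Kleene star can describe:

import re

doubled = re.compile(r"^(.*)\1$")     # group 1, then the same text again

print(bool(doubled.match("abab")))    # True  ("ab" repeated)
print(bool(doubled.match("abaab")))   # False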
I would like to know if there is any reason why many programming languages use the notation % for the modulo operator? It is used in the most "famous" languages: C, C++, C#, Go, Java, Julia, Lua, Perl, Python | The earliest known use of % for modulo was in B, which was the progenitor of C, which was the ancestor (or at least godparent) of most languages that do the same, hence the operator's ubiquity . Why did Thompson and Ritchie pick % ? It had to be a printable ASCII character that wouldn't conflict with B's other features. % was available, and it resembles the / division operator, making it the obvious choice. p.s. the creator of ASCII invented \ to represent " reverse division ", so it wasn't a candidate for modulo. | {
"source": [
"https://cs.stackexchange.com/questions/133386",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/83257/"
]
} |
133,799 | This question connects different disciplines so it's awkward to choose a SE site for it, but I'll go with this one because here (I hope) the shared culture will make information transfer easier. So computers as we know them use electricity and I don't know what other invisible things that I don't understand. I was wondering, is this a matter of efficiency, or of necessity? Can one achieve universal computation with just "moving parts"? Perhaps "Newtonian physics" is some term for this, although I guess it includes gravity which isn't really what I mean. You know, just good old solid pieces of matter moving around. To get some picture of what I mean, here is a "LEGO Turing machine". I'm afraid that the big gray block on top uses electricity, but could one replace it with a "mechanical" thing, powered perhaps by rotating a piece? I have no idea how such things be designed, and the state transitions for a universal TM have to be fairly complicated, so I have no intuition for whether this is possible or not. | Sure. Electricity is unrelated to the model of computation.
The only thing you can't actually build is the infinite tape, for obvious reasons. In this sense, anything that can be built is essentially equivalent to a deterministic finite automaton. Here's a Turing Machine made of wood: https://www.youtube.com/watch?v=vo8izCKHiF0&ab_channel=RichardRidel | {
"source": [
"https://cs.stackexchange.com/questions/133799",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/114089/"
]
} |
I found the following answer: $L_{17} = \{ \langle M \rangle \mid \text{$M$ is a TM, and $M$ is the only TM that accepts $L(M)$} \}$ . R. This is the empty set, since every language has an infinite number of TMs that accept it. As I know, the number of TMs is $\aleph_0$ and the number of languages is $2^{\aleph_0}$ , so how can it be possible that "every language has an infinite number of TMs that accept it"? source of the solution here
"source": [
"https://cs.stackexchange.com/questions/133875",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/82044/"
]
} |
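To make the construction concrete, here is a sketch of my own (the machine-description format is invented and highly simplified): padding a description with states that nothing transitions into changes the description but not the accepted language.

base_tm = {
    "states": {"q0", "q_accept"},
    "transitions": {("q0", "1"): ("q_accept", "1", "R")},
    "start": "q0",
    "accept": "q_accept",
}

def with_unreachable_states(tm, n):
    # T_n from the answer: the same machine plus n states no transition ever enters.
    padded = dict(tm)
    padded["states"] = set(tm["states"]) | {f"dead_{i}" for i in range(n)}
    return padded

# base_tm, with_unreachable_states(base_tm, 1), with_unreachable_states(base_tm, 2), ...
# are pairwise distinct descriptions, yet they all accept the same language.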
133,895 | Pure Prolog (Prolog limited to Horn clauses only) is Turing-complete. In fact, a single Horn clause is enough for Turing-completeness . However, pure Prolog is incapable of expressing list intersection . (Disequality, dif/2 , would allow it to do it, but dif/2 is not Horn, unlike equality). This seems like a paradox, at first glance. Is there a simple explanation? | Turing-complete means "can compute every function on natural numbers that a Turing machine can compute". It means exactly that and only that. A list is not a natural number, and list intersection is not a function on natural numbers. Note: it is, of course, possible to encode lists as natural numbers, which would then make list intersection a function on natural numbers. And I have no doubt that, given you chose a suitable encoding of lists, Pure Prolog will be perfectly capable of expressing list intersection. To put it another way: just because Pure Prolog is not capable of expressing list intersection using the particular representation of lists that was chosen for General Prolog does not mean that there does not exist a representation of lists more suitable for use with Pure Prolog such that Pure Prolog is capable of expressing intersection of those particular lists . | {
"source": [
"https://cs.stackexchange.com/questions/133895",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/6616/"
]
} |
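As a concrete version of the "encode lists as natural numbers" remark, here is a sketch of my own in Python rather than Prolog, assuming lists of natural numbers: under a Cantor-pairing encoding, list intersection becomes an ordinary function on naturals.

import math

def pair(x, y):                       # Cantor pairing: a bijection N x N -> N
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):                        # inverse of pair
    w = (math.isqrt(8 * z + 1) - 1) // 2
    y = z - w * (w + 1) // 2
    return w - y, y

def encode(xs):                       # [] -> 0, h:t -> pair(h, encode(t)) + 1
    return 0 if not xs else pair(xs[0], encode(xs[1:])) + 1

def decode(n):
    if n == 0:
        return []
    h, t = unpair(n - 1)
    return [h] + decode(t)

def intersect_encoded(a, b):          # "list intersection" as a function on naturals
    keep = set(decode(b))
    return encode([x for x in decode(a) if x in keep])

print(decode(intersect_encoded(encode([1, 2, 3, 4]), encode([2, 4, 5]))))   # [2, 4]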
134,846 | Computers are an exceptionally powerful tool for various computations, but they don't excel at storing decimal numbers. However, people have managed to overcome these issues: not storing the number in a decimal format, which is limited to very few decimal places, but as an integer instead, while keeping track of the number's precision. Still, how can a computer simplify computations just like humans do? Take a look at this basic example $$\sqrt{3} \times (\frac{4}{\sqrt{3}} - \sqrt{3}) = \sqrt{3} \times \frac{4}{\sqrt{3}} - \sqrt{3} \times \sqrt{3} = 4 - 3 = 1$$ That's how a human would solve it. Meanwhile, a computer would have a fun time calculating the square root of 3, diving 4 by it, subtracting the square root of 3 from the result and multiplying everything again by the square root of 3. It would surely defeat a human in terms of speed, but it would lack in terms of accuracy. The result will be really close to 1, but not 1 exactly. A computer has no idea that, for instance, $\sqrt{3} \times{\sqrt{3}}$ is equal to $3$ . This is only one of the uncountable examples out there. Did people already find a solution, as it seems elementary for mathematics and computations? If they didn't, is this because it didn't serve any purpose in the real world? | Sage is an open source computer algebra system . Let's see if it can handle your basic example: sage: sqrt(3) * (4/sqrt(3) - sqrt(3)) 1 What is happening under the hood? Sage is storing everything as a symbolic expression, which it is able to manipulate and simplify using some basic rules. Here is another example: sage: 1 + exp(pi*i) 0 So sage can also handle complex numbers. Computers never handle real numbers, since real numbers cannot be represented exactly on a computer. Instead, they either handle approximate representations of real numbers (usually floating point numbers but sometimes fixed point numbers ), or they represent real numbers symbolically, as in the example above. Sage can convert between the two representations (in one direction!), and it can handle floating point numbers of arbitrary accuracy . For example, sage: RealField(100)(pi^2/6 - sum(1/n^2 for n in range(1,10001))) 0.000099995000166666666333333336072 This computes $\pi^2/6 - \sum_{n=1}^{10^4} 1/n^2$ to 100 bits of accuracy (in the mantissa ). Another approach worth mentioning is interval arithmetic , which is a way of computing expressions with a guaranteed level of accuracy, using provable error brackets. Interval arithmetic is used in computational geometry , together with exact representation of rational numbers. In theoretical computer science there are several other notions of real computation, but they are mostly of theoretical interest. See the answers to this question . | {
"source": [
"https://cs.stackexchange.com/questions/134846",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/131274/"
]
} |
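For readers without Sage, the same exact symbolic behaviour can be reproduced with SymPy, assuming it is installed (this snippet is my own illustration, not part of the answer):

from sympy import sqrt, simplify, exp, I, pi

expr = sqrt(3) * (4 / sqrt(3) - sqrt(3))
print(simplify(expr))               # 1 -- the whole computation stays symbolic and exact

print(simplify(1 + exp(I * pi)))    # 0, mirroring the answer's complex-number example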
134,864 | According to this picture ROM is classified under Main Memory , but isn’t ROM a secondary storage because it’s external and non volatile ?
Any clarification with reference links will be highly appreciated.
Thank you. | Sage is an open source computer algebra system . Let's see if it can handle your basic example: sage: sqrt(3) * (4/sqrt(3) - sqrt(3)) 1 What is happening under the hood? Sage is storing everything as a symbolic expression, which it is able to manipulate and simplify using some basic rules. Here is another example: sage: 1 + exp(pi*i) 0 So sage can also handle complex numbers. Computers never handle real numbers, since real numbers cannot be represented exactly on a computer. Instead, they either handle approximate representations of real numbers (usually floating point numbers but sometimes fixed point numbers ), or they represent real numbers symbolically, as in the example above. Sage can convert between the two representations (in one direction!), and it can handle floating point numbers of arbitrary accuracy . For example, sage: RealField(100)(pi^2/6 - sum(1/n^2 for n in range(1,10001))) 0.000099995000166666666333333336072 This computes $\pi^2/6 - \sum_{n=1}^{10^4} 1/n^2$ to 100 bits of accuracy (in the mantissa ). Another approach worth mentioning is interval arithmetic , which is a way of computing expressions with a guaranteed level of accuracy, using provable error brackets. Interval arithmetic is used in computational geometry , together with exact representation of rational numbers. In theoretical computer science there are several other notions of real computation, but they are mostly of theoretical interest. See the answers to this question . | {
"source": [
"https://cs.stackexchange.com/questions/134864",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/131302/"
]
} |
134,868 | I'm trying to find a grammar for $L = \{w \text{ | }w \in \{a,b\}^*, |w|_a=|w|_b-1\}$ , which is proving to be tricky. I know that $L_2 = \{w \text{ | }w \in \{a,b\}^*, |w|_a=|w|_b\}$ has the following one, so I have been trying to modify it so that I "force" to have one more $b$ , but I don't see how to do this. The obvious choice would be to replace $\epsilon$ with $b$ , but that would potentially get two more $b$ 's. Is there a trick for this one? $$\begin{align}
S &\to \epsilon \\
S &\to aSbS \\
S &\to bSaS \enspace.
\end{align}$$ | Sage is an open source computer algebra system . Let's see if it can handle your basic example: sage: sqrt(3) * (4/sqrt(3) - sqrt(3)) 1 What is happening under the hood? Sage is storing everything as a symbolic expression, which it is able to manipulate and simplify using some basic rules. Here is another example: sage: 1 + exp(pi*i) 0 So sage can also handle complex numbers. Computers never handle real numbers, since real numbers cannot be represented exactly on a computer. Instead, they either handle approximate representations of real numbers (usually floating point numbers but sometimes fixed point numbers ), or they represent real numbers symbolically, as in the example above. Sage can convert between the two representations (in one direction!), and it can handle floating point numbers of arbitrary accuracy . For example, sage: RealField(100)(pi^2/6 - sum(1/n^2 for n in range(1,10001))) 0.000099995000166666666333333336072 This computes $\pi^2/6 - \sum_{n=1}^{10^4} 1/n^2$ to 100 bits of accuracy (in the mantissa ). Another approach worth mentioning is interval arithmetic , which is a way of computing expressions with a guaranteed level of accuracy, using provable error brackets. Interval arithmetic is used in computational geometry , together with exact representation of rational numbers. In theoretical computer science there are several other notions of real computation, but they are mostly of theoretical interest. See the answers to this question . | {
"source": [
"https://cs.stackexchange.com/questions/134868",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/129525/"
]
} |
I'm pretty confused so I hope I don't mix up the different terms here.
1. The two's complement representation of decimal 0 is simply 000
2. The two's complement of 000 is 111
3. I imagine that complementing a number is equivalent to flipping bits in binary
4. The nine's complement of 000 is 999
This is what confuses me. Are two's complement and nine's complement similar (except for the base change obviously)? If they are, then I'd expect the nine's complement of 000 to be 888 because 8 is the biggest digit in radix 9 and therefore the complement operation would assign the highest digit (8) to the lowest value input (0) [I imagine a folding from the center]. Obviously this is totally wrong but I'm not sure which part I've misunderstood. | You are very confused due to what is simply poor terminology, to be honest. Both your statements 2 and 3 are false due to the same misunderstanding. For each base $b$ there are two mainstream variants of the 'complement', the radix complement and the diminished radix complement . The two most common bases in computer science are base $2$ and base $10$ . Confusingly, the definitions usually used are:
one's complement: the diminished radix complement of base $2$
two's complement: the radix complement of base $2$
nine's complement: the diminished radix complement of base $10$ (not $9$!)
ten's complement: the radix complement of base $10$ . | {
"source": [
"https://cs.stackexchange.com/questions/135048",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/128617/"
]
} |
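The distinction is easy to check directly from the definitions; the following illustration is my own (the helper names are invented):

def radix_complement(n, base, num_digits):              # e.g. two's / ten's complement
    return (base ** num_digits - n) % base ** num_digits

def diminished_radix_complement(n, base, num_digits):   # e.g. one's / nine's complement
    return base ** num_digits - 1 - n

print(diminished_radix_complement(0, 2, 3))    # 7  -> binary 111 (one's complement of 000)
print(radix_complement(0, 2, 3))               # 0  -> binary 000 (two's complement of 000)
print(diminished_radix_complement(0, 10, 3))   # 999 (nine's complement of 000)
print(radix_complement(0, 10, 3))              # 0   (ten's complement of 000)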
135,113 | In my Computability and Complexity class, we are focusing on P, NP,
NP-complete, and NP-hard problems and the one thing that keeps coming up
is the SAT problem, in the context of reduction from one complexity to
another or to explain certain concepts. Why is SAT so ubiquitous in theoretical computer science? | SAT was the first problem shown to be NP-complete, in Stephen Cook's seminal paper. Even nowadays, when introducing the theory of NP-completeness, the starting point is usually the NP-completeness of SAT. SAT is also amenable to surprisingly successful heuristic algorithms, implemented by software known as SAT solvers . As a result, there is a lot of practical interest into formulating problems efficiently as instances of SAT. SAT also shows up in fine-grained complexity, one of whose main assumptions is the strong exponential time hypothesis , which is a conjecture on the computational complexity of SAT. | {
"source": [
"https://cs.stackexchange.com/questions/135113",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/131046/"
]
} |
If RAM is a short-term memory and SSD is a long-term memory, why doesn't the microarchitecture of computers nowadays use SSD or another long-term memory for saving temporary data like hidden variables in programming? If it's about speed, then SSD can improve its speed; is it possible that SSD will become faster than RAM at some point? If SSD has an address for each memory location and data for opcode/instruction/operand like RAM, then will it possibly act like RAM? | There are two simple reasons, one fundamental and one related to our current technology. First the technical one: volatile storage is (generally) faster than non-volatile storage. It has fewer requirements - it only needs to store the data for a short while until it gets refreshed, so it's no surprise that it is often faster. But the fundamental reason is that memory gets slower to access the bigger it is. This is why modern architectures don't just have 'RAM' and 'disk'; there are layers upon layers of memory of increasing size, with only the last layer being non-volatile:
CPU registers
L1 cache
L2 cache
L3 cache
RAM itself
Cache on the disk micro-controller
The disk itself | {
"source": [
"https://cs.stackexchange.com/questions/135237",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/131674/"
]
} |
135,262 | In the alpha-beta pruning version of the minimax algorithm, when one evaluates a state p with $\alpha$ and $\beta$ cutoff and gets a v value, i.e., v = alphabeta(p, $\alpha$ , $\beta$ ) are these properties true? alphabeta(p, - $\infty$ , $\beta$ ) = v when $\alpha$ < v alphabeta(p, $\alpha$ , $\infty$ ) = v when v < $\beta$ alphabeta(p, $\alpha$ ', $\beta$ ') = v when $\alpha$ $\le$ $\alpha$ ' $\le$ $\beta$ ' $\le$ $\beta$ if v > $\beta$ , then alphabeta(p, $\beta$ , $\infty$ ) = alphabeta(p, $\alpha$ , $\infty$ ) if v < $\alpha$ , then alphabeta(p, - $\infty$ , $\alpha$ ) = alphabeta(p, - $\infty$ , $\beta$ ) I've reached to this results studying the algorithm itself after reading a couple of papers. After applying it to a real case I've got an improvement of ~30% (in number of states visited, and this gives about a 30% of time execution improvement also), but I want to know if there is a mathematical background that supports these changes to the algorithm. | There's two simple reasons, one fundamental and one related to our current technology. First the technical one: volatile storage is (generally) faster than non-volatile storage. It has fewer requirements - it only needs to store the data for a short while until it gets refreshed, so it's not a surprise that it often is faster. But the fundamental reason is that memory gets slower to access the bigger it is. This is why modern architectures don't just have 'RAM' and 'disk', there's layers upon layers of increasing size memory, with only the topmost layer being non-volatile: CPU registers L1 cache L2 cache L3 cache RAM itself Cache on the disk micro-controller The disk itself | {
"source": [
"https://cs.stackexchange.com/questions/135262",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/23150/"
]
} |
I was reading Introduction to the Theory of Computation by Michael Sipser and I found the following paragraph quite interesting: During the first half of the twentieth century, mathematicians such as Kurt Gödel, Alan Turing, and Alonzo Church discovered that certain basic problems cannot be solved by computers. One example of this phenomenon is the problem of determining whether a mathematical statement is true or false. This task is the bread and butter of mathematicians. It seems like a natural for solution by computer because it lies strictly within the realm of mathematics. But no computer algorithm can perform this task. Among the consequences of this profound result was the development of ideas concerning theoretical models of computers that eventually would help lead to the construction of actual computers. As a CS student, completely new to the theory of computation, this is hard to believe for me. It said that a computer can't solve a basic task such as determining whether a mathematical statement is true or false. Can't it really!? I have programmed lots of code that determines if a mathematical statement is true or not. For example, a simple line of code such as return 6 == 2*3 will return true if this statement is true, so why does the text say that a computer can't perform this task? I'm sure I'm missing something here. Perhaps I'm mistaken about the definition of "Mathematical statement".
But I'm quite sure "Is 6 equal to 2 * 3?" is a mathematical statement and can be validated by a computer.
So what did the text mean by that? I'm confused! PS: Sorry if the Complexity theory tag is misplaced here. As I said I'm new to the field and on the same page the author of the book stated that the theories of computability and complexity are closely related. | The claim is not that a computer cannot determine the validity of some mathematical statements. Rather, the claim is that there is a class $\mathcal{C}$ of mathematical statements such that no algorithm can decide, given a statement from class $\mathcal{C}$ , whether it is valid or not. The standard choice for the class $\mathcal{C}$ is statements about natural numbers, for example: Every even integer greater than two is a sum of two primes. The class $\mathcal{C}$ contains all statements of the form: For all natural $n_1$ there exists natural $n_2$ such that for all natural $n_3$ there exists natural $n_4$ such that ... there exists natural $n_{2m}$ such that $P(n_1,\ldots,n_{2m})$ , where $P$ is an expression using logical operators, comparison operators, addition, subtraction, multiplication, division, and integer constants. Another popular choice for the class $\mathcal{C}$ is: The following algorithm halts: ... In both cases, there is no algorithm that takes an arbitrary statement from class $\mathcal{C}$ and correctly outputs whether the statement is valid or not. It is crucial that the algorithm be required to answer correctly for all statements in $\mathcal{C}$ . We can easily write an algorithm that answers correctly on a single statement from $\mathcal{C}$ . Indeed, one of the following algorithms will work: The statement is valid. The statement is not valid. Similarly, we can design an algorithm that answers correctly on two different statements $A,B$ . One of the following will work: The statement is valid. The statement is not valid. If the statement is $A$ , then it is valid, otherwise it is not valid. If the statement is $B$ , then it is valid, otherwise it is not valid. We cannot implement this strategy in the case of infinitely many statements, since an algorithm, by definition, has a finite description. This is the hard part – being able to decide, for infinitely many statements, whether they are valid or not. | {
"source": [
"https://cs.stackexchange.com/questions/135343",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/114457/"
]
} |
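To tie the answer's first example statement to programs (a sketch of my own): the statement is true exactly when the following search never halts, so an algorithm deciding validity for all statements of this kind would in particular decide a halting question.

def is_prime(k):
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def search_for_counterexample():
    # Halts iff "every even integer greater than two is a sum of two primes" is false.
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            return n    # an even number that is not a sum of two primes
        n += 2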
135,372 | I had this question in my exam. but my answer is wrong(I didn't receive explanations why...) $$f(\langle M\rangle,1^n)=\left \{ \texttt{the lexicographically smallest } x\in\left \{ 0,1 \right \}^n \cap L(M) \texttt{ if } n>100\texttt{ and }L(M)\cap \left \{ 0,1 \right \}^n \neq\varnothing \texttt{, otherwise undefined}\right \}$$ I answered it is computable. for input $(\langle M\rangle,1^{n})$ when $n \geq101$ I run the machine on all possible inputs in $\Sigma^{n}$ and output the first result when conditions are met. I was wrong and apparently the language is not computable. what did I miss? | The claim is not that a computer cannot determine the validity of some mathematical statements. Rather, the claim is that there is a class $\mathcal{C}$ of mathematical statements such that no algorithm can decide, given a statement from class $\mathcal{C}$ , whether it is valid or not. The standard choice for the class $\mathcal{C}$ is statements about natural numbers, for example: Every even integer greater than two is a sum of two primes. The class $\mathcal{C}$ contains all statements of the form: For all natural $n_1$ there exists natural $n_2$ such that for all natural $n_3$ there exists natural $n_4$ such that ... there exists natural $n_{2m}$ such that $P(n_1,\ldots,n_{2m})$ , where $P$ is an expression using logical operators, comparison operators, addition, subtraction, multiplication, division, and integer constants. Another popular choice for the class $\mathcal{C}$ is: The following algorithm halts: ... In both cases, there is no algorithm that takes an arbitrary statement from class $\mathcal{C}$ and correctly outputs whether the statement is valid or not. It is crucial that the algorithm be required to answer correctly for all statements in $\mathcal{C}$ . We can easily write an algorithm that answers correctly on a single statement from $\mathcal{C}$ . Indeed, one of the following algorithms will work: The statement is valid. The statement is not valid. Similarly, we can design an algorithm that answers correctly on two different statements $A,B$ . One of the following will work: The statement is valid. The statement is not valid. If the statement is $A$ , then it is valid, otherwise it is not valid. If the statement is $B$ , then it is valid, otherwise it is not valid. We cannot implement this strategy in the case of infinitely many statements, since an algorithm, by definition, has a finite description. This is the hard part – being able to decide, for infinitely many statements, whether they are valid or not. | {
"source": [
"https://cs.stackexchange.com/questions/135372",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/101029/"
]
} |
135,376 | I saw a joke on twitter today that got me thinking on how to perform a time complexity analysis of this algorithm such as you can express that the worst case is dependent on the input value in addition to the input size. The joke algorithm was this sleep sort algorithm in javascript const arr = [20, 5, 100, 1, 90, 200, 40, 29]
for(let item of arr) {
setTimeout(() => console.log(item), item)
} // Console Output
1
5
20
29
40
90
100
200 If we were to describe its Time Complexity and only took into consideration the size of the input, it would be O(n) . But from a practical standpoint that wouldn't be really accurate as the Worst Case Time of the implementation is heavily dependent on the actual value of each array element, so is it possible to convey this in a Time Complexity Analysis notation? Is there such a thing as O(max(n) + n) , for example? | The claim is not that a computer cannot determine the validity of some mathematical statements. Rather, the claim is that there is a class $\mathcal{C}$ of mathematical statements such that no algorithm can decide, given a statement from class $\mathcal{C}$ , whether it is valid or not. The standard choice for the class $\mathcal{C}$ is statements about natural numbers, for example: Every even integer greater than two is a sum of two primes. The class $\mathcal{C}$ contains all statements of the form: For all natural $n_1$ there exists natural $n_2$ such that for all natural $n_3$ there exists natural $n_4$ such that ... there exists natural $n_{2m}$ such that $P(n_1,\ldots,n_{2m})$ , where $P$ is an expression using logical operators, comparison operators, addition, subtraction, multiplication, division, and integer constants. Another popular choice for the class $\mathcal{C}$ is: The following algorithm halts: ... In both cases, there is no algorithm that takes an arbitrary statement from class $\mathcal{C}$ and correctly outputs whether the statement is valid or not. It is crucial that the algorithm be required to answer correctly for all statements in $\mathcal{C}$ . We can easily write an algorithm that answers correctly on a single statement from $\mathcal{C}$ . Indeed, one of the following algorithms will work: The statement is valid. The statement is not valid. Similarly, we can design an algorithm that answers correctly on two different statements $A,B$ . One of the following will work: The statement is valid. The statement is not valid. If the statement is $A$ , then it is valid, otherwise it is not valid. If the statement is $B$ , then it is valid, otherwise it is not valid. We cannot implement this strategy in the case of infinitely many statements, since an algorithm, by definition, has a finite description. This is the hard part – being able to decide, for infinitely many statements, whether they are valid or not. | {
"source": [
"https://cs.stackexchange.com/questions/135376",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/131846/"
]
} |
135,385 | (this is a cross-post from mathoverflow ) Assume I have an undirected edge-weighted complete graph $G$ of $N$ nodes (every node is connected to every other node, and each edge has an associated weight). Assume that each node has a unique identifier. Let's say I then have an input, $c$ of three edges (e.g $c=[4,7,6]$ ).
Does an algorithm exist that lets me search $G$ for instances of $c$ , and returns the identifiers of the matching nodes? The cycles it returns must be closed loops, such as $[A, D, B, \text{(then back to A)}]$ , rather than $[D, A, B, A]$ Here is a poorly-drawn example: . | The claim is not that a computer cannot determine the validity of some mathematical statements. Rather, the claim is that there is a class $\mathcal{C}$ of mathematical statements such that no algorithm can decide, given a statement from class $\mathcal{C}$ , whether it is valid or not. The standard choice for the class $\mathcal{C}$ is statements about natural numbers, for example: Every even integer greater than two is a sum of two primes. The class $\mathcal{C}$ contains all statements of the form: For all natural $n_1$ there exists natural $n_2$ such that for all natural $n_3$ there exists natural $n_4$ such that ... there exists natural $n_{2m}$ such that $P(n_1,\ldots,n_{2m})$ , where $P$ is an expression using logical operators, comparison operators, addition, subtraction, multiplication, division, and integer constants. Another popular choice for the class $\mathcal{C}$ is: The following algorithm halts: ... In both cases, there is no algorithm that takes an arbitrary statement from class $\mathcal{C}$ and correctly outputs whether the statement is valid or not. It is crucial that the algorithm be required to answer correctly for all statements in $\mathcal{C}$ . We can easily write an algorithm that answers correctly on a single statement from $\mathcal{C}$ . Indeed, one of the following algorithms will work: The statement is valid. The statement is not valid. Similarly, we can design an algorithm that answers correctly on two different statements $A,B$ . One of the following will work: The statement is valid. The statement is not valid. If the statement is $A$ , then it is valid, otherwise it is not valid. If the statement is $B$ , then it is valid, otherwise it is not valid. We cannot implement this strategy in the case of infinitely many statements, since an algorithm, by definition, has a finite description. This is the hard part – being able to decide, for infinitely many statements, whether they are valid or not. | {
"source": [
"https://cs.stackexchange.com/questions/135385",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/131856/"
]
} |
135,405 | In CLRS, exercise 4.4-5 the following question is asked: Use a recursion tree to determine a good asymptotic upper bound on the recurrence $$T(n) = T(n-1) + T(n/2) + n$$ In my recursion tree, the sum of level 0 is $$n$$ level 1 is $$(3/2)n - 2/2^1$$ level 2 $$(3/2)^2n-14/2^2$$ level 3 is $$(3/2)^3n - 74/2^3$$ and so on. My issues is that the rule governing the constants 2, 14, 74 etc. is difficult to express as a function of the level so that I can create a sum for all levels of the tree. Would it be correct to say that the cost at each level of the tree is $$(3/2)^in - O(1)$$ and thus avoid the problem of having to sum all of the constant terms via big O notation, or is this approach incorrect? If so, why and what should I do instead? | The claim is not that a computer cannot determine the validity of some mathematical statements. Rather, the claim is that there is a class $\mathcal{C}$ of mathematical statements such that no algorithm can decide, given a statement from class $\mathcal{C}$ , whether it is valid or not. The standard choice for the class $\mathcal{C}$ is statements about natural numbers, for example: Every even integer greater than two is a sum of two primes. The class $\mathcal{C}$ contains all statements of the form: For all natural $n_1$ there exists natural $n_2$ such that for all natural $n_3$ there exists natural $n_4$ such that ... there exists natural $n_{2m}$ such that $P(n_1,\ldots,n_{2m})$ , where $P$ is an expression using logical operators, comparison operators, addition, subtraction, multiplication, division, and integer constants. Another popular choice for the class $\mathcal{C}$ is: The following algorithm halts: ... In both cases, there is no algorithm that takes an arbitrary statement from class $\mathcal{C}$ and correctly outputs whether the statement is valid or not. It is crucial that the algorithm be required to answer correctly for all statements in $\mathcal{C}$ . We can easily write an algorithm that answers correctly on a single statement from $\mathcal{C}$ . Indeed, one of the following algorithms will work: The statement is valid. The statement is not valid. Similarly, we can design an algorithm that answers correctly on two different statements $A,B$ . One of the following will work: The statement is valid. The statement is not valid. If the statement is $A$ , then it is valid, otherwise it is not valid. If the statement is $B$ , then it is valid, otherwise it is not valid. We cannot implement this strategy in the case of infinitely many statements, since an algorithm, by definition, has a finite description. This is the hard part – being able to decide, for infinitely many statements, whether they are valid or not. | {
"source": [
"https://cs.stackexchange.com/questions/135405",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/131790/"
]
} |
As far as I can tell, we have invented tools and algorithms to: Detect a wider range of colors at a larger range than humans or any other animals on the planet Detect sound with wavelengths inaccessible to humans or most animals on the planet But why is it that dogs can smell COVID or cancer and we can't produce a similar tool to "smell diseases"? Why can't we mimic the dog's sense of smell: is it a hardware limitation or a software one? Am I mistaken in thinking that this sense is the hardest to mimic? | We can actually detect some diseases via smell, and the term to search for is olfaction . The general problem is known as breath analysis . However, the research into olfaction and machine learning is rather new (perhaps even surprisingly new). As Lötsch et al. point out, little research (prior to the very recent work) on olfaction and machine learning has been performed, with a few exceptions:
Quantifying olfactory perception: mapping olfactory perception space by using multidimensional scaling and self-organizing maps , Mamlouk et al., Neurocomputing , 2003.
Relationships between molecular structure and perceived odor quality of ligands for a human olfactory receptor , Sanz et al., Chem Senses , 2008.
Diagnosis and Classification of 17 Diseases from 1404 Subjects via Pattern Analysis of Exhaled Molecules , Nakhleh et al., ACS Nano , 2017.
And the one mentioned above, Machine Learning in Human Olfactory Research , Lötsch et al., Chemical Senses , 2019.
I don't know whether the problem in general is harder, but as you are touching on in your question, the problem is much harder from a hardware perspective. Where imaging only needs a simple camera, and hearing only needs a simple microphone, to detect smell you need a so-called gas chromatography–mass spectrometry instrument. As the Wikipedia article mentions: Breath gas analysis consists of the analysis of volatile organic compounds, for example in blood alcohol testing, and various analytical methods can be applied. Here are some pointers from popular science that should assist you in getting into the literature:
Scientists Invent An AI That Can Smell 17 Diseases From Your Breath, Including Cancers
Innovative AI Breath Analyzer Diagnoses Diseases by “Smell”
AI is acquiring a sense of smell that can detect illnesses in human breath | {
"source": [
"https://cs.stackexchange.com/questions/136333",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/132811/"
]
} |
138,603 | After reading the question I'm still not sure how the CPU does branching. I understand that we have an instruction counter which points to the current instruction. And after performing conditional jump it either stays the same (increments as usual) or increases (jumps) and points to another branch, that's clear. The problem: to define conditional jump we need a conditional jump? I mean in order to evaluate an IF the processor has to evaluate condition and IF it's true, then jump, otherwise not. It's an endless recursion. So how does the conditional jump work on the lowest level? | The problem: to define conditional jump we need a conditional jump? I mean in order to evaluate an IF the processor has to evaluate condition and IF it's true, then jump, otherwise not. It's an endless recursion. Processors have some level of code that is directly executed by hardware circuits. That might not be the same level that they expose as their ISA: processors may translate ISA-level instructions into some other form before really executing them or even treat ISA-level instructions as small subroutines that are implemented in micro-code, but there is still some level that is not interpreted in terms of something else and is built into the physical structure of the hardware. Below is a diagram of a simple processor. The "if" in the logic of "if we need to branch then take PC+offset as next PC, otherwise take PC+4" is implemented by the top left multiplexer (labeled Mux ). A multiplexer does not actually "do one thing or the other"; it combines two signals and a control signal via the formula: (~condition & a) | (condition & b) . That's just a boolean formula that can be implemented easily as a physical circuit. | {
"source": [
"https://cs.stackexchange.com/questions/138603",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/134967/"
]
} |
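A small Python sketch (mine, not from the answer) of the selection formula quoted in the answer above: the next program counter is computed from both candidates and a one-bit condition using pure bitwise logic, so no branching is needed at the hardware level.

# Next-PC selection as a boolean formula: (~condition & a) | (condition & b),
# applied bit-by-bit to 32-bit values.
MASK = 0xFFFFFFFF

def mux(condition_bit, a, b):
    cond = MASK if condition_bit else 0   # replicate the 1-bit condition across 32 bits
    return ((~cond & a) | (cond & b)) & MASK

def next_pc(pc, offset, branch_taken_bit):
    # "if we need to branch then take PC+offset as next PC, otherwise take PC+4"
    return mux(branch_taken_bit, (pc + 4) & MASK, (pc + offset) & MASK)

print(hex(next_pc(0x1000, 0x40, 0)))   # 0x1004 (fall through)
print(hex(next_pc(0x1000, 0x40, 1)))   # 0x1040 (branch taken)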
139,307 | Consider this algorithm iterating over $2$ arrays $(A$ and $B)$ size of $ A = n$ size of $ B = m$ Please note that $m \leq n$ The algorithm is as follows for every value in A:
// code
for every value in B:
// code The time complexity of this algorithm is $O(n+m)$. But given that $m$ is less than or equal to $n$ , can this be considered $O(n)$ ? | Yes: $n+m \le n+n=2n$ which is $O(n)$ , and thus $O(n+m)=O(n)$. For clarity, this is true only under the assumption that $m\le n$ . Without this assumption, $O(n)$ and $O(n+m)$ are two different things - so it would be important to write $O(n+m)$ instead of $O(n)$ . | {
"source": [
"https://cs.stackexchange.com/questions/139307",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/114885/"
]
} |
140,078 | Which language class are today's modern programming languages like Java, JavaScript, and Python in? It appears (?) they are not context-free and not regular languages. Are these programming languages context-sensitive or decidable languages? I am very confused! I know that context-free is more powerful than regular languages and that context-sensitive is more powerful than context-free. Are modern programming languages both context-free and context-sensitive? | Practically no programming language, modern or ancient, is truly context-free, regardless of what people will tell you. But it hardly matters. Every programming language can be parsed; otherwise, it wouldn't be very useful. So all the deviations from context freeness have been dealt with. What people usually mean when they tell you that programming languages are context-free because somewhere in the documentation there's a context-free grammar, is that the set of well-formed programs (that is, the "language" in the sense of formal language theory) is a subset of a context-free grammar, conditioned by a set of constraints written in the rest of the language documentation. That's mostly how programs are parsed: a context-free grammar is used, which recognises all valid and some invalid programs, and then the resulting parse tree is traversed to apply the constraints. To justify describing the language as "context-free", there's a tendency to say that these constraints are "semantic" (and therefore not part of the language syntax). [Note 1] But that's not a very meaningful use of the word "semantic", since rules like "every variable must be declared" (which is common, if by no means universal) is certainly syntactic in the sense that you can easily apply it without knowing anything about the meaning of the various language constructs. All it requires is verifying that a symbol used in some scope also appears in a declaration in an enclosing scope. However, the "also appears" part makes this rule context-sensitive. That rule is somewhat similar to the constraints mentioned in this post about Javascript (linked to from one of your comments to your question): that neither a Javascript object definition nor a function parameter list can define the same identifier twice, another rule which is both clearly context-sensitive and clearly syntactic. In addition, many languages require non-context-free transformations prior to the parse; these transformations are as much part of the grammar of the language as anything else. For example: Layout sensitive block syntax, as in Python, Haskell and many data description languages. (Context-sensitive because parsing requires that all whitespace prefixes in a block be the same length.) Macros, as in Rust, C-family languages, Scheme and Lisp, and a vast number of others. Also, template expansion, at least in the way that it is done in C++. User-definable operators with user-definable precedences, as in Haskell, Swift and Scala. (Scala doesn't really have user-definable precedence, but I think it is still context-sensitive. I might be wrong, though.) None of this in any way diminishes the value of context-free parsing, neither in practical nor theoretical terms. Most parsers are and will continue to be fundamentally based on some context-free algorithm. Despite a lot of trying, no-one yet has come up with a grammar formalism which is both more powerful than context-free grammars and associated with an algorithm for transforming a grammar into a parser without adding hand-written code. 
(To be clear: the goal I refer to is a formalism which is more powerful than context-free grammars, so that it can handle constraints like "variables must be declared before they are used" and the other features mentioned above, but without being so powerful that it is Turing complete and therefore undecidable.) Notes Excluding rules which cannot be implemented in a context-free grammar in order to say that the language is context-free strikes me as a most peculiar way to define context-freeness. Of course, if you remove all context-sensitive aspects of a language, you end up with a context-free superset, but it's no longer the same language. | {
"source": [
"https://cs.stackexchange.com/questions/140078",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/104688/"
]
} |
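To make the workflow in the answer above concrete (parse with a context-free grammar, then walk the resulting tree to enforce the context-sensitive constraints), here is a small hypothetical sketch of a "declared before use" check; the AST node shapes are invented for the example and not tied to any particular language.

# Hypothetical AST nodes: ("declare", name), ("use", name), ("block", [children]).
def check_declared_before_use(node, declared=None):
    # This rule needs the set of names seen so far -- i.e. context --
    # which is exactly what a context-free grammar cannot carry by itself.
    declared = set() if declared is None else declared
    kind = node[0]
    if kind == "declare":
        declared.add(node[1])
    elif kind == "use":
        if node[1] not in declared:
            raise SyntaxError(f"variable {node[1]!r} used before declaration")
    elif kind == "block":
        inner = set(declared)            # a nested scope sees outer declarations
        for child in node[1]:
            check_declared_before_use(child, inner)

program = ("block", [("declare", "x"), ("use", "x"), ("use", "y")])
try:
    check_declared_before_use(program)
except SyntaxError as e:
    print(e)    # variable 'y' used before declaration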
140,881 | There are many reasons why numbers larger than 64 bits must be computed. For example, cryptographic algorithms usually have to perform operations on numbers that are 256 bits or even larger in some cases. However, the programming languages that I use can only handle, at maximum, 64-bit integers, so how do computers perform operations on numbers that are larger than 64 bits in size and which programming languages support computation of these larger numbers? | In school you (probably) memorized the common operations (addition, subtraction, multiplication and division) for 1-digit decimal numbers. Then you learned how to do operations on larger numbers using those memorized operations by doing the computation part by part, for example long multiplication and long division. A computer can do the same algorithms using "digits" of whatever word size it can handle, carrying the overflow over into the next digit. More advanced algorithms exist that operate a bit faster but still operate on the same principle of the large numbers being sequences of 32-bit or 64-bit digits. | {
"source": [
"https://cs.stackexchange.com/questions/140881",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/137182/"
]
} |
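A sketch of the schoolbook addition the answer above describes, using 32-bit "digits" (limbs); real arbitrary-precision libraries use carefully tuned versions of the same idea.

# Multi-precision addition with 32-bit limbs, least significant limb first.
BASE = 2 ** 32

def big_add(a, b):
    # a, b: lists of limbs in [0, BASE)
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s % BASE)    # the low 32 bits stay in this digit
        carry = s // BASE          # the overflow is carried into the next digit
    if carry:
        result.append(carry)
    return result

# (2^32 + 5) + 7: limbs [5, 1] + [7] -> [12, 1]
print(big_add([5, 1], [7]))   # [12, 1]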
142,862 | I have a set $S$ , which contains $n$ real numbers, which generically are all different. Now suppose I know all the sums of its subsets, can I recover the original set $S$ ? I have $2^n $ data. This is far more than $n$ , the number of unknowns. | No you can't. Consider any set $S=\{a,b,c\}$ with $a+b+c=0$ , and the set $S'=\{a+b,b+c,c+a\}$ . The subset sums for $S$ are $0, a, b, c, a+b, b+c, c+a, a+b+c=0$ . The subset sums for $S'$ are $0, a+b, b+c, c+a, a+2b+c=b, b+2c+a=c, c+2a+b = a, 2(a+b+c)=0$ . Hence, you can't distinguish $S$ and $S'$ from the subset sums: $0, a, b, c, a+b, b+c, c+a, 0$ . If all elements are non-negative, then the smallest subset sums should respectively correspond to the empty set and the singletons made up of the smallest two elements, thus you can know the smallest two elements. Once you know the smallest $k$ elements, you can know the subset sums corresponding to the subsets made up of these $k$ elements. Extract them, then the smallest subset sum should correspond to the $(k+1)$ -th smallest element. Repeat the process above, you will finally get all elements. | {
"source": [
"https://cs.stackexchange.com/questions/142862",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/133279/"
]
} |
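A sketch of the reconstruction procedure from the second half of the answer above (non-negative elements, subset sums given as a multiset); the function and variable names are mine.

from collections import Counter
from itertools import combinations

def recover_from_subset_sums(sums):
    # sums: all 2^n subset sums of a set of non-negative numbers
    remaining = Counter(sums)
    remaining[0] -= 1                  # account for the empty subset
    elements = []
    while sum(remaining.values()) > 0:
        # the smallest remaining sum is the next smallest element
        x = min(v for v, c in remaining.items() if c > 0)
        elements.append(x)
        # remove every sum formed by x together with already-known elements
        for r in range(len(elements)):
            for combo in combinations(elements[:-1], r):
                remaining[x + sum(combo)] -= 1
    return elements

print(recover_from_subset_sums([0, 1, 2, 3, 4, 5, 6, 7]))   # [1, 2, 4]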
143,026 | In Sipser's Introduction to the Theory of Computation , the author explains that two strings can be compared by “zigzagging” back and forth between them and “crossing off” one symbol at a time (i.e., replacing them with a symbol such as $x$ ). This process is displayed in the following figure (from Sipser): However, this process modifies the strings being compared, which would be problematic if the Turing machine needs to access these strings in the future. What are ways of performing a string comparison without modifying the strings? | Create two new types of marks: $\dot{0}, \dot 1$ . Those two will act "like" $x$ , but can still keep the information about the string. So when you cross-off a letter, add a "dot" to it at the top instead of fully replacing it with $x$ . Then, if you want the original strings back, after you are done comparing, go through the entire strings and remove the "dots": replace $\dot 0$ with $0$ and $\dot 1$ with $1$ . | {
"source": [
"https://cs.stackexchange.com/questions/143026",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/128320/"
]
} |
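A small simulation in ordinary Python (not a formal Turing-machine definition) of the idea in the answer above: replace each compared symbol by its "dotted" version while zigzagging, then sweep once more to erase the dots and recover the original strings.

# '0'/'1' are tape symbols; '0.'/'1.' stand for their dotted (crossed-off) versions.
DOT = {'0': '0.', '1': '1.'}
UNDOT = {v: k for k, v in DOT.items()}

def compare_and_restore(tape, sep='#'):
    # tape: list such as ['0','1','1','0','#','0','1','1','0']
    mid = tape.index(sep)
    left, right = range(0, mid), range(mid + 1, len(tape))
    equal = len(left) == len(right)
    if equal:
        for i, j in zip(left, right):
            if tape[i] != tape[j]:
                equal = False
            tape[i], tape[j] = DOT[tape[i]], DOT[tape[j]]   # cross off with dots
    # restore pass: remove the dots so both strings are intact again
    for k, sym in enumerate(tape):
        tape[k] = UNDOT.get(sym, sym)
    return equal

t = list('0110#0110')
print(compare_and_restore(t), ''.join(t))   # True 0110#0110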
143,180 | Fair warning: I don't actually know a functional language so I'm doing all the pseudocode in Python. I'm trying to understand why functional languages disallow variable reassignment, e.g. x = x + 1 . Referential transparency, pure functions, and the dangers of side effects are all mentioned, but the examples tend to go for the low-hanging fruit of functions that depend on mutable globals, which are also discouraged in imperative languages. My question involves variables created and mutated within the function. For example: def numsum1(n):
sum = 0
i = 1
while i <= n:
sum = sum + i
i = i + 1
return sum The functional way of doing this seems to be tail recursion, where the updated sum and i are passed from function call to function call. I know that there are existing higher-order functions for this, but I think this illustrates the similarity to numsum1 more plainly: def numsum2(n): return numsumstep(0, 1, n)
def numsumstep(sum, i, n):
if i <= n:
return numsumstep(sum + i, i + 1, n)
else:
return sum numsum1 and numsum2 do the exact same thing (with tail call optimization) and are both referentially transparent. I do see why numsum1 is internally referentially opaque; the expressions i + 1 and sum + i change in value with each iteration and thus cannot be replaced by a constant value. But why does that matter if numsum1 itself is referentially transparent? Are there examples of functions that become referentially opaque solely because of reassigning local variables? | In a pure functional programming language, there is no real notion of time at all. So, saying that a variable x has value a at one point and then b later simply doesn't make any sense – it's like asking a character in a painting why she always stares in the same direction. The advantage of having no time is that you never † need to worry about the order in which computations happen. If a variable is in scope then it also has the correct value, i.e. the value it has been assigned. (Which assignment may actually be “after” the computation in which it is needed – definitions can be reordered at will.) Whereas in an imperative language – well, consider this program: def numsum1(n):
sum = 0
i = 1
while i <= n:
sum = sum + i
i = i + 1
midterm = sum
while i >= 0:
sum = sum - i
i = i - 1
return (midterm, i) If for some reason you need to refactor and move the midterm definition to after the second loop, overlooking that it actually mutates sum again, then you would get the wrong result. Now, you might well argue that this is defeated if you need to use recursion to basically fake mutation. Isn't there just as much, or even more , potential for mistakes if you have a recursive call using a parameter still called x that is effectively the same variable anyway? – Not quite, because outside of the recursive calls the variable is guaranteed to stay the same. The refactoring problem with the above example wouldn't happen in a functional language. Furthermore, as Odalrick already wrote , recursion isn't actually what's normally used to replace loops in functional languages. The idiomatic Haskell version of your program is import Data.List (sum, foldl')
numsum :: Int -> Int
numsum n = sum [1..n] ...or, using more general-purpose tools, numsum n = foldl' (+) 0 . take n $ iterate (+1) 1 † That's a bit of an exaggeration. Of course, you do sometimes need to take time into account even in a functional language. Obviously, if it runs somehow interactively ( IO monad in Haskell), then those parts are subject to latency considerations. And even for completely pure computations, one side effect that you can't possibly avoid is memory consumption . And that's indeed the one thing that Haskell truly isn't good at: it's really easy to write code that typechecks, works, is correct, but takes gigabytes of memory (when a few kilobytes should have been enough) because some thunks are never garbage-collected. | {
"source": [
"https://cs.stackexchange.com/questions/143180",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/133031/"
]
} |
144,047 | The question is basically in the title. I know that a computer's hardware is of course some physical object, whereas the interpreter is some abstract thing that does something abstract with the code it is given. Yet, can we say that the way the processor is built is the implementation of an interpreter? Machine language is a sequence of physical states (0s and 1s ) on some physical memory, and if the order of these physical states follows certain rules (the syntax of the machine language) then the processor is built in a way that naturally leads to performing calculation steps (and changing some of the memory), e.g. "run" the program. As this question's answer points out, compilers translate from one language to another, and interpreters for each language "run" the program. It would just be consistent if this stretches down to machine language as well, and that's why I'm asking. If one can make the analogy, when would it break down? What features of language semantics (given by the interpreter), for example expressions and values, are there that we can't find at the physical level anymore? In what way does the processor behave differently from what is expected of an interpreter? | It's not such a bad way of looking at things. On most modern CPUs, the instruction set architecture (ISA for short) is abstract, in the sense that it doesn't dictate that it must be implemented using certain hardware techniques or it is not a compliant implementation. Nothing in the ISA specification specifies whether or not it uses register renaming, or branch prediction, or whether vector instructions are parallel or pipeline-streaming, or even whether the core is scalar or superscalar. Indeed, on many modern CPUs, there is a certain amount of translation from the ISA into an internal representation to be executed more efficiently, such as Intel's micro-operations ( uOps for short ). | {
"source": [
"https://cs.stackexchange.com/questions/144047",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/143460/"
]
} |
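The analogy discussed above can be made concrete with a toy fetch-decode-execute loop; this is only an illustration of what "the hardware interprets machine code" means, with a made-up instruction set, not a description of any real ISA.

# A toy machine-code interpreter: each instruction is an (opcode, operand) pair.
def run(program, max_steps=1000):
    pc, acc, mem = 0, 0, {}
    for _ in range(max_steps):
        if pc >= len(program):
            break
        op, arg = program[pc]                   # fetch + decode
        if op == "LOADI":   acc = arg                     # load immediate
        elif op == "ADD":   acc += mem.get(arg, 0)        # add from memory
        elif op == "STORE": mem[arg] = acc                # store to memory
        elif op == "JNZ":                                 # conditional jump
            pc = arg if acc != 0 else pc + 1
            continue
        pc += 1                                 # execute done: next instruction
    return acc, mem

prog = [("LOADI", 5), ("STORE", 0), ("LOADI", 37), ("ADD", 0)]
print(run(prog))   # (42, {0: 5})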
146,012 | I'm looking for a data structure that supports the following operations: add(elem) - Add an element to the data structure. remove_random() - Remove and return a random element. The best I got so far is just shuffling a list on every insertion (or on every lookup), and popping from the top. However, this can be quite slow, so I'm looking for a more specialized data structure. Assume we can generate random numbers for free. | You can achieve constant amortized time per operation by keeping a dynamically-sized array $A$ (using the doubling/halving technique). To insert an element append it at the end. To implement remove_random() generate a random index $k$ between $1$ and $n$ , swap $A[k]$ with $A[n]$ and delete (and return) $A[n]$ . If you want a non-amortized worst-case bound on the time complexity, then an AVL in which each node $v$ has been augmented to also store the size of the subtree rooted in $v$ supports both those operation in $O(\log n)$ worst-case time per operation. To implement remove_random() simply generate a random number $k$ between $1$ and $n$ and find the element $e$ of rank $k$ in the tree. Then delete $e$ from the tree and return it. | {
"source": [
"https://cs.stackexchange.com/questions/146012",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/145562/"
]
} |
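A sketch of the array-based solution from the first paragraph of the answer above (swap the randomly chosen slot with the last slot, then pop), giving amortized constant time for both operations.

import random

class RandomBag:
    def __init__(self):
        self.items = []              # dynamically-sized array

    def add(self, elem):             # amortized O(1): append at the end
        self.items.append(elem)

    def remove_random(self):         # amortized O(1): swap with last, then pop
        k = random.randrange(len(self.items))     # uniform random index
        self.items[k], self.items[-1] = self.items[-1], self.items[k]
        return self.items.pop()

bag = RandomBag()
for x in [1, 2, 3, 4]:
    bag.add(x)
print(sorted(bag.remove_random() for _ in range(4)))   # [1, 2, 3, 4]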
146,553 | In answering this question , I was looking for references (textbooks, papers, or implementations) which represent a graph using a set (e.g. hashtable) for the adjacent vertices, rather than a list. That is, the graph is a map from vertex labels to sets of adjacent vertices: graph: Map<V, Set<V>> In fact, I thought that this representation was completely standard and commonly used, since it allows O(1) querying for an edge existence, O(1) edge deletion, and O(1) iterating over the elements of the adjacency set. I have always represented graphs this way both in my own implementations and teaching. To my surprise, most algorithms textbooks do not cover this directly, and instead represent it using a list of labels: graph: Map<V, List<V>> As far as I understand, adjacency lists seem strictly worse: both representations support O(1) vertex additions and iteration over adjacent edges, but adjacency lists require O(m) for edge removal or edge existence (in the worst case). Yet I am baffled that, for example Cormen Leiserson Rivest Stein: Introduction to Algorithms , Morin: Open Data Structures , and Wikipedia all suggest using adjacency lists. They mainly contrast adjacency lists with adjacency matrices, but the idea of storing adjacent elements as a set is only mentioned briefly in an off-hand comment as an alternative to the list representation, if at all. (For example, Morin mentions this on page 255, "What type of collection should be used to store each element of adj?") I must be missing something basic. Q: What is the advantage of using a list instead of a set for adjacent vertices? Is this a pedagogical choice, an aversion to hashmaps/hashsets, a historical accident, or something else? This question is closely related, but asks about the representation graph: Set<(V, V)> . The top answer suggests using my representation. Looking for a bit more context on this. The second answer suggests hash collisions are a problem. But if hash sets are not preferred, another representation of maps and sets can be used, and we still get great performance for edge removal with a possible additional logarithmic factor in cost. Bottom line: I don't understand why anyone would implement the edges as a list, unless all vertex degrees are expected to be small. | In many algorithms we don't need to check whether two vertices are adjacent, like in search algorithms, DFS, BFS, Dijkstra's, and many other algorithms. In the cases where we only need to enumerate the neighborhoods, a list/vector/array far outperforms typical set structures. Python's set uses a hashtable underneath, which is both much slower to iterate over, and uses much more memory. If you want really efficient algorithms (and who doesn't), you take this into account. If you need $O(1)$ lookup of adjacencies and don't intend to do much neighborhood enumeration (and can afford the space), you use an adjacency matrix. If expected $O(1)$ is good enough, you can use hashtables, sets, trees, or other datastructures with the performance you need. I suspect, however, that you don't hear about this so often, is because in algorithms classes, it makes analysis much simpler to use lists, because we don't need to talk about expected running time and hash functions. Editing in two comments from leftaroundabout and jwezorek. Many real world graphs are very sparse and you often see $O(1)$ -sized degrees for most of the graphs. 
This means that even if you want to do lookup, looping through a list is not necessarily much slower, and can in many cases be much faster. As a "proof", I add some statistics from the graphs from Stanford Network Analysis Platform . Out of approximately 100 large graphs, the average degrees are (avg. degree: number of graphs): < 10: 35; < 20: 43; < 30: 10; < 40: 4; < 50: 2; < 70: 3; < 140: 1; < 350: 1. | {
"source": [
"https://cs.stackexchange.com/questions/146553",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/24088/"
]
} |
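For concreteness, the two representations discussed above side by side, in plain Python (a tuned implementation would differ): enumerating a neighborhood is cheap in both, while constant-time membership tests are what the set version buys.

from collections import defaultdict

adj_list = defaultdict(list)   # Map<V, List<V>>: compact, fast to iterate over
adj_set  = defaultdict(set)    # Map<V, Set<V>>: O(1) expected edge lookup/removal

def add_edge(u, v):            # undirected edge, kept in both structures
    adj_list[u].append(v); adj_list[v].append(u)
    adj_set[u].add(v);     adj_set[v].add(u)

for u, v in [(1, 2), (1, 3), (2, 3)]:
    add_edge(u, v)

print(3 in adj_set[1])                             # O(1) expected membership test
print(3 in adj_list[1])                            # O(deg(1)) scan; fine for small degrees
print(sum(len(ns) for ns in adj_list.values()))    # enumerating all neighborhoods: 6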
146,618 | I’m a CS senior with and Individual Study period this coming semester, and I’ve decided I’d like to learn more about Programming Language Concepts. More specifically, different programming paradigms, like Functional and Logic programming. Not sure how most universities handle ISs but I’m to essentially write a syllabus outlining what I’ll be learning, and how I’ll show that I’ve learned it. Since I’ve only really spent time working in an Object Oriented context, I’m looking for some recommendations for concepts from Functional/Logic/any other paradigms I might benefit from learning about. Apologies if this is not the place to be asking this question. | Very good explanations of programming paradigms and the programming concepts from which those paradigms are built are found in Peter van Roy's works. Especially in the book Concepts, Techniques, and Models of Computer Programming by Peter Van Roy and Seif Haridi . (Unfortunately, the companion wiki does not seem to exist any more.) CTM (as it is colloquially known) uses the multi-paradigm Distributed Oz programming language to introduce all the major programming paradigms. Peter van Roy also made this amazing poster that shows the 34 major paradigms and their relations and positions on various axis . The poster is basically an incredibly compressed version of CTM. A more thorough explanation of that poster is contained in the article Programming Paradigms for Dummies: What Every Programmer Should Know which appeared as a chapter in the book New Computational Paradigms for Computer Music , edited by G. Assayag and A. Gerzso. It explains for example very concisely and easily understandable, what a programming paradigm actually is , what a programming concept is, and how the two are related. There are about 34 principal Programming Paradigms, as identified by Peter van Roy and Seif Haridi: active object programming / object-capability programming ADT functional programming ADT imperative programming concurrent constraint programming concurrent object-oriented programming / shared-state concurrent programming constraint (logic) programming continuation programming descriptive declarative programming deterministic logic programming event-loop programming first-oder functional programming functional programming functional reactive programming (FRP) / weak synchronous programming imperative programming imperative search programming lazy concurrent constraint programming lazy dataflow programming / lazy declarative concurrent programming lazy functional programming monotonic dataflow programming / declarative concurrent programming multi-agent dataflow programming multi-agent programming / message-passing concurrent programming nonmonotonic dataflow programming / concurrent logic programming relational & logic programming sequential object-oriented programming / stateful functional programming software-transactional memory (STM) strong synchronous programming Programming Paradigms, in turn, are composed of Programming Concepts, and Peter van Roy and Seif Haridi have identified 18 of those: by-need synchronization cell (state) closure continuation instantaneous computation local cell (private state) log name (unforgeable constant) nondeterministic choice port (channel) procedure record search single assignment solver synchronization on partial termination thread unification (equality) Note, that poster completely ignores typing, and there is of course a significant difference between a System F <:ω -style type system, a Scala-style type 
system, or a dynamic duck-typed type system, let alone a dependent type system à la Idris , Agda , Coq , Guru , or ATS . Another great book that demonstrates several major programming paradigms is Structure and Interpretation of Computer Programs by Harold Abelson and Gerald Jay Sussman . This book was the basis of MIT's CS101 for several decades. The main difference between CTM and SICP is that CTM demonstrates most major paradigms using a language that supports them (mostly Distributed Oz, but also some others). SICP OTOH demonstrates them by implementing them in a language that does not support them natively (a subset of Scheme). Seeing Object-Orientation implemented in a dozen or so lines of code is friggin' awesome. You can find video recordings and course materials of the course from MIT's short-lived ArsDigita University project . Lambda the Ultimate – The Programming Languages Weblog is a great resource for all things programming languages. Activity has slowed down in recent years, but there is still a lot going on. The discussions below the articles and the discussions in the forums are at least as valuable as the articles themselves, if not more. If you are interested in some controversial views, I can recommend studying the Design Principles behind Smalltalk by Dan Ingalls. For example, they contain this nugget of wisdom: Operating System : An operating system is a collection of things that don't fit into a language. There shouldn't be one. On a personal note, my own experience has been that really understanding a programming paradigm is only possible one paradigm at a time and in languages which force you into the paradigm Ideally, you would use a language which takes the paradigm to the extreme. In multi-paradigm languages, it is much too easy to "cheat" and fall back on a paradigm that you are more comfortable with. And using a paradigm as a library is only really possible in languages like Scheme which are specifically designed for this kind of programming. Learning lazy functional programming in Java, for example, is not a good idea, although there are libraries for that. Here's some of my favorites: object-orientation in general : Self prototype-based object-orientation : Self class-based object-orientation : Newspeak static class-based object-orientation : Eiffel multiple dispatch based OO : Dylan functional + object-orientation : Scala functional programming : Haskell pure functional programming : Haskell lazy pure functional programming : Haskell static functional programming : Haskell dynamic functional programming : Clojure imperative programming : Lua concurrent programming : Clojure message-passing concurrent programming : Erlang metaprogramming : Racket language-oriented programming : Intentional Domain Workbench other interesting ideas : Unison : code is immutable and content-adressable, which has some deep implications . Rust : "safe" and "low level / bare metal" don't need to be mutually exclusive. TypeScript : how do you capture all the crazy stunts ECMAScript programmers pull into a mostly-sound static type system? Note that there are many languages in the "typed web programming" field, but most of them try to be "better" ECMAScripts or "better than " ECMAScript, whereas TypeScript tries to make existing ECMAScript safe. Equally important as the language semantics is its Type System . Unfortunately, I don't know of any similarly informative visualization of the different aspects of type systems. 
I am also not intimately familiar with Type Theory, unfortunately. (If you want to understand type systems, you should read Benjamin Pierce's Types and Programming Languages .) Some of the important aspects are: dynamic vs. static typing, also gradual typing, optional typing, soft typing latent vs. manifest typing implicit vs. explicit typing structural vs. nominal vs. duck typing strong vs. weak typing parametric polymorphism (also higher-rank and higher-kinded), ad-hoc polymorphism, inclusion polymorphism, bounded polymorphism, subtype polymorphism at the intersection of subtyping and parametric polymorphism: covariance, contravariance, invariance System F , System F ω , System F <: , System F ω <: , and its various extensions, variants, subsets, and derivatives, including Damas-Hindley-Milner , but also type systems that move away from System F (e.g. the Dependent Object Type Calculus underlying Scala's Type System ) the Barendregt Lambda Cube various forms of Type Inference, including Algorithm W , Flow-based, unification-based, etc. Kinds Dependent Typing, Linear Types, Ownership Types, Effect Types, World Types And probably many other things I forgot. In your question, you mention that you have experience with OO. In my personal experience, OO tends to almost universally be taught really badly. I am not saying that is what happened to you, but it is something I have noticed. So, even though you specifically asked about Functional and Logic Programming, here are some OO pointers as well. The term "Object-Orientation" was coined by Dr. Alan Kay, and he defines it thus : OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. Let's break that down: messaging ("virtual method dispatch", if you are not familiar with Smalltalk) state-process should be locally retained protected hidden extreme late-binding of all things Implementation-wise, messaging is a late-bound procedure call, and if procedure calls are late-bound, then you cannot know at design time what you are going to call, so you cannot make any assumptions about the concrete representation of state. So, really it is about messaging, late-binding is an implementation of messaging and encapsulation is a consequence of it. He later on clarified that " The big idea is 'messaging' ", and regrets having called it "object-oriented" instead of "message-oriented", because the term "object-oriented" puts the focus on the unimportant thing (objects) and distracts from what is really important (messaging): Just a gentle reminder that I took some pains at the last OOPSLA to try to remind everyone that Smalltalk is not only NOT its syntax or the class library, it is not even about classes. I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea. The big idea is "messaging" -- that is what the kernal of Smalltalk/Squeak is all about (and it's something that was never quite completed in our Xerox PARC phase). The Japanese have a small word -- ma -- for "that which is in between" -- perhaps the nearest English equivalent is "interstitial". The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be. 
Think of the internet -- to live, it (a) has to allow many different kinds of ideas and realizations that are beyond any single standard and (b) to allow varying degrees of safe interoperability between these ideas. (Of course, today, most people don't even focus on objects but on classes, which is even more wrong.) Messaging is fundamental to OO, both as metaphor and as a mechanism. If you send someone a message, you don't know what they do with it. The only thing you can observe, is their response. You don't know whether they processed the message themselves (i.e. if the object has a method), if they forwarded the message to someone else (delegation / proxying), if they even understood it. That's what encapsulation is all about, that's what OO is all about. You cannot even distinguish a proxy from the real thing, as long as it responds how you expect it to. A more "modern" term for "messaging" is "dynamic method dispatch" or "virtual method call", but that loses the metaphor and focuses on the mechanism. So, there are two ways to look at Alan Kay's definition: if you look at it standing on its own, you might observe that messaging is basically a late-bound procedure call and late-binding implies encapsulation, so we can conclude that #1 and #2 are actually redundant, and OO is all about late-binding. However, he later clarified that the important thing is messaging, and so we can look at it from a different angle: messaging is late-bound. Now, if messaging were the only thing possible, then #3 would trivially be true: if there is only one thing, and that thing is late-bound, then all things are late-bound. And once again, encapsulation follows from messaging. Similar points are also made in On Understanding Data Abstraction, Revisited by William R. Cook and also his Proposal for Simplified, Modern Definitions of "Object" and "Object Oriented" . Dynamic dispatch of operations is the essential characteristic of objects. It means that the operation to be invoked is a dynamic property of the object itself. Operations cannot be identified statically, and there is no way in general to exactly what operation will executed in response to a given request, except by running it. This is exactly the same as with first-class functions, which are always dynamically dispatched. Benjamin Pierce in Types and Programming Languages argues that the defining feature of Object-Orientation is Open Recursion . So: according to Alan Kay, OO is all about messaging. According to William Cook, OO is all about dynamic method dispatch (which is really the same thing). According to Benjamin Pierce, OO is all about Open Recursion, which basically means that self-references are dynamically resolved (or at least that's a way to think about), or, in other words, messaging. As you can see, the person who coined the term "OO" has a rather metaphysical view on objects, Cook has a rather pragmatic view, and Pierce a very rigorous mathematical view. But the important thing is: the philosopher, the pragmatist and the theoretician all agree! Messaging is the one pillar of OO. Note that there is no mention of inheritance here! Inheritance is not essential for OO. In general, most OO languages have some way of implementation re-use but that doesn't necessarily have to be inheritance. It could also be some form of delegation, for example. 
In fact, The Treaty of Orlando discusses delegation as an alternative to inheritance and how different forms of delegation and inheritance lead to different design points within the design space of object-oiented languages. (Note that actually even in languages that support inheritance, like Java, people are actually taught to avoid it, again indicating that it is not necessary for OO.) | {
"source": [
"https://cs.stackexchange.com/questions/146618",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/146211/"
]
} |
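As a tiny illustration of the "OO is messaging / dynamic dispatch" point made above (in the spirit of SICP-style closure objects, not code taken from any of the cited books): the caller only sends a message; what actually runs is decided by the receiving object, and a proxy is indistinguishable from the real thing as long as it answers the same messages.

# Objects as closures that dispatch on messages (message-passing style).
def make_counter():
    state = {"n": 0}
    def dispatch(message, *args):
        if message == "increment":
            state["n"] += 1
            return state["n"]
        if message == "value":
            return state["n"]
        raise ValueError(f"counter does not understand {message!r}")
    return dispatch

def make_logging_proxy(target):
    # Forwards every message; callers cannot tell it apart from the real object.
    def dispatch(message, *args):
        print("message sent:", message)
        return target(message, *args)
    return dispatch

counter = make_logging_proxy(make_counter())
counter("increment")
print(counter("value"))   # 1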
146,632 | The halting problem is NP hard, to my knowledge any NP problem can be reduced to a NP hard problem. Let us define a new computational complexity class called HP(Hypercomputational polynomal-time), The class of all problems solvable in polynomial time on this particular hyper computer. This would include the halting problem. Would HP = NP or(HP ⊇ NP)? As a stronger version of this, would HP = RE? and/or CO-RE? | Very good explanations of programming paradigms and the programming concepts from which those paradigms are built are found in Peter van Roy's works. Especially in the book Concepts, Techniques, and Models of Computer Programming by Peter Van Roy and Seif Haridi . (Unfortunately, the companion wiki does not seem to exist any more.) CTM (as it is colloquially known) uses the multi-paradigm Distributed Oz programming language to introduce all the major programming paradigms. Peter van Roy also made this amazing poster that shows the 34 major paradigms and their relations and positions on various axis . The poster is basically an incredibly compressed version of CTM. A more thorough explanation of that poster is contained in the article Programming Paradigms for Dummies: What Every Programmer Should Know which appeared as a chapter in the book New Computational Paradigms for Computer Music , edited by G. Assayag and A. Gerzso. It explains for example very concisely and easily understandable, what a programming paradigm actually is , what a programming concept is, and how the two are related. There are about 34 principal Programming Paradigms, as identified by Peter van Roy and Seif Haridi: active object programming / object-capability programming ADT functional programming ADT imperative programming concurrent constraint programming concurrent object-oriented programming / shared-state concurrent programming constraint (logic) programming continuation programming descriptive declarative programming deterministic logic programming event-loop programming first-oder functional programming functional programming functional reactive programming (FRP) / weak synchronous programming imperative programming imperative search programming lazy concurrent constraint programming lazy dataflow programming / lazy declarative concurrent programming lazy functional programming monotonic dataflow programming / declarative concurrent programming multi-agent dataflow programming multi-agent programming / message-passing concurrent programming nonmonotonic dataflow programming / concurrent logic programming relational & logic programming sequential object-oriented programming / stateful functional programming software-transactional memory (STM) strong synchronous programming Programming Paradigms, in turn, are composed of Programming Concepts, and Peter van Roy and Seif Haridi have identified 18 of those: by-need synchronization cell (state) closure continuation instantaneous computation local cell (private state) log name (unforgeable constant) nondeterministic choice port (channel) procedure record search single assignment solver synchronization on partial termination thread unification (equality) Note, that poster completely ignores typing, and there is of course a significant difference between a System F <:ω -style type system, a Scala-style type system, or a dynamic duck-typed type system, let alone a dependent type system à la Idris , Agda , Coq , Guru , or ATS . 
Another great book that demonstrates several major programming paradigms is Structure and Interpretation of Computer Programs by Harold Abelson and Gerald Jay Sussman . This book was the basis of MIT's CS101 for several decades. The main difference between CTM and SICP is that CTM demonstrates most major paradigms using a language that supports them (mostly Distributed Oz, but also some others). SICP OTOH demonstrates them by implementing them in a language that does not support them natively (a subset of Scheme). Seeing Object-Orientation implemented in a dozen or so lines of code is friggin' awesome. You can find video recordings and course materials of the course from MIT's short-lived ArsDigita University project . Lambda the Ultimate – The Programming Languages Weblog is a great resource for all things programming languages. Activity has slowed down in recent years, but there is still a lot going on. The discussions below the articles and the discussions in the forums are at least as valuable as the articles themselves, if not more. If you are interested in some controversial views, I can recommend studying the Design Principles behind Smalltalk by Dan Ingalls. For example, they contain this nugget of wisdom: Operating System : An operating system is a collection of things that don't fit into a language. There shouldn't be one. On a personal note, my own experience has been that really understanding a programming paradigm is only possible one paradigm at a time and in languages which force you into the paradigm Ideally, you would use a language which takes the paradigm to the extreme. In multi-paradigm languages, it is much too easy to "cheat" and fall back on a paradigm that you are more comfortable with. And using a paradigm as a library is only really possible in languages like Scheme which are specifically designed for this kind of programming. Learning lazy functional programming in Java, for example, is not a good idea, although there are libraries for that. Here's some of my favorites: object-orientation in general : Self prototype-based object-orientation : Self class-based object-orientation : Newspeak static class-based object-orientation : Eiffel multiple dispatch based OO : Dylan functional + object-orientation : Scala functional programming : Haskell pure functional programming : Haskell lazy pure functional programming : Haskell static functional programming : Haskell dynamic functional programming : Clojure imperative programming : Lua concurrent programming : Clojure message-passing concurrent programming : Erlang metaprogramming : Racket language-oriented programming : Intentional Domain Workbench other interesting ideas : Unison : code is immutable and content-adressable, which has some deep implications . Rust : "safe" and "low level / bare metal" don't need to be mutually exclusive. TypeScript : how do you capture all the crazy stunts ECMAScript programmers pull into a mostly-sound static type system? Note that there are many languages in the "typed web programming" field, but most of them try to be "better" ECMAScripts or "better than " ECMAScript, whereas TypeScript tries to make existing ECMAScript safe. Equally important as the language semantics is its Type System . Unfortunately, I don't know of any similarly informative visualization of the different aspects of type systems. I am also not intimately familiar with Type Theory, unfortunately. (If you want to understand type systems, you should read Benjamin Pierce's Types and Programming Languages .) 
Some of the important aspects are: dynamic vs. static typing, also gradual typing, optional typing, soft typing latent vs. manifest typing implicit vs. explicit typing structural vs. nominal vs. duck typing strong vs. weak typing parametric polymorphism (also higher-rank and higher-kinded), ad-hoc polymorphism, inclusion polymorphism, bounded polymorphism, subtype polymorphism at the intersection of subtyping and parametric polymorphism: covariance, contravariance, invariance System F , System F ω , System F <: , System F ω <: , and its various extensions, variants, subsets, and derivatives, including Damas-Hindley-Milner , but also type systems that move away from System F (e.g. the Dependent Object Type Calculus underlying Scala's Type System ) the Barendregt Lambda Cube various forms of Type Inference, including Algorithm W , Flow-based, unification-based, etc. Kinds Dependent Typing, Linear Types, Ownership Types, Effect Types, World Types And probably many other things I forgot. In your question, you mention that you have experience with OO. In my personal experience, OO tends to almost universally be taught really badly. I am not saying that is what happened to you, but it is something I have noticed. So, even though you specifically asked about Functional and Logic Programming, here are some OO pointers as well. The term "Object-Orientation" was coined by Dr. Alan Kay, and he defines it thus : OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. Let's break that down: messaging ("virtual method dispatch", if you are not familiar with Smalltalk) state-process should be locally retained protected hidden extreme late-binding of all things Implementation-wise, messaging is a late-bound procedure call, and if procedure calls are late-bound, then you cannot know at design time what you are going to call, so you cannot make any assumptions about the concrete representation of state. So, really it is about messaging, late-binding is an implementation of messaging and encapsulation is a consequence of it. He later on clarified that " The big idea is 'messaging' ", and regrets having called it "object-oriented" instead of "message-oriented", because the term "object-oriented" puts the focus on the unimportant thing (objects) and distracts from what is really important (messaging): Just a gentle reminder that I took some pains at the last OOPSLA to try to remind everyone that Smalltalk is not only NOT its syntax or the class library, it is not even about classes. I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea. The big idea is "messaging" -- that is what the kernal of Smalltalk/Squeak is all about (and it's something that was never quite completed in our Xerox PARC phase). The Japanese have a small word -- ma -- for "that which is in between" -- perhaps the nearest English equivalent is "interstitial". The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be. Think of the internet -- to live, it (a) has to allow many different kinds of ideas and realizations that are beyond any single standard and (b) to allow varying degrees of safe interoperability between these ideas. (Of course, today, most people don't even focus on objects but on classes, which is even more wrong.) Messaging is fundamental to OO, both as metaphor and as a mechanism. 
If you send someone a message, you don't know what they do with it. The only thing you can observe, is their response. You don't know whether they processed the message themselves (i.e. if the object has a method), if they forwarded the message to someone else (delegation / proxying), if they even understood it. That's what encapsulation is all about, that's what OO is all about. You cannot even distinguish a proxy from the real thing, as long as it responds how you expect it to. A more "modern" term for "messaging" is "dynamic method dispatch" or "virtual method call", but that loses the metaphor and focuses on the mechanism. So, there are two ways to look at Alan Kay's definition: if you look at it standing on its own, you might observe that messaging is basically a late-bound procedure call and late-binding implies encapsulation, so we can conclude that #1 and #2 are actually redundant, and OO is all about late-binding. However, he later clarified that the important thing is messaging, and so we can look at it from a different angle: messaging is late-bound. Now, if messaging were the only thing possible, then #3 would trivially be true: if there is only one thing, and that thing is late-bound, then all things are late-bound. And once again, encapsulation follows from messaging. Similar points are also made in On Understanding Data Abstraction, Revisited by William R. Cook and also his Proposal for Simplified, Modern Definitions of "Object" and "Object Oriented" . Dynamic dispatch of operations is the essential characteristic of objects. It means that the operation to be invoked is a dynamic property of the object itself. Operations cannot be identified statically, and there is no way in general to exactly what operation will executed in response to a given request, except by running it. This is exactly the same as with first-class functions, which are always dynamically dispatched. Benjamin Pierce in Types and Programming Languages argues that the defining feature of Object-Orientation is Open Recursion . So: according to Alan Kay, OO is all about messaging. According to William Cook, OO is all about dynamic method dispatch (which is really the same thing). According to Benjamin Pierce, OO is all about Open Recursion, which basically means that self-references are dynamically resolved (or at least that's a way to think about), or, in other words, messaging. As you can see, the person who coined the term "OO" has a rather metaphysical view on objects, Cook has a rather pragmatic view, and Pierce a very rigorous mathematical view. But the important thing is: the philosopher, the pragmatist and the theoretician all agree! Messaging is the one pillar of OO. Note that there is no mention of inheritance here! Inheritance is not essential for OO. In general, most OO languages have some way of implementation re-use but that doesn't necessarily have to be inheritance. It could also be some form of delegation, for example. In fact, The Treaty of Orlando discusses delegation as an alternative to inheritance and how different forms of delegation and inheritance lead to different design points within the design space of object-oiented languages. (Note that actually even in languages that support inheritance, like Java, people are actually taught to avoid it, again indicating that it is not necessary for OO.) | {
"source": [
"https://cs.stackexchange.com/questions/146632",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/145676/"
]
} |
146,652 | Why is a Language L(M) {has at least 10 strings} recognizable and L(N) {has at most 10 strings} is not? {⟨⟩:() has at least 10 strings}
{⟨N⟩:(N) has at most 10 strings} My proof (I dont know if I'm wrong) L(N) => I could think of a turing machine that takes as an input 10 strings of the language and accept them, if the turing machine accepts them then it's turing recognizable but not turing decidable (given Rice's Theorem) But I can't figure it out a proof for L(M) even though I know it's undecidable also fro Rice's Theorem Please, If I'm doing something wrong also on L(N) reasoning, let me know | Very good explanations of programming paradigms and the programming concepts from which those paradigms are built are found in Peter van Roy's works. Especially in the book Concepts, Techniques, and Models of Computer Programming by Peter Van Roy and Seif Haridi . (Unfortunately, the companion wiki does not seem to exist any more.) CTM (as it is colloquially known) uses the multi-paradigm Distributed Oz programming language to introduce all the major programming paradigms. Peter van Roy also made this amazing poster that shows the 34 major paradigms and their relations and positions on various axis . The poster is basically an incredibly compressed version of CTM. A more thorough explanation of that poster is contained in the article Programming Paradigms for Dummies: What Every Programmer Should Know which appeared as a chapter in the book New Computational Paradigms for Computer Music , edited by G. Assayag and A. Gerzso. It explains for example very concisely and easily understandable, what a programming paradigm actually is , what a programming concept is, and how the two are related. There are about 34 principal Programming Paradigms, as identified by Peter van Roy and Seif Haridi: active object programming / object-capability programming ADT functional programming ADT imperative programming concurrent constraint programming concurrent object-oriented programming / shared-state concurrent programming constraint (logic) programming continuation programming descriptive declarative programming deterministic logic programming event-loop programming first-oder functional programming functional programming functional reactive programming (FRP) / weak synchronous programming imperative programming imperative search programming lazy concurrent constraint programming lazy dataflow programming / lazy declarative concurrent programming lazy functional programming monotonic dataflow programming / declarative concurrent programming multi-agent dataflow programming multi-agent programming / message-passing concurrent programming nonmonotonic dataflow programming / concurrent logic programming relational & logic programming sequential object-oriented programming / stateful functional programming software-transactional memory (STM) strong synchronous programming Programming Paradigms, in turn, are composed of Programming Concepts, and Peter van Roy and Seif Haridi have identified 18 of those: by-need synchronization cell (state) closure continuation instantaneous computation local cell (private state) log name (unforgeable constant) nondeterministic choice port (channel) procedure record search single assignment solver synchronization on partial termination thread unification (equality) Note, that poster completely ignores typing, and there is of course a significant difference between a System F <:ω -style type system, a Scala-style type system, or a dynamic duck-typed type system, let alone a dependent type system à la Idris , Agda , Coq , Guru , or ATS . 
Another great book that demonstrates several major programming paradigms is Structure and Interpretation of Computer Programs by Harold Abelson and Gerald Jay Sussman . This book was the basis of MIT's CS101 for several decades. The main difference between CTM and SICP is that CTM demonstrates most major paradigms using a language that supports them (mostly Distributed Oz, but also some others). SICP OTOH demonstrates them by implementing them in a language that does not support them natively (a subset of Scheme). Seeing Object-Orientation implemented in a dozen or so lines of code is friggin' awesome. You can find video recordings and course materials of the course from MIT's short-lived ArsDigita University project . Lambda the Ultimate – The Programming Languages Weblog is a great resource for all things programming languages. Activity has slowed down in recent years, but there is still a lot going on. The discussions below the articles and the discussions in the forums are at least as valuable as the articles themselves, if not more. If you are interested in some controversial views, I can recommend studying the Design Principles behind Smalltalk by Dan Ingalls. For example, they contain this nugget of wisdom: Operating System : An operating system is a collection of things that don't fit into a language. There shouldn't be one. On a personal note, my own experience has been that really understanding a programming paradigm is only possible one paradigm at a time and in languages which force you into the paradigm Ideally, you would use a language which takes the paradigm to the extreme. In multi-paradigm languages, it is much too easy to "cheat" and fall back on a paradigm that you are more comfortable with. And using a paradigm as a library is only really possible in languages like Scheme which are specifically designed for this kind of programming. Learning lazy functional programming in Java, for example, is not a good idea, although there are libraries for that. Here's some of my favorites: object-orientation in general : Self prototype-based object-orientation : Self class-based object-orientation : Newspeak static class-based object-orientation : Eiffel multiple dispatch based OO : Dylan functional + object-orientation : Scala functional programming : Haskell pure functional programming : Haskell lazy pure functional programming : Haskell static functional programming : Haskell dynamic functional programming : Clojure imperative programming : Lua concurrent programming : Clojure message-passing concurrent programming : Erlang metaprogramming : Racket language-oriented programming : Intentional Domain Workbench other interesting ideas : Unison : code is immutable and content-adressable, which has some deep implications . Rust : "safe" and "low level / bare metal" don't need to be mutually exclusive. TypeScript : how do you capture all the crazy stunts ECMAScript programmers pull into a mostly-sound static type system? Note that there are many languages in the "typed web programming" field, but most of them try to be "better" ECMAScripts or "better than " ECMAScript, whereas TypeScript tries to make existing ECMAScript safe. Equally important as the language semantics is its Type System . Unfortunately, I don't know of any similarly informative visualization of the different aspects of type systems. I am also not intimately familiar with Type Theory, unfortunately. (If you want to understand type systems, you should read Benjamin Pierce's Types and Programming Languages .) 
Some of the important aspects are: dynamic vs. static typing, also gradual typing, optional typing, soft typing latent vs. manifest typing implicit vs. explicit typing structural vs. nominal vs. duck typing strong vs. weak typing parametric polymorphism (also higher-rank and higher-kinded), ad-hoc polymorphism, inclusion polymorphism, bounded polymorphism, subtype polymorphism at the intersection of subtyping and parametric polymorphism: covariance, contravariance, invariance System F , System F ω , System F <: , System F ω <: , and its various extensions, variants, subsets, and derivatives, including Damas-Hindley-Milner , but also type systems that move away from System F (e.g. the Dependent Object Type Calculus underlying Scala's Type System ) the Barendregt Lambda Cube various forms of Type Inference, including Algorithm W , Flow-based, unification-based, etc. Kinds Dependent Typing, Linear Types, Ownership Types, Effect Types, World Types And probably many other things I forgot. In your question, you mention that you have experience with OO. In my personal experience, OO tends to almost universally be taught really badly. I am not saying that is what happened to you, but it is something I have noticed. So, even though you specifically asked about Functional and Logic Programming, here are some OO pointers as well. The term "Object-Orientation" was coined by Dr. Alan Kay, and he defines it thus : OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. Let's break that down: messaging ("virtual method dispatch", if you are not familiar with Smalltalk) state-process should be locally retained protected hidden extreme late-binding of all things Implementation-wise, messaging is a late-bound procedure call, and if procedure calls are late-bound, then you cannot know at design time what you are going to call, so you cannot make any assumptions about the concrete representation of state. So, really it is about messaging, late-binding is an implementation of messaging and encapsulation is a consequence of it. He later on clarified that " The big idea is 'messaging' ", and regrets having called it "object-oriented" instead of "message-oriented", because the term "object-oriented" puts the focus on the unimportant thing (objects) and distracts from what is really important (messaging): Just a gentle reminder that I took some pains at the last OOPSLA to try to remind everyone that Smalltalk is not only NOT its syntax or the class library, it is not even about classes. I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea. The big idea is "messaging" -- that is what the kernal of Smalltalk/Squeak is all about (and it's something that was never quite completed in our Xerox PARC phase). The Japanese have a small word -- ma -- for "that which is in between" -- perhaps the nearest English equivalent is "interstitial". The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be. Think of the internet -- to live, it (a) has to allow many different kinds of ideas and realizations that are beyond any single standard and (b) to allow varying degrees of safe interoperability between these ideas. (Of course, today, most people don't even focus on objects but on classes, which is even more wrong.) Messaging is fundamental to OO, both as metaphor and as a mechanism. 
If you send someone a message, you don't know what they do with it. The only thing you can observe, is their response. You don't know whether they processed the message themselves (i.e. if the object has a method), if they forwarded the message to someone else (delegation / proxying), if they even understood it. That's what encapsulation is all about, that's what OO is all about. You cannot even distinguish a proxy from the real thing, as long as it responds how you expect it to. A more "modern" term for "messaging" is "dynamic method dispatch" or "virtual method call", but that loses the metaphor and focuses on the mechanism. So, there are two ways to look at Alan Kay's definition: if you look at it standing on its own, you might observe that messaging is basically a late-bound procedure call and late-binding implies encapsulation, so we can conclude that #1 and #2 are actually redundant, and OO is all about late-binding. However, he later clarified that the important thing is messaging, and so we can look at it from a different angle: messaging is late-bound. Now, if messaging were the only thing possible, then #3 would trivially be true: if there is only one thing, and that thing is late-bound, then all things are late-bound. And once again, encapsulation follows from messaging. Similar points are also made in On Understanding Data Abstraction, Revisited by William R. Cook and also his Proposal for Simplified, Modern Definitions of "Object" and "Object Oriented" . Dynamic dispatch of operations is the essential characteristic of objects. It means that the operation to be invoked is a dynamic property of the object itself. Operations cannot be identified statically, and there is no way in general to exactly what operation will executed in response to a given request, except by running it. This is exactly the same as with first-class functions, which are always dynamically dispatched. Benjamin Pierce in Types and Programming Languages argues that the defining feature of Object-Orientation is Open Recursion . So: according to Alan Kay, OO is all about messaging. According to William Cook, OO is all about dynamic method dispatch (which is really the same thing). According to Benjamin Pierce, OO is all about Open Recursion, which basically means that self-references are dynamically resolved (or at least that's a way to think about), or, in other words, messaging. As you can see, the person who coined the term "OO" has a rather metaphysical view on objects, Cook has a rather pragmatic view, and Pierce a very rigorous mathematical view. But the important thing is: the philosopher, the pragmatist and the theoretician all agree! Messaging is the one pillar of OO. Note that there is no mention of inheritance here! Inheritance is not essential for OO. In general, most OO languages have some way of implementation re-use but that doesn't necessarily have to be inheritance. It could also be some form of delegation, for example. In fact, The Treaty of Orlando discusses delegation as an alternative to inheritance and how different forms of delegation and inheritance lead to different design points within the design space of object-oiented languages. (Note that actually even in languages that support inheritance, like Java, people are actually taught to avoid it, again indicating that it is not necessary for OO.) | {
"source": [
"https://cs.stackexchange.com/questions/146652",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/146246/"
]
} |
147,771 | I'm reading CSAPP and couldn't wrap my head around this part: Summary of what the section says: Intel Core i7 support a 48-bit virtual address space and 52-bit physical address space. Core i7 uses a four-level page table hierarchy. Then the book shows a picture of the breakdown of a PTE: Please note the 40-bit PPN. It goes on to say that " the 40-bit PPN points to the beginning of the appropriate page table. Notice that this imposes a 4 KB alignment requirement on page tables ". My question is what does the bolded line mean? And why there is a 4 KB alignment requirement? I know (theoretically) how virtual memory and page tables work but don't get this alignment requirement. To further explain my confusion: What does it mean to say " ...alignment requirement on page tables "? Does it mean that the PTE has to be 4KB in chunk (this was described a page before and doesn't really seem to need any further proof), or something else? | The physical address for the start of a page frame or page table is obtained by taking the 40-bit PPN and appending 12 zero bits. That gives you a 52-bit physical address, which is the start of the frame or page table. A consequence is that frames or page tables must start at a physical address that is a multiple of $2^{12}=4096$ , i.e., that is aligned at a multiple of 4KB. | {
"source": [
"https://cs.stackexchange.com/questions/147771",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/135889/"
]
} |
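As a small illustration of the arithmetic in the answer above (my own sketch in Python, not from the original post; the constant and function names are made up):

PAGE_SHIFT = 12                      # 4 KB pages -> 12 offset bits

def frame_base(ppn):
    # "Appending 12 zero bits" is just a shift by 12, i.e. multiplying by 4096,
    # so every frame / page-table base address is a multiple of 4096.
    return ppn << PAGE_SHIFT

def physical_address(ppn, offset):
    assert 0 <= offset < (1 << PAGE_SHIFT)
    return frame_base(ppn) | offset

base = frame_base(0x3A7)             # some 40-bit PPN value
assert base % 4096 == 0              # the 4 KB alignment the book refers to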
147,777 | I'm currently trying to show that the language $L_2=\{0^n \text{ } | \text{ } n=2^k, k\geq 0\}$ is not regular by using the Pumping Lemma (at least I think it is not regular, because I couldn't find any regular expressions or DFA for it). I know all the steps that I need to go through, but I am having a very hard time figuring out which specific $z\in L_2$ I need to use. I tried using $z=0^{2n}=0^{2^{k+1}}$ and $z^{2^n}$ , but I had no luck. Do you think I'm doing something wrong and using the wrong z's or are the above two okay to work with, but I'm just not comprehending it? | The physical address for the start of a page frame or page table is obtained by taking the 40-bit PPN and appending 12 zero bits. That gives you a 52-bit physical address, which is the start of the frame or page table. A consequence is that frames or page tables must start at a physical address that is a multiple of $2^{12}=4096$ , i.e., that is aligned at a multiple of 4KB. | {
"source": [
"https://cs.stackexchange.com/questions/147777",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/145727/"
]
} |
147,873 | We often hear about some algorithms' running time that is polynomial, and some algorithms' running time that is exponential.
But is there an algorithm whose time complexity is between polynomial time and exponential time? | There is a category of time complexity called quasi-polynomial . It consists of a time complexity of $2^{\mathcal{O}(\log^c n)}$ , for $c > 1$ . It is asymptotically greater than any polynomial function, but less than exponential time. Another category is sub-exponential time , whose name speaks for itself. It is sometimes defined as $2^{o(n)}$ . The problem of graph isomorphism can be solved in sub-exponential time, but no polynomial-time algorithm is known. | {
"source": [
"https://cs.stackexchange.com/questions/147873",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/143833/"
]
} |
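To make the ordering in the answer above concrete, here is a small Python check (my own addition, not from the original post) comparing the exponents of a polynomial, a quasi-polynomial with c = 2, and an exponential:

from math import log2

for n in [64, 1024, 2**20]:
    print(f"n={n}:  n^3 ~ 2^{round(3 * log2(n))},  2^(log^2 n) = 2^{round(log2(n) ** 2)},  2^n = 2^{n}")
# The quasi-polynomial exponent log^2 n eventually beats any constant multiple of
# log n (so it outgrows every polynomial), yet stays far below the exponent n itself.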
147,959 | In the (unlikely) event that $P=NP$ with a constructive proof of a polynomial time algorithm that solves 3SAT, obviously things will be very different. However, practically, it could happen that the degree of the polynomial run time is very large (e.g. $\Omega(n^{10000})$ ) such that any reasonably large problem is still out of reach for our current computing technology. My question is: is it possible to find/construct a problem that have a lower-bound polynomial complexity $\Omega(n^p)$ to compute but an upper bound $O(n^q)$ to verify, with $q$ being quite small (e.g. $q=1$ ) and $p\gg q$ . This problem would function essentially the same as current problems for which no known polynomial algorithms exist (e.g. factorization), and thus would still be usable in e.g. security systems in even in the case of $P=NP$ . | No such problem is known (not with a known mathematical proof of a lower bound). Of course cryptographers would jump on it if we had one. As a result, cryptography is currently based on assumptions that we hope are true but we cannot prove (these are sometimes called "hardness assumptions"), and then we prove that if the assumption is valid, then the cryptographic scheme will be secure against attack. Many good cryptosystems offer a reasonable candidate for a problem for which verification is much easier than solving, but we have no mathematical proof that this is necessarily so. For instance, factoring large integers seems like a good candidate for such a problem: it is easy to verify that you have correctly factored a large number, but it seems to be hard to find those factors in the first place. Breaking AES also seems like a good candidate for such a problem: it is easy for a cryptanalyst to verify that they have found the right AES key to decrypt some known plaintext pairs, but it seems to be hard to find the right AES key in the first place. However, we have no mathematical proof for any of these. You might be shocked to learn that there is no explicitly known function family for which we can prove a super-linear lower bound (i.e., for every explicit function we can think of, we cannot rule out the possibility that it can be computed by a linear-size circuit). See here for a catalog of just how weak our known results are. This highlights just how far we are from being able to prove useful lower bounds. I have some reading for you that might be helpful: Provable Lower Bounds for some Algorithmic Problems? Why haven't we proven many things computationally secure yet? Is there a cryptography algorithm that will remain safe if P=NP? Is it possible to construct an encryption scheme for which breaking is NP complete but there nearly always exists an efficient breaking algorithm? Is AES reducible to an NP-complete problem? Cryptography systems based on NP complete problems What is the relation between computational security and provable security? One early attempt at what you're hoping for are Merkle puzzles . These show a gap of the form you mention, with $O(n^2)$ time for attackers to break (i.e., to solve) but $O(n)$ time for defenders to compute (i.e., to verify). In your notation, this amounts to $p=2$ and $q=1$ . This result holds only under the unproven assumption that solving a single puzzle takes $n$ steps of computation, but we have no concise (constant-length) puzzle for which we have a proof that solving will take that long, so even Merkle puzzles don't do what you are hoping for -- they still rely on an unproven assumption. 
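For concreteness, here is a toy Python sketch of the Merkle-puzzle idea (my own illustration, not from the original answer; the parameters, the MAGIC marker and the helper names are all made up):

import os, secrets, hashlib

MAGIC, BITS, N = b"PUZL", 12, 256       # toy parameters

def _pad(secret):
    return hashlib.sha256(secret.to_bytes(4, "big")).digest()

def make_puzzle(pid, key):
    secret = secrets.randbelow(2 ** BITS)
    msg = MAGIC + pid.to_bytes(4, "big") + key
    return bytes(a ^ b for a, b in zip(msg, _pad(secret)))

def solve_puzzle(ct):
    for s in range(2 ** BITS):          # brute force: up to 2^BITS hash trials per puzzle
        msg = bytes(a ^ b for a, b in zip(ct, _pad(s)))
        if msg.startswith(MAGIC):       # ignoring the negligible chance of a false match
            return int.from_bytes(msg[4:8], "big"), msg[8:]

# Alice publishes N puzzles and remembers pid -> key: work proportional to N.
keys = {i: os.urandom(16) for i in range(N)}
puzzles = [make_puzzle(i, keys[i]) for i in range(N)]

# Bob solves ONE puzzle chosen at random and announces its pid in the clear.
pid, key = solve_puzzle(secrets.choice(puzzles))
assert key == keys[pid]
# Eve sees only the puzzles and the pid, so she expects to solve about N/2 of them:
# roughly quadratic work relative to Bob's -- and only under the unproven assumption
# that brute force is the best way to open a puzzle.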
And, there is a proof that the basic approach found in Merkle puzzles doesn't generalize to larger gaps (larger ratios of $p/q$ ), so they don't seem to lead somewhere that will be practically useful, even if we ignore that they rely on an unproven assumption like every other cryptosystem. | {
"source": [
"https://cs.stackexchange.com/questions/147959",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/146603/"
]
} |
148,055 | There is the axiom you should always prefer tail-recursion over regular recursion whenever possible. (I'm not considering tabulation as an alternative in this question). I understand the idea by why is that the case? Is it only because of the compiler can optimize the recursive call in case of tail recursion? if that is the only reason, why the compiler would not be able to optimize the regular recursive call? | if that is the only reason, why the compiler would not be able to optimize the regular recursive call? You are focusing on the wrong thing here: the reason the optimization works is because of the tail part, not because of the recursion part. Tail-recursion elimination is a special case of tail-call elimination, not a special case of some form of recursion optimization. Normally , when you call a subroutine, you have to "remember" where you called from and what the current state is, so that you can continue the execution when you come back from the return . So, for example, if you have something like: function foo() {
bar();
baz();
qux();
} Then before the call to bar() and before the call to baz() , you have to store the current state and restore it after bar() returns and after baz() returns. But , since the call to qux() is a tail-call , you know that there is nothing that will be executed after you return back, so you can skip the whole "remember and restore" bit. Instead, you can simply jump to qux . It is literally equivalent to a GOTO . Tail-recursion is the intersection of a tail-call and a recursive call: it is a recursive call that also is in tail position, or a tail-call that also is a recursive call. This means that a tail-recursive call can be optimized the same way as a tail-call. Now, an obvious question is: if a tail-recursive call can be optimized the same way as a tail-call, why do we care specifically about tail-recursive calls and not just about tail-calls in general? Well, first of all: we do care about tail-calls in general. Tail-calls allow writing some code in a very elegant manner that is hard to express otherwise. For example, if you have tail-calls, then you can simply express a state machine using subroutine calls: every state is a subroutine and every transition is a call – this makes the code look like a direct translation of how you would draw the state machine on paper, and the control flow graph of the code matches exactly the state machine. Since these calls never return, you would quickly run out of stack space if tail-calls weren't optimized. Without tail-calls, state machines are typically implemented as state tables, with GOTO s, and that code does not look at all like the drawing of a state machine. The reason why we care about tail-recursion as distinct from general tail-calls, and specifically why we care about direct tail-recursion , i.e. where a subroutine directly tail-calls itself ( foo calls foo ) instead of indirectly via another subroutine ( foo calls bar , bar calls foo ) is because some widely-used platforms do not support generalized non-local control constructs such as GOTO , which are needed to efficiently implement tail-calls. In other words: optimizing tail-recursion is the same as a loop (which most target languages support), optimizing tail-calls is the same as an unrestricted GOTO (which many target platforms, e.g. JVM, ECMAScript, CLI) do not support. For example, on the JVM, it is possible to implement tail-calls, but it is complex and slow and hinders interoperability with other JVM languages, because the JVM does not have unrestricted GOTO or the ability to reflectively manipulate the stack or Continuations or something similar. The JVM does , however, have a GOTO that allows to jump to a different location within the same method . Java uses this to implement loops for example, but it can also be used to implement direct tail-recursion . So, the reason why we care about direct tail-recursion specially, is because there are widely-used platforms where implementing direct tail-recursion is easy but implementing general tail-calls is infeasible (meaning, it is technically possible but it wouldn't make sense because it negates the reasons why you chose that platform in the first place – e.g. on the JVM, it makes your language slow or badly interoperable with other JVM languages, but the performance of the JVM and the ability to interoperate with other languages are precisely the reasons why you chose the JVM as a platform in the first place). An important sidenote: in your question, you used the term "optimization", as did I in my answer. 
However, it is important to distinguish between an optimization and a language feature . An optimization is a private internal implementation detail of a particular version of a particular implementation of the language. It is entirely optional. For example, compiler A may perform a particular optimization but compiler B may not. A real-world instance of this is that the Oracle HotSpot JVM performs Escape Analysis but no Escape Detection whereas the Azul Zing JVM does perform both EA and ED. And neither the Oracle HotSpot JVM nor Azul Zing JVM perform tail-call optimization but the Eclipse OpenJ9 JVM (formerly IBM J9) does eliminate some tail-calls under some conditions. So, an optimization may or may not be implemented at all by a particular implementation, and it may or may not be performed in a particular situation. A language feature , however, must be implemented by all conforming implementations. Tail-Call optimization (TCO) is, as the name implies, an optimization . It is not mandatory. The corresponding language feature is typically called Proper Tail-Calls or Properly Implemented Tail-Call Handling (PITCH) . A language with Proper Tail-Calls or PITCH basically has a section in its language specification that says "all implementations must perform TCO under these conditions", and so in some sense, PITCH is simply just "language-mandated TCO", but it is important to distinguish between an optional optimization that may or may not exist in a particular implementation and may or may not be performed in a particular situation, and a mandatory feature that must be implemented by all implementations and must be performed under all circumstances prescribed in the specification. For example, many C and C++ compilers will perform TCO under some limited set of circumstances, but there is no guarantee if or when they will do it. So, you cannot write code that relies on it (like the state machine example above) because you cannot know when you write the code whether the optimization will actually happen or not. The same thing applies to Tail-Recursion Elimination (TRE) . TRE is an optimization that is not guaranteed to happen. As far as I know, there is no common name for the language feature that corresponds to the optimization (like there is with PITCH / Proper Tail-Calls for TCO). It is typically just called Tail-Recursion , although I call it Proper Tail-Recursion in analogy to the Proper Tail-Calls. | {
"source": [
"https://cs.stackexchange.com/questions/148055",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/144900/"
]
} |
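To make the answer above concrete, here is a minimal sketch (my own, in Python, which notably does not perform tail-call elimination) of a tail-recursive function next to the loop that a TRE-performing compiler would effectively turn it into:

def sum_to(n, acc=0):
    if n == 0:
        return acc
    return sum_to(n - 1, acc + n)      # tail call: nothing left to do after it returns

def sum_to_loop(n, acc=0):
    # The mechanical "reuse the current frame" transformation: the tail call
    # becomes an in-place update of the parameters plus a jump back to the top.
    while n != 0:
        n, acc = n - 1, acc + n
    return acc

assert sum_to(500) == sum_to_loop(500) == 500 * 501 // 2
# sum_to(10**6) would blow CPython's recursion limit; sum_to_loop(10**6) runs fine.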
149,446 | Problem Background: Let $a\in(0,1)$ be an irrational number. Suppose there is a black box whose input is a real number in $[0,1]\backslash \{a\}$ , denoted as $x$ , and which outputs a boolean value according to the following rules: When $x>a$ , the output is True . When $x<a$ , the output is False . Given $k$ , we need to find a number $b$ such that $$
|a-b|<10^{-k},\quad b>a,\quad \text{$b$ has $k$ decimal places}
$$ For example, let $a=\sqrt{2}/2\approx0.70710678\cdots$ , $k=3$ , then $b$ should be $0.708$ . Here again: we don't know $a$ , our purpose is to find $b$ . I thought of four ways to find $b$ , listed below: (The next four methods all use the above example) Linear search: Generate a list of length $1001$ in steps of $0.001$ : $$
[0, 0.001, 0.002, \cdots,0.998,0.999, 1]
$$ Traverse this list, for each value $x$ in it, put it into the black box, if the output is True , stop traversing immediately, and $x$ at this time is $b$ . The time complexity of this method is: $O(10^k)$ . Binary Search: Generate a list of length $1001$ in steps of $0.001$ : $$
[0, 0.001, 0.002, \cdots,0.998,0.999, 1]
$$ Set two pointers. At first, the index of the left pointer is $0$ and the index of the right pointer is $1000$ . Calculate mid = (left + right) // 2. Put the $x$ value at mid into the black box, if the output is True , move the right pointer to mid, otherwise move the left pointer to mid. Repeat the above steps until left = right - 1, then the $x$ at the right pointer is $b$ . The time complexity of this method is: $O(k\log10)$ . Linear Search + Use Previous Results: Step 1: $$
[0, 1]\to[0.0, 0.1, 0.2, \cdots,0.9, 1.0]\to [0.7,0.8]
$$ Step 2: $$
[0.7,0.8]\to[0.70,0.71,0.72,\cdots,0.79,0.80]\to[0.70, 0.71]
$$ Step 3: $$
[0.70,0.71]\to[0.700,0.701,0.702,\cdots,0.709,0.710]\to[0.707, 0.708]\to 0.708
$$ The time complexity of this method is: $O(10k)$ Binary Search + Use Previous Results: Similar to the method above, the time complexity of this method is: $O(k\log10)$ Are there any faster algorithms? If so, what are they? | You are in fact asked to find $b$ independent bits of information (such that $2^{-b}\sim10^{-k}$ )*, using queries that return a single bit of information each. So you can't get an answer in less than $b$ queries, and this is achieved by binary search. *Justification: the problem is equivalent to finding an integer $n$ such that $|a\cdot10^k-n|<1$ , with $n$ in range $[0,10^k]\sim[0,2^b]$ . | {
"source": [
"https://cs.stackexchange.com/questions/149446",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/146883/"
]
} |
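A short sketch (my own addition, not from the original post) of the binary search that the answer above declares optimal, written as a search over the integers 0..10^k:

from math import sqrt, log2, ceil

def find_b(oracle, k):
    lo, hi = 0, 10 ** k          # invariant: lo/10^k < a < hi/10^k, since 0 < a < 1
    queries = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        queries += 1
        if oracle(mid / 10 ** k):
            hi = mid
        else:
            lo = mid
    return hi / 10 ** k, queries

a = sqrt(2) / 2
b, q = find_b(lambda x: x > a, 3)
assert b == 0.708 and q <= ceil(3 * log2(10))   # about k * log2(10) queries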
150,417 | Context: When trying to tame real-world datasets that contain outliers and noise, the interquartile mean is a handy tool: you sort the data, throw away the top and bottom 25% of the data and take the mean of what's left. (Of course, you can choose other partitioning than top and bottom 25%.) Which led me to wonder: is there any efficiency to be gained only partially sorting the array? That is, if we describe three groups: A is the low quartile, B is the middle, and C is the high quartile, we don't care if A or C are sorted: we're going to discard them. And we don't care if B is sorted since we're only going to take the mean of its values. It's sufficient that the data is partitioned into those three groups. The question: is there a "partial sorting" algorithm that is more efficient than a full sort that will yield those three groups? Are there additional savings if the array is always a power of 2 (assume N >= 4)? What if you want to adjust the partition boundaries other than quartiles? Does that make it less efficient? Update I've added "partitioning" to the title, since (I now know) that's the correct term for what this question is about. Thank you to everyone with good answers! | The algorithm quickselect can return the $k$ -th value of an unordered array in average linear time. It can be "improved" (though not so much in practice) using the median of medians to guarantee worst case linear time. Using that, you can quickselect the $\frac{N}4$ -th, $\frac{N}2$ -th and $\frac{3N}4$ -th values. The algorithm will partition the array into the four desired parts. All this can be done in linear time. It is optimal since you need to check each element at least once. As long as you use a constant number of them, you could use other values than quartiles (like deciles, for example). | {
"source": [
"https://cs.stackexchange.com/questions/150417",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/121952/"
]
} |
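A small illustration of the answer above (my own sketch, not from the original post): NumPy's partition is an introselect-based implementation of the same idea, so the interquartile mean can be computed without a full sort.

import numpy as np

def interquartile_mean(values):
    a = np.asarray(values, dtype=float)
    n = len(a)
    lo, hi = n // 4, 3 * n // 4
    # Place the lo-th and (hi-1)-th order statistics in their sorted positions
    # and partition everything else around them, in (average) linear time.
    p = np.partition(a, (lo, hi - 1))
    return p[lo:hi].mean()

data = np.random.default_rng(0).normal(size=1001)
assert np.isclose(interquartile_mean(data), np.sort(data)[250:750].mean())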
150,443 | I was reading about hash functions in crypto and a website had mentioned that they were collision free, which obviously isn't possible if there are infinite input values that are mapped to outputs of a finite length. So what happens in the event of a hash collision? How do crypto currencies overcome this problem? Also, this is slightly off topic, but why can't a hash value be decoded? I mean if values are being mapped, to mapped to an output finite list of encoded values why can't you reverse engineer the process? It can't be truly random there has to be a method for which a hash function maps its values right? Please let me know if I am using any terminology wrong or am completely mistaken in any assumptions I make in my question. I am just trying to learn so please let me know. | The algorithm quickselect can return the $k$ -th value of an unordered array in average linear time. It can be "improved" (though not so much in practice) using the median of medians to guarantee worst case linear time. Using that, you can quickselect the $\frac{N}4$ -th, $\frac{N}2$ -th and $\frac{3N}4$ -th values. The algorithm will partition the array into the four desired parts. All this can be done in linear time. It is optimal since you need to check each element at least once. As long as you use a constant number of them, you could use other values than quartiles (like deciles, for example). | {
"source": [
"https://cs.stackexchange.com/questions/150443",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/149604/"
]
} |
150,895 | Consider the regular expressions $(1+01)^*(0+\epsilon)$ $(1^*011^*)^*(0+\epsilon) + 1^*(0+\epsilon)$ , where $\epsilon$ is the empty string. I need to determine if these expressions are equivalent. Intuitively it seems they are equivalent because they seem to generate the languages of strings without two consecutive $0s$ . a. Is this correct? b. How it can be proved mathematically? | One way to prove that two regular expressions $r_1,r_2$ generate the same language is to show both inclusions: Show that if $w$ is generated by $r_1$ then it is generated by $r_2$ . Show that if $w$ is generated by $r_2$ then it is generated by $r_1$ . Another way is to mechanically convert the regular expressions to NFAs, then to DFAs, then use the product construction to construct a DFA for the symmetric difference of the languages generated by the two regular expressions, then to show that no accepting state is reachable from the initial state. You are suggesting a third way — show that both regular expressions generate a particular language $L$ . You can use the methods above, or other methods, to show separately that each of $r_1,r_2$ generate the language $L$ . A fourth way is to use the algebra of regular expressions, some axiomatizations of which are complete for the equational theory of regular expressions (which means that if two regular expressions generate the same language, then this can be proved using the axioms). | {
"source": [
"https://cs.stackexchange.com/questions/150895",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/145771/"
]
} |
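Alongside those proof strategies, a quick mechanical cross-check (my own addition, evidence rather than a proof) is often useful: both expressions describe the strings with no two consecutive 0s, which can be tested by brute force over all short strings using Python's re module (writing + as | and the empty string as an optional part):

import re
from itertools import product

r1 = re.compile(r"(?:1|01)*0?")
r2 = re.compile(r"(?:1*011*)*0?|1*0?")

for n in range(11):
    for bits in product("01", repeat=n):
        w = "".join(bits)
        m1 = r1.fullmatch(w) is not None
        m2 = r2.fullmatch(w) is not None
        assert m1 == m2 == ("00" not in w), w
print("r1 and r2 agree on all strings up to length 10")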
152,064 | A standalone statement of my question Given a program that takes no argument, we are interested in whether the program will eventually terminate. My question is this: Theoretically speaking, can we always find a proof of the termination/non-termination of a program? Clarification Unlike the general halting problem, this problem does not require a mechanical procedure to generate a proof for each program which can potentially depend on the procedure itself, but instead, it allows the proof to depend on the specified program. It is thus a much weaker version. There are obviously some proofs for some terminations and some non-terminations, and there are cases that remain unknown to this day (such as the evaluation of incrementing a number until finding a counter example of Collatz conjecture). But more generally, is there any result on this? Is it always possible to prove whether the program terminates or not? Or is it provable that some programs cannot be proved either ways? (Note that the answer to this question does not require solving, say, Collatz conjecture because it will only say there is a proof maybe that it terminates or maybe that it does not) What I have thought about? Cases that are easy are these two: If it terminates, we just run the program and the termination proves itself. If it falls into some repetitive periods, we track the history of all variables and we can prove that it does not terminate by remarking that it goes into a loop after certain step. One case remains where the non-termination will never fall into periods and keep visiting new states. In this case, my first thought is that it comes down (almost) to prove the unboundedness (of some sort) of a sequence (of some kind of structures) defined by a program. So maybe a weaker version of my question would be: Is the unboundedness of a sequence of natural numbers (generated by the program) always provable? | Actually this is no different from the halting problem unsolvability. If you have any formal system T with a proof verifier program V that can reason about programs (as you desire in your question), then let H be the program that does the following on input (P,X): For each string s in length-lexicographic order: If V( "Program P halts on input X." , s ) then output "true". If V( "Program P does not halt on input X." , s ) then output "false". Here V(Q,s) outputs "true" if s is a valid proof of Q and "false" otherwise. V always halts, because that is what it means for T to have a proof verifier program. (And we cannot use and do not care about formal systems that do not.) Now, if for every (P,X) there is always a proof over T of either "Program P halts on input X." or its negation, then H solves the halting problem (because H eventually checks each possible proof), which is impossible. Here I am assuming that your T is sound for program halting (i.e. does not prove a false statement about program halting). Otherwise, it is possible that T proves the wrong thing and hence H fails to solve the halting problem. By the way, unsolvability of the halting problem and another computability question called the zero-guessing problem are very important facts that can be used to easily prove the generalized incompleteness theorem , by essentially the same kind of reasoning. Incidentally, Godel proved his incompleteness theorem for PA under an assumption called ω-consistency, which is essentially equivalent to PA being sound for program halting. Rosser removed that assumption by a clever trick. 
But Rosser's version also can be proven easily using the zero-guessing problem instead of the halting problem. | {
"source": [
"https://cs.stackexchange.com/questions/152064",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/151253/"
]
} |
152,613 | Arrays are generally presented as data structures with $\Theta(N)$ traversal and $\Theta(1)$ random element access. However, this seems inconsistent: if array access is really $\Theta(1)$ , this means that the size of an element index is bounded by a constant (e.g., int64), since it can be processed in $\Theta(1)$ . However, this implies that the array size is bounded by a constant (e.g., 2 64 elements), which makes traversal $\Theta(1)$ . If traversal is $\Theta(N)$ , then the size of the index has an information theoretic lower bound of $\Theta(\log N)$ . Accessing an arbitrary $k$ -th element requires processing the entire integer k, which has $\Omega(\log N)$ worst-case time-complexity (as $k$ can be arbitrary large). In which model can both "standard complexities" defined for arrays, $\Theta(N)$ traversal and $\Theta(1)$ access simultaneously be true without the model being inconsistent? | It's a good question. From a pragmatic perspective, we tend not to worry about it. From a theoretical perspective, see the transdichotomous model . In particular, a standard assumption is that there is some integer $M$ that is large enough (i.e., larger than the input size; larger than the maximum size of memory needed), and then we assume that each word of memory can store $\lg M$ bits, and we assume that each memory access takes $O(1)$ time. Notice how this solves your paradox. In particular, we'll have $N \le M$ , so there is enough space in the array to store the entire array. Also, each array access really does take $O(1)$ time. The solution here is that $M$ grows with the size of the array, and is not a single fixed constant. You can think of the transdichotomous model as assuming that we'll build a machine that is large enough to handle the data we're processing. Of course, the larger the data you have, the more bits you need to have in the address, so we need a wider bus to memory, so the cost grows -- but we just "ignore" this or assume it grows as needed. There is a sense in which this model is cheating a little bit. While the time complexity of a memory fixed is $O(1)$ (fixed, independent of $N$ or $M$ ), the dollar cost of the computer does grow with $N$ or $M$ : to handle a larger value of $M$ , we need a larger data bus to memory, so the computer costs more. So there is a sense in which the transdichotomous model is cheating, and the cost of such a computer will go up as $\Theta(M \log M)$ , not as $\Theta(M)$ . In effect, the transdichotomous model is assuming we use increasing amounts of parallelism as the size of our data grows. You could argue that it is realistic, or that it is "sweeping under the rug" the cost of increasing the size of data buses, etc. I think both viewpoints have some validity. This is analogous to how we think about Turing machines as a reasonable model for everyday computers. Of course, any one computer has a fixed amount of memory, so in principle we could say that it is a finite-state machine (with a gigantic but fixed number of possible states), so it can only accept a regular language, not all decidable languages. But that isn't a very useful viewpoint. A way out of that is to assume that when our computer runs out of storage, we go out and buy more hard drives, as many as are needed, to store the data we are working with, so its amount of memory is unlimited, and thus a Turing machine is a good model. A pragmatic answer is that Turing machines are closer to every computer than finite automata. | {
"source": [
"https://cs.stackexchange.com/questions/152613",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/151726/"
]
} |
153,104 | PRNGs (pseudorandom number generators) generally have a bit length for the binary numbers they generate (e.g. 32 bits, 64 bits). This is the universe of their possible numbers. It seems that they typically cycle when they reach their 2^bitlen number. Is there one that, before it cycles, will generate each possible number in its universe, exactly once, without repetition, for larger bitspaces (e.g. 64, 128, 256), without large memory requirements (such as storing numbers generated so far, or a shuffled list of future numbers). Another way to put this: Is there an algorithm than exhaustively “traverses” or “visits” each binary number in a bitspace, exactly once, in a pseudorandom order? (with large bitspace and low memory usage, as above) EDITS: I now have the impression that, while PRNGs do typically eventually cycle (repeat their sequence), the point at which one cycles is specific to the algorithm, and is usually NOT after 2^bitlen numbers (where bitlen is the size in bits of their binary output). Please correct this if wrong. I expect that a PRNG is still a PRNG if it generates a duplicate number (within its period, before it cycles), or if it never generates (within its period) some numbers that are possible in the bitspace of its output. I'm not looking for these kinds. I'm looking for an algorithm that works as if it's just reading off N-bit binary numbers from a massive randomly-shuffled list of all N-bit binary numbers, but without the massive memory requirement of such a list. I'm interested in algorithms meeting this criteria that are pseudorandom in a cryptographically-secure way, and also those not cryptographically-secure that are quicker or simpler. | Sure. Pick a block cipher (i.e., pseudorandom permutation ), $E_K$ , and a random key for it, $K$ . Let $x_i=E_K(i)$ . Then this has the properties you are looking for. Short explanation: As the block cipher $E_K$ maps each n -bit value uniquely to another n -bit value, all the resulting values must be different for different input values.
Effectively that means $E_K$ creates a permutation of n -bit values that can be varied by changing $K$ . | {
"source": [
"https://cs.stackexchange.com/questions/153104",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/152306/"
]
} |
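A sketch of the idea in the answer above (my own addition): instead of a real block cipher such as AES, this uses a toy hash-based Feistel network, which is a permutation by construction, so the "visits every value exactly once" property can be checked exhaustively on a 16-bit domain. For real use you would apply a proper cipher (or format-preserving encryption) at the full bit width.

import hashlib

def feistel_perm(key, x, rounds=8, half_bits=8):
    # Toy 16-bit keyed permutation; a Feistel network is invertible for any round function.
    mask = (1 << half_bits) - 1
    left, right = x >> half_bits, x & mask
    for r in range(rounds):
        h = hashlib.sha256(key + bytes([r]) + right.to_bytes(2, "big")).digest()
        left, right = right, left ^ (h[0] & mask)
    return (left << half_bits) | right

key = b"any key you like"
outputs = [feistel_perm(key, i) for i in range(2 ** 16)]
assert len(set(outputs)) == 2 ** 16     # every 16-bit value appears exactly once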
153,121 | Data that is only accessible in a scope seems to still be maintained on the stack. What is the reason that entering and exiting scopes (in general) does not use the same "prologue and epilogue" instructions that are used when entering and exiting functions? As test1 and test2 below show, in test2 8 bytes are allocated on the stack even though the scope of int a has already ended when int b is declared. The ISA is x86 (compiled on godbolt.org with x86-64), but I assume this behavior might exist under many different standards, and I ask from a more general computer science point of view. void test1(){
int a;
{ a = 141; }
a = 257;
}
test1():
push rbp
mov rbp, rsp
mov DWORD PTR [rbp-4], 141
mov DWORD PTR [rbp-4], 257
nop
pop rbp
ret
void test2(){
{ int a = 141; }
int b = 257;
}
test2():
push rbp
mov rbp, rsp
mov DWORD PTR [rbp-4], 141
mov DWORD PTR [rbp-8], 257
nop
pop rbp
ret | Sure. Pick a block cipher (i.e., pseudorandom permutation ), $E_K$ , and a random key for it, $K$ . Let $x_i=E_K(i)$ . Then this has the properties you are looking for. Short explanation: As the block cipher $E_K$ maps each n -bit value uniquely to another n -bit value, all the resulting values must be different for different input values.
Effectively that means $E_K$ creates a permutation of n -bit values that can be varied by changing $K$ . | {
"source": [
"https://cs.stackexchange.com/questions/153121",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/152243/"
]
} |
153,158 | I assume that computers make many mistakes (like errors, bugs, glitches, etc.), which can be observed from the amount of questions asked everyday on different communities (like Stack Overflow) showing people trying to fix such issues. If computers really make many errors (as I assumed earlier) then critical tasks (like signing in or receiving a receipt) must be designed to be almost error-free, unlike most of the tasks of most software and video games. | If computers really make many errors (as I assumed earlier)... Your assumption is wrong. Firstly, computers do not (except in extreme cases) "make many errors", humans do. Computers simply do what they are told to, very quickly and very well. Extraordinarily well, all things considered. Secondly, what you're perceiving as a high rate of errors is in reality a high perception of errors. A banking system that handles millions of transactions per day may have a hidden bug (a human error in coding) that in a very specific set of circumstances does something incorrect. Those circumstances may only occur after years of correct operations. One failure in several billion operations is not a high error rate, but you hear about the failure and assume that the banking system must be error prone. Back in the days when humans handled all the transactions - way, way back when that was even possible - the average error rate for transactions was many orders of magnitude higher. You can see that today in accounting circles, where end-of-month processing routinely involves tracking down human data entry errors to figure out why the (electronically recorded and processed) books don't balance. then critical tasks (like signing in or receiving a receipt) must be designed to be almost error-free While signing into a system is a relatively simple thing to implement, the big-ticket flaws you've heard about generally come from outside of your own implementation. OpenSSL's Heartbleed vulnerability potentially allowed an attacker to snoop on your secure communications, Spectre/Meltdown potentially allows malware to snoop on your program's memory, and there are all sorts of interesting attacks on encryption that make life difficult. And any time you hear about one of these things it's sensational, front-page news on the tech sites and a flurry of "how do I protect against this" posts on programming sites. And yet billions of logins happen daily, more billions of transactions get processed, making the true failure rate almost absurdly low. unlike most of the tasks of software and video games. Software development for critical systems is an entirely different environment than game development. For one thing, only one of the two is about making money as quickly and cheaply as possible. Games developers don't actually care if a few bugs go through to production, because who cares if your car clips through walls in an out-of-the-way part of the map, or in some rare case manages to slide through the ground and you end up falling out of the map... just reload a save or something, and we'll patch that later. After all, modern gamers are used to being used as alpha testers on games, just as long as they get to have some fun who cares? Especially if they're paying for the privilege. And it's not like you have the time or the budget to rigorously test every possible code path. Critical systems development can't be that sloppy. Every element is tested thoroughly, ever combination of elements is further tested. 
A lot of very bright people are paid a lot of money to find new ways to torture your code to make it break down and produce errors, and when they succeed it's back to the dev team to find out why it broke and fix that issue. The customer doesn't get to see the system in action until it has been tested in every conceivable way. Because when it fails really bad things happen, and save points are really far apart in the real world. Why do we rely on computers in critical fields? The simple reality is that there is no better option available to us. The things we use computers for simply can't be handled by humans alone in the volume, speed and accuracy required. Are there alternatives? Maybe. Analog computing is great for some things, quantum computing offers some interesting possibilities. But for right now general-purpose computing is the best option on so many levels, and it's what we've spent most of the last century focusing on. | {
"source": [
"https://cs.stackexchange.com/questions/153158",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/152378/"
]
} |
153,698 | I'm trying to find examples of languages that don't seem regular, but are. A reference to where such examples may be found is also appreciated. So far I've found two. One is $L_1=\{a^ku\,\,|\,\,u\in \{a,b\}^∗$ and $u$ contains at least $k$ $a$ 's, for $k\geq 1\}$ , from this post , and the other is $L_2 = \{uww^rv\,\,|\,\, u,w,v\in\{a,b\}^+\}$ , which is an exercise (exercise 19 from section 4.3) in An Introduction to Formal Languages and Automata by Peter Linz. I suppose the aspect of seeming to be regular depends on your familiarity with the topic, but, for me, I would have said that those languages were not regular at a first glance. The trick seems to be to write a simple language in more complicated terms, like using $ww^R$ , which reminds us of the irregular language of even length palindromes. I'm not looking for extremely complicated ways of expressing a regular language, just some examples where the definition of the language seems to rely on concepts that usually make a language irregular, but are then "absorbed" by the other terms in the definition. | My favorite example of this, which is often used as a difficult/tricky exercise, is the language: $$L=\{w\in \{0,1\}^*:w \text{ has an equal number of } 01\text{ and }10\}$$ This has the strong flavor of the non-regular "same number of $0$ and $1$ ", but the alternation of $0$ and $1$ makes it regular nonetheless. | {
"source": [
"https://cs.stackexchange.com/questions/153698",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/125650/"
]
} |
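A quick empirical check of why this one is regular (my own addition, not part of the original answer): for a nonempty string the two counts are equal exactly when the first and last symbols agree, a condition a small DFA can track.

from itertools import product

def count(w, p):
    return sum(w[i:i + 2] == p for i in range(len(w) - 1))

for n in range(13):
    for bits in product("01", repeat=n):
        w = "".join(bits)
        in_L = count(w, "01") == count(w, "10")
        assert in_L == (n == 0 or w[0] == w[-1]), w
print("checked all strings up to length 12")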
153,752 | I can't seem to get a straight answer as to what a CPU clock is. I know there are quartz crystal clocks that work by the bending of quartz that happens as an electrical current is passed through it. Do modern CPUs also use a crystal with the same property? | Modern clocks are originally generated by quartz crystal oscillators of about 20MHz or so, and then the frequency is multiplied by one or more phase-locked loops to generate the clock signals for different parts of the system. (such as 4GHz for a CPU core). This is mostly a question of electronics design, though. | {
"source": [
"https://cs.stackexchange.com/questions/153752",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/153045/"
]
} |
154,386 | $A$ is an array of length $n$ and $B$ is an $n\times n$ matrix. I want to return an array $C$ of size $n$ such that: $$C_{i} = \sum_{j=1}^{n} \max(0, a_i - b_{ij}) $$ In pseudocode it could be like below:
for i = 1 to n:
    C[i] = 0
    for j = 1 to n:
        C[i] += max(0, a[i] - b[i,j])
This runs in O(n^2); is it possible to lower that? | That's not possible. You have to read in the entire $B$ matrix to determine the correct answer, which fundamentally requires $O(n^2)$ time. | {
"source": [
"https://cs.stackexchange.com/questions/154386",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/153410/"
]
} |
157,790 | I have been studying the necessity of a WHILE loop when defining the Ackermann Function.
I am looking to write a program to compute the Ackermann function in a high level language such as Python or JavaScript to compare it to the WHILE language . The Ackermann function is defined recursively, and recursive calls do not exist in the WHILE language. Every program with an alternative method to recursion has used a stack. Stacks do not exist in the WHILE language. Is there any way to program the Ackermann function in a HLL without a stack or recursion? | Good question! It is possible using only natural numbers and arithmetic to implement a stack, due to Gödel numbering . What's the basic idea? Well, a stack is basically a nested sequence of pairs: the stack $(1, 2, 3)$ (with $1$ on top) can be thought of as $(1, (2, 3))$ . And in turn, we can encode pairs using this neat formula: $$
\texttt{encode}(a, b) = a + \binom{a + b + 1}{2} = a + \frac{(a + b + 1)(a + b)}{2}
$$ This is called a pairing function and this one is due to Cantor.
Some examples may make the function more clear: $$
\texttt{encode}(0, 0) = 0 + 0 = 0\\
\texttt{encode}(0, 1) = 0 + 1 = 1 \\
\texttt{encode}(1, 0) = 1 + 1 = 2 \\
\texttt{encode}(0, 2) = 0 + 3 = 3 \\
\texttt{encode}(1, 1) = 1 + 3 = 4 \\
\texttt{encode}(2, 0) = 2 + 3 = 5 \\
\texttt{encode}(0, 3) = 0 + 6 = 6 \\
\cdots
$$ Then we can also define a corresponding function $\texttt{decode}(n)$ which returns a pair of integers, so that $\texttt{decode}(\texttt{encode}(a, b)) = (a, b)$ .
The kicker is that both encode and decode are definable as WHILE programs! Implementing encode and decode It should be clear how to implement $\texttt{encode}$ : WHILE programs have arithmetic, so we can simply compute the answer in a single assignment statement. For $\texttt{decode}$ , there are some more efficient ways, but one way that works is simply to loop over all pairs of integers and try encoding them: decode(n):
    a := 0
    b := 0
    done := 0
    while done == 0:
        c := encode(a, b)
        if c == n:
            done := 1
        else if a > 0:
            a := a - 1
            b := b + 1
        else:
            a := b + 1
            b := 0
The line c := encode(a, b) is a subprocedure: it can be simply replaced inline with the definition of encode. Implementing a stack What operations does a stack data type need to support? There are basically just four operations: empty: S returning an empty stack; push: (S, nat) -> S pushing a new value; pop: S -> (S, nat) popping the top value, and is_empty: S -> bool to check whether the stack is empty. Each of these can be implemented using encode and decode . For the empty stack, we can use the natural number 0. For push, we can use push(stack, n) = encode(stack, n) + 1 and for pop, we can use: pop(stack) = if stack == 0 then (0, 0) else decode(stack - 1) where the return value, an ordered pair, is stored into two designated variables.
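To see the construction work end to end, here is a small Python sketch (my own addition, not part of the original answer) of the pairing function and the stack operations just described, with a round-trip check:

def encode(a, b):
    return a + (a + b + 1) * (a + b) // 2          # Cantor pairing

def decode(n):
    a, b = 0, 0                                    # same brute-force search as above
    while encode(a, b) != n:
        if a > 0:
            a, b = a - 1, b + 1
        else:
            a, b = b + 1, 0
    return a, b

def empty():         return 0
def is_empty(s):     return s == 0
def push(s, n):      return encode(s, n) + 1
def pop(s):          return (0, 0) if s == 0 else decode(s - 1)   # returns (rest, top)

s = empty()
for x in [1, 2, 3]:
    s = push(s, x)
rest, top = pop(s)
assert top == 3 and not is_empty(rest) and pop(rest) == (push(empty(), 1), 2)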
Finally, is_empty is just checking whether stack == 0 . Implementing Ackermann As you noted, recursive functions can be implemented using WHILE loops and a stack. So implementing the Ackermann function is just a matter of applying the stack implementation above. Each time you want to push or pop from the stack, you replace with the above procedures. You can have as many stacks as you want, stored in different natural number variables. The same trick works to implement any recursive or Turing-computable function; this is why WHILE is Turing-complete. Notes Finally, two caveats. First, none of these encodings are particularly efficient. Even the basic encode function is quite unwieldy; nested calls to it to create a stack creates absolutely astronomical integers very quickly. Second, for any of this to work, it's important that the natural numbers in the WHILE language are true integers, not the fixed-width integers that are common in real computer architectures. For fixed-width integers, the WHILE language is certainly weaker than arbitrary computation -- it cannot implement any nontrivial Turing-computable functions, let alone the Ackermann function. As a result of both of these limitations, in practice, WHILE is not really sufficient for general computation with recursive functions. Instead, real compilers rely on the program stack and dynamically allocated memory on the heap to implement complex data structures and computations. | {
"source": [
"https://cs.stackexchange.com/questions/157790",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/148394/"
]
} |
158,836 | I have seen this C code showing an implementation of Peterson's Critical Section algorithm. It is obviously skeletal and hardwired for two threads but the logic is supposed to be correct in detail. Despite reading and talking I remain with a grain of doubt about the following line (and the similar line for Thread B): while (turn == 1 && flag1) skip; When it is compiled, the while clause will generate multiple instructions which it seems to me can lead to a race condition in a pre-emptive scheduling model. While I trust the proofs etc. I have not seen a good way to refute my concern. (I know this is an oldie but goodie so feel free to respond with a link.) | Good question! It is possible using only natural numbers and arithmetic to implement a stack, due to Gödel numbering . What's the basic idea? Well, a stack is basically a nested sequence of pairs: the stack $(1, 2, 3)$ (with $1$ on top) can be thought of as $(1, (2, 3))$ . And in turn, we can encode pairs using this neat formula: $$
\texttt{encode}(a, b) = a + \binom{a + b + 1}{2} = a + \frac{(a + b + 1)(a + b)}{2}
$$ This is called a pairing function and this one is due to Cantor.
Some examples may make the function more clear: $$
\texttt{encode}(0, 0) = 0 + 0 = 0\\
\texttt{encode}(0, 1) = 0 + 1 = 1 \\
\texttt{encode}(1, 0) = 1 + 1 = 2 \\
\texttt{encode}(0, 2) = 0 + 3 = 3 \\
\texttt{encode}(1, 1) = 1 + 3 = 4 \\
\texttt{encode}(2, 0) = 2 + 3 = 5 \\
\texttt{encode}(0, 3) = 0 + 6 = 6 \\
\cdots
$$ Then we can also define a corresponding function $\texttt{decode}(n)$ which returns a pair of integers, so that $\texttt{decode}(\texttt{encode}(a, b)) = (a, b)$ .
The kicker is that both encode and decode are definable as WHILE programs! Implementing encode and decode It should be clear how to implement $\texttt{encode}$ : WHILE programs have arithmetic, so we can simply compute the answer in a single assignment statement. For $\texttt{decode}$ , there are some more efficient ways, but one way that works is simply to loop over all pairs integers and try encoding them: decode(n):
a := 0
b := 0
done := 0
while done == 0:
c := encode(a, b)
if c == n:
done := 1
else if a > 0:
a := a - 1
b := b + 1
else:
a := b + 1
b := 0 The line c := encode(a, b) is a subprocedure: it can be simply replaced inline with the definition of encode. Implementing a stack What operations does a stack data type need to support? There are basically just four operations: empty: S returning an empty stack; push: (S, nat) -> S pushing a new value; pop: S -> (S, nat) popping the top value, and is_empty: S -> bool to check whether the stack is empty. Each of these can be implemented using encode and decode . For the empty stack, we can use the natural number 0. For push, we can use push(stack, n) = encode(stack, n) + 1 and for pop, we can use: pop(stack) = if stack == 0 then (0, 0) else decode(stack - 1) where the return value, an ordered pair, is stored into two designated variables.
Finally, is_empty is just checking whether stack == 0 . Implementing Ackermann As you noted, recursive functions can be implemented using WHILE loops and a stack. So implementing the Ackermann function is just a matter of applying the stack implementation above. Each time you want to push or pop from the stack, you replace with the above procedures. You can have as many stacks as you want, stored in different natural number variables. The same trick works to implement any recursive or Turing-computable function; this is why WHILE is Turing-complete. Notes Finally, two caveats. First, none of these encodings are particularly efficient. Even the basic encode function is quite unwieldy; nested calls to it to create a stack creates absolutely astronomical integers very quickly. Second, for any of this to work, it's important that the natural numbers in the WHILE language are true integers, not the fixed-width integers that are common in real computer architectures. For fixed-width integers, the WHILE language is certainly weaker than arbitrary computation -- it cannot implement any nontrivial Turing-computable functions, let alone the Ackermann function. As a result of both of these limitations, in practice, WHILE is not really sufficient for general computation with recursive functions. Instead, real compilers rely on the program stack and dynamically allocated memory on the heap to implement complex data structures and computations. | {
"source": [
"https://cs.stackexchange.com/questions/158836",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/82493/"
]
} |
158,842 | Can we have an algorithm that takes some input and does something random to it (in such a way that the algorithm does terminate) which does not have a worst-case running time upper-bound? A (non-)example which shows what I mean: If let's say that my algorithm is one that takes an input number, generates a random number, adds them both and puts the program to sleep for that number of seconds. Would the running time here be unbounded (even though the algorithm would terminate)? I can see how in this example, it might not be, because we can represent the running time in terms of both the input size and the randomly generated number. But could there be some other (possibly non-deterministic) algorithm with such traits? That does something random to the input, terminates and does not have an upper bound for worst-case running time? Thank you! | | {
"source": [
"https://cs.stackexchange.com/questions/158842",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/159018/"
]
} |
1 | Possible Duplicate: Write an Elevator Pitch / Tagline Note: We are closing this domain naming thread. It is asking the entirely wrong question. See this blog post for details: Domain Names: Wrong Question We're going to keep the name cstheory.stackexchange.com. But we WILL be setting up redirects from the more "popular" domains names. (e.g. seasonedadvice.com to cooking.stackexchange.com, basicallymoney.com to money.stackexchange.com, and others as we go through the list). New question: " Write and Elevator Pitch / Tagline! " Click here to contribute ideas and vote. [original message text below] Update : this post is now closed. CSTheory.org it is.. I'll start the ball rolling with the canonical question from 'The 7 essential meta questions'. Please post each idea as an answer, and if you know, indicate whether it's taken or not. Based on past experience with other SE sites, a domain name that's parked but not taken is admissible. Note that you can vote more than once! | CSTheory.org Extremely official and informative.
(A bit better grammatically than TheoryCS.org, I think; but both are good). | {
"source": [
"https://cstheory.meta.stackexchange.com/questions/1",
"https://cstheory.meta.stackexchange.com",
"https://cstheory.meta.stackexchange.com/users/80/"
]
} |
92 | I thought the all QA here should be covered by mathoverflow.net ? | I am personally torn about this. It is great to see top mathematicians and computer scientists interacting about deep questions. I also participate in both MO and TheoryOverflow (or whatever we decide to call ourselves). However, my strong impression is that the TCS community at MathOverflow is a small and marginal part of the site. Only the most clearly mathematical questions appear suitable for MO. If there isn't a direct relationship to set theory, foundations, or logic, or the question isn't related to a hot complexity topic to which heavy mathematics machinery has been applied to, then the question and its answers appear likely to remain in a dusty corner of the site. Of course we could encourage the TCS community to join MO. But I think there is a strong part of the TCS community (at least as defined by EATCS, if not SIGACT) which is altogether unlikely to find its questions relevant to MO. I would like to see questions about programming language semantics domain theory models of concurrency algorithmic game theory quantum complexity theory of parallel computation and distributed systems automata theory in databases and verification finite model theory and I don't think any of these questions would currently be welcome on MO. | {
"source": [
"https://cstheory.meta.stackexchange.com/questions/92",
"https://cstheory.meta.stackexchange.com",
"https://cstheory.meta.stackexchange.com/users/440/"
]
} |
11 | Functional programming has a theoretical basis in lambda calculus and combinatory logic . As someone involved with statistical computing, I find these concepts to be very useful for modeling. Is there an equivalent mathematical basis of imperative programming , or did it simply grow out of practical hardware application in machine language and the subsequent development of FORTRAN ? | In general, when mathematics is used to study some X , one first needs a model of X , and then develops a theory, a set of results about that model. I guess that theory may be said to be a "theoretical basis" for X . Now set X=computation. There are many models of computation, many involving "state". Each model has its own "theory" and it is sometimes possible to "translate" between models. I believe it's hard to say which model is more "basic"---they are simply designed with different goals in mind. Turing machines were designed to define what is computable . So they make a good model if you care about whether there exists an algorithm for a certain problem. This model is sometimes abused to study the efficiency of algorithms or the hardness of problems, under the pretext that it's good enough, at least if you only care about polynomial/non-polynomial. The RAM model is closer to a real computer and therefore better if you want a precise analysis of an algorithm. To put lower bounds on the hardness of problems it is better to not use a model that resembles too much today's computers because you want to cover a wide range of possible computers, while still being more precise than just polynomial/non-polynomial. In this context, I saw for example the cell-probe model used. If you care about correctness , then still other models are useful. Here you have operational semantics (which I'd say is the analogue of lambda calculus for statefull computations), axiomatic semantics (developed in 1969 by Hoare based on Floyd's inductive assertions from 1967, which are popularized by Knuth in The Art of Computer Programming , volume 1), and others. To summarize, I think you are after models of computation. There are many such models, developed with various goals in minds, and many have state, so they correspond to imperative programming. If you want to know if something can be computed, then look at Turing machines. If you care about efficiency look at RAM models. If you care about correctness look at models that end in "semantics", such as operational semantics. Finally, let me mention that there is a big book online only about Models of Computation by John Savage. It is mostly about efficiency. For the correctness part I recommend you start with the classic papers of Floyd (1967) , Hoare (1969) , Dijkstra (1975) , and Plotkin (1981) . They are all pretty cool. | {
"source": [
"https://cstheory.stackexchange.com/questions/11",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/7/"
]
} |
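To give a flavour of the axiomatic style mentioned above, here is the Hoare assignment axiom together with one standard textbook instance of it (added purely as an illustration): $$
\{\, Q[e/x] \,\}\; x := e \;\{\, Q \,\}
\qquad \text{for example} \qquad
\{\, y + 1 > 0 \,\}\; x := y + 1 \;\{\, x > 0 \,\}
$$ Read: if $Q$ with $e$ substituted for $x$ holds before the assignment, then $Q$ holds after it.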
12 | I took a class once on Computability and Logic. The material included a correlation between complexity / computability classes (R, RE, co-RE, P, NP, Logspace, ...) and Logics (Predicate calculus, first order logic, ...). The correlation included several results in one fields, that were obtained using techniques from the other field. It was conjectured that P != NP could be attacked as a problem in Logic (by projecting the problem from the domain of complexity classes to logics). Is there a good summary of these techniques and results? | It's possible that you're asking about results in finite model theory (such as the characterization of P and NP in terms of various fragments of logic). The recent attempted proof of P != NP initially made heavy use of such concepts, and some good references (taken from the wiki ) are Erich Gradel's review of FMT and descriptive complexity Ron Fagin's article on descriptive complexity | {
"source": [
"https://cstheory.stackexchange.com/questions/12",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/81/"
]
} |
26 | This question is in regard to the Fisher-Yates algorithm for returning a random shuffle of a given array. The Wikipedia page says that its complexity is O(n), but I think that it is O(n log n). In each iteration i, a random integer is chosen between 1 and i. Simply writing the integer in memory is O(log i), and since there are n iterations, the total is O(log 1) + O(log 2) + ... + O(log n) = O(n log n) which isn't better the the naive algorithm. Am I missing something here? Note: The naive algorithm is to assign each element a random number in the interval (0,1) , then sort the array with regard to the assigned numbers. | I suspect that here, like in most algorithms work, the cost of reading and writing $O(\log n)$ bit numbers is assumed to be a constant. It's a minor sin, as long as you don't get carried away and collapse P and PSPACE by accident . | {
"source": [
"https://cstheory.stackexchange.com/questions/26",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/90/"
]
} |
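For reference, here is the algorithm under discussion in the form that is usually analyzed; each iteration draws one random index and performs one swap, which is the unit-cost count behind the $O(n)$ claim, while charging $O(\log i)$ bits per draw gives the $O(n \log n)$ figure from the question. This is a standard sketch, not code from the original posts:

import random

def fisher_yates_shuffle(a):
    # In-place Fisher-Yates shuffle: n - 1 iterations, one swap each.
    n = len(a)
    for i in range(n - 1, 0, -1):
        j = random.randint(0, i)   # uniform over {0, ..., i}
        a[i], a[j] = a[j], a[i]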
34 | A shuffle of two strings is formed by interspersing the characters into a new string, keeping the characters of each string in order. For example, MISSISSIPPI is a shuffle of MISIPP and SSISI . Let me call a string square if it is a shuffle of two identical strings. For example, ABCABDCD is square, because it is a shuffle of ABCD and ABCD , but the string ABCDDCBA is not square. Is there a fast algorithm to determine whether a string is square, or is it NP-hard? The obvious dynamic programming approach doesn't seem to work. Even the following special cases appear to be hard: (1) strings in which each character appears at most four six times, and (2) strings with only two distinct characters. As Per Austrin points out below, the special case where each character occurs at most four times can be reduced to 2SAT. Update: This problem has another formulation that may make a hardness proof easier. Consider a graph G whose vertices are the integers 1 through n; identify each edge with the real interval between its endpoints. We say that two edges of G are nested if one interval properly contains the other. For example, the edges (1,5) and (2,3) are nested, but (1,3) and (5,6) are not, and (1,5) and (2,8) are not. A matching in G is non-nested if no pair of edges is nested. Is there a fast algorithm to determine whether G has a non-nested perfect matching, or is that problem NP-hard? Unshuffling a string is equivalent to finding a non-nested perfect matching in a disjoint union of cliques (with edges between equal characters). In particular, unshuffling a binary string is equivalent to finding a non-nested perfect matching in a disjoint union of two cliques. But I don't even know if this problem is hard for general graphs, or easy for any interesting classes of graphs. There is an easy polynomial-time algorithm to find perfect non- crossing matchings. Update (Jun 24, 2013): The problem is solved! There are now two independent proofs that identifying square strings is NP-complete. In November 2012, Sam Buss and Michael Soltys announced a reduction from 3-partition , which shows that the problem is hard even for strings over a 9-character alphabet. See "Unshuffling a Square is NP-Hard ", Journal of Computer System Sciences 2014. In June 2013, Romeo Rizzi and Stéphane Vialette published a reduction from the longest common subsequence problem. See " On Recognizing Words That Are Squares for the Shuffle Product ", Proc. 8th International Computer Science Symposium in Russia , Springer LNCS 7913, pp. 235–245. There is also a simpler proof that finding non-nested perfect matchings is NP-hard, due to Shuai Cheng Li and Ming Li in 2009. See " On two open problems of 2-interval patterns ", Theoretical Computer Science 410(24–25):2410–2423, 2009. | Michael Soltys and I have succeeded in proving that the problem of determining whether a string can be written as a square shuffle is NP complete. This applies even over a finite alphabet with only $7$ distinct symbols, although our proof is written for an alphabet with $9$ symbols. This question is still open for smaller alphabets, say with only $2$ symbols. We have not looked at the problem under the restriction that each symbol appears only $6$ times (or, more generally, a constant number of times); so that question is still open. The proof uses a reduction from $3$-Partition. 
It is too long to post here, but a preprint, "Unshuffling a string is $\text{NP}$-hard", is available from our web pages at: http://www.math.ucsd.edu/~sbuss/ResearchWeb/Shuffle/ and http://www.cas.mcmaster.ca/~soltys/#Papers . The paper has been published in the Journal of Computer System Sciences: http://www.sciencedirect.com/science/article/pii/S002200001300189X | {
"source": [
"https://cstheory.stackexchange.com/questions/34",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/111/"
]
} |
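For experimenting with small instances of the problem (and only small ones, in line with the hardness results above), a brute-force checker is easy to write: route each character of the input to one of the two copies and check that both copies spell the same word. The sketch below takes exponential time in the worst case:

from functools import lru_cache

def is_square(s):
    # Is s a shuffle of two copies of some word w?
    n = len(s)
    if n % 2:
        return False

    @lru_cache(maxsize=None)
    def go(i, pending):
        # pending = the part of copy 1 that copy 2 still has to reproduce
        if i == n:
            return pending == ""
        c = s[i]
        if go(i + 1, pending + c):    # give s[i] to copy 1
            return True
        # or match s[i] against the next outstanding symbol of copy 2
        return bool(pending) and pending[0] == c and go(i + 1, pending[1:])

    return go(0, "")

With this, is_square("ABCABDCD") is True and is_square("ABCDDCBA") is False, matching the examples in the question.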
39 | I'm trying to understand the relationship between graph isomorphism and the hidden subgroup problem. Is there a good reference for this ? | References can be found in martinschwarz's answer, but here's a summary of a couple reductions. The symmetric group $S_n$ acts on graphs of $n$ vertices by permuting the vertices. Determining whether two graphs are isomorphic is polynomial-time equivalent to computing a polynomial-size generating set for $Aut(G)$. Reduction to the HSP over the symmetric group $S_n$ (where $n$ is the number of vertices in the graph). The function $f$ is $f(p)=p(G)$ where $p$ is a permutation in $S_n$, and $p(G)$ is the permuted version of $G$. Then $f$ is constant on cosets of $Aut(G)$ and distinct on distinct cosets (note that the image of $f$ consists of all graphs isomorphic to $G$). Since the hidden subgroup is exactly $Aut(G)$, if we could solve this HSP then we would have the generating set for $Aut(G)$, which is all we need to solve GI (see above). Reduction to the HSP over $S_n \wr \mathbb{Z}/2\mathbb{Z}$. If we want to know if two graphs $G$ and $H$ on $n$ vertices are isomorphic, consider the graph $K$ which is the disjoint union of $G$ and $H$ on $2n$ vertices. Let $\mathbb{Z}/2\mathbb{Z}$ act on the vertices by swapping $i$ with $n+i$ for $i=1,...,n$. Either $Aut(K) = Aut(G) \times Aut(H)$ or $Aut(K) = (Aut(G) \times Aut(H)) \rtimes \mathbb{Z}/2\mathbb{Z}$. As before, let $f(x)=x(K)$ where $x$ is now an element of $S_n \wr \mathbb{Z}/2\mathbb{Z}$ that acts on $K$ as described. The hidden subgroup associated to $f$ is exactly $Aut(K)$, as in the previous reduction. If we solve this HSP, we get a generating set for $Aut(K)$. It is then easy to check whether the generating set contains any element that swaps the copy of $G$ with the copy of $H$ inside $K$ (has nontrivial $\mathbb{Z}/2\mathbb{Z}$ component). | {
"source": [
"https://cstheory.stackexchange.com/questions/39",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/80/"
]
} |
45 | I'm familiar with a lot of results that use the PCP theorem (mainly in approximating algorithms), but I've never come across a clear explanation of the PCP theorem (ie, that $\mathsf{NP} = \mathsf{PCP}(O(\log(n)),O(1))$). What are good papers/books to read for that? | Both Goldreich's complexity textbook and Arora and Barak's complexity textbook have chapters devoted to explaining the proof of the PCP theorem (with pictures!). Also, Dinur's paper is worthwhile to read, if you haven't tried to tackle it yet. It's at least more approachable (in my opinion) than the original proof, and you can get a good intuition for how the proof works by skimming just the first 12 pages (and delve into the technical proofs contained in the latter chunk of the paper later, if you prefer). | {
"source": [
"https://cstheory.stackexchange.com/questions/45",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/120/"
]
} |
52 | Assuming that P != NP, I believe it has been shown that there are problems which are not in P and not NP-Complete. Graph Isomorphism is conjectured to be such a problem. Is there any evidence of more such 'layers' in NP? i.e. A hierachy of more than three classes starting at P and culminating in NP, such that each is a proper superset of the other? Is it possible that the hierarchy is infinite? | Yes! In fact, there is provably an infinite hierarchy of increasingly harder problems between P and NP-complete under the assumption that P!=NP. This is a direct corollary of the proof of Ladner's Theorem (which established the non-emptiness of NP\P) Formally, we know that for every set S not in P, there exists S' not in P such that S' is Karp-reducible to S but S is not Cook-reducible to S'. Therefore, if P != NP, then there exists an infinite sequence of sets S 1 , S 2 ... in NP\P such that S i+1 is Karp-reducible to S i but S i is not Cook-reducible to S i+1 . Admittedly, the overwhelming majority of such problems are highly unnatural in nature. | {
"source": [
"https://cstheory.stackexchange.com/questions/52",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/126/"
]
} |
79 | Factoring and graph isomorphism are problems in NP that are not known to be in P nor to be NP-Complete. What are some other (sufficiently different) natural problems that share this property? Artificial examples coming directly from the proof of Ladner's theorem do not count. Are any of these example provably NP-intermediate, assuming only some "reasonable" hypothesis? | Here's a collection of some of the responses of problems between P and NPC: Factoring Isomorphism problems: Graph Isomorphism [not NPC unless $\sum_2^p=\prod_2^p$ ] (via @Jeff Kinne), Graph Automorphism, Group Isomorphism, Automorphism, Ring Isomorphism and Automorphism (via @Joshua Grochow) Computing the rotation distance between two binary trees or the flip distance between two triangulations of the same convex polygon (via @David Eppstein) The Turnpike Problem of reconstructing points on line from distances (via @Suresh Venkat) Problems arising from the Unique Games Conjecture (via @Moritz) Discrete Log Problem and others related to cryptographic assumptions (via @Joe Fitzsimons) Determining winner in parity games (via @mashca) Determining who has the highest chance of winning a stochastic game (via @Peter Shor on MO) Numbers in boxes problems (via @Joshua Grochow) Agenda control for balanced single-elimination tournaments (via @virgi) Knot triviality (via @JeffE) (Assuming NEXP≠EXP) padded versions of NEXP -complete problems (via @Joshua Grochow) Problems in TFNP (via @Marcos Villagra) Intersecting Monotone SAT (via @András Salamon) Minimum Circuit Size Problem (via @Eric Allender) Deciding whether a given triangulated 3-manifold is a 3-sphere (via @Joe O'Rourke and @Peter Shor) The Cutting Stock Problem with a constant number of object lengths (via @Suresh Venkat) Monotone Self-Duality (via @Danu) Planar Minimum Bisection (via @turkistany) Pigeonhole Subset Sum (via @user834) Square Root Sums (via @JeffE) Deciding Whether a Graph Admits a Graceful Labeling (via @Oleksandr Bondarenko) Gap version of the closest vector in lattice problem GapCVP $(\sqrt{n})$ (via @MCH) The linear divisibility problem [known to be $\gamma$ -complete but not NPC] (via @Oleksandr Bondarenko) Finding the VC dimension (via @Mohammad al Turkistany) Finding the minimum dominating set in a tournament (via @Mohammad al Turkistany) | {
"source": [
"https://cstheory.stackexchange.com/questions/79",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/123/"
]
} |
88 | Sorry for the catchy title. I want to understand, what should one have to do to disprove the Church-Turing thesis? Somewhere I read it's mathematically impossible to do it! Why? Turing, Rosser etc used different terms to
differentiate between: "what can be computed" and "what can be
computed by a Turing machine". Turing's 1939 definition regarding this is:
"We shall use the expression "computable function" to mean a function
calculable by a machine, and we let "effectively calculable" refer to
the intuitive idea without particular identification with any one of
these definitions". So, the Church-Turing thesis can be stated as follows:
Every effectively calculable function is a computable function. So again, what would the proof look like if one disproved this conjecture? | While it seems quite hard to prove the Church-Turing thesis because of the informal nature of "effectively calculable function", we can imagine what it would mean to disprove it. Namely, if someone built a device which (reliably) computed a function that cannot be computed by any Turing machine, that would disprove the Church-Turing thesis because it would establish the existence of an effectively calculable function that is not computable by a Turing machine. | {
"source": [
"https://cstheory.stackexchange.com/questions/88",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/-1/"
]
} |
93 | What are the best current lower bounds for time and circuit depth for 3SAT? | As far as I know, the best known "model-independent" time lower bound for SAT is the following. Let $T$ and $S$ be the running time and space bound of any SAT algorithm. Then we must have $T \cdot S \geq n^{2 \cos(\pi/7) - o(1)}$ infinitely often. Note $2 \cos(\pi/7) \approx 1.801$. (The result that Suresh cites is a little obsolete.) This result appeared in STACS 2010, but that is an extended abstract of a much longer paper, which you can get here: http://www.cs.cmu.edu/~ryanw/automated-lbs.pdf Of course, the above work builds on a lot of prior work which is mentioned in Lipton's blog (see Suresh's answer). Also, as the space bound S gets close to n, the time lower bound T gets close to n as well. You can prove a better "time-space tradeoff" in this regime; see Dieter van Melkebeek's survey of SAT time-space lower bounds from 2008. If you restrict yourself to multitape Turing machines, you can prove $T \cdot S \geq n^{2-o(1)}$ infinitely often. That was proved by Rahul Santhanam, and follows from a similar lower bound that's known for PALINDROMES in this model. We believe you should be able to prove a quadratic lower bound that is "model-independent" but that has been elusive for some time. For non-uniform circuits with bounded fan-in, I know of no depth lower bound better than $\log n$. | {
"source": [
"https://cstheory.stackexchange.com/questions/93",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/204/"
]
} |
138 | Why do most people prefer to use many-one reductions to define NP-completeness instead of, for instance, Turing reductions? | Two reasons: (1) just a matter of minimality: being NPC under many-one reductions is a formally stronger statement and if you get the stronger statement (as Karp did and as you almost always do) then why not say so? (2) Talking about many-one reductions gives rise to a richer, more delicate, hierarchy. For example the distinction NP vs co-NP disappears under Turing reductions. This is similar in spirit to why often one uses Logspace-reductions rather than polytime ones. | {
"source": [
"https://cstheory.stackexchange.com/questions/138",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/127/"
]
} |
174 | Wikipedia only lists two problems under "unsolved problems in computer science" : P = NP? The existence of one-way functions What are other major problems that should be added to this list? Rules: Only one problem per answer Provide a brief description and any relevant links | Can multiplication of $n$ by $n$ matrices be done in $O(n^2)$ operations? The exponent of the best known upper bound even has a special symbol, $\omega$. Currently $\omega$ is approximately 2.376, by the Coppersmith-Winograd algorithm . A nice overview of the state of the art is Sara Robinson, Toward an Optimal Algorithm for Matrix Multiplication , SIAM News, 38(9), 2005. Update: Andrew Stothers (in his 2010 thesis ) showed that $\omega < 2.3737$, which was improved by Virginia Vassilevska Williams (in a July 2014 preprint ) to $\omega < 2.372873$. These bounds were both obtained by a careful analysis of the basic Coppersmith-Winograd technique. Further Update (Jan 30, 2014): François Le Gall has proved that $\omega < 2.3728639$ in a paper published in ISSAC 2014 ( arXiv preprint ). | {
"source": [
"https://cstheory.stackexchange.com/questions/174",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/7/"
]
} |
175 | Parity-L is the set of languages recognized by a non-deterministic Turing machine which can only distinguish between an even number or odd number of "acceptance" paths (rather than a zero or non-zero number of acceptance paths), and which is further restricted to work in logarithmic space. Solving a linear system of equations over $\mathbb{Z}_2$ is a complete problem for Parity-L, and so Parity-L is contained in P. What other complexity class relations would be known, if Parity-L and P were equal? | parity-$L$ is in $NC^2$ and parity-$L=P$ would mean that $P$ can be simulated in parallel $\log^2$ time or in $\log^2$ space (since $NC^2$ is in $DSPACE(\log^2 n)$). | {
"source": [
"https://cstheory.stackexchange.com/questions/175",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/248/"
]
} |
189 | Paul Erdős talked about the "Book" where God keeps the most elegant proof of each mathematical theorem. This even inspired a book (which I believe is now in its 4th edition): Proofs from the Book . If God had a similar book for algorithms, what algorithm(s) do you think would be a candidate(s)? If possible, please also supply a clickable reference and the key insight(s) which make it work. Only one algorithm per answer, please. | Union-find is a beautiful problem whose best algorithm/datastructure ( Disjoint Set Forest ) is based on a spaghetti stack. While very simple and intuitive enough to explain to an intelligent child, it took several years to get a tight bound on its runtime. Ultimately, its behavior was discovered to be related to the inverse Ackermann Function, a function whose discovery marked a shift in perspective about computation (and was in fact included in Hilbert's On the Infinite ). Wikipedia provides a good introduction to Disjoint Set Forests . | {
"source": [
"https://cstheory.stackexchange.com/questions/189",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/126/"
]
} |
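To make the data structure concrete, here is a minimal disjoint-set forest with union by rank and path compression, the standard variant whose amortized cost per operation is bounded by the inverse Ackermann function mentioned above (illustrative sketch only):

class DisjointSetForest:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path compression: point x and its ancestors directly at the root.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1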
200 | Is it possible to translate a boolean formula B into an equivalent conjunction of Horn clauses? The Wikipedia article about HornSAT seems to imply that it is, but I have not been able to chase down any reference. Note that I do not mean "in polynomial time", but rather "at all". | No. Conjunctions of Horn clauses admit least Herbrand models, which disjunctions of positive literals don't. Cf. Lloyd, 1987, Foundations of Logic Programming . Least Herbrand models have the property that they are in the intersections of all satisfiers. The Herbrand models for $(a \lor b)$ are $\{\{a\}, \{b\}, \{a,b\}\}$, which doesn't contain its intersection, so as arnab says, $(a \lor b)$ is an example of a formula which can't be expressed as a conjunction of Horn clauses. Incorrect answer overwritten | {
"source": [
"https://cstheory.stackexchange.com/questions/200",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/161/"
]
} |
238 | Are there any references (online or in book form) that organize and discuss TCS theorems by proof technique? Garey and Johnson do this for the various kinds of widget constructions needed for NP-completeness proofs (particularly in chapter 3 of their book), but I'm wondering if there's anything that treats proof techniques in TCS more broadly. So for example, topics might include diagonalization, broken down further by the type of construction used; proofs by computation histories; tableau constructions; incompressibility arguments, etc. I suppose I could just chop up a standard theory of computation text and rearrange the sections, but it would be great if there is something out there that also provides some additional commentary and shows where there are commonalities between the techniques being used. Just to be clear, since any text is going to use proofs, what I'm really interested in finding is a reference where the proof techniques themselves are the actual subject matter. In addition to chapter 3 of Garey and Johnson, here's another partial example that just occurred to me: in Li and Vitanyi , in chapter 6 they discuss the incompressibility method and give examples of how to apply the technique. | The Complexity Theory Companion by Hemaspaandra and Ogihara . It's not exhaustive in terms of techniques (I imagine no such book is), but I think it qualifies as an answer to your question. Here are the titles of the chapters: The Self-Reducibility Technique. The One-Way Function Technique. The Tournament Divide and Conquer Technique. The Isolation Technique. The Witness Reduction Technique. The Polynomial Interpolation Technique. The Nonsolvable Group Technique. The Random Restriction Technique. The Polynomial Technique. | {
"source": [
"https://cstheory.stackexchange.com/questions/238",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/184/"
]
} |
252 | Let $G= (V, E)$ be a non-regular connected graph whose degree is bounded. Suppose that each node contain a unique token. I want to uniformly shuffle the tokens amongst the graph using only local swaps (i.e. exchange of the tokens between two adjacent nodes) ? Is there a lower bound known for this problem ? The only idea I had is to use a random walk result, then to see how much swaps I need to "simulate" the effect of random walks transporting tokens on the graph. | | {
"source": [
"https://cstheory.stackexchange.com/questions/252",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/76/"
]
} |
276 | What were the most surprising results in complexity? I think it would be useful to have a list of unexpected/surprising results. This includes both results that were surprising and came out of nowhere and also results that turned out different than people expected. Edit : given the list by Gasarch, Lewis, and Ladner on the complexity blog (pointed out by @Zeyu), let's focus this community wiki on results not on their list. Perhaps this will lead to a focus on results after 2005 (as per @Jukka's suggestion). An example: Weak Learning = Strong Learning [Schapire 1990] : (Surprisingly?) Having any edge over random guessing gets you PAC learning. Lead to the AdaBoost algorithm. | Here is the guest post by Bill Gasarch with help from Harry Lewis and Richard Ladner: http://blog.computationalcomplexity.org/2005/12/surprising-results.html | {
"source": [
"https://cstheory.stackexchange.com/questions/276",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/123/"
]
} |
284 | A problem P is said to be in APX if there exists some constant c > 0 such that a polynomial-time approximation algorithm exists for P with approximation factor 1 + c. APX contains PTAS (seen by simply picking any constant c > 0) and P. Is APX in NP? In particular, does the existence of a polynomial-time approximation algorithm for some approximation factor imply that the problem is in NP? | APX is defined as a subset of NPO, so yes, if an optimization problem is in APX then the corresponding decision problem is in NP. However, if what you're asking is whether an arbitrary problem must be in NP (or NPO) if there is a poly time O(1)-approximation, then the answer is no. I don't know of any natural problems that serve as a counter-example, but one could define an artificial maximization problem where the objective is the sum of two terms, a large term that is easily optimized in P, and a much smaller term that adds a small amount if part of the solution encodes an answer to some hard problem (outside of NP). Then you could find, say, a 2-approximation in poly time by concentrating on the easy term, but finding an optimal solution would require solving the hard problem. | {
"source": [
"https://cstheory.stackexchange.com/questions/284",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/275/"
]
} |
305 | Two ways of analyzing the efficiency of an algorithm are to put an asymptotic upper bound on its runtime, and to run it and collect experimental data. I wonder if there are known cases where there is a significant gap between (1) and (2). By this I mean that either (a) the experimental data suggests a tighter asymptotic or (b) there are algorithms X and Y such that the theoretical analysis suggests that X is much better than Y and the experimental data suggests that Y is much better than X. Since experiments usually reveal average-case behavior, I expect most interesting answers to refer to average-case upper bounds. However, I don't want to rule out possibly interesting answers that talk about different bounds, such as Noam's answer about Simplex. Include data structures. Please put one algo/ds per answer. | The most glaring example is of course the Simplex method that runs quickly in practice, suggesting poly-timeness, but takes exponential time in theory. Dan Spielman just got the Nevanlinna award to a large extent for explaining this mystery. More generally, many instances of Integer-programming can be solved quite well using standard IP-solvers, e.g. combinatorial auctions for most distributions attempted on significant sized inputs could be solved -- http://www.cis.upenn.edu/~mkearns/teaching/cgt/combinatorial-auctions-survey.pdf | {
"source": [
"https://cstheory.stackexchange.com/questions/305",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/236/"
]
} |
314 | There is often-quoted philosophical justification for believing that P != NP even without proof. Other complexity classes have evidence that they are distinct, because if not, there would be "surprising" consequences (like the collapse of the polynomial hierarchy). My question is, what is the basis for belief that the class PPAD is intractable? If there was a polynomial time algorithm for finding Nash equilibria, would this imply anything about other complexity classes? Is there a heuristic argument for why it should be hard? | PPAD is pretty "low" above P and not much would change in our understanding of complexity if it was shown equal to P (except that the few problems in PPAD would now be in P). The main "evidence" that PPAD!=P is an oracle separation, which is essentially equivalent to the combinatorial fact that no "black-box simulation" exists. | {
"source": [
"https://cstheory.stackexchange.com/questions/314",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/25/"
]
} |
343 | You might often find cutting plane methods, variable propagation, branch and bound, clause learning, intelligent backtracking or even handwoven human heuristics in SAT solvers. Yet for decades the best SAT solvers have relied heavily on resolution proof techniques and use a combination of other things simply for aid and to direct resolution-style search. Obviously, it's suspected that ANY algorithm will fail to decide the satisfiability question in polynomial time in at least some cases. In 1985, Haken proved in his paper "The intractability of resolution" that the pigeon hole principle encoded in CNF does not admit polynomial sized resolution proofs. While this does prove something about the intractability of resolution-based algorithms, it also gives criteria by which cutting edge solvers can be judged - and in fact one of the many considerations that goes into designing a SAT solver today is how it is likely to perform on known 'hard' cases. Having a list of classes of Boolean formulas that provably admit exponentially sized resolution proofs is useful in the sense it gives 'hard' formulas to test new SAT solvers against. What work has been done in compiling such classes together? Does anyone have a reference containing such a list and their relevant proofs? Please list one class of Boolean formula per answer. | Hard instances for resolution : Tseitin's formulas (over expander graphs). Weak ($ m $ to $ n$) pigeonhole principle (exponential in $n$ lower bounds, for any $ m>n $). Random 3CNF's with $ n $ variables and $ O(n^{1.5-\epsilon})$ clauses, for $ 0<\epsilon<1/2 $. Good, relatively up-to-date, technical survey for proof complexity lower bounds, see: Nathan Segerlind: The Complexity of Propositional Proofs. Bulletin of Symbolic Logic 13(4): 417-481 (2007) available at: http://www.math.ucla.edu/~asl/bsl/1304/1304-001.ps | {
"source": [
"https://cstheory.stackexchange.com/questions/343",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/3624/"
]
} |
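Since the point of such families is to have concrete 'hard' benchmarks, here is a small generator for the pigeonhole formulas $PHP^m_n$ in the usual DIMACS signed-integer convention; with $m = n + 1$ these are exactly the instances of Haken's lower bound mentioned in the question (illustrative sketch, any standard encoding will do):

from itertools import combinations

def pigeonhole_cnf(m, n):
    # Variable (pigeon i, hole j) -> integer i * n + j + 1.
    # Unsatisfiable whenever m > n, yet requires exponential-size resolution proofs.
    var = lambda i, j: i * n + j + 1
    clauses = []
    for i in range(m):                       # every pigeon sits in some hole
        clauses.append([var(i, j) for j in range(n)])
    for j in range(n):                       # no two pigeons share a hole
        for i1, i2 in combinations(range(m), 2):
            clauses.append([-var(i1, j), -var(i2, j)])
    return clauses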
376 | Are there any benefits to calculating the time complexity of an algorithm using lambda calculus? Or is there another system designed for this purpose? Any references would be appreciated. | Ohad is quite right about the problems that the lambda calculus faces as a basis for talking about complexity classes. There has been a fair bit of work done on characterising complexity of reducibility in the lambda calculus, particularly around the work on labelled and optimal reductions from Lèvy's PhD thesis. Generally speaking, good cost models for the lambda calculus should not assign a constant weight to all beta reductions: intuitively, substituting a large subterm into many, differently scoped places should cost more than contracting a small K redex, and if one wants a certain amount of invariance of cost under different rewrite strategies, this becomes essential. Two links: Lawall & Mairson, 1996, Optimality and inefficiency: what isn't a cost model of the lambda calculus? (.ps.gz) – Seminal survey of issues bearing on choice of cost model, and why many plausible ideas don't work. Dal Lago & Martini, 2008, The weak lambda calculus as a reasonable machine – Offers a cost model for the call-by-value lambda calculus, together with good discussion of the literature. | {
"source": [
"https://cstheory.stackexchange.com/questions/376",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/7/"
]
} |
426 | The field of distributed computing has fallen woefully short in developing a single mathematical theory to describe distributed algorithms. There are several 'models' and frameworks of distributed computation that are simply not compatible with each other. The sheer explosion of varying temporal properties (asynchrony, synchrony, partial synchrony), various communication primitives (message passing vs. shared memory, broadcast vs. unicast), multiple fault models (fail stop, crash recover, send omission, byzantine, and so on) has left us with an intractable number of system models, frameworks, and methodologies, that comparing relative solvability results and lower bounds across these models and frameworks has become arduous, intractable, and at times, impossible. My question is very simply, why is that so? What is so fundamentally different about distributed computing (from its sequential counterpart) that we haven't been able to collate the research into a unified theory of distributed computing? With sequential computing, Turing Machines, Recursive Functions, and Lambda Calculus all truned out to be equivalent. Was this just a stroke of luck, or did we really do a good job in encapsulating sequential computing in a manner that is yet to be accomplished with distributed computing? In other words, is distributed computing inherently unyielding to an elegant theory (and if so, how and why?), or are we simply not smart enough to discover such a theory? The only reference I could find that addresses this issue is: " Appraising two decades of distributed computing theory research " by Fischer and Merritt DOI: 10.1007/s00446-003-0096-6 Any references or expositions would be really helpful. | My take is that the abstractly-motivated Turing machine model of computation was a good approximation of technology until very recently, whereas models of distributed computing, from the get-go, have been motivated by the real world, which is always messier than abstractions. From, say, 1940-1995, the size of problem instances, the relative "unimportance" of parallelism and concurrency, and the macro-scale of computing devices, all "conspired" to keep Turing machines an excellent approximation of real-world computers. However, once you start dealing with massive datasets, ubiquitous need for concurrency, biology through the algorithmic lens, etc., it is much less clear if there is an "intuitive" model of computation. Perhaps problems hard in one model are not hard -- strictly less computationally complex -- in another. So I believe that mainstream computational complexity is finally catching up (!) with distributed computing, by starting to consider multiple models of computation and data structures, motivated by real-world considerations. | {
"source": [
"https://cstheory.stackexchange.com/questions/426",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/175/"
]
} |
448 | Ask even someone with a background in computer science what a regular expression is, and the answer is likely to go beyond the constraint of being within reach of a finite-state automaton. For example, the “regular expression” /^1?$|^(11+?)\1+$/ created by noted Perl personality Abigail (and part of Perl's test suite since 2002) describes a machine that accepts only composite unary numbers, but exercise 4.5 (b) in the third edition of Peter Linz's An Introduction to Formal Languages and Automata has the reader use the pumping lemma to prove that $\mathcal{L} = \left\{ a^n : n\ \mathrm{is\ not\ a\ prime\ number} \right\}$ is not a regular language. In contexts where the distinction is important, what should we call the strictly more powerful expressions? | Larry Wall proposed that we use "regular expression" for the formalism Kleene proposed, and "regex" for expressions for the widely used extensions. It's a fairly widely followed convention. If you want to make it clear that you are talking about regular expressions in the formal languages sense, it is usually not difficult to translate into talk of regular languages. The power of regexes comes from backtracking, and there has been work done on automata for regular languages with backtracking. See, in particular, Becchi & Crowley, 2008, Extending Finite Automata to Efficiently Match Perl-Compatible Regular Expressions . | {
"source": [
"https://cstheory.stackexchange.com/questions/448",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/362/"
]
} |
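To see the extra power in action, the pattern quoted in the question runs directly in any backtracking regex engine; here it is in Python's re module, which supports the same backreference feature (snippet added for illustration):

import re

# Abigail's 'composite unary number' pattern from the question.
NOT_PRIME = re.compile(r"^1?$|^(11+?)\1+$")

def is_prime(n):
    return NOT_PRIME.match("1" * n) is None

print([n for n in range(2, 30) if is_prime(n)])   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

The backreference \1 is exactly the feature that takes such patterns beyond what a finite-state automaton can recognize.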
499 | What interesting differences are there between theory and practice of security and cryptography? Most interesting will of course be examples that suggest new avenues for theoretical research based on practical experience :). Answers might include (but are not limited to): Examples where theory suggests something is possible but it never gets used in practice Examples where theory suggests that something is safe that is not safe in practice Examples of something in widespread practical use has little theory behind it. ... Caveat If your answer is essentially of the form "Theory is about asymptotics, but practice is not," then either the theory should be really central, or the answer should include specific examples where the practical experience on real-world instances differs from the expectations based on the theory. One example I know of: secure circuit evaluation. Very powerful in theory, but too complicated to ever use in practice, because it would involve taking your code, unrolling it into a circuit, and then doing secure evaluation of each gate one at a time. | Oh boy, where to start. The big one is definitely black boxes. Crypto researchers make a fuss about things like uninstantiability problem of the Random Oracle Model. Security researchers are at the other extreme and would like everything to be usable as a black box, not just hash functions. This is a constant source of tension. To illustrate, if you look at the formal analysis of security protocols, for example BAN logic , you will see that symmetric encryption is treated as an "ideal block cipher." There is a subtle distinction here — BAN logic (and other protocol analysis techniques) don't claim to be security proofs; rather, they are techniques for finding flaws. Therefore it is not strictly true that the ideal cipher model is involved here. However, it is empirically true that most of the security analysis tends to be limited to the formal model, so the effect is the same. We haven't even talked about practitioners yet. These guys typically don't even have a clue that crypto primitives are not intended to be black boxes, and I doubt this is ever going to change — decades of trying to beat this into their heads hasn't made a difference. To see how bad the problem is, consider this security advisory relating to API signature forgeability. The bug is partly due to the length-extension attack in the Merkle-Damgard construction (which is something really really basic), and affects Flickr, DivShare, iContact, Mindmeister, Myxer, RememberTheMilk, Scribd, Vimeo, Voxel, Wizehhive and Zoomr. The authors note that this is not a complete list. I do think practitioners deserve the lion's share of the blame for this sad state of affairs. On the other hand, perhaps crypto theorists need to rethink their position as well. Their line has been: "black-boxes are impossible to build; we're not even going to try." To which I say, since it is clear that your constructions are going to get (mis)used as black boxes anyway, why not at least try to make them as close to black boxes as possible? The paper Merkle-Damgard Revisited is a great example of what I'm talking about. They study the security notion that "the arbitrary length hash function H must behave as a random oracle when the fixed-length building block is viewed as a random oracle or an ideal block-cipher." This kind of theoretical research has the potential to be hugely useful in practice. Now let's get to your example of circuit evaluation. I beg to disagree with your reasoning. 
It's not like you would take a compiled binary and blindly turn it into a circuit. Rather, you'd apply circuit evaluation only to the underlying comparison function which is usually quite simple. Fairplay is an implementation of circuit evaluation. A colleague of mine who's worked with it tells me that it is surprisingly fast. While it is true that efficiency is a problem with circuit evaluation (and I do know of real-world instances where it was rejected for this reason), it is far from a showstopper. The second reason I disagree with you is that if you think about some of the typical real-life scenarios in which you might conceivably want to carry out oblivious circuit evaluation — for example, when two companies are figuring out whether to merge — the computational costs involved are trivial compared to the overall human effort and budget. So why then does no one use generic secure function evaluation in practice? Great question. This brings me to my second difference between theory and practice: trust actually exists in practice! Not everything needs to be done in the paranoid model. The set of problems that people actually want to solve using crypto is much, much smaller than what cryptographers imagine. I know someone who started a company trying to sell secure multiparty computation services to enterprise clients. Guess what — no one wanted it. The way they approach these problems is to sign a contract specifying what you can and cannot do with the data, and that you will destroy the data after you're done using it for the intended purpose. Most of the time, this works just fine. My final point of difference between theory and practice is about PKI. Crypto papers frequently stick a sentence somewhere saying "we assume a PKI." Unfortunately, digital certificates for end users (as opposed to websites or employees in a corporate context, where there is a natural hierarchy) never materialized. This classic paper describes the hilarity that ensues when you ask normal people to use PGP. I'm told that the software has improved a lot since then, but the underlying design and architectural issues and human limitations are not much different today. I don't think cryptographers should be doing anything differently as a consequence of this lack of a real-world PKI, except to be aware of the fact that it limits the real-world applicability of cryptographic protocols. I threw it in because it's something I'm trying to fix. | {
"source": [
"https://cstheory.stackexchange.com/questions/499",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/129/"
]
} |
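The Merkle-Damgard length-extension issue behind that API-signature advisory can be demonstrated end to end with a toy construction. Everything below (block size, IV, compression function, padding rule) is invented for the sketch; real attacks work the same way against real hash parameters whenever an API uses hash(secret + message) as a signature:

BLOCK = 8
IV = 0x12345678

def compress(state, block):
    # Stand-in compression function; not cryptographically meaningful.
    for byte in block:
        state = (state * 33 + byte) & 0xFFFFFFFF
    return state

def pad(length):
    # Toy padding rule: 0x80, zero fill, then the message length in one byte.
    fill = (-(length + 2)) % BLOCK
    return b"\x80" + b"\x00" * fill + bytes([length % 256])

def md_hash(message, state=IV):
    data = message + pad(len(message))
    for i in range(0, len(data), BLOCK):
        state = compress(state, data[i:i + BLOCK])
    return state

def md_extend(digest, known_length, suffix):
    # Resume hashing from a published digest without knowing the secret prefix.
    glue = pad(known_length)
    data = suffix + pad(known_length + len(glue) + len(suffix))
    state = digest
    for i in range(0, len(data), BLOCK):
        state = compress(state, data[i:i + BLOCK])
    return state, glue

secret, message, suffix = b"key", b"user=joe", b"&role=admin"
tag = md_hash(secret + message)                    # what the API publishes
forged_state, glue = md_extend(tag, len(secret + message), suffix)
forged_message = message + glue + suffix           # built without the secret
assert forged_state == md_hash(secret + forged_message)

The assert passing is the whole problem: anyone who sees the tag can sign an extended message.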
524 | Scott Aaronson proposed an interesting challenge: can we use supercomputers today to help solve CS problems in the same way that physicists use large particle colliders? More concretely, my proposal is to
devote some of the world’s computing
power to an all-out attempt to answer
questions like the following: does
computing the permanent of a 4-by-4
matrix require more arithmetic
operations than computing its
determinant? He concludes that this would require ~$10^{123}$ floating point operations, which is beyond our current means. The slides are available and are also worth reading. Is there any precedence for solving open TCS problems through brute force experimentation? | In "Finding Efficient Circuits Using SAT-solvers", Kojevnikov, Kulikov, and Yaroslavtsev have used SAT solvers to find better circuits for computing $MOD_k$ function. I have used computers to find proofs of time-space lower bounds, as described here . But that was only feasible because I was working with an extremely restrictive proof system. Maverick Woo and I have been working for some time to find the "right" domain for proving circuit upper/lower bounds using computers. We had hoped that we may resolve $CC^0$ vs $ACC^0$ (or a very weak version of it) using SAT solvers, but this is looking more and more unlikely. (I hope Maverick doesn't mind me saying this...) The first generic problem with using brute-force search to prove nontrivial lower bounds is that it just takes too damn long, even on a very fast computer. The alternative is to try to use SAT solvers, QBF solvers, or other sophisticated optimization tools, but they do not seem to be enough to offset the enormity of the search space. Circuit synthesis problems are among the hardest practical instances one can come by. The second generic problem is that the "proof" of the resulting lower bound (obtained by running brute-force search and finding nothing) would be insanely long and apparently yield no insight (other than the fact that the lower bound holds). So a big challenge to "experimental complexity theory" is to find interesting lower bound questions for which the eventual "proof" of the lower bound is short enough to be verifiable, and interesting enough to lead to further insights. | {
"source": [
"https://cstheory.stackexchange.com/questions/524",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/7/"
]
} |
529 | As far as I understand, the geometric complexity theory program attempts to prove $VP \neq VNP$ by proving that the permanent of a complex-valued matrix is much harder to compute than the determinant. The question I had after skimming through the GCT Papers: Would this immediately imply $P \neq NP$, or is it merely a major step towards this goal? | The short answer is 'no'. No such implication is known. There are two main obstacles: Going from arithmetic circuit complexity to boolean complexity (VP≠VNP implies P/poly≠NP/poly) and then going from boolean circuit complexity (P/poly≠NP/poly) to uniform complexity (P≠NP). Neither of these steps is known. I believe that P/poly≠NP/poly implies VP≠VNP, however. | {
"source": [
"https://cstheory.stackexchange.com/questions/529",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/512/"
]
} |
562 | Let class A denote all the graphs of size $n$ which have a Hamiltonian cycle. It is easy to produce a random graph from this class--take $n$ isolated nodes, add a random Hamiltonian cycle and then add edges randomly. Let class B denote all the graphs of size $n$ which do not have a Hamiltonian cycle. How can we pick a random graph from this class? (or do something close to that) | This is impossible (unless NP=coNP) since in particular it implies a poly-time function whose range is the non-Hamiltonian graphs (the function goes from the random string to the output graph), which in turn would imply an NP-proof of non-Hamiltonicity (to prove $G$ doesn't have a Hamiltonian circuit, show an $x$ that maps to it). | {
"source": [
"https://cstheory.stackexchange.com/questions/562",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/547/"
]
} |
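For contrast, the easy direction mentioned in the question (sampling a graph that is guaranteed to contain a Hamiltonian cycle) takes only a few lines. A minimal Python sketch; the function name and the edge probability are arbitrary choices of mine:

    import random

    def random_graph_with_planted_hamiltonian_cycle(n, p=0.3):
        """Return an edge set on vertices 0..n-1 that certainly contains a
        Hamiltonian cycle: plant a cycle through a random vertex order,
        then add every other edge independently with probability p."""
        order = list(range(n))
        random.shuffle(order)
        edges = {(min(order[i], order[(i + 1) % n]), max(order[i], order[(i + 1) % n]))
                 for i in range(n)}                       # the planted Hamiltonian cycle
        for u in range(n):
            for v in range(u + 1, n):
                if random.random() < p:
                    edges.add((u, v))                     # extra random edges
        return edges

Note that this samples from class A, though not necessarily uniformly, and, as the answer explains, no polynomial-time sampler can even have its range equal to class B unless NP = coNP, because the random seed would serve as an NP certificate of non-Hamiltonicity.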
585 | Wikipedia defines a second preimage attack as: given a fixed message m1, find a different message m2 such that hash(m2) = hash(m1). Wikipedia defines a collision attack as: find two arbitrary different messages m1 and m2 such that hash(m1) = hash(m2). The only difference that I can see is that in a second preimage attack, m1 already exists and is known to the attacker. However, that doesn't strike me as being significant - the end goal is still to find two messages that produce the same hash. What are the essential differences in how a second preimage attack and collision attack are carried out? What are the differences in results? (As an aside, I can't tag this question properly. I'm trying to apply the tags "cryptography security pre-image collision" but I don't have enough reputation. Can someone apply the appropriate tags?) | I can motivate the difference for you with attack scenarios. In a first preimage attack, we ask an adversary, given only $H(m)$, to find $m$ or some $m'$ such that $H(m') = H(m)$. Suppose a website stores $\{username, H(password)\}$ in its databases instead of $\{username, password\}$. The website can still verify the authenticity of the user by accepting their password and checking whether $H(input) = H(password)$ (with false positives occurring with probability $1/2^n$ for some large $n$). Now suppose this database is leaked or is otherwise compromised. A first preimage attack is the situation where an adversary only has access to a message digest and is trying to generate a message that hashes to this value. In a second preimage attack, we allow the adversary more information. Specifically, not only do we give him $H(m)$ but we also give him $m$. Consider the hash function $H(m) = m^d \mod{pq}$ where $p$ and $q$ are large primes and $d$ is a public constant. Obviously for a first preimage attack this becomes the RSA problem and is believed to be hard. However, in the case of a second preimage attack, finding a second message with the same digest becomes easy. If one sets $m' = mpq + m$, then $H(mpq + m) = (mpq + m)^d \mod{pq} = m^d \mod{pq}$. And so the adversary has found a second preimage with little to no computation. We would like one-way hash functions to be resistant to second preimage attacks because of digital signature schemes, in which case $H(document)$ is considered public information and is passed along (through a level of indirection) with every copy of the document. Here an attacker has access to both $document$ and $H(document)$. If the attacker can come up with a variation on the original document (or an entirely new message) $d'$ such that $H(d') = H(document)$, he could publish his document as though he were the original signer. A collision attack allows the adversary even more opportunity. In this scheme, we ask the adversary (can I call him Bob?) to find any two messages $m_1$ and $m_2$ such that $H(m_1) = H(m_2)$. Due to the pigeonhole principle and the birthday paradox, even 'perfect' hash functions are quadratically weaker against collision attacks than against preimage attacks. In other words, given an unpredictable and irreversible message digest function $f(\{0,1\}^*) = \{0,1\}^n$ which takes $O(2^n)$ time to brute force, a collision can always be found in expected time $O(\sqrt{2^n}) = O(2^{n/2})$. Bob can use a collision attack to his advantage in many ways. Here is one of the simplest: Bob finds a collision between two binaries $b$ and $b'$ ($H(b) = H(b')$) such that $b$ is a valid Microsoft Windows security patch and $b'$ is malware. (Bob works for Windows). Bob sends his security patch up the chain of command, where behind a vault they sign the code and ship the binary to Windows users around the world to fix a flaw. Bob can now contact and infect all Windows computers around the world with $b'$ and the signature that Microsoft computed for $b$. Beyond these sorts of attack scenarios, if a hash function is believed to be collision resistant, that hash function is also more likely to be preimage resistant. | {
"source": [
"https://cstheory.stackexchange.com/questions/585",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/403/"
]
} |
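The algebra behind the toy hash $H(m) = m^d \bmod pq$ in the answer above is easy to verify numerically. A minimal Python sketch, with tiny, insecure parameters chosen purely for illustration:

    def toy_hash(m, d=3, p=101, q=113):
        """The answer's toy hash H(m) = m^d mod pq (tiny parameters, insecure)."""
        return pow(m, d, p * q)

    def second_preimage(m, p=101, q=113):
        """Given m, return a different message with the same toy hash:
        shifting m by a multiple of pq does not change m^d mod pq."""
        return m + p * q

    m = 424242
    assert second_preimage(m) != m
    assert toy_hash(second_preimage(m)) == toy_hash(m)

Inverting the same toy hash given only the digest is, by contrast, the RSA problem mentioned in the answer, which is exactly the gap between second-preimage resistance and first-preimage resistance.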
624 | This paper suggests that there are combinators (representing symbolic computations) that can not be represented by the Lambda calculus (if I understand things correctly): | There are several things that one may want to do in practice and that cannot be directly expressed in the lambda calculus. The SF calculus is an example. Its expressive power is not news; the interesting part of the paper (not shown in the slides) is the category theory behind it. The SF calculus is analogous to a lisp implementation where you allow functions to inspect the representation of their argument — so you can write things like (print (lambda (x) (+ x 2))) ⟹ "(lambda (x) (+ x 2))" . Another important example is Plotkin's parallel or . Intuitively speaking, there's a general result that states that lambda calculus is sequential: a function that takes two arguments must pick one to evaluate first. It's impossible to write a lambda term or such that ( or ⊤ ⊥) ⟹ ⊤ , ( or ⊥ ⊤) ⟹ ⊤ and or ⊥ ⊥ ⟹ ⊥ (where ⊥ is a non-terminating term and ⊤ is a terminating term). This is known as “parallel or” because a parallel implementation could make one step of each reduction and stop whenever one of the argument terminates. Yet another thing you can't do in the lambda calculus is input/output. You'd have to add extra primitives for it. Of course, all these examples can be represented in the lambda calculus by adding one level of indirection, essentially representing lambda terms as data. But then the model becomes less interesting — you lose the relationship between functions in the modeled language and lambda abstractions. | {
"source": [
"https://cstheory.stackexchange.com/questions/624",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/608/"
]
} |
625 | Is there a relationship between the Turing Machine and the Lambda calculus - or did they just happen to arise about the same time? | The lambda calculus is older than Turing's machine model, apparently dating from the period 1928-1929 (Seldin 2006), and was invented to encapsulate the notion of a schematic function that Church needed for a foundational logic he devised. It was not invented to capture the general notion of computable function, and indeed a weaker typed version would have served his purposes better. It seems to be incidental to the purpose of that the calculus Church invented turned out to be Turing complete, although later Church used the lambda calculus as his foundation for what he called the effectively computable functions (1936), which Turing appealed to in his paper. Church's simple theory of types (1940) provides a more moderate, typed theory of functions that suffices to express the syntax of higher-order logic but does not express all recursive functions. This theory can be seen as being more in tune with Church's original motivation. References Church (1936). An unsolvable problem in elementary number theory. American Journal of Mathematics 58:345—363. Church (1940). A formulation of the simple theory of types . Journal of Symbolic Logic 5(2):56—68. Seldin (2006). The logic of Curry and Church . In Handbook of the History of Logic, vol.5: Logic from Russell to Church , p. 819—874. North-Holland: Amsterdam. Note This answer is substantially revised due to objections by Kaveh and Sasho. I recommend the Wikipedia timeline that Kaveh suggested, History of the Church–Turing thesis , which has some choice quotes from seminal articles. | {
"source": [
"https://cstheory.stackexchange.com/questions/625",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/608/"
]
} |
627 | Barry Jay in his book makes some bold claims - basically by saying that, at the core of a program, everything is either atomic or composed. Then things can be easily iterated, filtered, updated, just by
navigating this composition relationship. Is this a new frontier in Computer Science for computer languages - or are we just going back to LISP? | The lambda calculus is older than Turing's machine model, apparently dating from the period 1928-1929 (Seldin 2006), and was invented to encapsulate the notion of a schematic function that Church needed for a foundational logic he devised. It was not invented to capture the general notion of computable function, and indeed a weaker typed version would have served his purposes better. It seems to be incidental to the purpose of that the calculus Church invented turned out to be Turing complete, although later Church used the lambda calculus as his foundation for what he called the effectively computable functions (1936), which Turing appealed to in his paper. Church's simple theory of types (1940) provides a more moderate, typed theory of functions that suffices to express the syntax of higher-order logic but does not express all recursive functions. This theory can be seen as being more in tune with Church's original motivation. References Church (1936). An unsolvable problem in elementary number theory. American Journal of Mathematics 58:345—363. Church (1940). A formulation of the simple theory of types . Journal of Symbolic Logic 5(2):56—68. Seldin (2006). The logic of Curry and Church . In Handbook of the History of Logic, vol.5: Logic from Russell to Church , p. 819—874. North-Holland: Amsterdam. Note This answer is substantially revised due to objections by Kaveh and Sasho. I recommend the Wikipedia timeline that Kaveh suggested, History of the Church–Turing thesis , which has some choice quotes from seminal articles. | {
"source": [
"https://cstheory.stackexchange.com/questions/627",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/608/"
]
} |
632 | I recently heard this - "A non-deterministic machine is not the same as a probabilistic machine. In crude terms, a non-deterministic machine is a probabilistic machine in which probabilities for transitions are not known". I feel as if I get the point but I really don't. Could someone explain this to me (in the context of machines or in general)? Edit 1: Just to clarify, the quote was in the context of finite automata, but the question is meaningful for Turing machines too, as others have answered. Also, I hear people say - "... then I choose object x from the set non-deterministically". I used to think they meant "randomly". Hence the confusion. | It's important to understand that computer scientists use the term "nondeterministic" differently from how it's typically used in other sciences. A nondeterministic TM is actually deterministic in the physics sense--that is to say, an NTM always produces the same answer on a given input: it either always accepts, or always rejects. A probabilistic TM will accept or reject an input with a certain probability, so on one run it might accept and on another it might reject. In more detail: At each step in the computation performed by an NTM, instead of having a single transition rule, there are multiple rules that can be invoked. To determine if the NTM accepts or rejects, you look at all possible branches of the computation. (So if there are, say, exactly 2 transitions to choose from at each step, and each computation branch has a total of N steps, then there will be $2^N$ total branches to consider.) For a standard NTM, an input is accepted if any of the computation branches accepts. This last part of the definition can be modified to get other, related types of Turing machines. If you are interested in problems that have a unique solution, you can have the TM accept if exactly one branch accepts. If you are interested in majority behavior, you can define the TM to accept if more than half of the branches accept. And if you randomly (according to some probability distribution) choose one of the possible branches, and accept or reject based on what that branch does, then you've got a probabilistic TM. | {
"source": [
"https://cstheory.stackexchange.com/questions/632",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/-1/"
]
} |
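The two acceptance rules described in the answer above can be contrasted with a toy simulation. In the Python sketch below, a computation branch is identified with its sequence of binary choices, and accepts(x, choices) is assumed to be some deterministic predicate saying whether that single branch accepts input x (both names are hypothetical).

    import random
    from itertools import product

    def nondeterministic_accept(x, accepts, steps):
        """Nondeterministic rule: accept iff SOME branch accepts.
        The verdict is fixed, but checking it may examine 2**steps branches."""
        return any(accepts(x, choices) for choices in product((0, 1), repeat=steps))

    def probabilistic_accept(x, accepts, steps, trials=1000):
        """Probabilistic rule: follow a uniformly random branch each run;
        return the empirical acceptance frequency over independent runs."""
        hits = sum(accepts(x, tuple(random.randint(0, 1) for _ in range(steps)))
                   for _ in range(trials))
        return hits / trials

If accepts is true on exactly one choice sequence, the nondeterministic machine accepts with certainty while the probabilistic one almost never observes an accepting run, which is precisely the distinction the answer draws.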
668 | Rice's theorem states that every nontrivial property of the set recognized by some Turing machine is undecidable. I am looking for a complexity-theoretic Rice-type theorem that tells us which nontrivial properties of NP sets are intractable. | Proving such a complexity-theoretic version of Rice's Theorem was a motivation for me to study program obfuscation. Rice's theorem says, in essence, that it is hard to understand the functions that programs compute, given the program. However, the reason these problems are undecidable is that they are infinitary. Even on one input, a program may never halt, and we need to consider what the program does on infinitely many inputs. A finitary version of Rice's theorem would fix the input size and running time of a program, and say that the program is hard to understand. Once you've fixed these, you might as well view the program as a Boolean circuit. What properties of the function computed by a Boolean circuit are hard to compute? One example is "not always 0", which is the NP-complete Satisfiability problem. But unlike Rice's Theorem, there are some properties that are non-trivial but easy, even without understanding the circuit. We can always know that the function computed by a circuit has bounded circuit complexity (the size of the circuit). Also, we can always evaluate the circuit on inputs of our choice. So say a property of $f_C$ is easy with black-box access if it can be computed in probabilistic polynomial time by an algorithm that gets as input $n$, a bound on $|C|$, and oracle access to $f_C$. For example, satisfiability is not easy with black-box access, because we could imagine a circuit of size $n$ that only produces answer 1 on a randomly chosen input $x$. No black-box algorithm could distinguish such a circuit from one that always returned 0, since the probability of querying the oracle on $x$ is exponentially small. On the other hand, the property $f(0..0)=1$ is black-box easy. The question is: are there any properties of $f_C$ that are decidable in probabilistic polynomial time but are not easy with black-box access? While this question is open as far as I know, our intended approach was ruled out. We had hoped to prove this by showing that cryptographically secure program obfuscation was possible. However, Boaz proved the opposite: that it was impossible. This implicitly shows that black-box access to circuits is more limited than full access to the circuit description, but the proof is non-constructive, so I can't name any property as above that is easy given the circuit description but not with black-box access. It would be interesting (at least to me) if such a property could be reverse-engineered from our paper. Here is the reference: Boaz Barak, Oded Goldreich, Russell Impagliazzo, Steven Rudich, Amit Sahai, Salil P. Vadhan, Ke Yang: On the (Im)possibility of Obfuscating Programs. CRYPTO 2001: 1-18 | {
"source": [
"https://cstheory.stackexchange.com/questions/668",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/495/"
]
} |
791 | Update : The obstruction set (i.e. the NxM "barrier" between colorable and uncolorable grid sizes) for all monochromatic-rectangle-free 4-colorings is now known . Anyone feel up to trying 5-colorings? ;) The following question arises out of Ramsey Theory . Consider a $k$-coloring of the $n$-by-$m$ grid graph. A monochromatic rectangle exists whenever four cells with the same color are arranged as the corners of some rectangle. For example, $(0,0), (0,1), (1,1),$ and $(1,0)$ form a monochromatic rectangle if they have the same color. Similarly, $(2,2), (2,6), (3,6),$ and $(3,2)$ form a monochromatic rectangle, if colored with the same color. Question : Does there exist a $4$-coloring of the $17$-by-$17$ grid graph that does not contain a monochromatic rectangle? If so, provide the explicit coloring. Some known facts: $16$-by-$17$ is $4$-colorable without a monochromatic rectangle, but the known coloring scheme does not appear to extend to the $17$-by-$17$ case. (I'm omitting the known $16$-by-$17$ coloring because it would very likely be a red herring for deciding $17$-by-$17$.) $18$-by-$19$ is NOT $4$-colorable without a monochromatic rectangle. $17$-by-$18$ and $18$-by-$18$ are also unknown cases; an answer to these would be interesting as well. Disclaimer: Bill Gasarch has a $289 (USD) bounty on a positive answer to this question; you can reach him through his blog. A note on etiquette: I'll make sure he knows the source of any correct answer (should one arise). He brought it up again during a rump session at Barriers II, and I find it interesting, so I'm forwarding the question here (without his knowledge; though I highly doubt he would mind). | Some of you are probably aware of this, but the 17 x 17 coloring problem has been solved by Bernd Steinbach and Christian Posthoff. See Gasarch's blog post here . | {
"source": [
"https://cstheory.stackexchange.com/questions/791",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/108/"
]
} |
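Checking a candidate coloring for the forbidden pattern is the easy half of any computer search for such grids. A minimal Python sketch (the grid is a list of rows, each row a list of color indices; the function name is mine):

    def has_monochromatic_rectangle(grid):
        """True iff four equal-colored cells form the corners of an
        axis-aligned rectangle in the given grid coloring."""
        for r1 in range(len(grid)):
            for r2 in range(r1 + 1, len(grid)):
                seen = {}                              # color -> a column where both rows carry it
                for c, (a, b) in enumerate(zip(grid[r1], grid[r2])):
                    if a == b:
                        if a in seen:                  # second such column: rectangle found
                            return True
                        seen[a] = c
        return False

Verifying the published 17-by-17 coloring with such a checker takes a fraction of a second; as the bounty suggests, finding it was the hard part.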
799 | Ladner's Theorem states that if P ≠ NP, then there is an infinite hierarchy of complexity classes strictly containing P and strictly contained in NP. The proof uses the completeness of SAT under many-one reductions in NP. The hierarchy contains complexity classes constructed by a kind of diagonalization, each containing some language to which the languages in the lower classes are not many-one reducible. This motivates my question: Let C be a complexity class, and let D be a complexity class that strictly contains C. If D contains languages that are complete for some notion of reduction, does there exist an infinite hierarchy of complexity classes between C and D, with respect to the reduction? More specifically, I would like to know if there are results known for D = P and C = LOGCFL or C = NC , for an appropriate notion of reduction. Ladner's paper already includes Theorem 7 for space-bounded classes C, as Kaveh pointed out in an answer. In its strongest form this says: if NL ≠ NP then there is an infinite sequence of languages between NL and NP, of strictly increasing hardness. This is slightly more general than the usual version (Theorem 1), which is conditional on P ≠ NP. However, Ladner's paper only considers D = NP. | The answer to your question is "yes" for a wide variety of classes and reductions, including logspace reductions and the classes you mentioned, as is proved in these papers: H. Vollmer. The gap-language technique revisited . Computer Science Logic, Lecture Notes in Computer Science Vol. 533, pages 389-399, 1990. K. Regan and H. Vollmer. Gap-languages and log-time complexity classes . Theoretical Computer Science, 188(1-2):101-116, 1997. (You can download gzipped postscript files of these papers here .) The proofs follow the basic principle of Uwe Schöning's extension of Ladner's theorem: Uwe Schöning. A uniform approach to obtain diagonal sets in complexity classes . Theoretical Computer Science 18(1):95-103, 1982. Schöning's proof has always been my favorite proof of Ladner's theorem -- it's both simple and general. | {
"source": [
"https://cstheory.stackexchange.com/questions/799",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/109/"
]
} |
844 | Genetic algorithms don't get much traction in the world of theory, but they are a reasonably well-used metaheuristic method (by metaheuristic I mean a technique that applies generically across many problems, like annealing, gradient descent, and the like). In fact, a GA-like technique is quite effective for Euclidean TSP in practice. Some metaheuristics are reasonably well studied theoretically: there's work on local search , and annealing. We have a pretty good sense of how alternating optimization ( like k-means ) works. But as far as I know, there's nothing really useful known about genetic algorithms. Is there any solid algorithmic/complexity theory about the behavior of genetic algorithms, in any way, shape or form ? While I've heard of things like schema theory , I'd exclude it from discussion based on my current understanding of the area for not being particularly algorithmic (but I might be mistaken here). | Y. Rabinovich, A. Wigderson. Techniques for bounding the convergence rate of genetic algorithms. Random Structures Algorithms, vol. 14, no. 2, 111-138, 1999. (Also available from Avi Wigderson's home page ) | {
"source": [
"https://cstheory.stackexchange.com/questions/844",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/80/"
]
} |
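For readers who have not seen the metaheuristic in action, here is a minimal, untuned genetic-algorithm sketch for Euclidean TSP in Python; every parameter and operator choice below is an arbitrary illustration of mine, not the engineered variants that perform well in practice.

    import math, random

    def tour_length(tour, pts):
        return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def order_crossover(p1, p2):
        """Copy a random slice from parent 1, fill the remaining cities in parent 2's order."""
        n = len(p1)
        i, j = sorted(random.sample(range(n), 2))
        middle = p1[i:j]
        rest = [c for c in p2 if c not in middle]
        return rest[:i] + middle + rest[i:]

    def ga_tsp(pts, pop_size=100, generations=500, mutation_rate=0.2):
        n = len(pts)
        pop = [random.sample(range(n), n) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda t: tour_length(t, pts))
            survivors = pop[:pop_size // 2]              # truncation selection
            children = []
            while len(survivors) + len(children) < pop_size:
                child = order_crossover(*random.sample(survivors, 2))
                if random.random() < mutation_rate:      # swap mutation
                    a, b = random.sample(range(n), 2)
                    child[a], child[b] = child[b], child[a]
                children.append(child)
            pop = survivors + children
        return min(pop, key=lambda t: tour_length(t, pts))

The GA-like methods that do well on Euclidean TSP in practice typically hybridize such a loop with local search (for example 2-opt) applied to each offspring, which is exactly the kind of procedure whose theoretical behavior the question asks about.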
914 | This question most likely has a simple answer; however, I do not see it. Let $g:\mathbb{N} \rightarrow \mathbb{N}$ be an uncomputable function and $c$ a positive real number. Can there be a computable function $f : \mathbb{N} \rightarrow \mathbb{N}$ such that, for all $n$ large enough: $g(n) \leq f(n) \leq c \cdot g(n)$ (that is, $f(n) = \Theta(g(n))$)? | Sure: just take $g(n) = n + \mathrm{halt}(n)$ (where $\mathrm{halt}(n)=1$ if TM number $n$ halts, and $0$ otherwise). Then $f(n) = n + 1$ is computable and satisfies $g(n) \leq f(n) \leq 2g(n)$ for all $n \geq 1$, while $g$ is uncomputable because $\mathrm{halt}(n) = g(n) - n$. | {
"source": [
"https://cstheory.stackexchange.com/questions/914",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/482/"
]
} |
944 | I've been learning a few bits of category theory. It certainly is a different way of looking at things. (Very rough summary for those who haven't seen it: category theory gives ways of expressing all kinds of mathematical behavior solely in terms of functional relationships between objects. For example, things like the Cartesian product of two sets are defined completely in terms of how other functions behave with it, not in terms of what elements are members of the set.) I have some vague understanding that category theory is useful on the programming languages/logic (the "Theory B") side, and am wondering how much algorithms and complexity ("Theory A") could benefit. It might help me get off the ground though, if I know some solid applications of category theory in Theory B. (I am already implicitly assuming there are no applications in Theory A found so far, but if you have some of those, that's even better for me!) By "solid application", I mean: (1) The application depends so strongly on category theory that it's very difficult to achieve without using the machinery. (2) The application invokes at least one non-trivial theorem of category theory (e.g. Yoneda's lemma). It could well be that (1) implies (2), but I want to make sure these are "real" applications. While I do have some "Theory B" background, it's been a while, so any de-jargonizing would be much appreciated. (Depending on what kind of answers I get, I might turn this question into community wiki later. But I really want good applications with good explanations, so it seems a shame not to reward the answerer(s) with something.) | I can think of one instance where category theory was directly "applied" to solve an open problem in programming languages: Thorsten Altenkirch, Peter Dybjer, Martin Hofmann, and Phil Scott, "Normalization by evaluation for typed lambda calculus with coproducts" . From their abstract: "We solve the decision problem for simply typed lambda calculus with strong binary sums, equivalently the word problem for free cartesian closed categories with binary coproducts. Our method is based on the semantical technique known as 'normalization by evaluation' and involves inverting the interpretation of the syntax into a suitable sheaf model and from this extracting appropriate unique normal forms." In general, though, I think that category theory is not usually applied to prove deep theorems in programming languages (of which there aren't so many), but instead offers a conceptual framework that is often useful (for example in the above, the idea of (pre)sheaf semantics). An important historical example is Eugenio Moggi's suggestion that the notion of monad (which is basic and ubiquitous in category theory) could be used as part of a semantic explanation of side effects in programming languages (e.g., state, nondeterminism). This also inspired some reflection on the syntax of programming languages, for example leading directly to the "Monad typeclass" in Haskell (used to encapsulate effects). More recently (the past decade), this explanation of effects in terms of monads has been revisited from the point of view of the old connection (established by category theorists, in the 60s) between monads and algebraic theories: see Martin Hyland and John Power's, "The Category Theoretic Understanding of Universal Algebra: Lawvere Theories and Monads" . 
The idea is that the monadic view of effects is compatible with the (in some ways more appealing) algebraic view of effects, wherein effects (e.g., store) can be explained in terms of operations (e.g., "lookup" and "update") and associated equations (e.g., idempotency of update). There is a recent paper building on this connection by Paul-André Melliès, "Segal condition meets computational effects" , which also relies heavily on ideas coming from "higher category theory" (for example the notion of "Yoneda structure" as a way of organizing presheaf semantics). Another, related class of examples comes from linear logic . Soon after its introduction by Jean-Yves Girard in the 80s (with an aim of a better understanding of constructive logic), solid connections to category theory were established. For some explanation of this connection, see John Baez and Mike Stay's, "Physics, Topology, Logic and Computation: A Rosetta Stone" . Finally, this answer would be incomplete without reference to sigfpe's illuminating blog "A Neighborhood of infinity" . In particular you could check out "A Partial Ordering of some Category Theory applied to Haskell" . | {
"source": [
"https://cstheory.stackexchange.com/questions/944",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/225/"
]
} |