source_id | question | response | metadata
---|---|---|---|
9,795 | I'm wondering this based on several places online that call $\sf NP=$ co-$\sf NP$ a major open problem... but I can't find any indication as to whether or not this is the same as the $\sf P=NP$ problem... | No. It is another open problem and certainly related, but different. The complexity class co-$\mathsf{NP}$ is the set of languages whose complements are in $\mathsf{NP}$; that is, the set of decision problems for which a "no" answer has a deterministic polynomial-time verifier. Consider, for example, the question "Is this SAT formula unsatisfiable?" If the answer is "no", then the formula is satisfiable, and some satisfying assignment of the variables proves this; that assignment is the certificate for the verifier. It is possible that $\mathsf{P} \neq \mathsf{NP}$, yet $\mathsf{NP} = $co-$\mathsf{NP}$. But on the other hand, if $\mathsf{P} = \mathsf{NP}$, then $\mathsf{NP} = $co-$\mathsf{NP}$ for sure. This is because if a language is in $\mathsf{P}$, then its complement is also in $\mathsf{P}$; so if $\mathsf{P} = \mathsf{NP}$, the same holds for every language in $\mathsf{NP}$ as well. | {
"source": [
"https://cs.stackexchange.com/questions/9795",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/6569/"
]
} |
9,813 | Are there NP-complete problems which have proven subexponential-time algorithms? I am asking for the general case inputs, I am not talking about tractable special cases here. By sub-exponential, I mean an order of growth above polynomials, but less than exponential, for example $n^{\log n}$. | Depends on what you mean by subexponential. Below I explain a few meanings of "subexponential" and what happens in each case. Each of these classes is contained in the classes below it. I. $2^{n^{o(1)}}$ If by subexponential you mean $2^{n^{o(1)}}$ , then a conjecture in complexity theory called ETH (Exponential Time Hypothesis) implies that no $\mathsf{NP}$ -hard problem can have an algorithm with running-time $2^{n^{o(1)}}$ . Note that this class is closed under composition with polynomials. If we have a subexponential time algorithm for any $\mathsf{NP}$ -hard problem, we can combine it with a polynomial-time reduction from 3SAT to it to obtain a subexponential algorithm for 3SAT, which would violate ETH. II. $\bigcap_{0 < \epsilon} 2^{O(n^\epsilon)}$ , i.e. $2^{O(n^\epsilon)}$ for all $0 < \epsilon$ The situation is similar to the previous one. It is closed under polynomials so no $\mathsf{NP}$ -hard problem can be solved in this time without violating ETH. III. $\bigcup_{\epsilon < 1} 2^{O(n^\epsilon)}$ , i.e. $2^{O(n^\epsilon)}$ for some $\epsilon < 1$ If by subexponential you mean $2^{O(n^\epsilon)}$ for some $\epsilon<1$ then the answer is yes, there are provably such problems. Take an $\mathsf{NP}$ -complete problem like SAT. It has a brute-force algorithm that runs in time $2^{O(n)}$ . Now consider the padded version of SAT by adding a string of size $n^k$ to the inputs: $$SAT' = \{\langle \varphi,w\rangle \mid \varphi\in SAT \text{ and } |w|=|\varphi|^k \}$$ Now this problem is $\mathsf{NP}$ -hard and can be solved in time $2^{O(n^\frac{1}{k})}$ . IV. $2^{o(n)}$ This contains the previous class, the answer is similar. V. $\bigcap_{0 < \epsilon}2^{\epsilon n}$ , i.e. $2^{\epsilon n}$ for all $\epsilon>0$ This contains the previous class, the answer is similar. VI. $\bigcup_{\epsilon < 1}2^{\epsilon n}$ , i.e. $2^{\epsilon n}$ for some $\epsilon<1$ This contains the previous class, the answer is similar. What does subexponential mean? "Above polynomial" is not an upper-bound but a lower-bound and is referred to as superpolynomial . Functions like $n^{\lg n}$ are called quasipolynomial , and as the name indicates are almost polynomial and far from being exponential; subexponential is usually used to refer to a much larger class of functions with much faster growth rates. As the name indicates, "subexponential" means below exponential, i.e. such algorithms are faster than exponential-time ones. By exponential we usually mean functions in class $2^{\Theta(n)}$ , or in the nicer class $2^{n^{\Theta(1)}}$ (which is closed under composition with polynomials). Subexponential should be close to these but smaller.
There are different ways to do this and there is not a standard meaning.
We can replace $\Theta$ by $o$ in the two definitions of exponential and obtain I and IV. The nice thing about them is that they are uniformly defined (no quantifier over $\epsilon$ ). If we replace $\Theta$ with a multiplicative coefficient $\epsilon$ for all $\epsilon>0$ , we get II and V. They are close to I and IV but nonuniformly defined. The last option is to replace $\Theta$ with a multiplicative constant $\epsilon$ for some $\epsilon<1$ . This gives III and VI. Which one should be called subexponential is arguable. Usually people use the one they need in their work and refer to it as subexponential. I is my personal preference, it is a nice class: it is closed under composition with polynomials and it is uniformly defined. It is similar to $\mathsf{Exp}$ which uses $2^{n^{O(1)}}$ . II seems to be used in the definition of the complexity class $\mathsf{SubExp}$ . III is used for algorithmic upper-bounds, like those mentioned in Pal's answer. IV is also common. V is used to state the ETH conjecture. Intersections ( II and V ) are not that useful for algorithmic upper-bounds; their main use seems to be complexity theory.
In practice, you will not see a difference between I and II or between IV and V . IMHO the latter three definitions ( IV , V , VI ) are too sensitive: they might be useful for particular problems, but they are not robust, which decreases their usefulness as classes. Robustness and nice closure properties are part of the reason why famous complexity classes like $\mathsf{L}$ , $\mathsf{P}$ , $\mathsf{NP}$ , $\mathsf{PSpace}$ , and $\mathsf{Exp}$ are interesting. Summary IMHO, the main definitions are I and III . We already have subexponential algorithms for $\mathsf{NP}$ -hard problems in the sense of III and we cannot have them in the sense of I without violating ETH. | {
"source": [
"https://cs.stackexchange.com/questions/9813",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/3134/"
]
} |
9,829 | If $A^2$ is regular, does it follow that $A$ is regular? My attempt on a proof: Yes, for contradiction assume that $A$ is not regular. Then $A^2 = A \cdot A$ . Since the concatenation of two non-regular languages is not regular, $A^2$ cannot be regular. This contradicts our assumption. So $A$ is regular. So if $A^2$ is regular then $A$ is regular. Is the proof correct? Can we generalize this to $A^3$ , $A^4$ , etc...? And also if $A^*$ is regular then $A$ need not be regular? Example: $A=\lbrace 1^{2^i} \mid i \geq 0\rbrace$ is not regular but $A^*$ is regular. | Consider Lagrange's four square theorem . It states that every natural number is a sum of four squares; hence if $B = \{1^{n^2} \mid n \geq 0\}$ then $B^4 = \{1^n \mid n \geq 0\}$, which is regular. If $B^2$ is regular, take $A = B$ ; otherwise take $A = B^2$ . Either way, this proves the existence of an irregular $A$ such that $A^2$ is regular. | {
"source": [
"https://cs.stackexchange.com/questions/9829",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/4752/"
]
} |
10,047 | According to this page , Dijkstra's algorithm is just BFS with a priority queue. Is it really that simple? I think not. | You can implement Dijkstra's algorithm as BFS with a priority queue (though it's not the only implementation). Dijkstra's algorithm relies on the property that the shortest path from $s$ to $t$ is also the shortest path to any of the vertices along the path. This is exactly what BFS does. Or, from another perspective: how would Dijkstra's algorithm behave if all the weights were 1? Exactly like BFS. (A short runnable sketch of this view follows this row.) | {
"source": [
"https://cs.stackexchange.com/questions/10047",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/7007/"
]
} |
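A minimal runnable sketch of the view taken in the answer above (my own illustration, not code from the original post): Dijkstra's algorithm written as BFS with Python's heapq as the priority queue. The graph format and function name are assumptions.
import heapq

def dijkstra(graph, source):
    """graph: dict mapping vertex -> list of (neighbor, weight) pairs.
    Returns a dict of shortest distances from source."""
    dist = {source: 0}
    pq = [(0, source)]  # the priority queue plays the role of BFS's FIFO queue
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path to u was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# With all weights equal to 1, vertices are settled in the same order BFS would visit them.
example = {"s": [("a", 1), ("b", 4)], "a": [("b", 2)], "b": []}
print(dijkstra(example, "s"))  # {'s': 0, 'a': 1, 'b': 3}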
10,318 | I've been looking into the math behind converting from any base to any base. This is more about confirming my results than anything. I found what seems to be my answer on mathforum.org but I'm still not sure if I have it right. I have converting from a larger base to a smaller base down okay, because it is simply: take the first digit, multiply by the base you want, add the next digit, repeat. My problem comes when converting from a smaller base to a larger base. When doing this they talk about how you need to convert the larger base you want into the smaller base you have. An example would be going from base 4 to base 6: you need to convert the number 6 into base 4, getting 12. You then just do the same thing as you did when you were converting from large to small. The difficulty I have with this is it seems you need to know what one number is in the other base. So I would have needed to know what 6 is in base 4. This creates a big problem in my mind because then I would need a table. Does anyone know a way of doing this in a better fashion? I thought a base conversion would help but I can't find any that work. And from the site I found it seems to allow you to convert from base to base without going through base 10, but you first need to know how to convert the first number from base to base. That makes it kinda pointless. Commenters are saying I need to be able to convert a letter into a number. If so, I already know that. That isn't my problem however.
My problem is that in order to convert a big base to a small base I need to first convert the base number I have into the base number I want. In doing this I defeat the purpose, because if I have the ability to convert these bases to other bases I've already solved my problem. Edit: I have figured out how to convert from bases less than or equal to 10 into other bases less than or equal to 10. I can also go from a base greater than 10 to any base that is 10 or less. The problem starts when converting from a base greater than 10 to another base greater than 10. Or going from a base smaller than 10 to a base greater than 10. I don't need code, I just need the basic math behind it that can be applied to code. | This seems a very basic question to me, so excuse me if I lecture you a bit. The most important point for you to learn here is that a number is not its digit representation . A number is an abstract mathematical object, whereas its digit representation is a concrete thing, namely a sequence of symbols on a paper (or a sequence of bits in computer memory, or a sequence of sounds which you make when you communicate a number). What is confusing you is the fact that you never see a number but always its digit representation. So you end up thinking that the number is the representation. Therefore, the question to ask is not "how do I convert from one base to another" but rather "how do I find out which number is represented by a given string of digits" and "how do I find the digit representation of a given number". Once we have the answers, it will be easy to answer the original question, too. So let us produce two functions in Python, one for converting a digit representation to a number, and another for doing the opposite. Note: when we run the function, Python will of course print on the screen the number it got in base 10. But this does not mean that the computer is keeping numbers in base 10 (it isn't). It is irrelevant how the computer represents the numbers. def toDigits(n, b):
"""Convert a positive number n to its digit representation in base b."""
digits = []
while n > 0:
digits.insert(0, n % b)
n = n // b
return digits
def fromDigits(digits, b):
"""Compute the number given by digits in base b."""
n = 0
for d in digits:
n = b * n + d
return n Let us test these: >>> toDigits(42, 2)
[1, 0, 1, 0, 1, 0]
>>> toDigits(42, 3)
[1, 1, 2, 0]
>>> fromDigits([1,1,2,0],3)
42 Armed with conversion functions, your problem is solved easily: def convertBase(digits, b, c):
"""Convert the digits representation of a number from base b to base c."""
return toDigits(fromDigits(digits, b), c) A test: >>> convertBase([1,1,2,0], 3, 2)
[1, 0, 1, 0, 1, 0] Note: we did not pass through base 10 representation! We converted the base $b$ representation to the number, and then the number to base $c$ . The number was not in any representation. (Actually it was, the computer had to represent it somehow, and it did represent it using electrical signals and funky stuff that happens in chips, but certainly those were not 0's and 1's.) | {
"source": [
"https://cs.stackexchange.com/questions/10318",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/6912/"
]
} |
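The asker's remaining worry was bases above 10 (letters as digits). A small, hypothetical extension of the answer above (the names digitsToString and stringToDigits are mine, not part of the answer): map each digit value to a symbol and back, and the functions toDigits / fromDigits / convertBase above stay unchanged.
SYMBOLS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"  # supports bases up to 36

def digitsToString(digits):
    """Render a digit list (as returned by toDigits) as a string of symbols."""
    return "".join(SYMBOLS[d] for d in digits)

def stringToDigits(s):
    """Parse a string of symbols back into a digit list usable by fromDigits."""
    return [SYMBOLS.index(ch.upper()) for ch in s]

# Example: "2G" in base 17 is the number 2*17 + 16 = 50, and 50 in base 12 is [4, 2],
# so convertBase(stringToDigits("2G"), 17, 12) gives [4, 2] and digitsToString([4, 2]) == "42".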
10,360 | I am seeking help understanding Floyd's cycle detection algorithm. I have gone through the explanation on wikipedia ( http://en.wikipedia.org/wiki/Cycle_detection#Tortoise_and_hare ) I can see how the algorithm detects a cycle in O(n) time. However, I am unable to visualise the fact that once the tortoise and hare pointers meet for the first time, the start of the cycle can be determined by moving the tortoise pointer back to the start and then moving both tortoise and hare one step at a time. The point where they then meet is the start of the cycle. Can someone help by providing an explanation, hopefully different from the one on wikipedia, as I am unable to understand/visualise it? | You can refer to "Detecting start of a loop in singly linked list" ; here's an excerpt (the accompanying figure is not reproduced here: $x$ is the distance from the head of the list to the start of the loop, $y$ is the distance from the loop start to the first meeting point, and $z$ is the remaining distance around the loop back to the loop start): Distance travelled by slowPointer before meeting $= x+y$ Distance travelled by fastPointer before meeting $=(x + y + z) + y = x + 2y + z$ Since fastPointer travels at double the speed of slowPointer , and the elapsed time is the same for both when they reach the meeting point, we can use the simple relation between speed, time and distance ( slowPointer traveled half the distance): \begin{align*}
2*\operatorname{dist}(\text{slowPointer}) &= \operatorname{dist}(\text{fastPointer})\\
2(x+y) &= x+2y+z\\
2x+2y &= x+2y+z\\
x &= z
\end{align*} Hence by moving slowPointer to the start of the linked list, and making both slowPointer and fastPointer move one node at a time, they both have the same distance to cover. They will meet at the point where the loop starts in the linked list. (A short runnable sketch follows this row.) | {
"source": [
"https://cs.stackexchange.com/questions/10360",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/7183/"
]
} |
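A small runnable sketch of the two-phase argument above on an actual linked list (my own illustration; the Node class and function name are assumptions, not part of the quoted excerpt):
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def find_cycle_start(head):
    """Return the node where the cycle begins, or None if the list has no cycle."""
    slow = fast = head
    # Phase 1: advance slow by one and fast by two until they meet inside the cycle.
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
        if slow is fast:
            break
    else:
        return None  # fast fell off the end of the list: no cycle
    # Phase 2: since x = z (derived above), restarting slow from the head and moving
    # both pointers one node at a time makes them meet exactly at the cycle start.
    slow = head
    while slow is not fast:
        slow, fast = slow.next, fast.next
    return slow

# Build 1 -> 2 -> 3 -> 4 -> 5 -> back to 3 (the cycle starts at node 3).
nodes = [Node(i) for i in range(1, 6)]
for a, b in zip(nodes, nodes[1:]):
    a.next = b
nodes[-1].next = nodes[2]
print(find_cycle_start(nodes[0]).value)  # 3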
10,538 | A binary indexed tree has very little literature compared to other data structures. The only place where it is taught is the topcoder tutorial . Although the tutorial is complete in all the explanations, I cannot understand the intuition behind such a tree. How was it invented? What is the actual proof of its correctness? | Intuitively, you can think of a binary indexed tree as a compressed representation of a binary tree that is itself an optimization of a standard array representation. This answer goes into one possible derivation. Let's suppose, for example, that you want to store cumulative frequencies for a total of 7 different elements. You could start off by writing out seven buckets into which the numbers will be distributed: [ ] [ ] [ ] [ ] [ ] [ ] [ ]
1 2 3 4 5 6 7 Now, let's suppose that the cumulative frequencies look something like this: [ 5 ] [ 6 ] [14 ] [25 ] [77 ] [105] [105]
1 2 3 4 5 6 7 Using this version of the array, you can increment the cumulative frequency of any element by increasing the value of the number stored at that spot, then incrementing the frequencies of everything that come afterwards. For example, to increase the cumulative frequency of 3 by 7, we could add 7 to each element in the array at or after position 3, as shown here: [ 5 ] [ 6 ] [21 ] [32 ] [84 ] [112] [112]
1 2 3 4 5 6 7 The problem with this is that it takes O(n) time to do this, which is pretty slow if n is large. One way that we can think about improving this operation would be to change what we store in the buckets. Rather than storing the cumulative frequency up to the given point, you can instead think of just storing the amount that the current frequency has increased relative to the previous bucket. For example, in our case, we would rewrite the above buckets as follows: Before:
[ 5 ] [ 6 ] [21 ] [32 ] [84 ] [112] [112]
1 2 3 4 5 6 7
After:
[ +5] [ +1] [+15] [+11] [+52] [+28] [ +0]
1 2 3 4 5 6 7 Now, we can increment the frequency within a bucket in time O(1) by just adding the appropriate amount to that bucket. However, the total cost of doing a lookup now becomes O(n), since we have to recompute the total in the bucket by summing up the values in all smaller buckets. The first major insight we need to get from here to a binary indexed tree is the following: rather than continuously recomputing the sum of the array elements that precede a particular element, what if we were to precompute the total sum of all the elements before specific points in the sequence? If we could do that, then we could figure out the cumulative sum at a point by just summing up the right combination of these precomputed sums. One way to do this is to change the representation from being an array of buckets to being a binary tree of nodes. Each node will be annotated with a value that represents the cumulative sum of all the nodes to the left of that given node. For example, suppose we construct the following binary tree from these nodes: 4
/ \
2 6
/ \ / \
1 3 5 7 Now, we can augment each node by storing the cumulative sum of all the values including that node and its left subtree. For example, given our values, we would store the following: Before:
[ +5] [ +1] [+15] [+11] [+52] [+28] [ +0]
1 2 3 4 5 6 7
After:
4
[+32]
/ \
2 6
[ +6] [+80]
/ \ / \
1 3 5 7
[ +5] [+15] [+52] [ +0] Given this tree structure, it's easy to determine the cumulative sum up to a point. The idea is the following: we maintain a counter, initially 0, then do a normal binary search up until we find the node in question. As we do so, we also do the following: any time that we move right, we also add in the current value to the counter. For example, suppose we want to look up the sum for 3. To do so, we do the following: Start at the root (4). Counter is 0. Go left to node (2). Counter is 0. Go right to node (3). Counter is 0 + 6 = 6. Find node (3). Counter is 6 + 15 = 21. You could imagine also running this process in reverse: starting at a given node, initialize the counter to that node's value, then walk up the tree to the root. Any time you follow a right child link upward, add in the value at the node you arrive at. For example, to find the cumulative sum for 3, we could do the following: Start at node (3). Counter is 15. Go upward to node (2). Counter is 15 + 6 = 21. Go upward to node (4). Counter is 21. To increment the frequency of a node (and, implicitly, the frequencies of all nodes that come after it), we need to update the set of nodes in the tree that include that node in its left subtree. To do this, we do the following: increment the frequency for that node, then start walking up to the root of the tree. Any time you follow a link that takes you up as a left child, increment the frequency of the node you encounter by adding in the current value. For example, to increment the frequency of node 1 by five, we would do the following: 4
[+32]
/ \
2 6
[ +6] [+80]
/ \ / \
> 1 3 5 7
[ +5] [+15] [+52] [ +0] Starting at node 1, increment its frequency by 5 to get 4
[+32]
/ \
2 6
[ +6] [+80]
/ \ / \
> 1 3 5 7
[+10] [+15] [+52] [ +0] Now, go to its parent: 4
[+32]
/ \
> 2 6
[ +6] [+80]
/ \ / \
1 3 5 7
[+10] [+15] [+52] [ +0] We followed a left child link upward, so we increment this node's frequency as well: 4
[+32]
/ \
> 2 6
[+11] [+80]
/ \ / \
1 3 5 7
[+10] [+15] [+52] [ +0] We now go to its parent: > 4
[+32]
/ \
2 6
[+11] [+80]
/ \ / \
1 3 5 7
[+10] [+15] [+52] [ +0] That was a left child link, so we increment this node as well: 4
[+37]
/ \
2 6
[+11] [+80]
/ \ / \
1 3 5 7
[+10] [+15] [+52] [ +0] And now we're done! The final step is to convert from this to a binary indexed tree, and this is where we get to do some fun things with binary numbers. Let's rewrite each bucket index in this tree in binary: 100
[+37]
/ \
010 110
[+11] [+80]
/ \ / \
001 011 101 111
[+10] [+15] [+52] [ +0] Here, we can make a very, very cool observation. Take any of these binary numbers and find the very last 1 that was set in the number, then drop that bit off, along with all the bits that come after it. You are now left with the following: (empty)
[+37]
/ \
0 1
[+11] [+80]
/ \ / \
00 01 10 11
[+10] [+15] [+52] [ +0] Here is a really, really cool observation: if you treat 0 to mean "left" and 1 to mean "right," the remaining bits on each number spell out exactly how to start at the root and then walk down to that number. For example, node 5 has binary pattern 101. The last 1 is the final bit, so we drop that to get 10. Indeed, if you start at the root, go right (1), then go left (0), you end up at node 5! The reason that this is significant is that our lookup and update operations depend on the access path from the node back up to the root and whether we're following left or right child links. For example, during a lookup, we just care about the right links we follow. During an update, we just care about the left links we follow. This binary indexed tree does all of this super efficiently by just using the bits in the index. The key trick is the following property of this perfect binary tree: Given node n, the next node on the access path back up to the root in which we go right is given by taking the binary representation of n and removing the last 1. For example, take a look at the access path for node 7, which is 111. The nodes on the access path to the root that we take that involve following a right pointer upward is Node 7: 111 Node 6: 110 Node 4: 100 All of these are right links. If we take the access path for node 3, which is 011, and look at the nodes where we go right, we get Node 3: 011 Node 2: 010 (Node 4: 100, which follows a left link) This means that we can very, very efficiently compute the cumulative sum up to a node as follows: Write out node n in binary. Set the counter to 0. Repeat the following while n ≠ 0: Add in the value at node n. Clear the rightmost 1 bit from n. Similarly, let's think about how we would do an update step. To do this, we would want to follow the access path back up to the root, updating all nodes where we followed a left link upward. We can do this by essentially doing the above algorithm, but switching all 1's to 0's and 0's to 1's. The final step in the binary indexed tree is to note that because of this bitwise trickery, we don't even need to have the tree stored explicitly anymore. We can just store all the nodes in an array of length n, then use the bitwise twiddling techniques to navigate the tree implicitly. In fact, that's exactly what the bitwise indexed tree does - it stores the nodes in an array, then uses these bitwise tricks to efficiently simulate walking upward in this tree. Hope this helps! | {
"source": [
"https://cs.stackexchange.com/questions/10538",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/6823/"
]
} |
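A compact, runnable version of the array-plus-bit-tricks form derived in the answer above (my own sketch, assuming 1-based positions; the class name is not from the answer). Lookups clear the lowest set bit, exactly as described; updates use the standard array-form counterpart of walking up the left-link path, namely adding the lowest set bit.
class FenwickTree:
    def __init__(self, n):
        self.tree = [0] * (n + 1)  # 1-based; tree[i] stores a partial sum ending at i

    def update(self, i, delta):
        """Add delta to position i."""
        while i < len(self.tree):
            self.tree[i] += delta
            i += i & (-i)  # move to the next responsible node by adding the lowest set bit

    def prefix_sum(self, i):
        """Return the cumulative sum of positions 1..i."""
        total = 0
        while i > 0:
            total += self.tree[i]
            i -= i & (-i)  # clear the rightmost 1 bit, as in the lookup procedure above
        return total

# The frequencies used in the answer: +5, +1, +15, +11, +52, +28, +0 at positions 1..7.
ft = FenwickTree(7)
for pos, val in enumerate([5, 1, 15, 11, 52, 28, 0], start=1):
    ft.update(pos, val)
print(ft.prefix_sum(3))  # 21, matching the worked example above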
10,837 | I have come across a weird experience in C programming. Consider this code: #include <stdio.h>
int main(){
int array1[6] = {0, 1, 2, 3, 4, 5};
int array2[6] = {6, 7, 8, 9, 10, 11};
printf("%d\n", array1[-1]);
return 0;
} When I compile and run this, I don't get any errors or warnings. As my lecturer said, the array index -1 accesses another variable. I'm still confused: why on earth does a programming language have this capability? I mean, why allow negative array indices? | The array indexing operation a[i] gains its meaning from the following features of C: The syntax a[i] is equivalent to *(a + i) . Thus it is valid to say 5[a] to get at the 5th element of a . Pointer-arithmetic says that given a pointer p and an integer i , p + i is the pointer p advanced by i * sizeof(*p) bytes. The name of an array a very quickly decays to a pointer to the 0th element of a . In effect, array-indexing is a special case of pointer-indexing. Since a pointer can point to any place inside an array, any arbitrary expression that looks like p[-1] is not wrong by examination, and so compilers don't (can't) consider all such expressions as errors. Your example a[-1] where a is actually the name of an array is actually invalid. IIRC, it is undefined whether there's a meaningful pointer value as the result of the expression a - 1 where a is known to be a pointer to the 0th element of an array. So, a clever compiler could detect this and flag it as an error. Other compilers can still be compliant while allowing you to shoot yourself in the foot by giving you a pointer to a random stack slot. The computer science answer is: In C, the [] operator is defined on pointers, not arrays. In particular, it's defined in terms of pointer arithmetic and pointer dereference. In C, a pointer is abstractly a tuple (start, length, offset) with the condition that 0 <= offset <= length . Pointer arithmetic is essentially lifted arithmetic on the offset, with the caveat that if the result of the operation violates the pointer condition, it is an undefined value. De-referencing a pointer adds an additional constraint that offset < length . C has a notion of undefined behaviour which allows a compiler to concretely represent that tuple as a single number, and not have to detect any violations of the pointer condition. Any program that satisfies the abstract semantics will be safe with the concrete (lossy) semantics. Anything that violates the abstract semantics can be, without comment, accepted by the compiler and it can do anything it wants to do with it. | {
"source": [
"https://cs.stackexchange.com/questions/10837",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/7395/"
]
} |
10,960 | How can I sort a list of 5 integers such that in the worst case it takes 7 compares? I don't care about how many other operations are performed. I don't know anything particular about the integers. I've tried a few different divide and conquer approaches which get me down to 8 compares, such as following a mergesort approach, or combining mergesort with using binary search to find the insertion position, but every time I end up with 8 compares worst case. Right now I'm just looking for a hint, not a solution. | There is only one way to start this process (and for nearly all of your decisions of what to compare in later steps, there is only one correct one). Here's how to figure it out. First, note that there are $2^7 =128$ possible answers you can get for your comparisons, and $5! = 120$ different permutations you need to distinguish between. The first comparison is easy: you have to compare two keys, and since you don't know anything about them, all choices are equally good. So let's say you compare $a$ and $b$, and find that $a \leq b$. You now have $2^6 = 64$ possible answers left, and $60$ possible permutations remaining (since we have eliminated half of them). Next, we can either compare $c$ and $d$, or we can compare $c$ to one of the keys we used in the first comparison. If we compare $c$ and $d$, and learn that $c \leq d$, then we have $32$ remaining answers and $30$ possible permutations. On the other hand, if we compare $c$ with $a$, and we discover that $a \leq c$, we have $40$ possible permutations remaining, because we have eliminated $1/3$ of the possible permutations (those with $c \leq a \leq b$). We only have $32$ possible remaining answers, so we're out of luck. So now we know that we have to compare the first and second keys, and the third and fourth keys. We can assume that we have $a\leq b$ and $c \leq d$. If we compare $e$ to any of these four keys, by the same argument we used in the previous step, we might only eliminate $1/3$ of the permutations remaining, and we're out of luck. So we have to compare two of the keys $a,b,c,d$. Taking into account symmetry, we have two choices, compare $a$ and $c$ or compare $a$ and $d$. A similar counting argument shows we must compare $a$ and $c$. We can assume without loss of generality that $a \leq c$, and now we have $a \leq b$ and $a \leq c \leq d$. Since you asked for a hint, I won't go through the rest of the argument. You have four comparisons left. Use them wisely. | {
"source": [
"https://cs.stackexchange.com/questions/10960",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/6728/"
]
} |
11,029 | If I have a list of key values from 1 to 100 and I want to organize them in an array of 11 buckets, I've been taught to form a mod function $$ H = k \bmod \ 11$$ Now all the values will be placed one after another in 9 rows. For example, in the first bucket there will be $0, 11, 22 \dots$. In the second, there will be $1, 12, 23 \dots$ etc. Let's say I decided to be a bad boy and use a non-prime as my hashing function - take 12.
Using the Hashing function $$ H = k \bmod \ 12$$ would result in a hash table with values $0, 12, 24 \dots $ in the first bucket, $1, 13, 25 \dots$ etc. in the second and so on. Essentially they are the same thing. I didn't reduce collisions and I didn't spread things out any better by using the prime number hash code and I can't see how it is ever beneficial. | Consider the set of keys $K=\{0,1,...,100\}$ and a hash table where the number of buckets is $m=12$. Since $3$ is a factor of $12$, the keys that are multiples of $3$ will be hashed to buckets that are multiples of $3$: Keys $\{0,12,24,36,...\}$ will be hashed to bucket $0$. Keys $\{3,15,27,39,...\}$ will be hashed to bucket $3$. Keys $\{6,18,30,42,...\}$ will be hashed to bucket $6$. Keys $\{9,21,33,45,...\}$ will be hashed to bucket $9$. If $K$ is uniformly distributed (i.e., every key in $K$ is equally likely to occur), then the choice of $m$ is not so critical. But, what happens if $K$ is not uniformly distributed? Imagine that the keys that are most likely to occur are the multiples of $3$. In this case, all of the buckets that are not multiples of $3$ will be empty with high probability (which is really bad in terms of hash table performance). This situation is more common that it may seem. Imagine, for instance, that you are keeping track of objects based on where they are stored in memory. If your computer's word size is four bytes, then you will be hashing keys that are multiples of $4$. Needless to say that choosing $m$ to be a multiple of $4$ would be a terrible choice: you would have $3m/4$ buckets completely empty, and all of your keys colliding in the remaining $m/4$ buckets. In general: Every key in $K$ that shares a common factor with the number of buckets $m$ will be hashed to a bucket that is a multiple of this factor. Therefore, to minimize collisions, it is important to reduce the number of common factors between $m$ and the elements of $K$. How can this be achieved? By choosing $m$ to be a number that has very few factors: a prime number . | {
"source": [
"https://cs.stackexchange.com/questions/11029",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/4348/"
]
} |
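A tiny experiment (my own Python sketch, not part of the answer) that makes the skew concrete: hash only multiples of 3 into 12 buckets versus 11 buckets and see which buckets are ever used.
from collections import Counter

def used_buckets(keys, m):
    """Return the sorted list of buckets hit by h(k) = k mod m."""
    return sorted(Counter(k % m for k in keys))

keys = range(0, 101, 3)        # only multiples of 3, as in the answer
print(used_buckets(keys, 12))  # [0, 3, 6, 9]: 8 of the 12 buckets stay empty
print(used_buckets(keys, 11))  # [0, 1, ..., 10]: every bucket gets used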
11,116 | In a depth first tree, there are the edges that define the tree (i.e. the edges that were used in the traversal). There are some leftover edges connecting some of the other nodes. What is the difference between a cross edge and a forward edge? From wikipedia: Based on this spanning tree, the edges of the original graph can be divided into three classes: forward edges, which point from a node of the tree to one of its descendants, back edges, which point from a node to one of its ancestors, and cross edges, which do neither. Sometimes tree edges, edges which belong to the spanning tree itself, are classified separately from forward edges. If the original graph is undirected then all of its edges are tree edges or back edges. Doesn't an edge that is not used in the traversal that points from one node to another establish a parent-child relationship? | Wikipedia has the answer: All types of edges appear in this picture. Trace out DFS on this graph (the nodes are explored in numerical order), and see where your intuition fails. This will explain the diagram: Forward edge: (u, v), where v is a descendant of u, but
not a tree edge. It is a non-tree edge that connects a vertex to a descendant in a DFS-tree. Cross edge: any other edge. Can go between vertices in
the same depth-first tree or in different depth-first trees (informally). Formally, it is any other edge in graph G: it connects vertices in two different DFS-trees, or two vertices in the same DFS-tree neither of which is an ancestor of the other. (A code sketch that classifies every edge follows this row.) | {
"source": [
"https://cs.stackexchange.com/questions/11116",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/863/"
]
} |
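Since the picture referenced above is not reproduced here, the following sketch (my own, assuming a directed graph given as an adjacency dict) classifies every edge with discovery/finish times, which is the standard way to separate forward edges from cross edges:
def classify_edges(graph):
    """Return a dict mapping each edge (u, v) to 'tree', 'back', 'forward' or 'cross'."""
    disc, fin, kind = {}, {}, {}
    time = [0]

    def dfs(u):
        time[0] += 1
        disc[u] = time[0]
        for v in graph.get(u, []):
            if v not in disc:            # v undiscovered: (u, v) is a tree edge
                kind[(u, v)] = "tree"
                dfs(v)
            elif v not in fin:           # v is an ancestor still on the recursion stack
                kind[(u, v)] = "back"
            elif disc[v] > disc[u]:      # v is an already-finished descendant of u
                kind[(u, v)] = "forward"
            else:                        # finished and discovered earlier: another branch/tree
                kind[(u, v)] = "cross"
        time[0] += 1
        fin[u] = time[0]

    for u in graph:                      # restart DFS to cover every depth-first tree
        if u not in disc:
            dfs(u)
    return kind

g = {1: [2, 3, 5], 2: [3], 3: [1], 5: [3]}   # a small made-up example graph
print(classify_edges(g))
# {(1, 2): 'tree', (2, 3): 'tree', (3, 1): 'back', (1, 3): 'forward', (1, 5): 'tree', (5, 3): 'cross'}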
11,181 | I know that there exists a Turing Machine, if a function is computable. Then how to show that the function is not computable or there aren't any Turing Machine for that. Is there anything like a Pumping lemma? Similarly, how can we show a language is not recursively enumerable? | Before I answer your general question, let me first take a step back, give some history background, and answer a preliminary question: Do non-computable functions even exist? [notational note: we can relate any function $f$ with a language $L_f=\{ (x,y) \mid y=f(x) \}$ and then discuss the decidability of $L_f$ rather than the computability of $f$ ] Undecidable languages do exist There are some languages that no Turing machine can decide. The argument is simple: there are "only" countably many $(\aleph_0)$ different TMs, but uncountably many $(\aleph)$ different languages. Thus there are at most $\aleph_0$ decidable languages, and the rest (infinitely many) are undecidable. Further reading: "Why are there more non-computable functions than computable ones?" and "Why are the total functions not enumerable?" . In order to put our hand on a specific undecidable language, the idea is to use a technique named diagonalization (Georg Cantor, 1873) which was originally used to show that there are more real numbers than integers, or in other words, that $\aleph > \aleph_0$ . The idea for constructing the first undecidable language is simple: we list all Turing machines (which is possible since they are recusively enumerable!), and create a language that disagrees with each TM on at least one input. $$
\begin{array}{c|ccccc}
& \varepsilon & 0 & 1& 00 & \cdots \\ \hline
M_1 & \color{red}{1} & 0 & 1 & 0 & 1 \\
M_2 & 0 & \color{red}{1} & 0 & 0 & 0 \\
M_3 & 0 & 0 & \color{red}{0} & 1 & 0 \\
\vdots & & & & \ddots &
\end{array}
$$ In the above, each row is one TM and each column is one input. The value of the cell is 0 if the TM rejects or never halts, and 1 if the TM accepts that input.
We define the language $D$ to be such that $D$ contains the $i$ -th input if and only if the $i$ -th TM does not accept that input. Following the table above, $\varepsilon \notin D$ since $M_1$ accepts $\varepsilon$ . Similarly, $0\notin D$ , but $1\in D$ since $M_3$ does not accept $1$ . Now, assume $M_k$ decides $D$ and look up line $k$ in the table: if there is $1$ in the $k$ -th column, then $M_k$ accepts that input but it is not in $D$ , and if there is a $0$ there, the input is in $D$ but $M_k$ does not accept it. Therefore, $M_k$ doesn't decide $D$ , and we reach a contradiction. Now for your question. There are several ways to prove that a language is undecidable. I'll try to touch the most common ones. 1. Direct proof The first method is to directly show that a language is undecidable, by showing that no TM can decide it. This usually follows the diagonalization method shown above. Example. Show that the (complement of the) diagonal language $$\overline {L_D} = \{ \langle M \rangle \mid \langle M \rangle \notin L(M) \}$$ is undecidable. Proof. Assume $\overline {L_D}$ is decidable, and let $M_D$ be its decider. There are two cases: $M_D$ accepts $\langle M_D \rangle$ : but then, $\langle M_D \rangle \in L(M_D)$ so $\langle M_D \rangle \notin \overline {L_D}$ . So this can't happen if $M_D$ decides $\overline {L_D}$ . $M_D$ does not accept $\langle M_D \rangle$ : so $\langle M_D \rangle \notin L(M_D)$ and thus $\langle M_D \rangle \in \overline {L_D}$ . But if it is in $\overline {L_D}$ , $M_D$ should have accepted it, and we reach a contradiction again. 2. Closure properties Sometimes we can use closure properties to show some language is not decidable, based on other languages we already know not to be decidable. Specifically,
if $L$ is not decidable (we write $L\notin R$ ), then also its complement $\overline{L}$ is undecidable: if there is a decider $M$ for $\overline{L}$ we could just use it to decide $L$ by accepting whenever $M$ rejects and vice versa. Since $M$ always halts with an answer (it is a decider), we can always invert its answer. Conclusion: The diagonal language ${L_D} = \{ \langle M \rangle \mid \langle M \rangle \in L(M) \}$ is undecidable, $L_D \notin R$ . A similar argument can be applied by noting that if both $L$ and its complement $\overline{L}$ are recursively enumerable, both are decidable. This is particularly useful if we want to prove that a language is not recursively enumerable, a stronger property than undecidability. 3. Reducing from an undecidable problem Usually, it's quite difficult to directly prove that a language is undecidable (unless it is already constructed in a "diagonal" fashion). The last and most common method for proving undecidability is to use another language which we already know to be undecidable. The idea is to reduce one language to another: to show that if one is decidable, then the other must also be decidable, but one of them is already known to be undecidable, which leads to the conclusion that the first one is also undecidable. Read more about reductions in "What are common techniques for reducing problems to each other?" . Example. Show that the halting problem $$HP = \{ \langle M,x \rangle \mid M \text{ halts on } x \}$$ is undecidable. Proof. We know that $L_D$ is undecidable. We reduce $L_D$ to $HP$ (this is denoted $L_D \le HP$ ), that is, we show that if $HP$ were decidable we could use its decider to decide $L_D$ , which is a contradiction. The reduction works by converting a candidate $w$ for $L_D$ (i.e. an input for any potential decider/acceptor for $L_D$ ) to a candidate $w'$ for $HP$ such that $w\in L_D$ if and only if $w' \in HP$ . We make sure that this conversion is computable. Thus, deciding $w'$ tells us whether or not $w\in L_D$ , so if we can decide HP we would also be able to decide $L_D$ .¹ The conversion is as follows. Take some $w=\langle M \rangle$ , and output $w'=\langle M' , \langle M \rangle\rangle$ ,² where $M'$ is a TM that behaves just like $M$ , but if $M$ rejects, then $M'$ goes into an infinite loop. Let's see that $w,w'$ satisfy the requirements. If $w\in L_D$ , it means that $M$ halts and accepts the input $\langle M\rangle$ . Therefore, $M'$ also halts and accepts the input $\langle M\rangle$ .
Thus, $\langle M', \langle M\rangle \rangle \in HP$ . On the other hand, if $w\notin L_D$ then $M$ either rejects or never halts on $\langle M\rangle$ . In both cases $M'$ will go into an infinite loop on $\langle M\rangle$ . Thus, $\langle M', \langle M\rangle \rangle \notin HP$ , and we are done showing that $w\in L_D$ if and only if $w'\in HP$ , and have thus shown that $HP\notin R$ . Further reading: many examples for reductions and proving undecidablility of languages can be found via the reductions tag. There are some more restriction on the reduction to be valid. The conversion itself must be computable , and well defined for any input. An input of $HP$ looks like $\langle M,x\rangle$ , where $M$ is a TM and $x$ is some string. So here we choose the string $x$ to be an encoding of the machine $M$ , which is just some string.. 4. Rice's Theorem "So every time we wish to prove $L$ is undecidable, we need to reduce $L_D$ (or $HP$ ) to it? Isn't there any shortcut?" Well, in fact, there is. This is Rice's Theorem . The theorem says that many languages that have a certain structure, are undecidable. Because all these languages have this certain structure, we can do the reduction once and apply it to any language that admits a similar structure. The theorem is formally stated in the following way, Theorem (Rice). Given a property $\emptyset \subsetneq S \subsetneq RE$ , the following language $L_S$ is undecidable $$ L_S = \{ \langle M \rangle \mid L(M) \in S \} $$ The set $S$ is a subset of languages in $RE$ ; we call it a property because it describes a property of the accepted language $L(M)$ . All the TMs whose language satisfies this property belong to $L_S$ . For instance, $S$ can be the property that the accepted language $L(M)$ contains exactly two words: $$S_2 = \{ L \mid |L| =2 , L \in RE\}.$$ In this case $L_{S_2}$ is the set of all TMs whose language consists of exactly two words: $$L_{S_2} = \{\langle M \rangle \mid L(M) \in S\} = \{\langle M \rangle \mid |L(M)|=2\}.$$ The property can be very simple, but it cannot be all the RE languages, or none of the RE languages. If $S=\emptyset$ or $S=RE$ then the property is said to be trivial , and the induced $L_S$ is computable. An example for a simple $S$ is one the contains only a single language, say $S_{complete}=\{ \Sigma^*\}$ . Note that although $S$ contains only a single language, there are infinitely many machines $M$ whose language is $\Sigma^*$ , so $L_{S_{compete}}$ is infinite, and undecidable. The theorem is very powerful to prove undecidability of many languages. Example. The language $L_\emptyset = \{ \langle M \rangle \mid M \text{ never reaches the accepting state} \}$ , is undecidable Proof. We can write $L_\emptyset$ as $\{ \langle M \rangle \mid L(M)=0 \}$ , that is $L_\emptyset=L_S$ for the property $S=\{ L \in RE, |L|=0\}$ . This is a non-trivial property (it includes the language $L=\emptyset$ , but does not include, for instance, the language $L=\{1, 11, 111,\ldots\}$ . Therefore, by Rice's Theorem, $L_\emptyset$ is undecidable. We now prove the theorem. As mentioned above, we are going to show a reduction from $HP$ to $L_S$ (for any arbitrary non-trivial $S$ ). Proof. Let $S$ be a non-trivial property, $\emptyset \subsetneq S \subsetneq RE$ . We show $HP \le L_S$ , that is, we reduce $HP$ to $L_S$ so that if we can decide $L_S$ we will be able to decide $HP$ (which we know to be impossible, therefore, $L_S$ cannot be decidable). 
In the proof below we assume that the empty language is not part of $S$ , that is $\emptyset \notin S$ . (if the empty language is in $S$ , an equivalent proof works on the complement property $\overline S = RE \setminus S$ , I'll omit the details). Since $S$ is nontrivial, it includes at least one language; let's call that language $L_0$ and assume $M_0$ is a machine that accepts $L_0$ (such machine exists, since $S$ includes only languages in RE). Recall that in such a reduction (See section 3 above), we need to show how to convert an input $w$ for $HP$ into an input $w'$ for $L_S$ so that $$ w\in HP \quad \text{ if and only if }\quad w' \in L_S$$ Let $w=(\langle M \rangle, x)$ , we convert it into $w'=\langle M' \rangle$ where the description of the machine $M'$ (on an input $x'$ ) is the following: Run $M$ on $x$ . If Step 1 above halts, run $M_0$ on $x'$ and accept/reject accordingly. We see that this conversion is valid. First note that it is simple to construct the description of $M'$ given $w=(\langle M \rangle, x)$ . If $w\in HP$ , then $M$ halts on $x$ . In this case, $M'$ proceeds to step 2, and behaves just like $M_0$ . Therefore its accepted language is $L(M')=M_0\in S$ . Therefore, $w'=\langle M' \rangle \in L_S$ . If $w\notin HP$ then $M$ loops on $x$ . This case, $M'$ loops on any input $x'$ — it gets stuck in step 1. The language accepted by $M'$ in this case is empty, $L(M')=\emptyset \notin S$ . Therefore, $w'=\langle M' \rangle \notin L_S$ . 4.1 The Extended Rice Theorem Rice's theorem gives us an easy way to show that a certain language $L$ that satisfies certain properties is undecidable, that is, $L \notin R$ . The extended version of Rice's theorem allows us to determine whether the language is recursively-enumerable or not, that is, determines whether $L \notin RE$ , by checking if $L$ satisfies some additional properties. Theorem (Rice, extended). Given a property $S \subseteq RE$ , the language $$ L_S = \{ \langle M \rangle \mid L(M) \in S \} $$ is recursively-enumerable ( $L_S \in RE$ ) if and only if all the following three statements jointly hold For any two $L_1, L_2 \in RE$ , if $L_1 \in S$ and also $L_1 \subseteq L_2$ then also $L_2 \in S$ . If $L_1\in S$ then there exists a finite subset $L_2 \subseteq L_1$ so that $L_2 \in S$ . The set of all finite languages in $S$ is enumerable (in other words: there is a TM that enumerates all the finite languages $L\in S$ ). Proof. This is an "if and only if" theorem, and we should prove both its directions. First, we show that if one of the conditions (1,2,3) does not hold, then $L_S \notin RE$ . After that we will show that if all three conditions hold simultaneously, then $L_S \in RE$ . If (3) does not hold, then $L_S \notin RE$ . Let's assume that $L_S \in RE$ , and we'll see that we have a way to accept any finite languages in $S$ (and thus, the set of all these languages is RE), thus condition (3) holds and we reach a contradiction. How to decide if a finite $L$ belongs to $S$ or not? Easily – we use the description of $L$ to construct a machine $M_L$ that accepts only the words in $L$ , and now we run the machine of $L_S$ on $M_L$ (remember - we assumed $L_S\in RE$ , so there is a machine that accepts $L_S$ !). If $L\in S$ then $\langle M_L \rangle \in L_S$ and since $L_S\in RE$ , its machine will say yes on the input $\langle M_L \rangle$ , and we are done. If (1) does not hold, then $L_S \notin RE$ . We assume that $L_S \in RE$ and we'll show that we have a way to decide $HP$ , leading to a contradiction. 
Because condition (1) doesn't hold, there is a language $L_1 \in S$ and a superset of it, $L_2 \supseteq L_1$ so that $L_2 \notin S$ . Now we are going to repeat the argument used in Section 4 to decide $HP$ : given an input $(\langle M \rangle,x)$ for $HP$ , we construct a machine $M'$ whose language is $L_1$ if $(\langle M \rangle,x)\notin HP$ or otherwise, its language is $L_2$ . Then, we can decide $HP$ : either $M$ halts on $x$ , or the RE-machine for $L_S$ accepts $M'$ ; we can run both in parallel and are guaranteed that at least one will halt. Let's give the details of constructing $M'$ (on input $x'$ ): Do the following in parallel: 1.1 run $M$ on $x$ . 1.2 run the machine of $L_1$ on $x'$ If 1.2 halts and accepts - accept. If 1.1 halts: run the machine of $L_2$ on $x'$ . Why does this work?
If $(\langle M \rangle,x)\notin HP$ then 1.1 never halts, and $M'$ accepts exactly all the inputs that are being accepted at step 1.2, so $L(M')=L_1$ .
On the other hand, if $(\langle M \rangle,x)\in HP$ then, at some point step 1.1 halts and $M'$ accepts exactly $L_2$ . It may happen that $1.2$ accepts beforehand, but since $L_1 \subseteq L_2$ , this doesn't change the language of $M'$ in this case. If (2) does not hold, then $L_S \notin RE$ . Again, we will assume $L_S\in RE$ and show that $HP$ becomes decidable, which is a contradiction. If condition (2) doesn't hold, then there exists $L_1\in S$ , all its finite subsets $L_2 \subseteq L_1$ satisfy $L_2 \notin S$ (note that $L_1$ must be infinite, since $L_1\subseteq L_1$ ).
As in the above, in order to decide $HP$ for a given input $(\langle M \rangle,x)$ , we construct a machine $M'$ whose language is $L_1$ if $(\langle M \rangle,x)\notin HP$ and some finite $L_2$ otherwise. The contradiction follows in a similar way as above. The construction of this machine is quite similar to the previous $M'$ we constructed. The machine $M'$ (on input $x'$ ) does: Runs $M$ on $x$ for $|x'|$ steps. If $M$ halts during step 1 – reject Otherwise, run the machine of $L_1$ on $x'$ . It holds that, if $(\langle M \rangle,x)\in HP$ , then at some point, say after 1000 steps, $M$ halts on $x$ . Therefore, step 1 will halt on (and reject) any input $x'$ of length $>1000$ . Therefore, in this case, $L(M')$ is finite . Also note that $L(M') \subseteq L_1$ , and in particular, by our assumptions on the invalidity of condition (2), we have that $L(M') \notin S$ . On the other hand, if $(\langle M \rangle,x)\notin HP$ , then step 1 never halts, and we never reject at step 2. In this case it is easy to see that $L(M')=L_1$ and in particular, $L(M')\in S$ .
If $L(M) \in S$ then it has a finite subset $L_j \in S$ , and once $M_{\text{enum }S}$ outputs that subset, step 2.2/2.3 will find that $M$ accepts all the words in that language and accept. On the other hand, if $L(M) \notin S$ it cannot be accepting all the words in $L_i$ for any $i=1,2,...$ . Indeed, by condition (1), any $L' \supseteq L_i$ is also in $S$ , so if $M$ accepts all the words in $L_i$ for some $i$ , then $L(M)\supseteq L_i$ and thus $L(M) \in S$ , in contradiction. Finally, note that the following is a simple (and very useful) corollary of the above: Corollary (Rice, extended). Given a non trivial property $S \subsetneq RE$ , so that $\emptyset \in S$ , the language $$ L_S = \{ \langle M \rangle \mid L(M) \in S \} $$ is not recursively-enumerable, that is, $L_S \notin RE$ . | {
"source": [
"https://cs.stackexchange.com/questions/11181",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/947/"
]
} |
11,263 | There is this standard algorithm for finding longest path in undirected trees using two depth-first searches: Start DFS from a random vertex $v$ and find the farthest vertex from it; say it is $v'$. Now start a DFS from $v'$ to find the vertex farthest from it. This path is the longest path in the graph. The question is, can this be done more efficiently? Can we do it with a single DFS or BFS? (This can be equivalently described as the problem of computing the diameter of an undirected tree.) | We perform a depth-first search in post order and aggregate results on the way,
that is we solve the problem recursively. For every node $v$ with children $u_1,\dots,u_k$ (in the search tree) there are
two cases: The longest path in $T_v$ lies in one of the subtrees $T_{u_1},\dots,T_{u_k}$. The longest path in $T_v$ contains $v$. In the second case, we have to combine the one or two longest paths from $v$ into
one of the subtrees; these are certainly those to the deepest leaves. The length
of the path is then $H_{(k)} + H_{(k-1)} + 2$ if $k>1$, or $H_{(k)}+1$ if $k=1$,
with $H = \{ h(T_{u_i}) \mid i=1,\dots,k\}$ the multi set of subtree heights¹. In pseudo code, the algorithm looks like this: procedure longestPathLength(T : Tree) = helper(T)[2]
/* Recursive helper function that returns (h,p)
* where h is the height of T and p the length
* of the longest path of T (its diameter) */
procedure helper(T : Tree) : (int, int) = {
if ( T.children.isEmpty ) {
return (0,0)
}
else {
// Calculate heights and longest path lengths of children
recursive = T.children.map { c => helper(c) }
heights = recursive.map { p => p[1] }
paths = recursive.map { p => p[2] }
// Find the two largest subtree heights
height1 = heights.max
if (heights.length == 1) {
height2 = -1
} else {
height2 = (heights.remove(height1)).max
}
// Determine length of longest path (see above)
longest = max(paths.max, height1 + height2 + 2)
return (height1 + 1, longest)
}
} $A_{(k)}$ is the $k$-th smallest value in $A$ (order statistic). (A runnable version of the pseudocode above follows this row.) | {
"source": [
"https://cs.stackexchange.com/questions/11263",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/4980/"
]
} |
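A runnable translation of the pseudocode above (my own sketch; the tree is assumed to be given as a dict mapping each node to its list of children):
def tree_diameter(children, root):
    """Return the number of edges on a longest path in the rooted tree."""
    def helper(v):
        # Returns (height of the subtree at v, longest path length inside it).
        kids = children.get(v, [])
        if not kids:
            return 0, 0
        results = [helper(c) for c in kids]
        heights = sorted((h for h, _ in results), reverse=True)
        best_child_path = max(p for _, p in results)
        # A longest path through v uses the one or two deepest subtrees (see above).
        through_v = heights[0] + 1 if len(heights) == 1 else heights[0] + heights[1] + 2
        return heights[0] + 1, max(best_child_path, through_v)

    return helper(root)[1]

# Example: edges 1-2, 2-3, 2-4, 4-5 give diameter 3 (the path 3-2-4-5).
example = {1: [2], 2: [3, 4], 4: [5]}
print(tree_diameter(example, 1))  # 3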
11,458 | There are two quicksort partition methods mentioned in Cormen: (the argument A is the array, and [p, r] is the range, inclusive, to perform the partition on. The returned value is the index to the pivot after the partition.) Hoare-Partition(A, p, r)
x = A[p]
i = p - 1
j = r + 1
while true
repeat
j = j - 1
until A[j] <= x
repeat
i = i + 1
until A[i] >= x
if i < j
swap( A[i], A[j] )
else
return j and: Lomuto-Partition(A, p, r)
x = A[r]
i = p - 1
for j = p to r - 1
if A[j] <= x
i = i + 1
swap( A[i], A[j] )
swap( A[i + 1], A[r] )
return i + 1 Disregarding the method of choosing the pivot, in what situations is one preferable to the other? I know for instance that Lomuto preforms relatively poorly when there is a high percentage of duplicate values ( i.e. where say more than 2/3rds the array is the same value ), whereas Hoare performs just fine in that situation. What other special cases make one partition method significant better than the other? | Pedagogical Dimension Due to its simplicity Lomuto's partitioning method might be easier to implement. There is a nice anecdote in Jon Bentley's Programming Pearl on Sorting: “Most discussions of Quicksort use a partitioning scheme based on two approaching indices [...] [i.e. Hoare's]. Although the basic idea of that scheme is straightforward, I have always found the details tricky - I once spent the better part of two days chasing down a bug hiding in a short partitioning loop. A reader of a preliminary draft complained that the standard two-index method is in fact simpler than Lomuto's and sketched some code to make his point; I stopped looking after I found two bugs.” Performance Dimension For practical use, ease of implementation might be sacrificed for the sake of efficiency. On a theoretical basis, we can determine the number of element comparisons and swaps to compare performance. Additionally, actual running time will be influenced by other factors, such as caching performance and branch mispredictions. As shown below, the algorithms behave very similar on random permutations except for the number of swaps . There Lomuto needs thrice as many as Hoare! Number of Comparisons Both methods can be implemented using $n-1$ comparisons to partition an array of length $n$ . This is essentially optimal, since we need to compare every element to the pivot for deciding where to put it. Number of Swaps The number of swaps is random for both algorithms, depending on the elements in the array. If we assume random permutations , i.e. all elements are distinct and every permutation of the elements is equally likely, we can analyze the expected number of swaps. As only relative order counts, we assume that the elements are the numbers $1,\ldots,n$ . That makes the discussion below easier since the rank of an element and its value coincide. Lomuto's Method The index variable $j$ scans the whole array and whenever we find an element $A[j]$ smaller than pivot $x$ , we do a swap. Among the elements $1,\ldots,n$ , exactly $x-1$ ones are smaller than $x$ , so we get $x-1$ swaps if the pivot is $x$ . The overall expectation then results by averaging over all pivots. Each value in $\{1,\ldots,n\}$ is equally likely to become pivot (namely with prob. $\frac1n$ ), so we have $$
\frac1n \sum_{x=1}^n (x-1) = \frac n2 - \frac12\;.
$$ swaps on average to partition an array of length $n$ with Lomuto's method. Hoare's Method Here, the analysis is slightly more tricky: Even fixing pivot $x$ , the number of swaps remains random. More precisely: The indices $i$ and $j$ run towards each other until they cross, which always happens at $x$ (by correctness of Hoare's partitioning algorithm!). This effectively divides the array into two parts: A left part which is scanned by $i$ and a right part scanned by $j$ . Now, a swap is done exactly for every pair of “misplaced” elements, i.e. a large element (larger than $x$ , thus belonging in the right partition) which is currently located in the left part and a small element located in the right part.
Note that this pair forming always works out, i.e. the number of small elements initially in the right part equals the number of large elements in the left part. One can show that the number of these pairs is hypergeometrically $\mathrm{Hyp}(n-1,n-x,x-1)$ distributed: For the $n-x$ large elements we randomly draw their positions in the array and have $x-1$ positions in the left part.
Accordingly, the expected number of pairs is $(n-x)(x-1)/(n-1)$ given that the pivot is $x$ . Finally, we average again over all pivot values to obtain the overall expected number of swaps for Hoare's partitioning: $$
\frac1n \sum_{x=1}^n \frac{(n-x)(x-1)}{n-1} = \frac n6 - \frac13\;.
$$ (A more detailed description can be found in my master's thesis , page 29.) Memory Access Pattern Both algorithms use two pointers into the array that scan it sequentially . Therefore both behave almost optimally w.r.t. caching. Equal Elements and Already Sorted Lists As already mentioned by Wandering Logic, the performance of the algorithms differs more drastically for lists that are not random permutations. On an array that is already sorted, Hoare's method never swaps, as there are no misplaced pairs (see above), whereas Lomuto's method still does its roughly $n/2$ swaps! The presence of equal elements requires special care in Quicksort.
(I stepped into this trap myself; see my master's thesis , page 36, for a “Tale on Premature Optimization”)
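To make these effects concrete, here is a small Python sketch (my own quick sanity check, not code from the thesis) that counts element swaps for a textbook Lomuto partition and for one common textbook crossing-pointer (Hoare-style) variant with the first element as pivot. The exact constants depend on implementation details such as how the pivot is chosen and excluded, so the empirical averages should only be close to the $\frac n2$ and $\frac n6$ terms derived above; the all-equal input it also exercises is discussed in detail right below.
import random
def lomuto_swap_count(a):
    # textbook Lomuto with the last element as pivot; counts swaps done in the main loop
    a = list(a)
    x, i, swaps = a[-1], -1, 0
    for j in range(len(a) - 1):
        if a[j] <= x:
            i += 1
            a[i], a[j] = a[j], a[i]
            swaps += 1
    return swaps
def hoare_swap_count(a):
    # crossing-pointer scheme with the first element as pivot; i and j scan towards each other
    a = list(a)
    x, i, j, swaps = a[0], -1, len(a), 0
    while True:
        i += 1
        while a[i] < x:
            i += 1
        j -= 1
        while a[j] > x:
            j -= 1
        if i >= j:
            return swaps
        a[i], a[j] = a[j], a[i]
        swaps += 1
n, trials = 1000, 300
perms = [random.sample(range(n), n) for _ in range(trials)]
print(sum(lomuto_swap_count(p) for p in perms) / trials)  # about n/2 on random permutations
print(sum(hoare_swap_count(p) for p in perms) / trials)   # about n/6 on random permutations
print(lomuto_swap_count([0] * n))  # n - 1 swaps on an all-equal array
print(hoare_swap_count([0] * n))   # about n/2 swaps on an all-equal array, but a balanced split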
Consider as extreme example an array filled with $0$s. On such an array, Hoare's method performs a swap for every pair of elements - which is the worst case for Hoare's partitioning - but $i$ and $j$ always meet in the middle of the array. Thus, we have optimal partitioning and the total running time remains in $\mathcal O(n\log n)$ . Lomuto's method behaves much more stupidly on the all $0$ array: The comparison A[j] <= x will always be true, so we do a swap for every single element ! But even worse: After the loop, we always have $i=n$ , so we observe the worst case partitioning, making the overall performance degrade to $\Theta(n^2)$ ! Conclusion Lomuto's method is simple and easier to implement, but should not be used for implementing a library sorting method. Clarification In this answer, I explained why a good implementation of the “crossing-pointer scheme” from Hoare's partitioning method is superior to the simpler scheme of Lomuto's method, and I stand by everything I said on that topic. Alas, this is strictly speaking not what the OP was asking! The pseudocode for Hoare-Partition as given above does not have the desirable properties I lengthily praised, since it fails to exclude the pivot element from the partitioning range. As a consequence, the pivot is “lost” in the swapping and cannot be put into its final position after partitioning, and hence cannot be excluded from recursive calls.
(That means the recursive calls no longer fulfill the same randomness assumptions and the whole analysis seems to break down! Robert Sedgewick's PhD dissertation discusses this issue in detail.) For pseudocode of the desirable implementation analyzed above, see my master's thesis, Algorithm 1. (That code is due to Robert Sedgewick.) | {
"source": [
"https://cs.stackexchange.com/questions/11458",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/6728/"
]
} |
11,475 | As I understand, the assignment problem is in P as the Hungarian algorithm can solve it in polynomial time - $O(n^3)$. I also understand that the assignment problem is an integer linear programming problem, but the Wikipedia page states that this is NP-Hard. To me, this implies the assignment problem is in NP-Hard. But surely the assignment problem can't be in both P and NP-Hard, otherwise P would equal NP? Does the Wikipedia page simply mean that the general algorithm for solving all ILP problems is NP-Hard? A few other sources state that ILP is NP-Hard so this is really confusing my understanding of complexity classes in general. | If a problem is NP-Hard it means that there exists a class of instances of that problem which is NP-Hard.
It is perfectly possible for other specific classes of instances to be solvable in polynomial time. Consider for example the problem of finding a 3-coloration of a graph . It is a well-known NP-Hard problem. Now imagine that its instances are restricted to graphs that are, for example, trees. Clearly you can easily find a 3-coloration of a tree in polynomial time (indeed you can also find a 2-coloration). Consider decision problems for a second.
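As a concrete illustration of such an easy special case, here is a minimal Python sketch (the function name two_color_tree is just illustrative) that properly 2-colors a tree in linear time with a breadth-first traversal; since a proper 2-coloration is in particular a valid 3-coloration, the restricted problem is solved efficiently even though general 3-coloration is NP-Hard.
from collections import deque
def two_color_tree(adj, root=0):
    # adj: adjacency list of an undirected tree, e.g. {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
    color = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in color:
                color[v] = 1 - color[u]  # alternate colors along every edge
                queue.append(v)
    return color
print(two_color_tree({0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}))  # {0: 0, 1: 1, 2: 1, 3: 0}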
A method of proving the hardness of a decision problem $P$ is devising a polynomial (Karp) reduction from another problem $Q$ that is known to be NP-Hard.
In this reduction you show that there exists a function $f$ that maps each instance $q$ of the problem $Q$ to an instance of the problem $P$ such that:
$q$ is a yes instance for $Q \iff f(q)$ is a yes instance for $P$.
This implies that solving $f(q)$ must be "at least as difficult" as solving $q$ itself. Notice how it's not required for the image of $f$ to be equal to the set of instances of $P$ . Therefore it's perfectly possible for problem $P$ restricted to some subset of instances to not be hard. To return to your original question: The assignment problem can be solved in polynomial time, i.e., a solution to each instance of the assignment problem can be computed in polynomial time. ILP is NP-Hard: in general it might be hard to compute a solution to an ILP problem, i.e. there are instances of ILP that are hard. Some specific instances of ILP can be solved in polynomial time. | {
"source": [
"https://cs.stackexchange.com/questions/11475",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/1554/"
]
} |
11,667 | WP has an adequate discussion of paging , which I think I understand.. However I am confused by the articles repeated use of the term Page Frame . I thought frames and pages were different things. Could someone please clarify the difference. | Short version: "page" means "virtual page" (i.e. a chunk of virtual address space) and "page frame" means "physical page" (i.e. a chunk of physical memory). That's it, pretty much. It's important to keep the two concepts distinct because at any given time, a page may not be backed by a page frame (it could be a zero-fill page which hasn't been accessed, or paged out to secondary memory), and a page frame may back multiple pages (sometimes in different address spaces, e.g. shared memory or memory-mapped files). | {
"source": [
"https://cs.stackexchange.com/questions/11667",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/7421/"
]
} |
11,836 | I'm wondering if there is a standard way of measuring the "sortedness" of an array? Would an array which has the median number of possible inversions be considered maximally unsorted? By that I mean it's basically as far as possible from being either sorted or reverse sorted. | No, it depends on your application. The measures of sortedness are often referred to as measures of disorder , which are functions from $N^{<N}$ to $\mathbb{R}$, where $N^{<N}$ is the collection of all finite sequences of distinct nonnegative integers. The survey by Estivill-Castro and Wood [1] lists and discusses 11 different measures of disorder in the context of adaptive sorting algorithms. The number of inversions might work for some cases, but is sometimes insufficient. An example given in [1] is the sequence $$\langle \lfloor n/2 \rfloor + 1, \lfloor n/2 \rfloor + 2, \ldots, n, 1, \ldots, \lfloor n/2 \rfloor \rangle$$ that has a quadratic number of inversions, but only consists of two ascending runs. It is nearly sorted, but this is not captured by inversions. [1] Estivill-Castro, Vladimir, and Derick Wood. "A survey of adaptive sorting algorithms." ACM Computing Surveys (CSUR) 24.4 (1992): 441-476. | {
"source": [
"https://cs.stackexchange.com/questions/11836",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/6728/"
]
} |
11,841 | I'm doing some research regarding NFAs and inclusion problems with them. I know that in general, the inclusion problems, and converting to an unambiguous NFA, are both PSPACE-complete. I'm wondering, are there any sub-classes of NFA for which these can be decided efficiently? In particular, the NFAs I'm looking at accept finite language where all words have the same Parikh vector. | No, it depends on your application. The measures of sortedness are often refered to as measures of disorder , which are functions from $N^{<N}$ to $\mathbb{R}$, where $N^{<N}$ is the collection of all finite sequences of distinct nonnegative integers. The survey by Estivill-Castro and Wood [1] lists and discusses 11 different measures of disorder in the context of adaptive sorting algorithms. The number of inversions might work for some cases, but is sometimes insufficient. An example given in [1] is the sequence $$\langle \lfloor n/2 \rfloor + 1, \lfloor n/2 \rfloor + 2, \ldots, n, 1, \ldots, \lfloor n/2 \rfloor \rangle$$ that has a quadratic number of inversions, but only consists of two ascending runs. It is nearly sorted, but this is not captured by inversions. [1] Estivill-Castro, Vladmir, and Derick Wood. "A survey of adaptive sorting algorithms." ACM Computing Surveys (CSUR) 24.4 (1992): 441-476. | {
"source": [
"https://cs.stackexchange.com/questions/11841",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/2253/"
]
} |
11,893 | I often hear phrases like 'true concurrency semantics' and 'true concurrency equivalences' without any references. What does those terms mean and why are they important? What are some examples of true concurrency equivalences and what is the need for them? E.g. in which cases they are more applicable than more standard equivalences (bisimulation, trace equivalence, etc)? | The term "true concurrency" arises in the theoretical study of concurrent and parallel computation. It is in contrast to interleaving concurrency. True concurrency is concurrency that cannot be reduced to interleaving. Concurrency is interleaved if at each step in the computation, only one atomic computing action (e.g. an exchange of messages between sender and receiver) can take place. Concurrency is true if more than one such atomic action take place in a step. The simplest way of distinguishing both is to look at the rule for parallel composition. In an interleaving based setting, it would look something like this: $$\frac{P \rightarrow P'}{P|Q \rightarrow P'|Q}$$ This rule enforces that only one process in a parallel composition can execute an atomic action. For true concurrency, a rule like the following would be more appropriate. $$\frac{P \rightarrow P'\quad Q \rightarrow Q'}{P|Q \rightarrow P'|Q'}$$ This rule allows both participants in a parallel composition to execute atomic actions. Why would one be interested in interleaved concurrency, when concurrency theory is really the study of systems that execute computation steps in parallel? The answer is, and that's a great insight, that for simple forms of message passing concurrency, true concurrency and interleaving based concurrency are not contextually distinguishable. In other words, interleaved concurrency behaves like true concurrency as far as observers can see. Interleaving is a good decomposition of true concurrency. Since interleaving is easier to handle in proofs, people often only study the simpler interleaving based concurrency (e.g. CCS and $\pi$-calculi). However, this simplicity disappears for concurrent computation with richer forms of observation (e.g. timed computation): the difference between true concurrency and interleaved concurrency becomes observable. Standard equivalences like bisimulations and traces have the same definitions for true and interleaving based concurrency. But they may or may not equate different processes, depending on the underlying calculus. Let me give an informal explanation of why interleaving and truly concurrent interaction are indistinguishable in simple process calculi. The setting is
a CCS or $\pi$-like calculus. Say we have a program $$
P \quad=\quad \overline{x} \ |\ \overline{y} \ |\ x.y.\overline{a} \ |\ y.\overline{b}
$$
Then we have the following truly concurrent reduction:
\begin{eqnarray*}
P &\rightarrow& y.\overline{a} \ |\ \overline{b}
\end{eqnarray*}
This reduction step can be matched by the following interleaved steps:
\begin{eqnarray*}
P &\rightarrow & \overline{x} \ |\ x.y.\overline{a} \ |\ \overline{b} \\
&\rightarrow & y.\overline{a} \ |\ \overline{b}
\end{eqnarray*}
The only difference between both is that the former takes one step, while the
latter two. But simple calculi cannot detect the number of steps used to reach
a process. At the same time, $P$ has the following second interleaved reduction sequence:
\begin{eqnarray*}
P &\rightarrow & \overline{y} \ |\ y.\overline{a} \ |\ y.\overline{b} \\
&\rightarrow & \overline{a} \ |\ y.\overline{b}
\end{eqnarray*}
But this is also a reduction sequence in a truly concurrent setting, as long
as true concurrency is not forced (i.e. interleaved executions are allowed even
when there is potential for more than one interaction at a time). | {
"source": [
"https://cs.stackexchange.com/questions/11893",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/147/"
]
} |
12,102 | I have an integer linear program (ILP) with some variables $x_i$ that are intended to represent boolean values. The $x_i$'s are constrained to be integers and to hold either 0 or 1 ($0 \le x_i \le 1$). I want to express boolean operations on these 0/1-valued variables, using linear constraints. How can I do this? More specifically, I want to set $y_1 = x_1 \land x_2$ (boolean AND), $y_2 = x_1 \lor x_2$ (boolean OR), and $y_3 = \neg x_1$ (boolean NOT). I am using the obvious interpretation of 0/1 as Boolean values: 0 = false, 1 = true. How do I write ILP constraints to ensure that the $y_i$'s are related to the $x_i$'s as desired? (This could be viewed as asking for a reduction from CircuitSAT to ILP, or asking for a way to express SAT as an ILP, but here I want to see an explicit way to encode the logical operations shown above.) | Logical AND: Use the linear constraints $y_1 \ge x_1 + x_2 - 1$ , $y_1 \le x_1$ , $y_1 \le x_2$ , $0 \le y_1 \le 1$ , where $y_1$ is constrained to be an integer. This enforces the desired relationship. (Pretty neat that you can do it with just linear inequalities, huh?) Logical OR: Use the linear constraints $y_2 \le x_1 + x_2$ , $y_2 \ge x_1$ , $y_2 \ge x_2$ , $0 \le y_2 \le 1$ , where $y_2$ is constrained to be an integer. Logical NOT: Use $y_3 = 1-x_1$ . Logical implication: To express $y_4 = (x_1 \Rightarrow x_2)$ (i.e., $y_4 = \neg x_1 \lor x_2$ ), we can adapt the construction for logical OR. In particular, use the linear constraints $y_4 \le 1-x_1 + x_2$ , $y_4 \ge 1-x_1$ , $y_4 \ge x_2$ , $0 \le y_4 \le 1$ , where $y_4$ is constrained to be an integer. Forced logical implication: To express that $x_1 \Rightarrow x_2$ must hold, simply use the linear constraint $x_1 \le x_2$ (assuming that $x_1$ and $x_2$ are already constrained to boolean values). XOR: To express $y_5 = x_1 \oplus x_2$ (the exclusive-or of $x_1$ and $x_2$ ), use linear inequalities $y_5 \le x_1 + x_2$ , $y_5 \ge x_1-x_2$ , $y_5 \ge x_2-x_1$ , $y_5 \le 2-x_1-x_2$ , $0 \le y_5 \le 1$ , where $y_5$ is constrained to be an integer. And, as a bonus, one more technique that often helps when formulating problems that contain a mixture of zero-one (boolean) variables and integer variables: Cast to boolean (version 1): Suppose you have an integer variable $x$ , and you want to define $y$ so that $y=1$ if $x \ne 0$ and $y=0$ if $x=0$ . If you additionally know that $0 \le x \le U$ , then you can use the linear inequalities $0 \le y \le 1$ , $y \le x$ , $x \le Uy$ ; however, this only works if you know an upper and lower bound on $x$ . Alternatively, if you know that $|x| \le U$ (that is, $-U \le x \le U$ ) for some constant $U$ , then you can use the method described here . This is only applicable if you know an upper bound on $|x|$ . Cast to boolean (version 2): Let's consider the same goal, but now we don't know an upper bound on $x$ . However, assume we do know that $x \ge 0$ . Here's how you might be able to express that constraint in a linear system. First, introduce a new integer variable $t$ . Add inequalities $0 \le y \le 1$ , $y \le x$ , $t=x-y$ . Then, choose the objective function so that you minimize $t$ . This only works if you didn't already have an objective function. 
If you have $n$ non-negative integer variables $x_1,\dots,x_n$ and you want to cast all of them to booleans, so that $y_i=1$ if $x_i\ge 1$ and $y_i=0$ if $x_i=0$ , then you can introduce $n$ variables $t_1,\dots,t_n$ with inequalities $0 \le y_i \le 1$ , $y_i \le x_i$ , $t_i=x_i-y_i$ and define the objective function to minimize $t_1+\dots + t_n$ . Again, this only works if nothing else needs to define an objective function (if, apart from the casts to boolean, you were planning to just check the feasibility of the resulting ILP, not try to minimize/maximize some function of the variables). For some excellent practice problems and worked examples, I recommend Formulating Integer Linear Programs:
A Rogues' Gallery . | {
"source": [
"https://cs.stackexchange.com/questions/12102",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/755/"
]
} |
12,587 | If not, then what does it mean when for some state $q$ and some symbol $a$, $\delta(q, a)$ does not exist? | You seem to have stumbled on a contentious issue. Apparently computer scientists like to argue. I certainly like to argue, so here goes! My answer is an unequivocal: No. A deterministic finite automata does not need a transition from every state for every symbol. The meaning when $\delta(q,a)$ does not exist is simply that the DFA does not accept the input string. While you can create a definition of DFA that requires that $\delta(q,a)$ does exist, it is simply not the case that a missing transition makes the resulting structure (whatever you call it) in any way nondeterministic as many of the commenters are claiming. If you are taking a course on automata theory then the next topic will be context-free languages and push-down automata where the distinction between nondeterministic and deterministic automata is crtical, and you need to use the correct definition of non-determinism. Non-determinism is associated with having more than one legal transition. I think we all agree with the following Wikipedia definition (which I'll show in just a second is slightly ambiguous): A deterministic finite automaton $M$ is a 5-tuple, ( $Q$ , $\Sigma$ , $\delta$ , $q_0$ , $F$ ), consisting of a finite set of states ( $Q$ ) a finite set of input symbols called the alphabet ( $\Sigma$ ) a transition function ( $\delta : Q \times \Sigma \rightarrow Q$ ) a start state ( $q_0 \in Q)$ a set of accept states ( $F \subseteq Q$ ). Let $w = a_1 a_2 \cdots a_n$ be a string over the alphabet $\Sigma$ . The automaton $M$ accepts the string $w$ if a sequence of states, $r_0, r_1, \ldots, r_n$ , exists in $Q$ with the following conditions: $r_0 = q_0$ $r_{i+1} = \delta(r_i, a_{i+1})$ , for $i = 0, \ldots, n−1$ $r_n \in F$ . The ambiguity, and the controversy is over the defintion of the transition function, $\delta$ (number "3" in the first bulleted list.) We all agree that what differentiates a DFA from an NFA is that $\delta$ is a function rather than a relation . But is $\delta$ a partial function or a total function ? The definition of the DFA works just fine if $\delta$ is a partial function. Given an input string, if you reach a state $q_i$ with an input symbol $a_j$ where there is no next state then the automata simply does not accept. Moreover when you extend this definition to create the definition of push-down automata it will be the case that you must make the distinction that push-down automata with transition functions that are partial functions are classified as deterministic, not nondeterministic. If the partial function bothers you then here is a trivial transformation that makes $\delta$ a total function. (This transformation is not like the subset construction algorithm, it adds at most O(1) states, is linear in the original number of states, and can be extended to work with PDAs. None of those facts is true of the subset construction algorithm.) add a state $q_{\mathrm{error}}$ for every pair $(q_i, s_j)$ where $\delta$ is undefined, define $\delta(q_i, s_j) = q_{\mathrm{error}}$ . This automata has a $\delta$ that is a total function and accepts and rejects exactly the same set of states that your original automata accepted and rejected. Edit, January 2019 Commenter @Alex Smart rightly critiques me for neither giving references, nor for explaining why we should care. 
So here goes: The reason we care about the exact definition of determinism vs non-determinism, is that some classes of non-deterministic automata are more powerful than their deterministic cousins, and some classes of non-deterministic automata are not more powerful than their deterministic cousins. For finite automata and Turing machines the deterministic and non-deterministic variants are of equivalent power. For pushdown automata there are languages where the distinction is important: There are NPDA that accept the language, and no DPDA accepts the language. For the linear bounded automata the question is (or was last time I checked) open. The increase of power of NPDA over DPDA comes from allowing multiple transitions, not from turning the transition function from a total function to a partial function. Books from the compiler community: Aho and Ullman, Principles of Compiler Design , 1977: First defines NFA (page 88) with a transition relation, then (p. 90-91): We say a finite automaton is deterministic if
1. It has no transitions on input $\epsilon$ .
2. For each state $s$ and input symbol $a$ , there is at most one edge labeled $a$ leaving $s$ . Aho, Sethi, and Ullman, Compilers, principles, tecniques, and tools , 1988 reprint, is similar, it first defines NFA with a transition relation, then (p. 115-116): A deterministic finite automata (DFA, for short) is a special case of a non-deterministic finitie automaton in which ... there is at most one edge labeled $a$ leaving $s$ . (Note that in the comments @Alex Smart says, "the dragon specifically mentions that the function is total." I assume he is talking about the later edition with co-author Lam, which I don't have access to at the moment.) Appel, Modern Compiler Implementation in Java , 1988 (p. 22): In a deterministic finite automaton (DFA), no two edges leaving from the same state are labeled with the same symbol. Appel then goes on to explain that when using DFA to recognize longest matches we explicitly make use of the missing transitions to decide when to stop (p. 23): when a dead state (a nonfinal state with no output transitions) is reached, the variables [which record the longest match we've seen so far] tell what token was matched, and where it ended. Books from the switching-theory community: Kohavi, Switching and Finite Automata Theory, 2/e , 1978, p. 611 says: Because a state diagram describes a deterministic machine, the next state transition must be determined uniquely by the present state and the presently scanned input symbol. I would typically interpret uniquely to mean "exactly one", not "no more than one". (I.e., Kohavi seems to be saying that determinism requires a total function) Books from the theory-of-computation community: Here it seems to be more common to define DFAs before NFAs, and require DFAs to have a total transition function, but then define NPDAs before DPDAs, and define "determinism" as being a restriction of the transition relation to having no-more-than-one entry for each state/symbol pair. This is true of Hopcroft and Ullman, 1979, Lewis and Papadimitriou, 1981, and, especially of Sipser, 2006, who uses the definition of DFA pedagogically to introduce precise formal definitions, and explain their importance and explicitly says (p.36): the transition function, $\delta$ , specifies exactly one next state for each possible combination of a state and an input symbol. This seems to follow the historical development. Deterministic finite automata were introduced in the 40s and 50s. Non-determinisitc finite automata were introduced in the paper by Rabin and Scott, "Finite automata and their decision problems, IBM J. Rsrch and Dvpt , 3(2):114-125, 1959. Following earlier authors, Rabin and Scott define deterministic finite automata (which they call ordinary automata) as having a transition function "defined on the Cartesian product $s\times\Sigma$ of all pairs of states and symbols." (Which I would interpret as meaning a total function). Interestingly Rabin and Scott, also define non-deterministic finite automata in terms of a total function! Page 120, Definition 9: A nondeterministic (finite) automaton ... is a system where ... $M$ is a function[!] of $S\times\Sigma$ with values in the set of all subsets of $S$ . That is: the transition function being total does not make the system deterministic! 
Sipser 2006 follows Rabin and Scott and uses a total transition function from states/symbols to the power set of states for his definitions of non-deterministic finite automata, non-deterministic PDA, and non-deterministic Turing Machines, but skips the topic of deterministic PDA. Both Hopcroft and Ullman, 1979, and Lewis and Papadimitriou, 1981 use partial functions in their definitions of deterministic PDAs. They first define NPDAs with a transition relation, and then when they get to PDAs, Lewis and Papadimitriou say (p. 135), A pushdown automaton is deterministic , intuitively speaking, if there is at most one transition applicable to each configuration. While Hopcroft and Ullman say (p. 112): The PDA ... is deterministic in the sense that at most one move is possible from any ID. | {
"source": [
"https://cs.stackexchange.com/questions/12587",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/8184/"
]
} |
13,287 | We mostly write programs in a high-level language. So while studying I came across assembly language. So an assembler converts assembly language to machine language and a compiler does the same with a high-level language. I found assembly language has instructions like move r1 r3 , move a 5 etc. And it is rather hard to study. So why was assembly language created? Or was it the one that came first, even before high-level languages? Why am I studying about assemblers in my computer engineering class? | "So why was assembly language created?" Assembly language was created as an exact shorthand for machine level coding, so that you wouldn't have to count 0s and 1s all day. It works the same as machine level code: with instructions and operands. "Which one came first?" Wikipedia has a good article about the History of Programming Languages . "Why am I studying about assemblers in my computer engineering class?" Though it's true, you probably won't find yourself writing your next
customer's app in assembly, there is still much to gain from learning
assembly. Today, assembly language is used primarily for direct
hardware manipulation, access to specialized processor instructions,
or to address critical performance issues. Typical uses are device
drivers, low-level embedded systems, and real-time systems. Assembly language is as close to the processor as you can get as a programmer
so a well designed algorithm is blazing -- assembly is great for speed
optimization. It's all about performance and efficiency. Assembly
language gives you complete control over the system's resources. Much
like an assembly line, you write code to push single values into
registers, deal with memory addresses directly to retrieve values or
pointers. (source: codeproject.com ) | {
"source": [
"https://cs.stackexchange.com/questions/13287",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/9094/"
]
} |
13,356 | Specifically: 1) A direct-mapped cache with 4096 blocks/lines in which each block has 8 32-bit words. How many bits are needed for the tag and index fields, assuming a 32-bit address? 2) Same question as 1) but for fully associative cache ? Correct me if I'm wrong, is it: tag bits = address bit length - exponent of index - exponent of offset? [Is the offset = 3 due to 2^3 = 8 or is it 5 from 2^5 = 32?] | The question as stated is not quite answerable. A word has been defined to be 32-bits. We need to know whether the system is "byte-addressable" (you can access an 8-bit chunk of data) or "word-addressable" (smallest accessible chunk is 32-bits) or even "half-word addressable" (the smallest chunk of data you can access is 16-bits.) You need to know this to know what the lowest-order bit of an address is telling you. Then you work from the bottom up. Let's assume the system is byte addressable. Then each cache block contains 8 words * (4 bytes/word) = 32 = 2^5 bytes, so the offset is 5 bits. The number of index bits for a direct-mapped cache is determined by the number of blocks in the cache (12 bits in this case, because 2^12 = 4096). Then the tag is all the bits that are left, as you have indicated. As the cache gets more associative but stays the same size there are fewer index bits and more tag bits. | {
"source": [
"https://cs.stackexchange.com/questions/13356",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/9161/"
]
} |
13,625 | I'm trying to understand algorithm complexity, and a lot of algorithms are classified as polynomial. I couldn't find an exact definition anywhere. I assume it is the complexity that is not exponential. Do linear/constant/quadratic complexities count as polynomial? An answer in simple English will be appreciated :) | First, consider a Turing machine as a model (you can use other models too as long as they are Turing equivalent) of the algorithm at hand. When you provide an input of size $n$ , then you can think of the computation as the sequence of the machine's configurations after each step, i.e., $c_0, c_1, \ldots$ . Hopefully, the computation is finite, so there is some $t$ such that the computation is $c_0, c_1, \ldots, c_t$ . Then $t$ is the running time of the given algorithm for an input of size $n$ . An algorithm is polynomial (has polynomial running time) if for some $k,C>0$ , its running time on inputs of size $n$ is at most $Cn^k$ . Equivalently, an algorithm is polynomial if for some $k>0$ , its running time on inputs of size $n$ is $O(n^k)$ . This includes linear, quadratic, cubic and more. On the other hand, algorithms with exponential running times are not polynomial. There are things in between - for example, the best known algorithm for factoring runs in time $O(\exp(Cn^{1/3} \log^{2/3} n))$ for some constant $C > 0$ ; such a running time is known as sub-exponential . Other algorithms could run in time $O(\exp(A\log^C n))$ for some $A > 0$ and $C > 1$ , and these are known as quasi-polynomial . Such an algorithm has very recently been claimed for discrete log over small characteristics. | {
"source": [
"https://cs.stackexchange.com/questions/13625",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/9538/"
]
} |
13,669 | It seems that on this site, people will often correct others for confusing "algorithms" and "problems." What are the difference between these? How do I know when I should be considering algorithms and considering problems? And how do these relate to the concept of a language in formal language theory? | For simplicity, I'll begin by only considering "decision" problems, which have a yes/no answer. Function problems work roughly the same way, except instead of yes/no, there is a specific output word associated with each input word. Language : a language is simply a set of strings. If you have an alphabet, such as $\Sigma$ , then $\Sigma^*$ is the set of all words containing only the symbols in $\Sigma$ .
For example, $\{0,1 \}^*$ is the set of all binary sequences of any length.
An alphabet doesn't need to be binary, though. It can be unary, ternary, etc. A language over an alphabet $\Sigma$ is any subset of $\Sigma^*$ . Problem : A problem is some question about some input we'd like answered. Specifically, a decision problem is a question which asks, "Does our given input fulfill property $X$ ? A language is the formal realization of a problem. When we want to reason theoretically about a decision problem, we often examine the corresponding language.
For a decision problem $X$ , the corresponding language is: $L = \{w \mid w$ is the encoding of an input $y$ to problem $X$ ,
and the answer to input $y$ for problem $X$ is "Yes" $ \}$ Determining if the answer for an input to a decision problem is "yes" is equivalent to determining whether an encoding of that input over an alphabet is in the corresponding language. Algorithm : An algorithm is a step-by-step way to solve a problem. Note that there an algorithm can be expressed in many ways and many languages, and that there are many different algorithms solving any given problem. Turing Machine : A Turing Machine is the formal analogue of an algorithm. A Turing Machine over a given alphabet, for each word, either will or won't halt in an accepting state. Thus for each Turing Machine $M$ , there is a corresponding language: $L(M) = \{w \mid M$ halts in an accepting state on input $w\}$ . (There's a subtle difference between Turing Machines that halt on all inputs and halt on yes inputs, which defines the difference between complexity classes $\mathsf{R}$ and $\mathsf{RE}$ .) The relationship between languages and Turing Machines is as follows Every Turing Machine accepts exactly one language There may be more than one Turing Machine that accept a given language There may be no Turing Machine that accepts a given language. We can say roughly the same thing about algorithms and problems: every algorithm solves a single problem, but there may be 0, or many, algorithms solving a given problem. Time Complexity : One of the most common sources of confusion between algorithms and problems is in regards to complexity classes. The correct allocation can be summarized as follows: An algorithm has a time complexity A problem belongs to a complexity class An algorithm can have a certain time complexity. We say an algorithm has a worst-case upper-bounded complexity $f(n)$ if the algorithm halts in at most $f(n)$ steps for any input of size $n$ . Problems don't have run-times, since a problem isn't tied to a specific algorithm which actually runs. Instead, we say that a problem belongs to a complexity class, if there exists some algorithm solving that problem with a given time complexity. $\mathsf{P}, \mathsf{NP}, \mathsf{PSPACE}, \mathsf{EXPTIME}$ etc. are all complexity classes. This means they contain problems, not algorithms. An algorithm can never be in $\mathsf{P}$ , but if there's a polynomial-time algorithm solving a given problem $X$ , then $X$ can be classified in complexity class $\mathsf{P}$ . There could also be a bunch of other algorithms runs in different time complexity will also be able to solve the problem with the same input size under different time complexity, i.e. exponential-time algorithms, but since there already exists a single polynomial-time algorithm accepting $X$ , it is in $\mathsf{P}$ . | {
"source": [
"https://cs.stackexchange.com/questions/13669",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/2253/"
]
} |
13,675 | While reading the paper Holistic twig joins: optimal XML pattern matching I came across the pseudo code for liststack algorithm. (available through google scholar) A function in the algorithm confused me, since I can understand what it is supposed to do, but can't deconstruct the notation: $\qquad \mathtt{Function}\ \mathtt{end}(q\mathtt) \\
\qquad\qquad \mathtt{return}\ \forall q_i \in \mathtt{subtreeNodes}(q) : \mathtt{isLeaf}(q_i) \implies \mathtt{eof}(T_{q_i})$ This function is supposed to return a single boolean result. It is supposed to be true when all lists associated to leaf nodes of a query pattern node are at their end. So true means there are no more nodes in the query pattern to process. But what is the meaning of the set builder(ish?) notation here? Is it $\qquad$ "for all subtree nodes of $q$ for which $\mathtt{isLeaf}(q_i)$ is true $\mathtt{eof}(T_{q_i})$ is also true" (which means that the list is at the end position)? Or is it $\qquad$ "for all subtree nodes of $q$, $\mathtt{isLeaf}(q_i)$ implies $\mathtt{eof}(T_{q_i})$ is true"? Is double arrow representing implies with its truth table? As you can see, I'm having a bit difficulty in associating the colon and its precedence. | For simplicity, I'll begin by only considering "decision" problems, which have a yes/no answer. Function problems work roughly the same way, except instead of yes/no, there is a specific output word associated with each input word. Language : a language is simply a set of strings. If you have an alphabet, such as $\Sigma$ , then $\Sigma^*$ is the set of all words containing only the symbols in $\Sigma$ .
For example, $\{0,1 \}^*$ is the set of all binary sequences of any length.
An alphabet doesn't need to be binary, though. It can be unary, ternary, etc. A language over an alphabet $\Sigma$ is any subset of $\Sigma^*$ . Problem : A problem is some question about some input we'd like answered. Specifically, a decision problem is a question which asks, "Does our given input fulfill property $X$ ? A language is the formal realization of a problem. When we want to reason theoretically about a decision problem, we often examine the corresponding language.
For a decision problem $X$ , the corresponding language is: $L = \{w \mid w$ is the encoding of an input $y$ to problem $X$ ,
and the answer to input $y$ for problem $X$ is "Yes" $ \}$ Determining if the answer for an input to a decision problem is "yes" is equivalent to determining whether an encoding of that input over an alphabet is in the corresponding language. Algorithm : An algorithm is a step-by-step way to solve a problem. Note that there an algorithm can be expressed in many ways and many languages, and that there are many different algorithms solving any given problem. Turing Machine : A Turing Machine is the formal analogue of an algorithm. A Turing Machine over a given alphabet, for each word, either will or won't halt in an accepting state. Thus for each Turing Machine $M$ , there is a corresponding language: $L(M) = \{w \mid M$ halts in an accepting state on input $w\}$ . (There's a subtle difference between Turing Machines that halt on all inputs and halt on yes inputs, which defines the difference between complexity classes $\mathsf{R}$ and $\mathsf{RE}$ .) The relationship between languages and Turing Machines is as follows Every Turing Machine accepts exactly one language There may be more than one Turing Machine that accept a given language There may be no Turing Machine that accepts a given language. We can say roughly the same thing about algorithms and problems: every algorithm solves a single problem, but there may be 0, or many, algorithms solving a given problem. Time Complexity : One of the most common sources of confusion between algorithms and problems is in regards to complexity classes. The correct allocation can be summarized as follows: An algorithm has a time complexity A problem belongs to a complexity class An algorithm can have a certain time complexity. We say an algorithm has a worst-case upper-bounded complexity $f(n)$ if the algorithm halts in at most $f(n)$ steps for any input of size $n$ . Problems don't have run-times, since a problem isn't tied to a specific algorithm which actually runs. Instead, we say that a problem belongs to a complexity class, if there exists some algorithm solving that problem with a given time complexity. $\mathsf{P}, \mathsf{NP}, \mathsf{PSPACE}, \mathsf{EXPTIME}$ etc. are all complexity classes. This means they contain problems, not algorithms. An algorithm can never be in $\mathsf{P}$ , but if there's a polynomial-time algorithm solving a given problem $X$ , then $X$ can be classified in complexity class $\mathsf{P}$ . There could also be a bunch of other algorithms runs in different time complexity will also be able to solve the problem with the same input size under different time complexity, i.e. exponential-time algorithms, but since there already exists a single polynomial-time algorithm accepting $X$ , it is in $\mathsf{P}$ . | {
"source": [
"https://cs.stackexchange.com/questions/13675",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/9580/"
]
} |
13,785 | As a software engineer, I write a lot of code for industrial products. Relatively complicated stuff with classes, threads, some design efforts, but also some compromises for performance. I do a lot of testing, and I am tired of testing, so I got interested in formal proof tools, such as Coq, Isabelle... Could I use one of these to formally prove that my code is bug-free and be done with it? - but each time I check out one of these tools, I walk away unconvinced that they are usable for everyday software engineering. Now, that could only be me, and I am looking for pointers/opinions/ideas about that :-) Specifically, I get the impression that to make one of these tools work for me would require a huge investment to properly define to the prover the objects, methods... of the program under consideration. I then wonder if the prover wouldn't just run out of steam given the size of everything it would have to deal with. Or maybe I would have to get rid of side-effects (those prover tools seem to do really well with declarative languages), and I wonder if that would result in "proven code" that could not be used because it would not be fast or small enough. Also, I don't have the luxury of changing the language I work with, it needs to be Java or C++: I can't tell my boss I'm going to code in OXXXml from now on, because it's the only language in which I can prove the correctness of the code... Could someone with more experience of formal proof tools comment? Again - I would LOVE to use a formal prover tool, I think they are great, but I have the impression they are in an ivory tower that I can't reach from the lowly ditch of Java/C++... (PS: I also LOVE Haskell, OCaml... don't get the wrong idea: I am a fan of declarative languages and formal proof, I am just trying to see how I could realistically make that useful to software engineering) Update: Since this is fairly broad, let's try the following more specific questions: 1) are there examples of using provers to prove correctness of industrial Java/C++ programs? 2) Would Coq be suitable for that task? 3) If Coq is suitable, should I write the program in Coq first, then generate C++/Java from Coq? 4) Could this approach handle threading and performance optimizations? | I'll try to give a succinct answer to some of your questions. Please bear in mind that this is not strictly my field of research, so some of my info may be outdated/incorrect. There are many tools that are specifically designed to formally prove properties of Java and C++. However I need to make a small digression here: what does it mean to prove correctness of a program? The Java type checker proves a formal property of a Java program, namely that certain errors, like adding a float and an int , can never occur! I imagine you are interested in much stronger properties, namely that your program can never enter into an unwanted state, or that the output of a certain function conforms to a certain mathematical specification. In short, there is a wide gradient of what "proving a program correct" can mean, from simple security properties to a full proof that the program fulfills a detailed specification. Now I'm going to assume that you are interested in proving strong properties about your programs. If you are interested in security properties (your program can not reach a certain state), then in general it seems the best approach is model checking . 
However if you wish to fully specify the behavior of a Java program, your best bet is to use a specification language for that language, for instance JML . There are such languages for specifying the behavior of C programs, for instance ACSL , but I don't know about C++. Once you have your specifications, you need to prove that the program conforms to that specification. For this you need a tool that has a formal understanding of both your specification and the operational semantics of your language (Java or C++) in order to express the adequacy theorem , namely that the execution of the program respects the specification. This tool should also allow you to formulate or generate the proof of that theorem. Now both of these tasks (specifying and proving) are quite difficult, so they are often separated in two: One tool that parses the code, the specification and generates the adequacy theorem. As Frank mentioned, Krakatoa is an example of such a tool. One tool that proves the theorem(s), automatically or interactively. Coq interacts with Krakatoa in this manner, and there are some powerful automated tools like Z3 which can also be used. One (minor) point: there are some theorems which are much too hard to be proven with automated methods, and automatic theorem provers are known to occasionally have soundness bugs which make them less trustworthy. This is an area where Coq shines in comparison (but it is not automatic!). If you want to generate Ocaml code, then definitely write in Coq (Gallina) first, then extract the code. However, Coq is terrible at generating C++ or Java, if it is even possible. Can the above tools handle threading and performance issues? Probably not, performance and threading concerns are best handled by specifically designed tools, as they are particularly hard problems. I'm not sure I have any tools to recommend here, though Martin Hofmann's PolyNI project seems interesting. In conclusion: formal verification of "real world" Java and C++ programs is a large and well-developed field, and Coq is suitable for parts of that task. You can find a high-level overview here for example. | {
"source": [
"https://cs.stackexchange.com/questions/13785",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/9667/"
]
} |
14,456 | I know how to code for factorials using both iterative and recursive (e.g. n * factorial(n-1) for e.g.). I read in a textbook (without been given any further explanations) that there is an even more efficient way of coding for factorials by dividing them in half recursively. I understand why that may be the case. However I wanted to try coding it on my own, and I don't think I know where to start though. A friend suggested I write base cases first. and I was thinking of using arrays so that I can keep track of the numbers... but I really can't see any way out to designing such a code. What kind of techniques should I be researching? | The best algorithm that is known is to express the factorial as a product of prime powers. One can quickly determine the primes as well as the right power for each prime using a sieve approach. Computing each power can be done efficiently using repeated squaring, and then the factors are multiplied together. This was described by Peter B. Borwein, On the Complexity of Calculating Factorials , Journal of Algorithms 6 376–380, 1985. ( PDF ) In short, $n!$ can be computed in $O(n(\log n)^3\log \log n)$ time, compared to the $\Omega(n^2 \log n)$ time required when using the definition. What the textbook perhaps meant was the divide-and-conquer method. One can reduce the $n-1$ multiplications by using the regular pattern of the product. Let $n?$ denote $1 \cdot 3 \cdot 5 \dotsm (2n-1)$ as a convenient notation.
Rearrange the factors of $(2n)! = 1 \cdot 2 \cdot 3 \dotsm (2n)$ as
$$(2n)! = n! \cdot 2^n \cdot 3 \cdot 5 \cdot 7 \dotsm (2n-1).$$
Now suppose $n = 2^k$ for some integer $k>0$.
(This is a useful assumption to avoid complications in the following discussion, and the idea can be extended to general $n$.)
Then $(2^k)! = (2^{k-1})!2^{2^{k-1}}(2^{k-1})?$ and by expanding this recurrence,
$$(2^k)! = \left(2^{2^{k-1}+2^{k-2}+\dots+2^0}\right) \prod_{i=0}^{k-1} (2^i)? = \left(2^{2^k - 1}\right) \prod_{i=1}^{k-1} (2^i)?.$$
Computing $(2^{k-1})?$ and multiplying the partial products at each stage takes $(k-2) + 2^{k-1} - 2$ multiplications. This is an improvement of a factor of nearly $2$ from $2^k-2$ multiplications just using the definition. Some additional operations are required to compute the power of $2$, but in binary arithmetic this can be done cheaply (depending on what precisely is required, it may just require adding a suffix of $2^k-1$ zeroes). The following Ruby code implements a simplified version of this. This does not avoid recomputing $n?$ even where it could do so: def oddprod(l,h)
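  # product of all the odd numbers in the interval [l, h]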
p = 1
ml = (l%2>0) ? l : (l+1)
mh = (h%2>0) ? h : (h-1)
while ml <= mh do
p = p * ml
ml = ml + 2
end
p
end
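# (2^k)! assembled as 2^(2^k - 1) times the odd products (2^i)? for i = 1 .. k-1, per the derivation above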
def fact(k)
f = 1
for i in 1..k-1
f *= oddprod(3, 2 ** (i + 1) - 1)
end
2 ** (2 ** k - 1) * f
end
print fact(15) Even this first-pass code improves on the trivial f = 1; (1..32768).map{ |i| f *= i }; print f by about 20% in my testing. With a bit of work, this can be improved further, also removing the requirement that $n$ be a power of $2$ (see the extensive discussion ). | {
"source": [
"https://cs.stackexchange.com/questions/14456",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/7307/"
]
} |
14,674 | What would be the best introduction to Per Martin-Löfs ideas about type theory? I've looked at some lectures from the Oregon PL summer school, but I'm still sort of puzzled by the following question: What is a type? I know what a set is, since you can define them by the usual ZF axioms and they have a very intuitive concrete model; just think of a basket filled with stuff. However, I've yet to see a reasonable definition of a type and I was wondering if there is some source that would distill this idea for dummy. | A type is a property of computations. It's what you write on the right-hand side of a colon. Let me elaborate on that. Note that the terminology isn't completely standard: some articles or books may use different words for certain concepts. A term is an element of an abstract syntax that is intended to represent computation. Intuitively, it's a parse tree. Formally, it's a finite tree where the nodes belong to some alphabet. An untyped calculus defines a syntax for terms. For example, the (untyped) lambda calculus contains terms (written $M$, $N$, etc.) built from three types of nodes: variables, of arity 0 (a denumerable collection thereof), written $x$, $y$, etc.; application of a variable, of arity 1 (a denumerable collection thereof, with a bijection to variables), written $\lambda x. M$, etc.; application, of arity 2, written $M \, N$. A term is a syntactic construction. A semantics relates terms to computations. There are many types of semantics, the most common being operational (describing how terms can be transformed into other terms) or denotational (describing terms by a transformation into another space, usually built from set theory). A type is a property of terms. A type system for an untyped calculus describes which terms have which types. Mathematically, at the core, a type system is a relation between terms and types. More accurately, a type system is a family of such relations, indexed by contexts — typically, a context provides at least types for variables (i.e. a context is a partial function from variables to types), such that a term may only have a type in contexts that provide a type for all its free variables. What kind of mathematical object a type is depends on the type system. Some type systems are described with types as sets, using notions of set theory such as intersection, union and comprehension. This has the advantage of resting upon familiar mathematical foundations. A limitation of this approach is that it doesn't allow reasoning about equivalent types. Many type systems describe types themselves as terms in a calculus of types. Depending on the type system, these may be the same terms or different terms. I'll use the phrase base term to refer to a term of the calculus that describes computation. For example, the simply typed lambda calculus uses the following calculus of types (written $\tau$, etc.): base types, of arity 0 (a finite or denumerable collection thereof), written $A$, $B$, etc.; function, of arity 2, written $\tau_0 \rightarrow \tau_1$. The relation between terms and types that defines the simply typed lambda calculus is usually defined by typing rules . Typing rules are not the only way to define a type system, but they are common. They work well for compositional type systems, i.e. type systems where the type(s) of a term is built from the types of subterms. 
Typing rules define a type system inductively: each typing rule is an axiom that states that for any instantiation of the formulas above the horizontal rule, the formula below the rule is also true. See How to read typing rules? for more details. Does there exist a Turing complete typed lambda calculus? may also be of interest. For the simply typed lambda calculus, the typing judgement $\Gamma \vdash M : \tau$ means that $M$ has the type $\tau$ in the context $\Gamma$. I've omitted the formal definition of contexts.
$$
\dfrac{x:\tau \in \Gamma}{\Gamma \vdash x : \tau}(\Gamma)
\qquad
\dfrac{\Gamma, x:\tau_0 \vdash M : \tau_1}{\Gamma \vdash \lambda x.M : \tau_0 \rightarrow \tau_1}(\mathord{\rightarrow}I)
\qquad
\dfrac{\Gamma \vdash M : \tau_0 \rightarrow \tau_1 \quad \Gamma \vdash N : \tau_0}{\Gamma \vdash M\,N : \tau_1}(\mathord{\rightarrow}E)
$$ For example, if $A$ and $B$ are based types, then $\lambda x. \lambda y. x\,y$ has the type $(A \rightarrow B) \rightarrow A \rightarrow B$ in any context (from bottom to top, apply $(\mathord{\rightarrow}I)$ twice, then $(\mathord{\rightarrow}E)$, and finally $(\Gamma)$ on each branch). It is possible to interpret the types of the simply typed lambda calculus as sets. This amounts to giving a denotational semantics for the types. A good denotational semantics for the base terms would assign to each base term a member of the denotation of all of its types. Intuitionistic type theory (also known as Martin-Löf type theory) is more complex that simply typed lambda calculus, as it has many more elements in the calculus of types (and also adds a few constants to the base terms). But the core principles are the same. An important feature of Martin-Löf type theory is that types can contain base terms (they are dependent types ): the universe of base terms and the universe of types are the same, though they can be distinguished by simple syntactic rules (usually known as sorting, i.e. assigning sorts to terms, in rewriting theory). There are type systems that go further and completely mix types and base terms, so that there is no distinction between the two. Such type systems are said to be higher-order . In such calculi, types have types — a type can appear on the left-hand side of the $:$. The calculus of construction is the paradigm of higher-order dependent types. The lambda cube (also known as Barendregt cube) classifies type systems in terms of whether they allow terms to depend on types ( polymorphism — some base terms contain types as subterms), types to depend on terms (dependent types), or types to depend on types ( type operators — the calculus of types has a notion of computation). Most type systems have been given set-theoretical semantics, to tie them with the usual foundations of mathematics. How are programming languages and foundations of mathematics related? and What is the difference between the semantic and syntactic views of function types? may be of interest here. There has also been work on using type theory as a foundation of mathematics — set theory is the historic foundation, but it is not the only possible choice. Homotopy type theory is an important milestone in this direction: it describes the semantics of intentional intuitionistic type theory in terms of homotopy theory and constructs set theory in this framework. I recommend Benjamin Pierce's books Types and Programming Languages and Advances Topics in Types and Programming Languages . They are accessible to any undergraduate with no prerequisite other than basic familiarity with formal mathematical reasoning. TAPL describes many type systems; dependent types are the subject of chapter 2 of ATTAPL. | {
"source": [
"https://cs.stackexchange.com/questions/14674",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/10394/"
]
} |
14,733 | Using the following recursive Fibonacci algorithm: def fib(n):
if n==0:
return 0
elif n==1:
return 1
return (fib(n-1)+fib(n-2)) If I input the number 5 to find fib(5), I know this will output 5, but how do I examine the complexity of this algorithm? How do I calculate the steps involved? | Most of the time, you can represent recursive algorithms using recurrence equations. In this case the recurrence for this algorithm is $T(n) = T(n-1) + T(n-2) + \Theta(1)$. Then you can find the closed form of the equation using the substitution method or the expansion method (or any other method used to solve recurrences). In this case you get $T(n) = \Theta(\phi^n)$, where $\phi$ is the golden ratio ($\phi = \frac{(1 + \sqrt{5})}{2}$). If you want to find out more about how to solve recurrences I strongly recommend reading chapter 4 of Introduction to Algorithms . | {
"source": [
"https://cs.stackexchange.com/questions/14733",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/7173/"
]
} |
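To see the recurrence from the answer above in action, here is a small sketch (my own, for illustration only) that counts the number of calls the naive fib makes; successive counts grow by roughly a factor of $\phi \approx 1.618$ per step, matching $T(n) = \Theta(\phi^n)$.
def fib_calls(n):
    # Returns (fib(n), number of calls made), mirroring T(n) = T(n-1) + T(n-2) + Theta(1).
    if n < 2:
        return n, 1
    a, ca = fib_calls(n - 1)
    b, cb = fib_calls(n - 2)
    return a + b, ca + cb + 1

for n in range(5, 30, 5):
    value, calls = fib_calls(n)
    print(n, value, calls)   # the call counts grow roughly like phi**n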
14,739 | One naive approach to solving the multiple pattern matching problem is to call a single pattern matching procedure on each of the patterns. There must be some drawbacks in this approach, given the variety of multiple pattern matching algorithms such as the Aho-Corasick algorithm, which prove to be more efficient. So what are the drawbacks of this straightforward yet naive approach? In what scenario is this algorithm doing unnecessary work? | Most of the times, you can represent the recursive algorithms using recursive equations. In this case the recursive equation for this algorithm is $T(n) = T(n-1) + T(n-2) + \Theta(1)$. Then you can find the closed form of the equation using the substitution method or the expansion method (or any other method used to solve recurrences). In this case you get $T(n) = \Theta(\phi^n)$, where $\phi$ is the golden ratio ($\phi = \frac{(1 + \sqrt{5})}{2}$). If you want to find out more about how to solve recurrences I strongly recommend you to read chapter 4 of Introduction to Algorithms . | {
"source": [
"https://cs.stackexchange.com/questions/14739",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/4662/"
]
} |
15,017 | Since buying computation power is much affordable than in the past, are the knowledge of algorithms and being efficient getting less important? It's clear that you would want to avoid an infinite loop, so, not everything goes. But if you have better hardware, could you have somehow worse software? | I really like the example from Introduction to Algorithms book, which illustrates significance of algorithm efficiency: Let's compare two sorting algorithms: insertion sort and merge sort . Their complexity is $O(n^2) = c_1n^2$ and $O(n\log n) = c_2n \lg n$ respectively. Typically merge sort has a bigger constant factor, so let's assume $c_1 < c_2$. To answer your question, we evaluate execution time of a faster computer (A) running insertion sort algorithm against slower computer (B) running merge sort algorithm. We assume: the size of input problem is 10 million numbers: $n=10^7$; computer A executes $10^{10}$ instructions per second (~ 10GHz); computer B executes only $10^7$ instructions per second (~ 10MHz); the constant factors are $c_1=2$ (what is slightly overestimated) and $c_2=50$ (in reality is smaller). So with these assumptions it takes $$
\frac{2 \cdot (10^7)^2 \text{ instructions}}
{10^{10} \text{ instructions}/\text{second}}
= 2 \cdot 10^4 \text{ seconds}
$$
for the computer A to sort $10^7$ numbers and $$
\frac{50 \cdot 10^7 \lg 10^7 \text{ instructions}}
{10^{7} \text{ instructions}/\text{second}} \approx 1163 \text{ seconds}$$ for the computer B. So the computer, which is 1000 times slower, can solve the problem 17 times faster. In reality the advantage of merge sort will be even more significant and increasing with the size of the problem. I hope this example helps to answer your question. However, this is not all about algorithm complexity. Today it is almost impossible to get a significant speedup just by the use of the machine with higher CPU frequency. People need to design algorithms for multi-core systems that scale well. This is also a tricky task, because with the increase of cores, an overhead (for managing memory accesses, for instance) increases as well. So it's nearly impossible to get a linear speedup. So to sum up, the design of efficient algorithms today is equally important as before, because neither frequency increase nor extra cores will give you the speedup compared to the one brought by the efficient algorithm. | {
"source": [
"https://cs.stackexchange.com/questions/15017",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/10637/"
]
} |
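The arithmetic in the answer above is easy to reproduce; a quick sketch using the same assumed constants ($c_1 = 2$, $c_2 = 50$) and machine speeds:
from math import log2

n = 10 ** 7                             # ten million numbers
speed_A, speed_B = 10 ** 10, 10 ** 7    # instructions per second
c1, c2 = 2, 50                          # assumed constant factors

t_insertion_on_A = c1 * n ** 2 / speed_A      # 20000.0 seconds on the fast machine
t_merge_on_B = c2 * n * log2(n) / speed_B     # about 1163 seconds on the slow machine
print(t_insertion_on_A, t_merge_on_B, t_insertion_on_A / t_merge_on_B)   # ratio ~17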
16,092 | I am pursuing a BS in Computer Science, but I am at an early point of it, and I am pretty sure I will be happy with my choice given that it seems like an academically and career flexible education to pursue. Having said that, there seems to be a variety of definitions about what Computer Science really is in respects to academia, the private-sector, and the actual "Science" in "Computer Science" I would love to have answers(Or shared pondering) as to the breadth of things an education in Computer Science can be applied to, and ultimately the variety of paths those within Computer Science have pursued. | Computer science is a misnomer - there is actually no "science" in computer science, since computer science is not about observing nature. Rather, parts of computer science are engineering , and parts are mathematics . The more theoretical parts of computer science are purely mathematical. For example, what is a good algorithm for sorting? How do we define the semantics of programming languages? How can we be sure that a cryptographic system is secure? When computer science gets applied, it becomes more like engineering. For example, what is the best way to implement a matrix multiplication algorithm? How should we design a computer language to facilitate writing large programs? How can we design a cryptographic system to protect online banking? In contrast, science is about laws of nature , and more generally about natural phenomena . The phenomena involved in computer science are man-made. Some aspects of computer science can be viewed as experimental in this sense, for example the empirical study of social networks, the empirical study of computer networks, the empirical study of viruses and their spread, and computer education (both teaching computer science and using computers to teach other subjects). Most of these examples are border-line computer science, and are more properly multidisciplinary. The closest one gets to the scientific method in computer science is perhaps the study of networks and other hardware devices, which is mainstream in the subarea known unofficially as "systems". These examples notwithstanding, most of the core of computer science is not science at all. Computer science is just a name - it doesn't need to make sense. As for the scope of computer science, the best definitions is perhaps: that which computer scientists do. Computer science, like every other academic discipline, is a wide area, and it is difficult to chart completely. If you want a sampling of what people consider computer science, you can look at the research areas of your faculty. | {
"source": [
"https://cs.stackexchange.com/questions/16092",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/-1/"
]
} |
16,226 | I want to know which algorithm is fastest for multiplication of two n-digit numbers?
Space complexity can be relaxed here! | As of now Fürer's algorithm by Martin Fürer has a time complexity of $n \log(n)2^{Θ(log*(n))}$ which uses Fourier transforms over complex numbers. His algorithm is actually based on Schönhage and Strassen's algorithm which has a time complexity of $Θ(n\log(n)\log(\log(n)))$ Other algorithms which are faster than Grade School Multiplication algorithm are Karatsuba multiplication which has a time complexity of $O(n^{\log_{2}3})$ ≈ $O(n^{1.585})$ and Toom 3 algorithm which has a time complexity of $Θ(n^{1.465})$ Note that these are the fast algorithms. Finding fastest algorithm for multiplication is an open problem in Computer Science. References : Fürer's algorithm FFT based multiplication of large numbers Fast Fourier transform Toom–Cook multiplication Schönhage–Strassen algorithm Karatsuba algorithm | {
"source": [
"https://cs.stackexchange.com/questions/16226",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/10700/"
]
} |
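As a companion to the answer above, Karatsuba's $O(n^{\log_2 3})$ method is short enough to sketch. The decimal base and the choice of split point below are just one convenient variant, not the only possible one.
def karatsuba(x, y):
    # Multiply two non-negative integers using three recursive multiplications
    # instead of four, which gives the O(n^1.585) digit-operation bound.
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

print(karatsuba(1234, 5678), 1234 * 5678)   # both print 7006652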
16,230 | I'm writing a paper on the topic of applications affected more by memory performance than processor performance. I've got a lot written regarding the gap between the two, however I can't seem to find anything about the applications that might benefit more from memory performance. I suppose these are applications that make a large amount of memory references, but I have no idea what kind of applications would make such large number of references to make it stand out? Can you please give me any pointers on how to proceed, some links? | As of now Fürer's algorithm by Martin Fürer has a time complexity of $n \log(n)2^{Θ(log*(n))}$ which uses Fourier transforms over complex numbers. His algorithm is actually based on Schönhage and Strassen's algorithm which has a time complexity of $Θ(n\log(n)\log(\log(n)))$ Other algorithms which are faster than Grade School Multiplication algorithm are Karatsuba multiplication which has a time complexity of $O(n^{\log_{2}3})$ ≈ $O(n^{1.585})$ and Toom 3 algorithm which has a time complexity of $Θ(n^{1.465})$ Note that these are the fast algorithms. Finding fastest algorithm for multiplication is an open problem in Computer Science. References : Fürer's algorithm FFT based multiplication of large numbers Fast Fourier transform Toom–Cook multiplication Schönhage–Strassen algorithm Karatsuba algorithm | {
"source": [
"https://cs.stackexchange.com/questions/16230",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/10855/"
]
} |
16,266 | Say you have two polynomials: $3 + x$ and $2x^2 + 2$. I'm trying to understand how FFT helps us multiply these two polynomials. However, I can't find any worked out examples. Can someone show me how FFT algorithm would multiply these two polynomials. (Note: there is nothing special about these polynomials, but I wanted to keep it simple to make it easier to follow.) I've looked at the algorithms in pseudocode, but all of them seem to be have problems (don't specify what the input should be, undefined variables). And surprisingly, I can't find where anyone has actually walked through (by hand) an example of multiplying polynomials using FFT. | Suppose we use fourth roots of unity, which corresponds to substituting $1,i,-1,-i$ for $x$ . We also use decimation-in-time rather than decimation-in-frequency in the FFT algorithm. (We also apply a bit-reversal operation seamlessly.) In order to compute the transform of the first polynomial, we start by writing the coefficients: $$ 3,1,0,0. $$ The Fourier transform of the even coefficients $3,0$ is $3,3$ , and of the odd coefficients $1,0$ is $1,1$ . (This transform is just $a,b \mapsto a+b,a-b$ .) Therefore the transform of the first polynomial is $$ 4,3+i,2,3-i. $$ This is obtained using $X_{0,2} = E_0 \pm O_0$ , $X_{1,3} = E_1 \mp i O_1$ . ( From twiddle factor calculation ). Let's do the same for the second polynomial. The coefficients are $$2,0,2,0.$$ The even coefficients $2,2$ transform to $4,0$ , and the odd coefficients $0,0$ transform to $0,0$ . Therefore the transform of the second polynomial is $$ 4,0,4,0. $$ We obtain the Fourier transform of the product polynomial by multiplying the two Fourier transforms pointwise: $$ 16, 0, 8, 0. $$ It remains to compute the inverse Fourier transform. The even coefficients $16,8$ inverse-transform to $12,4$ , and the odd coefficients $0,0$ inverse-transform to $0,0$ . (The inverse transform is $x,y \mapsto (x+y)/2,(x-y)/2$ .) Therefore the transform of the product polynomial is $$6,2,6,2.$$ This is obtained using $X_{0,2} = (E_0 \pm O_0)/2$ , $X_{1,3} = (E_1 \mp i O_1)/2$ .
We have obtained the desired answer $$ (3 + x)(2 + 2x^2) = 6+2x+6x^2+2x^3. $$ | {
"source": [
"https://cs.stackexchange.com/questions/16266",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/10599/"
]
} |
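The worked example above can be checked mechanically. The sketch below uses a direct $O(n^2)$ DFT rather than a real FFT (the transform is the same, only the evaluation strategy differs) and reproduces the coefficients $6, 2, 6, 2$.
import cmath

def dft(coeffs):
    # Evaluate the polynomial at the n-th roots of unity (the forward transform).
    n = len(coeffs)
    return [sum(c * cmath.exp(2j * cmath.pi * i * k / n) for i, c in enumerate(coeffs))
            for k in range(n)]

def idft(values):
    # Interpolate back (inverse transform); note the sign flip and the 1/n factor.
    n = len(values)
    return [sum(v * cmath.exp(-2j * cmath.pi * i * k / n) for i, v in enumerate(values)) / n
            for k in range(n)]

p = [3, 1, 0, 0]   # 3 + x, padded to length 4
q = [2, 0, 2, 0]   # 2 + 2x^2, padded to length 4
pointwise = [a * b for a, b in zip(dft(p), dft(q))]     # [16, 0, 8, 0] up to rounding
print([round(c.real) for c in idft(pointwise)])         # [6, 2, 6, 2]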
16,684 | I'm a fledgling computer science scholar, and I'm being asked to write a paper which involves integer factorization. As a result, I'm having to look into Shor's algorithm on quantum computers. For the other algorithms, I was able to find specific equations to calculate the number of instructions of the algorithm for a given input size (from which I could calculate the time required to calculate on a machine with a given speed). However, for Shor's algorithm, the most I can find is its complexity: O( (log N)^3 ) . Is there either some way I can find its speed/actual complexity from its Big-O Notation? If not, is there someone who can tell me what I want, or how to find it? | The best estimate I know of can be found in Efficient networks for quantum factoring , by David Beckman, Amalavoyal N. Chari, Srikrishna Devabhaktuni, and John Preskill, which gives $72 (\log N)^3$. Having said that, a straight comparison of number of steps on a quantum computer versus number of steps on a classical computer is problematic for various reasons. First, as D.W.'s answer says, the number of steps depends on the exact architecture of the quantum computer, which we won't have until one is built. Second, the time required for a single step on a quantum computer is likely to be quite a bit slower than a single step on a classical computer. 1 Again, we won't know how much slower until quantum computers exist. 1 If it was faster, you could use the same architecture to build a classical computer that would be at least as fast, and probably faster because for a classical computer, you don't need to worry about maintaining quantum coherence. | {
"source": [
"https://cs.stackexchange.com/questions/16684",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/11162/"
]
} |
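To turn the asymptotic bound into the kind of concrete count the question above asks for, one has to fix a constant. A rough sketch using the $72(\log N)^3$ estimate quoted in the answer, ignoring the caveats there about architecture and step duration:
bits = 2048                    # size in bits of the number N being factored
log_N = bits                   # log2(N) is about 2048 for a 2048-bit N
gate_count = 72 * log_N ** 3
print(f"about {gate_count:.2e} elementary quantum operations")   # ~6.2e11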
18,536 | I have come across many sorting algorithms during my high school studies. However, I never know which is the fastest (for a random array of integers). So my questions are: Which is the fastest currently known sorting algorithm? Theoretically, is it possible that there are even faster ones? So, what's the least complexity for sorting? | In general terms, there are the $O(n^2)$ sorting algorithms, such as insertion sort, bubble sort, and selection sort, which you should typically use only in special circumstances; Quicksort, which is worst-case $O(n^2)$ but quite often $O(n\log n)$ with good constants and properties and which can be used as a general-purpose sorting procedure; the $O(n\log n)$ algorithms, like merge-sort and heap-sort, which are also good general-purpose sorting algorithms; and the $O(n)$, or linear, sorting algorithms for lists of integers, such as radix, bucket and counting sorts, which may be suitable depending on the nature of the integers in your lists. If the elements in your list are such that all you know about them is the total order relationship between them, then optimal sorting algorithms will have complexity $\Omega(n\log n)$. This is a fairly cool result and one for which you should be able to easily find details online. The linear sorting algorithms exploit further information about the structure of elements to be sorted, rather than just the total order relationship among elements. Even more generally, optimality of a sorting algorithm depends intimately upon the assumptions you can make about the kind of lists you're going to be sorting (as well as the machine model on which the algorithm will run, which can make even otherwise poor sorting algorithms the best choice; consider bubble sort on machines with a tape for storage). The stronger your assumptions, the more corners your algorithm can cut. Under very weak assumptions about how efficiently you can determine "sortedness" of a list, the optimal worst-case complexity can even be $\Omega(n!)$. This answer deals only with complexities. Actual running times of implementations of algorithms will depend on a large number of factors which are hard to account for in a single answer. | {
"source": [
"https://cs.stackexchange.com/questions/18536",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/8870/"
]
} |
18,537 | I need to describing a Turing machine that computes $\lceil\log_{2}(n)\rceil$ I know that: n = 1, 2, 3, 4, 5, 6, 7, 8, ... f(n) = 0, 1, 2, 2, 3, 3, 3, 3, ... So I'm thinking of putting $n$ on the tape. Then keeping a count of how many times I multiply 2*2 until it is greater than than $n$. For example for n=5, 2*2*2=8, number of two's is 3 so then $f(n)$ is 3. I don't know how to translate this to the ticker tape of the Turing machine. But would something like this work? Put $n$ 1's on the tape followed by a 0. Compute 1^(2^1), then check if 1's on the left of the 0 on the tape is less than or equal to the 1's on the right of the 0. If its not then repeat it for 1^(2^(1)). It keeps doing this until the left side has less than or equal number of 1's. | In general terms, there are the $O(n^2)$ sorting algorithms, such as insertion sort, bubble sort, and selection sort, which you should typically use only in special circumstances; Quicksort, which is worst-case $O(n^2)$ but quite often $O(n\log n)$ with good constants and properties and which can be used as a general-purpose sorting procedure; the $O(n\log n)$ algorithms, like merge-sort and heap-sort, which are also good general-purpose sorting algorithms; and the $O(n)$, or linear, sorting algorithms for lists of integers, such as radix, bucket and counting sorts, which may be suitable depending on the nature of the integers in your lists. If the elements in your list are such that all you know about them is the total order relationship between them, then optimal sorting algorithms will have complexity $\Omega(n\log n)$. This is a fairly cool result and one for which you should be able to easily find details online. The linear sorting algorithms exploit further information about the structure of elements to be sorted, rather than just the total order relationship among elements. Even more generally, optimality of a sorting algorithm depends intimately upon the assumptions you can make about the kind of lists you're going to be sorting (as well as the machine model on which the algorithm will run, which can make even otherwise poor sorting algorithms the best choice; consider bubble sort on machines with a tape for storage). The stronger your assumptions, the more corners your algorithm can cut. Under very weak assumptions about how efficiently you can determine "sortedness" of a list, the optimal worst-case complexity can even be $\Omega(n!)$. This answer deals only with complexities. Actual running times of implementations of algorithms will depend on a large number of factors which are hard to account for in a single answer. | {
"source": [
"https://cs.stackexchange.com/questions/18537",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/11155/"
]
} |
18,797 | What is the difference between minimum spanning tree algorithm and a shortest path algorithm? In my data structures class we covered two minimum spanning tree algorithms (Prim's and Kruskal's) and one shortest path algorithm (Dijkstra's). Minimum spanning tree is a tree in a graph that spans all the vertices and total weight of a tree is minimal. Shortest path is quite obvious, it is a shortest path from one vertex to another. What I don't understand is since minimum spanning tree has a minimal total weight, wouldn't the paths in the tree be the shortest paths? Can anybody explain what I'm missing? Any help is appreciated. | Consider the triangle graph with unit weights - it has three vertices $x,y,z$, and all three edges $\{x,y\},\{x,z\},\{y,z\}$ have weight $1$. The shortest path between any two vertices is the direct path, but if you put all of them together you get a triangle rather than a tree. Every collection of two edges forms a minimum spanning tree in this graph, yet if (for example) you choose $\{x,y\},\{y,z\}$, then you miss the shortest path $\{x,z\}$. In conclusion, if you put all shortest paths together, you don't necessarily get a tree. | {
"source": [
"https://cs.stackexchange.com/questions/18797",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/10511/"
]
} |
From what I have learned, an asymptotically tight bound means that it is bounded from above and below, as in Theta notation.
But what does asymptotically tight upper bound mean for Big-O notation? | Saying that a big-O bound is "asymptotically tight" basically means that the author should have written $\Theta(-)$. For example, $O(x^2)$ means that it's no more than some constant times $x^2$ for all large enough $x$; "asymptotically tight" means it really is some constant times $x^2$ for large enough $x$ and not, say, some constant times $x^{1.999}$. | {
"source": [
"https://cs.stackexchange.com/questions/19141",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/11131/"
]
} |
19,151 | Define the language $L$ as $L = \{a, b\}^* - \{ww\mid w \in \{a, b\}^*\}$. In other words, $L$ contains the words that cannot be expressed as some word repeated twice. Is $L$ context-free or not? I've tried to intersect $L$ with $a^*b^*a^*b^*$, but I still can't prove anything. I also looked at Parikh's theorem, but it doesn't help. | It's context-free. Here's the grammar: $S \to A | B|AB|BA$ $A \to a|aAa|aAb|bAb|bAa$ $B \to b|aBa|aBb|bBb|bBa$ $A$ generates words of odd length with $a$ in the center. Same for $B$ and $b$. I'll present a proof that this grammar is correct. Let $L = \{a,b\}^* \setminus \{ww \mid w \in \{a,b\}^*\}$ (the language in the question). Theorem. $L = L(S)$. In other words, this grammar generates the language in the question. Proof. This certainly holds for all odd-length words, since this grammar generates all odd-lengths words, as does $L$. So let's focus on even-length words. Suppose $x \in L$ has even length. I'll show that $x \in L(G)$. In particular, I claim that $x$ can be written in the form $x=uv$, where both $u$ and $v$ have odd length and have different central letters. Thus $x$ can be derived from either $AB$ or $BA$ (according to whether $u$'s central letter is $a$ or $b$). Justification of claim: Let the $i$th letter of $x$ be denoted $x_i$, so that $x = x_1 x_2 \cdots x_n$. Then since $x$ is not in $\{ww \mid w \in \{a,b\}^{n/2}\}$, there must exist some index $i$ such that $x_i \ne x_{i+n/2}$. Consequently we can take $u = x_1 \cdots x_{2i-1}$ and $v = x_{2i} \cdots x_n$; the central letter of $u$ will be $x_i$, and the central letter of $v$ will be $x_{i+n/2}$, so by construction $u,v$ have different central letters. Next suppose $x \in L(G)$ has even length. I'll show that we must have $x \in L$. If $x$ has even length, it must be derivable from either $AB$ or $BA$; without loss of generality, suppose it is derivable from $AB$, and $x=uv$ where $u$ is derivable from $A$ and $v$ is derivable from $B$. If $u,v$ have the same lengths, then we must have $u\ne v$ (since they have different central letters), so $x \notin \{ww \mid w \in \{a,b\}^*\}$. So suppose $u,v$ have different lengths, say length $\ell$ and $n-\ell$ respectively. Then their central letters are $u_{(\ell+1)/2}$ and $v_{(n-\ell+1)/2}$. The fact that $u,v$ have different central letters means that $u_{(\ell+1)/2} \ne v_{(n-\ell+1)/2}$. Since $x=uv$, this means that $x_{(\ell+1)/2} \ne x_{(n+\ell+1)/2}$. If we attempt to decompose $x$ as $x=ww'$ where $w,w'$ have the same length, then we'll discover that $w_{(\ell+1)/2} = x_{(\ell+1)/2} \ne x_{(n+\ell+1)/2} = w'_{(\ell+1)/2}$, i.e., $w\ne w'$, so $x \notin \{ww \mid w \in \{a,b\}^*\}$. In particular, it follows that $x \in L$. | {
"source": [
"https://cs.stackexchange.com/questions/19151",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/12256/"
]
} |
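The characterization used in the proof above (an even-length word is in $L$ exactly when it splits into two odd-length parts with different central letters) can be brute-force checked on short words. A small sketch, written only as a sanity check:
from itertools import product

def in_L(x):
    # x is in L iff it cannot be written as ww.
    n = len(x)
    return n % 2 == 1 or any(x[i] != x[i + n // 2] for i in range(n // 2))

def splits(x):
    # Even-length x is generated by the grammar iff x = uv with u, v of odd
    # length and different central letters; odd-length words are always generated.
    n = len(x)
    if n % 2 == 1:
        return True
    return any(x[(l - 1) // 2] != x[l + (n - l - 1) // 2] for l in range(1, n, 2))

words = [''.join(w) for k in range(1, 9) for w in product('ab', repeat=k)]
print(all(in_L(w) == splits(w) for w in words))   # True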
19,577 | I know that Idris has dependent types but isn't turing complete. What can it not do by giving up Turing completeness, and is this related to having dependent types? I guess this is quite a specific question, but I don't know a huge amount about dependent types and related type systems. | Idris is Turing Complete! It does check for totality (termination when programming with data, productivity when programming with codata) but doesn't require that everything is total. Interestingly, having data and codata is enough to model Turing Completeness since you can write a monad for partial functions. I did this, years ago, in Coq - it's probably bitrotted by now but here it is nevertheless: http://eb.host.cs.st-andrews.ac.uk/Partial/partial.v . You do need one escape to actually run such things, but Idris allows you to do that. Idris won't reduce partial functions at the type level, in order to keep type checking decidable. Also, only total programs can reasonably be believed as proofs. | {
"source": [
"https://cs.stackexchange.com/questions/19577",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/12668/"
]
} |
19,591 | I know that it can be proven PROLOG is Turing-complete by constructing a program that simulates a Turing machine like this: turing(Tape0, Tape) :-
perform(q0, [], Ls, Tape0, Rs),
reverse(Ls, Ls1),
append(Ls1, Rs, Tape).
perform(qf, Ls, Ls, Rs, Rs) :- !.
perform(Q0, Ls0, Ls, Rs0, Rs) :-
symbol(Rs0, Sym, RsRest),
once(rule(Q0, Sym, Q1, NewSym, Action)),
action(Action, Ls0, Ls1, [NewSym|RsRest], Rs1),
perform(Q1, Ls1, Ls, Rs1, Rs).
symbol([], b, []).
symbol([Sym|Rs], Sym, Rs).
action(left, Ls0, Ls, Rs0, Rs) :- left(Ls0, Ls, Rs0, Rs).
action(stay, Ls, Ls, Rs, Rs).
action(right, Ls0, [Sym|Ls0], [Sym|Rs], Rs).
left([], [], Rs0, [b|Rs0]).
left([L|Ls], Ls, Rs, [L|Rs]). Source However, I’m wondering which parts of the PROLOG language one could strip away (esp. function symbols, clause overloading, recursion, unification) without losing Turing completeness. Are function symbols themselves Turing complete? | It's a fairly reliable rule of thumb that Turing-completeness depends on the ability to construct answers or intermediate values of unrestricted "size" and the ability to loop or recurse an unrestricted number of times. If you have those two things, you probably have Turing-completeness. (More specifically, if you can construct Peano arithmetic, then you certainly have Turing-completeness!) Let's assume for the moment that you've already stripped arithmetic. We'll also assume that you don't have any non-logical features like atom_chars , assert , and so on, which enable general shenanigans. If you stripped out function symbols, you can't construct answers or intermediates of unrestricted size; you can only use atoms which appear in the program and the query. As a result, the set of all possible solutions to any query is finite , so taking the least fixed point of the program/query will always terminate. Datalog (a relational database query language based on Prolog) works on this principle. Similarly, if you restricted Prolog to primitive recursion only (that includes no recursion as a degenerate case), then the amount of recursion that you can do is bounded by the size of the query, so all computation terminates. So you need general recursion for Turing-completeness. And, of course, if you have general recursion, you can cut a whole bunch of features and retain Turing-completeness, including general unification (construction and top-level pattern matching is sufficient), negation, and the cut. | {
"source": [
"https://cs.stackexchange.com/questions/19591",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/8415/"
]
} |
19,605 | This Github repo hosts a very cool project where the creator is able to, give an integer sequence, predict the most likely next values by searching the smallest/simplest programs that output that integer sequence. I was trying to approach the same idea using lambda-calculus instead of a stack-based language, but I was stuck on the enumeration of valid programs on LC's grammar. Anyway, what is the field studying that kind of idea and how can I grasp the current state-of-art? | It's a fairly reliable rule of thumb that Turing-completeness depends on the ability to construct answers or intermediate values of unrestricted "size" and the ability to loop or recurse an unrestricted number of times. If you have those two things, you probably have Turing-completeness. (More specifically, if you can construct Peano arithmetic, then you certainly have Turing-completeness!) Let's assume for the moment that you've already stripped arithmetic. We'll also assume that you don't have any non-logical features like atom_chars , assert , and so on, which enable general shenanigans. If you stripped out function symbols, you can't construct answers or intermediates of unrestricted size; you can only use atoms which appear in the program and the query. As a result, the set of all possible solutions to any query is finite , so taking the least fixed point of the program/query will always terminate. Datalog (a relational database query language based on Prolog) works on this principle. Similarly, if you restricted Prolog to primitive recursion only (that includes no recursion as a degenerate case), then the amount of recursion that you can do is bounded by the size of the query, so all computation terminates. So you need general recursion for Turing-completeness. And, of course, if you have general recursion, you can cut a whole bunch of features and retain Turing-completeness, including general unification (construction and top-level pattern matching is sufficient), negation, and the cut. | {
"source": [
"https://cs.stackexchange.com/questions/19605",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/11547/"
]
} |
19,609 | The question is really confusing me. I know every context sensitive grammar is monotonic but not vice versa. e.g. AB--->BA is monotonic but not context sensitive. Can someone explain to me in simple terms why this is? | It's a fairly reliable rule of thumb that Turing-completeness depends on the ability to construct answers or intermediate values of unrestricted "size" and the ability to loop or recurse an unrestricted number of times. If you have those two things, you probably have Turing-completeness. (More specifically, if you can construct Peano arithmetic, then you certainly have Turing-completeness!) Let's assume for the moment that you've already stripped arithmetic. We'll also assume that you don't have any non-logical features like atom_chars , assert , and so on, which enable general shenanigans. If you stripped out function symbols, you can't construct answers or intermediates of unrestricted size; you can only use atoms which appear in the program and the query. As a result, the set of all possible solutions to any query is finite , so taking the least fixed point of the program/query will always terminate. Datalog (a relational database query language based on Prolog) works on this principle. Similarly, if you restricted Prolog to primitive recursion only (that includes no recursion as a degenerate case), then the amount of recursion that you can do is bounded by the size of the query, so all computation terminates. So you need general recursion for Turing-completeness. And, of course, if you have general recursion, you can cut a whole bunch of features and retain Turing-completeness, including general unification (construction and top-level pattern matching is sufficient), negation, and the cut. | {
"source": [
"https://cs.stackexchange.com/questions/19609",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/12719/"
]
} |
19,771 | I know this is probably very basic, I just can't wrap my head around it. We recently studied Dijkstra's algorithm for finding the shortest path between two vertices on a weighted graph. My professor said this algorithm will not work on a graph with negative edges, so I tried to figure out what could be wrong with shifting all the edge weights by a positive number, so that they all become positive, when the input graph has negative edges in it. For example, let's consider the following input graph: Now if I add 3 to all edges, it's obvious that the shortest path (between $s$ and $t$ ) has changed: Thus this kind of operation might result in wrong output. And this, basically, is what I don't get. Why does this happen? Why does shifting the values have such a dramatic effect on the shortest path? This is totally counter-intuitive, at least for me. Your thoughts? | Dijkstra relies on one "simple" fact: if all weights are non-negative, adding an edge can never make a path shorter. That's why picking the shortest candidate edge (local optimality) always ends up being correct (global optimality). If that is not the case, the "frontier" of candidate edges does not send the right signals; a cheap edge might lure you down a path with positive weights while an expensive one hides a path with negative weights. For details, I recommend you check out a correctness proof and try to do it with negative weights; observe where it breaks. | {
"source": [
"https://cs.stackexchange.com/questions/19771",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/11972/"
]
} |
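A tiny numerical sketch of the effect described above (the weights are made up for illustration): a path with more edges absorbs the shift once per edge, so the ranking of paths can flip.
multi_hop = [1, 1, 1]     # s -> a -> b -> t, three cheap edges (total 3)
direct = [4]              # s -> t, one expensive edge (total 4)

for shift in (0, 3):
    cost_multi = sum(w + shift for w in multi_hop)   # the shift is paid per edge
    cost_direct = sum(w + shift for w in direct)
    print(shift, cost_multi, cost_direct)
# shift 0: 3 vs 4  -> the three-edge path is shortest
# shift 3: 12 vs 7 -> the one-edge path is shortest; the shortest path changed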
20,117 | I'm trying to wrap my head around an NP-completeness proof which seem to revolve around SAT/3CNF-SAT. Maybe it's the late hour but I'm afraid I can't think of a 3CNF formula that cannot be satisfied (I'm probably missing something obvious). Can you give me an example for such formula? | Technically, you can write $x\wedge \neg x$ in 3-CNF as $(x\vee x\vee x)\wedge (\neg x\vee \neg x\vee \neg x)$, but you probably want a "real" example. In that case, a 3CNF formula needs at least 3 variables. Since each clause rules out exactly one assignment, that means you need at least $2^3=8$ clauses in order to have a non-satisfiable formula. Indeed, the simplest one is: $$(x\vee y\vee z)\wedge (x\vee y\vee \neg z)\wedge (x\vee \neg y\vee z)\wedge(x\vee \neg y\vee \neg z)\wedge(\neg x\vee y\vee z)\wedge(\neg x\vee y\vee \neg z)\wedge(\neg x\vee \neg y\vee z)\wedge(\neg x\vee \neg y\vee \neg z)$$
It is not hard to see that this formula is unsatisfiable. | {
"source": [
"https://cs.stackexchange.com/questions/20117",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/11171/"
]
} |
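The claim above is small enough to verify exhaustively; a sketch that builds the eight clauses and tries all eight assignments:
from itertools import product

# A literal is (variable index, sign); sign False means the variable appears negated.
clauses = [[(0, sx), (1, sy), (2, sz)]
           for sx, sy, sz in product([True, False], repeat=3)]

def satisfied(assignment, clause):
    return any(assignment[var] == sign for var, sign in clause)

models = [a for a in product([True, False], repeat=3)
          if all(satisfied(a, c) for c in clauses)]
print(models)   # [] -- each clause rules out exactly one of the 8 assignments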
21,727 | I'm currently reading a book (and a lot of wikipedia) about quantum physics and I've yet to understand how a quantum computer can be faster than the computers we have today. How can a quantum computer solve a problem in sub-exponential time that a classic computer can only solve in exponential time? | A quantum computer by itself isn't faster. Instead, it has a different model of computation . In this model, there are algorithms for certain (not all!) problems, which are asymptotically faster than the fastest possible (or fastest known, for some problems) classical algorithms. I recommend reading The Limits of Quantum by Scott Aaronson: it's a short popular article explaining just what we can expect from quantum computers. | {
"source": [
"https://cs.stackexchange.com/questions/21727",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/11713/"
]
} |
21,728 | Could somebody explain the difference between dependent types and refinement types? As I understand it, a refinement type contains all values of a type fulfilling a predicate. Is there a feature of dependent types which distinguishes them? If it helps, I came across Refined types via the Liquid Haskell project, and dependent types via Coq and Agda. That said, I'm looking for an explanation of how the theories differ. | The main differences are along two dimensions -- in the underlying theory,
and in how they can be used. Let's just focus on the latter. As a user, the "logic" of specifications in LiquidHaskell, and in refinement type systems generally, is restricted to decidable fragments
so that verification (and inference) is completely automatic, meaning one does not require "proof terms" of the sort needed in the full dependent setting. This leads to significant automation. For example, compare insertion sort in LH: http://ucsd-progsys.github.io/lh-workshop/04-case-study-insertsort.html#/ordered-lists vs. in Idris https://github.com/davidfstr/idris-insertion-sort/blob/master/InsertionSort.idr However, the automation comes at a price. One cannot use arbitrary functions as specifications as one can in the fully dependent world,
which restricts the class of properties one can write. Thus, one goal of refinement systems is to extend the class of what
can be specified, while that of fully dependent systems is to automate what can be proved. Perhaps there is a happy meeting ground where we can
get the best of both worlds! | {
"source": [
"https://cs.stackexchange.com/questions/21728",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/2253/"
]
} |
22,435 | I am interested in the time complexity of a compiler. Clearly this is a very complicated question as there are many compilers, compiler options and variables to consider. Specifically, I am interested in LLVM but would be interested in any thoughts people had or places to start research. A quite google seems to bring little to light. My guess would be that there are some optimisation steps which are exponential, but which have little impact on the actual time. Eg, exponential based on the number are arguments of a function. From the top of my head, I would say that generating the AST tree would be linear. IR generation would require stepping through the tree while looking up values in ever growing tables, so $O(n^2)$ or $O(n\log n)$. Code generation and linking would be a similar type of operation. Therefore, my guess would be $O(n^2)$, if we removed exponentials of variables which do not realistically grow. I could be completely wrong though. Does anyone have any thoughts on it? | The best book to answer your question would probably be: Cooper and Torczon, "Engineering a Compiler," 2003. If you have access to a university library you should be able to borrow a copy. In a production compiler like llvm or gcc the designers make every effort to keep all the algorithms below $O(n^2)$ where $n$ is the size of the input. For some of the analysis for the "optimization" phases this means that you need to use heuristics rather than producing truly optimal code. The lexer is a finite state machine, so $O(n)$ in the size of the input (in characters) and produces a stream of $O(n)$ tokens that is passed to the parser. For many compilers for many languages the parser is LALR(1) and thus processes the token stream in time $O(n)$ in the number of input tokens. During parsing you typically have to keep track of a symbol table, but, for many languages, that can be handled with a stack of hash tables ("dictionaries"). Each dictionary access is $O(1)$, but you may occasionally have to walk the stack to look up a symbol. The depth of the stack is $O(s)$ where $s$ is the nesting depth of the scopes. (So in C-like languages, how many layers of curly braces you are inside.) Then the parse tree is typically "flattened" into a control flow graph. The nodes of the control flow graph might be 3-address instructions (similar to a RISC assembly language), and the size of the control flow graph will typically be linear in the size of the parse tree. Then a series of redundancy elimination steps are typically applied (common subexpression elimination, loop invariant code motion, constant propagation, ...). (This is often called "optimization" although there is rarely anything optimal about the result, the real goal is to improve the code as much as is possible within the time and space constraints we have placed on the compiler.) Each redundancy elimination step will typically require proofs of some facts about the control flow graph. These proofs are typically done using data flow analysis . Most data-flow analyses are designed so that they will converge in $O(d)$ passes over the flow graph where $d$ is (roughly speaking) the loop nesting depth and a pass over the flow graph takes time $O(n)$ where $n$ is the number of 3-address instructions. For more sophisticated optimizations you might want to do more sophisticated analyses. At this point you start running into tradeoffs. 
You want your analysis algorithms to take much less than $O(n^2)$ time in the size of the whole-program's flow graph, but this means you need to do without information (and program improving transformations) that might be expensive to prove. A classic example of this is alias analysis, where for some pair of memory writes you would like to prove that the two writes can never target the same memory location. (You might want to do an alias analysis to see if you could move one instruction above the other.) But to get accurate information about aliases you might need to analyze every possible control path through the program, which is exponential in the number of branches in the program (and thus exponential in the number of nodes in the control flow graph.) Next you get into register allocation. Register allocation can be phrased as a graph-coloring problem , and coloring a graph with a minimal number of colors is known to be NP-Hard. So most compilers use some kind of greedy heuristic combined with register spilling with the goal of reducing the number of register spills as best as possible within reasonable time bounds. Finally you get into code generation. Code generation is typically done a maximal basic-block at a time where a basic block is a set of linearly connected control flow graph nodes with a single entry and single exit. This can be reformulated as a graph covering problem where the graph you are trying to cover is the dependence graph of the set of 3-address instructions in the basic block, and you are trying to cover with a set of graphs that represent the available machine instructions. This problem is exponential in the size of the largest basic block (which could, in principle, be the same order as the size of the entire program), so this is again typically done with heuristics where only a small subset of the possible coverings are examined. | {
"source": [
"https://cs.stackexchange.com/questions/22435",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/5373/"
]
} |
22,497 | I was watching the lecture by Jim Weirich, titled ' Adventures in Functional Programming '. In this lecture, he introduces the concept of Y-combinators, which essentially finds the fixed point for higher order functions. One of the motivations, as he mentions it, is to be able to express recursive functions using lambda calculus so that the theory by Church (anything that is effectively computable can be computed using lambda calculus) stays. The problem is that a function cannot call itself simply so, because lambda calculus does not allow named functions, i.e., $$n(x, y) = x + y$$ cannot bear the name '$n$', it must be defined anonymously: $$(x, y) \rightarrow x + y $$ Why is it important for lambda calculus to have functions that are not named? What principle is violated if there are named functions? Or is it that I just misunderstood jim's video? | The main theorem regarding this issue is due to a British mathematician from
the end of the 16th century, called William Shakespeare . His best known
paper on the subject, entitled " Romeo and Juliet ", was published in
1597, though the research work was conducted a few years earlier,
inspired by such precursors as Arthur Brooke and William Painter. His main result, stated in Act II. Scene II , is the famous theorem : What's in a name? that which we call a rose By any other name would smell as sweet; This theorem can be intuitively understood as "names do not contribute
to meaning". The greater part of the paper is devoted to an example complementing
the theorem and showing that, even though names contribute no meaning,
they are the source of endless problems. As pointed out by Shakespeare, names can be changed without changing
meaning, an operation that was later called $\alpha$-conversion by Alonzo
Church and his followers. As a
consequence, it is not necessarily simple to determine what is denoted
by a name. This raises a variety of issues such as developing a
concept of environment where the name-meaning associations are
specified, and rules to know what is the current environment when you
try to determine the meaning associated with a name. This baffled
computer scientists for a while, giving rise to technical
difficulties such as the infamous Funarg problem . Environments remain
an issue in some popular programming languages, but it is generally
considered physically unsafe to be more specific, almost as lethal as
the example worked out by Shakespeare in his paper. This issue is also close to the problems raised in formal language theory,
when alphabets and formal systems have to be defined up to an
isomorphism , so as to underscore that the symbols of the alphabets are
abstract entities , independent of how they "materialize" as elements
from some set. This major result by Shakespeare shows also that science was then diverging
from magic and religion, where a being or a meaning may have a true name . The conclusion of all this is that for theoretical work, it is often
more convenient not to be encumbered by names, even though it may feel
simpler for practical work and everyday life. But recall that not
everyone called Mom is your mother. Note : The issue was addressed more recently by the 20th century American logician Gertrude Stein . However, her mathematician colleagues are still
pondering the precise technical implications of her main theorem : Rose is a rose is a rose is a rose. published in 1913 in a short communication entitled "Sacred Emily". | {
"source": [
"https://cs.stackexchange.com/questions/22497",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/15533/"
]
} |
22,589 | I've always wondered why processors stopped at 32 registers. It's by far the fastest piece of the machine, why not just make bigger processors with more registers? Wouldn't that mean less going to the RAM? | First, not all processor architectures stopped at 32 registers. Almost all the RISC architectures that have 32 registers exposed in the instruction set actually have 32 integer registers and 32 more floating point registers (so 64). (Floating point "add" uses different registers than integer "add".) The SPARC architecture has register windows . On the SPARC you can only access 32 integer registers at a time, but the registers act like a stack and you can push and pop new registers 16 at a time. The Itanium architecture from HP/Intel had 128 integer and 128 floating point registers exposed in the instruction set. Modern GPUs from NVidia, AMD, Intel, ARM and Imagination Technologies, all expose massive numbers of registers in their register files. (I know this to be true of the NVidia and Intel architectures, I am not very familiar with the AMD, ARM and Imagination instruction sets, but I think the register files are large there too.) Second, most modern microprocessors implement register renaming to eliminate unnecessary serialization caused by needing to reuse resources, so the underlying physical register files can be larger (96, 128 or 192 registers on some machines.) This (and dynamic scheduling) eliminates some of the need for the compiler to generate so many unique register names, while still providing a larger register file to the scheduler. There are two reasons why it might be difficult to further increase the number of registers exposed in the instruction set. First, you need to be able to specify the register identifiers in each instruction. 32 registers require a 5 bit register specifier, so 3-address instructions (common on RISC architectures) spend 15 of the 32 instruction bits just to specify the registers. If you increased that to 6 or 7 bits, then you would have less space to specify opcodes and constants. GPUs and Itanium have much larger instructions. Larger instructions come at a cost: you need to use more instruction memory, so your instruction cache behavior is less ideal. The second reason is access time. The larger you make a memory the slower it is to access data from it. (Just in terms of basic physics: the data is stored in 2-dimensional space, so if you are storing $n$ bits, the average distance to a specific bit is $O(\sqrt{n})$ .) A register file is just a small multi-ported memory, and one of the constraints on making it larger is that eventually you would need to start clocking your machine slower to accommodate the larger register file. Usually in terms of total performance this is a lose. | {
"source": [
"https://cs.stackexchange.com/questions/22589",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/15649/"
]
} |
22,693 | Let me clarify: Given a scatterplot of some given number of points n, if I want to find the closest point to any point in the plot mentally, I can immediately ignore most points in the graph, narrowing my choices down to some small, constant number of points nearby. Yet, in programming, given a set of points n, in order to find the closest point to any one, it requires checking every other point, which is ${\cal O}(n)$ time. I am guessing that the visual sight of a graph is likely the equivalent of some data structure I am incapable of understanding; because with programming, by converting the points to a more structured method such as a quadtree, one can find the closest points to $k$ points in $n$ in $k\cdot\log(n)$ time, or ammortized ${\cal O}(\log n)$ time. But there is still no known ${\cal O}(1)$ ammortized algorithms (that I can find) for point-finding after data restructuring. So why does this appear to be possible with mere visual inspection? | Your model of what you do mentally is incorrect. In fact, you operate in two steps: Eliminate all points that are too far, in $O(1)$ time. Measure the $m$ points that are about as close, in $\Theta(m)$ time. If you've played games like pétanque (bowls) or curling, this should be familiar — you don't need to examine the objects that are very far from the target, but you may need to measure the closest contenders. To illustrate this point, which green dot is closest to the red dot? (Only by a little over 1 pixel, but there is one that's closest.) To make things easier, the dots have even been color-coded by distance. This picture contains $m=10$ points which are nearly on a circle, and $n \gg 10$ green points in total. Step 1 lets you eliminate all but about $m$ points, but step 2 requires checking each of the $m$ points. There is no a priori bound for $m$. A physical observation lets you shrink the problem size from the whole set of $n$ points to a restricted candidate set of $m$ points. This step is not a computation step as commonly understood, because it is based on a continuous process. Continuous processes are not subject to the usual intuitions about computational complexity and in particular to asymptotic analysis. Now, you may ask, why can't a continuous process completely solve the problem? How does it come to these $m$ points, why can't we refine the process to get $m=1$? The answer is that I cheated a bit: I presented a set of points which is generated to consists of $m$ almost-closest points and $n-m$ points which are further. In general, determining which points lie within a precise boundary requires a precise observation which has to be performed point by point. A coarse process of elimination lets you exclude many obvious non-candidates, but merely deciding which candidates are left requires enumerating them. You can model this system in a discrete, computational world. Assume that the points are represented in a data structure that sorts them into cells on a grid, i.e. the point $(x,y)$ is stored in a list for the cell $(\lfloor x \rfloor, \lfloor y \rfloor)$. If you're looking for the points that are closest to $(x_0, y_0)$ and the cell that contains this point contains at most one other point, then it is sufficient to check the containing cell and the 8 neighboring cells. The total number of points in these 9 cells is $m$. This model respects some key properties of the human model: $m$ is potentially unbounded — a degenerate worse case of e.g. points lying almost on a circle is always possible. 
The practical efficiency depends on having selected a scale that matches the data (e.g. you'll save nothing if your dots are on a piece of paper and your cells are 1 km wide). | {
"source": [
"https://cs.stackexchange.com/questions/22693",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/15762/"
]
} |
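The two-step model in the answer above maps naturally onto a grid of buckets. A rough sketch follows; the cell size and the restriction to the 9 surrounding cells are simplifying assumptions, and if those cells were empty you would have to widen the search.
from collections import defaultdict
from math import floor, hypot

def build_grid(points, cell):
    grid = defaultdict(list)
    for x, y in points:
        grid[(floor(x / cell), floor(y / cell))].append((x, y))
    return grid

def closest(grid, cell, x0, y0):
    cx, cy = floor(x0 / cell), floor(y0 / cell)
    # Step 1: coarse elimination -- only points in the 9 nearby cells are candidates.
    candidates = [p for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    for p in grid[(cx + dx, cy + dy)]]
    # Step 2: measure the m remaining candidates one by one.
    return min(candidates, key=lambda p: hypot(p[0] - x0, p[1] - y0), default=None)

pts = [(0.1, 0.2), (0.4, 0.9), (3.7, 3.1), (9.5, 9.5)]
grid = build_grid(pts, cell=1.0)
print(closest(grid, 1.0, 0.3, 0.3))   # (0.1, 0.2)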
23,010 | This idea occurred to me as a kid learning to program and
on first encountering PRNG's. I still don't know how realistic
it is, but now there's stack exchange. Here's a 14 year-old's scheme for an amazing compression algorithm: Take a PRNG and seed it with seed s to get a long sequence
of pseudo-random bytes. To transmit that sequence to another party,
you need only communicate a description of the PRNG, the appropriate seed
and the length of the message. For a long enough sequence, that
description would be much shorter then the sequence itself. Now suppose I could invert the process. Given enough time and
computational resources, I could do a brute-force search and find
a seed (and PRNG, or in other words: a program) that produces my
desired sequence (Let's say an amusing photo of cats being mischievous). PRNGs repeat after a large enough number of bits have been generated,
but compared to "typical" cycles my message is quite short so this
dosn't seem like much of a problem. Voila, an effective (if rube-Goldbergian) way to compress data. So, assuming: The sequence I wish to compress is finite and known in advance. I'm not short on cash or time (Just as long as a finite amount
of both is required) I'd like to know: Is there a fundamental flaw in the reasoning behind the scheme? What's the standard way to analyse these sorts of thought experiments? Summary It's often the case that good answers make clear not only the answer,
but what it is that I was really asking. Thanks for everyone's patience
and detailed answers. Here's my nth attempt at a summary of the answers: The PRNG/seed angle doesn't contribute anything, it's no more
than a program that produces the desired sequence as output. The pigeonhole principle: There are many more messages of
length > k than there are (message generating) programs of
length <= k. So some sequences simply cannot be the output of a
program shorter than the message. It's worth mentioning that the interpreter of the program
(message) is necessarily fixed in advance. And it's design
determines the (small) subset of messages which can be generated
when a message of length k is received. At this point the original PRNG idea is already dead, but there's
at least one last question to settle: Q: Could I get lucky and find that my long (but finite) message just
happens to be the output of a program of length < k bits? Strictly speaking, it's not a matter of chance since the
meaning of every possible message (program) must be known
in advance. Either it is the meaning of some message
of < k bits or it isn't . If I choose a random message of >= k bits randomly (why would I?),
I would in any case have a vanishing probability of being able to send it
using less than k bits, and an almost certainty of not being able
to send it at all using less than k bits. OTOH, if I choose a specific message of >= k bits from those which
are the output of a program of less than k bits (assuming there is
such a message), then in effect I'm taking advantage of bits already
transmitted to the receiver (the design of the interpreter), which
counts as part of the message transferred. Finally: Q: What's all this entropy / kolmogorov complexity business? Ultimately, both tell us the same thing as the (simpler) pigeonhole
principle tells us about how much we can compress: perhaps
not at all, perhaps some, but certainly not as much as we fancy
(unless we cheat). | You've got a brilliant new compression scheme, eh? Alrighty, then... ♫ Let's all play, the entropy game ♫ Just to be simple, I will assume you want to compress messages of exactly $n$ bits, for some fixed $n$. However, you want to be able to use it for longer messages, so you need some way of differentiating your first message from the second (it cannot be ambiguous what you have compressed). So, your scheme is to determine some family of PRNG/seeds such that if you want to compress, say, $01000111001$, then you just write some number $k$, which identifies some precomputed (and shared) seed/PRNG combo that generates those bits after $n$ queries. Alright. How many different bit-strings of length $n$ are there? $2^n$ (you have n choices between two items; $0$ and $1$). That means you will have to compute $2^n$ of these combos. No problem. However, you need to write out $k$ in binary for me to read it. How big can $k$ get? Well, it can be as big as $2^n$. How many bits do I need to write out $2^n$? $\log{2^n} = n$. Oops! Your compression scheme needs messages as long as what you're compressing! "Haha!", you say, "but that's in the worst case! One of my messages will be mapped to $0$, which needs only $1$ bit to represent! Victory!" Yes, but your messages have to be unambiguous! How can I tell apart $1$ followed by $0$ from $10$? Since some of your keys are length $n$, all of them must be, or else I can't tell where you've started and stopped. "Haha!", you say, "but I can just put the length of the string in binary first! That only needs to count to $n$, which can be represented by $\log{n}$ bits! So my $0$ now comes prefixed with only $\log{n}$ bits, I still win!" Yes, but now those really big numbers are prefixed with $\log{n}$ bits. Your compression scheme has made some of your messages even longer! And half of all of your numbers start with $1$, so half of your messages are that much longer! You then proceed to throw out more ideas like a terminating character, gzipping the number, and compressing the length itself, but all of those run into cases where the resultant message is just longer. In fact, for every bit you save on some message, another message will get longer in response. In general, you're just going to be shifting around the "cost" of your messages. Making some shorter will just make others longer. You really can't fit $2^n$ different messages in less space than writing out $2^n$ binary strings of length $n$. "Haha!", you say, "but I can choose some messages as 'stupid' and make them illegal! Then I don't need to count all the way to $2^n$, because I don't support that many messages!" You're right, but you haven't really won. You've just shrunk the set of messages you support. If you only supported $a=0000000011010$ and $b=111111110101000$ as the messages you send, then you can definitely just have the code $a\rightarrow 0$, $b\rightarrow 1$, which matches exactly what I've said. Here, $n=1$. The actual length of the messages isn't important, it's how many there are. "Haha!", you say, "but I can simply determine that those stupid messages are rare! I'll make the rare ones big, and the common ones small! Then I win on average!" Yep! Congratulations, you've just discovered entropy ! If you have $n$ messages, where the $i$th message has probability $p_i$ of being sent, then you can get your expected message length down to the entropy $H = \sum_{i=1}^np_i\log(1/p_i)$ of this set of messages. 
That formula is a kind of weird expression, but all you really need to know is that it's biggest when all messages are equally likely, and smaller when some are more common than others. In the extreme, if you know basically every message is going to be $a=000111010101$, then you can use this super efficient code: $a\rightarrow0$, $x\rightarrow1x$ otherwise. Then your expected message length is basically $1$, which is awesome, and that's going to be really close to the entropy $H$. However, $H$ is a lower bound, and you really can't beat it, no matter how hard you try. Anything that claims to beat entropy is probably not giving enough information to unambiguously retrieve the compressed message, or is just wrong. Entropy is such a powerful concept that we can lower-bound (and sometimes even upper-bound) the running time of some algorithms with it, because if they run really fast (or really slow), then they must be doing something that violates entropy. | {
"source": [
"https://cs.stackexchange.com/questions/23010",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/-1/"
]
} |
23,068 | Today we discussed in a lecture a very simple algorithm for finding an element in a sorted array using binary search. We were asked to determine its asymptotic complexity for an array of $n$ elements. My idea was that it is obviously $O(\log n)$, or $O(\log_2 n)$ to be more specific because $\log_2 n$ is the number of operations in the worst case. But I can do better, for example if I hit the searched element the first time - then the lower bound is $\Omega(1)$. The lecturer presented the solution as $\Theta(\log n)$ since we usually consider only worst case inputs for algorithms. But when considering only worst cases, what's the point of having $O$ and $\Omega$-notation when all worst cases of the given problem have the same complexity ($\Theta$ would be all we need, right?). What am I missing here? | Landau notation denotes asymptotic bounds on functions. See here for an explanation of the differences among $O$, $\Omega$ and $\Theta$. Worst-, best-, average or you-name-it-case time describe distinct runtime functions: one for the sequence of highest runtime of any given $n$, one for that of lowest, and so on. Per se, the two have nothing to do with each other. The definitions are independent. Now we can go ahead and formulate asymptotic bounds on runtime functions: upper ($O$), lower ($\Omega$) or both ($\Theta$). We can do either for worst-, best- or any other case. For instance, in binary search we get a best-case runtime asymptotic of $\Theta(1)$ and a worst-case asymptotic of $\Theta(\log n)$. | {
"source": [
"https://cs.stackexchange.com/questions/23068",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/12756/"
]
} |
23,593 | There are lots of questions about how to analyze the running time of algorithms (see, e.g., runtime-analysis and algorithm-analysis ). Many are similar, for instance those asking for a cost analysis of nested loops or divide & conquer algorithms, but most answers seem to be tailor-made. On the other hand, the answers to another general question explain the larger picture (in particular regarding asymptotic analysis) with some examples, but not how to get your hands dirty. Is there a structured, general method for analysing the cost of algorithms? The cost might be the running time (time complexity), or some other measure of cost, such as the number of comparisons executed, the space complexity, or something else. This is supposed to become a reference question that can be used to point beginners to; hence its broader-than-usual scope. Please take care to give general, didactically presented answers that are illustrated by at least one example but nonetheless cover many situations. Thanks! | Translating Code to Mathematics Given a (more or less) formal operational semantics you can translate an algorithm's (pseudo-)code quite literally into a mathematical expression that gives you the result, provided you can manipulate the expression into a useful form. This works well for additive cost measures such as number of comparisons, swaps, statements, memory accesses, cycles some abstract machine needs, and so on. Example: Comparisons in Bubblesort Consider this algorithm that sorts a given array A : bubblesort(A) do 1
n = A.length; 2
for ( i = 0 to n-2 ) do 3
for ( j = 0 to n-i-2 ) do 4
if ( A[j] > A[j+1] ) then 5
tmp = A[j]; 6
A[j] = A[j+1]; 7
A[j+1] = tmp; 8
end 9
end 10
end 11
end 12 Let's say we want to perform the usual sorting algorithm analysis, that is count the number of element comparisons (line 5). We note immediately that this quantity does not depend on the content of array A , only on its length $n$. So we can translate the (nested) for -loops quite literally into (nested) sums; the loop variable becomes the summation variable and the range carries over. We get: $\qquad\displaystyle C_{\text{cmp}}(n) = \sum_{i=0}^{n-2} \sum_{j=0}^{n-i-2} 1 = \dots = \frac{n(n-1)}{2} = \binom{n}{2}$, where $1$ is the cost for each execution of line 5 (which we count). Example: Swaps in Bubblesort I'll denote by $P_{i,j}$ the subprogram that consists of lines i to j and by $C_{i,j}$ the costs for executing this subprogram (once). Now let's say we want to count swaps , that is how often $P_{6,8}$ is executed. This is a "basic block", that is a subprogram that is always executed atomically and has some constant cost (here, $1$). Contracting such blocks is one useful simplification that we often apply without thinking or talking about it. With a similar translation as above we come to the following formula: $\qquad\displaystyle C_{\text{swaps}}(A) = \sum_{i=0}^{n-2} \sum_{j=0}^{n-i-2} C_{5,9}(A^{(i,j)})$. $A^{(i,j)}$ denotes the array's state before the $(i,j)$-th iteration of $P_{5,9}$. Note that I use $A$ instead of $n$ as parameter; we'll soon see why. I don't add $i$ and $j$ as parameters of $C_{5,9}$ since the costs do not depend on them here (in the uniform cost model , that is); in general, they just might. Clearly, the costs of $P_{5,9}$ depend on the content of $A$ (the values A[j] and A[j+1] , specifically) so we have to account for that. Now we face a challenge: how do we "unwrap" $C_{5,9}$? Well, we can make the dependency on the content of $A$ explicit: $\qquad\displaystyle C_{5,9}(A^{(i,j)}) = C_5(A^{(i,j)}) +
\begin{cases}
1 &, \mathtt{A^{(i,j)}[j] > A^{(i,j)}[j+1]} \\
0 &, \text{else}
\end{cases}$. For any given input array, these costs are well-defined, but we want a more general statement; we need to make stronger assumptions. Let us investigate three typical cases. The worst case Just from looking at the sum and noting that $C_{5,9}(A^{(i,j)}) \in \{0,1\}$, we can find a trivial upper bound for cost: $\qquad\displaystyle C_{\text{swaps}}(A) \leq \sum_{i=0}^{n-2} \sum_{j=0}^{n-i-2} 1
= \frac{n(n-1)}{2} = \binom{n}{2}$. But can this happen, i.e. is there an $A$ for which this upper bound is attained? As it turns out, yes: if we input an inversely sorted array of pairwise distinct elements, every iteration must perform a swap¹. Therefore, we have derived the exact worst-case number of swaps of Bubblesort. The best case Conversely, there is a trivial lower bound: $\qquad\displaystyle C_{\text{swaps}}(A) \geq \sum_{i=0}^{n-2} \sum_{j=0}^{n-i-2} 0
= 0$. This can also happen: on an array that is already sorted, Bubblesort does not execute a single swap. The average case Worst and best case open quite a gap. But what is the typical number of swaps? In order to answer this question, we need to define what "typical" means. In theory, we have no reason to prefer one input over another and so we usually assume a uniform distribution over all possible inputs, that is every input is equally likely. We restrict ourselves to arrays with pairwise distinct elements and thus assume the random permutation model. Then, we can rewrite our costs like this²: $\qquad\displaystyle \mathbb{E}[C_{\text{swaps}}] = \frac{1}{n!} \sum_{A} \sum_{i=0}^{n-2} \sum_{j=0}^{n-i-2} C_{5,9}(A^{(i,j)})$ Now we have to go beyond simple manipulation of sums. By looking at the algorithm, we note that every swap removes exactly one inversion in $A$ (we only ever swap neighbours³). That is, the number of swaps performed on $A$ is exactly the number of inversions $\operatorname{inv}(A)$ of $A$. Thus, we can replace the inner two sums and get $\qquad\displaystyle \mathbb{E}[C_{\text{swaps}}] = \frac{1}{n!} \sum_{A} \operatorname{inv}(A)$. Lucky for us, the average number of inversions has been determined to be $\qquad\displaystyle \mathbb{E}[C_{\text{swaps}}] = \frac{1}{2} \cdot \binom{n}{2}$ which is our final result. Note that this is exactly half the worst-case cost. Note that the algorithm was carefully formulated so that "the last" iteration with i = n-1 of the outer loop that never does anything is not executed. "$\mathbb{E}$" is mathematical notation for "expected value", which here is just the average. We learn along the way that no algorithm that only swaps neighbouring elements can be asymptotically faster than Bubblesort (even on average) -- the number of inversions is a lower bound for all such algorithms. This applies to e.g. Insertion Sort and Selection Sort . The General Method We have seen in the example that we have to translate control structure into mathematics; I will present a typical ensemble of translation rules. We have also seen that the cost of any given subprogram may depend on the current state , that is (roughly) the current values of variables. Since the algorithm (usually) modifies the state, the general method is slightly cumbersome to notate. If you start feeling confused, I suggest you go back to the example or make up your own. We denote with $\psi$ the current state (imagine it as a set of variable assignments). When we execute a program P starting in state $\psi$, we end up in state $\psi / \mathtt{P}$ (provided P terminates). Individual statements Given just a single statement S; , you assign it costs $C_S(\psi)$. This will typically be a constant function. Expressions If you have an expression E of the form E1 ∘ E2 (say, an arithmetic expression where ∘ may be addition or multiplication, you add up costs recursively: $\qquad\displaystyle C_E(\psi) = c_{\circ} + C_{E_1}(\psi) + C_{E_2}(\psi)$. Note that the operation cost $c_{\circ}$ may not be constant but depend on the values of $E_1$ and $E_2$ and evaluation of expressions may change the state in many languages, so you may have to be flexible with this rule. Sequence Given a program P as sequence of programs Q;R , you add the costs to $\qquad\displaystyle C_P(\psi) = C_Q(\psi) + C_R(\psi / \mathtt{Q})$. Conditionals Given a program P of the form if A then Q else R end , the costs depend on the state: $\qquad\displaystyle C_P(\psi) = C_A(\psi) +
\begin{cases}
C_Q(\psi/\mathtt{A}) &, \mathtt{A} \text{ evaluates to true under } \psi \\
C_R(\psi/\mathtt{A}) &, \text{else}
\end{cases}$ In general, evaluating A may very well change the state, hence the update for the costs of the individual branches. For-Loops Given a program P of the form for x = [x1, ..., xk] do Q end , assign costs $\qquad\displaystyle C_P(\psi) = c_{\text{init_for}} + \sum_{i=1}^k c_{\text{step_for}} + C_Q(\psi_i \circ \{\mathtt{x := xi\}})$ where $\psi_i$ is the state before processing Q for value xi , i.e. after the iteration with x being set to x1 , ..., xi-1 . Note the extra constants for loop maintenance; the loop variable has to be created ($c_{\text{init_for}}$) and assigned its values ($c_{\text{step_for}}$). This is relevant since computing the next xi may be costly and a for -loop with empty body (e.g. after simplifying in a best-case setting with a specific cost) does not have zero cost if it performs iterations. While-Loops Given a program P of the form while A do Q end , assign costs $\qquad\displaystyle C_P(\psi) \\\qquad\ = C_A(\psi) +
\begin{cases}
0 &, \mathtt{A} \text{ evaluates to false under } \psi \\
C_Q(\psi/\mathtt{A}) + C_P(\psi/\mathtt{A;Q}) &, \text{ else}
\end{cases}$ By inspecting the algorithm, this recurrence can often be represented nicely as a sum similar to the one for for-loops. Example: Consider this short algorithm: while x > 0 do 1
i += 1 2
x = x/2 3
end 4 By applying the rule, we get $\qquad\displaystyle C_{1,4}(\{i := i_0; x := x_0\}) \\\qquad\ = c_< +
\begin{cases}
0 &, x_0 \leq 0 \\
c_{+=} + c_/ + C_{1,4}(\{i := i_0 + 1; x := \lfloor x_0/2 \rfloor\}) &, \text{ else}
\end{cases}$ with some constant costs $c_{\dots}$ for the individual statements. We assume implicitly that these do not depend on state (the values of i and x ); this may or may not be true in "reality": think of overflows! Now we have to solve this recurrence for $C_{1,4}$. We note that neither the number of iterations nor the cost of the loop body depends on the value of i , so we can drop it. We are left with this recurrence: $\qquad\displaystyle C_{1,4}(x) =
\begin{cases}
c_> &, x \leq 0 \\
c_> + c_{+=} + c_/ + C_{1,4}(\lfloor x/2 \rfloor) &, \text{ else}
\end{cases}$ This solves with elementary means to $\qquad\displaystyle C_{1,4}(\psi) = \lceil \log_2 \psi(x) \rceil \cdot (c_> + c_{+=} + c_/) + c_>$, reintroducing the full state symbolically; if $\psi = \{ \dots, x := 5, \dots\}$, then $\psi(x) = 5$. Procedure Calls Given a program P of the form M(x) for some parameter(s) x where M is a procedure with (named) parameter p , assign costs $\qquad\displaystyle C_P(\psi) = c_{\text{call}} + C_M(\psi_{\text{glob}} \circ \{p := x\})$. Note again the extra constant $c_{\text{call}}$ (which might in fact depend on $\psi$!). Procedure calls are expensive due to how they are implemented on real machines, and sometimes even dominate runtime (e.g. evaluating the Fibonacci number recurrence naively). I gloss over some semantic issues you might have with the state here. You will want to distinguish global state and such local to procedure calls. Let's just assume we pass only global state here and M gets a new local state, initialized by setting the value of p to x . Furthermore, x may be an expression which we (usually) assume to be evaluated before passing it. Example: Consider the procedure fac(n) do
if ( n <= 1 ) do 1
return 1 2
else 3
return n * fac(n-1) 4
end 5
end As per the rule(s), we get: $\qquad\displaystyle\begin{align*} C_{\text{fac}}(\{n := n_0\})
&= C_{1,5}(\{n := n_0\}) \\
&= c_{\leq} +
\begin{cases}
C_2(\{n := n_0 \}) &, n_0 \leq 1 \\
C_4(\{n := n_0 \}) &, \text{ else}
\end{cases} \\
&= c_{\leq} +
\begin{cases}
c_{\text{return}} &, n_0 \leq 1 \\
c_{\text{return}} + c_* + c_{\text{call}} + C_{\text{fac}}(\{n := n_0 - 1\})
&, \text{ else}
\end{cases}
\end{align*}$ Note that we disregard global state, as fac clearly does not access any. This particular recurrence is easy to solve to $\qquad\displaystyle C_{\text{fac}}(\psi) = \psi(n) \cdot (c_{\leq} + c_{\text{return}})
+ (\psi(n) - 1) \cdot (c_* + c_{\text{call}})$ We have covered the language features you will encounter in typical pseudo code. Beware hidden costs when analysing high-level pseudo code; if in doubt, unfold. The notation may seem cumbersome and is certainly a matter of taste; the concepts listed can not be ignored, though. However, with some experience you will be able to see right away which parts of the state are relevant for which cost measure, for instance "problem size" or "number of vertices". The rest can be dropped -- this simplifies things significantly! If you think now that this is far too complicated, be advised: it is ! Deriving exact costs of algorithms in any model that is so close to real machines as to enable runtime predictions (even relative ones) is a tough endeavour. And that's not even considering caching and other nasty effects on real machines. Therefore, algorithm analysis is often simplified to the point of being mathematically tractable. For instance, if you don't need exact costs, you can over- or underestimate at any point (for upper resp. lower bounds): reduce the set of constants, get rid of conditionals, simplify sums, and so on. A note on asymptotic cost What you will usually find in literature and on the webs is the "Big-Oh analysis". The proper term is asymptotic analysis which means that instead of deriving exact costs as we did in the examples, you only give costs up to a constant factor and in the limit (roughly speaking, "for big $n$"). This is (often) fair since abstract statements have some (generally unknown) costs in reality, depending on machine, operating system and other factors, and short runtimes may be dominated by the operating system setting up the process in the first place and whatnot. So you get some perturbation, anyway. Here is how asymptotic analysis relates to this approach. Identify dominant operations (that induce costs), that is operations that occur most often (up to constant factors). In the Bubblesort example, one possible choice is the comparison in line 5. Alternatively, bound all constants for elementary operations by their maximum (from above) resp. their minimum (from below) and perform the usual analysis. Perform the analysis using execution counts of this operation as cost. When simplifying, allow estimations. Take care to only allow estimations from above if your goal is an upper bound ($O$) resp. from below if you want lower bounds ($\Omega$). Make sure you understand the meaning of Landau symbols . Remember that such bounds exist for all three cases ; using $O$ does not imply a worst-case analysis. Further reading There are many more challenges and tricks in algorithm analysis. Here is some recommended reading. How to come up with the runtime of algorithms? How to describe algorithms, prove and analyse them? Why use comparisons instead of runtime for comparing two algorithms? How can we assume that basic operations on numbers take constant time? What constitutes one unit of time in runtime analysis? Solving or approximating recurrence relations for sequences of numbers Basics of Amortised Analysis There are many questions tagged algorithm-analysis around that use techniques similar to this. | {
"source": [
"https://cs.stackexchange.com/questions/23593",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/98/"
]
} |
27,625 | I am currently reading and watching about genetic algorithms and I find it very interesting (I haven't had the chance to study it while I was at the university). I understand that mutations are based on probability (randomness is the root of evolution) but I don't get why survival is. From what I understand, if an individual $I$ has a fitness $F(i)$ and another individual $J$ has a fitness $F(j)$ with $F(i) > F(j)$, then $I$ has a better probability than $J$ to survive to the next generation. Probability implies that $J$ may survive and $I$ may not (with "bad luck"). I don't understand why this is good at all. If $I$ would always survive the selection, what would go wrong in the algorithm? My guess is that the algorithm would be similar to a greedy algorithm but I am not sure. | The main idea is that by allowing suboptimal individuals to survive, you can switch from one "peak" in the evolutionary landscape to another through a sequence of small incremental mutations. On the other hand, if you are only allowed to go uphill it requires a gigantic and massively unlikely mutation to switch peaks. Here is a diagram showing the difference: Practically, this globalization property is the main selling point of evolutionary algorithms - if you just want to find a local maximum there exist more efficient specialized techniques (e.g., L-BFGS with finite difference gradient and line search). In the real world of biological evolution, allowing suboptimal individuals to survive creates robustness when the evolutionary landscape changes. If everyone is concentrated at a peak, then if that peak becomes a valley the whole population dies (e.g., dinosaurs were the most fit species until there was an asteroid strike and the evolutionary landscape changed). On the other hand, if there is some diversity in the population then when the landscape changes some will survive. | {
"source": [
"https://cs.stackexchange.com/questions/27625",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/19332/"
]
} |
27,656 | Why Do Computers Use the Binary Number System (0,1)? Why don't they use Ternary Number System (0,1,2) or any other number system instead? | Since we're in Computer Science, I'll answer this way: they don't. What do we mean by a "computer?" There are many definitions, but in computer science as a science, the most common is the Turing machine. A turing machine is defined by several aspects: a state-set, a transition table, a halting set, and important for our discussion, an alphabet. This alphabet refers to the symbols which the machine can read as input, and that it can write to its tape. (You could have different input and tape alphabets, but let's not worry about that for now.) So, I can make a Turing machine with input alphabet $\{0,1\}$, or $\{a,b\}$, or $\{0,1,2\}$,
or $\{\uparrow,\downarrow\}$. It doesn't matter. The fact is, I can use any alphabet I choose to encode data. So, I can say that $0001001$ is 9, or I can say that $\uparrow \uparrow \uparrow \downarrow \uparrow \uparrow \downarrow$ is 9. It doesn't matter, since they're just symbols we can distinguish. The trick is that binary is enough. Any sequence of bits can be interpreted as a number, so you can convert from binary to any other system and back. But, it turns out unary is enough too. You can encode 9 as 111111111. This isn't particularly efficient, but it has the same computational power. Things get even crazier when you look into alternate models of computation, like the Lambda calculus. Here, you can view numbers as functions. In fact, you can view everything as functions. Things are encoded not as bits, 0s and 1s, but as closed mathematical functions with no mutable state. See the Church numerals for how you can do numbers this way. The point is that, 0s and 1s is a completely hardware specific issue, and the choice is arbitrary. What encoding you're using isn't particularly relevant to computer science, outside of a few subfields like operating systems or networking. | {
"source": [
"https://cs.stackexchange.com/questions/27656",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/19347/"
]
} |
27,860 | These two seem very similar and have almost an identical structure. What's the difference? What are the time complexities for different operations of each? | Heap just guarantees that elements on higher levels are greater (for max-heap) or smaller (for min-heap) than elements on lower levels, whereas BST guarantees order (from "left" to "right"). If you want sorted elements, go with BST. by Dante is not a geek Heap is better at findMin/findMax (O(1)), while BST is good at all finds (O(logN)). Insert is O(logN) for both structures. If you only care about findMin/findMax (e.g. priority-related), go with heap. If you want everything sorted, go with BST. by xysun | {
"source": [
"https://cs.stackexchange.com/questions/27860",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/16382/"
]
} |
28,200 | I'm studying CPU's and I know how it reads a program from the memory and execute its instructions. I also understand that an OS separates programs in processes, and then alternate between each one so fast that you think that they're running at the same time, but in fact each program runs alone in the CPU. But, if the OS is also a bunch of code running in the CPU, how can it manage the processes? I've been thinking and the only explanation I could think is: when the OS loads a program from the external memory to RAM, it adds its own instructions in the middle of the original program instructions, so then the program is executed, the program can call the OS and do some things. I believe there's an instruction that the OS will add to the program, that will allow the CPU to return to the OS code some time. And also, I believe that when the OS loads a program, it checks if there's some prohibted instructions (that would jump to forbidden adresses in the memory) and eliminates then. Am I thinking rigth? I'm not a CS student, but in fact, a math student. If possible, I would want a good book about this, because I did not find anyone that explains how the OS can manage a process if the OS is also a bunch of code running in the CPU, and it can't run at the same time of the program. The books only tell that the OS can manage things, but now how. | No. The operating system does not mess around with the program's code injecting new code into it. That would have a number of disadvantages. It would be time-consuming, as the OS would have to scan through the entire executable making its changes. Normally, part of the executable are only loaded as needed. Also, inserting is expensive as you have to move a load of stuff out of the way. Because of the undecidability of the halting problem, it's impossible to know where to insert your "Jump back to the OS" instructions. For example, if the code includes something like while (true) {i++;} , you definitely need to insert a hook inside that loop but the condition on the loop ( true , here) could be arbitrarily complicated so you can't decide how long it loops for. On the other hand, it would be very inefficient to insert hooks into every loop: for example, jumping back out to the OS during for (i=0; i<3; i++) {j=j+i;} would slow down the process a lot. And, for the same reason, you can't detect short loops to leave them alone. Because of the undecidability of the halting problem, it's impossible to know if the code injections changed the meaning of the program. For example, suppose you use function pointers in your C program. Injecting new code would move the locations of the functions so, when you called one through the pointer, you'd jump to the wrong place. If the programmer was sick enough to use computed jumps, those would fail, too. It would play merry hell with any anti-virus system, since it would change virus code, too and muck up all your checksums. You could get around the halting-problem problem by simulating the code and inserting hooks in any loop that executes more than a certain fixed number of times. However, that would require extremely expensive simulation of the whole program before it was allowed to execute. Actually, if you wanted to inject code, the compiler would be the natural place to do it. That way, you'd only have to do it once but it still wouldn't work for the second and third reasons given above. (And somebody could write a compiler that didn't play along.) 
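As a toy preview of the first mechanism described next (cooperative yielding), here is a Python sketch in which generators stand in for processes that voluntarily hand control back to a scheduler; this only illustrates the idea and is nothing like how a real kernel is implemented:

from collections import deque

def process(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                      # "yield to the OS": hand control back

def scheduler(procs):
    ready = deque(procs)           # round-robin queue of runnable "processes"
    while ready:
        p = ready.popleft()
        try:
            next(p)                # run the process until it yields again
            ready.append(p)
        except StopIteration:
            pass                   # the process finished; drop it

scheduler([process("A", 2), process("B", 3)])

A real OS cannot rely on every process being this polite, which is exactly why the other mechanisms below exist.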
There are three main ways that the OS regains control from processes. In co-operative (or non-preemptive) systems, there's a yield function that a process can call to give control back to the OS. Of course, if that's your only mechanism, you're reliant on the processes behaving nicely and a process that doesn't yield will hog the CPU until it terminates. To avoid that problem, a timer interrupt is used. CPUs allow the OS to register callbacks for all the different types of interrupts that the CPU implements. The OS uses this mechanism to register a callback for a timer interrupt that is fired periodically, which allows it to execute its own code. Every time a process tries to read from a file or interact with the hardware in any other way, it's asking the OS to do work for it. When the OS is asked to do something by a process, it can decide to put that process on hold and start running a different one. This might sound a bit Machiavellian but it's the right thing to do: disk I/O is slow so you may as well let process B run while process A is waiting for the spinning lumps of metal to move to the right place. Network I/O is even slower. Keyboard I/O is glacial because people are not gigahertz beings. | {
"source": [
"https://cs.stackexchange.com/questions/28200",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/19870/"
]
} |
29,210 | I'm still learning functional programming (with f#) and I recently started reading about computation expressions. I still don't fully understand the concept and one thing that keeps me unsure when reading all the articles regarding monads (most of them are written basing on Haskell) is the relation between computation expressions and monads. Having written all that, here's my question (two questions actually): Is every F# computation expression a monad? Can every monad be expressed with F# computation expression? I've read this post of Tomas Petricek and if I understand it well, it states that computation expressions are more than monads, but I'm not sure if I interpret this correctly. | First of all, computation expressions are a language feature, while monads are mathematical abstractions, so from this point of view, they are completely different things . But that would not be a very useful answer :-). Computation expressions are a language feature that gives you a syntax which can be used for programming with computations (or data types) that have the monadic structure, but they can be also used with other structures. You can read my F# computation expression zoo paper for more details, but computation expressions can be used with: Monads, but also additive monads (what Haskellers call MonadPlus or MonadOr ) Composed computations (what Haskellers call monad transformers) Computations that are monadic, but support other F# constructs like exception handling Monoids (and a couple of variations without monadic bind) Applicative functors (though this is only implemented in a research extension) So, computation expressions are certainly closely linked to monads, but they are not linked to them that closely. This is in contrast e.g. with Haskell's do notation, which is much more closely linked to monads (although even that can be used with computations that are not strictly mathematically monads). | {
"source": [
"https://cs.stackexchange.com/questions/29210",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/20928/"
]
} |
29,475 | To try to test whether an algorithm for some problem is correct, the usual starting point is to try running the algorithm by hand on a number of simple test cases -- try it on a few example problem instances, including a few simple "corner cases". This is a great heuristic: it's a great way to quickly weed out many incorrect attempts at an algorithm, and to gain understanding about why the algorithm doesn't work. However, when learning algorithms, some students are tempted to stop there: if their algorithm works correctly on a handful of examples, including all of the corner cases they can think to try, then they conclude that the algorithm must be correct. There's always a student who asks: "Why do I need to prove my algorithm correct, if I can just try it on a few test cases?" So, how do you fool the "try a bunch of test cases" heuristic? I'm looking for some good examples to show that this heuristic is not enough. In other words, I am looking for one or more examples of an algorithm that superficially looks like it might be correct, and that outputs the right answer on all of the small inputs that anyone is likely to come up with, but where the algorithm actually doesn't work. Maybe the algorithm just happens to work correctly on all small inputs and only fails for large inputs, or only fails for inputs with an unusual pattern. Specifically, I am looking for: An algorithm. The flaw has to be at the algorithmic level. I am not looking for implementation bugs. (For instance, at a bare minimum, the example should be language-agnostic, and the flaw should relate to algorithmic concerns rather than software engineering or implementation issues.) An algorithm that someone might plausibly come up with. The pseudocode should look at least plausibly correct (e.g., code that is obfuscated or obviously dubious is not a good example). Bonus points if it is an algorithm that some student actually came up with when trying to solve a homework or exam problem. An algorithm that would pass a reasonable manual test strategy with high probability. Someone who tries a few small test cases by hand should be unlikely to discover the flaw. For instance, "simulate QuickCheck by hand on a dozen small test cases" should be unlikely to reveal that the algorithm is incorrect. Preferably, a deterministic algorithm. I've seen many students think that "try some test cases by hand" is a reasonable way to check whether a deterministic algorithm is correct, but I suspect most students would not assume that trying a few test cases is a good way to verify probabilistic algorithms. For probabilistic algorithms, there's often no way to tell whether any particular output is correct; and you can't hand-crank enough examples to do any useful statistical test on the output distribution. So, I'd prefer to focus on deterministic algorithms, as they get more cleanly to the heart of student misconceptions. I'd like to teach the importance of proving your algorithm correct, and I'm hoping to use a few examples like this to help motivate proofs of correctness. I would prefer examples that are relatively simple and accessible to undergraduates; examples that require heavy machinery or a ton of mathematical/algorithmic background are less useful. Also, I don't want algorithms that are "unnatural"; while it might be easy to construct some weird artificial algorithm to fool the heuristic, if it looks highly unnatural or has an obvious backdoor constructed just to fool this heuristic, it probably won't be convincing to students. 
Any good examples? | A common error I think is to use greedy algorithms, which is not always the correct approach, but might work in most test cases. Example: Coin denominations, $d_1,\dots,d_k$ and a number $n$,
express $n$ as a sum of the $d_i$'s with as few coins as possible. A naive approach is to use the largest possible coin first,
and greedily produce such a sum. For instance, the coins with value $6$, $5$ and $1$
will give correct answers with greedy for all numbers between $1$ and $14$
except for the number $10 = 6+1+1+1+1 = 5+5$. | {
"source": [
"https://cs.stackexchange.com/questions/29475",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/755/"
]
} |
29,487 | Assuming l1 and l2 cache requests result in a miss, does the processor stall until main memory has been accessed? I heard about the idea of switching to another thread, if so what is used to wake up the stalled thread? | Memory latency is one of the fundamental problems studied in computer architecture research. Speculative Execution Speculative execution with out-of-order instruction issue is often able to find useful work to do to fill the latency during an L1 cache hit, but usually runs out of useful work after 10 or 20 cycles or so. There have been several attempts to increase the amount of work that can be done during a long-latency miss. One idea was to try to do value prediction (Lipasti, Wilkerson and Shen, (ASPLOS-VII):138-147, 1996). This idea was very fashionable in academic architecture research circles for a while but seems not to work in practice. A last-gasp attempt to save value prediction from the dustbin of history was runahead execution (Mutlu, Stark, Wilkerson, and Patt (HPCA-9):129, 2003). In runahead execution you recognize that your value predictions are going to be wrong, but speculatively execute anyway and then throw out all the work based on the prediction, on the theory that you'll at least start some prefetches for what would otherwise be L2 cache misses. It turns out that runahead wastes so much energy that it just isn't worth it. A final approach in this vein which may be getting some traction in industry involves creating enormously long reorder buffers. Instructions are executed speculatively based on branch prediction, but no value prediction is done. Instead all the instructions that are dependent on a long-latency load miss sit and wait in the reorder buffer. But since the reorder buffer is so large you can keep fetching instructions if the branch predictor is doing a decent job you will sometimes be able to find useful work much later in the instruction stream. An influential research paper in this area was Continual Flow Pipelines (Srinivasan, Rajwar, Akkary, Gandhi, and Upton (ASPLOS-XI):107-119, 2004). (Despite the fact that the authors are all from Intel, I believe the idea got more traction at AMD.) Multi-threading Using multiple threads for latency tolerance has a much longer history, with much greater success in industry. All the successful versions use hardware support for multithreading. The simplest (and most successful) version of this is what is often called FGMT ( fine grained multi-threading ) or interleaved multi-threading . Each hardware core supports multiple thread contexts (a context is essentially the register state, including registers like the instruction pointer and any implicit flags registers). In a fine-grained multi-threading processor each thread is processed in -order. The processor keeps track of which threads are stalled on a long-latency load miss and which are ready for their next instruction and it uses a simple FIFO scheduling strategy on each cycle to choose which ready thread to execute that cycle. An early example of this on a large scale was Burton Smith's HEP processors (Burton Smith went on to architect the Tera supercomputer, which was also a fine-grained multi-threading processor). But the idea goes much further back, into the 1960s, I think. FGMT is particularly effective on streaming workloads. All modern GPUs (graphics processing units) are multicore where each core is FGMT, and the concept is also widely used in other computing domains. 
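To make the "simple FIFO scheduling strategy" above a bit more concrete, here is a rough software model of an FGMT core picking a thread each cycle. It is purely illustrative (real hardware does this with a small selection circuit), and the miss latency, thread names and field names are invented for the sketch:

from collections import deque

MISS_LATENCY = 8   # invented: cycles a thread waits after a cache miss

class Thread:
    def __init__(self, name):
        self.name = name
        self.stall_until = 0       # cycle at which the thread is ready again

ready = deque(Thread(n) for n in "ABCD")   # one hardware context per thread

def step(cycle, missing):
    # Rotate through the contexts and issue from the first thread that is ready.
    for _ in range(len(ready)):
        t = ready.popleft()
        ready.append(t)
        if t.stall_until <= cycle:
            if t.name in missing:              # this access misses in the cache
                t.stall_until = cycle + MISS_LATENCY
            print(f"cycle {cycle}: issue from thread {t.name}")
            return
    print(f"cycle {cycle}: every thread is stalled, so the core idles")

for c in range(6):
    step(c, missing={"A"} if c == 0 else set())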
Sun's T1 was also multicore FGMT, and so is Intel's Xeon Phi (the processor that is often still called "MIC" and used to be called "Larrabee"). The idea of Simultaneous Multithreading (Tullsen, Eggers, and Levy, (ISCA-22):392-403, 1995) combines hardware multi-threading with speculative execution. The processor has multiple thread contexts, but each thread is executed speculatively and out-of-order. A more sophisticated scheduler can then use various heuristics to fetch from the thread that is most likely to have useful work ( Malik, Agarwal, Dhar, and Frank, (HPCA-14:50-61), 2008 ). A certain large semiconductor company started using the term hyperthreading for simultaneous multithreading, and that name seems to be the one most widely used these days. Low-level microarchitectural concerns I realized after rereading your comments that you are also interested in the signalling that goes on between processor and memory. Modern caches usually allow multiple misses to be simultaneously outstanding. This is called a Lockup-free cache (Kroft, (ISCA-8):81-87, 1981). (But the paper is hard to find online, and somewhat hard to read. Short answer: there's a lot of book-keeping but you just deal with it. The hardware book-keeping structure is called an MSHR (miss information/status holding register), which is the name Kroft gave it in his 1981 paper.) | {
"source": [
"https://cs.stackexchange.com/questions/29487",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/21220/"
]
} |
29,552 | I'd like to enumerate all undirected graphs of size $n$, but I only need one instance of each isomorphism class . In other words, I want to enumerate all non-isomorphic (undirected) graphs on $n$ vertices. How can I do this? More precisely, I want an algorithm that will generate a sequence of undirected graphs $G_1,G_2,\dots,G_k$, with the following property: for every undirected graph $G$ on $n$ vertices, there exists an index $i$ such that $G$ is isomorphic to $G_i$. I would like the algorithm to be as efficient as possible; in other words, the metric I care about is the running time to generate and iterate through this list of graphs. A secondary goal is that it would be nice if the algorithm is not too complex to implement. Notice that I need to have at least one graph from each isomorphism class, but it's OK if the algorithm produces more than one instance. In particular, it's OK if the output sequence includes two isomorphic graphs, if this helps make it easier to find such an algorithm or enables more efficient algorithms, as long as it covers all possible graphs. My application is as follows: I have a program that I want to test on all graphs of size $n$. I know that if two graphs are isomorphic, my program will behave the same on both (it will either be correct on both, or incorrect on both), so it suffices to enumerate at least one representative from each isomorphism class, and then test the program on those inputs. In my application, $n$ is fairly small. Some candidate algorithms I have considered: I could enumerate all possible adjacency matrices, i.e., all symmetric $n\times n$ 0-or-1 matrices that have all 0's on the diagonals. However, this requires enumerating $2^{n(n-1)/2}$ matrices. Many of those matrices will represent isomorphic graphs, so this seems like it is wasting a lot of effort. I could enumerate all possible adjacency matrices, and for each, test whether it is isomorphic to any of the graphs I've previously output; if it is not isomorphic to anything output before, output it. This would greatly shorten the output list, but it still requires at least $2^{n(n-1)/2}$ steps of computation (even if we assume the graph isomorphism check is super-fast), so it's not much better by my metric. It's possible to enumerate a subset of adjacency matrices. In particular, if $G$ is a graph on $n$ vertices $V=\{v_1,\dots,v_n\}$, without loss of generality I can assume that the vertices are arranged so that $\deg v_1 \le \deg v_2 \le \cdots \le \deg v_n$. In other words, every graph is isomorphic to one where the vertices are arranged in order of non-decreasing degree. So, it suffices to enumerate only the adjacency matrices that have this property. I don't know exactly how many such adjacency matrices there are, but it is many fewer than $2^{n(n-1)/2}$, and they can be enumerated with much fewer than $2^{n(n-1)/2}$ steps of computation. However, this still leaves a lot of redundancy: many isomorphism classes will still be covered many times, so I doubt this is optimal. Can we do better? If I understand correctly, there are approximately $2^{n(n-1)/2}/n!$ equivalence classes of non-isomorphic graphs. Can we find an algorithm whose running time is better than the above algorithms? How close can we get to the $\sim 2^{n(n-1)/2}/n!$ lower bound? I care primarily about tractability for small $n$ (say, $n=5$ or $n=8$ or so; small enough that one could plausibly run such an algorithm to completion), not so much about the asymptotics for large $n$. 
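For concreteness, here is a minimal brute-force sketch in the spirit of the second candidate above: enumerate every labelled graph and keep one representative per isomorphism class, here detected via a brute-force canonical form. It is meant only as an illustration and is feasible only for very small $n$:

from itertools import combinations, permutations

def canonical_form(n, edges):
    # Smallest edge list over all vertex relabellings; two graphs are
    # isomorphic exactly when they share this canonical form.
    best = None
    for perm in permutations(range(n)):
        key = tuple(sorted(tuple(sorted((perm[u], perm[v]))) for u, v in edges))
        if best is None or key < best:
            best = key
    return best

def nonisomorphic_graphs(n):
    pairs = list(combinations(range(n), 2))
    seen = set()
    reps = []
    for mask in range(2 ** len(pairs)):        # every labelled graph once
        edges = [pairs[i] for i in range(len(pairs)) if mask >> i & 1]
        c = canonical_form(n, edges)
        if c not in seen:
            seen.add(c)
            reps.append(edges)
    return reps

print(len(nonisomorphic_graphs(4)))   # 11 non-isomorphic graphs on 4 vertices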
Related: Constructing inequivalent binary matrices (though unfortunately that one does not seem to have received a valid answer). | Probably the easiest way to enumerate all non-isomorphic graphs for small vertex counts is to download them from Brendan McKay's collection . The enumeration algorithm is described in a paper of McKay's [1] and works by extending non-isomorphs of size n-1 in all possible ways and checking to see if the new vertex was canonical. It's implemented as geng in McKay's graph isomorphism checker nauty . [1]: B. D. McKay, Applications of a technique for labelled enumeration , Congressus Numerantium, 40 (1983) 207-221. | {
"source": [
"https://cs.stackexchange.com/questions/29552",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/755/"
]
} |
29,589 | Question: "Certain properties of a programming language may require that the only way to get the code written in it be executed is by interpretation. In other words, compilation to a native machine code of a traditional CPU is not possible. What are these properties?" Compilers: Principles and Practice by Parag H. Dave and Himanshu B. Dave (May 2, 2012) The book gives no clue about the answer. I tried to find the answer on Concepts of Programming Languages (SEBESTA), but to no avail. Web searches were of little avail too. Do you have any clue? | The distinction between interpreted and compiled code is probably a
fiction, as underlined by Raphael's comment : the claim seems to be trivially wrong without further assumptions: if there is
an interpreter, I can always bundle interpreter and code in one executable ... The fact is that code is always interpreted, by software, by hardware
or a combination of both, and the compiling process cannot tell which
it will be. What you perceive as compilation is a translation process from one
language $S$ (for source) to another language $T$ (for target). And, the
interpreter for $S$ is usually different from the interpreter for $T$. The compiled program is translated from one syntactic form $P_S$ to
another syntactic form $P_T$, such that, given the intended semantics
of the languages $S$ and $T$, $P_S$ and $P_T$ have the same
computational behavior, up to a few things that you are usually trying
to change, possibly to optimize, such as complexity or simple efficiency (time, space,
surface, energy consumption). I am trying not to talk of functional equivalence, as it would require precise definitions. Some compilers have been actually used simply to reduce the size of
the code, not to "improve" execution. This was the case for the language used in the Plato system (though they did not call it compiling). You may consider your code fully compiled if, after the compiling
process, you no longer need the interpreter for $S$. At least, that is
the only way I can read your question, as an engineering rather than
theoretical question (since, theoretically, I can always rebuild the
interpreter). One thing that may raise a problem, afaik, is meta-circularity. That
is when a program will manipulate syntactic structures in its own source
language $S$, creating program fragments that are then interpreted as if
they had been part of the original program. Since you can produce
arbitrary program fragments in the language $S$ as the result of arbitrary computation manipulating meaningless syntactic fragments, I would guess you can
make it nearly impossible (from an engineering point of view) to
compile the program into the language $T$, so that it now generates
fragments of $T$. Hence the interpreter for $S$ will be needed, or at
least the compiler from $S$ to $T$ for on-the-fly compiling of
generated fragments in $S$ (see also this document ). But I am not sure how this can be formalized properly (and do not have
time right now for it). And impossible is a big word for an issue that is not formalized. Further remarks Added after 36 hours. You may want to skip this very long sequel. The many comments to this question show two views of the problem: a
theoretical view that sees it as meaningless, and an engineering view
that is unfortunately not so easily formalized. There are many ways to look at interpretation and compilation, and I
will try to sketch a few. I will attempt to be as informal as I can manage. The Tombstone Diagram One of the early formalizations (early 1960s to late 1990) is the T or Tombstone diagrams. These diagrams presented, as composable graphical
elements, the implementation language of the interpreter or compiler,
the source language being interpreted or compiled, and the target
language in the case of compilers. More elaborate versions can add
attributes. These graphic representations can be seen as axioms,
inference rules, usable to mechanically derive processor generation
from a proof of their existence from the axioms, à la Curry-Howard
(though I am not sure that was done in the sixties :). Partial evaluation Another interesting view is the partial evaluation paradigm. I am
taking a simple view of programs as a kind of function implementation
that computes an answer given some input data. Then an interpreter
$I_S$ for the language $S$ is a program that takes a program $p_S$
written in $S$ and data $d$ for that program, and computes the result
according to the semantics of $S$. Partial evaluation is a technique
for specializing a program of two arguments $a_1$ and $a_2$, when only
one argument, say $a_1$, is known. The intent is to have a faster
evaluation when you finally get the second argument $a_2$. It is
especially useful if $a_2$ changes more often than $a_1$ as the cost
of partial evaluation with $a_1$ can be amortized on all the
computations where only $a_2$ is changing. This is a frequent situation in algorithm design (often the topic of
the first comment on SE-CS), when some more static part of the data is
pre-processed, so that the cost of the pre-processing can be amortized
on all applications of the algorithm with more variable parts of the
input data. This is also the very situation of interpreters, as the first argument
is the program to be executed, and is usually executed many times with
different data (or has subparts executed many times with different
data). Hence it becomes a natural idea to specialize an interpreter for
faster evaluation of a given program by partially evaluating it on
this program as first argument. This may be seen as a way of
compiling the program, and there has been significant
research work on compiling by partial evaluation of an interpreter on
its first (program) argument. The Smn theorem The nice point about the partial evaluation approach is that it does
take its roots in theory (though theory can be a liar), notably in Kleene's Smn theorem . I am trying here to give an intuitive
presentation of it, hoping it will not upset pure theoreticians. Given a Gödel numbering $\varphi$ of recursive functions, you can
view $\varphi$ as your hardware, so that, given the Gödel number $p$
(read: object code) of a program, $\varphi_p$ is the function defined
by $p$ (i.e. computed by the object code on your hardware). In its simplest form, the theorem is stated in wikipedia as follows
(up to a small change in notation): Given a Gödel numbering $\varphi$ of recursive functions, there is a primitive recursive function $\sigma$ of two arguments with the following property: for every Gödel number $q$ of a partial computable function $f$ with two arguments, the expressions $\varphi_{\sigma(q,x)}(y)$ and $f(x,y)$ are defined for the same combinations of natural numbers $x$ and $y$, and their values are equal for any such combination. In other words, the following extensional equality of functions holds for every $x$:
$\;\;\varphi_{\sigma(q,x)} \simeq \lambda y.\varphi_q(x,y).\,$ Now, taking $q$ as the interpreter $I_S$, $x$ as the source code of a
program $p_S$, and $y$ as the data $d$ for that program, we can write:
$\;\;\varphi_{\sigma(I_S,p_S)} \simeq \lambda d.\varphi_{I_S}(p_S,d).\,$ $\varphi_{I_S}$ may be seen as the execution of the interpreter $I_S$
on the hardware, i.e., as a black-box ready to interpret programs
written in language $S$. The function $\sigma$ may be seen as a function that specializes the
interpreter $I_S$ for the program $P_S$, as in partial evaluation.
Thus the Gödel number $\sigma(I_S,p_S)$ may be seen as object code that is
the compiled version of program $p_S$. So the function $\;C_S = \lambda q_S.\sigma(I_S,q_S)$ may be seen as
a function that takes as argument the source code of a program $q_S$
written in language $S$, and returns the object code version for that
program. So $C_S$ is what is usually called a compiler. Some conclusions However, as I said: "theory can be a liar", or actually seems to be one. The problem is that we
know nothing of the function $\sigma$. There are actually many such
functions, and my guess is that the proof of the theorem may use a
very simple definition for it, which might be no better, from an
engineering point of view, than the solution proposed by Raphael: to
simply bundle the source code $q_S$ with the interpreter $I_S$. This
can always be done, so that we can say: compiling is always
possible. Formalizing a more restrictive notion of what is a compiler would
require a more subtle theoretical approach. I do not know what may
have been done in that direction. The very real work done on partial
evaluation is more realistic from an engineering point of view. And
there are of course other techniques for writing compilers, including
extraction of programs from the proof of their specification, as
developed in the context of type-theory, based on the Curry-Howard
isomorphism (but I am getting outside my domain of competence). My purpose here has been to show that Raphael's remark is not "crazy",
but a sane reminder that things are not obvious, and not even
simple. Saying that something is impossible is a strong statement
that does require precise definitions and a proof, if only to have a
precise understanding of how and why it is impossible . But building
a proper formalization to express such a proof may be quite difficult. This said, even if a specific feature is not compilable, in the sense
understood by engineers, standard compiling techniques can always be
applied to parts of the programs that do not use such a feature, as is
remarked by Gilles' answer. To follow on Gilles' key remarks that, depending on the language, some
things may be done at compile-time, while others have to be done at
run-time, thus requiring specific code, we can see that the concept of
compilation is actually ill-defined, and is probably not definable in
any satisfactory way. Compilation is only an optimization process, as
I tried to show in the partial evaluation section, when I compared
it with static data preprocessing in some algorithms. As a complex optimization process, the concept of compilation actually
belongs to a continuum. Depending on the characteristic of the
language, or of the program, some information may be available
statically and allow for better optimization. Others things have to be
postponed to run-time. When things get really bad, everything has to
be done at run-time at least for some parts of the program, and
bundling source-code with the interpreter is all you can do. So this
bundling is just the low end of this compiling continuum. Much of the research on compilers is about finding ways to do statically what used to be done dynamically. Compile-time garbage collection seems a good example. Note that saying that the compilation process should produce machine
code is no help. That is precisely what the bundling can do as the
interpreter is machine code (well, things can get a bit more complex
with cross-compilation). | {
"source": [
"https://cs.stackexchange.com/questions/29589",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/13305/"
]
} |
29,755 | Searching an array of $N$ elements using binary search takes, in the worst case $\log_2 N$ iterations because, at each step we trim half of our search space.
If, instead, we used 'ternary search', we'd cut away two-thirds of our search space at each iteration, so the worst case should take $\log_3 N < \log_2 N$ iterations... It seems that ternary search is faster, so why do we use binary search? | If you apply binary search, you have $$\log_2(n)+O(1)$$ many comparisons. If you apply ternary search, you have $$ 2 \cdot \log_3(n) + O(1)$$ many comparisons, as in each step, you need to perform 2 comparisons to cut the search space into three parts. Now if you do the math, you can observe that:
$$ 2 \cdot \log_3(n) + O(1) = 2 \cdot \frac{\log(2)}{\log(3)} \log_2(n)+ O(1) $$ Since we know that $2 \cdot \frac{\log(2)}{\log(3)} > 1$, we actually get more comparisons with ternary search. By the way: $n$-ary search may make a lot of sense in case comparisons are quite costly and can be parallelized, as then, parallel computers can be applied. Note that the argument can be generalized to $n$-ary search quite easily. You just need to show that the function $f(k) = (k-1) \cdot \frac{\log(2)}{\log(k)}$ is strictly monotone increasing for integer values of $k$. | {
"source": [
"https://cs.stackexchange.com/questions/29755",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/21084/"
]
} |
29,809 | I want to compress file size through making my own numbering system which is 80-based, and I really want to know whether this is even possible? I learnt that hexadecimal uses symbols like A, B, C, D, E, F to represent 10,11,12,13,14,15 -- and that's what I want to do with my own numbering system, but on a bigger scale. Please correct me if I'm missing something. Is it possible? | While you will need fewer 80-based numbers than 2-based numbers (bits) to encode the same file, the only way to store these 80-based numbers on a computer is to encode them as bits. So you do not gain anything. In fact you actually lose space, since 80 is not a power of 2: You will need 7 bits for each 80-based number, but in these 7 bits you could instead encode 128 different states, if you used them directly. | {
"source": [
"https://cs.stackexchange.com/questions/29809",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/21616/"
]
} |
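A back-of-the-envelope check of the point made in the preceding answer, as a small sketch (the variable names are mine): one base-80 symbol carries about log2(80) bits of information but needs 7 whole bits of storage, which could otherwise distinguish 128 states.

import math

base = 80
bits_of_information = math.log2(base)          # about 6.32 bits per symbol
bits_needed = math.ceil(bits_of_information)   # 7 bits in a fixed-width encoding

print(f"One base-{base} symbol carries about {bits_of_information:.2f} bits of information")
print(f"but occupies {bits_needed} bits, which could instead distinguish {2**bits_needed} states.")
# So a naive fixed-width base-80 encoding wastes roughly 0.68 bits per symbol
# rather than saving anything.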
29,830 | In some places in the world, people don't usually have access to (and hence little knowledge of) computers, and even if they have, hard- and software are outdated and usage plagued by power outages and such. Access to (good) books also tends to be lacking. How can I teach computer science under such circumstances? I'm worried that without being able to do experiments and apply what they learn, they won't learn (well) at all even though they are incredibly motivated and devote most of their time to this hobby. Is it possible to teach CS only theoretically? | Asking how you can study computer science without computers is a bit like asking how you can study cosmology without telescopes. Sure, it's nice to be able to look at the things you're studying and it's often very helpful to be able to play around with things. But there's a whole lot you can do without access to a computer: in extremis , you could probably do almost all of a undergrad course with no computers. In practical terms, access to computers helps reinforce a lot of what you learn in a computer science course. Programming courses are, obviously, much more natural with access to a computer. On the other hand, being forced to write code on paper does encourage people to think about their code and make sure it really works, rather than just running it through a compiler again and again until it compiles and then running trivial test cases again and again until the obvious bugs go away. Topics that would be most natural without computers would be the more mathematical ones. All the background mathematics, such as combinatorics and probability. Computability, formal languages, logic, complexity theory, algorithm design and analysis, information and coding theory. Anything to do with quantum computation! | {
"source": [
"https://cs.stackexchange.com/questions/29830",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/21646/"
]
} |
29,841 | I am given the following decision problem: A program $ \Pi $ takes as input a pair of strings and outputs either $true$ or $false$.
It is guaranteed that $\Pi$ terminates on any input. Does there exist a pair ($I_1,I_2$) of strings such that $\Pi$ terminates on ($I_1,I_2$) with output value $true$? It is clear that this problem is semi-decidable and, to prove this, I am asked to give a semi-decision procedure. However, how do I enumerate all possible pairs of strings? Or how do I enumerate all possible (single) strings in general? Of course, such a program may never terminate, but that is no problem because I am only asking for semi-decidability. EDIT2: Solution (Java) | {
"source": [
"https://cs.stackexchange.com/questions/29841",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/21655/"
]
} |
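One standard way to approach the enumeration the question above asks about is to list all strings in shortlex order and dovetail over pairs; because the program is guaranteed to terminate on every input, it is enough to test each pair in turn. The sketch below is only an illustration, not an answer from the original thread; the names strings, pairs, semi_decide and the two-letter alphabet are assumptions made for the example.

from itertools import count, product

def strings(alphabet="ab"):
    # Yield every finite string over `alphabet` in shortlex order.
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

def pairs(alphabet="ab"):
    # Dovetail over all ordered pairs: whenever a new string appears,
    # pair it (in both orders) with every string seen so far.
    seen = []
    for s in strings(alphabet):
        seen.append(s)
        for t in seen:
            yield (s, t)
            if t != s:
                yield (t, s)

def semi_decide(pi, alphabet="ab"):
    # Halts (returning a witness) iff some pair makes pi(...) True;
    # loops forever otherwise, which is all a semi-decision procedure promises.
    for i1, i2 in pairs(alphabet):
        if pi(i1, i2):
            return (i1, i2)

# Example stand-in for Pi: True iff both strings have the same nonzero length.
print(semi_decide(lambda x, y: len(x) == len(y) and len(x) > 0))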
29,864 | I think I know what a "hard" real-time operating system is. It is an operating system with a scheduler that provides a contract with the application programmer. An application provides a deadline with each resource allocation request. If the deadline requests are feasible, the scheduler guarantees that each resource will be allocated to the requesting application before the deadline. The guarantee is sufficient to enable an application programmer to reason about the maximum latencies and minimum throughputs of specific requests. All the definitions I find of "soft" real-time systems seem vacuous to me. Wikipedia says the usefulness of a result degrades after its deadline, thereby degrading the system's quality of service. Uhhhh. Okay. By that criterion Windows 95 was a soft real time system and so was 3BSD and so is Linux. Wikipedia is not a great source, but the next couple of Google hits aren't much better. For example http://users.ece.cmu.edu/~koopman/des_s99/real_time/ says In a soft real-time system, a degraded operation in a rarely occurring peak load can be tolerated. That's not a contract, that's a fancy way of saying nothing. What are examples of real soft real-time guarantees/contracts offered by real operating systems? I'm looking for answers of the form: In (OS-name) if programmer does (what-programmer-needs-to-do) then the operating system guarantees (what-the-system-guarantees). | You've got it right, and Wikipedia is as informative as can be — soft real-time is not a formal characterization, it's a value judgement. Another way to say “soft real-time” is “I wish it was real-time”, or perhaps more accurately “it should be real-time but that's too hard”. If you really want to word it in the form of a guarantee, it's a guarantee of best effort rather than a guarantee of specific performance. Or, to quote the Erlang FAQ (Erlang is a programming language originally designed for use in telephony): What does soft realtime mean? Cynics will say "basically nothing". (…) Many telecomms systems have less strict requirements [than hard realtime], for instance they might require a statistical guarantee along the lines of "a database lookup takes less than 20ms in 97% of cases". Soft realtime systems, such as Erlang, let you make that sort of guarantee. And this does provide a useful definition. Soft real-time indicates a design which is optimized towards each individual task taking no more than a certain amount of time, rather than towards optimizing the total time spent to perform all tasks. For example, a soft real-time system would aim to complete 99.9% of the requests in 10ms and process 100 requests per second, where a non-real-time might aim to process 200 requests per second but allow the occasional request to block for 50ms or more. A hard real-time would guarantee one request every 15ms no matter what. Soft real-time is used for applications where a missed deadline means a degradation of service, but is not performance-critical. Multimedia and telephony are some typical use cases. Each audio or video frame must be rendered in time, or else it has to be skipped; but skipping a frame is not the end of the world. The designers of Erlang had similar objectives on reliability in other matters: they observed that it was better for a telephone exchange to very occasionally drop a call, but to be absolutely sure that the exchange as a whole would keep working come what may, than to ever risk catastrophic failure in trying to maintain connections at all cost.
In contrast, something like controlling a motor requires that the software never misses a deadline. This has costs: the overall performance is typically slower, and only relatively simple behaviors can be achieved. On the other side of the spectrum, a number crunching application typically cares only about overall performance — what matters is how fast the 1000x1000 matrices are multiplied, not how fast each column is calculated. | {
"source": [
"https://cs.stackexchange.com/questions/29864",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/7459/"
]
} |
29,867 | What are some of the best practices when implementing system call functionality for handling/avoiding "Time of check to time of use" (TOCTTOU) security issues? | {
"source": [
"https://cs.stackexchange.com/questions/29867",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/-1/"
]
} |
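For the TOCTTOU question above, one commonly cited mitigation is to avoid checking a path name and then using it in a separate call; instead, open the object once and validate the file descriptor you will actually use. The sketch below illustrates only that single idea under POSIX (it is not a complete answer to the question, and the helper name is mine).

import os
import stat

def open_regular_file_safely(path):
    # O_NOFOLLOW makes the open fail if the final path component is a symlink,
    # closing one classic race (an attacker swapping the file for a link).
    fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
    try:
        st = os.fstat(fd)              # fstat on the fd, not stat on the path
        if not stat.S_ISREG(st.st_mode):
            raise OSError(f"{path} is not a regular file")
        return os.fdopen(fd, "rb")     # all later reads go through this same fd
    except Exception:
        os.close(fd)
        raise

# Racy anti-pattern for contrast: the check and the use are separate system
# calls on the same name, which an attacker can re-bind between them:
#   if os.access(path, os.R_OK):
#       f = open(path, "rb")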
29,880 | On Linux, /dev/random and /dev/urandom are the blocking and non-blocking (respectively) sources of pseudo-random bytes. They can be read as normal files: $ hexdump /dev/random
0000000 28eb d9e7 44bb 1ac9 d06f b943 f904 8ffa
0000010 5652 1f08 ccb8 9ee2 d85c 7c6b ddb2 bcbe
0000020 f841 bd90 9e7c 5be2 eecc e395 5971 ab7f
0000030 864f d402 74dd 1aa8 925d 8a80 de75 a0e3
0000040 cb64 4422 02f7 0c50 6174 f725 0653 2444
... Many other unix variants provide /dev/random and /dev/urandom as well, without the blocking/non-blocking distinction. The Windows equivalent is the CryptGenRandom() function . How does the operating system generate pseudo-randomness? | The title and the body of your question ask two different questions: how the OS creates entropy (this should really be obtains entropy), and how it generates pseudo-randomness from this entropy. I'll start by explaining the difference. Where does randomness come from? Random number generators (RNG) come in two types: Pseudo-random number generators (PRNG), also called deterministic random bit generators (DRBG) or combinations thereof, are deterministic algorithms which maintain a fixed-size variable internal state and compute their output from that state. Hardware random number generator (HRNG), also called “true” random number generators, are based on physical phenomena. “True” is a bit of a misnomer, because there are no sources of information that are known to be truly random , only sources of information that are not known to be predictable. Some applications, such as simulations of physical phenomena, can be content with random numbers that pass statistical tests. Other applications, such as the generation of cryptographic keys, require a stronger property: unpredictability . Unpredictability is a security property, not (only) a statistical property: it means that an adversary cannot guess the output of the random number generator. (More precisely, you can measure the quality of the RNG by measuring the probability for an adversary to guess each bit of RNG output. If the probability is measurably different from 1/2, the RNG is bad.) There are physical phenomena that produce random data with good statistical properties — for example, radioactive decay, or some astronomical observations of background noise, or stock market fluctuations. Such physical measurements need conditioning ( whitening ), to turn biased probability distributions into a uniform probability distribution. A physical measurement that is known to everyone isn't good for cryptography: stock market fluctuations might be good for geohashing , but you can't use them to generate secret keys . Cryptography requires secrecy : an adversary must not be able to find out the data that went into conditioning. There are cryptographically secure pseudo-random number generators (CSPRNG): PRNG algorithms whose output is suitable for use in cryptographic applications, in addition to having good statistical properties . One of the properties that make a CSPRNG cryptographically secure is that its output does not allow an adversary to reconstruct the internal state (knowing all the bits but one produced by a CSPRNG does not help to find the missing bit). I won't go into how to make a CSPRNG because that's the easy bit — you can follow recipes given by professional cryptographers (use a standard algorithm, such as Hash_DRBG, HMAC_DRBG or CTR_DRBG from NIST SP 800-90A ) or the ANSI X9.31 PRNG . The CSPRNG requires two properties of its state in order to be secure: The state must be kept secret from the start and at all times (though exposure of the state will not reveal past outputs). The state must be linear: the RNG must never be started twice from the same state. Architecture of a random number generator In practice, almost all good random number generators combine a CSPRNG with one or more entropy sources . To put it succintly, entropy is a measure of the unpredictability of a source of data. 
Basing a random number generator purely on a hardware RNG is difficult: The raw physical data is likely to need conditioning anyway, to turn probabilistic data into a uniform distribution. The output from the source of randomness must be kept secret. Entropy sources are often slow compared with the demand. Thus the RNG in an operating system almost always works like this : Accumulate sufficient entropy to build an unpredictable internal state. Run a CSPRNG , using the accumulated entropy as the seed, i.e. as the initial value of the internal state. Optionally, periodically mix additional entropy into the internal state. (This is not strictly necessary, since entropy is not “consumed” at any measurable rate . It helps against certain threats that leak the RNG state without compromising the whole system.) A random number generation service is part of the job of an operating system, because entropy gathering requires access to hardware, and entropy sources constitute a shared resource: the operating system must assemble them and derive output from them that will suit applications. Pseudo-random conditioning of the entropy sources is required in the operating system; it might as well be cryptographically secure, because this isn't fundamentally harder (and it is required on operating systems where applications do not trust each other; on fully cooperative systems, each application would have to run its own CSPRNG internally if the operating system didn't provide one anyway). Most systems with persistent storage will load an RNG seed from disk (I'll use “disk” as an abbreviation for any kind of persistent storage) when they boot, and overwrite the seed with some fresh pseudo-random data generated from that seed, or if available with random data generated from that seed plus another entropy source. This way, even if entropy is not available after a reboot, the entropy from a previous session is reused. Some care must be taken about the saved state. Remember how I said the state must be linear? If you boot twice from the same disk state, you'll get the same RNG outputs. If this is a possibility in your environment, you need another source of entropy. Take care when restoring from backups, or when cloning a virtual machine . One technique for cloning is to mix the stored entropy with some environmental data that is predictable but unique (e.g. time and MAC address); beware that if the environmental data is predictable, anyone in possession of the stored VM state can reconstruct the seed of a forked VM instance. Entropy sources Finding (and correctly using) entropy sources is the most challenging part of random number generation in an operating system. The available entropy sources will necessarily be dependent on the hardware and on which environment the hardware runs in. If you're lucky, your hardware provides a peripheral which can be used as an entropy source: a hardware random number generator , either dedicated or side-purposed. For example: thermal noise avalanche noise from an avalanche noise various types of (combinations of) oscillators , such as ring oscillators radioactive decay various quantum phenomena that I couldn't explain acoustic noise camera noise NIST SP800-90B provides design guidelines for hardware RNG. Evaluating a hardware RNG is difficult . 
Hardware RNG are typically delicate beasts, which need to be used with care: many types require some time after boot and some time between reads in order to destabilize, they are often sensitive to environmental conditions such as the temperature, etc. Intel x86-64 processors based on the Ivy Bridge architecture provide the RdRand instruction which provides the output from a CSPRNG seeded by thermal noise . Most smartphone processors include a hardware entropy source, though Android doesn't always use it. Systems that lack a strong entropy source have to make do with combining weak entropy sources and hoping ( ensuring would be too strong a word) that they will suffice. Random mouse movements are popular for client machines, and you might have seen the security show by certain cryptography programs that ask you to move the mouse (even though on any 21st century PC operating system the OS will have accumulated entropy without the application needing to bother). If you want to look at an example, you can look at Linux, though beware that it isn't perfect . In particular, /dev/random blocks too often (because it blocks until enough entropy is available, with an overly conservative notion of entropy), whereas /dev/urandom is almost always good except on first boot but gives no indication when it doesn't have enough entropy. Linux has drivers for many HRNG devices , and in accumulates entropy from various devices (including input devices ) and disk timings. If you have (confidential) persistent storage, you can use it to save entropy from one boot to the next, as indicated above. The first boot is a delicate time: the system may be in a fairly predictable state at that point, especially on mass-produced devices that essentially operate out of the factory in the same way. Some embedded devices that have persistent storage are provisioned with an initial seed in the factory (produced by a RNG running on a computer in the factory). In virtualized server environments, initial entropy can be provisioned when instantiating a virtual machine from the host or from an entropy server. Badly-seeded devices are a widespread problem in practice — a study of public RSA keys found that many servers and devices had keys that were generated with a poor RNG, most likely a good PRNG that was insufficiently seeded. As an OS designer, you cannot solve this problem on your own: it is the job of the entity in control of the deployment chain to ensure that the RNG will be properly seeded at first boot. Your task as an OS designer is to provide a proper RNG, including an interface to provide that first seed, and to ensure proper error signaling if the RNG is used before it is properly seeded. | {
"source": [
"https://cs.stackexchange.com/questions/29880",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/21431/"
]
} |
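Two small illustrations of the architecture described in the preceding answer, written as a sketch rather than production code. The first shows what applications normally do: ask the operating system's pool directly (os.urandom, secrets). The second is a deliberately toy hash-based generator, only to make the "seed once from entropy, then stretch deterministically" idea concrete; the class name is mine, and it should not be used in place of a vetted DRBG such as those in NIST SP 800-90A.

import os
import hashlib
import secrets

# 1. What applications should do: let the OS hand out conditioned randomness.
key = os.urandom(32)            # 32 bytes from the kernel's CSPRNG
token = secrets.token_hex(16)   # convenience wrapper over the same source

# 2. Toy sketch of the "seed then stretch" idea.
class ToyHashPRG:
    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(seed).digest()   # secret internal state

    def next_block(self) -> bytes:
        out = hashlib.sha256(self.state + b"out").digest()
        self.state = hashlib.sha256(self.state + b"next").digest()
        return out

prg = ToyHashPRG(os.urandom(32))   # seeded once from OS entropy
stream = b"".join(prg.next_block() for _ in range(4))
print(len(key), len(token), len(stream))   # 32 32 128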
30,457 | I've come across that question : "Give examples of two regular languages which their union doesn't output a regular language. " This is pretty shocking to me because I believe that regular languages are closed under union. Which means to me that if I take two regular languages and union them, I must get a regular language. And I think I understand the proof of that : In my words, if the languages are regular, then there exist automatas that recognize them. If we take all the states (union), and we add a new state for the entry point, and we modify the transition function for the new state with epsilon, we are ok. We also show that there exist a path from every state etc. Can you tell me where I'm wrong, or maybe another way to approach the question. Source of the question, exercise 4, in french. Also, the same question is asked with the intersection. | There's a significant difference between the question as you pose it and the question posed in the exercise. The question asks for an example of a set of regular languages $L_{1}, L_{2}, \ldots$ such that their union
$$
L = \bigcup_{i=1}^{\infty}L_{i}
$$
is not regular. Note the range of the union: $1$ to $\infty$. Regular languages are closed under finite union, and the proof runs along the lines that you sketch in the question; however, this falls apart under infinite union. We can show this by taking $L_{i} = \{0^{i}1^{i}\}$ for each $i$ (with $\Sigma = \{0,1\}$). The infinite union of these languages of course gives the canonical non-regular (context-free) language $L = \{0^{i}1^{i}\mid i \in \mathbb{N}\}$. As an aside, we can see easily where the normal proof fails. Imagine the same construction where we add a new start state and $\varepsilon$-transitions to the old start states. If we do this with an infinite set of automata, we have built an automaton with an infinite number of states, obviously contradicting the definition of a finite automaton. Lastly, I'm guessing the confusion may arise from the phrasing of the original question, which starts "Donner deux exemples des suites de langages...", which is (roughly, my French is a bit rusty, but externally verified!) "Give two examples of sequences of languages...", rather than "Give two examples of languages...". An incautious reading may mistake the second for the first though. | {
"source": [
"https://cs.stackexchange.com/questions/30457",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/21275/"
]
} |
30,466 | I am trying to create a DFA for L = {w : every run of a's has length either two or three}. This is my attempt at the solution... I feel like I am missing something? | {
"source": [
"https://cs.stackexchange.com/questions/30466",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/22254/"
]
} |
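For the language in the question above, here is a hedged way to sanity-check a candidate construction, assuming the alphabet is {a, b} and that a "run" means a maximal block of consecutive a's. The regular expression is my own reformulation of that condition, not something from the original thread.

import re

IN_LANGUAGE = re.compile(r"b*((aa|aaa)b+)*(aa|aaa)?")

def in_language(w: str) -> bool:
    return IN_LANGUAGE.fullmatch(w) is not None

tests = {
    "": True, "b": True, "aa": True, "aaa": True, "aab": True,
    "baaab": True, "aabbaaa": True,
    "a": False, "ab": False, "aaaa": False, "aabab": False,
}
for w, expected in tests.items():
    assert in_language(w) == expected, (w, expected)
print("all checks passed")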
30,634 | I have finished developing an app for Android and intend to publish it with GPL -- I want it to be open source.
However, the nature of the application (a game) is that it asks riddles and has the answers coded into the string resource. I can't publish the answers!
I was told to look into storing passwords securely -- but I haven't found anything appropriate. Is it possible to publish my source code with a string array hidden, encrypted, or otherwise obscured? Maybe by reading the answers from an online database? Update Yuval Filmus's solution below worked. When I first read it I was still not sure how to do it. I found some solutions, for the second option: storing the hashed solution in the source and calculating the hash everytime the user guesses. To do this in javascript there is the crypto-js library at http://code.google.com/p/crypto-js/ .
For Android, use the MessageDigest function. There is an application (on fdroid/github) called HashPass which does this. | You have at least two options, depending on what problem you want to solve. If you want innocent readers of your code to not get the answers inadvertently, or you at least want to make it a bit difficult so that users are not tempted, you can encrypt the solutions and store the key as part of your code, perhaps a result of some computation (to make it even more difficult). If you want to prevent users from retrieving the answer, you can use a one-way function, or in computer jargon, a hash function. Store a hash of the answer, and then you can test whether the answer is correct without it being possible to deduce the answer at all without finding it first. This has the disadvantage that it is harder to check for an answer that is close to the correct answer, though there are some solutions even to this problem. | {
"source": [
"https://cs.stackexchange.com/questions/30634",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/22396/"
]
} |
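A minimal sketch of the second option from the preceding answer (store only a hash of each answer and hash every guess), using Python's hashlib as a stand-in for Android's MessageDigest or crypto-js. The normalization step and the per-riddle salt are my own illustrative choices, not part of the original answer.

import hashlib

def answer_digest(answer: str, salt: str) -> str:
    normalized = " ".join(answer.strip().lower().split())
    return hashlib.sha256((salt + normalized).encode("utf-8")).hexdigest()

# What ships in the public source: only salts and digests, never the answers.
RIDDLES = {
    "riddle-1": ("salt-1", answer_digest("a map", "salt-1")),  # digest built offline in reality
}

def check_guess(riddle_id: str, guess: str) -> bool:
    salt, digest = RIDDLES[riddle_id]
    return answer_digest(guess, salt) == digest

print(check_guess("riddle-1", "A Map"))      # True
print(check_guess("riddle-1", "a compass"))  # False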
30,639 | I've looked around the net for an answer to this question and it seems as if everybody implicitly knows the answer except me. Presumably this is because the only people who care are those who have had tertiary education on the subject. I, on the other hand, have been thrown in the deep end for a high school assignment. My question is, how exactly are programming languages related to formal languages? Everywhere I read, something along the lines of "formal languages are used for defining the grammar of programming languages" is said. Now from what I've been able to gather, a formal language is a series of production rules that apply to a specific set of symbols (the language's alphabet). These production rules define a set of transformations, such as: b -> a aaa->c This can be applied such that: abab->aaaa aaaa-> ca Just as a side note, if we define that our formal language's alphabet as {a,b,c}, then a and b are non terminals and c is terminal as it can not be transformed (please correct me if I'm wrong about that). So given all that, how on earth does this apply to programming languages? Often it is also stated that regex is used to parse a language in it's text form to ensure the grammar is correct. This makes sense. Then it is stated that regex are defined by formal languages. Regex return true or false (in my experience at least) depending on if the finite state automata that represents the regex reaches the goal point. As far as I can see, that has nothing to do with transformations*. For the compiling of the program itself, I suppose a formal language would be able to transform code into consecutively lower level code, eventually reaching assembly via a complex set of rules, which the hardware could then understand. So that's things from my confused point of view. There's probably a lot of things fundamentally wrong with what I have said, and that is why I'm asking for help. *Unless you consider something like (a|b)*b*c->true to be a production rule, in which case the rule requires a finite state automata (ie: regex). This makes no sense as we just said that | Whoever told you that regular expressions are used to parse code was spreading disinformation. Classically (I don't know to what extent this is true in modern compilers), the parsing of code – conversion of code from text to a syntax tree – is composed of two stages: Lexical analysis: Processes the raw text into chunks such as keywords , numerical constants , strings , identifiers and so on. This is classically implemented using some sort of finite state machine, similar in spirit to a deterministic finite automaton (DFA). Parser: Run after lexical analysis, and converts the raw text into an annotated syntax tree. The grammar of programming languages is (to first approximation) context-free (actually, one needs an even stricter subset), and this allows certain efficient algorithms to parse the lexified code into a syntax tree. This is similar to the problem of recognizing whether a given string belongs to some context-free grammar, the difference being that we also want the proof in the form of a syntax tree. Grammars for programming languages are written as context-free grammars, and this representation is used by parser generators to construct fast parsers for them. A simple example would have some non-terminal STATEMENT and then rules of the form STATEMENT$\to$IF-STATEMENT, where IF-STATEMENT$\to$if CONDITION then BLOCK endif, or the like (where BLOCK$\to$STATEMENT|BLOCK;STATEMENT, for example). 
Usually these grammars are specified in Backus-Naur form (BNF). The actual specifications of programming languages are not context-free. For example, a variable cannot appear if it hadn't been declared in many languages, and languages with strict typing might not allow you to assign an integer to a string variable. The parser's job is only to convert the raw code into a form which is easier to process. I should mention that there are other approaches such as recursive descent parsing which doesn't actually generate a parse tree, but processes your code as it parses it. Although it doesn't bother to generate the tree, in all other respects it operates at the same level as described above. | {
"source": [
"https://cs.stackexchange.com/questions/30639",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/22397/"
]
} |
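A tiny sketch of the lexical-analysis stage described in the preceding answer: regular patterns chop the raw text into tokens, and a context-free parser would then consume the token stream. The token set and the sample input are invented purely for illustration.

import re

TOKEN_SPEC = [
    ("NUMBER",  r"\d+"),
    ("KEYWORD", r"\b(?:if|then|endif)\b"),
    ("IDENT",   r"[A-Za-z_]\w*"),
    ("OP",      r"[<>=+\-*/]"),
    ("SKIP",    r"\s+"),
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(code: str):
    # Yield (token kind, lexeme) pairs, dropping whitespace.
    for m in TOKEN_RE.finditer(code):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(tokenize("if x1 > 10 then y = y + 1 endif")))
# [('KEYWORD', 'if'), ('IDENT', 'x1'), ('OP', '>'), ('NUMBER', '10'),
#  ('KEYWORD', 'then'), ('IDENT', 'y'), ('OP', '='), ...]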
30,778 | A* search finds optimal solution to problems as long as the heuristic is admissible which means it never overestimates the cost of the path to the from any given node (and consistent but let us focus on being admissible at the moment). But why does it always find the optimal solution if the heuristic underestimates? For example, if it underestimates a non optimal path by more than what it underestimates the optimal one, isn't that equivalent to over estimating? | A* maintains a priority queue of options that it's considering, ordered by how good they might be. It keeps searching until it finds a route to the goal that's so good that none of the other options could possibly make it better. How good an alternative might be is based on the heuristic and on actual costs found in the search so far. If the heuristic underestimates, the other options will look better than they really are. A* thinks those other options might improve the route, so it checks them out. If the heuristic only underestimates by a little bit, maybe some of those routes will turn out to be useful. On the other hand, if the heuristic overestimates, A* can think that the alternatives to the route already has are all terrible, so it won't bother to look at them. But the heuristic overestimates so they might be much better than they seem. For example, suppose you're trying to drive from Chicago to New York and your heuristic is what your friends think about geography. If your first friend says, "Hey, Boston is close to New York" (underestimating), then you'll waste time looking at routes via Boston. Before long, you'll realise that any sensible route from Chicago to Boston already gets fairly close to New York before reaching Boston and that actually going via Boston just adds more miles. So you'll stop considering routes via Boston and you'll move on to find the optimal route. Your underestimating friend cost you a bit of planning time but, in the end, you found the right route. Suppose that another friend says, "Indiana is a million miles from New York!" Nowhere else on earth is more than 13,000 miles from New York so, if you take your friend's advice literally, you won't even consider any route through Indiana. This makes you drive for nearly twice as long and cover 50% more distance . Oops. | {
"source": [
"https://cs.stackexchange.com/questions/30778",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/22558/"
]
} |
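A compact sketch of the mechanism the preceding answer describes: A* keeps a priority queue of partial routes ordered by cost-so-far plus the heuristic estimate, and stops when the goal is popped. The tiny road graph and the straight-line distances below are made-up illustrative numbers, not real data.

import heapq

def a_star(graph, h, start, goal):
    # graph: {node: [(neighbor, edge_cost), ...]}, h: admissible heuristic.
    frontier = [(h(start), 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_cost.get(nbr, float("inf")):
                best_cost[nbr] = new_g
                heapq.heappush(frontier, (new_g + h(nbr), new_g, nbr, path + [nbr]))
    return None

graph = {
    "Chicago": [("Indianapolis", 180), ("Boston", 980)],
    "Indianapolis": [("New York", 710)],
    "Boston": [("New York", 215)],
}
straight_line = {"Chicago": 790, "Indianapolis": 650, "Boston": 190, "New York": 0}
print(a_star(graph, straight_line.get, "Chicago", "New York"))
# -> (890, ['Chicago', 'Indianapolis', 'New York'])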
32,149 | While reading a book, I came across a paragraph given below: In order to synchronize all of a computer’s operations, a system clock—a small quartz
crystal located on the motherboard—is used. The system clock sends out a signal on a regular basis to all other computer components. And another paragraph: Many personal computers today have system clocks that run at 200 MHz, and all devices (such as CPUs) that are synchronized with these system clocks run at either the system clock speed or at a multiple of or a fraction of the system clock speed. Can anyone kindly tell: What is the function of the system clock? And what is meant by “synchronize” in the first paragraph? Is there any difference between “system clock” and “CPU clock”? If yes, then what is the function of the CPU clock? | The system clock is needed to synchronize all components on the motherboard, which means they all do their work only if the clock is high; never when it's low. And because the clock period is set longer than the longest time any signal needs to propagate through any circuit on the board, this scheme prevents a component from using a signal before it is ready and thus keeps everything safe and synchronized. The CPU clock has the same purpose, but is only used on the chip itself. Because the CPU needs to perform more operations per unit time than the motherboard, the CPU clock is much higher. And because we don't want to have another oscillator (e.g. because they also would need to be synchronized), the CPU just takes the system clock and multiplies it by a number, which is either fixed or unlocked (in that case the user can change the multiplier in order to over- or underclock the CPU). | {
"source": [
"https://cs.stackexchange.com/questions/32149",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/11636/"
]
} |
32,397 | Is there a difference between perfect, full and complete tree? Or are these the same words to describe the same situation? | Yes, there is a difference between the three terms and the difference can be explained as: Full Binary Tree: A Binary Tree is full if every node has 0 or 2 children. Following are examples of a full binary tree. 18
/ \
15 20
/ \
40 50
/ \
30 50 Complete Binary Tree: A Binary Tree is complete Binary Tree if all levels are completely filled except possibly the last level and the last level has all keys as left as possible. 18
/ \
15 30
/ \ / \
40 50 100 40
/ \ /
8 7 9 Perfect Binary Tree: A Binary tree is Perfect Binary Tree in which all internal nodes have two children and all leaves are at same level. 18
/ \
15 30
/ \ / \
40 50 100 40 | {
"source": [
"https://cs.stackexchange.com/questions/32397",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/23030/"
]
} |
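The three definitions above translate directly into checks on a linked binary tree. The Node class and the three predicates below are a small illustrative sketch, not part of the original answer.

from collections import deque

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def is_full(root):
    # Every node has 0 or 2 children.
    if root is None:
        return True
    if (root.left is None) != (root.right is None):
        return False
    return is_full(root.left) and is_full(root.right)

def is_perfect(root):
    # All internal nodes have 2 children and all leaves are at the same depth.
    def depth_or_none(node):
        if node is None:
            return 0
        l, r = depth_or_none(node.left), depth_or_none(node.right)
        if l is None or r is None or l != r:
            return None
        return l + 1
    return depth_or_none(root) is not None

def is_complete(root):
    # In level order, no node may appear after the first missing child.
    if root is None:
        return True
    queue, seen_gap = deque([root]), False
    while queue:
        node = queue.popleft()
        if node is None:
            seen_gap = True
        else:
            if seen_gap:
                return False
            queue.append(node.left)
            queue.append(node.right)
    return True

# The "perfect" example from the answer: 18 -> (15, 30), 15 -> (40, 50), 30 -> (100, 40).
perfect = Node(18, Node(15, Node(40), Node(50)), Node(30, Node(100), Node(40)))
print(is_full(perfect), is_complete(perfect), is_perfect(perfect))  # True True True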
32,845 | I don't understand why the Halting Problem is so often used to dismiss the possibility of determining whether a program halts. The Wikipedia article correctly explains that a deterministic machine with finite memory will either halt or repeat a previous state. You can use the algorithm which detects whether a linked list loops to implement the Halting Function with space complexity of O(1). It seems to me that the Halting Problem proof is nothing more than a so-called "paradox," a self-referencing contradiction in the same way as the Liar's paradox. The only conclusion it makes is that the Halting Function is susceptible to such malformed questions. So, excluding paradoxical programs, the Halting Function is decidable. So why do we hold it as evidence of the contrary? 4 years later : When I wrote this, I had just watched this video ( update : the video has been taken down). A programmer gets some programs, must determine which ones terminate, and the video goes on to explain that it's impossible. I was frustrated, because I knew that given some arbitrary programs, there was some possibility the protagonist could prove whether they terminated. I knew many real-life algorithms had already been formally proven to terminate. The concept of generality was lost somehow. It's the difference between saying "some programs cannot be proven to terminate," and, "no program can be proven to terminate." The failure to make this distinction, by every single reference I found online, was how I came to the title for this question. For this reason, I really appreciate the answer that redefines the halting function as ternary instead of boolean. | Because a lot of really practical problems are the halting problem in disguise. A solution to them solves the halting problem. You want a compiler that finds the fastest possible machine code for a given program? Actually the halting problem. You have JavaScript, with some variables at a high security levels, and some at a low security level. You want to make sure that an attacker can't get at the high security information. This is also just the halting problem. You have a parser for your programming language. You change it, but you want to make sure it still parses all the programs it used to. Actually the halting problem. You have an anti-virus program, and you want to see if it ever executes a malicious instruction. Actually just the halting problem. As for the wikipedia example, yes, you could model a modern computer as a finite-state machine. But there's two problems with this. Every computer would be a different automaton, depending on the exact number of bits of RAM. So this isn't useful for examining a particular piece of code, since the automaton is dependent on the machine on which it can run. You'd need $2^n$ states if you have n bits of RAM. So for your modern 8GB computer, that's $2^{32000000000}$. This is a number so big that wolfram alpha doesn't even know how to interpret it. When I do $2^{10^9}$ it says that it has $300000000$ decimal digits. This is clearly much to large to store in a normal computer. The Halting problem lets us reason about the relative difficulty of algorithms. It lets us know that, there are some algorithms that don't exist, that sometimes, all we can do is guess at a problem, and never know if we've solved it. If we didn't have the halting problem, we would still be searching for Hilbert's magical algorithm which inputs theorems and outputs whether they're true or not. 
Now we know we can stop looking, and we can put our efforts into finding heuristics and second-best methods for solving these problems. UPDATE: Just to address a couple of issues raised in the comments. @Tyler Fleming Cloutier: The "nonsensical" problem arises in the proof that the halting problem is undecidable, but what's at the core of undecidability is really having an infinite search space. You're searching for an object with a given property, and if one doesn't exist, there's no way to know when you're done. The difficulty of a problem can be related to the number of quantifiers it has. Trying to show that there exists ($\exists$) an object with an arbitrary property, you have to search until you find one. If none exists, there's no way (in general) to know this. Proving that all objects ($\forall$) have a property is hard, but you can search for an object without the property to disprove it. The more alternations there are between forall and exists, the harder a problem is. For more on this, see the Arithmetic Hierarchy . Anything above $\Sigma^0_0=\Pi^0_0$ is undecidable, though level 1 is semi-decidable. It's also possible to show that there are undecidable problems without using a nonsensical paradox like the Halting problem or Liars paradox. A Turing Machine can be encoded using a string of bits, i.e. an integer. But a problem can be encoded as a language, i.e. a subset of the integers. It's known that there is no bijection between the set of integers and the set of all subsets of the integers. So there must be some problems (languages) which don't have an associated Turing machine (algorithm). @Brent: yes, this admits that this is decidable for modern computers. But it's decidable for a specific machine. If you add a USB drive with disk space, or the ability to store on a network, or anything else, then the machine has changed and the result doesn't still hold. It also has to be said that there are going to be many times where the algorithm says "this code will halt" because it the code will fail and run out of memory, and that adding a single extra bit of memory would cause the code to succeed and give a different result. The thing is, Turing machines don't have an infinite amount of memory. There's never a time where an infinite amount of symbols are written to the tape. Instead, a Turing machine has "unbounded" memory, meaning that you can keep getting more sources of memory when you need it. Computers are like this. You can add RAM, or USB sticks, or hard drives, or network storage. Yes, you run out of memory when you run out of atoms in the universe. But having unlimited memory is a much more useful model. | {
"source": [
"https://cs.stackexchange.com/questions/32845",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/8233/"
]
} |
33,666 | I was reading about Iota and Jot and found this section confusing: Unlike Iota, where the syntactic tree for a string can branch either on the left or on the right, Jot syntax is uniformly left-branching. As a result, Iota is strictly context-free, but Jot is a regular language. My understanding is that both Iota and Jot are Turing complete. But apparently, one is context-free, and the other is regular! Surely regular languages can't be Turing complete? | In short, the answer is yes. But you're mixing two completely unrelated meanings of the term "language" (yes, this is confusing): A set of strings. "Context-free language" means "a set of strings which can be recognized using a context-free grammar". A way of specifying a computation. "Turing-complete language" means "a way of specifying programs in which the Turing machine can be specified". Note that you can talk about "the C++ language" from two completely unrelated viewpoints, using the two unrelated meanings of the word "language": C++ as a set of strings which are legal according to the C++ grammar C++ as a way of specifying programs. The traits of "the C++ language" from these two viewpoints are unrelated. More examples to help you separate these concepts: The expression "[a-z]+@[a-z].[a-z]" describes a set of strings recognizable by finite automata, i.e. a regular language. However, it's just that - a set of strings: is not a way of specifying programs (unless you ascribe a way to interpret each such string as a program), so it does not make sense to talk about whether or not it is Turing-complete. The language of flowcharts is a way of specifying programs; depending on the particular flavor of flowcharts, it may or may not be Turing-complete. However, flowcharts aren't strings, so it makes absolutely no sense to talk about flowcharts in the sense "language as a set of strings". | {
"source": [
"https://cs.stackexchange.com/questions/33666",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/20766/"
]
} |
33,769 | Recently in my CS class I've been introduced to the Turing Machine. After the class, I spent over 2 hours trying to figure out what is the relationship between a tape and a machine. I was completely unaware of the existence of computer tapes or how tapes and machines interacted until today. I still can't see why a machine would read tapes but a scanner is perhaps a closer conception to the Turing machine where paper is considered a tape and whatever goes inside of a scanner is whatever a Turing machine would do. But in any case, isn't the idea of a Turing machine quite archaic? We have so many physical (rather than hypothetical) devices in our office or living room that seems to do what the Turing Machine does. Can someone provide a better example drawing from reality so that the essential functionalities of this hypothetical conception is captured? | Turing machines are one of the "original" Turing-complete computation models, along with the $\lambda$ calculus and the recursively defined recursive functions. Nowadays in many areas of theoretical computer science a different model is used, the RAM machine, which is much closer to actual computers. Since both models are p-equivalent (they simulate each other with at most polynomial blow-up), from the point of view of questions like P vs. NP, both models are equivalent. | {
"source": [
"https://cs.stackexchange.com/questions/33769",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/20559/"
]
} |
34,067 | Are all Morse code strings uniquely decipherable? Without the spaces, ......-...-..---.-----.-..-..-.. could be Hello World but perhaps the first letter is a 5 -- in fact it looks very unlikely an arbitrary sequence of dots and dashes should have a unique translation. One might possibly use the Kraft inequality but that only applies to prefix codes . Morse code with spaces is prefix code in which messages can always be uniquely decoded. Once we remove the spaces this is no longer true. In the case I am right, and all Morse code message can't be uniquely decoded, is there a way to list all the possible messages? Here are some related exercise I found on codegolf.SE https://codegolf.stackexchange.com/questions/36735/morse-decode-golf https://codegolf.stackexchange.com/questions/131/morse-code-translator | The following are both plausible messages, but have a completely different meaning: SOS HELP = ...---... .... . .-.. .--. => ...---.........-...--.
I AM HIS DATE = .. .- -- .... .. ... -.. .- - . => ...---.........-...--. | {
"source": [
"https://cs.stackexchange.com/questions/34067",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/3131/"
]
} |
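The collision shown in the preceding answer is easy to verify mechanically. The snippet below uses the standard International Morse codes for the letters involved and confirms that, once the separators are dropped, the two messages encode to the same string.

MORSE = {
    "A": ".-", "D": "-..", "E": ".", "H": "....", "I": "..", "L": ".-..",
    "M": "--", "O": "---", "P": ".--.", "S": "...", "T": "-",
}

def encode(text: str) -> str:
    # Concatenate the codes with no letter or word separators.
    return "".join(MORSE[c] for c in text if c != " ")

a = encode("SOS HELP")
b = encode("I AM HIS DATE")
print(a)        # ...---.........-...--.
print(a == b)   # True: without separators the two messages collide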
35,155 | I found a statement (without explanation) that a language $A = 0^*$ is decidable. How is that possible? I mean, how would we build a Turing machine that would accept (or reject) a possibly infinite string of 0's? I also thought that maybe we could create an enumerator that would create all words from $0^*$ with increasing length, but I am not sure if we can. So is $0^*$ a decidable language? And if so, why? | $0^*$ is the set of finite strings consisting only of $0$. There are no possibly infinite strings in $0^*$. It is trivially regular because the regex $0^*$ accepts exactly $A$ by definition. All regular problems are computable so we can definitely create a Turing machine for it (look up NFA's and DFA's for more info on the connection of Turing machines to regular languages). This is just a confusion in what is meant by Kleene closure. If you look here you can see that it is the union of all strings of length 1, 2, 3, ... and so on for all natural numbers. Infinity is not a natural number so there are no strings of infinite length in $A$. | {
"source": [
"https://cs.stackexchange.com/questions/35155",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/24611/"
]
} |
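The Turing machine the answer above alludes to is about as simple as deciders get; as a sketch, a decider for $A = 0^*$ just checks that every symbol of the (finite) input is a 0.

def in_A(w: str) -> bool:
    return all(c == "0" for c in w)   # the empty string is accepted too

print(in_A(""), in_A("0000"), in_A("0100"))  # True True False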
35,371 | I'm having trouble intuitively understanding why PSPACE is generally believed to be different from EXPTIME. If PSPACE is the set of problems solvable in space polynomial in the input size $f(n)$, then how can there be a class of problems that experience greater exponential time blowup and do not make use of exponential space? Yuval Filmus' answer is already extremely helpful. However, could anyone sketch my a loose argument why it might be the case that PSPACE ≠ EXPTIME (i.e. that PSPACE is not a proper subset of EXPTIME)? Won't we need exponential space in order to beat the upperbound for the total number of system configurations achievable with space that scales polynomially with input size? Just to say, I can understand why EXPTIME ≠ EXPSPACE is an open matter, but I lack understanding regarding the relationship between PSPACE and EXPTIME. | Let's refresh the definitions. PSPACE is the class of problems that can be solved on a deterministic Turing machine with polynomial space bounds: that is, for each such problem, there is a machine that decides the problem using at most $p(n)$ tape cells when its input has length $n$, for some polynomial $p$. EXP is the class of problems that can be solved on a deterministic Turing machine with exponential time bounds: for each such problem, there is a machine that decides the problem using at most $2^{p(n)}$ steps when its input has length $n$, for some polynomial $p$. First, we should say that these two classes might be equal. They seem more likely to be different but classes sometimes turn out to be the same: for example, in 2004, Reingold proved that symmetric logspace is the same as ordinary logspace; in 1987, Immerman and Szelepcsényi independently proved that NL$\;=\;$co-NL (and, in fact, that NSPACE[$f(n)$]$\;=\;$co-NSPACE[$f(n)$] for any $f(n)\geq \log n$). But, at the moment, most people believe that PSPACE and EXP are different. Why?
Let's look at what we can do in the two complexity classes. Consider a problem in PSPACE . We're allowed to use $p(n)$ tape cells to solve an input of length $n$ but it's hard to compare that against EXP , which is specified by a time bound. How much time can we use for a PSPACE problem? If we only write to $p(n)$ tape cells, there are $2^{p(n)}$ different strings that could appear on the tape, assuming a binary alphabet. The tape head could be in any of $p(n)$ different places and the Turing machine could be in one of $k$ different states. So the total number of configurations is $T(n) = k\,p(n)\,2^{p(n)}\!$. By the pigeonhole principle, if we run for $T(n)+1$ steps, we must visit a configuration twice but, since the machine is deterministic, that means it will loop around and visit that same configuration infinitely often, i.e., it won't halt. Since part of the definition of being in PSPACE is that you have to decide the problem, any machine that doesn't terminate doesn't solve a PSPACE problem. In other words, PSPACE is the class of problems that are decidable using at most $p(n)$ space and at most $k\,p(n)\,2^{p(n)}$ time, which is at most $2^{q(n)}$ for some polynomial $q$. So we've shown that PSPACE$\;\subseteq\;$EXP . And how much space can we use for an EXP problem? Well, we're allowed $2^{p(n)}$ steps and the head of a Turing machine can only move one position at each step. Since the head can't move more than $2^{p(n)}$ positions, we can only use that many tape cells. That's what the difference is: although both PSPACE and EXP are problems that can be solved in exponential time, PSPACE is restricted to polynomial space use, whereas EXP can use exponential space. That already suggests that EXP ought to be more powerful. For example, suppose you're trying to solve a problem about graphs. In PSPACE , you can look at every subset of the vertices (it only takes $n$ bits to write down a subset). You can use some working space to compute on each subset but, once you've finished working on a subset, you must erase that working space and re-use it for the next subset. In EXP , on the other hand, you can not only look at every subset but you don't need to reuse your working space, so you can remember what you learnt about each one individually. That seems like it should be more powerful. Another intuition for why they should be different is that the time and space hierarchy theorems tell us that allowing even a tiny bit more space or time strictly increases what you can compute. The hierarchy theorems only let you compare like with like (e.g., they show that PSPACE$\;\subsetneq\;$EXPSPACE and P$\;\subsetneq\;$EXP ) so they don't directly apply to PSPACE vs EXP but they do give us a strong intuition that more resource means that more problems become solvable. | {
"source": [
"https://cs.stackexchange.com/questions/35371",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/25876/"
]
} |
35,759 | I have a high-level understanding of the $P=NP$ problem and I understand that if it were absolutely "proven" to be true with a provided solution, it would open the door for solving numerous problems within the realm of computer science. My question is, if someone were to publish an indisputable, constructive proof of $P=NP$, what are some of the immediate effects that we would see of such a discovery? I'm not asking for opinionated views of what the world would look like in 5-10 years. Instead, it is my understanding that this is such a fundamentally unsolvable problem that it could radically change the way we compute... many things (yeah, this is where my ignorance is showing...) that we can't easily calculate today. What kind of near-immediate effect would a thorough, accurate, and constructive proof of $P=NP$ have on the practical world? | People have given good answers assuming that $P=NP$ with some really large constant. I'm going to play the optimist and assume that we find a proof of $P=NP$ with a tractably small constant. Perhaps not likely, but I'm going to try to give some insight into what sorts of things would happen if we could efficiently solve all $NP$ problems. Compilers: Some computer programs would get slightly faster, since compilers use graph coloring for register allocation. We would be able to allocate for large numbers of registers exactly. Existing compilers using an approximate solution (like chordal graphs) would get better output, and those using an exact solution would get faster. Facility location: Businesses would be able to find the optimal place to place factories and supply depots to ship to their stores, when there are possibly thousands of stores and factories. Would likely not be a huge improvement over modern approximations, but would reduce costs. Buying plane tickets: airline tickets are weird since they don't obey the triangle inequality. Sometimes it's cheaper to fly from A -> B -> C than directly from A -> C, something that doesn't come up when modelling distances. It would be easy to make a website that finds the absolute cheapest sequence of flights that visits some number of cities and starts and ends in your hometown. Circuit design: electrical circuits on a chip are basically Boolean formulas. Things like minimization could be efficiently calculated, so our hardware would get a bit more efficient. Scheduling: mad that your school put two of your exams at the same time? If $P=NP$, your school could either minimize how many timeslots they need so that no student has a conflict, or, given a number of time slots, minimize the number of conflicts. This is just a sampling of practical applications that we'd see if $NP$-completeness weren't a barrier. I'm sure I've missed many, but if the given construction had a good constant, the implications would be far reaching. | {
"source": [
"https://cs.stackexchange.com/questions/35759",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/21051/"
]
} |
35,994 | Randomized Quick Sort is an extension of Quick Sort in which the pivot element is chosen randomly. What can be the worst case time complexity of this algorithm. According to me, it should be $O(n^2)$ , as the worst case happens when randomly chosen pivot is selected in sorted or reverse sorted order. But in some texts [1] [2] its worst case time complexity is written as $O(n\log{n})$ What's correct? | Both of your sources refer to the "worst-case expected running time" of $O(n \log n).$ I'm guessing this refers to the expected time requirement, which differs from the absolute worst case. Quicksort usually has an absolute worst-case time requirement of $O(n^2)$. The worst case occurs when, at every step, the partition procedure splits an $n$-length array into arrays of size $1$ and $n-1$. This "unlucky" selection of pivot elements requires $O(n)$ recursive calls, leading to a $O(n^2)$ worst-case. Choosing the pivot randomly or randomly shuffling the array prior to sorting has the effect of rendering the worst-case very unlikely, particularly for large arrays. See Wikipedia for a proof that the expected time requirement is $O(n\log n)$. According to another source , "the probability that quicksort will use a quadratic number of compares when sorting a large array on your computer is much less than the probability that your computer will be struck by lightning." Edit: Per Bangye's comment, you can eliminate the worst-case pivot selection sequence by always selecting the median element as the pivot. Since finding the median takes $O(n)$ time, this gives $\Theta(n \log n)$ worst-case performance. However, since randomized quicksort is very unlikely to stumble upon the worst case, the deterministic median-finding variant of quicksort is rarely used. | {
"source": [
"https://cs.stackexchange.com/questions/35994",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/14481/"
]
} |
37,571 | The $n$th Fibonacci number can be computed in linear time using the following recurrence: def fib(n):
i, j = 1, 1  # F(1) and F(2)
for k in range(1, n):  # n - 1 iterations
    i, j = j, i + j
return i The $n$th Fibonacci number can also be computed as $\left[\varphi^n / \sqrt{5}\right]$. However, this has problems with rounding issues for even relatively small $n$. There are probably ways around this but I'd rather not do that. Is there an efficient (logarithmic in the value $n$ or better) algorithm to compute the $n$th Fibonacci number that does not rely on floating point arithmetic? Assume that integer operations ($+$, $-$, $\times$, $/$) can be performed in constant time. | You can use matrix powering and the identity
$$
\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}^n = \begin{bmatrix} F_{n+1} & F_n \\ F_n & F_{n-1} \end{bmatrix}.
$$
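For concreteness, here is a minimal Python sketch of this idea (names and structure are just illustrative): it raises the matrix to the $n$th power by repeated squaring, using only integer arithmetic, so there are no rounding issues.
def mat_mult(a, b):
    # product of two 2x2 integer matrices
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]]]
def fib(n):
    # F(n) is the top-right entry of [[1, 1], [1, 0]] raised to the n-th power
    result = [[1, 0], [0, 1]]  # identity matrix
    base = [[1, 1], [1, 0]]
    while n > 0:
        if n % 2 == 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)
        n //= 2
    return result[0][1]
assert [fib(k) for k in range(1, 8)] == [1, 1, 2, 3, 5, 8, 13]
The loop runs about $\log_2 n$ times and each iteration does a constant number of integer operations, matching the stated bound.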
In your model of computation this is an $O(\log n)$ algorithm if you use repeated squaring to implement the powering. | {
"source": [
"https://cs.stackexchange.com/questions/37571",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/9258/"
]
} |
39,597 | I'm writing a research paper and I have to basically say that one microcontroller is slower than an other microprocessor. However, I'm worried that simply saying that it's 'slower' wouldn't be appropriate in a research paper. Am I right? Is it OK to just say that one processor is 'slower', or do I need to say something else? What else could I say? The best I have come up with is that one has 'less computational power' than the other or that the microcontroller has 'low computational power'. Unfortunately, these expressions don't seem to be too popular when searching online. So, what would be a better and academically correct way of saying this? | Let me see if I can clear up a few potential misconceptions here. Sometimes people think that when they write a research paper they have to use fancy language: it's not enough to just say what they mean, but rather, it has to be written in academic code with more complex-sounding language. Or, they think that using bigger words will make them sound more authoritative. This is not the case. If anything, it leads to papers that are overly pompous and unnecessarily hard to read. Instead, I suggest you figure out what you mean, and then write that. Don't worry too much about how to say it (whether the word you are using is OK in a research paper). Do worry about being precise: figure out exactly what you mean, and then be precise in your wording. You have a good intuition. Your hesitation about just saying one processor is slower than another is valid. (But not because you can't say one thing is slower than another in a research paper.) The issue I see with that wording is that it is not very precise. There are many things that 'slower' could mean. What exactly do you mean by 'slower'? Slower in what way? And how do you know? What evidence do you have? Can you quantify it? How would you measure 'slowness' in a quantitative, defensible way? Once you can answer those questions, then you can figure out how to write something more convincing in your paper. For instance, "processor X is 20% slower on the SpecCPU benchmark than processor Y" is more precise than "processor X is slower than processor Y", and backs up the claim with evidence. But first you need to figure out precisely what you mean by 'slower', and why it matters to your argument, and then you can figure out how to be more precise in what you write and what evidence you can provide to back up your statement. You won't always need to write with this level of care and precision. Sometimes, when you are just providing intuition or background, the specifics don't matter so much, and then you can just say that X is slower than Y. But if that statement plays a key role in your paper -- maybe it is a key part of the motivation for your paper, or it is a key part of the reasoning that underpins the design of your system -- then you should try to be as precise as you can, and provide evidence for the statement. | {
"source": [
"https://cs.stackexchange.com/questions/39597",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/28943/"
]
} |
39,920 | I am reading the book: " Code: The Hidden Language of Computer Hardware and Software " and in Chapter 2 author says: Morse code is said to be a binary (literally meaning two by two) code
because the components of the code consists of only two things - a dot
and a dash. Wikipedia on the other hand says: Strictly speaking it is not binary, as there are five fundamental
elements (see quinary). However, this does not mean Morse code cannot
be represented as a binary code. In an abstract sense, this is the
function that telegraph operators perform when transmitting messages (see quinary). But then again, another Wikipedia page includes Morse Code in 'List of binary codes.' I am very confused because I would think Morse Code actually is ternary . You have 3 different types of 'possibilities': a silence, a short beep or a long beep. It is impossible to represent Morse Code in 'stirct binary' isn't it? By 'strict binary' I mean, think of stream of binary: 1010111101010.. How am I supposed to represent a silence, a short beep and / or a long beep? Only way I can think of is 'word size' a computer implements. If I (and the CPU / the interpreter of the code) know that it will be reading 8 bits every time, then I can represent Morse Code. I can simply represent a short beep with a 1 or a long beep with a 0 and the silences will be implicitly represented by the word length.(Let's say 8 bits..) So again, I have this 3rd variable/the 3rd asset in my hand: the word size. My thinking is like this: I can reserve the first 3 bits for how many bits to be read, and last 5 bits for the Morse code in a 8bit word. Like 00110000 will mean 'A'. And I am still in 'binary' BUT I need the word size which makes it ternary isn't it? The first 3 bits say: Read only 1 bit from the following 5 bits. Instead of binary, if we use trinary, we can show morse code like: 101021110102110222 etc.. where 1 is: dit 0 is: dah and 2 is silence. By using 222 we can code the long silence, so if you have a signal like *- *--- *- you can show it like: 102100022210, but it is not directly possible using only with 1's and 0's UNLESS you come up with something like a 'fixed' word size as I mentioned, but well this is interpreting, not saving the Morse Code as it is in binary. Imagine something like a piano, you have only the piano buttons. You want to leave a message in Morse Code for someone and you can paint buttons to black. There is no way you can leave a clear message, isn't it? You need at least one more color so you can put the silences (the ones between characters and words. This is what I mean by trenary. I am not asking if you can represent Morse Code in 57-ary or anything else. I have e-mailed the author (Charles Petzold) about this; he says that he demonstrates in Chapter 9 of "Code" that Morse Code can be interpreted as a binary code. Where am I wrong with my thinking? Is what I am reading in the book, that the Morse Code being a Binary a fact or not? Is it somehow debatable? Why is Morse Code is told be quinary in one Wikipedia page, and it is also listed in List of Binary Codes page? Edit: I have e-mailed the author and got a reply: -----Original Message----- From: Koray Tugay
Sent: Tuesday, March 3, 2015 3:16 PM To: [email protected] Subject: Is Morse Code really binary? Sir, could you take a look at my question here: Is Morse Code binary, ternary or quinary? quinary ? Regards, Koray Tugay From: "Charles Petzold" To: "'Koray Tugay'" Subject: RE: Is Morse Code really binary? Date: 3 Mar 2015 23:04:35 EET Towards the end of Chapter 9 in "Code" I demonstrate that Morse Code can be interpreted as a binary code. I am not hiding his e-mail address as it is really easy to find on the web anyway. | Morse code is a prefix ternary code (for encoding 58 characters) on top of a prefix binary code encoding the three symbols. This was a much shorter answer when accepted. However, considering the
considerable misunderstandings between users, and following a request
from the OP, I wrote this much longer answer. The first "nutshell"
section gives you the gist of it. Contents In a (big) nutshell Codes: basic points Codes: definitions The Morse code Analysing the three levels of representation Remarks on this analysis The importance of analog to logical transition Conclusion In a (big) nutshell When asking "Is Morse Code binary, ternary or quinary?" there is no
comparing possible answers unless one fixes some criteria for an acceptable
answer. Indeed, without proper criteria, one can contrive explanations
for nearly any kind of structure. The criteria I have chosen are the
following: it should reflect the three-tiered description of Morse-code
with the dot/dash representation in the second tier; it should fit the presentation and mathematical tools developed
for the theoretical analysis of codes, as much as possible; it should be as simple as possible; it should clearly make apparent the properties of the Morse code. This is intended to preclude arbitrary hacking, that ignores basic
concepts of code theory as scientifically studied, and which may have
some appeal by giving an illusion of systematic analysis, though
addressed too informally to be conclusive. This site is supposed to be about computer science , not programming. We should use a minimum of
established science and accepted concepts to answer a technical
question. A quick analysis of the standard shows that all symbols used in
Morse code are ultimately coded in binary , since it is transmitted
as a string of units of equal length, with a signal that can be on or
off for each unit. This indicates that Morse messages are ultimately
coded in a logical alphabet $\Sigma_1=\{0,1\}$. But that says nothing of the internal structure of the code. The
information to be encoded is a string on an alphabet of 58 symbols
(according to the standard) including 57 characters and a space.
This corresponds to an alphabet
$\Sigma_3=\{A,B,\dots,Z,0,1,\dots,9,?,=,\dots,\times,@,[\;]\}\;$,
the last symbol being the space. However, the standard specifies that there is an intermediate alphabet
$\Sigma_2$, based on dot and dash and possibly other symbols. It
is quite clear that strings in $\Sigma_3^*$ are to be coded as strings
in $\Sigma_2^*$, and that strings in $\Sigma_2^*$ are to be coded as strings
in $\Sigma_1^*$ So, given that there is no choice for $\Sigma_1$ and $\Sigma_3$, the
question must be understood as: " What number of symbols should we
consider in the intermediate alphabet $\Sigma_2$ so as to best
explain the structure and the properties of the whole Morse code, "
which also entails specifying the two encodings between the three levels. Given the fact that the Morse code is a prefix homomorphic (variable length) code that
precludes any ambiguity when decoding a signal, we can explain
simply this essential property with a ternary alphabet $\Sigma_2=${ dot , dash , sep }, and two coding schemes $C_{3\to 2}$ from $\Sigma_3$
to $\Sigma_2$, and $C_{2\to 1}$ from $\Sigma_2$ to $\Sigma_1$, which are both
homomorphic and prefix, thus both unambiguous codes, and thus able to be
composed to give an unambiguous prefix encoding of the 58 symbols into
binary. Hence Morse code is composed of a prefix ternary code expressed in the
alphabet $\{$ dot , dash , sep $\}$ , with these three symbols themselves
encoded in binary with the following codewords: dot $\to 10$, dash $\to 1110$, and sep $\to 00$ Note that what is known as the space between consecutive dot or dash is
actually included in the representation of dot and dash , as this
is the usual mathematical representation for such types of codes, which
are usually defined as string homomorphisms from source symbols to
codewords expressed with target symbols, as I just did. This departs a little from some of the presentation given in the
standard, which aims more at specifying intuitively the code for
users, rather than at analysing it for its structural properties.
But the encoding is the same in both cases. Even without the precise timings of the standard, a decoder of the
analog signal could still translate it into the ternary alphabet
we suggest, so that the above understanding of the ternary code would
still be valid. Codes: basic points This answer is based on the Standard ITU-R M.1677-1 , dated October
2009 (thanks to Jason C for the reference). I shall use the
terminology dot and dash , rather than dit and dah , as it is
the terminology used by this standard. Before we start discussing the Morse code, we need to agree on what a
code is. The difficult discussions on this question obviously require
it. Fundamentally, information needs to be represented in order to be
transmitted or otherwise processed. A code is a system to translate
information from one system of representation into another . This is
a very general definition. We must be careful not to confuse the
concept of a representation , and that of a code from one
representation (the source ) to another (the target ). A representation can take many forms, such as variable electric
voltage, colored dots on paper, string of characters, numerals, binary
strings of 0's and 1's, etc. It is important to distinguish between
analog and formal (or logical, or abstract) representation. An analog/physical representation is a drawing, a varying voltage
level, a shape (for a letter). A logical/formal/abstract representation is a mathematical
representation with abstract graphs, strings of symbols, or other
mathematical entities. Though some information may originally be analog, we usually
convert it to a logical representation so as to be able to define precisely
its processing by mathematical means, or by people. Conversely, when dealing with logical representation using physical
devices, such as computer or transmitters, we need to give an
analog form to the logical representation. For the purpose of this analysis, the only analog form we consider
is that used for transmission, as described in the standard. But even
then, we will consider that the first step is to interpret this
analog representation as a direct implementation of an identically
structured logical representation, on which we build our analysis of
what kind of code Morse code may be. Code theory is a mathematical body
of knowledge based on the analysis of logical representations. However we shall come back on the analog/logical transition in the
discussion at the end. Codes: definitions Our logical view is that the code is used to translate source strings
on a source alphabet $S$ to a target alphabet $T$. It is
often the case that both alphabets are identical, usually binary, when
the purpose is to add some extra property to the representation of
information, such as making it more resistant to errors (error
detection and correction), or making the representation smaller by
removing redundancy (lossless code compression) and possibly with
carefully controlled loss of some information (lossy compression). However, the purpose of Morse code is to provide only a way to
represent strings on a large alphabet, into strings based on a much
smaller alphabet (actually binary), using an intermediate alphabet
almost binary (dots and dashes) that is better adapted to human perception
and manipulative abilities. This is achieved by what is called variable-length code : Using terms from formal language theory, the precise mathematical
definition is as follows: Let $S$ and $T$ be two finite sets, called the
source and target alphabets, respectively. A code $C: S \to T^*$ is
a total function mapping each symbol from $S$ to a sequence of
symbols over $T$, and the extension of $C$ to a homomorphism of
$S^*$ into $T^*$, which naturally maps each sequence of source
symbols to a sequence of target symbols, is referred to as its
extension. We call codeword the image $C(s)\in T^*$ of a symbol $s\in S$. A variable-length code $C$ is uniquely decodable if the
corresponding homomorphism of $S^*$ into $T^*$ is injective . That
means that any string in $T^*$ can be the image of at most one string
in $S^*$. We also say that the code is unambiguous , meaning that
any string can be unambiguously decoded, if at all. A variable-length code is a prefix code if no codeword is the prefix
of another. It is also called instantaneous code , or context-free
code . The reason for these names is that, when reading a target string
that begins with a codeword $w$ of a prefix code, you recognize the
end of the codeword as soon as you read its last symbol, without
having to know/read the next symbol. As a consequence, prefix codes
are unambiguous and very easy to decode fast. It is easily shown that unique decodability and the prefix property
are closed under composition of codes. Note that the definition as a homomorphism implies that there is no
special separation between codewords. It is their structure, such as
the prefix property, that allows identifying them unambiguously. Indeed, if there were such separation symbols, they would have to be
part of the target alphabet, since they would be necessary to decode
string from the target alphabet. Then it would be quite simple to
revert to the theoretical model of variable-length code by appending
the separator to the preceding code word. If that were to raise
contextual difficulty (due for example to multiple separators), that
would only be a hint that the code is more complex than apparent.
This is a good reason to stick to the theoretical model described
above. The Morse code The Morse code is described in the standard at three levels: 3 . it is intended to provide an encoding of natural language text,
using 57 characters (27 letters, 10 digits, 20 symbols and
punctuation marks) and an inter-word space to cut the character string into
words. The inter-word space is used like a special character, that can
be mixed with the others, which I shall note SEP . 2 . all of these characters are to be encoded as successions of dash and dot , using an inter-letter space, which I shall note sep , to
separate the dash and dot of one letter from those of the next
letter. 1 . The dash and dot , as well as sep are to be encoded as signal
or absence of signal (called spacing) with length precisely defined
in terms of some accepted unit. In particular, the dash and dot encoding a letter must be separated by an inter-element space, that I
shall note σ . This already calls for a few conclusions. The message to be transmitted and received in analog form is a
succession of length units (space length or time length), such that a
signal is on or off for the whole duration of each unit as specified
in the Annex 1, Part I, section 2 of the standard : 2 Spacing and length of the signals
2.1 A dash is equal to three dots.
2.2 The space between the signals forming the same letter is equal to one dot.
2.3 The space between two letters is equal to three dots.
2.4 The space between two words is equal to seven dots. This is clearly an analog encoding in what is known as a bit
stream, which can be logically represented in binary notation by a
string of 0 and 1 , standing for the analog off and on . In order to abstract away issues related to analog representation,
we can thus consider that Morse code messages are transmitted as bit
strings, that we shall note with 0 and 1 . Hence the above excerpt from the standard can be expressed
logically as: 0 . A dot is represented by 1 . 1 . A dash is represented by 111 . 2 . An inter-element space σ is represented by 0 . 3 . An inter-letter space sep is represented by 000 . 4 . An inter-word space SEP is represented by 0000000 . So we could see Morse code as using 5 code words in binary to encode
these 5 symbols. Except for the fact that this is not quite how the
system is described, there is some more to it, and it is not the most
convenient way it can be thought of, from a naive or a mathematical
point of view. Note also that this description is intended for laymen, not code
theory specialists. For that reason it describes more the visible
appearance than the internal structure that justifies it. It has no
reason to preclude other descriptions that are compatible with this
one, though mathematically more structured, to emphasize the
properties of the code. But first, we should note that the complete description of the code
involves 3 levels of representation, immediately recognizable: 3 . The text, composed of a string of characters, including SEP . 2 . The encoding of a letter string as a string of dot , dash and sep . 1 . The encoding of a level 2 string of these three symbols as a binary string. We may possibly discuss as to what symbols is encoded in what, but it
is an essential aspect of Morse code that it has these three levels
of representation, with characters at the top, dot s and dash es in the
middle, and bits 0 and 1 at the bottom. This implies that there are necessarily two codes, one from level 3 to
level 2, and the other from level 2 to level 1. Analysing the three levels of representation In order to have a consistent analysis of this 3-tiers coding system,
we should first analyse what kind of information is relevant at each
level. 1 . The bit string, by definition, and by necessity of its analog
representation, is composed only of 0 and 1 . 3 . At the text level, we need an alphabet of 58 symbols, including
the 57 characters and the inter-word space SEP . All 58 of them have
to have ultimately a binary encoding. But, though the Morse code
standard specifies these 57+1 characters, it does not specify how
they should be used to encode information. That is the role of
English and other natural languages. The Morse code provides other
systems with an alphabet of 58 symbols, on which they could build some
58-ary code, but Morse code is not itself a 58-ary code. 2 . At the dot and dash level, all we need is these two symbols in
order to code the 57 characters, i.e. provide a codeword for each as a
string of dot and dash , together with some separator sep to mark
when one letter finishes and another starts. We also need some means
of encoding the inter-word space SEP . We might try to provide for it
directly at level 1, but this would mess up the otherwise tier-structured
organization of the code. Indeed, the description of the standard might rightly be criticized
for doing just that. But the authors may have thought that their presentation
would be simpler to grasp for the average user. Also it follows a
traditional description of Morse code, that predates this kind of
mathematical analysis. This calls for several remarks: at level 3, the letter level, the inter-letter space sep is no
longer meaningful. This is quite normal, since it has no more
meaning in the universe of letters than the space separating two
written characters on paper. It is necessary at level 2 to recognize
codewords representing the letters, but that is all. similarly at level 2, the inter-element space σ is no longer
meaningful. It has no meaning in the world of dot and dash , but
is only necessary at level 1 to identify the binary code words
representing dot , dash . But at level 1, it is not
distinguishable from the bit 0 . So the inter-element space σ is no longer anything special. It is
just one use of 0 . However, as explained previously, if the code $\Sigma_2^*\to\Sigma_1^*$ is
to be analyzed using knowledge of variable length codes, separators
should be appended into the codewords they follow, so as to define the
code as a simple string homomorphism. This implies the following partial specification of the code: dot $\to$ 10 and dash $\to$ 1110 The level 2 alphabet $\Sigma_2$ needs at least one other symbol, the
inter-letter space noted sep , which should be 000 according to the
letter of the standard. However, the definition of the variable length
code as a homomorphism required appending the inter-element space 0 to each codeword for dot and dash . Hence we must have only 00 as
codeword for sep , so that together with the ending 0 from the
preceding dot or dash , it makes three 0 s as required by the
standard. This always works since there is no provision in the standard
for having two inter-letter separators following each other. This is enough to encode the alphabet $\Sigma_2=${ dot , dash , sep } with a homomorphic code $C_{2\to
1} : \Sigma_2\to\Sigma_1^*$ defined as follows: dot $\to$ 10 dash $\to$ 1110 sep $\to$ 00 And we have the good surprise to discover that no codeword is a prefix
of another. Hence we have a prefix code, which is unambiguous and easy
to decode. We can now proceed similarly to define the code $C_{3\to 2}:
\Sigma_3\to\Sigma_2^*$. The standard uses strings of dot and dash as codewords for the
characters in $\Sigma_3$, in the way given by the tables of the
standard for example dot dot dash dot to represent the letter
$f$. Again, these codewords are separated by inter-letter spaces. In order
to define the code as a homomorphism, we must include the separator in
the codewords, so that the definition of the homomorphism becomes
rather: $f\to$ dot dot dash dot sep This applies to each of the 57 characters in the alphabet $\Sigma_3$.
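To make the composition concrete, here is a minimal Python sketch (purely illustrative, with the letter table abridged to a handful of characters) that applies $C_{3\to 2}$ and then $C_{2\to 1}$ exactly as defined above:
C21 = {"dot": "10", "dash": "1110", "sep": "00"}
C32 = {
    "E": ["dot", "sep"],
    "T": ["dash", "sep"],
    "A": ["dot", "dash", "sep"],
    "F": ["dot", "dot", "dash", "dot", "sep"],
}
def encode(text):
    # compose the two homomorphisms: characters -> dot/dash/sep symbols -> bits
    return "".join(C21[sym] for ch in text for sym in C32[ch])
assert encode("F") == "101011101000"  # . . - . as on/off units, followed by the letter gap
Both tables are prefix codes, so the resulting bit string can be decoded from left to right without ambiguity.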
But again we also need the word separator SEP , which, according to
the standard, is 0000000 . We first note that already 3 bits 0 are
provided by the code, 2 by the sep that ends the last letter of the
word, and 1 by the 0 bit that end the last dot or dash of the
encoding of that last letter. Hence SEP must ultimately be coded as
the remaining 0000 . But to respect the tiered approach, SEP should be encoded in some
codeword from $\Sigma_2^*$. Since sep is binary encoded as 00 , it
follows that SEP can be encoded as sep sep . Hence we can encode the alphabet
$\Sigma_3=\{A,B,\dots,Z,0,1,\dots,9,?,=,\dots,\times,@,$ SEP $\}$,
with a homomorphic code $C_{3\to 2} : \Sigma_3\to\Sigma_2^*$
defined as follows: $A \to$ dot dash sep $B \to$ dash dot dot dot sep ... $Z \to$ dash dash dot dot sep ... $7 \to$ dash dash dot dot dot sep ... SEP $\to$ sep sep (for the word separator) And we have the further surprise to see that no codeword is a prefix
of another. Hence the code $C_{3\to 2}$ is a prefix code too. Since the prefix property is closed under composition of codes, the
Morse code $C_{Morse}= C_{2\to 1}\circ C_{3\to 2}$ is a prefix code. We can thus conclude that the Morse code can be understood, and easily
analyzed, as the composition of a prefix binary encoding of a 3
symbols alphabet { dot , dash , sep } into a binary alphabet, and a
prefix encoding of a 58 symbol alphabet (57 characters and one space)
into the 3 letters alphabet. The composition itself is a prefix encoding of the 58 symbols into a
binary representation. Remarks on this analysis. It is always difficult to establish that a presentation of a structure
is the best one can come up with. It seems however that the above
analysis meets the criteria set up at the beginning of this answer:
closeness to the 3-tiered definition, formally presented according
to current coding theory, simplicity, and evidencing the main
properties of the code. Note that there is little point in looking for error correction
properties. The Morse code may not even detect a single bit error as
it may simply change two dot into one dash . However, it causes
only local errors. Regarding compression, the ternary encoding was designed to
approximately reduce the number of dots and dashes, in an
approximative kind of Huffman coding . But the two composed codes
could easily be made denser. Regarding the size of alphabets, there is no choice for the binary and
the 58 symbols alphabet. The intermediate alphabet could contain more
symbols, but what would be the purpose? However, some people would be inclined to recognize the space SEP at
level 2, thus making the alphabet quaternary , then using it directly
at level 3, encoded as itself in level 2. This would meet the standard definition, for SEP encoded in binary
as 0000 . But it would prevent the analysis of the binary encoding
$C_{2\to 1}$ as a prefix code, making it harder to show that
$C_{Morse}$ is a prefix code, hence unambiguous. Indeed, such a choice would make the binary string 0000 ambiguous,
decodable as either SEP or as sep sep . The ambiguity would have
to be resolved with a contextual rule that sep cannot follow itself,
making the formalization more complex. The importance of analog to logical transition. This analysis relies heavily on the fact that the decomposition of the
on/off signal into units of equal lengths indicates clearly an
analog representation of a binary string. Furthermore, the lengths
in units are exactly right for the above analysis, which seems
unlikely to have happened by chance (though it is possible). However, from a (too cursory) look at the original patent 1647 , it does not
seem to have been that precise, with sentences such as (on top of page
2): The sign of a distinct numeral, or of a compound numeral when used in
a sentence of words or of numerals, consists of a distance or space of
separation between the characters of greater extent than the distance
used in separating the characters that compose any such distinct or
compound numeral. People who were later sending by hand or receiving by ear were also
unlikely to be that precise either. Indeed, their fist , i.e. their
timing, was often recognizable. This view is also supported by the
fact that spacing lengths are not always respected , particularly when
learning Morse code. These situations correspond to an analog view of the code as short
signal (dot), medium signal (dash), and short, medium and long
pause. Direct transposition into a logical alphabet would naturally
give a quinary alphabet, into which the 58 symbols have to be
coded. This of course is no longer a 3-tiered presentation of the
Morse code. However, in order to make sense (and possibly avoid ambiguity), this
alphabet should be used with the constraint that two signal symbols
( dot or dash ) cannot follow each other, and that pause symbols
cannot follow each other either. Analysis of the code and its
properties would be made more complex, and the natural way to simplify
it would be to do what was done: introduce proper timings to turn it
into the composition of two codes, leading to the fairly simple
analysis given above (remember that it includes showing the code is
prefix). Furthermore, it is not strictly necessary to follow exact timings in the
analog representation. Since the decoder of the analog
translation can distinguish short, medium and long pauses, by whatever
means, it should just mimic what was done in the binary case. Hence
short and medium signal (necessarily followed by a pause) are
recognized as logical dot or dash . Short pauses are forgotten, as
only serving to mark the end of dot or dash . Medium pauses are
recognized as sep , and long pauses are recognized as two sep in
succession. Hence the analog signal is represented in a ternary
alphabet, which can be used as before to encode the 58 symbols
alphabet. Our initial analysis can be used even when timings are not
strictly respected. Alternatively, the signal-pause alternance could be used to turn this
quinary alphabet into a ternary one, keeping only the three durations
as symbols of the alphabet, and using contextual analysis to determine
whether a given duration is signal or pause. But this is again a bit
complex to analyze. This just shows that there are many ways to look at things, but they
are not necessarily convenient, and may not all lend themselves easily to
analysis with the mathematical tools that have been developed to
analyze codes. More references to the patents can be found on the Internet. Conclusion Given the precise timings of the standard, a good answer seems to be
to consider Morse code as the composition of a ternary prefix
encoding (of 58 characters) into a 3 symbols alphabet, composed with
a binary prefix encoding of these three symbols. Without the precise timing of the standard, the binary level can no
longer be considered. Then the analog to logical decoding naturally
takes place at the level of the intermediate alphabet of dot and dash . However, the analog to logical decoder can still decode to
the previous 3 symbols alphabet, thus preserving the applicability of
our analysis. | {
"source": [
"https://cs.stackexchange.com/questions/39920",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/29254/"
]
} |
40,097 | I thought I understood dependent typing (DT) properly, but the answer to this question: https://cstheory.stackexchange.com/questions/30651/why-was-there-a-need-for-martin-l%C3%B6f-to-create-intuitionistic-type-theory has had me thinking otherwise. After reading up on DT and trying to understand what they are, I'm trying to wonder, what do we gain by this notion of DTs? They seem to be more flexible and powerful than simply typed lambda calculus (STLC), although I can't understand "how/why" exactly. What is that we can do with DTs that cannot be done with STLC? Seems like adding DTs makes the theory more complicated, but what's the benefit? From the answer to the above question: Dependent types were proposed by de Bruijn and Howard who wanted to
extend the Curry-Howard correspondence from propositional to
first-order logic. This seems to make sense at some level, but I'm still unable to grasp the big-picture of "how/why"? Maybe an example explicitly show this extension of the C-H correspondence to FO logic could help hit the point home in understanding what is the big deal with DTs? I'm not sure I comprehend this as well I ought to. | Expanding my comment: Dependent types can type more programs. "More" simply means that the set of programs typable with dependent types is a proper superset of the programs typable in the simply-typed $\lambda$-calculus (STLC). An example would be $List_{2*3+4}(\alpha)$, the lists of length $10$, carrying elements of type $\alpha$. The expression $2*3+4$ is at the same time a program and part of a type. You cannot do this in the STLC. The key rule that distinguishes dependent from non-dependent types is application: $$
\newcommand{\TYPES}[3]{#1 \vdash #2 : #3}
\newcommand{\SUBST}[2]{\{#1/#2\}}
\frac{
\TYPES{\Gamma}{\color{red}{M}}{A \rightarrow B}
\qquad
\TYPES{\Gamma}{\color{red}{N}}{A} }{
\TYPES{\Gamma}{\color{red}{MN}}{B} }
\qquad
\frac{
\TYPES{\Gamma}{{\color{red}M}}{\Pi x^A. B}
\qquad
\TYPES{\Gamma}{\color{red}{N}}{A} }{
\TYPES{\Gamma}{{\color{red}{MN}}}{B\SUBST{{\color{red}N}}{x}} }
$$ On the left you have the STLC, where programs in the premises 'flow' only into the program of the conclusion. In contrast, in the dependent application rule on the right, the program $N$ from the right premise 'flows' into the type in the conclusion$^1$. In order to be able to parameterise types by programs, the syntax of dependent types must be richer, and to ensure that types are well-formed we use a second 'typing system' called kinds that constrains the types. This kinding system is essentially the STLC, but "one level up". There are many explanations of dependent types. Some examples. Dependent Types at Work , by Bove and Dybjer. Dependent Types, by Aspinall and Hofmann. Dependently Typed Programming in Agda , by Norell and Chapman. Lambda Calculi with Types , by Barendregt. $^1$ In terms of colours: with non-dependent types, black expressions in the conclusion are constructed from black expressions in the premises while
red expressions in the conclusion are constructed from red expressions in the premises. With dependent types the colours can be mixed by having black parts of the conclusion being constructed from red and black parts of the premise. | {
"source": [
"https://cs.stackexchange.com/questions/40097",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/8879/"
]
} |
40,400 | Suppose a program was written in two distinct languages, let them be language X and language Y, if their compilers generate the same byte code, why I should use language X instead of the language Y? What defines that one language is faster than other? I ask this because often you see people say things like: "C is the fastest language, ATS is a language fast as C". I was seeking to understand the definition of "fast" for programming languages. | There are many reasons that may be considered for choosing a language
X over a language Y. Program readability, ease of programming,
portability to many platforms, existence of good programming
environments can be such reasons. However, I shall consider only the
speed of execution as requested in the question. The question does not
seem to consider, for example, the speed of development. Two languages can compile to the same bytecode, but it does not mean
that the same code will be produced. Actually, bytecode is only code for a specific virtual machine. It does
have engineering advantages, but does not introduce fundamental
differences with compiling directly for specific hardware. So you
might as well consider comparing two languages compiled for direct
execution on the same machine. This said, the issue of relative speed of languages is an old one,
dating back to the first compilers. For many years, in those early times, professionals considered that
hand written code was faster than compiled code. In other words,
machine language was considered faster than high level languages such
as Cobol or Fortran. And it was, both faster and usually smaller. High
level languages still developed because they were much easier to use
for many people who were not computer scientists. The cost of using
high level languages even had a name: the expansion ratio, which could
concern the size of the generated code (a very important issue in
those times) or the number of instructions actually executed. The
concept was mainly experimental, but the ratio was greater than 1 at
first, as compilers did a fairly simple-minded job by today's standards. Thus machine language was faster than, say, Fortran. Of course, that changed over the years, as compilers became more
sophisticated, to the point that programming in assembly language is
now very rare. For most applications, assembly language programs
compete poorly with code generated by optimizing compilers. This shows that one major issue is the quality of the compilers
available for the language considered, their ability to analyse the source
code, and to optimize it accordingly. This ability may depend to some extent on the features of the language
to emphasize the structural and mathematical properties of the source
in order to make the work easier for the compiler. For example, a
language could allow the inclusion of statements about the algebraic
properties of user defined functions, so as to allow the compiler to
use these properties for optimization purposes. The compiling process may be easier, hence producing better code, when
the programming paradigm of the language is closer to the features of
the machines that will interpret the code, whether real or virtual
machine. Another point is whether the paradigms implemented in the language are
close to the type of problem being programmed. It is to be expected
that a programming language specialized for specific programming
paradigms will compile very efficiently features related to that
paradigm. Hence the choice of a programming language may depend, for
clarity and for speed, on the choice of a programming language
adapted to the kind of problem being programmed. The popularity of C for system programming is probably due to the fact
that C is close to the machine architecture, and that system
programming is directly related to that architecture too. Some other problems will be more easily programmed, with faster
execution using logic programming and constraint resolution languages . Complex reactive systems can be very efficiently programmed with specialized synchronous programming languages like Esterel , which embody very specialized knowledge about such systems and generate very fast code. Or to take an extreme example, some languages are highly specialized,
such as syntax description languages used to program parsers. A parser
generator is nothing but a compiler for such languages. Of course, it
is not Turing complete, but these compilers are extremely good for
their specialty: producing efficient parsing programs. The domain of
knowledge being restricted, the optimization techniques can be very
specialized and tuned very finely. These parser generators are usually
much better than what could be obtained by writing code in another
language. There are many highly specialized languages with compilers that produce excellent and fast code for a restricted class of problems. Hence, when writing a large system, it may be advisable not to rely on
a single language, but to choose the best language for different
components of the system. This, of course, raises problems of
compatibility. Another point that matters often is simply the existence of efficient libraries for the topics being programmed. Finally, speed is not the only criterion and may be in conflict with
other criteria such as code safety (for example with respect to bad
input, or resilience to system errors), memory use, ease of
programming (though paradigm compatibility may actually help that),
object code size, program maintainability, etc. Speed is not always the most important parameter. Also it may take different guises, like complexity which may be average complexity or worst-case complexity. Also, in a large system as in a smaller program, there are parts where speed is critical, and others where it matters little. And it is not always easy to determine that in advance. | {
"source": [
"https://cs.stackexchange.com/questions/40400",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/27523/"
]
} |
41,664 | It's my understanding that when you XOR something, the result is the sum of the two numbers mod $2$ . Why then does $4 \oplus 2 = 6$ and not $0$ ? $4+2=6$ , $6%2$ doesn't equal $6$ . I must be missing something about what "addition modulo 2" means, but what? 100 // 4 010 // XOR against 2 110 = 6 // why not zero if xor = sum mod 2? | The confusion here stems from a missing word. A correct statement is "The result of XORing two bits is the same as that of adding those two bits mod 2." For example, $(0+1)\bmod 2 = 1\bmod 2 = 1=(0\text{ XOR }1)$ and $(1+1) \bmod 2= 2\bmod 2 = 0 =(1\text{ XOR }1)$ | {
"source": [
"https://cs.stackexchange.com/questions/41664",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/20503/"
]
} |
42,988 | There are lots of definitions online about what a Context-Free Grammar is, but nothing I find is satisfying my primary trouble: What context is it free of? To investigate, I Googled "context sensitive grammar" but I still failed to find what the "context" was all about. Can someone please explain what the context term refers to in these names? | You are right, there always is a context in some sense. I don't think you can understand what "context" means in "context-free" without understanding a production. A production is a substitution rule. It says that, to generate strings within the language, you can substitute what is on the left for what is on the right: A -> xy This means that the abstract sequence A can be replaced by the character "x" followed by the character "y". You can also have more complex productions: zA -> xy This means that the character "z" followed by the abstract sequence A can be replaced by the characters "x" and "y". A context-free production simply means that there is only one thing on the left hand side. The first example is context-free, because A can be replaced by "x" and "y" no matter what comes before or after it. However, in the second example, the character "z" has to appear before the A, and then the combination can be replaced by "x" and "y", so there is some context involved. A context-free grammar is then just a grammar with only context-free productions. The second example is actually an example of an unrestricted production. There is another category that is between context-free and unrestricted called "context-sensitive". An example of a context-sensitive production is: zA -> zxy The difference being that what comes before A (and after) on the left hand side has to be preserved on the right. This effectively means that only A is substituted, but can only be substituted in the proper context. | {
"source": [
"https://cs.stackexchange.com/questions/42988",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/4348/"
]
} |
44,305 | Perhaps my limited understanding of the subject is incorrect, but this is what I understand so far: Functional programming is based off of Lambda Calculus, formulated by
Alonzo Church. Imperative programming is based off of the Turing machine model, made
by Alan Turing, Church's student. Lambda calculus is as powerful and able as the Turing Machine, meaning they are equivalent in computational power. If functional programming is based off of Lambda Calculus and not the Turing machine, then why are some (or all) of them described to be Turing complete, and not Lambda complete or something like that? Is the term "Turing completeness" special in any way to Turing machines, or is it just a word? Lastly, if imperative languages are based off of the Turing Machine, and computers are basically Turing machines, without infinite memory, does that mean they perform better than functional programming languages on our modern PCs? If that's the case, then what would be the equivalent of a lambda calculus machine? I know this seems to be 3 separate questions, but they're all closely related, and each is dependent on the previous question being a valid question to begin with. | In a nutshell : What characterizes imperative programming languages as close to Turing
machines and to usual computers such as PCs, (themselves closer to
random access machines (RAM) rather than to Turing machine) is the
concept of an explicit memory that can be modified to store (intermediate) results. It is an automata view of computation, with a concept of a
state (comprising both finite state control and memory content) that can change as the computation proceeds. Most other models are more abstract. Though they may express the
computation as a succession of transformation steps of an original
structure, these transformations are applied in a sort of atemporal
universe of mathematical meanings. This may preserve properties, such
as referential transparency, that may make mathematical analysis
simpler. But it is more remote from natural physical models that rely
on the concept of memory. Thus there are no natural functional machines, except in a larger sense
as explained below, since software is not really separable from
hardware. The reference to Turing as the yardstick of computability comes probably from the fact that his model, the Turing machine was closest to this physical realizability constraint, which made it a more intuitive model of computation. Further considerations : There are many models of computation, which were designed to capture in
the most general possible way the concept of a computation. They
include Turing machines, actually in many different flavors, the
lambda calculus (flavors too), semi-Thue rewriting systems, partial
recursive function,
combinatory logic. They all capture some aspects of the various techniques used by
mathematicians to express or conduct computations. And most have
been used to some extent as the basis of some programming language
design (e.g. Snobol for rewriting systems, APL for combinators, Lisp/Scheme for lambda calculus) and can often be combined in diverse ways in modern programming languages. One major result is that all these computation models were proved
equivalent, which led to the Church-Turing thesis that no physically
realizable models of computation can do more than any of these models.
A model of computation is said to be Turing complete if it can be proved
to be equivalent to one of these models, hence equivalent to all of
them. The name could have been different. The choice of the Turing machine
(TM) as the reference is probably due to the fact that it is probably
the simplest of these models, mimicking closely (though
simplistically) the way a human computes and fairly easy to implement
(in a limited finite form) as a physical device, to such an extent
that Turing machines have been constructed with Lego sets . The basic idea requires no mathematical sophistication. It is probably the simplicity and
realizability of the model that gave it this reference position. At the time Alan Turing created his computing device, other proposals
were on the table to serve as formal definition of computability, a
crucial issue for the foundations of mathematics (see Entscheidungsproblem ). The Turing proposal was considered by the
experts of the time as the one most convincingly encompassing known
work on what calculability should be (see Computability and
Recursion , R.I. Soare, 1996, see section 3.2). The various proposals were proved equivalent, but Turing's was more convincing. [from comments by Yuval Filmus] It should be noted that, from a hardware point of view, our computers
are not Turing machines, but rather what is called Random Access
Machines (RAM) , which are also Turing complete. Purely imperative languages (whatever that might mean) are probably the
formalisms used for the most basic models, such as Turing machines, or
the assembly language (skipping its binary coding) of computers. Both
are notoriously unreadable, and very hard to write significant
programs with. Actually, even assembly language has some higher level
features to ease programming a bit, compared to direct use of machine
instructions. Basic imperative models are close to the physical
world, but not very usable. This led quickly to the development of higher level models of
computation, which started mixing to it a variety of computational
techniques, such as subprogram and function calls, naming of memory
location, scoping of names, quantification and dummy variables,
already used in some form in mathematics and logic, and even very
abstract concepts such as reflection ( Lisp 1958). The classification of programming languages into programming paradigm
such as imperative, functional, logic, object oriented is based on the
preeminence of some of these techniques in the design of the language,
and the presence or absence of some computing features that enforce
some properties for programs or program fragments written in the
language. Some models are convenient for physical machines. Some others are more
convenient for a high-level description of algorithms, and that may
depend on the type of algorithm that is to be described. Some
theoretician even use non deterministic specification of algorithms,
and even that can be translated into more conventional programming terms.
But there is no mismatch problem, because we developed a sophisticated compiler/interpreter technology that can translate each model into another as needed (which is also the basis of the Church-Turing thesis). Now, you should never look at your computer as raw hardware. It does
contain boolean circuitry that does very elementary processing. But
much of it is driven by micro-programs inside the computer that you
never get to know about. Then you have the operating system that makes
your machine appear even more different from what the hardware does. On top
of that you may have a virtual machine that executes byte-code, and
then a high-level language such as Pyva and Jathon, or Haskell, or
OCaml, that can be compiled into byte code. At each level you see a different computation model. It is very hard
to separate hardware level from the software level thus to assign a
specific computational model to a machine. And since they are all
intertranslatable, the idea of an ultimate hardware computation model
is pretty much an illusion. The lambda calculus machine does exist: it is a computer that can
reduce lambda calculus expressions. And that is easily done. About specialized machine architectures Actually, complementing Peter Taylor's answer , and following up on
hardware/software intertwining, specialized machines have been
produced to be better adapted to a specific paradigm, and had their
basic software written in a programming language based on that
paradigm. These include The Burroughs B5000 and its successors (1960s), that were adapted for
efficient implementation of recursion, represented at the time by
the language Algol 60 . The Western Digital WD/9000 Pascal MicroEngine , a machine based on
a microprogrammed bytecode specialized for the Pascal programming
language, in the early 1980s. Several brands of Lisp Machines in the 1980s. Fundamentally, these are also imperative hardware structures, but mitigated with
special hardware features or microprogrammed interpreters to better
adapt to the intended paradigm. Actually, hardware specialized for specific paradigms does not seem to
have ever been successful in the long run. The reason is that the
compiling technology to implement any paradigm on vanilla hardware
became more and more effective, so that specialized hardware was not
so much needed. In addition, hardware performance was fast improving,
but the cost of improvement (including evolution of basic software)
was more easily amortized on vanilla hardware than on specialized
hardware. Specialized hardware could not compete in the long run. Nevertheless, and though I have no precise data on this, I would suspect that these ventures left some ideas that did influence the evolution of machines, memories, and instruction sets architecture. | {
"source": [
"https://cs.stackexchange.com/questions/44305",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/22046/"
]
} |
44,422 | In my Algorithms and Data Structures course, professors, slides and the book ( Introduction to Algorithms, 3rd edition ) have been using the word NIL to denote for example a child of a node (in a tree) that does not exist. Once, during a lecture, instead of saying NIL , my classmate said null , and the professor corrected him, and I don't understand why professors emphasise this word. Is there a reason why people use the word NIL instead of null , or none , or any other word? Does NIL have some particular meaning that the others do not have? Is there some historical reason? Note that I have seen also a few places around the web where, e.g., the word null was used instead of NIL , but usually this last one is used. | As far as I'm concerned, null , nil , none and nothing are common names for the same concept: a value which represents the “absence of a value”, and which is present in many different types (called nullable types ). This value is typically used where a value is normally present, but may be omitted, for example an optional parameter. Different programming languages implement this differently, and some languages might not have any such concept. In languages with pointers, it's a null pointer . In many object-oriented languages, null is not an object: calling any method on it is an error. To give a few examples: In Lisp, nil is commonly used to stand for the absence of a value. Unlike most other languages, nil has structure — it's a symbol whose name is "NIL" . It's also the empty list (because a list should be a cons cell, but sometimes there is no cons cell because the list is empty). Whether it's implemented by a null pointer under the hood, or as a symbol like any other, is implementation-dependent. In Pascal, nil is a pointer value (valid in any pointer type) that may not be dereferenced. In C and C++, any pointer type includes a NULL value which is distinct from any pointer to a valid object. In Smalltalk, nil is an object with no method defined. In Java and in C#, null is a value of any object type. Any attempt to access a field or method of null triggers an exception. In Perl, undef is distinct from any other scalar value and used throughout the language and library to indicate the absence of a “real” value. In Python, None is distinct from any other value and used throughout the language and library to indicate the absence of a “real” value. In ML (SML, OCaml), None is a value of the any type in the type scheme 'a option , which contains None and Some x for any x of type 'a . In Haskell, the similar concept uses the names Nothing and Just x for the values and Maybe a for the type. In algorithm presentations, which name is used tends to stem from the background of the presenter or the language that is used in code examples. In semantics presentations, different names may be used to refer to e.g. the NULL identifier which denotes a pointer constant in the language, and the $\mathsf{nil}$ value in the semantics. I don't think there's any standard naming scheme, and some presentations leave it up to a font difference, or don't go into concrete syntax at all. It's possible that your lecturer wants to use the word null for a null pointer constant in the programming language used in the course (Java or C#?), and NIL to denote the absence of a node in some data structures, which may or may not be implemented as a null pointer constant (for example, as seen above, in Lisp, NIL is often not implemented as a null pointer). 
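For instance, here is a minimal Python sketch (purely illustrative) of the usual arrangement, in which the data structure's NIL, meaning "no child here", is realized with the language's null value None:
class Node:
    # a binary-tree node; an absent child is marked with None, playing the role of NIL
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right
def height(node):
    # by the common textbook convention, the absent (NIL) tree has height -1
    if node is None:
        return -1
    return 1 + max(height(node.left), height(node.right))
tree = Node(2, Node(1), Node(3))
assert height(tree) == 1
Whether None is implemented as a null pointer underneath is an implementation detail of the interpreter; at the level of the data structure it only matters that it is distinct from every real node.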
This distinction would be relevant when discussing implementation techniques for data structures. When discussing the data structures themselves, the null-pointer-constant concept is irrelevant; only the not-equal-to-any-other-value concept matters. There is no standard naming scheme. Another lecturer or textbook could use different names. | {
"source": [
"https://cs.stackexchange.com/questions/44422",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/-1/"
]
} |
44,430 | I'm reading a book on compilers, Engineering a Compiler, 2nd ed. by Keith D. Cooper & Linda Torczon, and I came across two new terms that I can't understand: transitive successor and transitive predecessor of a node $i$. I tried to find definitions of those online, but I'm having a hard time with it. Everything I find talks about transitive reduction. Can someone give a clear definition of this? Any help is appreciated. | The "transitive" in both terms refers to the transitive closure of the edge relation. In a directed graph such as a control-flow graph, the successors of a node $i$ are the nodes you can reach from $i$ by following a single edge, and its predecessors are the nodes that have an edge into $i$. The transitive successors of $i$ are all nodes reachable from $i$ by following one or more edges: its successors, their successors, and so on. Symmetrically, the transitive predecessors of $i$ are all nodes from which $i$ can be reached along some path. Equivalently, $j$ is a transitive successor of $i$ exactly when the edge $(i, j)$ appears in the transitive closure of the graph; the transitive reduction you kept finding is the related notion of a smallest graph with the same reachability relation. Either set can be computed with an ordinary graph traversal starting from $i$, following edges forwards for successors and backwards for predecessors.
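A minimal Python sketch (the graph and the node names are just an example, not taken from the book) that computes both sets with a breadth-first traversal:

```python
from collections import deque

def transitive_successors(graph, i):
    # All nodes reachable from i by following one or more edges.
    seen, work = set(), deque(graph[i])
    while work:
        j = work.popleft()
        if j not in seen:
            seen.add(j)
            work.extend(graph[j])
    return seen

def transitive_predecessors(graph, i):
    # All nodes from which i can be reached: traverse the reversed edges.
    reversed_graph = {v: [] for v in graph}
    for u, targets in graph.items():
        for v in targets:
            reversed_graph[v].append(u)
    return transitive_successors(reversed_graph, i)

cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(transitive_successors(cfg, "A"))    # {'B', 'C', 'D'}
print(transitive_predecessors(cfg, "D"))  # {'A', 'B', 'C'}
```

Reversing the edges and reusing the same traversal is the standard trick for the predecessor direction.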
| {
"source": [
"https://cs.stackexchange.com/questions/44430",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/10511/"
]
} |
44,594 | I have recently stumbled upon the following interesting article which claims to efficiently compress random data sets by always more than 50%, regardless of the type and format of the data. Basically it uses prime numbers to uniquely construct a representation of 4-byte data chunks which are easy to decompress given that every number is a unique product of primes. In order to associate these sequences with the primes it utilizes a dictionary. My question is: Is this really feasible, as the authors suggest? According to the paper, their results are very efficient and always compress data to a smaller size. Won't the dictionary size be enormous? Couldn't this be used to iteratively re-compress the compressed data using the same algorithm? It is obvious, and has been demonstrated, that such techniques (where the compressed data is re-compressed as many times as possible, dramatically reducing the file size) are impossible; indeed, there would be no bijection between the set of all random data and the compressed data. So why does this feel like it would be possible? Even if the technique is not perfect as of yet, it can obviously be optimized and strongly improved. Why is this not more widely known/studied? If indeed these claims and experimental results are true, couldn't this revolutionize computing? | "always compress random data sets by more than 50%"? That's impossible. You can't compress random data; you need some structure to take advantage of. Compression must be reversible, so you can't possibly compress everything by 50% because there are far fewer strings of length $n/2$ than there are of length $n$.
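To make the counting argument concrete, here is a tiny Python sketch; the numbers are only illustrative and nothing in it comes from the paper.

```python
# Pigeonhole check: a lossless compressor cannot map every n-bit input
# to a strictly shorter output, because there are not enough outputs.
n = 16
inputs = 2 ** n                              # distinct n-bit inputs: 65536
half = 2 ** (n // 2)                         # outputs of length n/2: only 256
shorter = sum(2 ** k for k in range(n))      # all outputs shorter than n bits: 65535
print(inputs, half, shorter)
# Even using *every* shorter string as an output, two inputs must share one,
# so decompression cannot recover both; a guaranteed 50% saving is far worse.
```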
There are some major issues with the paper: They use 10 test files without any indication of their content. Is the data really random? How were they generated? They claim to achieve compression ratios of at least 50%, while their test data shows they achieve at most 50%. The paper says: "This algorithm defines a lossless strategy which makes use of the prime numbers present in the decimal number system." What? Prime numbers are prime numbers regardless of the base. Issue #1 with decompression: prime factorization is a hard problem, how do they do it efficiently? Issue #2 with decompression (this is the kicker): they multiply the prime numbers together, but doing so you lose any information about the order, since $2\cdot 5 = 10 = 5\cdot 2$. I don't think it is possible to decompress at all using their technique. I don't think this paper is very good. | {
"source": [
"https://cs.stackexchange.com/questions/44594",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/35691/"
]
} |
44,823 | On the Wikipedia page for quantum algorithm I read that "[a]ll problems which can be solved on a quantum computer can be solved on a classical computer. In particular, problems which are undecidable using classical computers remain undecidable using quantum computers." I expected that the fundamental changes that a quantum computer brings would lead to the possibility of not only solving problems that could already be solved with a classical computer, but also new problems that could not be solved before. Why is it that a quantum computer can only solve the same problems? | Because a quantum computer can be simulated using a classical computer: it's essentially just linear algebra. Given the vector of complex amplitudes describing the joint state of the qubits, you can keep track of how each quantum gate modifies that vector as time progresses. This isn't very efficient, since the state vector has $2^n$ entries for $n$ qubits (which is why people want to build actual quantum computers), but it works.
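A minimal numpy sketch of that kind of simulation; the particular gates and the two-qubit circuit are just an illustration.

```python
import numpy as np

# The state of n qubits is a vector of 2**n complex amplitudes.
state = np.zeros(4, dtype=complex)
state[0] = 1.0                                  # start in |00>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Applying a gate is just a matrix-vector product.
state = np.kron(H, I) @ state                   # H on the first qubit
state = CNOT @ state                            # entangle the two qubits

print(state)                                    # amplitudes of (|00> + |11>)/sqrt(2)
print(np.abs(state) ** 2)                       # measurement probabilities [0.5, 0, 0, 0.5]
```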
"source": [
"https://cs.stackexchange.com/questions/44823",
"https://cs.stackexchange.com",
"https://cs.stackexchange.com/users/35949/"
]
} |