76,382
This is the question: A spreadsheet keeps track of student scores on all the exams in a course. Each row of the spreadsheet corresponds to one student, and each column in a row corresponds to his/her score on one of the exams. There are r students and c exams, so the spreadsheet has r rows and c columns. Devise an algorithm to compute the exam averages for each student and the class average for each exam. (It's from an ungraded practice problem set, not homework). It seems fairly straightforward to me to do this in linear time($O(c*r)$), but I suspect there is a faster way to do it. Is there a reasonable way to do this with better O()? My thoughts: At first I thought I could do this by storing the average values and updating them, but I don't think this would really satisfy the requirements of the problem: given just the exam scores, getting the averages would take at least as much time, if not more. Any help is appreciated! In particular, helpful hints are welcomed, and a helpful hint leading me to the answer will be accepted as an answer.
To compute the exact mean (no confidence interval or estimate) of each exam, you must at least observe every student's score on that exam, which takes $\Omega(r)$ per exam. Since there are $c$ exams you must do this for, the problem takes at least $\Omega(c \cdot r)$ time, so your linear-time algorithm is already asymptotically optimal.
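As a concrete sketch (assuming the scores are given as a list of rows, which the problem statement doesn't specify), the straightforward $\Theta(r \cdot c)$ pass looks like this:

def averages(scores):
    """scores[i][j] is student i's score on exam j (r rows, c columns)."""
    r, c = len(scores), len(scores[0])
    student_avg = [sum(row) / c for row in scores]           # one average per student
    exam_avg = [sum(scores[i][j] for i in range(r)) / r      # one average per exam
                for j in range(c)]
    return student_avg, exam_avg

student_avg, exam_avg = averages([[80, 90], [60, 70]])
# student_avg == [85.0, 65.0], exam_avg == [70.0, 80.0]

Every score is read a constant number of times, matching the $\Omega(c \cdot r)$ lower bound above.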
{ "source": [ "https://cs.stackexchange.com/questions/76382", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/71666/" ] }
78,083
I am new to understanding computer science algorithms. I understand the process of the binary search, but I am having a slight misunderstanding with its efficiency. In a size of $s = 2^n$ elements, it would take, on average, $n$ steps to find a particular element. Taking the base 2 logarithm of both sides yields $\log_2(s) = n$. So wouldn't the average number of steps for the binary search algorithm be $\log_2(s)$? This Wikipedia article on the binary search algorithm says that the average performance is $O(\log n)$. Why is this so? Why isn't this number $\log_2(n)$?
When you change the base of a logarithm, the resulting expression differs only by a constant factor, which, by the definition of Big-O notation, implies that both functions belong to the same class with respect to their asymptotic behavior. For example, $$\log_{10}n = \frac{\log_{2}n}{\log_{2}10} = C \log_{2}{n}$$ where $C = \frac{1}{\log_{2}10}$. So $\log_{10}n$ and $\log_{2}n$ differ only by the constant $C$, and hence both statements are true: $$\log_{10}n \text{ is } O(\log_{2}n)$$ $$\log_{2}n \text{ is } O(\log_{10}n)$$ In general, $\log_{a}{n}$ is $O(\log_{b}{n})$ for all integers $a, b > 1$. Another interesting fact about logarithmic functions: while $n^k$ is NOT $O(n)$ for any constant $k>1$, $\log{n^k}$ is $O(\log{n})$, since $\log{n^k} = k\log{n}$ differs from $\log{n}$ by only the constant factor $k$.
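A quick numerical check of that constant-factor relationship (an illustrative sketch added here, not part of the original answer):

import math

C = 1 / math.log2(10)            # the constant relating log base 10 to log base 2
for n in (10, 1000, 10**6, 10**9):
    assert abs(math.log10(n) - C * math.log2(n)) < 1e-9   # same function up to C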
{ "source": [ "https://cs.stackexchange.com/questions/78083", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/75095/" ] }
79,942
So merge sort is a divide and conquer algorithm. While I was looking at the above diagram, I was wondering whether it was possible to basically bypass all the divide steps. If you iterated over the original array while jumping by two, you could get the elements at indices i and i+1 and put them into their own sorted arrays. Once you have all these sub-arrays ([7,14], [3,12], [9,11] and [2,6] as shown in the diagram), you could simply proceed with the normal merge routine to get a sorted array. Is iterating through the array and immediately generating the required sub-arrays less efficient than performing the divide steps in their entirety?
The confusion arises from the difference between the conceptual description of the algorithm and its implementation. Logically, merge sort is described as splitting up the array into smaller arrays and then merging them back together. However, "splitting the array" doesn't imply "creating an entirely new array in memory", or anything like that - it could be implemented in code as nothing more than the comment /* Note: array is now split into [0..n) and [n..N) */ - i.e. no actual work takes place, and the "splitting" is purely conceptual. So what you suggest certainly does work, but logically you're still "splitting" the arrays - you just don't need any work from the computer to do so :-)
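A minimal bottom-up sketch of the questioner's idea (Python, assuming a plain list of numbers): start from one-element runs and repeatedly merge neighbouring runs, so no explicit divide phase is ever executed.

def merge(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def bottom_up_mergesort(a):
    runs = [[x] for x in a]              # the "already split" single-element runs
    while len(runs) > 1:
        runs = [merge(runs[k], runs[k + 1]) if k + 1 < len(runs) else runs[k]
                for k in range(0, len(runs), 2)]
    return runs[0] if runs else []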
{ "source": [ "https://cs.stackexchange.com/questions/79942", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/60596/" ] }
80,677
So I’m currently working on something and I have converted all decimal digits 0-9 into binary. But now I want to take say 6 in binary and increase its order of magnitude by base 10 (turning 6 into 60) without converting back to base 10. Is this possible and if so is there a way to do it with any number, X --> X0 ? EDIT 1: sorry the first part of the question was super vague and I forgot to mention I’m trying to do this with logic gates.
I assume that the task is to compute $mul(10, a) = 10a$. You don't need full multiplication; a single binary adder is enough, since $$10a = 2^3a + 2a,$$ meaning you add $a$ shifted left by one bit to $a$ shifted left by three bits. For general multiplication $mul(x,y)$, please see this article.
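As an illustration (a sketch in Python; in hardware the two shifts are just wiring, so only the adder does real work):

def times_ten(a):
    # 10*a = (a << 3) + (a << 1): a shifted left 3 places plus a shifted left 1 place
    return (a << 3) + (a << 1)

assert times_ten(0b110) == 60    # 6 -> 60, with no conversion back to base 10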
{ "source": [ "https://cs.stackexchange.com/questions/80677", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/76675/" ] }
80,923
I have this [kind of funny] question in mind. Why is the non-deterministic finite automaton called non-deterministic while we define the transitions for inputs. Well, even though there are multiple and epsilon transitions, they are defined which means that the machine is deterministic for those transitions. Which means it's deterministic.
"Deterministic" means "if you put the system in the same situation twice, it is guaranteed to make the same choice both times". "Non-deterministic" means "not deterministic", or in other words, "if you put the system in the same situation twice, it might or might not make the same choice both times". A non-deterministic finite automaton (NFA) can have multiple transitions out of a state. This means there are multiple options for what it could do in that situation. It is not forced to always choose the same one; on one input, it might choose the first transition, and on another input it might choose the same transition. Here you can think of "situation" as "what state the NFA is in, together with what symbol is being read next from the input". Even when both of those are the same, a NFA still might have multiple matching transitions that can be taken out of that state, and it can choose arbitrarily which one to take. In contrast, a DFA only has one matching transition that can be taken in that situation, so it has no choice -- it will always follow the same transition whenever it is in that situation.
{ "source": [ "https://cs.stackexchange.com/questions/80923", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/54886/" ] }
81,537
A set is countable if it has a bijection with the natural numbers, and is computably enumerable (c.e.) if there exists an algorithm that enumerates its members. Any non-finite computably enumerable set must be countable since we can construct a bijection from the enumeration. Are there any examples of countable sets that are not computably enumerable? That is, a bijection between this set and the natural numbers exists, but there is no algorithm that can compute this bijection.
Are there any examples of countable sets that are not enumerable? Yes. All subsets of the natural numbers are countable but not all of them are enumerable. (Proof: there are uncountably many different subsets of $\mathbb{N}$ but only countably many Turing machines that could act as enumerators.) So any subset of $\mathbb{N}$ that you already know is not recursively enumerable is an example – such as the set of all numbers coding Turing machines that halt for every input.
{ "source": [ "https://cs.stackexchange.com/questions/81537", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/12185/" ] }
81,813
So given two DFAs, is the problem of deciding whether they recognize the same language decidable? I already know that equality of two CFLs is not decidable, but what about equality of two DFAs? Considering that most problems about DFAs are decidable, is this decidable as well?
In order to decide whether the languages accepted by two DFAs $A_1,A_2$ are the same, construct a DFA $A_\Delta$ for the symmetric difference $L(A_1) \Delta L(A_2) := (L(A_1) \setminus L(A_2)) \cup (L(A_2) \setminus L(A_1))$, and check whether $L(A_\Delta) = \emptyset$. Here are some more details. You can construct $A_\Delta$ using the product construction: build a product automaton, and use $(F_1 \times \overline{F_2}) \cup (\overline{F_1} \times F_2)$ as the set of accepting states. In order to check whether $L(A_\Delta)$ is empty or not, it suffices to check whether some accepting state is reachable from the initial state, and this can be done using BFS/DFS.
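A sketch of the whole decision procedure in Python (the dictionary representation of complete DFAs is my own assumption, not part of the original answer):

from collections import deque

def equivalent(delta1, start1, accept1, delta2, start2, accept2, alphabet):
    """BFS over the product automaton; a reachable pair that is accepting in
    exactly one DFA witnesses a word in the symmetric difference."""
    seen = {(start1, start2)}
    queue = deque(seen)
    while queue:
        p, q = queue.popleft()
        if (p in accept1) != (q in accept2):   # accepting state of the symmetric difference
            return False
        for a in alphabet:
            nxt = (delta1[(p, a)], delta2[(q, a)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True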
{ "source": [ "https://cs.stackexchange.com/questions/81813", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/77793/" ] }
82,169
I would like to know if there is a rule to prove that $A \lor (A \land \neg B) = A$. For example, if I use the distributive law I only get $(A \lor A) \land (A \lor \neg B)$.
There are many ways to see this. One is a truth table. Another is to use the distributive rule: $$ A \lor (A \land \lnot B) = (A \land \top) \lor (A \land \lnot B) = A \land (\top \lor \lnot B) = A \land \top = A. $$
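And a one-line exhaustive check of the identity (an illustrative sketch, not part of the original answer):

from itertools import product
assert all((A or (A and not B)) == A for A, B in product([False, True], repeat=2))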
{ "source": [ "https://cs.stackexchange.com/questions/82169", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/78333/" ] }
82,180
Suppose you have an array of size $n \geq 6$ containing integers from $1$ to $n − 5$, inclusive, with exactly five repeated. I need to propose an algorithm that can find the repeated numbers in $O(n)$ time. I cannot, for the life of me, think of anything. I think sorting, at best, would be $O(n\log n)$? Then traversing the array would be $O(n)$, resulting in $O(n^2\log n)$. However, I'm not really sure if sorting would be necessary as I've seen some tricky stuff with linked list, queues, stacks, etc.
You could create an additional array $B$ of size $n$. Initially set all elements of the array to $0$. Then loop through the input array $A$ and increase $B[A[i]]$ by 1 for each $i$. After that you simply check the array $B$: loop over $A$, and if $B[A[i]] > 1$ then $A[i]$ is repeated. You solve it in $O(n)$ time at the cost of $O(n)$ extra memory; this works because your integers are restricted to the range $1$ to $n-5$.
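A sketch of that counting solution in Python (the set at the end, used to avoid reporting a value twice, is my own addition):

def find_repeated(A):
    n = len(A)
    counts = [0] * (n + 1)          # values lie between 1 and n-5, so n+1 slots suffice
    for x in A:
        counts[x] += 1
    return sorted({x for x in A if counts[x] > 1})

assert find_repeated([1, 2, 3, 1, 2, 3, 4, 5, 4, 5, 6]) == [1, 2, 3, 4, 5]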
{ "source": [ "https://cs.stackexchange.com/questions/82180", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/78344/" ] }
82,626
I am new to programming language theory. I was watching some online lectures in which the instructor claimed that a function with polymorphic type forall t: Type, t->t must be the identity, but did not explain why. Can someone explain to me why? Maybe a proof of the claim from first principles.
The first thing to note is that this isn't necessarily true. For example, depending on the language, a function with that type, besides being the identity function, could: 1) loop forever, 2) mutate some state, 3) return null, 4) throw an exception, 5) perform some I/O, 6) fork a thread to do something else, 7) do call/cc shenanigans, 8) use something like Java's Object.hashCode, 9) use reflection to determine if the type is an integer and increment it if so, 10) use reflection to analyze the call stack and do something based on the context within which it is called, 11) probably many other things, and certainly arbitrary combinations of the above. So the property that leads to this, parametricity, is a property of the language as a whole, and there are stronger and weaker variations of it. For many of the formal calculi studied in type theory, none of the above behaviors can occur. For example, in System F (the pure polymorphic lambda calculus), where parametricity was first studied, none of the above behaviors is possible: it simply doesn't have exceptions, mutable state, null, call/cc, I/O, or reflection, and it's strongly normalizing, so it can't loop forever. As Gilles mentioned in a comment, the paper Theorems for free! by Phil Wadler is a good introduction to this topic, and its references will go further into the theory, specifically the technique of logical relations. That link also lists some other papers by Wadler on the topic of parametricity. Since parametricity is a property of the language, proving it requires first formally articulating the language and then a relatively complicated argument. The informal argument for this particular case, assuming we're in the polymorphic lambda calculus, is that since we know nothing about t, we can't perform any operations on the input (e.g. we can't increment it because we don't know if it is a number) or create a value of that type (for all we know t = Void, a type with no values at all). The only way to produce a value of type t is to return the one that is given to us. No other behaviors are possible. One way to see that is to use strong normalization and show that there is only one normal-form term of this type.
{ "source": [ "https://cs.stackexchange.com/questions/82626", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/76133/" ] }
82,679
Many seem to believe that $P\ne NP$, but many also believe it to be very unlikely that this will ever be proven. Is there not some inconsistency to this? If you hold that such a proof is unlikely, then you should also believe that sound arguments for $P\ne NP$ are lacking. Or are there good arguments for $P\ne NP$ being unlikely, in a similar vein to say, the Riemann hypothesis holding for large numbers, or the very high lower bounds on the number of existing primes with a small distance apart viz. the Twin Prime conjecture?
People are skeptical because: (1) no proof has come from an expert without having been rescinded shortly thereafter; (2) so much effort has been put into finding a proof, with no success, that it's assumed one will either be substantially complicated or have to invent new mathematics; and (3) the "proofs" that arise frequently fail to address hurdles which are known to exist - for example, many claim that 3SAT is not in P while providing an argument that also applies to 2SAT. To be clear, the skepticism is of the proofs, not of the result itself.
{ "source": [ "https://cs.stackexchange.com/questions/82679", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/78941/" ] }
83,178
On the Wikipedia page for Fixed Point Combinators is written the rather mysterious text The Y combinator is an example of what makes the Lambda calculus inconsistent. So it should be regarded with suspicion. However it is safe to consider the Y combinator when defined in mathematic logic only. Have I entered into some sort of spy novel? What in the world is meant by the statements that $\lambda$-calculus is "inconsistent" and that it should be "regarded with suspicion" ?
It's inspired by real events, but the way it's stated is barely recognizable and “should be regarded with suspicion” is nonsense. Consistency has a precise meaning in logic: a consistent theory is one where not all statements can be proved. In classical logic, this is equivalent to the absence of a contradiction, i.e. a theory is inconsistent if and only if there is a statement $A$ such that the theory proves both $A$ and its negation $\neg A$. So what does this mean regarding the lambda calculus? Nothing. The lambda calculus is a rewriting system, not a logical theory. It is possible to view the lambda calculus in relation to logic. Regard variables as representing a hypothesis in a proof, lambda abstractions as proofs under a certain hypothesis (represented by the variable), and application as putting together a conditional proof and a proof of the hypothesis. Then the beta rule corresponds to simplifying a proof by applying modus ponens, a fundamental principle of logic. This, however, only works if the conditional proof is combined with a proof of the right hypothesis. If you have a conditional proof that assumes $n=3$ and you also have a proof of $n=2$, you can't combine them together. If you want to make this interpretation of the lambda calculus work, you need to add a constraint that only proofs of the proper hypothesis get applied to conditional proofs. This is called a type system, and the constraint is the typing rule that says that when you pass an argument to a function, the type of the argument must match the parameter type of the function. The Curry-Howard correspondence is a parallel between typed calculi and proof systems: types correspond to logical statements; terms correspond to proofs; inhabited types (i.e. types such that there is a term of that type) correspond to true statements (i.e. statements such that there is a proof of that statement); program evaluation (i.e. rules such as beta) corresponds to transformations of proofs (which had better transform correct proofs into correct proofs). A typed calculus that has a fixed point combinator such as $Y$ allows building a term of any type (try evaluating $Y (\lambda x.x)$), so if you take the logical interpretation through the Curry-Howard correspondence, you get an inconsistent theory. See Does the Y combinator contradict the Curry-Howard correspondence? for more details. This is not meaningful for the pure lambda calculus, i.e. for the lambda calculus without types. In many typed calculi, it's impossible to define a fixed point combinator. Those typed calculi are useful with respect to their logical interpretation, but not as a basis for a Turing-complete programming language. In some typed calculi, it's possible to define a fixed point combinator. Those typed calculi are useful as a basis for a Turing-complete programming language, but not with respect to their logical interpretation. In conclusion: the lambda calculus is not “inconsistent”, that concept does not apply. A typed lambda calculus that assigns a type to every lambda term is inconsistent. Some typed lambda calculi are like that; others make some terms untypable and are consistent. Typed lambda calculi are not the sole raison d'être for the lambda calculus, and even inconsistent typed lambda calculi are very useful tools — just not to prove things.
{ "source": [ "https://cs.stackexchange.com/questions/83178", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/23002/" ] }
84,487
If one attempted to download a file at a speed of 800 Mb/s (100 MB/s) onto a hard drive with a write speed of 500 Mb/s (62.5 MB/s), what would happen? Would the system cap the download speed?
Many protocols, including TCP, which is the most widely used protocol on the Internet, use something called flow control. Flow control simply means that TCP will ensure that a sender does not overwhelm a receiver by sending packets faster than the receiver can empty its buffer. The idea is that a node receiving data will send some kind of feedback to the node sending the data to let it know about its current condition. This two-way feedback allows both machines to use their resources optimally and prevents problems due to mismatches in their hardware. https://en.wikipedia.org/wiki/Flow_control_(data)
{ "source": [ "https://cs.stackexchange.com/questions/84487", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/80760/" ] }
84,643
In Strassen's matrix multiplication, we state one strange (at least to me) fact: that multiplication of two 2 x 2 matrices takes 7 multiplications. Question: How to prove that it is impossible to multiply two 2 x 2 matrices in 6 multiplications? Please note that the matrices are over the integers.
This is a classical result of Winograd: On multiplication of 2x2 matrices. Strassen showed that the exponent of matrix multiplication is the same as the exponent of the tensor rank of matrix multiplication tensors: the algebraic complexity of $n\times n$ matrix multiplication is $O(n^\alpha)$ iff the tensor rank of $\langle n,n,n \rangle$ (the matrix multiplication tensor corresponding to the multiplication of two $n\times n$ matrices) is $O(n^\alpha)$. Strassen's algorithm uses the easy direction to deduce an $O(n^{\log_2 7})$ algorithm from the upper bound $R(\langle 2,2,2 \rangle) \leq 7$. Winograd's result implies that $R(\langle 2,2,2 \rangle)=7$. Landsberg showed that the border rank of $\langle 2,2,2 \rangle$ is also 7, and Bläser et al. recently extended that to support rank and border support rank. Border rank and support rank are weaker (= smaller) notions of rank that have been used (in the case of border rank) or proposed (in the case of support rank) in fast matrix multiplication algorithms.
{ "source": [ "https://cs.stackexchange.com/questions/84643", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/69130/" ] }
84,649
I have two arrays, namely $a$ and $b$. Both have the same length $n$. I have to find the maximum value of $\sum a_i b_j$, in which every element can be used at most one time. My algorithm for solving this problem is: Sort both $a$ and $b$ in non increasing order. Pick the values from the array in order of greatest to smallest. Calculate their product and add them to sum. On the arrays $a = \{2,3,4\}$ and $b = \{4,5,6\}$, the algorithm runs as follows: Firstly, sorting the arrays: $a = \{4,3,2\}$ and $b = \{6,5,4\}$. Then picking values from the first to last, gives the answer $(4\cdot6) + (3\cdot5) + (2\cdot4) = 24 + 15 + 8 = 47$. Here what I have used is a greedy algorithm. How to prove its correctness? What I want to know is, how to prove that this algorithm always gives the maximum answer?
Hint: use an exchange argument. Sort both arrays in non-increasing order, as your algorithm does, and consider any other pairing of the elements. If a pairing is not the sorted-order pairing, you can find elements $a_i \ge a_j$ where $a_i$ is paired with $b_l$ and $a_j$ is paired with $b_k$ while $b_k \ge b_l$. Swapping the two partners changes the sum by $(a_i b_k + a_j b_l) - (a_i b_l + a_j b_k) = (a_i - a_j)(b_k - b_l) \ge 0$, so the swap never decreases the total. Repeating such swaps turns any pairing into the sorted-order pairing without ever lowering the sum, so the greedy pairing is optimal. (This statement is exactly the rearrangement inequality.)
{ "source": [ "https://cs.stackexchange.com/questions/84649", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/73952/" ] }
84,723
Quick sort algorithm can be divided into following steps Identify pivot. Partition the linked list based on pivot. Divide the linked list recursively into 2 parts. Now, if I always choose last element as pivot, then identifying the pivot element (1st step) takes $\mathcal O(n)$ time. After identifying the pivot element, we can store its data and compare it with all other elements to identify the correct partition point (2nd step). Each comparison will take $\mathcal O(1)$ time as we store the pivot data and each swap takes $\mathcal O(1)$ time. So in total it takes $\mathcal O(n)$ time for $n$ elements. So the recurrence relation is: $T(n) = 2T(n/2) + n$ which is $\mathcal O(n \log n)$ which is the same as in merge sort with a linked list. So why is merge sort preferred over quick sort for linked lists?
The memory access pattern in quicksort is random, and the out-of-the-box implementation is in-place, so it uses many swaps of cells to achieve an ordered result. Merge sort, on the other hand, works externally: for arrays it requires an additional array to produce the ordered result, which means extra space overhead, but in the case of a linked list it is possible to pull nodes out and start merging them directly, and the access is more sequential in nature. Because of this, quicksort is not a natural choice for linked lists, while merge sort takes great advantage of them. The Landau notation might (more or less, because quicksort is still $\mathcal O(n^2)$ in the worst case) suggest they are equivalent, but the hidden constant for quicksort on a linked list is far higher. In the average case both algorithms are in $\mathcal O(n\log n)$, so asymptotically they are the same; the preference is strictly due to the hidden constants, and sometimes stability is the issue (quicksort is inherently unstable, merge sort is stable).
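The step that makes merge sort so list-friendly is merging two already-sorted lists by relinking nodes: no auxiliary array, purely sequential access. A sketch (Python; the minimal Node class is an assumption for illustration):

class Node:
    def __init__(self, val, nxt=None):
        self.val, self.nxt = val, nxt

def merge(a, b):
    """Merge two sorted linked lists by relinking existing nodes (O(1) extra space)."""
    dummy = tail = Node(None)
    while a and b:
        if a.val <= b.val:
            tail.nxt, a = a, a.nxt
        else:
            tail.nxt, b = b, b.nxt
        tail = tail.nxt
    tail.nxt = a or b          # append whichever list is left over
    return dummy.nxt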
{ "source": [ "https://cs.stackexchange.com/questions/84723", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/63873/" ] }
84,860
Let's say we are working with a system that has 40 physical address bits. The total physical address space (assuming byte-addressable memory) is $2^{40}$ bytes, or 1 TiB. And if virtual addresses are 48 bits in length, that means there are more addresses available to virtual memory than there are locations in physical memory. This makes sense to me, because the "excess" addresses could refer to hard disk locations as well. However, what I don't understand is how the translation between virtual and physical addresses occurs. I assume there is a mapping stored somewhere which links VAS locations to the physical locations. If there are more virtual address locations than physical locations, how can all of these mappings possibly be stored in memory? At minimum you would need 48 bits to store each virtual address, and then another 40 to store the physical location it maps to. So obviously you cannot just store a 1:1 mapping of each virtual address to its physical counterpart, as mapping every location would take more memory than physical memory itself. What exactly am I missing here?
The trick to making this work is "paging." When bringing data from a hard disk into physical memory, you don't just bring a few bytes. You bring an entire page. 4k bytes is a very common page size. If you only need to keep track of pages, not each individual byte, the mapping becomes much cheaper. If you have a 48-bit address space and 4096-byte pages, you only need to keep track of 2^36 pages (roughly 69 billion pages). That's much easier! The record of where all of the pages are found is known as a "page table." If you actually need 1-256 TiB of memory, then giving up a few gigabytes to store this page table isn't a big deal. In practice, however, we'll do things like use multi-level page tables, which let us be a bit more efficient, keeping page-table entries only for regions of the address space that we are actually using.
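A small sketch of the arithmetic (Python; the 4-level, 9-bits-per-level split shown in split_va is one common x86-64-style layout, used here as an illustrative assumption):

PAGE_OFFSET_BITS = 12                                  # 4096-byte pages
VA_BITS = 48

virtual_pages = 2 ** (VA_BITS - PAGE_OFFSET_BITS)      # 2^36, roughly 69 billion pages

def split_va(va):
    # Split a 48-bit virtual address into 4 page-table indexes of 9 bits each
    # plus a 12-bit offset within the page (one common multi-level layout).
    offset = va & 0xFFF
    indexes = [(va >> (PAGE_OFFSET_BITS + 9 * level)) & 0x1FF for level in range(4)]
    return indexes, offset

print(virtual_pages)    # 68719476736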
{ "source": [ "https://cs.stackexchange.com/questions/84860", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/81196/" ] }
85,023
In automata theory, we all read automata as finite automata, from the very beginning. What I want to know is, why are automata finite? To be clear, what is it in an automaton that is finite - the alphabet, language, strings made with regular expressions, or what? And are there (in theory) any non-finite automata?
All automaton models you'll typically encounter are finitely represented; otherwise there would be uncountably many, which means they are not captured by Turing-complete models. Or, in CS-think, they'd be useless¹. "Finite automata" are called finite because they only have a finite set of configurations (the input string aside). Pushdown automata, for instance, have a stack that can have arbitrary content -- there are infinitely many possible configurations. Nota bene: Configurations of PDAs are still finitely represented! In fact, any computational model that falls inside Turing-computability has to have finitely representable configurations, otherwise TMs wouldn't be able to simulate them. ¹ I consciously disregard hypercomputation here for the purpose of this question.
{ "source": [ "https://cs.stackexchange.com/questions/85023", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/70573/" ] }
85,327
I have noticed that I find it far easier to write down mathematical proofs without making any mistakes than to write down a computer program without bugs. It seems that this is something more widespread than just my experience. Most people make software bugs all the time in their programming, and they have the compiler to tell them what the mistake is all the time. I've never heard of someone who wrote a big computer program with no mistakes in it in one go, and had full confidence that it would be bugless. (In fact, hardly any programs are bugless, even many highly debugged ones.) Yet people can write entire papers or books of mathematical proofs without any compiler ever giving them feedback that they made a mistake, and sometimes without even getting feedback from others. Let me be clear: this is not to say that people don't make mistakes in mathematical proofs, but for even mildly experienced mathematicians, the mistakes are usually not that problematic, and can be solved without the help of some "external oracle" like a compiler pointing to your mistake. In fact, if this wasn't the case, then mathematics would scarcely be possible, it seems to me. So this led me to ask the question: What is so different about writing faultless mathematical proofs and writing faultless computer code that makes it so that the former is so much more tractable than the latter? One could say that it is simply the fact that people have the "external oracle" of a compiler pointing them to their mistakes that makes programmers lazy, preventing them from doing what's necessary to write code rigorously. This view would mean that if they didn't have a compiler, they would be able to be as faultless as mathematicians. You might find this persuasive, but based on my experience programming and writing down mathematical proofs, it seems intuitively to me that this is really not the explanation. There seems to be something more fundamentally different about the two endeavours. My initial thought is that the difference might be that, for a mathematician, a correct proof only requires every single logical step to be correct. If every step is correct, the entire proof is correct. On the other hand, for a program to be bugless, not only does every line of code have to be correct, but its relation to every other line of code in the program has to work as well. In other words, if step $X$ in a proof is correct, then making a mistake in step $Y$ will not mess up step $X$ ever. But if a line of code $X$ is correctly written down, then making a mistake in line $Y$ will influence the working of line $X$, so that whenever we write line $X$ we have to take into account its relation to all other lines. We can use encapsulation and all those things to kind of limit this, but it cannot be removed completely. This means that the procedure for checking for errors in a mathematical proof is essentially linear in the number of proof-steps, but the procedure for checking for errors in computer code is essentially exponential in the number of lines of code. What do you think? Note: This question has a large number of answers that explore a large variety of facts and viewpoints. Before you answer, please read all of them and answer only if you have something new to add. Redundant answers, or answers that don't back up opinions with facts, may be deleted.
Let me offer one reason and one misconception as an answer to your question. The main reason that it is easier to write (seemingly) correct mathematical proofs is that they are written at a very high level. Suppose that you could write a program like this:

function MaximumWindow(A, n, w):
    using a sliding window, calculate (in O(n)) the sums of all length-w windows
    return the maximum sum (be smart and use only O(1) memory)

It would be much harder to go wrong when programming this way, since the specification of the program is much more succinct than its implementation. Indeed, every programmer who tries to convert pseudocode to code, especially to efficient code, encounters this large chasm between the idea of an algorithm and its implementation details. Mathematical proofs concentrate more on the ideas and less on the detail. The real counterpart of code for mathematical proofs is computer-aided proofs. These are much harder to develop than the usual textual proofs, and one often discovers various hidden corners which are "obvious" to the reader (who usually doesn't even notice them), but not so obvious to the computer. Also, since the computer can only fill in relatively small gaps at present, the proofs must be elaborated to such a level that a human reading them will miss the forest for the trees. An important misconception is that mathematical proofs are often correct. In fact, this is probably rather optimistic. It is very hard to write complicated proofs without mistakes, and papers often contain errors. Perhaps the most celebrated recent cases are Wiles' first attempt at (a special case of) the modularity theorem (which implies Fermat's last theorem), and various gaps in the classification of finite simple groups, including some 1000+ pages on quasithin groups which were written 20 years after the classification was supposedly finished. A mistake in a paper of Voevodsky made him doubt written proofs so much that he started developing homotopy type theory, a logical framework useful for developing homotopy theory formally, and henceforth used a computer to verify all his subsequent work (at least according to his own admission). While this is an extreme (and at present, impractical) position, it is still the case that when using a result, one ought to go over the proof and check whether it is correct. In my area there are a few papers which are known to be wrong but have never been retracted, whose status is relayed from mouth to ear among experts.
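To make that chasm concrete, here is one way the one-line sliding-window specification above might be filled in (an illustrative sketch; every detail below is exactly the kind of thing the high-level description gets to ignore):

def maximum_window(A, n, w):
    """Maximum sum over all length-w windows of A[0..n), in O(n) time and O(1) extra space."""
    current = sum(A[0:w])            # sum of the first window
    best = current
    for i in range(w, n):
        current += A[i] - A[i - w]   # slide: add the new element, drop the old one
        best = max(best, current)
    return best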
{ "source": [ "https://cs.stackexchange.com/questions/85327", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/56687/" ] }
85,377
The Church-Turing thesis states that everything that can physically be computed, can be computed on a Turing Machine. The paper "Analog computation via neural networks" (Siegelmannn and Sontag, Theoretical Computer Science , 131:331–360, 1994; PDF ) claims that a neural net of a certain form (the settings are presented in the paper) is more powerful. The authors say that, in exponential time, their model can recognize languages that are uncomputable in the Turing machine model. Doesn't this contradict the Church-Turing thesis?
No, it's still consistent with the Church-Turing thesis: their model comes equipped with genuine real numbers (as in arbitrary elements of $\mathbb{R}$), which pretty much immediately extends the power beyond that of a Turing Machine. In fact, the title of Section 1.2.2 is "The meaning of (non computable) real weight", where they discuss why their model is built to include non-computable components. There are in fact many models of computation that exceed the power of Turing Machines (q.v. Hypercomputation). The catch is that none of these are apparently able to be constructed in the real world (but maybe in the $\mathbb{R}$ world - sorry, couldn't resist).
{ "source": [ "https://cs.stackexchange.com/questions/85377", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/81701/" ] }
85,938
Is there any difference between structural-recursion and Tail-recursion or they both are same? I see that in both of these recursions , the recursive function is called on the subset of the orignal items.
Structural recursion: recursive calls are made on structurally smaller arguments. Tail recursion: the recursive call is the last thing that happens. There is no requirement that the tail recursion should be called on a smaller argument. In fact, quite often tail recursive functions are designed to loop forever. For example, here's a trivial tail recursion (not very useful, but it is tail recursion):

def f(x):
    return f(x + 1)

We actually have to be a bit more careful. There may be several recursive calls in a function, and not all of them need to be tail recursive:

def g(x):
    if x < 0:
        return 42              # no recursive call
    elif x < 20:
        return 2 + g(x - 2)    # not tail recursive (must add 2 after the call)
    else:
        return g(x - 3)        # tail recursive

One speaks of tail recursive calls. A function whose recursive calls are all tail-recursive is then called a tail-recursive function.
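For contrast, the non-tail-recursive branch of g can be made tail recursive by threading an accumulator (a standard transformation, shown here as an illustrative sketch equivalent to g above):

def g_tail(x, acc=0):
    if x < 0:
        return acc + 42
    elif x < 20:
        return g_tail(x - 2, acc + 2)   # the pending "+ 2" moves into the accumulator
    else:
        return g_tail(x - 3, acc)       # already tail recursive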
{ "source": [ "https://cs.stackexchange.com/questions/85938", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/82302/" ] }
86,160
Note, while I know how to program, I'm quite a beginner at CS theory. According to this answer Turing completeness is an abstract concept of computability. If a language is Turing complete, then it is capable of doing any computation that any other Turing complete language can do. And any program written in any Turing complete language can be rewritten in another . Ok. This makes sense. I can translate (compile) C into Assembly (and I do it everyday!), and can translate Assembly into C (You can write a virtual machine in C). And the same applies to any other language - you can compile any language into Assembly, and then run it in a VM written in another other language. But can any program written in a Turing complete language be re-written in another? What if my Assembly has a LIGHTBUTTON opcode? I physically can't emulate that language on a system (language) without a lightbulb. Ok. So you'll say that since we're dealing with computer theory , we're not discussing physical device limitations. But what about a device that doesn't have multiplication? division? To the best of my knowledge (though this is more of a question for math.SE), one can't emulate multiplication (and definitely not division) with addition and subtraction [1]. So how would a "turing complete language" (which can add, subtract, and jump) emulate another language which can add, subtract, multiply and jump? EDIT [1]. On arbitrary real numbers.
Turing-completeness says one thing and one thing only: a model of computation is Turing-complete if any computation that can be modeled by a Turing Machine can also be modeled by that model. So, what are the computations a Turing Machine can model? Well, first and foremost, Alan Turing and all of his colleagues were only ever interested in functions on natural numbers. So, the Turing Machine (and the λ-calculus, the SK combinator calculus, μ-recursive functions, …) only talk about the computability of functions on natural numbers. If you are not talking about a function on natural numbers, then the concept of Turing-completeness doesn't even make sense; it is simply not applicable. Note, however, that we can encode lots of interesting things as natural numbers. We can encode strings as natural numbers, we can encode graphs as natural numbers, we can encode booleans as natural numbers. We can encode Turing Machines as natural numbers, which allows us to create Turing Machines that talk about Turing Machines! And, of course, not all functions on natural numbers are computable. A Turing Machine can only compute some functions on natural numbers, the λ-calculus can only compute some functions on natural numbers, the SK combinator calculus can only compute some functions on natural numbers, …. Surprisingly (or not), it turns out that every model of computation (that is actually realizable in our physical universe) can compute the same functions on natural numbers (at least for all the models we have found up till now). [Note: obviously, there are weaker models of computation, but we have not yet found one that is stronger, except some that are obviously incompatible with our physical universe, such as models using real numbers or time travel.] This fact, that after a long time of searching for lots of different models, we find, every single time, that they can compute exactly the same functions, is the basis for the Church-Turing Thesis, which says (roughly) that all models of computation are equally powerful, and that all of them capture the "ideal" notion of what it means to be "computable". (There is also a second, more philosophical aspect of the CTT, namely that a human following an algorithm can also compute exactly the same functions a TM can compute and no more.) However, none of this says anything about: how efficient the various models are, how convenient they are to use, or what else they can do besides compute functions on the natural numbers. And that is precisely where the differences between different models of computation (and programming languages) come into play. As an example of different performance, both a Random Access Machine and a Turing Machine can copy an array. But a RAM needs $O(size_{array})$ operations to do that, while a TM needs $O(size_{array}^2)$ operations, since it needs to skip across $size_{array}$ elements of the array for copying each element, and there are $size_{array}$ elements to copy. As an example of different convenience, you can just compare code written in a very high-level language, code written in assembly, and the description of a TM for solving the same problem. And your light switch is an example of the third kind of difference: things that some models can do that are not functions on natural numbers and thus have nothing to do with Turing-completeness. To answer your specific questions: "But can any program written in a Turing complete language be re-written in another?" No.
Only if the program computes a Turing-computable function on natural numbers. And even then, it might need a complex encoding. For example, the λ-calculus doesn't even have natural numbers; they need to be encoded using functions (because functions are the only thing the λ-calculus has). This encoding of the input and output can be very complex, as can expressing the algorithm. So, while it is true that any such program can be rewritten, the rewritten program may be much more complex, much larger, use much more memory, and be much slower. "What if my Assembly has a LIGHTBUTTON opcode? I physically can't emulate that language on a system (language) without a lightbulb." A lightbulb is not a Turing-computable function on natural numbers. Really, a lightbulb is neither a function nor a computation. Switching a lightbulb on and off is an I/O side-effect. Turing Machines don't model I/O side-effects, and Turing-completeness is not relevant to them. "On arbitrary real numbers." Turing-completeness only deals with computable functions on natural numbers; it doesn't concern itself with real numbers. Turing-completeness is simply not very interesting when it comes to questions like yours, for two reasons: First, it is not a very high hurdle. All you need is IF, GOTO, WHILE, and a single integer variable (assuming the variable can hold arbitrarily large integers). Or, recursion. Lots and lots and lots of stuff is Turing-complete. The card game Magic: The Gathering is Turing-complete. CSS3 is Turing-complete. The sendmail configuration file is Turing-complete. The Intel x86 MMU is Turing-complete. The Intel x86 MOV instruction is Turing-complete. PowerPoint animations are Turing-complete. Excel (without scripting, only using formulas) is Turing-complete. The BGP routing protocol is Turing-complete. sed is Turing-complete. Apache mod_rewrite rules are Turing-complete. Google for "(accidentally OR surprisingly) turing complete" to find some other interesting examples. If almost everything is Turing-complete, being Turing-complete stops being an interesting property. Second, it is not actually necessary to be useful. Lots of useful stuff isn't Turing-complete. CSS before version 3 isn't Turing-complete (and the fact that CSS3 is isn't actually used by anyone). SQL before 1999 was not Turing-complete, yet it was tremendously useful even then. The C programming language without additional libraries doesn't seem to be Turing-complete. Dependently-typed languages are, more or less by definition, not Turing-complete, yet you can write operating systems, web servers, and games in them. Edwin Brady, the author of Idris, uses the term "Tetris-complete" to talk about some of these aspects. Being Tetris-complete isn't rigorously defined (other than the obvious "can be used to implement Tetris"), but it encompasses stuff like being high-level enough and expressive enough that you can write a game without going insane, being able to interact with the outside world (input and output), being able to express side-effects, being able to write an event loop, being able to express reactive, asynchronous, and concurrent programming, being able to interact with the operating system, being able to interact with foreign libraries (in other words: being able to call and be called by C code), and so on. Those are much more interesting features of a general-purpose programming language than Turing-completeness is.
You may find my answer to the question you linked interesting, which touches on some of the same points even though it answers a different question.
{ "source": [ "https://cs.stackexchange.com/questions/86160", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/82508/" ] }
86,954
So I understand the idea that the decision problem is defined as Is there a path P such that the cost is lower than C? and you can easily check this is true by verifying a path you receive. However, what if there is no path that fits this criteria? How would you verify the answer of "no" without solving the best path TSP problem, and finding out the best one has a worse cost than C?
NP is the class of problems where you can verify "yes" instances. No guarantee is given that you can verify "no" instances. The class of problems where you can verify "no" instances in polynomial time is co-NP. Any language in co-NP is the complement of some language in NP, and vice-versa. Examples include things like non-3-colourability. The problem you describe, "Is there no TSP path with length at most $C$?", is also in co-NP: if you unpick the double-negation, a "no" instance to that problem is a "yes" instance to TSP, and we can verify those in polynomial time. There are some problems, such as integer factorization and any problem in P, that we know to be in both NP and co-NP. (Thanks to user21820 for pointing this out.) It's not known whether NP and co-NP are the same set of problems. If they're the same, then we can verify both "yes" and "no" instances of TSP. If they're different, then P$\,\neq\,$NP, since we know that P$\,=\,$co-P (because we can just negate the answer of a deterministic machine, giving the answer to the complement problem).
{ "source": [ "https://cs.stackexchange.com/questions/86954", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/81348/" ] }
86,962
Say we had a function $4^k$. The asymptotic of this function would be $\Theta(4^k)$, and thus we would say the function has exponential run-time. But what if $k = \log{n}$? Then our same function would be $4^k = 4^{\log{n}} = n^{\log{4}} \in \Theta(n^{\log{4}})$. (Step 2 to 3 is accomplished via a change of logarithm base). Would we now say the function runs in polynomial time? Or something more nuanced, like “The function is exponential in respect to k, and polynomial in respect to n”?
The two statements are about different parameters, so both can be true at once; you always have to say which variable the bound is in terms of. As a function of $k$, $4^k$ is exponential. But if the input size is $n$ and $k = \log_2 n$, then $4^k = 4^{\log_2 n} = n^{\log_2 4} = n^2$, which is polynomial in $n$. So the precise statement is the nuanced one you suggest: the function is exponential with respect to $k$ and polynomial with respect to $n$. This is why running times are normally stated in terms of the input size; the same algorithm can look exponential in one parameter and polynomial in another (compare trial-division factoring, which is polynomial in the value of the number but exponential in the number of bits used to write it).
{ "source": [ "https://cs.stackexchange.com/questions/86962", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/48600/" ] }
88,057
If you calculate the area of a rectangle, you just multiply the height and the width and get back the unit squared. Example: 5cm * 10cm = 50cm² In contrast, if you calculate the size of an image, you also multiply the height and the width, but you get back the unit - Pixel - just as it was the unit of the height and width before multiplying. Example: What you actually calculate is the following: 3840 Pixel * 2160 Pixel = 8294400 Pixel What I would expect is: 3840 Pixel * 2160 Pixel = 8294400 Pixel² Why is that the unit at multiplying Pixels is not being squared?
Because "pixel" isn't a unit of measurement: it's an object. So, just like a wall that's 30 bricks wide by 10 bricks tall contains 300 bricks (not bricks-squared), an image that's 30 pixels wide by 10 pixels tall contains 300 pixels (not pixels-squared).
{ "source": [ "https://cs.stackexchange.com/questions/88057", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/84300/" ] }
89,115
I have seen numerous proofs (such as this ) that the Halting problem is in the class of NP. However, the Halting problem is non-computable. Does it make sense to discuss the complexity of computing a function that cannot be computed?
Computational complexity studies the computational resources required to decide problems in some particular model of computation. Because of this, it makes no sense to talk about the complexity of a problem that is not computable in the model of computation you're talking about. Or, to put it the other way around, it only makes sense to talk about the computational complexity of the halting problem with respect to models of computation in which that problem is computable. For example, you could talk about the complexity of recursively enumerable problems using Turing machines with an oracle for the halting problem as your model of computation. The proof you link to does not show that the halting problem is in NP. It shows that the halting problem is NP-hard, which just means that every problem in NP is polynomial-time reducible to the halting problem.
{ "source": [ "https://cs.stackexchange.com/questions/89115", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/79616/" ] }
89,865
Some chap said the following: Anyone who attempts to generate random numbers by deterministic means is, of course, living in a state of sin. That's always taken to mean that you can't generate true random numbers with just a computer. And he said that when computers were the equivalent size of a single Intel 8080 microprocessor (~6000 valves). Computers have gotten more complex, and I believe that von Neumann's statement may no longer be true. Consider that a purely software-only algorithm is impossible to implement: programs run on physical hardware. True random number generators and their entropy sources are also made of hardware. This Java fragment put into a loop: file.writeByte((byte) (System.nanoTime() & 0xff)); can create a data file which I've represented as an image: You can see structure, but with a lot of randomness as well. The thing of interest is that this PNG file is 232KB in size, yet contains 250,000 grey scale pixels. The PNG compression level was maximum. That's only a compression ratio of 7%, i.e. fairly non-compressible. What's also interesting is that the file is unique. Every generation of this file is a slightly different pattern and has similar ~7% compressibility. I highlight this as it's critical to my argument. That's ~7 bits/byte entropy. That will reduce of course upon use of a stronger compression algorithm. But not reduce to anything near 0 bits/byte. A better impression can be had by taking the above image and substituting its colour map for a random one:- Most of the structure (in the top half) disappears as it was just sequences of similar but marginally different values. Is this a true entropy source created by just executing a Java program on a multitasking operating system? Not a uniformly distributed random number generator, but the entropy source for one? An entropy source built of software running on physical hardware that just happens to be a PC. Supplemental In order to confirm that every image generates fresh entropy without a fixed pattern common to all, 10 consecutive images were generated. These were then concatenated and compressed with the strongest archiver I can get to compile (paq8px). This process will eliminate all common data, including auto-correlation, leaving only the changes/entropy. The concatenated file compressed to ~66%, which leads to an entropy rate of ~5.3 bits/byte or 10.5 Mbits/image. A surprising amount of entropy $ \smile $ Supplemental 2 There have been negative comments that my entropy-by-compression test methodology is flawed, only giving a loose upper bound estimate. So I've now run the concatenated file through NIST's official cryptographic entropy assessment test, SP800-90B_EntropyAssessment. This is as good as it gets for non-IID entropy measurement. This is the report (sorry this question is getting long, but the issue is complex):- Running non-IID tests...
Entropic statistic estimates:
Most Common Value Estimate = 7.88411
Collision Test Estimate = 6.44961
Markov Test Estimate = 5.61735
Compression Test Estimate = 6.65691
t-Tuple Test Estimate = 7.40114
Longest Reapeated Substring Test Estimate = 8.00305

Predictor estimates:
Multi Most Common in Window (MultiMCW) Test: 100% complete
Correct: 3816
P_avg (global): 0.00397508
P_run (local): 0.00216675
Multi Most Common in Window (Multi MCW) Test = 7.9748
Lag Test: 100% complete
Correct: 3974
P_avg (global): 0.00413607
P_run (local): 0.00216675
Lag Prediction Test = 7.91752
MultiMMC Test: 100% complete
Correct: 3913
P_avg (global): 0.00407383
P_run (local): 0.00216675
Multi Markov Model with Counting (MultiMMC) Prediction Test = 7.9394
LZ78Y Test: 99% complete
Correct: 3866
P_avg (global): 0.00402593
P_run (local): 0.00216675
LZ78Y Prediction Test = 7.95646

Min Entropy: 5.61735

The result is that NIST believes that I have generated 5.6 bits/byte of entropy. My DIY compression estimate puts this at 5.3 bits/byte, marginally more conservative. -> The evidence seems to support the notion that a computer just running software can generate real entropy, and that von Neumann was wrong (but perhaps correct for his time). I offer the following references that might support my claim:
- Are there any stochastic models of non-determinism in the rate of program execution?
- WCET Analysis of Probabilistic Hard Real-Time Systems
- Is there a software algorithm that can generate a non-deterministic chaos pattern? and the relevance of chaotic effects.
- Parallels with the Quantum entropic uncertainty principle
- Aleksey Shipilёv's blog entry regarding the chaotic behaviour of nanoTime(). His scatter plot is not dissimilar to mine.
If you're using some hardware source of entropy/randomness, you're not "attempting to generate randomness by deterministic means" (my emphasis). If you're not using any hardware source of entropy/randomness, then a more powerful computer just means you can commit more sins per second.
{ "source": [ "https://cs.stackexchange.com/questions/89865", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/31167/" ] }
90,259
I am starting to read a book about Computational Complexity and Turing Machines. Here is a quote: "An algorithm (i.e., a machine) can be represented as a bit string once we decide on some canonical encoding." This assertion is presented as a simple fact, but I can't understand it. For example, if I have an algorithm which takes $x$ as input and computes $(x+1)^2$, say:

int function (int x) {
    x = x + 1;
    return x * x;
}

How can this be represented as a string in $\{0, 1\}^*$?
The most naive and simple answer to your question is that the code provided (and the compiled machine code) is in fact already represented as a syntactic string over $\{0,1\}$: in the computer's memory it is stored as bytes, i.e. as a sequence of bits. Additionally, since you are talking about Turing machines, the programs they run are a linear list of operations/instructions, and there is no reason these cannot be represented as bits/bytes.
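For instance (an illustrative sketch, not part of the original answer), the source text itself is already a sequence of bytes, and any byte sequence can be written out as a string over {0,1}:

src = "int function (int x){ x = x + 1; return x * x; }"
bits = ''.join(format(b, '08b') for b in src.encode('ascii'))
print(bits[:32])   # the first four characters of the source, 8 bits each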
{ "source": [ "https://cs.stackexchange.com/questions/90259", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/81894/" ] }
90,288
There are relativistic spacetimes (e.g. M-H spacetimes; see Hogarth 1994) where a worldline of infinite duration can be contained in the past of a finite observer. This means that a normal observer can have access to an infinite number of computation steps. Assuming it's possible for a computer to function perfectly for an infinite length of time (and I know that's a big ask): one could construct a computer HM which travels along this infinite worldline, computing the halting problem for a given M. If M halts, HM sends a signal to the finite observer. If after an infinite number of steps the observer doesn't get a signal, the observer knows that M loops, solving the halting problem. So far, this sounds okay to me. My question is: if what I've said so far is correct, how does this alter Turing's proof that the halting problem is undecidable? Why does his proof fail in these spacetimes?
Note that Turing's proof is one of mathematics, not of physics. Within the model of a Turing machine Turing defined, undecidability of the halting problem has been proven and is a mathematical fact. Hence, Turing's proof will not 'fail' in these spacetimes; it will simply not prove anything about the relation between the halting problem and time dilation. However, what you'll likely want to know is whether a 'time dilation Turing machine' can solve the halting problem. If you want to study the influence of 'time dilation' on a Turing machine, you'll have to specify a formal model by which we can formally understand what it means for a Turing machine to make use of time dilation. Unfortunately, this format is ill-suited for providing such a formal model (unless someone else has written a paper about it) as creating the model is far too broad. However, it isn't unlikely that some such formalisation is indeed able to solve the halting problem. This paper by Scott Aaronson, Mohammad Bavarian and Giulio Gueltrini looks at computational models under the assumption that so-called closed timelike curves exist and concludes that the halting problem is indeed computable within that model.
{ "source": [ "https://cs.stackexchange.com/questions/90288", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/86814/" ] }
90,304
Say I have two real number types. They may be floating or fixed point. How can I construct a new type whose values are at least the union of the two with the minimal number of bits? There are 3 cases to consider: Fixed (Qa.x) $\cup$ Fixed (Qb.y) - I think the best here is to use Qmax(a,b).max(x,y). I think this is optimal since I can't come up with anything smaller that will accurately represent the type. Float (FaEx) $\cup$ Float (FbEy) - I think the best here is to use Fmax(a,b)Emax(x,y). Again I can't think of a more optimal solution. I am using Q notation for representing fixed point types. I don't know how floating point types are typically represented; I'm using an analogous representation where FaEx means a bits of mantissa and x bits of exponent. The difficult case is: Fixed (Qa.x) $\cup$ Float (FbEy) - The best I can come up with is Qmax(a,n).max(x,m) where n is the minimal bits to represent the biggest number the float can be and m is the minimal number of bits to represent the smallest positive fraction the float can be. This seems extremely inefficient as it extends the floating point's most accurate precision to its entire range. Thus for any decent sized floating point type the resulting union type will be extremely large. Here are some ASCII diagrams of the three cases (simplified), and why I think I'm wrong: 0/4 1/4 2/4 3/4 4/4 5/4 6/4 7/4 8/4 9/4 10/4 11/4 12/4 13/4 14/4 15/4 16/4 | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Q1.2 |.......|.......|.......|.......|.......|.......|.......|........................................................................ U ................................................................................................................................. Q0.3 |...|...|...|...|...|...|...|.................................................................................................... = ................................................................................................................................. Q1.3 |...|...|...|...|...|...|...|...|...|...|...|...|...|...|........................................................................ 0/4 1/4 2/4 3/4 4/4 5/4 6/4 7/4 8/4 9/4 10/4 11/4 12/4 13/4 14/4 15/4 16/4 | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ F1E2 ........|...|...|.......|.......|...............|...............|...............................|................................ U ................................................................................................................................. F2E1 ................|...|...|...|...|.......|.......|.......|........................................................................ = ................................................................................................................................. F2E2 ................................|.......|.......|.......|.......|...............|...............|...............|................ 0/4 1/4 2/4 3/4 4/4 5/4 6/4 7/4 8/4 9/4 10/4 11/4 12/4 13/4 14/4 15/4 16/4 | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . | . v . 
| . v . | . v . | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Q0.3 |...|...|...|...|...|...|...|.................................................................................................... U ................................................................................................................................. F1E2 ........|...|...|.......|.......|...............|...............|...............................|................................ = ................................................................................................................................. Q2.3 |...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|...|........ F?E? |...|...|...|...|...|...|...|...|.......|.......|.......|.......|...............................|...............................| From my math the best I could do would be Q2.3, but it is fairly obvious that there should exist some floating point type that stops having the necessary accuracy once the floating point part's accuracy is no longer needed. Of course I have to be careful if the fixed point type is more accurate than even the most accurate range of the floating point type, but I still feel like I'm missing a nice solution. Any idea what binary type will be the smallest superset of the union between a fixed and floating point type? NOTE: I know that this also emphasizes the benefits and drawbacks of fixed and floating types, but I feel like it should be possible to do at least a little bit better. Especially in the situation where the types have known range boundaries.
{ "source": [ "https://cs.stackexchange.com/questions/90304", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/86832/" ] }
90,329
In one of my homework problems I am asked to find a context-free grammar (CFG) and a pushdown automaton (PDA) for the following language: $L = \{x_1\#x_2\#...\#x_k | k \geq 2, \text{ each } x_i \in \{a, b\}^*, \text{ and for some } i \text{ and } j, x_i=x_j^\mathcal{R}\}$ My problem is that the statement $\text{ ... and for some } i \text{ and } j, x_i=x_j^\mathcal{R}\}$, i.e. that any two strings in the sequence must be each other's reverses, forces us to make all the $x_i$'s palindromes, as that is the only way any two are guaranteed to be reverses of each other. If that interpretation is correct, I think the problem is impossible to solve with a context-free grammar, or is it not? This leads me to believe I interpreted the statement wrong and it means something else, but I can't figure out what.
{ "source": [ "https://cs.stackexchange.com/questions/90329", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/86853/" ] }
91,330
EDIT: I've now asked a similar question about the difference between categories and sets. Every time I read about type theory (which admittedly is rather informal), I can't really understand how it differs from set theory, concretely . I understand that there is a conceptual difference between saying "x belongs to a set X" and "x is of type X", because intuitively, a set is just a collection of objects, while a type has certain "properties". Nevertheless, sets are often defined according to properties as well, and if they are, then I am having trouble understanding how this distinction matters in any way. So in the most concrete way possible, what exactly does it imply about $x$ to say that it is of type $T$, compared to saying that it is an element in the set $S$? (You may pick any type and set that makes the comparison most clarifying).
To understand the difference between sets and types, ones has to go back to pre -mathematical ideas of "collection" and "construction", and see how sets and types mathematize these. There is a spectrum of possibilities on what mathematics is about. Two of these are: We think of mathematics as an activity in which mathematical objects are constructed according to some rules (think of geometry as the activity of constructing points, lines and circles with a ruler and a compass). Thus mathematical objects are organized according to how they are constructed , and there are different types of construction. A mathematical object is always constructed in some unique way, which determines its unique type. We think of mathematics as a vast universe full of pre-existing mathematical objects (think of the geometric plane as given). We discover, analyze and think about these objects (we observe that there are points, lines and circles in the plane). We collect them into set . Usually we collect elements that have something in common (for instance, all lines passing through a given point), but in principle a set may hold together an arbitrary selection of objects. A set is specified by its elements, and only by its elements. A mathematical object may belong to many sets. We are not saying that the above possibilities are the only two, or that any one of them completely describes what mathematics is. Nevertheless, each view can serve as a useful starting point for a general mathematical theory that usefully describes a wide range of mathematical activities. It is natural to take a type $T$ and imagine the collection of all things that we can construct using the rules of $T$ . This is the extension of $T$ , and it is not $T$ itself. For instance, here are two types that have different rules of construction, but they have the same extension: The type of pairs $(n, p)$ where $n$ is constructed as a natural number, and $p$ is constructed as a proof demonstrating that $n$ is an even prime number larger than $3$ . The type of pairs $(m, q)$ where $m$ is constructed as a natural number, and $q$ is constructed as a proof demonstrating that $m$ is an odd prime smaller than $2$ . Yes, these are silly trivial examples, but the point stands: both types have nothing in their extension, but they have different rules of construction. In contrast, the sets $$\{ n \in \mathbb{N} \mid \text{$n$ is an even prime larger than $3$} \}$$ and $$\{ m \in \mathbb{N} \mid \text{$m$ is an odd prime smaller than $2$} \}$$ are equal because they have the same elements. Note that type theory is not about syntax. It is a mathematical theory of constructions, just like set theory is a mathematical theory of collections. It just so happens that the usual presentations of type theory emphasize syntax, and consequently people end up thinking type theory is syntax. This is not the case. To confuse a mathematical object (construction) with a syntactic expression that represents it (a term former) is a basic category mistake that has puzzled logicians for a long time, but not anymore.
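A loose analogy in ordinary Python (only an illustration; Python's types are far weaker than the type theories discussed here, and the function names are mine): two different rules can carve out the same extension, while sets are compared purely by their elements.

```python
# Two different "rules of construction" (intensions)...
def even_prime_above_3(n: int) -> bool:
    return n > 3 and n % 2 == 0 and all(n % d for d in range(2, n))

def odd_prime_below_2(n: int) -> bool:
    return 1 < n < 2 and n % 2 == 1

# ...with the same extension: both comprehensions are empty, so the sets are equal.
A = {n for n in range(100) if even_prime_above_3(n)}
B = {n for n in range(100) if odd_prime_below_2(n)}
assert A == B == set()

# The rules themselves remain distinct objects: equality of the collections
# says nothing about the two definitions coinciding.
assert even_prime_above_3 is not odd_prime_below_2
```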
{ "source": [ "https://cs.stackexchange.com/questions/91330", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/56687/" ] }
91,773
I am a CS undergraduate. I understand how Turing came up with his abstract machine (modeling a person doing a computation), but it seems to me to be an awkward, inelegant abstraction. Why do we consider a "tape", and a machine head writing symbols, changing state, shifting the tape back and forth? What is the underlying significance? A DFA is elegant - it seems to capture precisely what is necessary to recognize the regular languages. But the Turing machine, to my novice judgement, is just a clunky abstract contraption. After thinking about it, I think the most idealized model of computation would be to say that some physical system corresponding to the input string, after being set into motion, would reach a static equilibrium which, upon interpretation equivalent to the the one used to form the system from the original string, would correspond to the correct output string. This captures the notion of "automation", since the system would change deterministically based solely on the original state. Edit : After reading a few responses, I've realized that what confuses me about the Turing machine is that it does not seem minimal. Shouldn't the canonical model of computation obviously convey the essence of computability? Also, in case it wasn't clear I know that DFAs are not complete models of computation. Thank you for the replies.
Well, a DFA is just a Turing machine that's only allowed to move to the right and that must accept or reject as soon as it runs out of input characters. So I'm not sure one can really say that a DFA is natural but a Turing machine isn't. Critique of the question aside, remember that Turing was working before computers existed. As such, he wasn't trying to codify what electronic computers do but, rather, computation in general. My parents have a dictionary from the 1930s that defines computer as "someone who computes" and this is basically where Turing was coming from: for him, at that time, computation was about slide rules, log tables, pencils and pieces of paper. In that mind-set, rewriting symbols on a paper tape doesn't seem like a bad abstraction. OK, fine, you're saying (I hope!) but we're not in the 1930s any more so why do we still use this? Here, I don't think there's any one specific reason. The advantage of Turing machines is that they're reasonably simple and we're decently good at proving things about them. Although formally specifying a Turing machine program to do some particular task is very tedious, once you've done it a few times, you have a reasonable intuition about what they can do and you don't need to write the formal specifications any more. The model is also easily extended to include other natural features, such as random access to the tape. So they're a pretty useful model that we understand well and we also have a pretty good understanding of how they relate to actual computers. One could use other models but one would then have to do a huge amount of translation between results for the new model and the vast body of existing work on what Turing machines can do. Nobody has come up with a replacement for Turing machines that have had big enough advantages to make that look like a good idea.
{ "source": [ "https://cs.stackexchange.com/questions/91773", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/88378/" ] }
93,109
In a Part Test for GATE Preparation there was a question : f(n): if n is even: f(n) = n/2 else f(n) = f(f(n-1)) I answered "It will terminate for all integers", because even for some negative integers, it will terminate as Stack Overflow Error . But my friend disagreed saying that since this is not implemented code and just pseudocode, it will be infinite recursion in case of some negative integers. Which answer is correct and why?
The correct answer is that this function does not terminate for all integers (specifically, it does not terminate on -1). Your friend is correct in stating that this is pseudocode and pseudocode does not terminate on a stack overflow. Pseudocode is not formally defined, but the idea is that it does what is says on the tin. If the code doesn't say "terminate with a stack overflow error" then there is no stack overflow error. Even if this was a real programming language, the correct answer would still be "does not terminate", unless the use of a stack is part of the definition of the language. Most languages do not specify the behavior of programs that might overflow the stack, because it's difficult to know precisely how much stack a program will use. If running the code on an actual interpreter or compiler causes a stack overflow, in many languages, that's a discrepancy between the formal semantics of the language and the implementation. It is generally understood that implementations of a language will only do what can be done on a concrete computer with finite memory. If the program dies with a stack overflow, you're supposed to buy a bigger computer, recompile the system if necessary to support all that memory, and try again. If the program is non-terminating then you may have to keep doing this forever. Even the fact that a program will or will not overflow the stack is not well-defined, since some optimizations such as tail call optimization and memoization can allow an infinite chain of function calls in constant-bound stack space. Some language specifications even mandate that implementations perform tail call optimization when possible (this is common in functional programming languages). For this function, f(-1) expands to f(f(-2)) ; the outer call to f is a tail call so it doesn't push anything on the stack, thus only f(-2) goes onto the stack, and that returns -1 , so the stack is back to the same state it was in at the beginning. Thus with tail call optimization f(-1) loops forever in constant memory.
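To see the behaviour concretely, here is a rough sketch in Python (the explicit step budget is purely an artifact of the demonstration — real pseudocode has no such limit and simply runs forever):

```python
def f(n: int, budget: list) -> int:
    # budget[0] counts remaining calls; it exists only so the demo stops
    # instead of recursing without end.
    if budget[0] == 0:
        raise RuntimeError("budget exhausted -- the call chain never bottoms out")
    budget[0] -= 1
    if n % 2 == 0:
        return n // 2                      # f(n) = n/2 for even n
    return f(f(n - 1, budget), budget)     # f(n) = f(f(n-1)) for odd n

print(f(10, [500]))    # 5: even inputs terminate immediately
print(f(7, [500]))     # 0: this odd input bottoms out as well
try:
    f(-1, [500])       # f(-1) = f(f(-2)) = f(-1) = ... forever
except RuntimeError as err:
    print("f(-1):", err)
```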
{ "source": [ "https://cs.stackexchange.com/questions/93109", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/40307/" ] }
93,129
Let $A$ be a sorted array of $n$ positive integers (sorted in non-decreasing order, that is there can be equal consecutive elements). Can we check whether some positive integer $x$ is a sum of $k$ elements of $A$ in $O(n^2)$ or $O(n^3)$ time complexity? If yes what would be the pseudocode? This seems to be a knapsack problem to me and according to Wikipedia it's an NP-complete problem. So even if the array was unsorted in the first place and we wanted to sort it it would take $O(n\log n)$ time which doesn't really help if the problem is NP-complete anyway. Yet I wonder if some optimization may be made to achieve better time. Please treat $x$ and $k$ as constants, for running time analysis.
{ "source": [ "https://cs.stackexchange.com/questions/93129", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/70335/" ] }
93,297
I’m not even a CS student, so this might be a stupid question, but please bear with me... In the pre-computer era, we can only implement an array data structure with something like an array of drawers. Since one have to locate the drawer with corresponding index before extracting the value from it, the time complexity of array lookup is $O(log(n))$, assuming binary search. However, the invention of computers made a big difference. Modern computers can read from their RAM so fast that we now consider the time complexity of array lookup to be $O(1)$ (even it’s technically not the case, because it takes more time to move the register over a greater distance, etc) Another example is Python dictionaries. While one might get a dictionary access complexity of $O(n)$ with an ill-written overloaded __hash__ magic method (or ridiculously bad luck, i.e. keys having lots of hash collisions), it’s usually presumed to be $O(1)$. In this case, time complexity depends on both the hash table implementation of Python dictionaries, and the keys’ implementation of the hash functions. Does this imply that hardware/implementation can affect the time complexity of algorithms? (While both examples are about data structures instead of algorithms, the latter are built on the former, and I've never heard of time complexity of data structures, so I'm using the term "algorithms" here) To me, algorithms are abstract and conceptual, whose properties like time/space complexity shouldn’t be affected by whether they’re implemented in a specific way, but are they?
Sure. Certainly. Here's how to reconcile your discomfort. When we analyze the running time of algorithms, we do it with respect to a particular model of computation . The model of computation specifies things like the time it takes to perform each basic operation (is an array lookup $O(\log n)$ time or $O(1)$ time?). The running time of the algorithm might depend on the model of computation. Once you've picked a model of computation, the analysis of the algorithm is a purely abstract, conceptual, mathematical exercise that no longer depends on hardware. However, in practice we usually want to pick a model of computation that reflects the reality of our hardware -- at least to a reasonable degree. So, if hardware changes, we might decide to analyze our algorithms under a different model of computation that is more appropriate to the new hardware. That is how the hardware can affect the running time. The reason this is non-obvious is because, in introductory classes, we often don't talk about the model of computation. We just implicitly make some assumptions, without ever making them explicit. That's reasonable, for pedagogical purposes, but it has a cost -- it hides away this aspect of the analysis. Now you know.
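As a small illustration (Python; the cost functions below are stand-ins for models of computation, not measurements of real hardware), the very same abstract algorithm is charged differently under different models:

```python
import math

def scan_cost(n: int, access_cost) -> float:
    # Work for "read each of n cells once and add it to a running total".
    return sum(access_cost(n) + 1 for _ in range(n))

ram_model    = lambda n: 1              # unit-cost memory access: total is Theta(n)
drawer_model = lambda n: math.log2(n)   # "locate the drawer" costs log n: total is Theta(n log n)

for n in (2 ** 10, 2 ** 20):
    print(n, scan_cost(n, ram_model), round(scan_cost(n, drawer_model)))
```

The algorithm never changed; only the model of computation — and hence the analysis — did.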
{ "source": [ "https://cs.stackexchange.com/questions/93297", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/46230/" ] }
93,372
When machine code is actually being executed by hardware and the CPU, what does it look like? Would it look like binary, as in instructions being represented by ones and zeros, or would it be something made up of hexadecimal digits where opcodes are bytes presented as hex numbers which can be broken back down into binary numbers, like bytecode?
The best answer I can give is, it doesn't really "look" like anything. The instruction currently being executed by the CPU is represented by a series of wires, some of which have a high voltage, some of which have a low voltage. You can interpret the high and low voltages as zeroes and ones, but you can equally well interpret groups of high and low voltages as hexadecimal digits, or as an assembly instruction like ADD $0 $1 (which is closest to how the CPU interprets it). These numbers and mnemonics themselves are conveniences for humans to read; internally, it's nothing but voltages on wires. Out of these options, binary is "closest to the metal", in that the zeroes and ones map directly to the high and low voltages on the wires. But none of the others are incorrect, and they're frequently more useful: there's a reason people look at hex-dumps of executables, but almost never binary-dumps.
{ "source": [ "https://cs.stackexchange.com/questions/93372", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/89874/" ] }
93,798
Safe programming languages (PL) are gaining popularity. I wonder what is the formal definition of safe PL. For example, C is not safe, but Java is safe. I suspect that the property “safe” should be applied to a PL implementation rather than to the PL itself. If so, let’s discuss a definition of safe PL implementation. My own attempts to formalize this notion led to a strange outcome, so I would like to hear other opinions. Please, do not say that every PL has unsafe commands. We can always take a safe subset.
There is no formal definition of "safe programming language"; it's an informal notion. Rather, languages that claim to provide safety usually provide a precise formal statement of what kind of safety is being claimed/guaranteed/provided. For instance, the language might provide type safety, memory safety, or some other similar guarantee.
{ "source": [ "https://cs.stackexchange.com/questions/93798", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/69135/" ] }
93,846
I am reading the lecture notes and have a question. I am trying to understand the beginning of Section 3 on page 2. Problem: Given an input stream $\sigma$, compute (or approximate) its length $m$. Naive solution: $O(\log m)$ bits, exact solution. I don't understand why it is not $O(m)$ bits but $O(\log m)$ bits. Any help would be greatly appreciated.
{ "source": [ "https://cs.stackexchange.com/questions/93846", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/90265/" ] }
95,464
Here are four tenets I cannot reconcile: Double exponential time algorithms run in $O(2^{2^{n^k}})$ time with $k \in \mathbb{N}$ constant Exponential time algorithms run in $O(2^{n^k})$ with $k \in \mathbb{N}$ constant The former bound grows stricly faster than the latter; i.e., there exist algorithms that run in double exponential time but not in exponential time Applying $a^{b^c} = a^{bc}$ to the double exponential bound we have $O(2^{2^{n^k}}) = O(2^{2^{nk}}) = O(2^{2nk})$, which falls within the previously stated exponential bound I feel I am missing some subtlety relating to the definition of an exponential-time algorithm as running in $O(2^{\mathrm{poly}(n)})$ rather than $O(2^{n})$, but I am not sure precisely where the subtlety lies.
The issue comes down to ambiguous terminology. $(a^b)^c = a^{bc}$, but $a^{(b^c)} \neq a^{bc}$. In other words, exponents aren't associative. Conventionally, nested exponentials without parentheses are grouped in this second way, because it's more useful. So $2^{2^n} = 2^{(2^n)} \neq 2^{2n}$. If we wanted to talk about $(2^2)^n$, we could just write $2^{2n}$ instead, so we reserve the double exponential notation for the other case.
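A quick numeric check of that non-associativity (Python):

```python
for n in range(1, 6):
    nested = 2 ** (2 ** n)   # the conventional reading of 2^2^n: doubly exponential
    flat   = (2 ** 2) ** n   # equals 2^(2n): merely (singly) exponential
    print(n, nested, flat)

# Already at n = 5: 2^(2^5) = 4294967296, while 2^(2*5) = 1024.
```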
{ "source": [ "https://cs.stackexchange.com/questions/95464", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/2357/" ] }
95,790
This is a naïve and, therefore, possibly malformed question, so apologies in advance! My view is that a Turing Machine can be seen as the computational basis for procedural/imperative programming languages. Similarly, the lambda calculus is the foundation for functional programming languages. I have recently learnt that the Church-Turing Thesis also shows mutual equivalence with a third model of computation: general recursive functions . Are there any programming languages that use this as their model of computation? If not, is there a technical reason why; i.e., besides "No one's tried yet"?
Direct answer to the question: yes, there are esoteric and highly impractical PLs based on $\mu$-recursive functions (think Whitespace), but no practical programming language is based on $\mu$-recursive functions due to valid reasons. General recursive (i.e., $\mu$-recursive) functions are significantly less expressive than lambda calculi. Thus, they make a poor foundation for programming languages. You are also not correct that the TM is the basis of imperative PLs: in reality, good imperative programming languages are much closer to $\lambda$-calculus than they are to Turing machines. In terms of computability, $\mu$-recursive functions, Turing machine, and the untyped $\lambda$-calculus are all equivalent. However, the untyped LC has good properties that none of the other two have. It is very simple (only 3 syntactic forms and 2 computational rules), is highly compositional, and can express programming constructs relatively easily. Moreover, equipped with a simple type system (e.g., System $F\omega$ extended with $\mathsf{fix}$), the $\lambda$-calculus can be extremely expressive in that it can express many complex programming constructs easily, correctly and compositionally. You can also extend the $\lambda$-calculus easily to include constructs that are not lambdas. None of the other computational models mentioned above give you those nice properties. The Turing machine is neither compositional nor universal (you need to have a TM for each problem). There are no concepts of "functions", "variables" or "composition". It is also not exactly true that TMs are the basis of imperative PLs - FWIW, imperative PLs are much, much closer to lambda calculi with control operators than to Turing machines. See Peter J. Landin's "A Correspondence Between ALGOL 60 and Church's Lambda-Notation" for a detailed explanation. If you have programmed in Brainf**k (which actually implements a rather simple Turing machine), you will know that Turing machines are not a good idea for programming. $\mu$-recursive functions are similar to TMs in this respect. They are compositional, but not nearly as compositional as the LC. You also just can't encode useful programming constructs in $\mu$-recursive functions. Moreover, the $\mu$-recursive functions only compute over $\mathbb{N}$, and to compute over anything else you'd need to encode your data into natural numbers using some sort of Gödel numbering, which is painful. So, it is not a coincidence that most programming languages are somehow based off the $\lambda$-calculus! The $\lambda$-calculus has good properties: expressiveness, compositionality and extensibility, that other systems lack. However, Turing machines are good for studying computational complexity, and $\mu$-recursive functions are good for studying the logical notion of computability. They both have outstanding properties that the $\lambda$-calculus lacks, but in the field of programming $\lambda$-calculus clearly wins. In fact, there are many, many more Turing complete systems out there, but they lack any outstanding property whatsoever. Conway's Game of Life, LaTeX macros, and even (some claim) DNA are all Turing complete, but no one programs (i.e. do serious programming) with Conway or studies computational complexity using LaTeX macros. They simply lack good properties. Turing complete per se is nearly meaningless when it comes to programming. Also, many non-Turing complete computational systems are very useful when it comes to programming. 
Regular expressions and yacc are not Turing complete, but they are extremely powerful in solving a certain class of problems. Coq is also not Turing complete, but it is incredibly powerful (it's actually considered much more expressive than its Turing complete cousin, OCaml). When it comes to programming, Turing completeness is not the key, as many (close to) useless systems are uninterestingly Turing complete. You're not going to claim that Brainf**k or Whitespace are more powerful programming languages than Coq, are you? An expressive foundation is the key to powerful programming languages, and that's why modern programming languages are almost always based on the $\lambda$-calculus.
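To make the contrast concrete, here is a rough sketch (Python used purely as a metalanguage; the helper names are mine) of the unbounded minimisation operator $\mu$ and of how even trivial operations must be rebuilt over the bare naturals:

```python
def mu(p):
    """Unbounded minimisation: the least n with p(n) != 0 (may run forever)."""
    n = 0
    while p(n) == 0:
        n += 1
    return n

# Truncated subtraction a - b (never below zero), recovered via minimisation.
def monus(a: int, b: int) -> int:
    return mu(lambda n: 1 if n + b >= a else 0)

# Integer division: the least q with (q + 1) * b > a.
def div(a: int, b: int) -> int:
    return mu(lambda q: 1 if (q + 1) * b > a else 0)

print(monus(7, 3), div(17, 5))   # 4 3

# Anything that is not a natural number -- pairs, strings, syntax trees, programs --
# must first be squeezed through a Goedel numbering before these functions can touch it.
```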
{ "source": [ "https://cs.stackexchange.com/questions/95790", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/91986/" ] }
96,108
If there is an algorithm running in time $O(f(n))$ for some problem A, and somebody comes up with an algorithm running in time, $O(f(n)/g(n))$, where $g(n) = o(f(n))$, is it considered an improvement over the previous algorithm? Does it make sense, in the context of theoretical computer science, to come up with such an algorithm?
No, an algorithm running in time $O(f(n)/g(n))$, where $g(n) = o(f(n))$, is not necessarily considered an improvement. For example, suppose that $f(n) = n$ and $g(n) = 1/n$. Then $O(f(n)/g(n)) = O(n^2)$ is a worse time bound than $O(f(n)) = O(n)$. In order to improve upon an algorithm running in time $f(n)$, you need to come up with an algorithm running in time $o(f(n))$, that is, in time $g(n)$ for some function $g(n) = o(f(n))$. If all you know is that an algorithm runs in time $O(f(n))$, then it is not clear whether an algorithm running in time $O(g(n))$ is an improvement, whatever $f(n),g(n)$ are. This is because big O is only an upper bound on the running time. Instead, it is common to consider the worst-case time complexity, and to estimate it as a big $\Theta$ rather than just as a big $O$.
{ "source": [ "https://cs.stackexchange.com/questions/96108", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/90025/" ] }
96,679
Someone in a discussion brought up that (he reckons) there can be at least continuum number of strategies to approach a specific problem. The specific problem was trading strategies (not algorithms but strategies) but I think thats beside the point for my question. This got me thinking about the cardinality of the set of algorithms. I have been searching around a bit but have come up with nothing. I've been thinking that, since turing machines operate with a finite set of alphabet and the tape has to be indexable thus countable, it's impossible to have uncountable number of algorithms. My set theory is admittedly rusty so I am not certain at all my reasoning is valid and I probably wouldn't be able to prove it, but it's an interesting thought. What is the cardinality of the set of algorithms?
An algorithm is informally described as a finite sequence of written instructions for accomplishing some task. More formally, they're identified as Turing machines, though you could equally well describe them as computer programs. The precise formalism you use doesn't much matter but the fundamental point is that each algorithm can be written down as a finite sequence of characters, where the characters are chosen from some finite set, e.g., roman letters, ASCII or zeroes and ones. For simplicity, let's assume zeroes and ones. Any sequence of zeroes and ones is just a natural number written in binary. That means there are at most a countable infinity of algorithms, since every algorithm can be represented as a natural number. For full credit, you should be worried that some natural numbers might not code valid programs, so there might be fewer algorithms than natural numbers. (For bonus credit, you might also be wondering if it's possible that two different natural numbers represent the same algorithm.) However, print 1 , print 2 , print 3 and so on are all algorithms and all different, so there are at least countably infinitely many algorithms. So we conclude that the set of algorithms is countably infinite.
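The numbering argument can be made completely explicit (a Python sketch; reading the program's bytes as one big binary numeral is just one convenient canonical encoding):

```python
def program_to_number(source: str) -> int:
    # Prefix a 1-byte so that leading zero bits of the text are not lost.
    return int.from_bytes(b"\x01" + source.encode("utf-8"), "big")

def number_to_program(n: int) -> str:
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return raw[1:].decode("utf-8")

src = "print(1)"
n = program_to_number(src)
assert number_to_program(n) == src   # the map is injective (indeed invertible)

# ...and there are at least countably many distinct algorithms:
for k in range(3):
    print(program_to_number(f"print({k})"))
```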
{ "source": [ "https://cs.stackexchange.com/questions/96679", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/93130/" ] }
96,706
How can one generate all unlabeled trees with $\le n$ nodes? That is, generate and store the adjacency matrices of those graphs (not just count them)? The original post includes a visualization of all unlabeled trees with $\le 6$ nodes.
{ "source": [ "https://cs.stackexchange.com/questions/96706", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/83577/" ] }
97,148
I was reading about data compression algorithms and the theoretical limit for data compression. Recently I encountered a compression method called "Combinatorial Entropy Encoding", the main idea of this method is to encode the file as the characters presented in the file, their frequencies and the index of these characters permutation represented by the file. These documents may help explaining this method: https://arxiv.org/pdf/1703.08127 http://www-video.eecs.berkeley.edu/papers/vdai/dcc2003.pdf https://www.thinkmind.org/download.php?articleid=ctrq_2014_2_10_70019 However, in the first document I've read that by using this method they could compress some text to less than the Shannon limit (They didn't consider the space needed to save the frequency of the characters and the space needed to save the meta data of the file). I thought about it and I found that this method won't be very efficient for very small files but on the other hand it may work well with large files. Actually I don't fully understand this algorithm or the Shannon limit very well, I just know it's the sum of the probability of each character multiplied by $log_2$ of the reciprocal of the probability. So I have some questions: Does this compression method really compresses files to smaller than the Shannon limit? Is there any compression algorithm that compresses files to less than the Shannon limit (the answer to this question as far as I know is no)? Can a compression method that compresses files to smaller than the Shannon limit ever exist? If combinatorial encoding really compresses files beyond the Shannon limit, isn't it possible to compress the file again and again until we reach the file size we want?
Actually I don't fully understand this algorithm or the Shannon limit very well, I just know it's the sum of the probability of each character multiplied by log2 of the reciprocal of the probability. Herein lies the crux. The Shannon limit is not some universal property of a string of text. It is the property of a string of text plus a model that provides (possibly context-dependent) probabilities of symbols. It tells us how well that model could compress the text, assuming the model is accurate . If you use one model to compute the Shannon limit and then a different model to compress, if the second model is more accurate you can beat the original Shannon limit you had computed, but that's not really relevant.
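A small illustration of that model-dependence (Python; the two estimators below are my own simple stand-ins for "a model"): the same string has a different "limit" under an order-0 (i.i.d. symbols) model than under an order-1 (previous-symbol-conditioned) model.

```python
from collections import Counter
from math import log2

def order0_entropy(s: str) -> float:
    # Bits/symbol assuming symbols are drawn i.i.d. with their empirical frequencies.
    counts, n = Counter(s), len(s)
    return sum(c / n * log2(n / c) for c in counts.values())

def order1_entropy(s: str) -> float:
    # Bits/symbol assuming each symbol is predicted from the one before it.
    pairs, prev, n = Counter(zip(s, s[1:])), Counter(s[:-1]), len(s) - 1
    return -sum(c / n * log2(c / prev[a]) for (a, b), c in pairs.items())

text = "ab" * 500
print(order0_entropy(text))   # ~1.0 bit/symbol under the i.i.d. model
print(order1_entropy(text))   # ~0.0 bits/symbol once the previous symbol is known
```

A coder built on the second model compresses this text far below the "Shannon limit" computed from the first — not because it beats information theory, but because the first model was the wrong one.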
{ "source": [ "https://cs.stackexchange.com/questions/97148", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/90553/" ] }
97,539
I don't know where else to ask this question, I hope this is a good place. I'm just curious to know if its possible to make a lambda calculus generator; essentially, a loop that will, given infinite time, produce every possible lambda calculus function. (like in the form of a string). Since lambda calculus is so simple, having only a few elements to its notation I thought it might be possible (though, not very useful) to produce all possible combinations of that notation elements, starting with the simplest combinations, and thereby produce every possible lambda calculus function. Of course, I know almost nothing about lambda calculus so I have no idea if this is really possible. Is it? If so, is it pretty straightforward like I've envisioned it, or is it technically possible, but so difficult that it is effectively impossible? PS. I'm not talking about beta-reduced functions, I'm just talking about every valid notation of every lambda calculus function.
Sure, this is a standard encoding exercise. First of all, let $p : \mathbb N^2 \to \mathbb N$ any bijective computable function, called a pairing function. A standard choice is $$ p(n,m) = \dfrac{(n+m)(n+m+1)}{2}+n $$ One can prove that this is a bijection, so given any natural $k$ , we can compute $n,m$ such that $p(n,m)=k$ . To enumerate lambda terms, fix any enumeration for variables names: $x_0,x_1,x_2,\ldots$ . Then, for each natural number $i$ , print $lambda(i)$ , defined recursively as follows: if $i$ is even, let $j=i/2$ and return variable $x_j$ if $i$ is odd, let $j=(i-1)/2$ if $j$ is even, let $k=j/2$ and find $n,m$ such that $p(n,m)=k$ ; compute $N=lambda(n), M=lambda(m)$ ; return application $(NM)$ if $j$ is odd, let $k=(j-1)/2$ and find $n,m$ such that $p(n,m)=k$ ; compute $M=lambda(m)$ ; return abstraction $(\lambda x_n.\ M)$ This program is justified by the following "algebraic" bijection involving the set of all lambda terms $\Lambda$ : $$ \Lambda \simeq \mathbb N + (\Lambda^2 + \mathbb N \times \Lambda) $$ which is read as "the lambda terms, syntactically, are the disjoint union of 1) variables (represented as a natural), 2) applications (made by two lambda terms), and 3) abstraction (a pair variable/natural + lambda term)". Given that, we recursively apply computable bijections $\mathbb N^2 \simeq \mathbb N$ (i.e. the function $p$ ) and $\mathbb N + \mathbb N \simeq \mathbb N$ (the standard even/odd one) to obtain the algorithm above. This procedure is general, and will work on almost any language generated through a context-free grammar, which will provide a similar isomorphism to the one above.
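Here is the enumeration transcribed into code (a Python sketch; the variable spelling x0, x1, …, the lambda rendering, and math.isqrt for inverting the pairing function are my own choices):

```python
from math import isqrt

def unpair(k: int):
    # Invert p(n, m) = (n + m)(n + m + 1)/2 + n.
    w = (isqrt(8 * k + 1) - 1) // 2
    n = k - w * (w + 1) // 2
    return n, w - n

def lam(i: int) -> str:
    if i % 2 == 0:                       # even: a variable
        return f"x{i // 2}"
    j = (i - 1) // 2
    if j % 2 == 0:                       # application (N M)
        n, m = unpair(j // 2)
        return f"({lam(n)} {lam(m)})"
    n, m = unpair((j - 1) // 2)          # abstraction (\x_n. M)
    return f"(\\x{n}. {lam(m)})"

for i in range(8):
    print(i, lam(i))
# 0 x0, 1 (x0 x0), 2 x1, 3 (\x0. x0), 4 x2, 5 (x0 (x0 x0)), 6 x3, 7 (\x0. (x0 x0))
```

Printing lam(i) for i = 0, 1, 2, … visits every term the grammar generates, which is exactly the generator asked for.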
{ "source": [ "https://cs.stackexchange.com/questions/97539", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/47422/" ] }
99,806
I understand the concept behind for and do/while loops, but I am trying to understand what is happening at the hardware level that allows a loop to run infinitely. Technically wouldn't it have to stop at some point because there are only a couple billion transistors in a microprocessor? Maybe my logic is off.
step 1: take a calculator step 2: input a number step 3: add 1 to the number step 4: subtract 1 from the number step 5: goto step 3 If you didn't eventually get tired or bored you would be switching between the 2 results forever. Computers don't get tired or bored.
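The same recipe in code (Python; the literal 7 just stands in for "input a number"):

```python
x = 7              # step 2: input a number
while True:        # step 5: goto step 3 -- the machine never gets tired or bored
    x = x + 1      # step 3
    x = x - 1      # step 4
```

Nothing here uses up transistors or any other resource that runs out; the same handful of gates is reused on every pass, so the loop continues until the power is cut.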
{ "source": [ "https://cs.stackexchange.com/questions/99806", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/93267/" ] }
100,206
The Problem There is no easy way to get a permutation with a regex. Permutation: Getting a word $$w=x_1…x_n$$ ("aabc") to another order, without changing number or kind of letters. Regex: Regular expression. For verification: "Regex permutations without repetition" The answer creates JavaScript code instead of a regex, assuming this would be more simple. "How to find all permutations of a given word in a given text" – The answer doesn't use regexes either. "Regex to match all {1, 2, 3, 4} without repetition" – The answer uses regexes, but it's neither adaptable nor simple. This answer even claims: "A regular expression cannot do what you're asking for. It cannot generate permutations from a string" . The kind of solution I am searching for It should have the form: »aabc« (or anything else you could use a opening and closing parentheses) (aabc)! (similar to (abc)? but with another symbol in the end) [aabc]! (similar to [abc]+ but with another symbol in the end) Advantages of these solutions They are: easy adaptable reusable Why this should exist Regexes are a way to describe a grammar of a regular language. They have the full power to be any kind of regular language. Let's say, regular languages are powerful enough for permutations (proof below) – why is there no easy way to express this? So my question is: (Why) Is my proof wrong? If it is right: Why is there no easy way to express permutations? The proof Regular expressions are one way to note the grammar of a regular language. They can describe any regular languages grammar. Another way to describe any regular languages (that have a finite number of letters within their alphabet) grammar are non-deterministic Automatons (with a finite number of states). Having a finite number of letters I can create this automaton: (Example. Formal: see below) Grammar that accepts permutations of "abbc": (sry for numbers on top, maybe someone knows how to make this part looking better) s -> ah¹ s -> bh² s -> ch³ h¹ -> bh¹¹ h¹ -> ch¹² h² -> ah¹¹ (no typo! equivalence) h² -> bh²² h² -> ch²³ h³ -> ah¹² h³ -> bh²³ h¹¹ -> bc h¹¹ -> cb h¹² -> bb h²² -> ac h²² -> ca h²³ -> ab h²³ -> ba More formal: (using a finite-state-automaton but this could be made with grammar as well) A word q (with finite length) to which any permutation should reach an accepting state. X is the finite alphabet. Set of states S contains any order of letters up to the length of q. (So the size of S is finite.) Plus one state of "any longer word". state transition function d which takes a letter and moves on the state that corresponds to the now read part of the word. F is a set of that states that are exact permutations of q. So it is possible to create a finite-state automaton for accepting permutations of a given word. Moving on with the proof So I have proven that regular languages have the power to check for permutations, haven't I? So why is there no approach to reach this with Regexes? It's a useful functionality.
The fundamental theorems of formal language theory are that regular expressions, regular grammars, deterministic finite automata (DFAs) and nondeterministic finite automata (NFAs) all describe the same kinds of languages: namely the regular languages. The fact that we can describe these languages in so many completely different ways suggests that there's something natural and important about these languages, in the same way that the equivalence of Turing machines, the lambda calculus and all kinds of other things suggests that the computable languages are natural and important. They're not just an artifact of whatever random decisions the original discoverer made. Suppose we add a new rule for creating regular expressions: if $R$ is a regular expression, then $\pi(R)$ is a regular expression, and it matches every permutation of every string matched by $R$ . So, for example, $L(\pi(abc)) = \{abc, acb, bac, bca, cab, cba\}$ . The problem is that this breaks the fundamental equivalences described above. $L\big(\pi((ab)^*))\big)$ is the language of strings that contain an equal number of $a$ s and $b$ s and this isn't a regular language. Compare this with, for example, adding a negation or reversal operator to regular expressions, which doesn't change the class of languages that are accepted. So, to answer the title question, regular expressions can't do permutations and we don't add that ability because then regular expressions wouldn't match regular languages. Having said that, it's possible that "regular expressions with permutations" would also be an interesting class of languages with lots of different characterizations.
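For any one fixed word, the permutation language is finite and hence regular, so you can even build an ordinary regex for it by brute force. The sketch below (Python) does exactly that — and also shows why this is no substitute for a general $\pi(R)$ operator: the pattern grows factorially with the word length and says nothing about infinite languages such as $L(\pi((ab)^*))$.

```python
import re
from itertools import permutations

def permutation_regex(word: str) -> str:
    # Explicit alternation over all distinct permutations of a fixed word.
    alternatives = sorted({"".join(p) for p in permutations(word)})
    return "^(?:" + "|".join(alternatives) + ")$"

pattern = permutation_regex("aabc")
print(pattern)                             # ^(?:aabc|aacb|...)$ -- 12 alternatives
print(bool(re.match(pattern, "baca")))     # True
print(bool(re.match(pattern, "aaab")))     # False
# For a 10-letter word with distinct letters the same pattern would need
# 10! = 3,628,800 alternatives -- technically a regex, practically useless.
```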
{ "source": [ "https://cs.stackexchange.com/questions/100206", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/96507/" ] }
100,599
I would like to know if there has been any work relating legal code to complexity. In particular, suppose we have the decision problem "Given this law book and this particular set of circumstances, is the defendant guilty?" What complexity class does it belong to? There are results that have proven that the card game Magic: the Gathering is both NP and Turing-complete so shouldn't similar results exist for legal code?
It's undecidable because a law book can include arbitrary logic. A silly example censorship law would be "it is illegal to publicize any computer program that does not halt". The reason results for MTG exist and are interesting is because it has a single fixed set of (mostly) unambiguous rules, unlike law which is ever changing, horribly localized and endlessly ambiguous.
{ "source": [ "https://cs.stackexchange.com/questions/100599", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/58193/" ] }
100,604
Hi I have been reading Ch 34(NP-Completeness) Section 34.1 of CLRS and I am confused why do we need to consider different encodings. Everything is represented as binary at the end so why consider different encodings of the input? Any help is highly appreciated.
{ "source": [ "https://cs.stackexchange.com/questions/100604", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/94354/" ] }
101,038
I tried a few cases and found that any two spanning trees of a simple graph have some common edges. That is, I couldn't find any counterexample so far, but I couldn't prove or disprove the claim either. How can one prove or disprove this conjecture?
No. Consider the complete graph $K_4$ on vertices $\{1, 2, 3, 4\}$: its six edges split into two edge-disjoint spanning trees, for example the path $1\text{–}2\text{–}3\text{–}4$ (edges $12, 23, 34$) and the tree with edges $13, 14, 24$. These two spanning trees have no edge in common.
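A quick check of the counterexample, without assuming any graph library (Python):

```python
vertices = {1, 2, 3, 4}
tree_a = {(1, 2), (2, 3), (3, 4)}   # the path 1-2-3-4
tree_b = {(1, 3), (1, 4), (2, 4)}   # a second spanning tree of K4

def is_spanning_tree(edges, vertices):
    # n - 1 edges plus connectivity characterises a spanning tree.
    if len(edges) != len(vertices) - 1:
        return False
    reached = {next(iter(vertices))}
    changed = True
    while changed:
        changed = False
        for u, v in edges:
            if (u in reached) != (v in reached):
                reached |= {u, v}
                changed = True
    return reached == vertices

assert is_spanning_tree(tree_a, vertices)
assert is_spanning_tree(tree_b, vertices)
assert tree_a.isdisjoint(tree_b)    # no common edges
print("K4 contains two edge-disjoint spanning trees")
```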
{ "source": [ "https://cs.stackexchange.com/questions/101038", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/58433/" ] }
101,324
I totally understand what big $O$ notation means. My issue is with what we mean when we say $T(n)=O(f(n))$, where $T(n)$ is the running time of an algorithm on input of size $n$. I understand the semantics of it, but $T(n)$ and $O(f(n))$ are two different things. $T(n)$ is an exact number, while $O(f(n))$ is not a function that spits out a number, so technically we can't say $T(n)$ equals $O(f(n))$. If someone asks you what the value of $O(f(n))$ is, what would be your answer? There is no answer.
Strictly speaking, $O(f(n))$ is a set of functions. So the value of $O(f(n))$ is simply the set of all functions that grow asymptotically not faster than $f(n)$ . The notation $T(n) = O(f(n))$ is just a conventional way to write that $T(n) \in O(f(n))$ . Note that this also clarifies some caveats of the $O$ notation. For example, we write that $(1/2) n^2 + n = O(n^2)$ , but we never write that $O(n^2)=(1/2)n^2 + n$ . To quote Donald Knuth (The Art of Computer Programming, 1.2.11.1): The most important consideration is the idea of one-way equalities . [...] If $\alpha(n)$ and $\beta(n)$ are formulas that involve the $O$ -notation, then the notation $\alpha(n)=\beta(n)$ means that the set of functions denoted by $\alpha(n)$ is contained in the set denoted by $\beta(n)$ .
{ "source": [ "https://cs.stackexchange.com/questions/101324", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/63468/" ] }
101,812
Does anyone know how efficient was the first Turing machine that Alan Turing made? I mean how many moves did it do per second or so... I'm just curious. Also couldn't find any info about it on the web.
"Turing machines" (or "a-machines") are a mathematical concept, not actual, physical devices. Turing came up with them in order to write mathematical proofs about computers, with the following logic: Writing proofs about physical wires and switches is extremely difficult. Writing proofs about Turing machines is (relatively) easy. Anything physical wires and switches can do, you can build a Turing machine to do (*) (**). But Turing never built an actual machine that wrote symbols on a paper tape. Other people have, but only as a demonstration: here's one you can make out of a business card , for example. Why did he never build a physical Turing machine? To put it simply, it just wouldn't be that useful. The thing is, nobody's ever come up with a model of computation that's stronger than a Turing machine (in that it can compute things a Turing machine can't). And it's been proven that several other models of computation, such as the lambda calculus or the Python programming language, are "Turing-complete": they can do everything a Turing machine can. So for anything except a mathematical proof, it's generally much more useful to use one of these other models. Then you can use the Turing machines in your proofs without any loss of generality. (*) Specifically, any calculation : a Turing machine can't turn on a lightbulb, for example, but lightbulbs aren't very interesting from a theory-of-computation standpoint. (**) As has been pointed out in the comments, Turing's main definition of "computer" was a human following an algorithm. He conjectured that there's no computation a human can do that a Turing machine can't do—but nobody has been able to prove this, in part because defining exactly what a human mind can do is incredibly difficult. Look into the Church-Turing Thesis if you're interested.
{ "source": [ "https://cs.stackexchange.com/questions/101812", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/98050/" ] }
102,466
As part of some blockchain-related research I am currently undertaking, I keep seeing the notion of using blockchains for a variety of real-world applications thrown about loosely. Therefore, I propose the following questions: What important/crucial real-world applications use blockchain? To add on to the first question, more specifically: what real-world applications actually need blockchain, whether or not they currently use it? Following a comment, I note that this disregards cryptocurrencies. However, the use of smart contracts can have other potential applications aside from the benefits they pose to cryptocurrencies.
Apart from Bitcoin and Ethereum (if we are generous) there are no major and important uses today. It is important to notice that blockchains have some severe limitations. A couple of them being: It only really works for purely digital assets The digital asset under control needs to keep its value even if it's public All transactions need to be public A rather bad confirmation time Smart contracts are scary Purely digital assets If an asset is actually a physical asset with just a digital "twin" that is being traded, we will risk that local jurisdiction (i.e. your law enforcement) can have a different opinion of ownership than what is on the blockchain. To take an example; suppose that we are trading (real and physical) bikes on the blockchain, and that on the blockchain, we put its serial number. Suppose further that I hack your computer and put the ownership of your bike to be me. Now, if you go to the police, you might be able to convince them that the real owner of the bike is you, and thus I have to give it back. However, there is no way of making me give you the digital twin back, thus there is a dissonance: the bike is owned by you, but the blockchain claims it's owned by me. There are many such proposed use cases (trading physical goods on a blockchain) out in the open of trading bikes, diamonds, and even oil. The digital assets keep value even if public There are many examples where people want to put assets on the blockchain, but are somehow under the impression that that gives some kind of control. For instance, musician Imogen Heap is creating a product in which all musicians should put their music on the blockchain and automatically be paid when a radio plays your hit song. They are under the impression that this creates an automatic link between playing the song and paying for the song. The only thing it really does is to create a very large database for music which is probably quite easy to download. There is currently no way around having to put the full asset visible on the chain. Some people are talking about "encryptions", "storing only the hash", etc., but in the end, it all comes down to: publish the asset, or don't participate. Public transactions In business it is often important to keep your cards close to your chest. You don't want real time exposure of your daily operations. Some people try to make solutions where we put all the dairy farmers' production on the blockchain together with all the dairy stores' inventory. In this way we can easily send trucks to the correct places! However, this makes both farmers and traders liable for inflated prices if they are overproducing/under-stocked. Other people want to put energy production (solar panels, wind farms) on the blockchain. However, no serious energy producer will have real time production data out for the public. This has major impact on the stock value and that kind of information is the type you want to keep close to your chest. This also holds for so-called green certificates , where you ensure you only use "green energy". Note : There are theoretical solutions that build on zero-knowledge proofs that would allow transactions to be secret. However, these are nowhere near practical yet, and time will show if this item can be fixed. Confirmation time You can, like Ethereum, make the block time as small as you would like. In Bitcoin, the block time is 10 minutes, and in Ethereum it is less than a minute (I don't remember the specific figure). 
However, the smaller the block time, the higher the chance of long-lived forks. To ensure your transaction is confirmed you still have to wait quite long. There are currently no good solutions here either. Smart contracts are scary Smart contracts are difficult to write. They are computer programs that move assets from one account to another (or do something more complicated). However, we want traders and "normal" people to be able to write these contracts, not just expert programmers. You can't undo a transaction. This is a tough nut to crack! If you are doing high-value trading and end up writing one zero too many in a transaction (say \$10M instead of \$1M), you call your bank immediately! That fixes it. If not, let's hope you have insurance. In a blockchain setting, you have neither a bank nor insurance. Those \$9M are gone, and it was due to a typo in a smart contract or in a transaction. Smart contracts really are playing with fire. It's too easy to empty all your assets in a single click. And it has happened, several times. People have lost hundreds of millions of dollars due to smart contract errors. Source: I work for an energy company doing wind and solar energy production as well as trading oil and gas, and have been working on blockchain solution projects.
{ "source": [ "https://cs.stackexchange.com/questions/102466", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/65360/" ] }
102,647
According to Wikipedia : Shannon's entropy measures the information contained in a message as opposed to the portion of the message that is determined (or predictable). Examples of the latter include redundancy in language structure or statistical properties relating to the occurrence frequencies of letter or word pairs, triplets etc. So entropy is a measure of the amount of information contained in a message. Entropy coders are used to losslessly compress such a message to the minimum number of bits needed to represent it (the entropy). To me this looks like a perfect entropy encoder would be all that is needed to losslessly compress a message as much as possible. Many compression algorithms however use steps before entropy coding to supposedly reduce the entropy of the message. According to the German Wikipedia: Entropiekodierer werden häufig mit anderen Kodierern kombiniert. Dabei dienen vorgeschaltete Verfahren dazu, die Entropie der Daten zu verringern. In English: Entropy coders are frequently combined with other encoders. Preceding steps serve to reduce the entropy of the data. For example, bzip2 uses the Burrows-Wheeler transform followed by a move-to-front transform before applying entropy coding (Huffman coding in this case). Do these steps really reduce the entropy of the message, which would imply reducing the amount of information contained in the message? This seems contradictory to me, since that would mean that information was lost during compression, preventing lossless decompression. Or do they merely transform the message to improve the efficiency of the entropy coding algorithm? Or does entropy not correspond directly to the amount of information in the message?
A lot of casual descriptions of entropy are confusing in this way because entropy is not quite as neat and tidy a measure as sometimes presented. In particular, the standard definition of Shannon entropy stipulates that it only applies when, as Wikipedia puts it, "information due to independent events is additive." In other words, independent events must be statistically independent. If they aren't, then you have to find a representation of the data that defines events in ways that make them truly independent. Otherwise, you will overestimate the entropy. To put it yet another way, Shannon entropy only applies to true probability distributions, and not to random processes in general. For concrete examples of processes that don't fit the assumptions of Shannon entropy, consider... Markov processes A Markov process generates a series of events in which the most recent event is sampled from a distribution that depends on one or more previous events. Obviously a huge number of real-world phenomena are better modeled as Markov processes than as discrete, independent probability distributions. For example: the text you're reading right now! The naively calculated Shannon entropy rate of a Markov process will always be greater than or equal to the true entropy rate of the process. To get the true entropy of the process, you need to take into account the statistical dependence between events. In simple cases, the formula for that looks like this : $$ H(\mathcal{S}) = - \sum_i p_i \sum_j \ p_i (j) \log p_i (j) $$ This can also be represented like so : $$ H(Y) = - \sum_{ij} \mu_i P_{ij} \log P_{ij} $$ Again quoting Wikipedia, here " $\mu_i$ is the asymptotic distribution of the chain" -- that is, the overall probability that a given event will occur over a long horizon. This is all a complicated way of saying that even when you can calculate the overall probability of a given event, certain sequences of events are more likely than others to be generated by a Markov process. So for example, the following three strings of English words are increasingly less likely: They ran to the tree The tree ran to they Tree the they to ran But Shannon entropy will assess all three strings as equally likely. The Markov process entropy takes the difference into account, and as a result, it assigns a lower entropy rate to the process. Entropy rates are model-dependent If you zoom way out, here's the big picture: the entropy rate of a given sequence of events from an unknown source is model-dependent. You'll assign a different entropy rate to a particular series of events depending on how you model the process that generated them. And very frequently, your model of the process isn't going to be quite correct. This isn't a simple or easy to solve problem. In fact, in general, it is impossible to assign a true entropy rate to a sufficiently long and complex sequence of events if you don't know what the true underlying process is. This is a central result in algorithmic information theory . What it means in practice is that given an unknown source of sequences of events, different models will yield different entropies, and it's impossible to know which is correct in the long run -- although the one that assigns the lowest entropy is probably the best.
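As a rough illustration of this model dependence, here is a small Python sketch (my own example, not from the answer) that estimates both the i.i.d. symbol entropy and a first-order Markov entropy rate from the same sequence; for a sequence with strong sequential structure the Markov estimate comes out lower.

```python
from collections import Counter
from math import log2

def iid_entropy(seq):
    """Shannon entropy assuming symbols are independent draws."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def markov_entropy_rate(seq):
    """First-order Markov entropy rate: H = -sum_i mu_i sum_j P_ij log P_ij,
    with mu and P estimated empirically from adjacent symbol pairs."""
    pair_counts = Counter(zip(seq, seq[1:]))
    out_counts = Counter(seq[:-1])             # empirical "state" distribution mu
    total = len(seq) - 1
    h = 0.0
    for (a, b), c in pair_counts.items():
        p_ij = c / out_counts[a]               # estimate of P(b | a)
        mu_i = out_counts[a] / total
        h -= mu_i * p_ij * log2(p_ij)
    return h

# A sequence with heavy sequential structure: long runs of repeated symbols.
seq = "aaaabbbb" * 500
print(f"i.i.d. entropy : {iid_entropy(seq):.3f} bits/symbol")          # 1.000
print(f"Markov entropy : {markov_entropy_rate(seq):.3f} bits/symbol")  # noticeably lower
```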
{ "source": [ "https://cs.stackexchange.com/questions/102647", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/98845/" ] }
103,816
The Short Story A famous computer scientist, Tarjan , wrote a book years ago. It contains absolutely bizarre pseudocode. Would someone please explain it? The Long Story Tarjan is known for many accomplishments, including the fact that he was the coinventor of splay trees . He published a book, " Data Structures and Network Algorithms ," during the 1980s. All of the pseudo-code in Tarjan's book is written in a language of his own devising. The pseudo-code conventions are very regimented. It's almost a true language, and one could imagine writing a compiler for it. Tarjan writes that his language is based upon the following three: Dijkstra's Guarded Command Language SETL ALGOL I am hoping that someone familiar with one or two of the above languages, or the work of Tarjan, will be able to answer my question. An example of a function written in Tarjan's language is shown below: heap function mesh (heap nodes h1, h2); if key(h1) > key(h2) → h1 ⟷ h2 fi; right (h1) := if right(h1) = null → h2 |right(h1) ≠ null → mesh (right(h1), h2) fi; if rank (left (h1)) < rank (right (h1)) → left(h1) ⟷ right(h1) fi; rank (h1) := rank(right(h1)) + 1; return h1, end mesh; I have seen lots of pseudo-code, but I have never seen anything like Tarjan's. How does Tarjan's pseudocode work? How can examples of Tarjan's pseudocode be re-written as something which looks more like C or Java? It need not even be C or Java. The if-else construct in Tarjan's language is not only different from C-family languages, but also different from Python, MATLAB and many others.
Table of Contents I will divide my explanation of Tarjan's pseudocode into the following sections: Tarjan's If-else Blocks (the -> & | operators) Assignment and Equality Tests ( := and = ) There is else if , but no else construct Tarjan's Conditional Assignment Operator := if Additional Examples of Tarjan's if and := if 5.5. Tarjan Arrays (or Lists) Summary of Operators Tarjan's Double-pointed Arrow Operator ( ⟷ ) Tarjan's do-loops are like C/Java while-loops Tarjan's Conditional-assignment operator with all false conditions (1) Tarjan's If-else Blocks (the operators → and | ) The if-else construct is perhaps the most fundamental control structure in Tarjan's language. In addition to C-like if-blocks, if-else behavior is very nearly built into Tarjan's assignments and Tarjan's while loops. Tarjan's arrow operator -> (or →) is a delimiter between the condition of an if-statement and the execution block of an if-statement. For example, in Tarjan's language we might have: # Example One if a = 4 → x := 9 fi If we partially translate the line of Tarjan code above into C or Java, we get the following: if (a = 4) x := 9 fi Instead of a right curly brace (as in C and Java), Tarjan ends an if -block with an ALGOL-like backwards spelling of the key-word: fi If we continue translating our above example, we get: if (a = 4) { x := 9 } (2) Assignment and Equality Tests ( := and = ) Tarjan takes these operators from ALGOL (later also seen in Pascal). Tarjan uses = for equality tests, not assignments (so it works like Java == ). For assignment, Tarjan uses := , which works like Java = . Thus, if we continue translating our example, we have: if (a == 4) { x = 9 } A vertical bar (or "pipe" or | ) in Tarjan's language is equivalent to the else if keyword in C or Java. For example, in Tarjan's language we might have: # Example Two if a = 4 → x := 9 | a > 4 → y := 11 fi The Tarjan-code above translates to: if (a == 4) { x = 9 } else if (a > 4) { y = 11 } (3) else if only and no else construct Earlier, I covered the basics of if -statements without describing the nuances. However, we will now discuss a small detail. The last clause in a Tarjan-ian if-else block must always contain an arrow ( → ) operator. As such, there is no else in Tarjan's language, only else if . The closest thing to an else -block in Tarjan's language is to make the rightmost test-condition true . if a = 4 → x := 9 | a > 4 → y := 11 | true → z := 99 fi In C/Java, we would have: if (a == 4) { x = 9 } else if (a > 4) { y = 11 } else { // else if (true) z = 99 } Examples are easier to understand than general descriptions. However, now that we have some examples under our belt, know that the general form of Tarjan's if-else construct is as follows: if condition → stuff to do | condition → stuff to do [...] | condition → stuff to do fi The character | is like else if . The character → separates the test-condition from the stuff-to-do. (4) Tarjan's Conditional Assignment Operator := if Tarjan's if can be used in two very different ways. So far, we have only described one of the uses of the Tarjanian if . Somewhat confusingly, Tarjan still uses the notation/syntax if for the second type of if -construct. Which if is being used is based on context. Analyzing the context is actually very easy to do, as the second type of Tarjan- if is always pre-fixed by an assignment operator.
For example, we might have the following Tarjan code: # Example Three x := if a = 4 → 9 fi Begin Digression After working with Tarjan code for a while, you get used to the order of operations. If we parenthesize the test condition in the example above, we obtain: x := if (a = 4) → 9 fi a = 4 is not an assignment operation. a = 4 is like a == 4 -- it returns true or false. End Digression It can help to think of := if as syntax for a single operator, distinct from := and if . In fact, we will refer to the := if operator as the "conditional assignment" operator. For if we list (condition → action) . For := if we list (condition → value) where value is the right-hand-side value we might assign to the left-hand-side lhs # Tarjan Example Four lhs := if (a = 4) → rhs fi in C or Java might look like: # Example Four if (a == 4) { lhs = rhs } Consider the following example of "conditional assignment" in Tarjanian code: # Tarjan Instantiation of Example Five x := if a = 4 → 9 | a > 4 → 11 | true → 99 fi In C/Java, we would have: // C/Java Instantiation of Example Five if (a == 4) { x = 9 } else if (a > 4) { x = 11 } else if (true) { // else x = 99 } (5) Summary of Operators: So far, we have: := ...... Assignment operator (C/Java = ) = ...... Equality test (C/Java == ) → ...... Delimiter between test-condition of an if-block and the body of an if-block | ..... C/Java else-if if ... fi ..... if-else block := if... fi ..... Conditional assignment based on an if-else block (5.5) Tarjan Lists/Arrays: Tarjan's Language has built-in array-like containers. The syntax for Tarjan arrays is much more intuitive than the notation for Tarjan if else statements. list1 := ['lion', 'witch', 'wardrobe']; list2a := [1, 2, 3, 4, 5]; list2b := [1, 2]; list3 := ["a", "b", "c", "d"]; list4 := [ ]; # an empty array Tarjan array elements are accessed with parentheses () , not square-brackets [] Indexing begins at 1 . Thus, list3 := ["a", "b", "c", "d"] # list3(1) == "a" returns true # list3(2) == "b" returns true Below we show how to create a new array containing the 1st and 5th elements of [1, 2, 3, 4, 5, 6, 7] nums := [1, 2, 3, 4, 5, 6, 7] new_arr := [nums(1), nums(5)] The equality operator is defined for arrays. The following code prints false , since the two arrays are not equal: x := false if [1, 2] = [1, 2, 3, 4, 5] --> x := true fi print(x) Tarjan's way to test if an array is empty is to compare it to an empty array arr := [1, 2] print(arr = [ ]) # `=` is equality test, not assignment One can create a view (not copy) of a sub-array, by providing multiple indices to operator () combined with .. list3 := ["a", "b", "c", "d"] beg := list3(.. 2) # beg == ["a", "b"] # beg(1) == "a" end := list3(3..) # end == ["c", "d"] # end(1) == "c" mid := list3(2..3) # mid == ["b", "c"] # mid(2) == "c" # `list3(4)` is valid, but `mid(4)` is not (6) Additional Examples of Tarjan's if and := if The following is another example of a Tarjan conditional assignment ( := if ): # Tarjan Example Six a := (false --> a | true --> b | false --> c1 + c2 | (2 + 3 < 99) --> d) (true --> b) is the leftmost (cond --> action) clause having a true condition.
Thus, the original assignment Example Six has the same assignment-behavior as a := b Below is our most complicated example of Tarjan code thus far: # Tarjan Example -- merge two sorted lists list function merge (list s, t); return if s = [ ] --> t | t = [ ] --> s | s != [ ] and t != [ ] and s(1) <= t(1) --> [s(1)] & merge(s(2..), t) | s != [ ] and t != [ ] and s(1) > t(1) --> [t(1)] & merge(s, t(2..)) fi end merge; The following is a translation of Tarjan's code for merging two sorted lists. The following is not exactly C or Java, but it is much closer to C/Java than the Tarjan version. list merge (list s, list t) { if (s is empty) { return t; } else if (t is empty){ return s; } else if (s[1] <= t[1]) { return CONCATENATE([s[1]], merge(s[2..], t)); } else { // else if (s[1] > t[1]) return CONCATENATE([t[1]], merge(s, t[2..])); } } Below is yet another example of Tarjan-code and a translation in something similar to C or Java: heap function meld (heap h1, h2); return if h1 = null --> h2 | h2 = null --> h1 | h1 not null and h2 not null --> mesh (h1, h2) fi end meld; Below is the C/Java translation: HeapNode meld (HeapNode h1, HeapNode h2) { if (h1 == null) { return h2; } else if (h2 == null) { return h1; } else { return mesh(h1, h2); } } // end function (7) Tarjan's Double-pointed Arrow Operator ( <--> ) Below is an example of Tarjan code: x <--> y What Does a Double Arrow ( ⟷ ) Operator Do in Tarjan's Language? Well, almost all variables in Tarjan's Language are pointers. <--> is a swap operation. The following prints true x_old := x y_old := y x <--> y print(x == y_old) # prints true print(y == x_old) # prints true After performing x <--> y , x points to the object which y used to point to and y points to the object which x used to point to. Below is a Tarjan statement using the <--> operator: x := [1, 2, 3] y := [4, 5, 6] x <--> y Below is a translation from the Tarjan code above to alternative pseudocode: Pointer X = address of array [1, 2, 3]; Pointer Y = address of array [4, 5, 6]; Pointer X_OLD = address of whatever X points to; X = address of whatever Y points to; Y = address of whatever X_OLD points to; Alternatively, we could have: void operator_double_arrow(Array** lhs, Array** rhs) { // swap what lhs and rhs point to Array* old_lhs = *lhs; *lhs = *rhs; *rhs = old_lhs; return; } int main() { Array* lhs = new Array<int>(1, 2, 3); Array* rhs = new Array<int>(4, 5, 6); operator_double_arrow(&lhs, &rhs); delete lhs; delete rhs; return 0; } Below is an example of one of Tarjan's functions using the ⟷ operator: heap function mesh (heap nodes h1, h2); if key(h1) > key(h2) → h1 ⟷ h2 fi; right (h1) := if right(h1) = null → h2 |right(h1) ≠ null → mesh (right(h1), h2) fi; if rank (left (h1)) < rank (right (h1)) → left(h1) ⟷ right(h1) fi; rank (h1) := rank(right(h1)) + 1; return h1; end mesh; Below is a translation of Tarjan's mesh function into pseudo-code which is not C, but looks more like C (relatively speaking). The purpose of this is to illustrate how Tarjan's ⟷ operator works.
node pointer function mesh(node pointers h1, h2) { if (h1.key > h2.key) { // swap h1 and h2 node pointer temp; temp = h1; h1 = h2; h2 = temp; } // Now, h1.key <= h2.key if (h1.right == null) { h1.right = h2; } else // h1.right != null { h1.right = mesh(h1.right, h2); } if (h1.left.rank < h1.right.rank) { // swap h1.left and h1.right node pointer temp; temp = h1.left; h1.left = h1.right; h1.right = temp; } h1.rank = h1.right.rank + 1; return h1; } (8) Tarjan's do-loops are like C/Java while-loops Tarjan's language if and for constructs are familiar to C/Java programmers. However, the Tarjan keyword for a while-loop is do . All do -loops end with the keyword od , which is the backwards spelling of do . Below is an example: sum := 0 do sum < 50 → sum := sum + 1 od In C-style pseudocode, we have: sum = 0; while(sum < 50) { sum = sum + 1; } The above is actually not quite right. A Tarjan do-loop is really a C/Java while(true) with an if-else block nested inside. A more literal translation of the Tarjan code is as follows: sum = 0; while(true) { if (sum < 50) { sum = sum + 1; continue; // This `continue` statement is questionable } break; } Below, we have a more complicated Tarjan do -loop: sum := 0 do sum < 50 → sum := sum + 1 | sum < 99 → sum := sum + 5 od C/Java-style pseudocode for the complicated Tarjan do -loop is as follows: sum = 0; while(true) { if (sum < 50) { sum = sum + 1; continue; } else if (sum < 99) { sum = sum + 5; continue; } break; } (9) Tarjan's Conditional-assignment operator with all false conditions Although the lengthy explanation above covers most things, a few matters are still left unresolved. I hope that someone else will someday write a new, improved answer based on mine which answers these quandaries. Notably, when the conditional assignment operator := if is used, and no condition is true, I am not sure what value is assigned to the variable. x := if (false --> 1 | false --> 2 | (99 < 2) --> 3) fi I am not sure, but it is possible that no assignment is made to x : x = 0; if (false) { x = 1; } else if (false) { x = 2; } else if (99 < 2) { x = 3; } // At this point (x == 0) You could require that the left-hand-side variable seen in an := if statement be previously declared. In that case, even if all conditions are false, the variable will still have a value. Alternatively, perhaps all-false conditions represent a runtime error. Another alternative is to return a special null value, and store null in the left-hand argument of the assignment.
{ "source": [ "https://cs.stackexchange.com/questions/103816", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/99925/" ] }
103,844
Suppose that L0, L1, L2 are languages over the same alphabet and that L0 ⊆ L1 ⊆ L2. Is it true that if L0 and L2 are regular, then L1 must be regular as well? By regular I mean: the set of words accepted by a finite automaton. Suppose L0 = { $a^{\textrm{n}}$ | n = 2} and L2 = { $a^{\textrm{n}}$ | n >= 0}. How can I find a set for L1 that is NOT regular when there are no parameters or syntax on what the machine accepts or not? I'm thinking L1 = { $a^{\textrm{n}}$ | n = prime number }, but I'm not sure how to start proving it.
{ "source": [ "https://cs.stackexchange.com/questions/103844", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/99991/" ] }
105,065
Source alphabet: $\{a, b, c, d, e, f\}$ Code alphabet: $\{0, 1\}$ $a\colon 0101$ $b\colon 1001$ $c\colon 10$ $d\colon 000$ $e\colon 11$ $f\colon 100$ I thought that for a code to be uniquely decodable, it had to be prefix-free. But in this code, the codeword $c$ is the prefix of codeword $f$ for example, so it is not prefix-free. However my textbook tells me that its reverse is prefix free (I don't understand this), and therefore it is uniquely decodable. Can someone explain what this means, or why it is uniquely decodable? I know it satisfies Kraft's inequality, but that is only a necessary condition, not a sufficient condition.
Your code has the property that if you reverse all codewords, then you get a prefix code. This implies that your code is uniquely decodable. Indeed, consider any code $C = x_1,\ldots,x_n$ whose reverse $C^R := x_1^R,\ldots,x_n^R$ is uniquely decodable. I claim that $C$ is also uniquely decodable. This is because $$ w = x_{i_1} \ldots x_{i_m} \text{ if and only if } w^R = x_{i_m}^R \ldots x_{i_1}^R. $$ In words, decompositions of $w$ into codewords of $C$ are in one-to-one correspondence with decompositions of $w^R$ into codewords of $C^R$ . Since the latter are unique, so are the former. Since prefix codes are uniquely decodable, it follows that the reverse of a prefix code is also uniquely decodable. This is the case in your example. The McMillan inequality states that if $C$ is uniquely decodable then $$ \sum_{i=1}^n 2^{-|x_i|} \leq 1. $$ In other words, a uniquely decodable code satisfies Kraft's inequality. Therefore if all you're interested in is minimizing the expected codeword length, there is no reason to look beyond prefix codes. Sam Roweis gives in his slides a nice example of a uniquely decodable code which is neither a prefix code nor the reverse of a prefix code: $$ 0,01,110. $$ In order to show that this code is uniquely decodable, it suffices to show how to decode the first codeword of a word. If the word starts with a $1$ , then the first codeword is $110$ . If it is of the form $01^*$ , then it must be either $0$ or $01$ . Otherwise, there must be a prefix of the form $01^*0$ . We now distinguish several cases: $$ \begin{array}{c|cccc} \text{prefix} & 00 & 010 & 0110 & 01110 \\\hline \text{codeword} & 0 & 01 & 0 & 01 \end{array} $$ Longer runs of $1$ cannot be decoded at all.
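A small Python sketch of the reversal argument (my own illustration): it checks Kraft's inequality for the code in the question and decodes a string by reversing it, prefix-decoding with the reversed codewords, and reversing the resulting symbol sequence.

```python
code = {"a": "0101", "b": "1001", "c": "10", "d": "000", "e": "11", "f": "100"}

# Kraft/McMillan sum: must be <= 1 for any uniquely decodable code.
kraft = sum(2 ** -len(w) for w in code.values())
print("Kraft sum:", kraft)          # 2*2^-4 + 2^-2 + 2*2^-3 + 2^-2 = 0.875

# The reversed codewords form a prefix code, so they decode greedily left to right.
rev_code = {sym: w[::-1] for sym, w in code.items()}

def decode(bits):
    """Decode bits encoded with `code` by prefix-decoding the reversed string."""
    bits = bits[::-1]
    out = []
    while bits:
        for sym, w in rev_code.items():
            if bits.startswith(w):      # at most one codeword matches: prefix code
                out.append(sym)
                bits = bits[len(w):]
                break
        else:
            raise ValueError("not decodable")
    return "".join(reversed(out))

msg = "fade"
encoded = "".join(code[s] for s in msg)
print(encoded, "->", decode(encoded))   # prints the original message back
```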
{ "source": [ "https://cs.stackexchange.com/questions/105065", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/101177/" ] }
105,398
How would the ALU in a microprocessor differentiate between a signed number, -7, that is denoted by 1111, and an unsigned number, 15, also denoted by 1111?
Short version: it doesn't know. There's no way to tell. If 1111 represents -7, then you have a sign-magnitude representation , where the first bit is the sign and the rest of the bits are the magnitude. In this case, arithmetic is somewhat complicated, since an unsigned add and a signed add use different logic. So you'd probably have a SADD and a UADD opcode, and if you choose the wrong one you get nonsensical results. More often, though, 1111 represents -1, in what's called a two's-complement representation . In this case, the ALU simply doesn't care if the numbers are signed or unsigned! For example, let's take the operation of 1110 + 0001 . In signed arithmetic, this means "-2 + 1", and the result should be -1 ( 1111 ). In unsigned arithmetic, this means "14 + 1", and the result should be 15 ( 1111 ). So the ALU doesn't know whether you want a signed or an unsigned result, and it doesn't care. It just does the addition as if it were unsigned, and if you want to treat that as a signed integer afterward, that's up to you. EDIT: As Ruslan and Daniel Schepler quite rightly point out in the comments, some operands still need separate signed and unsigned versions, even on a two's-complement machine. Addition, subtraction, multiplication, equality, and such all work fine without knowing if the numbers are signed or not. But division and any greater-than/less-than comparisons have to have separate versions. EDIT EDIT: There are some other representations too, like one's-complement , but these are basically never used any more so you shouldn't have to worry about them.
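Here is a small Python sketch (not anything a real ALU runs, just an illustration of the two's-complement point): 4-bit addition produces one bit pattern, and only the interpretation of that pattern as signed or unsigned differs.

```python
def to_unsigned(bits4):          # interpret 4 bits as 0..15
    return bits4 & 0xF

def to_signed(bits4):            # interpret the same 4 bits as -8..7
    u = bits4 & 0xF
    return u - 16 if u >= 8 else u

def add4(x, y):
    """Add two 4-bit patterns the way an ALU would: wrap around at 4 bits."""
    return (x + y) & 0xF

a, b = 0b1110, 0b0001            # the example from the answer
s = add4(a, b)

print(f"bit pattern : {s:04b}")                                                 # 1111
print(f"as unsigned : {to_unsigned(a)} + {to_unsigned(b)} = {to_unsigned(s)}")  # 14 + 1 = 15
print(f"as signed   : {to_signed(a)} + {to_signed(b)} = {to_signed(s)}")        # -2 + 1 = -1
```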
{ "source": [ "https://cs.stackexchange.com/questions/105398", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/100347/" ] }
105,405
I am attempting to make a regular grammar over the alphabet {a, b, c} where there is at most one c. So far, I have the regular expression ((a|b)*|c)(a|b)* but am unsure where to go from here; my previous attempts have ended up allowing multiple c's. The solution I have gives (N={s,t}, T={a,b,c}, s, R), s→є, s→as, s→bs, s→ct, t→at, t→bt, t→є; however, I do not see how this limits the number of c's generated to at most 1.
{ "source": [ "https://cs.stackexchange.com/questions/105405", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/101442/" ] }
105,414
I was looking at some problems about graphs, and I got stuck on this one. Namely, we are given a matrix of size $N \times N$ whose entry $(i,j)$ is the length of the shortest path in an undirected graph between nodes $i$ and $j$ . We don't have the original graph; we only have the all-pairs shortest paths matrix. We need to find the minimal number $x$ such that there is a way to build a graph whose all-pairs shortest path matrix is the same as the given one and whose total edge weight is exactly $x$ . Note that for some all-pairs shortest path matrices there won't be a way to build such a graph, so we should also check whether any valid solution exists. For example, for the matrix below we should output 6, by adding edge (2, 3) with cost 2 and edge (1, 3) with cost 4: 0 6 4 6 0 2 4 2 0 I noticed that the cheapest edges will always exist, so I sorted all the numbers, then I tried using some data structure to check whether the cheaper edges already cover the path from node i to node j, or whether we should also include a new edge in the path. However, my approach doesn't give good results. Please share some advice on which direction the right solution should take.
{ "source": [ "https://cs.stackexchange.com/questions/105414", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/75308/" ] }
105,618
Let us say there is a program such that if you give it a partially filled Sudoku of any size, it gives you the corresponding completed Sudoku. Can you treat this program as a black box and use it to solve TSP? I mean, is there a way to represent a TSP instance as a partially filled Sudoku, so that if I give you the answer to that Sudoku, you can tell me the solution of the TSP in polynomial time? If yes, how do you represent TSP as a partially filled Sudoku and interpret the corresponding filled Sudoku to get the result?
For 9x9 Sudoku, no. It is finite so can be solved in $O(1)$ time. But if you had a solver for $n^2 \times n^2$ Sudoku, that worked for all $n$ and all possible partial boards, and ran in polynomial time, then yes, that could be used to solve TSP in polynomial time, as completing a $n^2 \times n^2$ Sudoku is NP-complete. The proof of NP-completeness works by reducing from some NP-complete problem R to Sudoku; then because R is NP-complete, you can reduce from TSP to R (that follows from the definition of NP-completeness); and chaining those reductions gives you a way to use the Sudoku solver to solve TSP.
{ "source": [ "https://cs.stackexchange.com/questions/105618", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/-1/" ] }
106,018
Say I need to simulate the following discrete distribution: $$ P(X = k) = \begin{cases} \frac{1}{2^N}, & \text{if $k = 1$} \\ 1 - \frac{1}{2^N}, & \text{if $k = 0$} \end{cases} $$ The most obvious way is to draw $N$ random bits and check if all of them equal $0$ (or $1$ ). However, information theory says $$ \begin{align} S & = - \sum_{i} P_i \log{P_i} \\ & = - \frac{1}{2^N} \log{\frac{1}{2^N}} - \left(1 - \frac{1}{2^N}\right) \log{\left(1 - \frac{1}{2^N}\right)} \\ & = \frac{1}{2^N} \log{2^N} + \left(1 - \frac{1}{2^N}\right) \log{\frac{2^N}{2^N - 1}} \\ & \rightarrow 0 \end{align} $$ So the minimum number of random bits required actually decreases as $N$ grows large. How is this possible? Please assume that we are running on a computer where random bits are your only source of randomness, so you can't just toss a biased coin.
Wow, great question! Let me try to explain the resolution. It'll take three distinct steps. The first thing to note is that the entropy is focused more on the average number of bits needed per draw, not the maximum number of bits needed. With your sampling procedure, the maximum number of random bits needed per draw is $N$ bits, but the average number of bits needed is 2 bits (the average of a geometric distribution with $p=1/2$ ) -- this is because there is a $1/2$ probability that you only need 1 bit (if the first bit turns out to be 1), a $1/4$ probability that you only need 2 bits (if the first two bits turn out to be 01), a $1/8$ probability that you only need 3 bits (if the first three bits turn out to be 001), and so on. The second thing to note is that the entropy doesn't really capture the average number of bits needed for a single draw. Instead, the entropy captures the amortized number of bits needed to sample $m$ i.i.d. draws from this distribution. Suppose we need $f(m)$ bits to sample $m$ draws; then the entropy is the limit of $f(m)/m$ as $m \to \infty$ . The third thing to note is that, with this distribution, you can sample $m$ i.i.d. draws with fewer bits than needed to repeatedly sample one draw. Suppose you naively decided to draw one sample (takes 2 random bits on average), then draw another sample (using 2 more random bits on average), and so on, until you've repeated this $m$ times. That would require about $2m$ random bits on average. But it turns out there's a way to sample from $m$ draws using fewer than $2m$ bits. It's hard to believe, but it's true! Let me give you the intuition. Suppose you wrote down the result of sampling $m$ draws, where $m$ is really large. Then the result could be specified as a $m$ -bit string. This $m$ -bit string will be mostly 0's, with a few 1's in it: in particular, on average it will have about $m/2^N$ 1's (could be more or less than that, but if $m$ is sufficiently large, usually the number will be close to that). The length of the gaps between the 1's are random, but will typically be somewhere vaguely in the vicinity of $2^N$ (could easily be half that or twice that or even more, but of that order of magnitude). Of course, instead of writing down the entire $m$ -bit string, we could write it down more succinctly by writing down a list of the lengths of the gaps -- that carries all the same information, in a more compressed format. How much more succinct? Well, we'll usually need about $N$ bits to represent the length of each gap; and there will be about $m/2^N$ gaps; so we'll need in total about $mN/2^N$ bits (could be a bit more, could be a bit less, but if $m$ is sufficiently large, it'll usually be close to that). That's a lot shorter than a $m$ -bit string. And if there's a way to write down the string this succinctly, perhaps it won't be too surprising if that means there's a way to generate the string with a number of random bits comparable to the length of the string. In particular, you randomly generate the length of each gap; this is sampling from a geometric distribution with $p=1/2^N$ , and that can be done with roughly $\sim N$ random bits on average (not $2^N$ ). You'll need about $m/2^N$ i.i.d. draws from this geometric distribution, so you'll need in total roughly $\sim Nm/2^N$ random bits. (It could be a small constant factor larger, but not too much larger.) And, notice is that this is much smaller than $2m$ bits. So, we can sample $m$ i.i.d. 
draws from your distribution, using just $f(m) \sim Nm/2^N$ random bits (roughly). Recall that the entropy is $\lim_{m \to \infty} f(m)/m$ . So this means that you should expect the entropy to be (roughly) $N/2^N$ . That's off by a little bit, because the above calculation was sketchy and crude -- but hopefully it gives you some intuition for why the entropy is what it is, and why everything is consistent and reasonable.
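A small Python simulation (my own sketch, not part of the original answer) makes the gap concrete: with early stopping, the naive per-draw scheme uses about 2 random bits per sample, while the entropy of the distribution is only a tiny fraction of a bit once $N$ is moderately large.

```python
import random
from math import log2

def naive_sample(N, rng):
    """Draw X ~ Bernoulli(1/2^N) by reading bits until a 1 appears
    (then X = 0) or N zeros have been read (then X = 1). Returns (X, bits used)."""
    for used in range(1, N + 1):
        if rng.getrandbits(1) == 1:
            return 0, used
    return 1, N

def entropy(p):
    return -p * log2(p) - (1 - p) * log2(1 - p)

N, trials = 20, 200_000
rng = random.Random(0)
total_bits = sum(naive_sample(N, rng)[1] for _ in range(trials))

print(f"average bits per draw (naive): {total_bits / trials:.3f}")   # close to 2
print(f"entropy of the distribution  : {entropy(2**-N):.6f} bits")   # roughly N/2^N
```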
{ "source": [ "https://cs.stackexchange.com/questions/106018", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/46230/" ] }
106,774
Irrational numbers like $\pi$ , $e$ and $\sqrt{2}$ have a unique and non-repeating sequence after the decimal point. If we extract the $n$ -th digit from each of these numbers (where $n$ is the number of times the method is called) and concatenate the digits into a number, should we not get a perfect random number generator? For example, if we're using $\sqrt{2}$ , $e$ and $\pi$ , the first number is 123, the second one is 471, the next one is 184, and so on.
For any reasonable definition of perfect, the mechanism you describe is not a perfect random number generator. Non-repeating isn't enough. The decimal number $0.101001000100001\dots$ is non-repeating but it's a terrible generator of random digits, since the answer is "always" zero, occasionally one, and never anything else. We don't actually know if every digit occurs equally often in the decimal expansion of $\pi$ or $\mathrm{e}$ (though we suspect they do). In many situations, we require random numbers to be unpredictable (indeed, if you asked a random person what "random" means, they'd probably say something about unpredictability). The digits of well-known constants are totally predictable. We usually want to generate random numbers reasonably quickly, but generating successive digits of mathematical constants tends to be quite expensive. It is, however, true that the digits of $\pi$ and $\mathrm{e}$ look statistically random, in the sense that every possible sequence of digits seems to occur about as often as it should. So, for example, each digit does occur very close to one time in ten; each two-digit sequence very close to one in a hundred, and so on.
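As a quick empirical illustration of the "statistically random-looking" point, here is a Python sketch that counts digit frequencies in the first 1000 decimal digits of $\pi$ ; it assumes the third-party mpmath package is available to supply the digits, and the exact counts are only indicative.

```python
from collections import Counter
from mpmath import mp

mp.dps = 1010                                   # ask for a bit more precision than we need
pi_digits = str(mp.pi).replace("3.", "", 1)[:1000]   # first 1000 digits after the point

counts = Counter(pi_digits)
for d in "0123456789":
    print(d, counts[d])   # each count is close to 100, i.e. about 1 in 10
# Looking statistically uniform like this does not make the sequence unpredictable.
```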
{ "source": [ "https://cs.stackexchange.com/questions/106774", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/102757/" ] }
107,439
I'm a bit confused on what exactly the meaning of a 'key' is in computer science. I understand key-value pairs, primary keys, etc... But I can't find a definition of what the term 'key' means by itself. As far as I can tell it just means a piece of data. In CLRS, data associated with tree nodes are referred to as 'keys'. Data to search a hash table is called a 'key'. Is this what a 'key' is?
In the most general sense, a key is a piece of information required to retrieve some data. However, this meaning plays out differently depending on exactly what situation you're dealing with. In the contexts you mention, a key is a unique identifier for the complete data used to retrieve it from some location in the structure. Each key is associated with only one item, so it can be used to find a particular set of data. The data structure will usually be organized in such a way that finding the key is much more efficient than a linear search through all of the data. Sometimes the key is actually part of the data and stored along with it (like primary keys in the database); other times, it is segregated from the data itself (like in a hash map). The data structure will also often perform extra processing on the key (and only the key) to support its efficient searching algorithm (such as in a hash map, the key is converted into a hash code, or a database will index the primary keys using a B-tree). In cryptography, a key is used in a sense more akin to physical keys used on locks. They're pieces of data required to obtain the original from the encrypted data (to "unlock" the data, so to speak).
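A tiny Python sketch of both points in the data-structure context (illustrative names only): the key uniquely identifies the record, and a hash map does extra processing on the key alone (hashing) to locate the value quickly.

```python
# Key -> value: the key "alice" uniquely identifies one record.
students = {
    "alice": {"name": "Alice", "grade": 91},
    "bob":   {"name": "Bob",   "grade": 84},
}

key = "alice"
print(students[key])          # retrieval by key, not by scanning all records

# Under the hood, a hash map processes only the key to pick a bucket:
buckets = 8
print(hash(key) % buckets)    # the bucket index this key would land in
```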
{ "source": [ "https://cs.stackexchange.com/questions/107439", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/103368/" ] }
108,496
I'm trying to argue that P is not equal to NP using hierarchy theorems. This is my argument, but when I showed it to our teacher, he said it is problematic, and I can't find a compelling reason to accept that. We start off by assuming that $P=NP$ . Then it follows that $\mathit{SAT} \in P$ , and hence that $\mathit{SAT} \in TIME(n^k)$ . As it stands, we are able to reduce every language in $NP$ to $\mathit{SAT}$ . Therefore, $NP \subseteq TIME(n^k)$ . On the other hand, the time hierarchy theorem states that there should be a language $A \in TIME(n^{k+1})$ that's not in $TIME(n^k)$ . This would lead us to conclude that $A$ is in $P$ but not in $NP$ , which contradicts our first assumption. So, we come to the conclusion that $P \neq NP$ . Is there something wrong with my proof?
Then it yields that $SAT \in P$ which itself then follows that $SAT \in TIME(n^k)$ . Sure. As stands, we are able to do reduce every language in $NP$ to $SAT$ . Therefore, $NP \subseteq TIME(n^k)$ . No. Polynomial time reductions aren't free. We can say it takes $O(n^{r(L)})$ time to reduce language $L$ to $SAT$ , where $r(L)$ is the exponent in the polynomial time reduction used. This is where your argument falls apart. There is no finite $k$ such that for all $L \in NP$ we have $r(L) < k$ . At least this does not follow from $P = NP$ and would be a much stronger statement. And this stronger statement does indeed conflict with the time hierarchy theorem, which tells us that $P$ can not collapse into $TIME(n^k)$ , let alone all of $NP$ .
{ "source": [ "https://cs.stackexchange.com/questions/108496", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/51893/" ] }
108,983
Definitions of Turing machines are always explicit about the blank symbol not being part of the input alphabet. I wonder what goes wrong when you would make it part of the input alphabet, because effectively the blank symbol already seems to be part of the input. To explain that 'seems' in the last sentence, consider the following. In the default setup, an infinite number of blank symbols appear on the right of the input. When the tape head moves over the first blank symbol, computation can just continue, as it doesn't need to be an accept or reject state. Now suppose the computation would subsequently write symbols from the input alphabet to the right of that first blank symbol, then return to the leftmost position while also returning to the start state. It would then 'start over' with a different tape. Effectively, it now starts with a different input, where there are input symbols to the right of the blank that weren't there before. The input seems to effectively include the blank symbol. The further behavior of the machine could now also be different: after encountering the blank again, it will now encounter different symbols to the right. Supposing this scenario is indeed possible, why wouldn't you consider the blank symbol part of the input alphabet and why wouldn't you allow including it as part of the 'initial' input? Perhaps it is just a way to define the input such that it isn't always infinite?
The main reason is that it allows the machine to detect the end of its input: it's (the character before) the first blank. If you allowed blanks in the input, the machine could never know whether it might find more input by scanning farther to the right. Of course, you could solve that by having a special "end of input" character but then you have to insist that that can't appear in the input, so you've just shifted the problem one level deeper. It also makes the initial conditions much easier to specify: the input is the non-blank section of the initial tape, which must be finite and contiguous. And if you want a blank character to be a part of the input alphabet, you can always add an extra character (call it "space" or something) and have the machine behave however you want when it sees it.
{ "source": [ "https://cs.stackexchange.com/questions/108983", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/104883/" ] }
109,150
Python quite famously uses indentation to syntactically define blocks of code. (See Compound statements in the Python Language Reference.) After years of using Python I'm still intrigued by and very fond of this syntax feature. But I wonder: apart from Python and its "predecessor"(*) language ABC, which other programming languages out there use indentation to define code blocks? "Code blocks" here means "multiple statements which in some way are treated as one component". I'm particularly interested in practical programming languages, but esoteric languages might be worth mentioning as well. (*): "Predecessor" is my choice of word, for lack of a better one. Guido van Rossum, the creator of Python, described the relationship between Python and ABC regarding indentation in an interview like this: "The choice of indentation for grouping was not a novel concept in Python; I inherited this from ABC."
Wikipedia has an extensive list of languages that use the off-side rule [1]:

ABC
Boo
BuddyScript
Cobra
CoffeeScript
Converge
Curry
Elixir (, do: blocks)
Elm
F# (if #light "off" is not specified)
Genie
Haskell (only for where, let, do, or case ... of clauses when braces are omitted)
Inform 7
ISWIM, the abstract language that introduced the rule
LiveScript
Miranda
Nemerle
Nim
occam
PROMAL
Python
Scheme, when using e.g. SRFI 119
Spin
XL

[1]: I've never heard this term before myself.
{ "source": [ "https://cs.stackexchange.com/questions/109150", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/105074/" ] }
109,326
I read this sentence in a book: In VLIW architecture, the compiler/and or assembly writer chooses instructions that can be executed in parallel. What is the difference between assembly writer and compiler? Would an assembly writer also mean the same as assembler?
The "assembly writer" in that book is a human software developer who writes code in assembler language.
{ "source": [ "https://cs.stackexchange.com/questions/109326", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/51700/" ] }
109,333
I have the following statement. I would say it's correct, as it's either equal to or higher than $\Omega(\log^{10}(n))$ . My reasoning: I know $\log(2^n) = n$ . By that I would guess the same goes for $\log(n^{10})$ , regardless of whether the number is higher, so it should also be $n$ . With this knowledge we already know it's higher than our $\Omega(\log^{10}(n))$ . We also know $\log(n)^2 = \log(n)$ ; again it will be higher than or equal to our omega. Is this correct? $$(3 \log^2 n + 55 \log(n^{10}) + 8 \log n) \cdot \log n = \Omega(\log^{10} n)$$
The "assembly writer" in that book is a human software developer who writes code in assembler language.
{ "source": [ "https://cs.stackexchange.com/questions/109333", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/105300/" ] }
109,607
Since there is only a constant between bases of logarithms, isn't it just alright to write $f(n) = \Omega(\log{n})$ , as opposed to $\Omega(\log_2{n})$ , or whatever the base might be?
It depends where the logarithm is. If it is just a factor, then it doesn't make a difference, because big-O or $\theta$ allows you to multiply by any constant. If you take $O(2^{\log n})$ then the base is important. In base 2 you would have just $O(n)$ , in base 10 it's about $O(n^{0.3010})$ .
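A quick numeric check of both points (my own sketch): the ratio of logarithms in two bases is a constant independent of n, while putting the logarithm in an exponent makes the base matter a lot.

```python
import math

for n in (10, 1_000, 1_000_000):
    print(math.log10(n) / math.log2(n))   # always about 0.30103 = 1/log2(10), independent of n

n = 1_000_000
print(2 ** math.log2(n))                  # 1,000,000: O(n) behaviour
print(2 ** math.log10(n))                 # 64: roughly n**0.3010
```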
{ "source": [ "https://cs.stackexchange.com/questions/109607", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/105487/" ] }
109,611
Assume that I need to determine the reliability of a service. The service includes component a (software reliability=0.95) and component b (software reliability=0.98). I have 2 computers: Computer A (hardware reliability=0.99) and Computer B (hardware reliability=0.99). I have two following cases: Case 1: Deploy both a and b on computer A . For this case, the service reliability is around 0.923 . Case 2: Deploy a on computer A , and b on computer B . For this case, the service reliability is around 0.912 I really wonder why the service reliability in case 2 is lower than in case 1. The thing is A and B have the same hardware reliability. Can someone clarify that?
It depends where the logarithm is. If it is just a factor, then it doesn't make a difference, because big-O or $\theta$ allows you to multiply by any constant. If you take $O(2^{\log n})$ then the base is important. In base 2 you would have just $O(n)$ , in base 10 it's about $O(n^{0.3010})$ .
{ "source": [ "https://cs.stackexchange.com/questions/109611", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/105571/" ] }
109,618
What is "Fibonacci" about the Fibonacci LFSR ? If I read right, Fibonacci LFSR means that it depends on its two last states, but from the example in Wikipedia it doesn't look like two states are taken in consideration (ie. XORing the taps in the current state, shifting and inputing the left bit..). What am I missing?
It depends where the logarithm is. If it is just a factor, then it doesn't make a difference, because big-O or $\theta$ allows you to multiply by any constant. If you take $O(2^{\log n})$ then the base is important. In base 2 you would have just $O(n)$ , in base 10 it's about $O(n^{0.3010})$ .
{ "source": [ "https://cs.stackexchange.com/questions/109618", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/87793/" ] }
110,375
I've been reading for a few weeks about the Lambda Calculus, but I have not yet seen anything that is materially distinct from existing mathematical functions, and I want to know whether it is just a matter of notation, or whether there are any new properties or rules created by the lambda calculus axioms that don't apply to every mathematical function. So, for example, I've read that:

"There can be anonymous functions": Lambda functions aren't anonymous, they're just all called lambda. It is permissible in mathematical notation to use the same variable for different functions if the name is not important. For example, the two functions in a Galois Connection are often both called *.

"Functions can accept functions as inputs": Not new; you can do this with ordinary functions.

"Functions are black boxes": Just inputs and outputs are also valid descriptions of mathematical functions...

This may seem like a discussion or opinionated question but I believe that there should be a "correct" answer to this question. I want to know whether lambda calculus is just a notational, or syntactic, convention for working with mathematical functions, or whether there are any substantial or semantic differences between lambdas and ordinary functions.
Ironically, the title is on point but not in the way you seem to mean it which is "is the lambda calculus just a notational convention" which is not accurate. Lambda terms are not functions 1 . They are pieces of syntax, i.e. collections of symbols on a page. We have rules for manipulating these collections of symbols, most significantly beta reduction. You can have multiple distinct lambda terms that correspond to the same function. 2 I'll address your points directly. First, lambda is not a name that is being reused. Not only would that be extremely confusing, but we don't write $\lambda(x)$ (or $(\lambda\ x)$ ) which is what we'd do if $\lambda$ was a name for a function, just like we write $f(x)$ . In $f(x)$ we could replace $f$ (if it were defined by a lambda term) with the lambda term producing something like $(\lambda y.y)(x)$ meaning $(\lambda y.y)$ is an expression that can represent a function, not a declaration declaring a function (named $\lambda$ or anything else). At any rate, when we overload terminology/notation, it is (one hopes) done in a manner where it can be disambiguated via context, that certainly can't be the case for lambda terms. Your next point is fine but somewhat irrelevant. This is not a competition where there's Team Lambda Terms and Team Functions, and only one can win. A major application of lambda terms is studying and understanding certain kinds of functions. A polynomial is not a function though we often sloppily identify them. Studying polynomials doesn't mean one thinks that all functions should be polynomials, nor is it the case that polynomials have to "do" something "new" to be worth studying. Set theoretic functions are not black boxes, though they are entirely defined by their input-output relation. (They literally are their input-output relation.) Lambda terms are also not black boxes and they are not defined by their input-output relation. As I've mentioned before, you can have distinct lambda terms that produce the same input-output relation. This also underscores the fact that lambda terms can't be functions, though they can induce functions. 2 In fact, the analogy between polynomials and lambda terms is very close, and I suspect you may not appreciate the distinction between a polynomial and the function it represents, so I'll elaborate a bit. 3 Typically when polynomials are introduced, usually with real coefficients, they are treated as real functions of a particular type. Now consider the theory of linear-feedback shift registers (LFSRs). It is largely the theory of (uni-variate) polynomials over $\mathbb F_2$ , but if we think of that as a function $\mathbb F_2\to\mathbb F_2$ , then there are at most $4$ such functions. There are, however, an infinite number of polynomials over $\mathbb F_2$ . 4 One way to see this is that we can interpret these polynomials as something other than $\mathbb F_2\to\mathbb F_2$ functions, indeed any $\mathbb F_2$ -algebra will do. For LFSRs, we commonly interpret the polynomials as operations on bitstreams, which, if we wanted could be represented as functions $\mathbf{2}^{\mathbb N}\to\mathbf{2}^{\mathbb N}$ , though the vast majority of such functions would not be in the image of the interpretation of an LFSR. This applies to lambda terms as well, we can interpret both of them as things other than functions. They are also both much more tractable objects to work with than the typically uncountably infinite sets of functions. They are both much more computational than arbitrary functions. 
I can write a program to manipulate polynomials (with coefficients that are computably representable at least) and lambda terms. Indeed, untyped lambda terms are one of the original models of computable functions. This more symbolic/syntactic, calculational/computational perspective is usually more emphasized, especially for the untyped lambda calculus, than the more semantic interpretations of the lambda calculus. Typed lambda terms are far more manageable things and can usually (but not always) easily be interpreted as set theoretic functions, but can also usually be interpreted into an even broader class of things besides functions than the untyped lambda calculus. They also have a rich syntactic theory of their own and a very deep connection to logic . 1 It's possible the issue might go the other way. Maybe you have a misapprehension about what a function is. 2 This is actually not so straightforward. For the untyped lambda calculus, it doesn't really make sense to naively interpret arbitrary lambda terms as set-theoretic functions . You can start to see this when you try to articulate what the domain of the interpretation should be. If I interpret a lambda term as an element of a set $D$ , I also want to be able to interpret it as a function on $D$ and into $D$ since I want to interpret application as function application. You end up with $D^D\subseteq D$ (or a weakening of this) which is true only of the singleton set. What we need for the untyped lambda calculus is a reflexive object , and for the category of sets there are no non-trivial reflexive objects. The story is quite a bit different for typed lambda terms, but can still be non-trivial. 3 If you are clear on this distinction, then the analogy should be pretty informative. 4 This issue doesn't occur with fields of characteristic 0, like the complex numbers, reals, rationals, or integers, so the distinction isn't as sharp, though it still exists.
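A loose illustration of the term-versus-function distinction, using Python lambdas as a stand-in (my own sketch, not part of the original answer): the two definitions below are different pieces of syntax, one beta-reduces to the other, and yet they induce the same input/output behaviour.

```python
id_a = lambda x: x                   # the term (lambda x. x)
id_b = lambda x: (lambda y: y)(x)    # a different term that beta-reduces to the one above

print(all(id_a(v) == id_b(v) for v in [0, 1, "abc", (1, 2)]))  # True: same behaviour on these inputs
print(id_a is id_b)                                            # False: still two distinct objects/terms
```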
{ "source": [ "https://cs.stackexchange.com/questions/110375", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/106242/" ] }
110,402
I understand that if there exist 2 or more left or right derivation trees, then the grammar is ambiguous, but I am unable to understand why it is so bad that everyone wants to get rid of it.
Consider the following grammar for arithmetic expressions: $$ X \to X + X \mid X - X \mid X * X \mid X / X \mid \texttt{var} \mid \texttt{const} $$ Consider the following expression: $$ a - b - c $$ What is its value? Here are two possible parse trees: According to the one on the left, we should interpret $a-b-c$ as $(a-b)-c$ , which is the usual interpretation. According to the one on the right, we should interpret it as $a-(b-c) = a-b+c$ , which is probably not what was intended. When compiling a program, we want the interpretation of the syntax to be unambiguous. The easiest way to enforce this is using an unambiguous grammar. If the grammar is ambiguous, we can provide tie-breaking rules, like operator precedence and associativity. These rules can equivalently be expressed by making the grammar unambiguous in a particular way. Parse trees generated using syntax tree generator .
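To see concretely why the choice of parse tree matters (my own illustration, not part of the original answer), evaluate the two groupings of the same string with some sample values:

```python
a, b, c = 10, 4, 3
print((a - b) - c)   # 3: the left tree, the usual left-associative reading
print(a - (b - c))   # 9: the right tree; same string, different value
```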
{ "source": [ "https://cs.stackexchange.com/questions/110402", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/106281/" ] }
110,738
This question pretty much explains that they can, but does not show any examples of there being two different trees with the same pre-order traversal. It is also mentioned that the in-order traversal of two different trees can be the same though they are structurally different. Is there an example of this?
Tree examples:

A:           B:
‾‾           ‾‾
    1          1
   /          / \
  2          2   3
 /
3

This is an example that fits your scenario. Tree A's root has value 1 and a left child with value 2, which in turn has a left child with value 3. Tree B's root has value 1, with a left child of value 2 and a right child of value 3. In both cases the preorder traversal is 1 -> 2 -> 3.
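A quick way to check the claim (my own sketch, not from the original answer): build both trees and compare their preorder traversals.

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def preorder(t):
    return [] if t is None else [t.val] + preorder(t.left) + preorder(t.right)

tree_a = Node(1, left=Node(2, left=Node(3)))    # 1 -> left 2 -> left 3
tree_b = Node(1, left=Node(2), right=Node(3))   # 1 with children 2 and 3

print(preorder(tree_a))   # [1, 2, 3]
print(preorder(tree_b))   # [1, 2, 3]: same preorder, structurally different trees
```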
{ "source": [ "https://cs.stackexchange.com/questions/110738", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/78046/" ] }
110,744
Construct the CFG given the following language: $$\{a^i \; b^j \; c^k \;|\; i = j \; or \; j = k \}$$
Tree examples:

A:           B:
‾‾           ‾‾
    1          1
   /          / \
  2          2   3
 /
3

This is an example that fits your scenario. Tree A's root has value 1 and a left child with value 2, which in turn has a left child with value 3. Tree B's root has value 1, with a left child of value 2 and a right child of value 3. In both cases the preorder traversal is 1 -> 2 -> 3.
{ "source": [ "https://cs.stackexchange.com/questions/110744", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/94629/" ] }
110,772
Is the Simple Uniform Hashing Assumption (SUHA) sufficient to show that the worst-case time complexity of hash table lookups is O(1)? It says in the Wikipedia article that this assumption implies that the average length of a chain is $\alpha = n / m$ , but...

...this is true even without this assumption, right? If the distribution is [4, 0, 0, 0] the average length is still 1.

...this is a probabilistic statement, which is of little use when discussing worst-case complexity, right?

It seems to me like a different assumption would be needed. Something like: the difference between the largest and smallest bucket is bounded by a constant factor. Maybe this is implied by SUHA? If so, I don't see how.
Tree examples:

A:           B:
‾‾           ‾‾
    1          1
   /          / \
  2          2   3
 /
3

This is an example that fits your scenario. Tree A's root has value 1 and a left child with value 2, which in turn has a left child with value 3. Tree B's root has value 1, with a left child of value 2 and a right child of value 3. In both cases the preorder traversal is 1 -> 2 -> 3.
{ "source": [ "https://cs.stackexchange.com/questions/110772", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/21883/" ] }
110,923
I read about NPC and its relationship to PSPACE and I wish to know whether NPC problems can be deterministically solved using an algorithm with a worst-case polynomial space requirement, but potentially taking exponential time ( $2^{P(n)}$ where $P$ is a polynomial). Moreover, can it be generalised to EXPTIME in general? The reason I am asking this is that I wrote some programs to solve degenerate cases of an NPC problem, and they can consume very large amounts of RAM for hard instances, and I wonder if there is a better way. For reference see https://fc-solve.shlomifish.org/faq.html .
Generally speaking, the following is true for any algorithm:

Suppose $A$ is an algorithm that runs in $f(n)$ time. Then $A$ cannot take more than $f(n)$ space, since writing $f(n)$ bits requires $f(n)$ time.

Suppose $A$ is an algorithm that requires $f(n)$ space. Then in $2^{f(n)}$ time, $A$ can visit each of its different states, and therefore can gain nothing by running more than $2^{f(n)}$ time.

It follows that: $\mathbf{NP}$ $\subseteq \mathbf{PSPACE}$

This statement is a well-known part of the relations between the complexity classes (it is usually depicted in the standard class-inclusion diagram). The explanation is simple: a problem $Q$ $\in$ $\mathbf{NP}$ has a polynomial-length certificate $y$ . An algorithm that tests all possible certificates is an algorithm that decides $Q$ in time $\large 2^{n^{O(1)}}$ . Its space requirement is: the space for $y$ (polynomial in $n$ ), plus the space required to verify $y$ . Since $y$ is a polynomial certificate, it can be verified in polynomial time, hence cannot possibly require more than polynomial space. Since the sum of two polynomials is also a polynomial, $Q$ can be decided with polynomial space.

Example: Suppose $\varphi$ is an instance of 3-CNF on literals $x_1 \dots x_n$ , with $m$ clauses. An assignment $f$ is some function $f:\{x_1\dots x_n\} \rightarrow \{0,1\}$ . It holds that:

There are $2^n$ different assignments.

Given an assignment $f$ , it takes $O(m)$ time to calculate the value of $\varphi$ , therefore it cannot require more than $O(m)$ space.

So an algorithm $A$ that checks all possible assignments will use polynomial space, run in exponential time and decide 3-SAT. It follows that: 3-SAT $\in \mathbf{PSPACE}$ , and since 3-SAT is NP-Complete, $\mathbf{NP}$ $\subseteq \mathbf{PSPACE}$
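A sketch of the exponential-time, polynomial-space brute force described above (my own code, with each clause given as a list of signed literal indices; this is an illustration, not an efficient SAT solver):

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Literal +i means x_i, literal -i means NOT x_i; each clause is a list of literals."""
    for assignment in product([False, True], repeat=num_vars):   # 2^n candidates, one at a time
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause) for clause in clauses):
            return assignment    # only O(n + m) space is ever held at once
    return None

# (x1 OR NOT x2) AND (x2 OR x3)
print(brute_force_sat(3, [[1, -2], [2, 3]]))   # (False, False, True), a satisfying assignment
```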
{ "source": [ "https://cs.stackexchange.com/questions/110923", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/106701/" ] }
113,257
This is a piece of assembly code:

section .text
    global _start       ;must be declared for using gcc
_start:                 ;tell linker entry point
    mov edx, len        ;message length
    mov ecx, msg        ;message to write
    mov ebx, 1          ;file descriptor (stdout)
    mov eax, 4          ;system call number (sys_write)
    int 0x80            ;call kernel
    mov eax, 1          ;system call number (sys_exit)
    int 0x80            ;call kernel

section .data
    msg db 'Hello, world!',0xa    ;our dear string
    len equ $ - msg               ;length of our dear string

Given a specific computer system, is it possible to predict precisely the actual run time of a piece of assembly code?
I can only quote from the manual of a rather primitive CPU, a 68020 processor from around 1986: "Calculating the exact runtime of a sequence of instructions is difficult, even if you have precise knowledge of the processor implementation". Which we don't have. And compared to a modern processor, that CPU was primitive . I can't predict the runtime of that code, and neither can you. But you can't even define what "runtime" of a piece of code is, when a processor has massive caches, and massive out-of-order capabilities. A typical modern processor can have 200 instructions "in flight", that is in various stages of execution. So the time from trying to read the first instruction byte, to retiring the last instruction, can be quite long. But the actual delay to all other work that the processor needs doing may be (and typically is) a lot less. Of course doing two calls to the operating system makes this completely unpredictable. You don't know what "writing to stdout" actually does, so you can't predict the time. And you can't know the clock speed of the computer at the precise moment you run the code. It may be in some power saving mode, the computer may have reduced clock speed because it got hot, so even the same number of clock cycles can take different amounts of time. All in all: Totally unpredictable.
{ "source": [ "https://cs.stackexchange.com/questions/113257", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/106215/" ] }
114,676
In my class a student asked whether all finite automata could be drawn without crossing edges (it seems all my examples did). Of course the answer is negative, the obvious automaton for the language $\{\; x\in\{a,b\}^* \mid \#_a(x)+2\#_b(x) \equiv 0 \mod 5 \;\}$ has the structure of $K_5$ , the complete graph on five nodes. Yuval has shown a similar structure for a related language. My question is the following: how do we show that every finite state automaton for this language is non-planar? With Myhill-Nerode like characterizations it probably can be established that the structure of the language is present in the diagram, but how do we make this precise? And if that can be done, is there a characterization of "planar regular languages"?
It isn't true that every DFA for this language is non-planar: Here is a language that is truly non-planar: $$ \left\{ x \in \{\sigma_1,\ldots,\sigma_6\}^* \middle| \sum_{i=1}^6 i\#_{\sigma_i}(x) \equiv 0 \pmod 7 \right\}. $$ Take any planar FSA for this language. If we remove all unreachable states, we still get a planar graph. Each reachable state has six distinct outgoing edges, which contradicts the known fact that every planar graph has a vertex of degree at most five.
{ "source": [ "https://cs.stackexchange.com/questions/114676", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/4287/" ] }
114,685
The question is: there is an unbalanced binary tree with $n$ nodes. What is the time complexity to balance the tree? The solution I thought of involved recursion, where for the worst case I took a maximally unbalanced (path-like) tree and then tried to balance it using rotations. But I cannot come up with an expression which will give O(log(n)) time complexity. Can I get some help in solving this? I am stuck on how to approach this problem.
It isn't true that every DFA for this language is non-planar: Here is a language that is truly non-planar: $$ \left\{ x \in \{\sigma_1,\ldots,\sigma_6\}^* \middle| \sum_{i=1}^6 i\#_{\sigma_i}(x) \equiv 0 \pmod 7 \right\}. $$ Take any planar FSA for this language. If we remove all unreachable states, we still get a planar graph. Each reachable state has six distinct outgoing edges, which contradicts the known fact that every planar graph has a vertex of degree at most five.
{ "source": [ "https://cs.stackexchange.com/questions/114685", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/78331/" ] }
116,020
I am reading papers in machine learning and they say things like, "This computation took $x$ number of GPU years". What is a GPU year? How long is that?
That means, one year of computation time on a single GPU (or half a year on two GPUs, or a quarter of a year on four GPUs, etc.). If you are thinking of using this term in your own writing, I encourage you to also specify what type of GPU you are using. One-GPU year on a Tesla V100 GPU is a lot more computation than one-GPU year on a K520 GPU. The notion of "GPU-year" is close to meaningless if you don't specify what type of GPU was used.
{ "source": [ "https://cs.stackexchange.com/questions/116020", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/109354/" ] }
116,127
Surely any language with a finite longest word can be made regular by having an automaton with paths to 26 states for all letters and then having each of those states go to another 26 states, etc., with states going to a looping non-final state whenever there are no possible words to be made beginning with the letters you have already gone through. Then make every state that ends on a word final.
The English language is regular if you consider it as a set of single words. However, English is more than a set of words in a dictionary. English grammar is the non-regular part. Given a paragraph, there is no DFA deciding whether it is a well-written paragraph in the English language. Of course, it can say whether each word is an English word or not, but it can not judge whole paragraphs.
{ "source": [ "https://cs.stackexchange.com/questions/116127", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/111040/" ] }
116,408
Reading discussions of the recent quantum supremacy experiment by Google I noticed that a lot of time and effort (in the experiment itself, but also in the excellent blog posts by Scott Aaronson and others explaining the results) is spent on verifying that the quantum computer did indeed compute the thing we believe it to have computed. From a naive point of view this is completely understandable: the essence of any quantum supremacy experiment is that you have the quantum computer perform a task that is hard for a classical computer to achieve, so surely it would also be hard for the classical computer to verify that the quantum computer did complete the task we gave it, right? Well, no. About the first thing you learn when starting to read blogs or talk to people about computational complexity is that, counter-intuitive as it may seem, there exist problems that are hard to solve, but for which it is easy to verify the validity of a given solution: the so called NP problems. Thus it seems that Google could have saved themselves and others a lot of time by using one of these problems for their quantum supremacy experiment rather than the one they did. So my question is why didn't they? An answer for the special case of the NP problem factoring is given in this very nice answer to a different question: https://cs.stackexchange.com/a/116360/26301 . Paraphrasing: the regime where the quantum algorithm starts to out perform the best known classical algorithm starts at a point that requires more than the 53 qubits currently available. So my follow-up question is: does this answer for the special case extend to all NP-problems where quantum speedups are expected or is it specific to factoring? And in the first case: is there a fundamental reason related to the nature of NP that quantum-supremacy 'kicks in later' for NP problems than for sampling problems or is it just that for NP problems better classical algorithms are available due to their being more famous?
there exist problems that are hard to solve, but for which it is easy to verify the validity of a given solution: the so called NP problems. This statement is wrong. There are many NP problems which are easy to solve. "NP" simply means "easy to verify". It does not mean hard to solve. What you are probably thinking of is NP-complete problems, which are a subset of the NP problems for which we have very, very good evidence to think they are hard. However, quantum computers are not expected to be able to solve NP-complete problems significantly more "easily" than regular computers. Factoring is also thought to be hard, but the evidence for this is only "very good" and not "very, very good" (in other words: factoring is likely not NP-complete). Factoring is one of very few natural problems which falls in between being NP-complete and being easy. The list of problems that we know are easy to verify, easy to solve on a quantum computer but hard classically, is even shorter. In fact, I do not know of any problem other than factoring (and the very closely related discrete logarithm problem) with this property. Moreover, any easy-to-verify problem would likely have the same issue as factoring: $53$ qubits is not that many, and $2^{53}$ is huge, but just within reach of classical computing. $2^{53}$ is less than $10^{16}$ , and most classical computers can execute on the order of $10^9$ operations per second. We could run through all possibilities in about a third of a year on a single classical desktop computer. Quantum computers have very few applications which they're known to be good at, and are essentially useless for most hard NP problems.
{ "source": [ "https://cs.stackexchange.com/questions/116408", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/26301/" ] }
116,643
Given $n$ roots, $x_1, x_2, \dotsc, x_n$ , the corresponding monic polynomial is $$y = (x-x_1)(x-x_2)\dotsm(x-x_n) = \prod_{i}^n (x - x_i)$$ To get the coefficients, i.e., $y = \sum_{i}^n a_i x^i$ , a straightforward expansion requires $O \left(n^2\right)$ steps. Alternatively, if $x_1, x_2, \dotsc, x_n$ are distinct, the problem is equivalent to polynomial interpolation with $n$ points: $(x_1, 0), (x_2, 0), \dotsc, (x_n, 0)$ . The fast polynomial interpolation algorithm can be run in $O \left( n \log^2(n) \right)$ time. I want to ask whether there is any more efficient algorithm better than $O \left(n^2\right)$ ? Even if there are duplicated values among $\{x_i\}$ ? If it helps, we can assume that the polynomial is over some prime finite field, i.e., $x_i \in \mathbf{F}_q$ .
This can be done in $O(n \log^2 n)$ time, even if the $x_i$ have duplicates, via the following divide-and-conquer method. First compute the coefficients of the polynomial $f_0(x)=(x-x_1) \cdots (x-x_{n/2})$ (via a recursive call to this algorithm). Then compute the coefficients of the polynomial $f_1(x)=(x-x_{n/2+1})\cdots(x-x_n)$ . Next, compute the coefficients of $f(x)=f_0(x)f_1(x)$ using FFT-based polynomial multiplication. This yields an algorithm whose running time satisfies the recurrence $$T(n) = 2 T(n/2) + O(n \log n).$$ The solution to this recurrence is $T(n) = O(n \log^2 n)$ . This all works even if there are duplicates in the $x_i$ . (You might also be interested in Multi-point evaluations of a polynomial mod p .)
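A sketch of the recursion (my own code): it splits the roots, recurses on the halves, and multiplies the two halves' coefficient vectors. Note that numpy's polymul is quadratic-time convolution, used here only as a stand-in; to actually reach the stated O(n log² n) bound the multiplication step would be replaced by FFT-based polynomial multiplication.

```python
import numpy as np

def poly_from_roots(roots):
    """Coefficients of (x - r1)(x - r2)...(x - rn), highest degree first."""
    if len(roots) == 1:
        return np.array([1.0, -roots[0]])
    mid = len(roots) // 2
    left = poly_from_roots(roots[:mid])
    right = poly_from_roots(roots[mid:])
    return np.polymul(left, right)   # stand-in: swap for FFT-based multiplication to get O(n log n) merging

print(poly_from_roots([1, 2, 3]))    # [ 1. -6. 11. -6.] i.e. x^3 - 6x^2 + 11x - 6
```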
{ "source": [ "https://cs.stackexchange.com/questions/116643", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/111560/" ] }
117,577
Which of the functions among $2^{3^n}$ or $n!$ grows faster? I know that $n^n$ grows faster than $n!$ and $n!$ grows faster than $c^n$ where $c$ is a constant, but what is it in my case?
You can find the result by taking a $\log$ . Hence: $$\log(2^{3^n}) = 3^n$$ $$\log(n!) \leqslant \log(n^n) = n\log n$$ (In the latter equation, we have used the fact that $n! \leqslant n^n$ , as you note in the question.) Of course $3^n$ grows faster than $n \log n$ . As $\log$ is an increasing function, we can say $2^{3^{n}}$ grows faster than $n^n$ , and also $n!$ .
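A quick numerical sanity check of the comparison (my own sketch, comparing natural logarithms of both functions, which is enough to compare growth):

```python
import math

for n in (5, 10, 20):
    log_first = (3 ** n) * math.log(2)    # log(2^(3^n)) = 3^n * log 2
    log_second = math.lgamma(n + 1)       # log(n!) via the log-gamma function
    print(n, round(log_first, 1), round(log_second, 1))
# already at n = 10: log(2^(3^n)) is about 40930 while log(n!) is about 15
```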
{ "source": [ "https://cs.stackexchange.com/questions/117577", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/106281/" ] }
118,481
I've grown up with computers. While watching old computer TV programmes and documentaries and reading the news about constant issues with these modern systems -- everything from the sheer amount of change/bloat/costs to all the security and privacy issues -- one thing that really strikes me hard is: Why do normal people need computers that are so powerful and complicated? In decades past, we used simpler, less powerful computers to perform all kinds of tasks, without the issues we face today.

I'm not suggesting we swap out current computers for thin clients. I'm simply saying that current computers seem more powerful than necessary for the vast majority of tasks that any person employed by an average company would logically need. Even in the early 1990s, computers had advanced to the point where "all basic input/output tasks" were "solved".

If I were running a company, I would create a minimal computer terminal which runs a minimal OS, and which just boots up and displays a "browser-like" interface that talks over HTTPS to my "mainframe". I'd use a simple username/password system, with no password resets or two-factor auth, and once logged in, the employee would see only the sections that are relevant for them, coded by me. For example, a secretary would see a basic form where she can input appointments, list current ones, etc. A different kind of employee, whose job is just to deal with customer support, would only see a minimal list of current support tickets and only have the ability to respond to these in a manner which cannot be misunderstood or abused. I'd log every action so that I can later look up exactly who messed up or went rogue.

I notice that modern systems don't seem to work that way. Instead, we have full-fledged PCs with expensive, bloated, and insecure Microsoft or Linux software. We spend enormous amounts of time, effort, and money to educate employees on how to use it, maintain it, and deal with all the problems that inevitably arise from exposing the general public to such complicated systems.

Why is this? Why is the only choice between a complex Windows system, a fragile Linux, or some kind of ChromeOS thin client that exposes my data to Google? Why don't we have a privacy-respecting, minimal thin client OS that can't do anything but display basic HTML, basic CSS and connect over HTTPS, has no system storage or ability to change it, and is just something you hook up to a standard display and network cable and mouse and keyboard? I realize one still needs to administer the server/mainframe, but presumably this could be done by skilled professionals, rather than the general public. Can you help me understand why computing works this way today?
You are conflating a number of issues here.

Why does my software have all these features to begin with? Because other computers' software has those features, and network effects punish any software developer who doesn't follow the herd. Let's take an example from your question: Why does my web browser need to do anything other than basic HTML and CSS? Well, have you ever tried browsing the modern internet with all JavaScript disabled? It's functionally unusable. The problem is that, once JavaScript existed in a widely used browser (Netscape), people started using it in their webpages. That meant that other browsers had to add support for it to prevent their users from complaining that webpages are broken. And once more users had support for it in their browsers, more webpage authors started assuming that users had it. Round and round the positive feedback loop goes. Software developers have strong incentives for adding features to software, and strong disincentives for removing them or even changing them. It's taking 25 years to kill Flash Player, despite it being an incredibly complicated, bug-ridden, security nightmare black box that Adobe themselves no longer wants anything to do with. You know that all hell is going to break loose on December 31, 2020 when the plug finally gets pulled and people can no longer play their beloved Flash games from 2001.

Why do employees of a company need to have full PCs rather than dumb terminals that can only view intranet pages? Because intranet pages work great until an employee needs to do anything other than the specific tasks that the programmer has predetermined are that person's job. What happens when someone else (either inside or outside the company—let's call her "Alice") wants to send you a presentation to review and edit? Every person at every job role at my company has had to do that at some point—managers, engineers, administration, facilities, you name it. You can receive Alice's presentation file on your internal webmail, but you need some way to edit it. And that means your computer needs to be able to edit PowerPoint files, because that's what everyone uses. And PowerPoint files are ludicrously flexible in what sort of content can be contained in them. So we're back to the network effects problem. If Alice sends me a PowerPoint file and I can't edit it on my computer because it uses features that my software doesn't support, that's my problem as far as Alice is concerned. It's functionally impossible to foresee all the things that someone might need to do for most modern jobs, for the simple reason that if it were possible, that job would probably already be fully automated. And that's not even accounting for the fact that many companies allow their employees to use company computers for all sorts of things that are not strictly part of their job description, such as streaming music.

There's also a more general principle here. As you correctly point out, most PCs are "fundamentally overpowered and overcomplicated for the vast majority of tasks that any person employed by a normal, non-highly specialized IT company, or government entity, would logically need." The problem is that a computer that does all the tasks you need 95% of the time but is useless the remaining 5% of the time when you need to do something weird and specialized—is useless as a computer. I've heard it said that in software design, 10% of the features cover 90% of what any user needs, but that the remaining 10% of what any user needs is different for each user.
If you take the set of features used by each user and intersect all of these sets, the result is not sufficient for any user. "This interface would have a simple username/password system, with no demands to reset passwords or 'two-factor auth' or any of that nonsense" This gets its own subheading, because this is a totally separate class of issues from everything else you mention. Go over to Security.SE and read about why these things exist. There are very specific reasons for these security practices, that are totally orthogonal to any discussion about complexity of software.
{ "source": [ "https://cs.stackexchange.com/questions/118481", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/113265/" ] }
118,671
Today I revisited the topic of runtime complexity orders – big-O and big- $\Theta$ . I finally fully understood what the formal definition of big-O meant, but more importantly I realised that big-O orders can be considered sets. For example, $n^3 + 3n + 1$ can be considered an element of the set $O(n^3)$ . Moreover, $O(1)$ is a subset of $O(n)$ is a subset of $O(n^2)$ , etc. This got me thinking about big-Theta, which is also obviously a set. What I found confusing is how the big-Theta orders relate to each other, i.e. I believe that $\Theta(n^3)$ is not a subset of $\Theta(n^4)$ . I played around with Desmos (graph visualiser) for a while and I failed to find how each big-Theta order relates to other orders. A simple example (plotting some big-Theta example graphs) shows that although $f(n) = 2n$ is in $\Theta(n)$ and $g(n) = 2n^2$ is in $\Theta(n^2)$ , the graphs in $\Theta(n)$ are obviously not in $\Theta(n^2)$ . I kind of understand this visually, if I think about how different graphs and bounds might look, but I am having a hard time getting a solid explanation of why it is the way it is. So, my questions are:

Is what I wrote about big-O correct?

How do big-Theta sets relate to each other, if they relate at all?

Why do they relate to each other the way they do? The explanation is probably derivable from the formal definition of big-Theta (might be wrong here) and if someone could relate the explanation back to that definition it would be great.

Is this also the reason why big-O is better for analysing complexity? Because it is easier to compare it to other runtimes?
Is what I wrote about big-O correct? Yes. How do big-Theta sets relate to each other, if they relate at all? They are a partition of the space of functions. If $\Theta(f)\cap \Theta(g)\not = \emptyset$ , then $\Theta(f)=\Theta(g)$ . Moreover, $\Theta(f)\subseteq O(f)$ . Why do they relate to each other the way they do? The explanation is probably derivable from the formal definition of big-Theta (might be wrong here) and if someone could relate the explanation back to that definition it would be great. A function $f$ is in $\Theta(g)$ if and only if there are constants $c_1,c_2>0$ such that $c_1 g(n)\leq f(n) \leq c_2g(n)$ for all sufficiently large $n$ . Seeing that the above relation holds is a simple case of doing some substitutions: Suppose there is some $a\in \Theta(f), a\in \Theta(g)$ and $b\in \Theta(f)$ , then we know there exist constants such that (for sufficiently large $n$ ) $c_1 f(n)\leq a(n) \leq c_2f(n)$ $c_3 g(n)\leq a(n) \leq c_4g(n)$ $c_5 f(n)\leq b(n) \leq c_6f(n)$ then $c_5 c_3 g(n)/c_2 \leq c_5 a(n)/c_2 \leq c_5 f(n)\leq b(n)\leq c_6f(n)\leq c_6 a(n)/c_1\leq c_6c_4g(n)/c_1$ and thus $b\in \Theta(g)$ . Is this also the reason why big-O is better for analysing complexity? Because it is easier to compare it to other runtimes? It is not "better". You could say it is worse, because an algorithm being $\Theta(f)$ implies that it is $O(f)$ (but not vice-versa), so " $\Theta$ " is a strictly stronger statement than " $O$ ". The reason " $O$ " is more popular is because " $O$ " expresses an upper bound on the speed of an algorithm, i.e., it is a guarantee it will run in at most a given time. " $\Theta$ " also expresses the same upper bound, but, in addition, also expresses that this upper bound is the best possible upper bound for a given algorithm. E.g., an algorithm running in time $O(n^3)$ can actually turn out to also run in $O(n^2)$ , but an algorithm running in time $\Theta(n^3)$ can not also run in $\Theta(n^2)$ time. From a practical perspective, if we want to know whether an algorithm is fast enough for a practical purpose, knowing it runs in $O(n^2)$ time is good enough to determine whether it is fast enough. The information that it runs in $\Theta(n^2)$ time is not really important to the practical use. If we have determined that an $O(n^2)$ -time algorithm is fast enough for our application, then who cares if the algorithm that was claimed to be $O(n^2)$ is actually $\Theta(n)$ ? Obviously, if you are going to give an upper bound on the running time of an algorithm, you will endeavor to give the best possible upper bound (there is no sense in saying your algorithm is $O(n^3)$ when you could also say it is $O(n^2)$ ). For this reason, when people say " $O$ " they often implicitly mean " $\Theta$ ". The reason people write " $O$ " is because this is easier on a normal keyboard, is customary, conveys the most important information about the algorithm (the upper bound on the speed) and people often can't be bothered to formally prove the lower bound.
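As a concrete instance of the definition above (my own example): take $f(n) = 2n^2 + 3n$ and $g(n) = n^2$ . Then for all $n \geq 3$ , $$2n^2 \;\leq\; 2n^2 + 3n \;\leq\; 3n^2,$$ so the constants $c_1 = 2$ and $c_2 = 3$ witness that $f \in \Theta(n^2)$ . And of course $f \notin \Theta(n^3)$ , since no positive constant $c_1$ can satisfy $c_1 n^3 \leq 2n^2 + 3n$ for all sufficiently large $n$ .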
{ "source": [ "https://cs.stackexchange.com/questions/118671", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/113426/" ] }
118,676
I was once given this question in an interview: Suppose a piece of paper has 80 columns of letters in a fixed-size font, and now the paper is shredded vertically into 80 vertical pieces (so each piece shows a series of letters going down vertically), and there are 300 pages of such paper shredded in total. Assume they are just English words with no proper names (for people / places). Find a method to reassemble all the papers and give the O() complexity in time and space. I proposed a solution where we take one stripe, and go down the rows to match consecutively occurring letters against another stripe. So we would build up a dictionary of all English words; for example, the word Apple means that ap, pp, pl, and le are all valid pairs. And if we go down both stripes, we should expect most (or all) of them to match with each other if they were adjacent. But doing it this way, it looks like it would be (80 * 300)! = 24000! (factorial) steps before we can finish the task? I can only find a complicated paper about re-constructing non-fixed-size-font paper (and it seemed way too complicated for a 20 minute interview question), and another paper that is not public. Is there a good solution to this problem? (Actually, it seems that if it is not a fixed-size font, it is an easier problem? Because we can just look at the left and right edge: along the left edge, for example, if there is ink, we call it 1, and if there is no ink, we call it 0. Or we can just scan and collect data: from the top edge, go down 3.232cm of blank space, and then 0.012cm of ink, and then 0.027cm of blank space, all the way down to the end of the paper, and then we can create a signature. If we do the signature for all 80 x 300 stripes, now we have 48000 signatures. Now we can actually just match the signatures up to tell which stripe is adjacent to which stripe. So that would be a linear O(n) solution?)
Is what I wrote about big-O correct? Yes. How do big-Theta sets relate to each other, if they relate at all? They are a partition of the space of functions. If $\Theta(f)\cap \Theta(g)\not = \emptyset$ , then $\Theta(f)=\Theta(g)$ . Moreover, $\Theta(f)\subseteq O(f)$ . Why do they relate to each other the way they do? The explanation is probably derivable from the formal definition of big-Theta (might be wrong here) and if someone could relate the explanation back to that definition it would be great. A function $f$ is in $\Theta(g)$ if and only if there are constants $c_1,c_2>0$ such that $c_1 g(n)\leq f(n) \leq c_2g(n)$ for all sufficiently large $n$ . Seeing that the above relation holds is a simple case of doing some substitutions: Suppose there is some $a\in \Theta(f), a\in \Theta(g)$ and $b\in \Theta(f)$ , then we know there exist constants such that (for sufficiently large $n$ ) $c_1 f(n)\leq a(n) \leq c_2f(n)$ $c_3 g(n)\leq a(n) \leq c_4g(n)$ $c_5 f(n)\leq b(n) \leq c_6f(n)$ then $c_5 c_3 g(n)/c_2 \leq c_5 a(n)/c_2 \leq c_5 f(n)\leq b(n)\leq c_6f(n)\leq c_6 a(n)/c_1\leq c_6c_4g(n)/c_1$ and thus $b\in \Theta(g)$ . Is this also the reason why big-O is better for analysing complexity? Because it is easier to compare it to other runtimes? It is not "better". You could say it is worse, because an algorithm being $\Theta(f)$ implies that it is $O(f)$ (but not vice-versa), so " $\Theta$ " is a strictly stronger statement than " $O$ ". The reason " $O$ " is more popular is because " $O$ " expresses an upper bound on the speed of an algorithm, i.e., it is a guarantee it will run in at most a given time. " $\Theta$ " also expresses the same upper bound, but, in addition, also expresses that this upper bound is the best possible upper bound for a given algorithm. E.g., an algorithm running in time $O(n^3)$ can actually turn out to also run in $O(n^2)$ , but an algorithm running in time $\Theta(n^3)$ can not also run in $\Theta(n^2)$ time. From a practical perspective, if we want to know whether an algorithm is fast enough for a practical purpose, knowing it runs in $O(n^2)$ time is good enough to determine whether it is fast enough. The information that it runs in $\Theta(n^2)$ time is not really important to the practical use. If we have determined that an $O(n^2)$ -time algorithm is fast enough for our application, then who cares if the algorithm that was claimed to be $O(n^2)$ is actually $\Theta(n)$ ? Obviously, if you are going to give an upper bound on the running time of an algorithm, you will endeavor to give the best possible upper bound (there is no sense in saying your algorithm is $O(n^3)$ when you could also say it is $O(n^2)$ ). For this reason, when people say " $O$ " they often implicitly mean " $\Theta$ ". The reason people write " $O$ " is because this is easier on a normal keyboard, is customary, conveys the most important information about the algorithm (the upper bound on the speed) and people often can't be bothered to formally prove the lower bound.
{ "source": [ "https://cs.stackexchange.com/questions/118676", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/108129/" ] }
119,817
I am not very well-versed in the world of theorem proving, much less automated theorem proving, so please correct me if anything I say or assume in my question is wrong. Basically, my question is: are automated theorem provers themselves ever formally proven to work with another theorem prover, or is there just an underlying assumption that any theorem prover was just implemented really really well, extensively tested & reviewed, etc. and so it "must work"? If so, does there always remain some underlying doubt in any proof proven by a formally verified automated theorem prover, as the formal verification of that theorem prover still lies on assuming that the non-formally verified theorem prover was correct in its verification of the former theorem prover, even if it might technically be wrong - as it was not formally verified itself? (That is a mouthful of a question, apologies.) I am thinking of this question in much the same vein as bootstrapping compilers.
I recommend reading Pollack's How to believe a machine-checked proof . It explains how proof assistants are designed to minimize the amount of critical code. There are many levels of formal verification (that's the phrase you're looking for in place of "proven") of a proof assistant:

1. Verify that the algorithms used by the proof assistant are correct.
2. Verify that the implementation of (the critical core of) the proof assistant is correct.
3. Verify that the compiler for the language in which the proof assistant is implemented is correctly designed and implemented.
4. Verify that the hardware on which the proof assistant runs is correctly designed and built.
5. Compute the probability that a cosmic ray passes through the CPU and tricks your proof assistant every time you run it.
6. Estimate the likelihood that you are insane.

People put serious effort into these (well, at least the first four). For example, steps 1 and 2 are addressed in Coq Coq Correct! , and steps 3 and 4 in the amazing award-winning CompCert project .
{ "source": [ "https://cs.stackexchange.com/questions/119817", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/114663/" ] }
120,556
Let's use Traveling Salesman as the example, unless you think there's a simpler, more understandable example. My understanding of the P=NP question is that, given the optimal solution of a difficult problem, it's easy to check the answer, but very difficult to find the solution. With the Traveling Salesman, given the shortest route, it's just as hard to determine that it's the shortest route, because you have to calculate every route to ensure that solution is optimal. That doesn't make sense. So what am I missing? I imagine lots of other people encounter a similar error in their understanding as they learn about this.
Your version of the TSP is actually NP-hard, exactly for the reasons you state. It is hard to check that it is the correct solution. The version of the TSP that is NP-complete is the decision version of the problem (quoting Wikipedia): The decision version of the TSP (where given a length L, the task is to decide whether the graph has a tour of at most L) belongs to the class of NP-complete problems. In other words, instead of asking "What is the shortest possible route through the TSP graph?", we're asking "Is there a route through the TSP graph that fits within my budget?".
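One practical note on why the decision version still captures the difficulty (my own addition): if you had an efficient answer to the budget question, you could recover the optimal tour length with a polynomial number of queries by binary searching on L. A sketch, assuming integer edge weights and a hypothetical decision oracle has_tour_of_length_at_most:

```python
def optimal_tour_length(has_tour_of_length_at_most, lo, hi):
    """Binary search for the smallest budget L for which the decision oracle says yes."""
    while lo < hi:
        mid = (lo + hi) // 2
        if has_tour_of_length_at_most(mid):
            hi = mid          # a tour of length <= mid exists, so try smaller budgets
        else:
            lo = mid + 1      # no tour that cheap, so raise the budget
    return lo
```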
{ "source": [ "https://cs.stackexchange.com/questions/120556", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/115485/" ] }
120,810
This would be analogous to the Kolmogorov complexity of a string, except that in this case, I'm interested in the algorithm that solves a given problem using the least number of steps. We would therefore have to be able to show that any other algorithm is at best of the same order of complexity as the algorithm in question. I'm asking because I'm working on a paper that makes use of this concept, and I was surprised when I realized that I'm not aware of any name for this concept, though I'll concede I'm risking embarrassment if there is such a name that I'm simply unaware of.
You can say that an algorithm is asymptotically optimal in such a case. In general, people might also say that an algorithm is optimal in some other sense, like assuming some particular complexity-theoretic conjecture like (S)ETH .
{ "source": [ "https://cs.stackexchange.com/questions/120810", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/115733/" ] }
121,080
I understand that GPUs are generally used to do LOTS of calculations in parallel. I understand why we would want to parallelize processes in order to speed things up. However, GPUs aren't always better than CPUs, as far as I know. What kinds of tasks are GPUs bad at? When would we prefer CPU over GPU for processing?
GPUs are bad at doing one thing at a time. A modern high-end GPU may have several thousand cores, but these are organized into SIMD blocks of 16 or 32. If you want to compute 2+2, you might have 32 cores each compute an addition operation, and then discard 31 of the results.

GPUs are bad at doing individual things fast. GPUs only recently topped the one-gigahertz mark, something that CPUs did more than twenty years ago. If your task involves doing many things to one piece of data, rather than one thing to many pieces of data, a CPU is far better.

GPUs are bad at dealing with data non-locality. The hardware is optimized for working on contiguous blocks of data. If your task involves picking up individual pieces of data scattered around your data set, the GPU's incredible memory bandwidth is mostly wasted.
{ "source": [ "https://cs.stackexchange.com/questions/121080", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/104376/" ] }
121,354
Turing machines are perhaps the most popular model of computation for theoretical computer science. Turing machines don't have random access memory, since we can only read the cell where the tape head is currently located. This seems unwieldy to me. Why don't theoretical computer scientists use a model with random access memory, like a register machine, as the basic model of computation?
"Why don't theoretical computer scientists use a model with random access memory, like a register machine, as the basic model of computation?"

The short answer is that this model is actually more complicated to describe and prove things about. "Jumping" from one place in memory to another is a more complex operation. Note that it requires:

- reading an address (which requires that we define how the address is written on the tape), and
- jumping to wherever that address says on the tape.

Alternatively, I'm not sure what you have in mind by "register machine", but this also requires care. How large can the registers be? The machine then has separate mechanisms for how it accesses/modifies the registers and how it accesses/modifies main memory. In sum, it's a more complicated model, and Turing machines are easier to deal with.

However, the long answer is that theoretical computer scientists do use a model with random access memory in real practice: it's called the RAM (Random Access Machine) model. The Turing machine is not considered a good model for so-called "fine-grained complexity" (e.g. whether a problem can be solved in $O(n^2)$ or $O(n)$ time), so this requires more careful models. The RAM model is standard and perhaps, arguably, more accepted than the Turing machine as a model of how real computers work, despite being more complicated.

For example, we can show that deciding whether two strings are equal requires $\Theta(n^2)$ time on a single-tape Turing machine. But this of course does not hold in more accurate models of computation like the RAM model, where it takes $O(n)$.

Therefore:

- If you just care about whether a problem can be solved at all (or whether it can be solved in polynomial time), Turing machines are considered sufficient.
- But if you care about exactly how difficult it is to solve, then you have to resort to a more complex model, such as the RAM.
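To see why the choice of model changes fine-grained running times, here is a toy Python cost model (my own simplification, not a formal definition of either machine) that counts head movement for a tape-based machine versus unit-cost access for a RAM.

```python
# Toy cost model: how many single-cell head moves does a tape-based machine need
# to touch a sequence of memory positions, versus a RAM that reaches any address
# in one step? This is an illustration, not a formal TM or RAM definition.

def tape_access_cost(positions, start=0):
    """Sum of |move| distances for a head that must walk cell by cell."""
    cost, head = 0, start
    for p in positions:
        cost += abs(p - head)
        head = p
    return cost

def ram_access_cost(positions):
    """One unit of cost per access, regardless of address."""
    return len(positions)

# Alternating between the two ends of an n-cell tape, roughly the pattern a
# string-equality check forces when the two strings sit at opposite ends.
n = 1000
alternating = [0 if i % 2 == 0 else n for i in range(2 * n)]

print("tape-style cost:", tape_access_cost(alternating))   # grows roughly like n^2
print("RAM-style cost: ", ram_access_cost(alternating))    # grows linearly in n
```

With an access pattern that keeps jumping between the two ends of the tape, the tape-style cost grows roughly quadratically while the RAM-style cost stays linear, which is the same gap described above for string equality.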
{ "source": [ "https://cs.stackexchange.com/questions/121354", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/56687/" ] }
121,710
Assuming we have $\sf P = NP$, how would I show how to solve the graph coloring problem in polynomial time? Given a graph $G = (V,E)$, find a valid coloring $\chi : V \to \{1,2,\cdots,k\}$ for some $k$, satisfying the property that $(u,v) \in E$ implies $\chi(u)\ne\chi(v)$, so as to minimize the number $k$ of "colors" used.
There are two cases:

$P = NP$ non-constructively: this means we have derived a contradiction from the assumption that $P \neq NP$, and thus can conclude that $P = NP$ by the law of the excluded middle. In this case, we have no idea what an algorithm to solve graph coloring (or any other problem) in polynomial time looks like. We know one exists, because we know that if it doesn't exist, we can derive a contradiction. So a proof of this form is pretty useless for solving problems quickly.

$P = NP$ constructively: in this case, we have a polynomial-time algorithm for some $NP$-hard problem, let's say $L$. If $L$ is NP-hard, then it must solve some other NP-hard problem in polynomial time (i.e. it reduces from that problem). That problem, in turn, either reduces from another problem, or has a direct reduction from every problem in $NP$. We keep following the trail of reductions until we get to one with a direct proof (probably 3SAT). By composing these reductions, we get an algorithm that solves 3SAT in polynomial time (because each reduction only makes a polynomial number of calls to the previous algorithm, and our starting algorithm for $L$ runs in polynomial time).

We then plug that algorithm into the reduction from the Cook-Levin theorem, which gives us a way to simulate any algorithm running in non-deterministic polynomial time with a polynomial number of calls to a 3SAT solver. Again, a polynomial number of calls to a polynomial-time algorithm runs in polynomial time. Finally, there is a simple non-deterministic algorithm that solves graph coloring in polynomial time: just guess a coloring and check if it's valid. So we use Cook-Levin to simulate this algorithm in polynomial time.

As you can imagine, each time we have to compose a reduction, the degree of our polynomial is going to get higher and higher. So it's entirely possible that $P = NP$ but graph coloring can only be solved in, say, $O(n^{100000000000000})$ time. This is still polynomial time, but it really doesn't buy us much in terms of practically solving problems.
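To make the last step concrete, the "check if it's valid" part of that non-deterministic algorithm is just a polynomial-time verifier. Here is a minimal Python sketch (the graph representation and names are my own choices, not part of the argument above):

```python
# Minimal sketch of the polynomial-time *verifier* behind "guess a coloring and check it".
# The graph is assumed to be given as a list of edges over arbitrary hashable vertex names.

def is_valid_coloring(edges, coloring):
    """Return True if no edge joins two vertices of the same color."""
    return all(coloring[u] != coloring[v] for u, v in edges)

# A triangle needs 3 colors: this 3-coloring is valid, the 2-coloring is not.
triangle = [("a", "b"), ("b", "c"), ("a", "c")]
print(is_valid_coloring(triangle, {"a": 1, "b": 2, "c": 3}))  # True
print(is_valid_coloring(triangle, {"a": 1, "b": 2, "c": 1}))  # False
```

The check runs in time linear in the number of edges; producing a coloring with few colors is the hard part, which is where the chain of reductions described above would come in.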
{ "source": [ "https://cs.stackexchange.com/questions/121710", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/87229/" ] }
125,000
I am a software developer but I came from a non-CS background, so maybe this is the wrong question to ask, but I do not get why logic gates / boolean logic behave the way they do. Why, for example:

1 AND 1 = 1 // true AND true
1 OR 0 = 1 // true OR false
0 AND 1 = 0 // false AND true

And so on. Is it purely a matter of definition, i.e. is it like that just by convention, or is there a logical/intuitive explanation for these results? I have searched Google and also looked at the Wiki page on logic gates for an explanation of 'why', but I can only find 'how'. I would appreciate any answer or resources.
As stated by user120366, 16 possible 2-input logic gates exist. I've tabulated them for you here:

A|B||0|1|2|3|4|5|6|7|8|9|a|b|c|d|e|f
-+-++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
0|0||0|0|0|0|0|0|0|0|1|1|1|1|1|1|1|1
0|1||0|0|0|0|1|1|1|1|0|0|0|0|1|1|1|1
1|0||0|0|1|1|0|0|1|1|0|0|1|1|0|0|1|1
1|1||0|1|0|1|0|1|0|1|0|1|0|1|0|1|0|1

A and B are the inputs, 0 through f are the possible permutations of outputs. These gates have been named:

0 = FALSE
1 = AND
2 = A NIMPLY B (A AND NOT B)
3 = A
4 = B NIMPLY A (B AND NOT A)
5 = B
6 = XOR
7 = OR
8 = NOR
9 = XNOR
a = NOT B
b = B IMPLY A (A OR NOT B)
c = NOT A
d = A IMPLY B (B OR NOT A)
e = NAND
f = TRUE

6 of these (0, 3, 5, a, c, f) discard one or both inputs. The IMPLY and NIMPLY gates are rare, though they are certainly used in formal logic. AND, OR and XOR are easiest for humans to reason with, but for physical hardware, NOR and NAND are also used heavily, because they can be simpler to implement and make smaller circuits. The same probably holds for XNOR.

So, as stated earlier, it's not so much that we decided that the gates should behave this way, but that 16 possible gates can be defined, and we came up with descriptive names for them.
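If it helps to see the table generated rather than written out, here is a small Python sketch (my own illustration) that enumerates the 16 possible output columns and checks two of them against Python's bitwise operators:

```python
# My own illustration: every 2-input gate is one of the 16 possible output columns above.
GATE_NAMES = ["FALSE", "AND", "A NIMPLY B", "A", "B NIMPLY A", "B", "XOR", "OR",
              "NOR", "XNOR", "NOT B", "B IMPLY A", "NOT A", "A IMPLY B", "NAND", "TRUE"]

for gate in range(16):
    # In the table, bit 3 of the column number is the output for (A,B) = (0,0),
    # bit 2 for (0,1), bit 1 for (1,0) and bit 0 for (1,1).
    outputs = [(gate >> (3 - k)) & 1 for k in range(4)]
    print(f"{gate:x}: {' '.join(map(str, outputs))}  {GATE_NAMES[gate]}")

# Sanity check: the AND column (1) and XOR column (6) behave like Python's & and ^ on bits.
for a in (0, 1):
    for b in (0, 1):
        k = 2 * a + b
        assert ((0x1 >> (3 - k)) & 1) == (a & b)
        assert ((0x6 >> (3 - k)) & 1) == (a ^ b)
```

Each column number, read as a 4-bit pattern, simply is the gate's truth table, which is another way of seeing that there are exactly 16 possible 2-input gates.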
{ "source": [ "https://cs.stackexchange.com/questions/125000", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/120343/" ] }
125,304
As far as I've understood it, referring to a system such as the NES as an 8-bit system indicates that one can access 8 bits of data in one instruction. While I understand that we're not saving vast amounts of time by calling it "one byte" instead of "eight bits", is there a particular reason why the latter is/was preferred?
"Back in the day" computers were defined more by their word size, for example the PDP-8 had 12-bit words composed of two 6-bit "bytes". A "nibble" was half a byte, or 3 bits in this case (and here the op codes were 3 bits). It is only in recent decades that 8-bit bytes became so prevalent as to make them the default. Calling the NES 8-bit is less ambiguous than calling it 1 byte, keeping in mind we're talking about a system that came out in 1983.
{ "source": [ "https://cs.stackexchange.com/questions/125304", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/30465/" ] }
126,198
Can you give some real-world examples of which graph algorithms people actually use in applications? Given a complicated graph, say a social network, what properties/quantities do people want to know about it? It would be great if you could give some references. Thanks.
Graphs are definitely one of the most important data structures, and are used very broadly.

Optimization problems

- Algorithms like Dijkstra's enable your navigation system / GPS to decide which roads you should drive on to reach a destination (a minimal sketch follows after this list).
- The Hungarian Algorithm can assign each Uber car to people looking for a ride (an assignment problem).
- Chess, Checkers, Go and Tic-Tac-Toe are formulated as a game tree (a degenerate graph) and can be "solved" using brute-force depth-first or breadth-first search, or using heuristics with minimax or A*.
- Flow networks and algorithms like maximum flow can be used in modelling utility networks (water, gas, electricity), roads, flight scheduling, supply chains.

Network Topology

- The minimum spanning tree ensures that your internet traffic gets delivered even when cables break.
- Topological sort is used in project planning to decide which tasks should be executed first.
- Disjoint sets help you efficiently calculate currency conversions between NxN currencies in linear time.
- Graph coloring can in theory be used to decide which seats in a cinema should remain free during an infectious disease outbreak.
- Detecting strongly connected components helps uncover bot networks spreading misinformation on Facebook and Twitter.
- DAGs are used to perform very large computations distributed over thousands of machines in software like Apache Spark and TensorFlow.

Specialized types of graphs

- Bayesian networks were used by NASA to select an operating system for the space shuttle.
- Neural networks are used in language translation, image synthesis (such as fake face generation), color recovery of black-and-white images, and speech synthesis.
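To make the navigation example concrete, here is a minimal Dijkstra sketch in Python; the road network, node names and weights are made up purely for illustration.

```python
# Minimal Dijkstra sketch for the navigation example above (made-up road network).
import heapq

def dijkstra(graph, source):
    """Shortest distances from `source` in a graph given as {node: [(neighbor, weight), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, already found a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

roads = {
    "home":      [("highway", 4), ("back_road", 7)],
    "highway":   [("office", 6)],
    "back_road": [("office", 2)],
}
print(dijkstra(roads, "home"))   # {'home': 0, 'highway': 4, 'back_road': 7, 'office': 9}
```

The same shape of algorithm, with road segments as weighted edges, is what sits behind a real navigation query, just at a much larger scale and with many engineering refinements.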
{ "source": [ "https://cs.stackexchange.com/questions/126198", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/7109/" ] }
127,490
I was reading this article. The author talks about "The Blub Paradox". He says programming languages vary in power. That makes sense to me. For example, Python is more powerful than C/C++, but its performance is not as good as that of C/C++. Is it always true that more powerful languages must necessarily have lower achievable performance than less powerful languages? Is there a law/theory for this?
This is simply not true. And part of why it's false is that the premise isn't well formed: there is no such thing as a fast or slow language. The expressive power of a language is purely a function of its semantics. It is independent of any particular implementation. You can talk about the performance of code generated by GCC, or about the performance of the CPython interpreter, but these are specific implementations of the language. You could write a very slow C compiler, and you can write Python interpreters that are quite fast (like PyPy).

So the answer to the question "is more power necessarily slower?" is no, purely because you or I could go write a slow C compiler that has the same expressive power as GCC, but is slower than Python.

The real question is "why do more powerful languages tend to have slower implementations?" The reason is that, if you're comparing C and Python, the difference in power is abstraction. When you do something in Python, there is a lot more happening implicitly behind the scenes. More stuff to do means more time.

But there are also lots of social elements at play. People who need high performance choose low-level languages, so they have fine-grained control of what the machine is doing. This has led to the idea that low-level languages are faster. But for most people, writing in C vs Python will have pretty comparable performance, because most applications don't require that you eke out every last millisecond. This is particularly true when you consider the extra checks that are manually added to program defensively in C. So just because lots of specialists have built fast things in C and C++ doesn't mean they're faster for everything.

Finally, some languages have zero-cost abstraction. Rust does this, using a type system to ensure memory safety without needing runtime garbage collection. And Go has garbage collection, but it's so fast that you get performance on par with C while still getting extra power.

The TL;DR is that more powerful languages are sometimes faster, but this is not a firm rule, and there are exceptions and complications.
{ "source": [ "https://cs.stackexchange.com/questions/127490", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/115941/" ] }