INSTRUCTION | RESPONSE | SOURCE | METADATA
---|---|---|---|
Do theoretical computer scientists work more with proving theorems, or work more with data?
As theoretical computer scientists, do you work more with proving theorems, or do you work more with data?
According to How to Criticize Computer Scientists, computer scientists can be divided into two groups: theoretical and experimental.
Even in many Computer Science faculties, not all of the academics are theoretical computer scientists. Since artificial intelligence (AI) is booming, more people are moving towards AI, which relies heavily on data.
Theoretical computer scientists, from my understanding, work more on proving theorems and enjoy the mathematics and/or developing new theorems/algorithms, while some of the experimentalists, in some faculties/countries, prefer to use existing algorithms without deep-diving into them. Maybe this also depends on the culture of each faculty/university.
|
There are many areas of computer science. Some main examples are systems, networks, programming languages, robotics, human-computer interaction, artificial intelligence, machine learning, and theory. There is overlap between the areas. Theory in particular is usually only a fraction of any CS department.
Most areas use several different approaches. Some main examples are experiments with data, running simulations, building software, building hardware, human subject experiments, proposing models and algorithms, and proving theorems.
Theoretical CS focuses mostly on proving theorems but often includes a bit of other approaches. For example, a theorist may use simulations or experiments on data to further investigate an algorithm or random process, beyond the information given by a theorem.
(A better place for this kind of question is cs.stackexchange.com.)
|
stackexchange-cstheory
|
{
"answer_score": 2,
"question_score": -1,
"tags": "soft question"
}
|
Approximating Independent Dominating set on bipartite graphs
I'm interested in the following problem: given a bipartite graph, find the smallest independent set of vertices that dominates all other vertices.
My question is: _are there any positive results in the literature regarding polynomial-time guaranteed approximations for this problem?_
On the negative side, we know that no constant factor approximation exists unless $\mathrm{P}=\mathrm{NP}$. Moreover, there is some constant $\delta>0$ for which there is no $\delta B$ factor approximation on bipartite graphs with maximum degree bounded by $B$ (for large enough $B$, assuming $\mathrm{P}\neq\mathrm{NP}$). See [M. Chlebík and J. Chlebíková, Approximation Hardness of Dominating Set Problems in Bounded Degree Graphs]. The problem is also more or less inapproximable when not restricted to bipartite graphs.
I was however unable to find anything implying a positive result about polynomial time approximations on bipartite graphs.
|
After a bit more searching, it appears that what I'm looking for is unlikely to exist.
In [1], it is proven that approximating the minimum maximal independence number (which is equivalent to the minimum size of an independent dominating set) within a factor of $O(n^{1-\epsilon})$ is $\mathrm{NP}$-hard for any $\epsilon > 0$. This remains true even when restricted to bipartite graphs (this is the part I missed when I first came across this paper).
[1] Halldórsson, Magnús M., **Approximating the minimum maximal independence number**, Inf. Process. Lett. 46, No. 4, 169–172 (1993). ZBL0778.68041.
|
stackexchange-cstheory
|
{
"answer_score": 1,
"question_score": 3,
"tags": "reference request, graph algorithms, approximation algorithms, approximation hardness"
}
|
Ambiguity of regular expressions
Some regular expressions are ambiguous. Some are not. `a*b*` is unambiguous, for example. The expression `a*a*` is ambiguous, but it can be written in the unambiguous form `a*`. The answer to this question gives an algorithm for deciding whether a regular expression is ambiguous.
1. Is there an algorithm for finding an equivalent unambiguous form of any given RE?
2. Are there REs that are inherently ambiguous?
(This question seems relevant by title; not by content)
|
Yes, every regular expression can be converted into an unambiguous one by converting to a DFA and then to a regular expression. And no, there aren't any inherently ambiguous regular languages in the sense described in the question. This is a classic result in automata theory:
R. Book, S. Even, S. Greibach and G. Ott, Ambiguity in graphs and expressions, IEEE Transactions on Computers 20(2) (1971) 149–153.
See also this question over at MO for more details and a reference.
|
stackexchange-cstheory
|
{
"answer_score": 13,
"question_score": 10,
"tags": "fl.formal languages, regular expressions"
}
|
Is the function $f(a_1 \dotsm a_n) = a_1(a_1a_2)(a_1a_2a_3)\ \dotsm\ (a_1 \dotsm a_n)$ regularity-preserving?
A function $f: A^* \to A^*$ is _regularity-preserving_ if, for each regular language $L$ of $A^*$, the language $f^{-1}(L)$ is regular. I think I have a proof, as a consequence of more general results, that the function defined by $$ f(a_1 \dotsm a_n) = a_1(a_1a_2)(a_1a_2a_3)\ \dotsm\ (a_1 \dotsm a_n) $$ is regularity-preserving. If this result is correct, could someone provide an elementary proof?
|
Here is a proposal for an elementary proof:
Let $\mathcal A=(A,Q,q_0,F,\delta)$ be a DFA for $L$, we want to build a DFA $\mathcal A'=(A,Q',q_0',F',\delta')$ for $f^{-1}(L)$. Intuitively, when reading a word $u$, $\mathcal A'$ will remember the state reached in $\mathcal A$ by $f(u)$, together with the action of $u$ on all states of $\mathcal A$. More formally, we take:
* $Q'=Q\times Q^Q$
* $q_0'=(q_0,\mathit{id})$
* $F'=F\times Q^Q$
* $\delta_a'(p,g)=(\delta_a(g(p)),\delta_a\circ g)$
where $\delta_a:Q\to Q$ is the transition function associated with a letter $a$.
This ensures that after reading a word $u$, the first component of the state of $\mathcal A'$ gives the state reached by $\mathcal A$ on $f(u)$.
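To make the construction concrete, here is a small Python sketch of $\mathcal A'$ (my own illustration with a toy automaton; encoding $g$ as a tuple is an implementation choice, not part of the answer):

```python
from itertools import product

def inverse_image_dfa(n, q0, F, delta):
    """A = (Q={0..n-1}, q0, F, delta) with delta[a][q] the next state.
    Returns a membership test for f^{-1}(L(A))."""
    def accepts(u):
        p, g = q0, tuple(range(n))   # g = id: action of the empty word on Q
        for a in u:
            da = delta[a]
            p = da[g[p]]             # appending a extends f(u) by the block u.a
            g = tuple(da[g[q]] for q in range(n))   # new action: delta_a after g
        return p in F
    return accepts

# toy check against a direct computation of f(u): L = "odd number of a's"
delta = {'a': {0: 1, 1: 0}, 'b': {0: 0, 1: 1}}
F = {1}
def f(u): return ''.join(u[:i + 1] for i in range(len(u)))
def run(w):
    q = 0
    for a in w: q = delta[a][q]
    return q in F
acc = inverse_image_dfa(2, 0, F, delta)
assert all(acc(u) == run(f(u))
           for u in (''.join(t) for k in range(6) for t in product('ab', repeat=k)))
```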
|
stackexchange-cstheory
|
{
"answer_score": 7,
"question_score": 6,
"tags": "automata theory, regular language"
}
|
Proof and computational complexity
I couldn't find documents elaborating on this: if the Curry Howard correspondence is to be interpreted as establishing a strong relation between proofs and programs, should there not be a strong relation between proof and computational complexity?
I'm asking this as a reference request.
|
Maybe the keyword you are looking for is "Implicit Complexity". It is more general than the Curry-Howard correspondence, but several lines of research investigate along the axis you are interested in. You can check for instance the publications of Patrick Baillot for many references and pointers.
For a little self-promotion, here are for instance two recent papers [KPP1,KPP2] characterizing via the Curry-Howard correspondence on certain cyclic proofs the following complexity classes, depending on the logical rules incorporated or not in the cyclic proof system:
* Regular languages (no contraction, no cut) [KPP1]
* DLogSpace (contraction, no cut) [KPP1]
* Primitive recursive functions (cut, no contraction) [KPP2]
* Gödel's System T (cut, contraction) [KPP2]
[KPP1] Cyclic Proofs and Jumping Automata. Kuperberg, Pinault, Pous, FSTTCS 2019
[KPP2] Cyclic Proofs, System T and the power of Contraction. Kuperberg, Pinault, Pous, POPL 2021
|
stackexchange-cstheory
|
{
"answer_score": 9,
"question_score": 3,
"tags": "cc.complexity theory, reference request, lo.logic, computability"
}
|
Is this a weaker or stronger form of the halting problem
Halting problem: There is no decider for $L =\\{\langle M,w\rangle ~|~ M$ halts on $w \\}$
This problem: For any $H$ which decides some infinite subset of $L$, I can always constructively find $\langle M,w\rangle$ such that $H(\langle M,w\rangle)$ is incorrect.
Here the infinite subset part is supposed to be like heuristics: primitive checks for infinite loops and the like. If someone comes up with a program which can mostly determine halting, I should always be able to come up with an adversarial case on which it fails.
Halting implies these adversarial cases exist, but can I find them at all? Can I find them efficiently?
|
The standard proof that the halting problem $L$ is undecidable also gives an efficient algorithm for constructing an instance on which a given Turing machine $H$ fails to solve the halting problem.
For any Turing machine $H$, let $M_H$ be a Turing machine implementing the following algorithm: "On input $\langle P \rangle$ where $P$ is a Turing machine, run $H(\langle P, P \rangle)$. If it outputs $1$, run forever, otherwise halt."
By design, $H(\langle M_H, M_H \rangle)$ fails. Either it runs forever, or else it halts and gives the wrong answer to the question of whether $\langle M_H, M_H \rangle \in L$. Given $\langle H \rangle$, one can efficiently construct $\langle M_H, M_H \rangle$.
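A minimal Python sketch of the construction (illustrative only; here a "machine" is just a Python callable and $H$ is a toy heuristic):

```python
def make_adversary(H):
    """Given a candidate halting heuristic H on pairs (program, input),
    build the program M_H that does the opposite of H's prediction."""
    def M_H(P):
        if H((P, P)):        # H predicts "P halts on <P>"
            while True:      # ...so loop forever
                pass
        return               # H predicts "loops", so halt immediately
    return M_H

H = lambda pair: True        # toy heuristic: always answers "halts"
M_H = make_adversary(H)
# H claims M_H halts on <M_H>, yet by construction M_H(M_H) runs forever:
# <M_H, M_H> is the efficiently constructed instance on which H is wrong.
print(H((M_H, M_H)))         # True -- the wrong answer
```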
|
stackexchange-cstheory
|
{
"answer_score": 3,
"question_score": 1,
"tags": "halting problem"
}
|
Where, if any, is there currently any research being done on the subject of ternary computers?
I had the experience several years ago of working with a team that had developed a ternary computing system. It ran out of funding and was abandoned but I felt it was ahead of its time. Currently, what is the state of this (ternary computing) research and development? Is there a place online that one could suggest to look for more information?
|
Modular counting gates are probably the closest thing in complexity theory to what you're asking about. Modular gates sum their inputs and compare against 0 mod $p$. Many authors consider these gates as taking in values in the range $[0,p-1]$ since you can hook multiple wires between pairs of gates. This paper provides a good summary of results in the area up until its publication date (2010) and a result on probabilistically emulating the AND function with just modular gates in constant depth.
As far as it relates to your original question, much more focus is given to composite moduli of two distinct prime factors (e.g. 6) than to prime moduli such as 3, since the computational power of a composite modulus with distinct factors is greater and much less well understood, as the linked paper describes.
|
stackexchange-cstheory
|
{
"answer_score": 0,
"question_score": -2,
"tags": "computability"
}
|
Are there strongly normalizing lambda terms that cannot be given a System F type?
I know that all well-typed System F terms are strongly normalizing, but is the converse true as well? In other words, does System F typeability precisely characterize program termination? (And if so, how to prove it?) Or are there lambda terms that are strongly normalizing but cannot be given a System F type?
A System F interpreter cannot be implemented in System F. On the other hand, a System F interpreter can be implemented in untyped lambda calculus, but that's not enough. Can a _strongly normalizing_ System F interpreter be implemented in untyped lambda calculus? If yes, this answers our question positively, but I am not sure about it.
|
As you found out yourself, the answer to your question is yes. You found a rather convoluted example, a much simpler example is the following:
$$(\lambda zy.y(zI)(zK))(\lambda x.xx)$$
where $I$ and $K$ are the identity and first-projection combinators. This may be found at p. 204 of Sørensen and Urzyczyn's _Lectures on the Curry-Howard Isomorphism_. They attribute it to Ronchi Della Rocca and Giannini, and also give a seemingly even simpler example, which is $c_2c_2K$, where $c_2$ is, I believe, the Church numeral 2 (I'm not sure about their notation so I may be wrong).
|
stackexchange-cstheory
|
{
"answer_score": 8,
"question_score": 3,
"tags": "lambda calculus, typed lambda calculus"
}
|
upper bound on the total number of fixed-length paths in an acyclic graph
I was wondering if there is an upper bound on the total number of fixed-length paths (path length from 1 to $n-1$ given $n$ nodes) in an acyclic graph (not directed) of $n$ nodes? If so, can you point me to some references?
This question explains that counting $s$-$t$ paths is #P-complete, but I'm not sure if the same applies to my question as well.
Thanks!
|
An undirected acyclic graph is a forest.
So a rough upper bound is $\binom{n}{2} \leq n^2$, since every pair of vertices has at most one path between them.
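For completeness, here is a short Python sketch (my own, assuming the forest is given as an edge list) that counts the paths exactly via component sizes:

```python
from math import comb

def count_paths_in_forest(n, edges):
    # In a forest there is at most one path between any two vertices, so the
    # number of paths equals the number of vertex pairs in the same tree.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return sum(comb(s, 2) for s in sizes.values())

# a path on 4 vertices plus an isolated vertex: C(4,2) = 6 paths
print(count_paths_in_forest(5, [(0, 1), (1, 2), (2, 3)]))  # 6
```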
|
stackexchange-cstheory
|
{
"answer_score": 0,
"question_score": -2,
"tags": "graph theory"
}
|
Finding vertex separator such that the induced subgraph has minimal number of edges
My problem is related to edge and vertex cuts with a little twist.
Given a graph $G$ and two vertices $u$ and $v$, I want to find a set of vertices $S \subset V$ that disconnects $u$ and $v$ such that the induced subgraph $G[S]$ has a minimal number of edges.
Consider the following graph: [figure omitted]
|
[…] 217–222, as cited by Guantao Chen and Xingxing Yu, "A note on fragile graphs", Discrete Math. 2002. So your problem is hard even in the special case of zero edges.
|
stackexchange-cstheory
|
{
"answer_score": 11,
"question_score": 8,
"tags": "graph theory, graph algorithms, np hardness, partition problem, max flow min cut"
}
|
What does x.y notation mean?
In the book I am reading, this notation is used but I don't see a definition. What does $x.y$ mean?
|
This is the notation for Harper's "abstract binding structures": `x.t` represents the binding site of a variable `x` and the term `t` the variable scopes over.
Apparently you are in the parts that define variable bindings. $\mathcal{B}[\mathcal{X}]_s$ appears to be the set of terms, or binding structures at sort $s$ whose free variables are among $\mathcal{X}$. So I would expect (but I don't have the book) that there is in fact an explanation for this notation close by.
|
stackexchange-cstheory
|
{
"answer_score": 3,
"question_score": 0,
"tags": "pl.programming languages, notation"
}
|
Linear Integer Arithmetic Satisfiability with Three Literals
I'm stuck on trying to find an unsatisfiable conjunction of the form $a \wedge b \wedge c$ where:
* $a \wedge b$ is satisfiable
* $a \wedge c$ is satisfiable
* $b \wedge c$ is satisfiable
* $a, b, c$ are boolean literals from Linear Integer Arithmetic, i.e. $x \leq y$, $\neg(3 = 5)$, $z = z$, etc.
Is this impossible (and is there a proof of that), or am I just missing an obvious example?
|
$(x< y) \land (y < z) \land (z < x)$
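One can verify this example mechanically with an SMT solver (a sketch assuming the `z3-solver` Python package; not part of the original answer):

```python
from z3 import Ints, Solver, And, sat

x, y, z = Ints('x y z')
a, b, c = x < y, y < z, z < x

for pair in [(a, b), (a, c), (b, c)]:
    s = Solver(); s.add(And(*pair))
    assert s.check() == sat          # every pair of literals is satisfiable

s = Solver(); s.add(And(a, b, c))
print(s.check())                     # unsat: the full conjunction is not
```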
|
stackexchange-cstheory
|
{
"answer_score": 2,
"question_score": -2,
"tags": "lo.logic, sat, boolean formulas"
}
|
Finding output with unique witness in matrix multiplication
Consider two square matrices $A(x,y)$ and $B(y,z)$ of dimensions $N \times N$ containing boolean entries. Consider the output product matrix $C(x,z)$ where $C = AB$ (not boolean matrix multiplication, but the entries store the count of how many $y$ generate a given output entry). My goal is to find only the output entries in $C$ which are generated by exactly one value of $y$ (i.e. a unique witness). In other words, I want to find all $(i,j)$ such that $C(i,j) = 1$. Clearly, I can simply perform the matrix multiplication and iterate over $C$ to identify the output with unique witnesses, but can we do better? Is this problem as hard as matrix multiplication itself? Are there any known reductions that consider the problem of finding output that is generated by a unique $y$ value? I am interested in deterministic algorithms but even something probabilistic may be insightful.
|
You can reduce Boolean matrix multiplication (BMM) to this problem. (BMM is matrix multiplication over the OR/AND semiring with 0 and 1.) Imagine adding one more column to the first matrix A and one more row to the second matrix B, both of which are all-ones. If the BMM of A and B had a 0 in an entry, your new product over the integers will have 1, and if the BMM had a 1 in an entry, your new product over the integers will have at least a 2. Thus determining these "exactly one" entries is at least as hard as BMM.
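A small numeric illustration of this padding trick (my sketch, using numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.integers(0, 2, (n, n))
B = rng.integers(0, 2, (n, n))

# pad A with an all-ones column and B with an all-ones row
A1 = np.hstack([A, np.ones((n, 1), dtype=int)])
B1 = np.vstack([B, np.ones((1, n), dtype=int)])

C = A1 @ B1                      # integer product of the padded matrices
bmm = (A @ B > 0).astype(int)    # Boolean matrix product of A and B
# the unique-witness entries of the padded product are exactly the
# zero entries of the Boolean product
assert ((C == 1) == (bmm == 0)).all()
```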
Whether or not BMM can be solved faster than matrix multiplication over a ring or field is a major open problem.
|
stackexchange-cstheory
|
{
"answer_score": 3,
"question_score": 1,
"tags": "ds.algorithms, graph algorithms, matrices"
}
|
Do such instances always admit a 3D matching?
I want to know whether the following kinds of special instances of the 3D Matching problem are ``yes" instances, i.e., admit a 3D matching.
We are given 3 sets $A,B,C$ containing $m$ elements each, and $n$ tuples $\\{T_i\\}_{i\in [n]}$ where $T_i \in A\times B\times C$. We further know:
(i) Each element of $A\cup B \cup C$ occurs in exactly two tuples. Simple counting shows $n = 2m$.
(ii) The elements of $A\cup B \cup C$ can be **partitioned** into $m$ singleton elements, and $m$ pairs $(x,y)$ where $\\{x,y\\} \subset T_i$ for some $i\in [n]$.
I want to know if such an instance admits a 3D matching. Is there a simple counter-example?
|
How about the following counter-example?
$m=2$, $n=4$.
$A=\\{a_1,a_2\\}$, $B=\\{b_1,b_2\\}$, $C=\\{c_1,c_2\\}$.
$T=\\{(a_1,b_1,c_1), (a_1,b_2,c_2), (a_2,b_1,c_2), (a_2,b_2,c_1)\\}$.
With the partition $A\cup B\cup C = \\{a_1,b_1\\}\cup\\{a_2,b_2\\}\cup\\{c_1\\}\cup\\{c_2\\}$.
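A brute-force check (my sketch in Python) confirms that no two disjoint tuples cover all six elements:

```python
from itertools import combinations

T = [("a1", "b1", "c1"), ("a1", "b2", "c2"),
     ("a2", "b1", "c2"), ("a2", "b2", "c1")]
m = 2
# a 3D matching of size m would be m tuples covering all 3m elements
ok = any(len(set().union(*combo)) == 3 * m
         for combo in combinations(T, m))
print(ok)   # False: every pair of tuples shares an element
```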
|
stackexchange-cstheory
|
{
"answer_score": 1,
"question_score": 0,
"tags": "graph theory, matching, bipartite graphs, examples"
}
|
Reference request: An algebraic characterisation of LTL[XF]-definable word languages
I'm looking for a reference to the fact that LTL[XF]-definable languages (LTL where only the (strict) finally/future modality is allowed) correspond to the variety $\mathbf{R}$ (see: 1). A similar characterisation is available for LTL[XF,XP], namely the variety $\mathbf{DA}$, see: Theorem 11 from 2.
1 Brzozowski, Fich: Languages of R-Trivial Monoids LINK
2 Tesson, P., Thérien, D.: Diamonds are forever: the variety DA LINK
PS: I have an idea how to prove it (by employing the correspondence that R = partially-ordered DFA), but before writing the result it would make sense to check whether this is already known in the literature (although I do not claim any breakthrough).
|
I've just found an answer to my question. It is inside Bojańczyk's notes titled "Languages recognised by finite semigroups and their generalisations to objects such as trees and graphs with an emphasis on definability in monadic second-order logic", more precisely in Section 2.3.
|
stackexchange-cstheory
|
{
"answer_score": 2,
"question_score": 5,
"tags": "fl.formal languages, regular language, algebra, linear temporal logic"
}
|
Resource for Understanding this Notation
I am trying to read this paper: [link omitted]. I am familiar with grammars, but I cannot understand the notations in figure 1. Can anyone suggest a resource or book where I can learn these notations? Here is what figure 1 looks like: [figure omitted]. However, I cannot find a concrete way to obtain an LTL formula from a counter-free automaton.
Is there any reference that shows such a translation?
[1] Wolfgang Thomas, Safety- and Liveness-properties in Propositional Temporal Logic: Characterisation and Decidability, Mathematical Problems in Computation Theory, Volume 21, 1988
|
As mentioned in the comments, the translation is shown in: Volker Diekert and Paul Gastin. "First-order definable languages." (2008).
And it goes via a characterization of $LTL$ as $FO[<]$.
|
stackexchange-cstheory
|
{
"answer_score": 5,
"question_score": 3,
"tags": "reference request, automata theory, linear temporal logic"
}
|
Is there a regular bipartite graph where the minimum cuts are trivial?
My question is: Given integers $r$ and $k$, is there an $r$-regular bipartite graph $G = L \cup R$ with $|L| = |R| = k$, which is $r$-edge connected, and such that every minimum cut is trivial?
We can make an $r$-regular $r$-edge-connected bipartite graph $G = L \cup R$ with $|L| = |R| = k$ by taking a union of some Hamiltonian cycles, but it has many non-trivial minimum cuts. (I say a cut is trivial if it is the set of edges incident on a single vertex.)
If $r = 2$, then I think the only $r$-regular connected bipartite graph is a Hamiltonian cycle, so it does not hold. But does it hold for $r > 2$? I have also shown that all minimum cuts are trivial in the complete bipartite graph $K_{r,r}$ (as long as $r \neq 2$).
In general how can I ensure that the minimum cuts are trivial in a graph?
|
An $r$-regular expander should do it.
The following is a simple observation that I first saw in Li (arXiv:2106.05513): if an $r$-regular graph has conductance $\phi$, then the smaller side $S$ of a minimum cut contains at most $|S| \leq 1/\phi$ vertices. Indeed, by definition of conductance we have that $|E(S,S^c)| \geq \phi r |S|$. Since this defines a minimum cut, $|E(S,S^c)| \leq r$ and hence $|S| \leq |E(S,S^c)|/(\phi r) \leq 1/\phi$.
Assuming that $1/\phi < r$ we see that the smaller side of a minimum cut can only contain $<r$ vertices. Then notice that any set of $1<\ell<r$ vertices necessarily has cut value $>r$, which implies that the minimum cut must be a trivial cut.
|
stackexchange-cstheory
|
{
"answer_score": 3,
"question_score": 1,
"tags": "graph theory, co.combinatorics, bipartite graphs"
}
|
Proving that a given formula in LTL is the smallest way to express it
I am looking for a way to prove that a given LTL formula is expressed with the fewest number of temporal operators possible.
I would like to do this to compare the expressive length with other temporal logic languages like MTL.
How can I prove that a given LTL formula is expressed most succinctly as possible? So as to say, this formula needs at least this many temporal operators.
For example, suppose I want to code that property p holds in every other state for the next 6 states starting from the current state. Then I could specify this as $p \wedge XXp \wedge XXXXp \wedge XXXXXXp = p \wedge XX(p \wedge XX(p \wedge XXp))$, where $X$ is the next operator, but how do I prove that this is the smallest way to express this?
|
I'm not sure whether it would work in your case, but in order to show succinctness results in modal/temporal logic (e.g. the fact that two-variable logic over words is exponentially more succinct than unary temporal logic) one can employ formula size games or Adler-Immerman games.
Probably the most recent paper to read is by Lauri Hella and Miikka Vilander. Its free-access version is available on arxiv here.
|
stackexchange-cstheory
|
{
"answer_score": 2,
"question_score": 4,
"tags": "proofs, linear temporal logic"
}
|
Set cover where consecutive sets differ by at most one item
First I define my version of the set cover problem: We have a collection of sets $S_1, \dots, S_m$ where each $S_i$ is a subset of $M=\\{1,\dots, m\\}$. The goal is to find the minimum number of $S_i$'s whose union is equal to $M$. This is the standard version of set cover.
Now, suppose every two consecutive sets $S_i$ and $S_{i+1}$ in our problem differ in at most one item, i.e., $\big||S_i|-|S_{i+1}|\big| \leq 1$ and either $S_i\subseteq S_{i+1}$ or $S_{i+1}\subseteq S_i$. Does this assumption make set cover easier to approximate (in polynomial time)?
|
Take an arbitrary instance $S_1,\ldots,S_n$ of SET COVER. Between $S_1$ and $S_2$, insert a chain of new subsets $$ S_1-\\{x\\},~ S_1-\\{x,y\\},~ \ldots,~ \\{z\\},~ \emptyset,~ \\{c\\},~ \ldots,~ S_2-\\{a,b\\},~ S_2-\\{a\\}.$$ Do the same for all other pairs of consecutive sets.
The resulting instance satisfies your condition, and it is equivalent (with respect to complexity and with respect to approximability) to the original instance.
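A Python sketch of this padding (identifiers are mine; it inserts, between consecutive sets, a chain that shrinks $S_i$ to $\emptyset$ one element at a time and then grows towards $S_{i+1}$):

```python
def pad_instance(sets):
    padded = []
    for i, S in enumerate(sets):
        padded.append(set(S))
        if i + 1 == len(sets):
            break
        cur = set(S)
        for x in sorted(S):            # remove S_i's elements one by one
            cur = cur - {x}
            padded.append(set(cur))
        cur = set()
        for x in sorted(sets[i + 1])[:-1]:   # grow towards S_{i+1}, stop one short
            cur = cur | {x}
            padded.append(set(cur))
    return padded

# every inserted set is a subset of an original set, so it never helps a
# cover, and consecutive sets differ by at most one element:
print(pad_instance([{1, 2}, {2, 3}]))
# [{1, 2}, {2}, set(), {2}, {2, 3}]
```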
|
stackexchange-cstheory
|
{
"answer_score": 1,
"question_score": 0,
"tags": "ds.algorithms, approximation algorithms, approximation hardness, set cover, approximation"
}
|
Hardness of maximizing $x^TAy$ with $\{-1,1\}$ entries
My question concerns the NP-hardness of the following discrete optimization problem:
Given a matrix $A \in \\{ \pm 1 \\}^{m\times n}$,
$$\begin{array}{ll} \underset{x \in \\{ \pm 1 \\}^m ,\, y \in \\{ \pm 1 \\}^n}{\text{maximize}} & x^T A \, y\end{array}$$
Is this problem known to be NP-hard?
|
NP-hardness is proved by Roth and Viswanathan in the paper On the Hardness of Decoding the Gale–Berlekamp Code.
|
stackexchange-cstheory
|
{
"answer_score": 6,
"question_score": 7,
"tags": "cc.complexity theory, np hardness, co.combinatorics, optimization"
}
|
Counterexample request: ill-scoped metavariable solution
This is a question on metavariable (aka holes) resolution in (dependent) type theories.
In many reference implementations (such as Andras Kovacs' elaboration-zoo), there is one step called 'scope check', which checks whether the solution of a metavariable is well-scoped. It is unclear to me how a solution can be ill-scoped. I wonder if there is a counterexample showing an ill-scoped solution to a metavariable?
|
I think I just came up with one. The following code block is written in a syntax similar to Agda.
test : (a : _) (B : Set) (b : B) -> a ≡ b
test a B b = refl
Assuming `≡` to be the homogeneous equality type and `refl` to be its constructor, the solution to the underscore is `B`, which is not defined there yet. Type checking the above code (with `open import Agda.Builtin.Equality`) will result in the following error message:
Cannot instantiate the metavariable _1 to solution B
since it contains the variable B
which is not in scope of the metavariable
when checking that the expression b has type _1
|
stackexchange-cstheory
|
{
"answer_score": 3,
"question_score": 3,
"tags": "dependent type, type inference"
}
|
Cost of Numerically Solving a System of P polynomials, each of V variables, and degree D to a Specific Accuracy
Let there be a set of $P$ polynomial equations $f_j(x_1,x_2,...,x_V)=0$ where $1\leq j\leq P$. For each $f_j$ the coefficients are real and every variable goes up to degree $D$. It is also guaranteed that every root of every $f_j$ is real.
If we want to numerically find a solution $\\{X_1,X_2,\ldots,X_V\\}$ to the set of polynomials within accuracy $\varepsilon$, what is the $\mathcal{O}(\cdot)$ computational cost?
The coefficients are real, irrational but efficiently calculable. I'm not sure how specific I can get, but they're fractions and square-roots.
For some reason I'm having trouble finding this answer; my advisor seems to think it should be very easy to find. If possible I'd really like to find a source for the information as well.
|
If the coefficients are roots of rational numbers, then they are in particular algebraic numbers. This means that you can encode the coefficients as additional polynomial constraints. So overall, you're looking at a system of Diophantine equations (i.e., polynomials with integer coefficients).
The sets defined by such equations are known as algebraic varieties (and if you have inequalities -- semialgebraic sets).
There are several ways of finding points within the solution set of such a system. One general approach is to use Cylindrical Algebraic Decomposition (CAD), but its complexity is quite bad.
The problem can be solved (to my knowledge) in single-exponential time; see e.g. [link omitted].
Note that by "solving" I mean finding the description of an algebraic point within the set of solutions. Then, approximating it to $\epsilon$ accuracy can be easily done using standard evaluation algorithms for algebraic numbers.
|
stackexchange-cstheory
|
{
"answer_score": 5,
"question_score": 2,
"tags": "cc.complexity theory, polynomials, na.numerical analysis"
}
|
Is coRE closed under concatenation?
I know that RE is closed under union, intersection, and concatenation (but not complement). It is likewise easy to show that coRE is closed under union and intersection (but not complement). What about concatenation? I don't even know which way to conjecture...
|
Yes, coRE is closed under concatenation:
Let $L_1, L_2$ be coRE, witnessed by Turing Machines $M_1,M_2$ whose domain is the complement of $L_1,L_2$ respectively.
We then build a Turing Machine $M$ whose domain is the complement of $L_1\cdot L_2$: on input $u$, $M$ will enumerate the finitely many ways to decompose $u$ into $u_1u_2$, and for each of them run $M_1$ on $u_1$ and $M_2$ on $u_2$ in parallel. If one of the machines halts, it means that the decomposition is not a witness that $u$ is in $L_1\cdot L_2$, and $M$ continues with the next decomposition. The machine $M$ will halt once all decompositions have been tried. Therefore, $M$ does not halt on $u$ if and only if $u\in L_1\cdot L_2$.
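Here is a toy Python sketch of $M$ (my own modeling choice: a "machine" is a generator that is exhausted iff it halts, so $M_i$ halts on $w$ iff $w\notin L_i$):

```python
HALT = object()

def make_corecognizer(M1, M2):
    """M1, M2: word -> generator, exhausted iff the word is outside L1/L2.
    The returned generator M(u) is exhausted iff u is outside L1.L2."""
    def M(u):
        for i in range(len(u) + 1):            # all decompositions u = u1.u2
            r1, r2 = M1(u[:i]), M2(u[i:])
            while True:                        # run both in parallel
                if next(r1, HALT) is HALT or next(r2, HALT) is HALT:
                    break                      # decomposition refuted; next one
                yield                          # M itself keeps stepping
    return M

def loop_iff(pred):                            # toy "machine" builder
    def machine(w):
        while pred(w):
            yield                              # runs forever on members of L
    return machine

M1 = loop_iff(lambda w: w == "a")              # halts iff w not in L1 = {"a"}
M2 = loop_iff(lambda w: w == "b")              # halts iff w not in L2 = {"b"}
M = make_corecognizer(M1, M2)

for _ in M("ba"):                              # drains: M halts on "ba"
    pass
print("M halted on 'ba': it is not in L1.L2")
# M("ab") would run forever, since "ab" = "a"."b" is in L1.L2.
```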
|
stackexchange-cstheory
|
{
"answer_score": 8,
"question_score": 5,
"tags": "computability, turing machines"
}
|
How to acknowledge answers of TCS in the paper?
I have been a researcher at a company for the last 5 years.
I have received a few answers on the Theoretical Computer Science Stack Exchange regarding my current research work. They are not the main part of the paper: some are preexisting results and some are direct observations (or Wikipedia results). I am confused about what to do with these answers. Do I need to approach the people who answered my questions, or should I just write an acknowledgement in the paper?
|
If the answers are peripheral to your contributions or are merely direct observations/links to Wikipedia, then acknowledgments are appropriate; no need to do more. Maybe commenting after these answers to let the authors know that they are acknowledged in your paper, together with a link to a preprint when available, would also be interesting for everyone, in order to see how these answers were useful.
Basically I think that you can do the same with TCS answers as with interactions with your colleagues at the coffee break: you judge how important a contribution it is and act accordingly. Both situations are the same in my opinion; I like to view Stack Exchange as a way to increase the number of people you can "meet at the coffee break".
|
stackexchange-cstheory
|
{
"answer_score": 10,
"question_score": 3,
"tags": "soft question"
}
|
Fixed points of fixed-point combinator?
A fixed point `f` of a fixed-point combinator would be a function that has itself as a fixed point: `f(f) = f`. The only such function I could come up with is `id`, which by definition has the _apparently_ stronger property that `id(x) = x` for _all_ `x`. Equivalently, everything is a fixed point of `id`.
My question is: is this actually a stronger property (in untyped lambda calculus), or is `id` the only function with `f(f) = f`?
|
If by "$=$" you mean $\beta$-equality, then the answer is yes, $MX=X$ for all $X$ is a stronger property than $MM=M$.
For example, let $$A := \lambda a.aa(aa)$$ (to save parentheses, I am using the standard left-associative notation for application; in your notation, the above term would be $\lambda a.a(a)(a(a))$) and take $$M := AA.$$ We clearly have $M\to MM$ and therefore $MM=M$. On the other hand, for any normal form $N$, $MN\neq N$, because $MN$ does not normalize (in fact, $M$ does not have a head normal form).
|
stackexchange-cstheory
|
{
"answer_score": 5,
"question_score": 3,
"tags": "lambda calculus, fixed points"
}
|
Communication complexity of reconstructing a random bit-string of length $n$
This seems like a folklore claim, but I cannot find any reference to it. Suppose Alice has a bit-string of length $n$ where each entry is independently set to 0 or 1 equiprobably, and Bob's goal is to reconstruct the string with success probability at least 0.9. What is the simplest way to show that the randomized communication complexity (multiple rounds are allowed) is $\Omega(n)$?
One can use an overkill proof: such a protocol would lead to a protocol for Disjointness, and we know that Disjointness requires $\Omega(n)$ bits of communication.
|
Suppose Alice always sends exactly $k$ bits to Bob during the protocol. On average, how many possible candidates for her $n$-bit string are consistent with the communication transcript? What does that tell you about the probability that Bob guesses which one of those candidates is the correct one?
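A worked version of the hint (my sketch, not part of the original answer): fix the protocol's randomness. Since Bob has no input, his messages are determined by Alice's, so a protocol in which Alice sends $k$ bits has at most $2^k$ transcripts, and Bob's guess is a function of the transcript; thus at most $2^k$ strings can ever be output. Hence $$\Pr_{x\sim\\{0,1\\}^n}[\text{Bob is correct}] \le \frac{2^k}{2^n},$$ and success probability $0.9$ forces $k \ge n - \log_2(10/9) = n - O(1)$. Averaging over the randomness (or Yao's principle) extends this to randomized protocols.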
|
stackexchange-cstheory
|
{
"answer_score": 3,
"question_score": 1,
"tags": "reference request, communication complexity"
}
|
Can I research in web technologies with an academic approach?
I'm an undergraduate computer engineering student. I know that I like to become a researcher in my major in the future. I also work as a junior web developer at a small start-up, and I think I really like web and web technologies. Can I do research in this field and are there problems about web technologies that can be solved by academic research? If so, where should I start? What should I study in order to be prepared to work on these problems? How can I find open problems about the web?
|
Being a web developer, I am sure you realize that most large-scale websites involve at least one of: databases, high-availability servers, front-end design, an algorithm of some sort, etc.
Each of these areas has an active research community. Database researchers mostly study data structures and algorithms that speed up database operations. UI/UX researchers study human-computer interaction, to find better designs and UIs for users. The algorithms community studies how to characterise various algorithmic problems and find fast algorithms for them. Distributed-systems researchers study how to build high-functioning and available servers. This is of course in no way a complete summary, but it should give you an idea.
So to answer your question, web research is a very generic field that involves and combines many different disciplines. You'll have to narrow down exactly what interests you in web development before participating in the respective research communities.
|
stackexchange-cstheory
|
{
"answer_score": 1,
"question_score": -3,
"tags": "reference request"
}
|
Lexicographic Boolean satisfiability
Maximal Satisfying Assignment (Lexicographic Boolean satisfiability / LexMaxSAT), the problem of finding the lexicographically maximum $x_1, \ldots, x_n \in \\{0, 1\\}^n$ that satisfies a Boolean formula $\varphi$, or 0 if $\varphi$ is not satisfiable, is NP-complete. But what does the certificate look like? It is not enough to have an assignment that satisfies the Boolean formula $\varphi$. How can we check that it is the largest in polynomial time?
|
This problem is not in $NP$ (unless $PH$ collapses), since it is already $P^{NP}$-hard, see e.g. [1].
[1] K.W. Wagner. More Complicated Questions about Maxima and Minima, and some Closures of NP. Theoretical Computer Science, 51(1-2):53 –80, 1987.
**Edit:** As it was pointed out in the comments, $NP$ is indeed formally defined as a collection of _decision_ problems and not search problems. So what I should have said is that this problem is not _solvable by a non-deterministic $TM$ in polynomial time_ (unless $PH$ collapses). Also, as it was pointed out by Emil in the comments, the problem as stated is indeed $FP^{NP}$-complete, where $FP$ is the set of _search_ problems solvable in polynomial time.
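For intuition, the standard search procedure with an NP oracle can be sketched as follows (Python; the brute-force `sat_oracle` stands in for the oracle, and all names are mine):

```python
from itertools import product

def sat_oracle(phi, n, prefix):
    """Is there an assignment extending `prefix` that satisfies phi?
    (Brute force here; conceptually a single NP-oracle query.)"""
    k = len(prefix)
    return any(phi(prefix + rest) for rest in product((0, 1), repeat=n - k))

def lexmax(phi, n):
    if not sat_oracle(phi, n, ()):
        return 0
    prefix = ()
    for _ in range(n):                          # n adaptive oracle queries
        if sat_oracle(phi, n, prefix + (1,)):   # prefer 1: lexicographic max
            prefix += (1,)
        else:
            prefix += (0,)
    return prefix

phi = lambda x: x[0] != x[1]   # x1 XOR x2
print(lexmax(phi, 2))          # (1, 0)
```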
|
stackexchange-cstheory
|
{
"answer_score": 8,
"question_score": 0,
"tags": "np hardness, complexity classes, sat, np complete"
}
|
A variant of two-counter machine
I would like to show that the halting problem for some variant of two counter machine (Minsky machine) is undecidable:
instead of "if c=0 goto i else goto j", there are "if c>d goto i else goto j" commands (where c,d are the two counters). The inc\dec\goto\halt commands remain the same.
It is not hard to show that this problem is undecidable with 3 counters (for example, given an instance of the original problem, adding a third counter e that always equals 0, and we have that c!=0 iff c>e, d!=0 iff d>e). My question is - is this variant known to be undecidable, with only two counters?
|
No, it is not universal, because it can be simulated by a 1-counter machine (with the jump-if-zero instruction) that stores the difference of the two counters, together with an additional state (or program section) that keeps track of whether the difference is positive or negative. But a 1-counter machine is not universal.
|
stackexchange-cstheory
|
{
"answer_score": 3,
"question_score": 2,
"tags": "computability"
}
|
Is QMA known to contain Co-NP?
Is QMA known to contain coNP? If not, would coNP being contained in QMA have any implications for other complexity classes (e.g., causing the polynomial hierarchy to collapse)?
|
This is not currently known. In my MSc. thesis, I show that, if it were true, then a consequence would be that $NP^{NP}\subseteq QMA$ (Theorem 21, below). I conjecture that in fact $coNP\subseteq QMA$ implies $PH\subseteq QMA$, but I was not able to prove that.
**Lemma 14.** $NP^{QMA\cap coQMA}\subseteq QMA$.
**Theorem 21.** If $QMA$ contains $co\text-NP$, then $NP^{NP}\subseteq QMA$
_Proof._ If $QMA$ contains $coNP$, then, equivalently, $NP\subseteq coQMA$, so we have $NP\subseteq QMA\cap coQMA$. Plugging this in, we get $NP^{NP}\subseteq NP^{QMA\cap coQMA}\subseteq QMA$ by Lemma 14. $\square$
The thesis contains other consequences, and variations and generalizations of this theorem.
|
stackexchange-cstheory
|
{
"answer_score": 3,
"question_score": 1,
"tags": "cc.complexity theory, np hardness, complexity classes, complexity, qma"
}
|
Is function composition associative in non-pure programming languages?
We know that function composition is associative in theoretical programming languages such as STλC, and pure functional programming languages such as Haskell. Is the same true for languages where functions can mutate state and have all sorts of side effects?
|
Yes, composition is still associative, but it is not _function_ composition anymore. Instead it is _morphism_ composition in a Kleisli category of a monad that captures the computational effects. The nLab page on monads in computer science describes the basic ideas and is probably a suitable starting point.
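As a concrete illustration (my sketch in Python, using the Writer monad, where the effect is an output log): Kleisli composition is associative here because the underlying monoid of logs (list concatenation) is associative.

```python
def kleisli(f, g):
    """Compose effectful functions a -> (b, log) and b -> (c, log)."""
    def h(x):
        y, log1 = f(x)
        z, log2 = g(y)
        return z, log1 + log2   # concatenate the logs
    return h

double = lambda x: (2 * x, ["doubled"])
inc    = lambda x: (x + 1, ["incremented"])
square = lambda x: (x * x, ["squared"])

left  = kleisli(kleisli(double, inc), square)
right = kleisli(double, kleisli(inc, square))
assert left(3) == right(3) == (49, ["doubled", "incremented", "squared"])
```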
|
stackexchange-cstheory
|
{
"answer_score": 2,
"question_score": -1,
"tags": "functional programming, function, imperative programming"
}
|
How does axiom K contradict univalence?
I have seen it claimed several times that axiom K is inconsistent with univalence (e.g. here and here), but I have never seen a proof sketch. Specifically, I'm curious about how this manifests in the Coq theorem prover.
Also, I thought axiom K was equivalent to UIP. Is UIP also inconsistent with univalence?
For what its worth, I am not well versed in homotopy theory. I understand the univalence axiom only in non-homotopic terms, as a map from an isomorphism on types to an equality of the same types.
Edit: Here is a Coq proof based on @L. Garde's example: [link omitted]
|
You will certainly find it natural that most types, like structures, admit _different_ isomorphisms. Just take the type $\textbf{2}$, with inhabitants $0_\textbf{2}$ and $1_\textbf{2}$. It admits 2 obviously different isomorphisms (_id_ and _swap_), and therefore, by the univalence axiom, the identity type $\textbf{2}=_\textit{U}\textbf{2}$ admits 2 different inhabitants.
This is in contradiction with UIP, and with axiom K which is a special case of it.
See Example 3.1.9, and Theorem 7.2.1 of the HoTT book.
|
stackexchange-cstheory
|
{
"answer_score": 6,
"question_score": 3,
"tags": "type theory, coq, homotopy type theory"
}
|
Status of certain problems in knot theory
I found it somewhat difficult to understand the status of certain problems from knot theory. Is it correct to say that it's been neither proved nor disproved that any of the following problems are NP-complete:
* The equivalence problem for links (and knots) as described in knot theory.
* The unknot recognition problem, i.e., to determine if an arbitrary knot is trivial (the unknot).
* Computing the link crossing number for an arbitrary link (or knot).
But it is known that the above problems are decidable and some (all?) of them are NP-hard. I thought I read something saying that the computation of the HOMFLY polynomial is NP-complete via a reduction to colouring of graphs, but can we confirm that this is the case?
|
To complete the first answer: the equivalence problem is decidable (this dates back to Haken; a good reference is Lackenby's survey Elementary Knot Theory). It is neither known to be in NP nor known to be NP-hard.
The crossing number of a knot/link is not known to be in NP (even if you give me the diagram with the fewest crossings, I would need to solve the equivalence problem to recognize my knot). We proved that it is NP-hard for links: [link omitted]. For knots this is open.
|
stackexchange-cstheory
|
{
"answer_score": 4,
"question_score": 2,
"tags": "reference request, np hardness, cg.comp geom, topology"
}
|
Cook inspiration for NP completeness
An academic descendant of Cook just lectured on NP completeness. He said that the idea came from a well-known theorem in first-order logic that talks about completeness of satisfiability for computably enumerable languages. He didn't seem to know exactly which.
Do we know what the theorem is? I bet it is not mentioned in the original paper.
**Update**
Here is Stephen Cook himself explaining.
* Completeness for recursively enumerable sets.
* Unsatisfiable predicate calculus formulas are complete for recursively enumerable problems.
* Why can't we do this for propositional formulas?
* Analog of recursively enumerable becomes NP.
* The reductions he used were Turing reductions, not Karp's.
|
There are _two_ results this could plausibly be which nicely contrast each other - they differ on whether we look at finite or infinite structures:
* **(Turing, following Gödel)** The **_validity_ problem for first-order logic on _arbitrary_ structures** in a sufficiently rich language is $\Sigma^0_1$-complete; for example, the set of (codes of) sentences true in every directed graph is $\Sigma^0_1$-complete.
* **(Trakhtenbrot)** The **_satisfiability_ problem for first-order logic on _finite_ structures** in a sufficiently rich language is $\Sigma^0_1$-complete; for example, the set of (codes of) sentences true in some finite directed graph is $\Sigma^0_1$-complete.
|
stackexchange-cstheory
|
{
"answer_score": 1,
"question_score": 7,
"tags": "cc.complexity theory, np complete, cook levin"
}
|
Regular Expressions that convert into unambiguous automata
Brüggemann-Klein and Wood (1992) proved that a certain kind of regular expressions, that they call “Deterministic Regular expressions”, when converted into automata using the Glushkov's Construction, generate a DFA. Also, all the expressions that generate a DFA via this algorithm are in this class.
Is something known about classes of regular expressions that, when given as input to some conversion algorithm to automata (Thompson, Glushkov, any algorithm that gives an NFA or $\varepsilon$-NFA in the general case), yield an unambiguous automaton (an NFA such that for every word in the language there exists only one accepting run)?
|
The paper Ambiguity in Graphs and Expressions (Book et al., 1971) discusses constructing regular expressions that preserve the ambiguity of the input NFA and vice versa.
That is, they give a definition for "ambiguity" in regular expressions (how many valid parses are there for a given word), and show how to construct an NFA that will have the same number of accepting paths for each word. Or, given an NFA, how to construct a regular expression with the same property.
It relates to your question in that the class of unambiguous regular expressions, by their definition, would produce an unambiguous NFA using their construction.
|
stackexchange-cstheory
|
{
"answer_score": 7,
"question_score": 5,
"tags": "reference request, automata theory, regular language, regular expressions"
}
|
Lower bound for the OR problem
Let us have booleans $x_1, \cdots, x_n$. Any algorithm that determines $\bigvee_{i=1}^n x_i$ with probability at least $2/3$ requires $\Omega(n)$ time. It is not too difficult to prove this, but the proof would certainly be more than a few lines. I have seen a paper that proves this, but cannot find it. Does anyone know of such a paper? I need this in my paper (I am doing a reduction from this problem) and would prefer not to repeat the argument if someone has written it up in detail already. Any result which would imply this in a few lines would be useful, too.
|
After some more searching, I managed to find a proof in these lecture notes [1]. The proof goes via Yao's principle and the lower bound is n/3. If someone knows of a published paper or a book that I may cite for this fact, I am still interested.
[1]
|
stackexchange-cstheory
|
{
"answer_score": 2,
"question_score": 6,
"tags": "reference request, time complexity, lower bounds, sample complexity"
}
|
Complexity of NFA to DFA minimization with binary threshold
What is the complexity of the following problem?
> Given an NFA $A$ and a number $k\in \mathbb{N}$ in binary encoding, does there exist a DFA $B$ with at most $k$ states such that $L(A)=L(B)$?
Specifically, is it known whether this is PSPACE-complete or EXPTIME-complete?
This is the decision version of NFA to DFA minimization, but the size bound is given in binary. It's well known that if $k$ is given in unary, the problem is PSPACE-complete (by e.g., reduction from NFA universality).
The problem is clearly in EXPTIME: we can determinize $A$ to obtain an equivalent DFA of exponential size, and then minimize it. However, this method cannot be brought down to PSPACE, since the minimal output may indeed be of exponential size.
My intuition says this should be EXPTIME-complete, but I did not see any research on that.
|
The problem is in PSPACE, hence is PSPACE-complete.
DFA minimization is in NL; see Theorem 2.1 of [S. Cho and D.T. Huynh. The Parallel Complexity of Finite-State Automata Problems. Information and Computation, 97, 1-22 (1992)].
NL is contained in polyL (deterministic polylogarithmic space).
The subset construction can be implemented by a PSPACE transducer, i.e., a Turing machine whose work tape is PSPACE-bounded. Its output (on an extra tape) will be exponential in general.
By composing the PSPACE transducer with the polyL-machine in the standard space-efficient way (involving the (re-)computation of any bit the polyL-machine requires), we (even) get a PSPACE transducer that given an NFA computes a minimal equivalent DFA.
|
stackexchange-cstheory
|
{
"answer_score": 5,
"question_score": 13,
"tags": "automata theory, minimization"
}
|
Complete problem in $\Sigma_2^p$ - $\Sigma_{2}SAT$
It is known that the following problem is complete in $\Sigma_2^p$:
$\Sigma_{2}SAT$ : Given a quantified boolean formula $\theta = \exists x_1,...,x_l\forall y_1,...,y_m\psi$, where $\psi$ is a boolean propositional formula over the variables $x_1,...,x_l,y_1,...,y_m$ , is $\theta$ valid?
Is it still complete when it is assumed that $\psi$ is in CNF?
It is mentioned in "Computational Complexity: A Modern Approach" by Sanjeev Arora and Boaz Barak that it can be assumed, but no proof is given: (Example 5.9)
|
No. Since universal quantifiers commute with conjunctions, it is easy to see that $\Sigma_2$-SAT with $\psi$ in CNF is in NP: $\exists \bar x\,\forall \bar y \bigwedge_i C_i$ is valid iff $\exists \bar x \bigwedge_i \forall \bar y\, C_i$ is, and each $\forall \bar y\, C_i$ simplifies in polynomial time (a clause containing complementary $y$-literals is always true; otherwise its $y$-literals can be dropped), leaving an ordinary SAT instance. If it's really written like this in the book, it's an error.
However, the problem is $\Sigma_2$-complete for $\psi$ a 3-DNF.
|
stackexchange-cstheory
|
{
"answer_score": 11,
"question_score": 8,
"tags": "complexity classes, polynomial hierarchy"
}
|
What is the general definition of 'extensionality' in type theory and how is extensionality defined for positive types?
It is well-known in the literature that (internal) extensionality of a function type means $(\prod_a f~a=g~a)\implies f=g$ (where $=$ is the intensional equality type) and extensionality of a product type means $\sum_{p:a.1=b.1}\text{transport}~p~(a.2)=b.2 \implies a=b$, but how is extensionality of positive types defined?
I can guess that for $a, b: X+Y$ two inhabitants of a sum-type, we might want to say that "if either ($a=inl(a')$, $b=inl(b')$ and $a'=b'$) or ($a=inr(a')$, $b=inr(b')$ and $a'=b'$), then $a=b$", but it looks impossible, right? Because we do not have an operation for deciding whether $a=inl(a')$ or not, given that $a$ is open.
|
Extensionality is basically the reversibility of the introduction rule. Negative types have reversible introduction rules, while positive types have reversible elimination rules. So you are looking in the wrong direction.
The nlab entry for sum types mentions polarity at the very end.
|
stackexchange-cstheory
|
{
"answer_score": 6,
"question_score": 6,
"tags": "type theory, extensionality"
}
|
How can we compute the VC dimension of a finite class of sets?
Let $F$ be a class of subsets of a finite set $X$ of cardinality $n$. What is the complexity of computing the VC dimension of $F$? Can we do better than looping through every subset of $X$ and checking if $F$ shatters it?
|
In 1996 Papadimitriou and Yannakakis noted that there exists an $n^{O(\log n)}$ brute-force algorithm (where $n$ is the size of the input) for computing VC-dimension of a 0-1 matrix by checking all the subsets of size up to the trivial bound, the logarithm of the number of hypotheses.
Manurangsi and Rubinstein later showed this bound basically cannot be improved assuming the Exponential Time Hypothesis. So, there is a brute-force quasi-polynomial time approach, but we don't expect to be able to improve it to get a polynomial-time algorithm.
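A brute-force Python sketch of this approach (my own illustration; `F` is a finite list of subsets of the ground set `X`):

```python
from itertools import combinations
from math import log2, floor

def vc_dimension(F, X):
    F = [frozenset(S) for S in F]
    # if F shatters a set of size d then |F| >= 2^d, so d <= log2(|F|)
    limit = floor(log2(len(F))) if F else 0
    dim = 0
    for d in range(1, limit + 1):
        for T in combinations(X, d):
            traces = {frozenset(S & set(T)) for S in F}
            if len(traces) == 2 ** d:      # F shatters T
                dim = d
                break
        else:
            return dim   # no shattered set of size d, so none larger either
    return dim

X = range(4)
F = [set(), {0}, {1}, {0, 1}, {2}]
print(vc_dimension(F, X))   # 2: the pair {0, 1} is shattered
```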
|
stackexchange-cstheory
|
{
"answer_score": 7,
"question_score": 6,
"tags": "ds.algorithms, co.combinatorics, vc dimension"
}
|
Does abundance of max cliques make it easy to solve COLORABILITY?
Let $q\geq 3$. We know that $q$-COLORABILITY is an NP-complete problem.
Suppose that $G$ is a graph such that **each vertex of $G$ is part of a $q$-clique** (i.e. $K_q$). Since we may assume that $G$ does not contain $K_{q+1}$, the condition is the same as saying that $G$ has a clique cover $S$ comprised of maximum cliques in $G$.
Does this condition make it easy to solve $q$-COLORABILITY of $G$? If not, would the following extra condition make it possible: $|C\cap D|\leq 2$ for every two distinct members $C,D\in S$ ?
_Remark_: there are known results on $q$-COLORABILITY when $G$ has a clique **edge** cover $S$ such that (i) $|C\cap D|\leq 1$ for every two distinct members $C,D\in S$, and (ii) each vertex of $G$ belongs to at most 2 members of $S$. For instance, see Walter Klotz, Clique Covers and Coloring Problems of Graphs.
|
As written, the problem is NP-complete even when you require the elements of $S$ to be pairwise disjoint (which also implies that every vertex belongs to a unique element of $S$): as a reduction from $q$-COLOURABILITY, we can attach to each vertex $q-1$ new vertices to form a $q$-clique.
This does not contradict the results from the cited paper by Klotz, because in that paper, “clique cover” means “clique edge cover” rather than “clique vertex cover”.
If you restate the question with clique edge covers, i.e., requiring that each edge of $G$ is included in a $q$-clique, the problem is still NP-complete: similar to the above, we can attach to each edge $q-2$ new vertices to form a $q$-clique. This will make $|C\cap D|\le1$ for each distinct $C,D\in S$. (It will violate condition (ii), as every original vertex belongs to as many elements of $S$ as was its degree in the original graph.)
|
stackexchange-cstheory
|
{
"answer_score": 3,
"question_score": 2,
"tags": "graph theory, graph colouring"
}
|
Reference request for linear algebra over GF(2)
I have been looking for materials on linear algebra over $GF(2)$, but so far I haven't found any substantial textbooks or notes on this subject. In fact, in one of the notes I found, the introduction states that,
> Normally, we would cite a series of useful textbooks with background information but amazingly there is no text for finite field linear algebra. We do not know why this is the case.
In particular I would like to learn about elementary notions such as linear independence, orthogonality, linear equations, rank and kernels, etc. I would especially like to understand the implications of these properties when considering the same problem over $\mathbb{R}$. (E.g., does linear independence over $GF(2)$ imply linear independence over $\mathbb{R}$? Does orthogonality over $GF(2)$ imply linear independence over $\mathbb{R}$?)
What are some good notes, textbooks or other sources to learn about this subject?
|
Strangely, linear algebra specific to finite fields is best studied in textbooks on the theory of error-correcting codes (for example, MacWilliams and Sloane).
Pretty much all familiar notions in linear algebra extend to finite fields and GF(2). The notable exceptions are: 1) Orthogonal space may have a nontrivial intersection with the original space. That can cause significant confusion. Over GF(2), it is even possible to have a linear space that is its own orthogonal space. 2) There is no effective counterpart to spectral decomposition over finite fields. (speaking of which, does anyone know about existing efforts to address this "shortcoming"?)
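Regarding one of the questions in parentheses above: $GF(2)$-independence of 0-1 vectors does imply independence over $\mathbb{R}$ (clear denominators in a rational dependence, divide by the gcd, and reduce mod 2), but the converse fails. A small sketch (my example, using numpy):

```python
import numpy as np

# rows sum to (2, 2, 2) = 0 mod 2: dependent over GF(2), independent over R
M = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])

print(np.linalg.matrix_rank(M))   # 3: full rank over the reals

def rank_gf2(A):
    A = A.copy() % 2
    rank = 0
    for col in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]        # swap pivot row up
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] = (A[r] + A[rank]) % 2        # eliminate mod 2
        rank += 1
    return rank

print(rank_gf2(M))                # 2
```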
|
stackexchange-cstheory
|
{
"answer_score": 5,
"question_score": 6,
"tags": "reference request, linear algebra, coding theory, finite fields"
}
|
Full names of C. K. Chow and C. N. Liu
Where can I find the full names of C. K. Chow and C. N. Liu, of the Chow-Liu tree fame?
|
From what I could find, the names are
* Chao-Kong Chow
* Chao-Ning Liu
I found the second from this IEEE note, which links Chao-Ning Liu to IBM Thomas J. Watson (which he joined in 1957). Then, a search for patents gave the name of C.K. Chow.
C.K. Chow's name is also stated in this paper by O'Donnell and Servedio on The Chow Parameters Problem (see abstract).
|
stackexchange-cstheory
|
{
"answer_score": 11,
"question_score": 5,
"tags": "reference request"
}
|
Which universities in the U.S. are doing research in type theory?
The question is meant to be broad in that recommendations with mentions of the particular areas within type theory research are greatly appreciated. Also, the research need not be conducted in computer science departments. Thanks in advance.
|
Any such list is always subjective, but the best approach to answer this question is:
1. Look at journals/conferences in the area you're interested in. For type theory, I'd look at LICS, LMCS, POPL, ICFP, and JFP.
2. Find papers that seem interesting, or are in the area you're interested in.
3. Look at the schools the authors are from.
There's another important underlying point here, which is that _for graduate studies, the school matters much less than the supervisor_. So you're far better off looking for the research you're interested in and then trying to work with the people conducting it, than choosing a school first and then trying to find a supervisor and area.
|
stackexchange-cstheory
|
{
"answer_score": 14,
"question_score": 7,
"tags": "soft question, type theory"
}
|
Algorithm for finding traffic equilibrium
I watched a YouTube video about a certain interesting property of springs and road networks. It made me think: if we represent a network of roads as a graph where edges are roads described by a throughput and latency, and vertices correspond to road junctions, what would an efficient algorithm for determining the equilibrium of traffic (such that no car can take a faster route) look like? And as a bonus question: how would that algorithm change for different types of agents (say we try to find the equilibrium for superrational agents)?
|
The problem you are interested in is called the Traffic equilibrium problem.
The paper "Traffic Equilibrium and Variational Inequalities" by Stella Dafermos formalizes it, shows that there is a unique equilibrium, and gives an algorithm for computing it.
Note that this works for a particular formalization; for example, it assumes "a fixed travel demand [..] for every origin-destination pair". One could want to model more complex things, like time-dependent travel demand, or arbitrarily complex relations between road congestion and throughput/latency. If you have something specific in mind, I'd recommend asking another question.
|
stackexchange-cstheory
|
{
"answer_score": 0,
"question_score": -1,
"tags": "ds.algorithms, graph theory, gt.game theory"
}
|
Alternative to LBA for recognising context-sensitive languages
I've always felt that there's no "canonical" automata for recognising context-sensitive languages. Much like there's DFA for regular, PDA for context-free and Turing machines for RE.
I'm aware of LBA, but that's a finite restriction of Turing machines. In my view, it doesn't really stand on its own.
I once read a paper which gave a very interesting alternative, but I can't find it anymore. A link to that paper would be great, but I'd appreciate something more substantive too.
|
Here is an alternative model:
Benedek Nagy: Left-most derivation and shadow-pushdown automata for context-sensitive languages, ICCOMP'06: Proceedings of the 10th WSEAS international conference on Computers, pp. 1015-1020.
|
stackexchange-cstheory
|
{
"answer_score": 6,
"question_score": 5,
"tags": "automata theory, grammars, space bounded"
}
|
Deterministic communication complexity of refinement
A partition of $[n]$ is a collection $\mathcal{P}$ of non-empty subsets of $[n]$ such that for each $i \in [n]$ there is a unique $P \in \mathcal{P}$ with $i \in P$. For partitions $\mathcal{P}, \mathcal{Q}$ we say that $\mathcal{P}$ _refines_ $\mathcal{Q}$, denoted $\mathcal{P} \sqsubseteq \mathcal{Q}$, if for every $P \in \mathcal{P}$ there is some $Q \in \mathcal{Q}$ such that $P \subseteq Q$. Define the two-party _refinement problem_ by: $$ REF_n(\mathcal{P}, \mathcal{Q}) = \begin{cases} 1 & \text{ if } \mathcal{P} \sqsubseteq \mathcal{Q}, \\\ 0 & \text{ otherwise. } \end{cases} $$ I studied the randomised communication complexity and showed that $\Omega(n) \le R^{cc}_{1/3}(REF_n) \le O(n \log n)$.
Is the deterministic communication complexity known for this problem?
|
The deterministic communication complexity of the problem is $\Theta(n\log{n})$: it is sufficient to show the existence of a family $S$ of partitions such that $|S|= 2^{\Omega(n\log{n})}$ and that for any $P_1,P_2 \in S$, $P_1$ refines $P_2$ iff $P_1 = P_2$, as this is a fooling set that implies a bound of $\Omega(n\log{n})$. Let $S$ be the set of partitions that partition $[n]$ into pairs (sets of size 2). It is easy to see that $|S| \geq (n/2)! = 2^{\Omega(n\log{n})}$, and clearly no pair-partition is a refinement of another pair-partition. On the other hand, the problem can trivially be solved in $O(n\log{n})$ by having Alice send her entire input to Bob.
|
stackexchange-cstheory
|
{
"answer_score": 4,
"question_score": 0,
"tags": "randomized algorithms, communication complexity"
}
|
Is a grid graph a vertex-minor of a complete graph?
Consider a graph $G$. A graph $H$ is the vertex-minor of the graph $G$ if $H$ can be obtained from $G$ using vertex deletions and local complementations. For more information, look at Definition 2.1 and 2.2 here.
Now, let $G$ be a complete graph with $n^{2}$ vertices and let $H$ be a $k \times k$ grid graph, with $k < n$.
For some choice of $k$, is $H$ a vertex-minor of $G$?
|
Vertex-minors of complete graphs are either complete graphs, star graphs, or edgeless graphs, so this does not hold for $k \ge 2$.
Proof that vertex-minors of complete graphs are complete, star, or edgeless: From a complete graph, vertex deletion gives a complete graph and local complementation gives a star graph. From a star graph, deletion of the central vertex gives an edgeless graph, local complementation of the central vertex gives a complete graph, deletion of an outer vertex gives a star graph, and local complementation of an outer vertex gives again the same star graph. From an edgeless graph, both vertex deletion and local complementation give again an edgeless graph.
|
stackexchange-cstheory
|
{
"answer_score": 3,
"question_score": 0,
"tags": "ds.algorithms, graph theory, graph algorithms, planar graphs"
}
|
FOCS virtual fee $600
I'm not sure this is on topic here, but probably can be best answered by this community, so I'm posting it as a soft-question.
Due to the pandemic, FOCS 2021 will be a virtual conference. Most conferences, when moving online, reduced the registration fee to some two-digit number. At FOCS, however, the cheapest registration fee for grown-ups still starts at $600, with no reduction from the offline figures. Is there any justification for this?
|
This isn't an answer, but felt it was good to post here as a sort of announcement. Also, it makes the question at least a little bit moot for many people. [Making it CW so I don't get any points from people upvoting just b/c they're happy about the result :).]
While I think one author is still needed to register at the higher fee, **attendee fees have been significantly reduced!** They are now \$150 for non-members, \$125 for society members, \$100 for students, and \$70 for lifetime members. <
|
stackexchange-cstheory
|
{
"answer_score": 6,
"question_score": 23,
"tags": "soft question, conferences"
}
|
Randomized algorithms not based on Schwartz-Zippel
Are there any problems that are known to be in a randomized complexity class (e.g. RNC, ZPP, RP, BPP, or even PP), but not in any lower non-randomized class (e.g. NC, P, NP), and whose membership in the randomized class is **not** based on the Schwartz-Zippel lemma?
If not, is there some fundamental barrier that prevents us from developing new tools? (apart from the obvious fact that we don't know whether randomization helps)
|
Here is a natural problem known to be in $\mathsf{BPP}$ but not known to be in $\mathsf{RP} \cup \mathsf{coRP}$, Problem 2.6 of [1]: Given a prime $p$, integers $N$ and $d$, and a list $A$ of invertible $d \times d$ matrices over $\mathbb{F}_{p}$, does the group generated by $A$ have a quotient of order $\geq N$ with no abelian normal subgroups? In [1] it is shown that this problem is in $\mathsf{BPP}$.
[1] L. Babai, R. Beals, A. Seress. Polynomial-time theory of matrix groups. STOC 2009.
|
stackexchange-cstheory
|
{
"answer_score": 8,
"question_score": 11,
"tags": "cc.complexity theory, randomized algorithms, derandomization"
}
|
Name for words without squared symbols
Is there a common name in combinatorics for words that do not contain a square of size 1? That is, words in which no symbol appears twice in a row or, more formally, words _not_ in $\bigcup_{s\in\Sigma} \Sigma^* s s \Sigma^*$ where $\Sigma$ is our alphabet.
|
I would call them _stutter-free_ words, since there is the notion of _stutter-invariant language_ , which is already well-known.
|
stackexchange-cstheory
|
{
"answer_score": 8,
"question_score": 5,
"tags": "reference request, fl.formal languages, combinatorics"
}
|
Separating 2-SAT from Clique
Since the P vs. NP problem is still an open problem, 2-SAT and Clique might both be in P if P = NP. Is there any known complexity measure whatsoever that is already mathematically proven to distinguish 2-SAT from Clique?
|
2-SAT is NL-complete so separating 2-SAT from Clique would separate NP from NL, also a major open problem.
|
stackexchange-cstheory
|
{
"answer_score": 8,
"question_score": 2,
"tags": "cc.complexity theory, clique, 2sat"
}
|
The complexity of 3SAT
It is well known that 3SAT remains NP-complete if every variable occurs exactly twice positively, exactly once negated.
Then, does 3SAT remain NP-complete if every variable occurs exactly once positively and exactly once negated?
|
Satisfiability of CNFs where each variable occurs at most twice is easily seen to be computable in P. Repeat in any order the following steps, each of which preserves satisfiability (and each of which shrinks the formula, so the process terminates):
* Remove clauses containing both a literal and its negation.
* If some variable occurs only positively or only negatively, remove the corresponding clauses (i.e., set the occurring literal to true).
* Pick any variable that occurs both positively and negatively, and resolve the two clauses where it occurs (i.e., remove $C\cup\\{x\\}$ and $D\cup\\{\neg x\\}$, and replace them with $C\cup D$).
Once none of the steps is applicable, no variable can occur in the CNF any longer, which means that either the CNF is empty (whence true, i.e., satisfiable), or it consists of the empty clause (whence it is false, i.e., unsatisfiable).
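For concreteness, here is a minimal sketch of this procedure in Python (my own illustration, not part of the original answer). Clauses are sets of nonzero integers, with -x denoting the negation of variable x, and the input is assumed to be a CNF in which every variable occurs at most twice.

def sat_at_most_two_occurrences(cnf):
    clauses = [frozenset(c) for c in cnf]
    while True:
        # Step 1: drop tautological clauses (containing both x and -x).
        clauses = [c for c in clauses if not any(-l in c for l in c)]
        literals = set(l for c in clauses for l in c)
        # Step 2: a pure literal can be set true, satisfying its clauses.
        pure = next((l for l in literals if -l not in literals), None)
        if pure is not None:
            clauses = [c for c in clauses if pure not in c]
            continue
        # Step 3: resolve the unique positive and negative occurrence.
        lit = next((l for l in literals if l > 0), None)
        if lit is None:
            break  # no variables left
        pos = next(c for c in clauses if lit in c)
        neg = next(c for c in clauses if -lit in c)
        clauses = [c for c in clauses if c is not pos and c is not neg]
        clauses.append(frozenset((pos - {lit}) | (neg - {-lit})))
    # No variables remain: satisfiable iff no (empty) clause is left.
    return len(clauses) == 0

For example, sat_at_most_two_occurrences([[1, 2], [-1], [-2, 3]]) returns True.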
|
stackexchange-cstheory
|
{
"answer_score": 8,
"question_score": 2,
"tags": "np hardness, sat"
}
|
Intuitive way to handle variable binding
Suppose we have an algebraic datatype parameterised by a type variable `name`, e.g.
data Prog name = Var name
| App (Prog name) (Prog name)
| Abs name (Prog name)
deriving (Show, Eq)
What is the most straightforward and intuitive way to handle bindings and substitutions? Specifically, I am hoping for something that only relies on the type parameter (`name`) so that the underlying algebraic datatype doesn't need to be altered (unlike, for example, De Bruijn index). Thanks.
|
Your suggestion does not quite work, but **polymorphic higher-order abstract syntax** does:
data Prog name = Var name
| App (Prog name) (Prog name)
| Abs (name -> Prog name)
See the paper Parametric Higher-Order Abstract Syntax for Mechanized Semantics by Adam Chlipala.
|
stackexchange-cstheory
|
{
"answer_score": 2,
"question_score": 2,
"tags": "pl.programming languages, lambda calculus, functional programming"
}
|
Order notation quirk
> Is it true that $$O(n) = \bigcap \\{ O(g) \mid g \in \omega(n) \\}?$$
This appears to be a straightforward question about sets of functions, but on closer examination leads to some murky waters. I would be interested either in a construction of a counterexample function which doesn't require a choice principle independent of ZF set theory, or a proof which avoids invoking such a principle.
|
The identity is provable in ZF (or even in $\mathrm{RCA}_0^*$). The $\subseteq$ inclusion is trivial. For the $\supseteq$ inclusion, let $f\notin O(n)$. Define an integer sequence $\\{n_k:k\in\mathbb N\\}$ by $$n_k=\min\\{n:|f(n)|\ge k^2(n+1)\\}.$$ Note that $n_k$ is non-decreasing, $n_0=0$, and $\lim_kn_k=\infty$, thus $\mathbb N$ is the disjoint union of the intervals $[n_k,n_{k+1})$, and we can define a function $g$ by $$g(n)=kn,\qquad n_k\le n<n_{k+1}.$$ Then $g\in\omega(n)$, but $f\notin O(g)$, as $|f(n_k)|\ge kg(n_k)$ for all $k$.
|
stackexchange-cstheory
|
{
"answer_score": 7,
"question_score": 7,
"tags": "cc.complexity theory, set theory"
}
|
$AC^0$[subexp] vs. NC
My question is about the possibility of trading size for depth in circuits.
Under what conditions is it true (or, plausible) that $AC^0[2^{n^\delta}] \subseteq NC^i$ for some constants $\delta < 1, i>0$?
Or, is there anything known at all?
I can see that $NC^1$ circuits of depth $\epsilon \log n$ (for $\epsilon<1$) are contained in $AC^0[2^{n^\epsilon}]$. My question goes in the other direction.
|
No, $\mathrm{AC}^0[2^{n^\delta}]$ is not included in NC; it is not even included in $\mathrm{SIZE}[2^{n^\epsilon}]$ for $\epsilon<\delta$. Indeed, any Boolean function on $n^\delta$ inputs, padded to input size $n$ with dummy variables, has depth-2 circuits of size $2^{n^\delta}$, but the vast majority of such functions require circuit size $\Omega(2^{n^\delta}/n^\delta)$.
By the way, the opposite inclusion can be improved: not only does $\mathrm{AC}^0[2^{n^\epsilon}]$ contain $\mathrm{NC}^1$ without the $\epsilon\log n$ depth restriction, we have, in fact, $$\mathrm{NL/poly}\subseteq\bigcap_{\epsilon>0}\mathrm{AC}^0[2^{n^\epsilon}].$$ This is a form of Nepomnjaščij’s theorem.
|
stackexchange-cstheory
|
{
"answer_score": 10,
"question_score": 5,
"tags": "circuit complexity, dc.parallel comp, circuit depth, bounded depth"
}
|
Question about BPP complexity class
Good morning everyone, I just started studying the BPP complexity class and the amplification lemma. There is one exercise about BPP that I don't understand, I hope that you can help me.
Let $L$ be a language over a finite alphabet and $M$ a $PPT$ (Probabilistic Turing Machine) such that:
* $w \in L \rightarrow P(M\ accepts\ w) \ge b$
* $w \notin L \rightarrow P(M\ accepts\ w) \le a$
where $0<a<b<1$.
I have to prove that the language $L$ is in $BPP$. This should be a fairly easy exercise, but I've tried several approaches and none was successful. To "match" the definition of a language in $BPP$, I should have $b$ greater than $\frac{1}{2}$ and $a$ less than $\frac{1}{2}$, but the text of the exercise does not say so; we only know that $0<a<b<1$.
Thank you in advance
|
This feels like a typical university exam and here is not the best place to answer this. EDIT: Here is a detailed answer.
Let us consider $M_n$ that, given $w$, runs $M$ $n$ times and returns $1$ when the number of accepting runs exceeds $n\times (b+a)/2$.
Let us consider $w\in L$. Because $P(M \text{ accepts } w)\geq b > (b+a)/2$, we know from the law of large numbers (quantitatively, from a Chernoff bound) that $P(M_n \text{ accepts } w)$ tends towards 1, and therefore there exists $n_b$ such that for all $n\geq n_b$, $P(M_n \text{ accepts } w)>2/3$; by the Chernoff bound, $n_b$ can be chosen depending only on the gap $b-a$, uniformly in $w$.
In the same manner, for $w\not\in L$, we can find $n_a$ such that for all $n\geq n_a$, $P(M_n \text{ accepts } w)<1/3$. Now we see that $L$ is in $BPP$ by considering the machine $M_{\max(n_a,n_b)}$.
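As an illustration, here is a small sketch of the amplified machine (mine; run_M is a hypothetical name standing for one probabilistic run of $M$ on the input):

def amplify(run_M, w, a, b, n):
    # Run M independently n times; accept iff the number of
    # accepting runs exceeds the threshold n * (b + a) / 2.
    accepts = sum(run_M(w) for _ in range(n))
    return accepts > n * (a + b) / 2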
|
stackexchange-cstheory
|
{
"answer_score": 0,
"question_score": 0,
"tags": "cc.complexity theory, complexity classes, probabilistic computation, probabilistic complexity"
}
|
Maximum flow with parity requirement on certain edges
Consider the maximum integral flow problem on a directed graph $G=(V,E)$ with integral capacities $c:E\to \mathbb{N}$. We have an additional constraint that for the set of edges in $F\subseteq E$, the flow value has to be even. Such flow is called $F$-even max-flow.
Is finding the maximum $F$-even max-flow NP-hard?
**The gap between $F$-even max-flow and max-flow**
Consider two edges $(s,v)$ and $(v,t)$, where edge $(s,v)$ has capacity $2$ and $F=\\{(s,v)\\}$, and $(v,t)$ has capacity 1. The max flow is 1 and the $F$-even max-flow is 0. One can use this to construct larger examples.
The difference between the $F$-even max-flow and the max-flow is bounded by $|E|$. Setting $c'(e) = \lfloor c(e)/2 \rfloor$, we can compute a maximum flow with respect to $c'$. Doubling it yields an $E$-even flow with respect to $c$, whose value differs from the max-flow with respect to $c$ by at most $|E|$.
Maybe one can show it is bounded by $|F|$.
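As a quick illustration of the halving argument two paragraphs above, here is a sketch (assuming the networkx library; it only computes the lower bound, not the exact $F$-even max-flow): the returned flow is even on every edge, hence on $F$, and its value is within $|E|$ of the ordinary max-flow.

import networkx as nx

def even_flow_lower_bound(G, s, t):
    # Halve all capacities, solve ordinary max-flow, then double:
    # every edge then carries an even amount of flow.
    H = nx.DiGraph()
    for u, v, data in G.edges(data=True):
        H.add_edge(u, v, capacity=data["capacity"] // 2)
    value, flow = nx.maximum_flow(H, s, t)
    doubled = {u: {v: 2 * f for v, f in fs.items()} for u, fs in flow.items()}
    return 2 * value, doubled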
|
We can construct a widget for an all-or-nothing flow of capacity 4 from vertex s to t using the widget below; the stars (*) indicate even flows. By recursively applying similar widgets one can emulate all-or-nothing flows of any small size, so we can reduce the all-or-nothing flow problem, which is known to be NP-hard, to this one (<
Let $k_{st}(G)$ denote the $st$-connectivity of $G$. That is, $k_{st}(G)$ is the size of any minimum $st$-separator of $G$.
(*) It can be shown that if a vertex $v$ belongs to any minimum $st$-separator of $G$, then $k_{st}(G-v)=k_{st}(G)-1$.
How do you prove the following: Let $S$ be a minimal $st$-separator of $G$ that is **not** a minimum $st$-separator of $G$. Prove that $S$ contains at least one vertex $v\in S$ such that $k_{st}(G-v)=k_{st}(G)$.
It seems very intuitive that this would hold, especially since (*) can be shown. However, I have yet to find a formal proof...
|
It does not hold, as can be seen from the red separator in this example.
Let $S = \\{ (x_1,y_1),\dots, (x_N,y_N) \\}$ be a training set, $F$ a set of data-generating functions, and $h : X \to Y$ a classifier. $L(f(x),y)$ is the $1$/$0$-loss function. Then I want to show that $$\frac{1}{|F|}\sum_{f \in F} E [L(f(q),h(q))] = \frac{1}{2}$$ where $q$ is a test point such that $x_i \neq q$ for all $i$.
* * *
**My attempt**
$$E[L(f(q),h(q))] = E[I_{f(q) \neq h(q) }(q)] = P(\\{f(q) \neq h(q)\\})$$ where $I_{f(q) \neq h(q) }(q)$ is an indicator function. How should I evaluate $P(\\{f(q) \neq h(q)\\})$?
My intuition says that $P(\\{f(q)\neq h(q)\\}) = \frac{1}{2}$, but I can't come up with a formal argument for that. Any help would be appreciated.
|
Let $\Omega$ be a finite set and $F=\\{0,1\\}^\Omega$ be the (finite) collection of all Boolean functions on $\Omega$. We claim that for any $h\in F$ and any distribution $P$ on $\Omega$, the following holds: $$ (*)=\frac{1}{|F|}\sum_{f\in F}\mathbb{E}_{x\sim P}1[h(x)\neq f(x)]=\frac12. $$ Indeed, we can treat $\frac{1}{|F|}\sum_{f\in F}$ as taking an expectation over $f\sim Q$, where $Q$ is the uniform distribution over $F$. By symmetry and Fubini's theorem (i.e., exchanging the order of the 2 expectations), this is equivalent to labeling each point $x$ independently with a Bernoulli$(1/2)$ label $B(x)$. Thus, $$ (*)=\mathbb{E}_{x\sim P}1[h(x)\neq B(x)]=\frac12. $$
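A quick numerical sanity check of $(*)$ (my own, in Python): fix a small domain and a classifier $h$, enumerate all Boolean functions, and average the disagreement under the uniform distribution.

from itertools import product

omega = range(3)                     # a tiny domain
h = lambda x: x % 2                  # an arbitrary fixed classifier
fs = list(product([0, 1], repeat=len(omega)))   # all Boolean functions on omega
avg = sum(sum(f[x] != h(x) for x in omega) / len(omega) for f in fs) / len(fs)
print(avg)                           # prints 0.5, as the claim predicts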
|
stackexchange-cstheory
|
{
"answer_score": 0,
"question_score": 1,
"tags": "machine learning"
}
|
VC-dimension of the infinite intersection of two spheres
I'm searching for an upper-bound for the VC-dimension of the infinite intersection of two spheres. Thanks
|
Once the OP has clarified that the question is about the VC-dimension of the 2-fold intersection of spheres in $\mathbb{R}^d$ (in fact, $d=2$ was specified), a simple upper bound can be stated. The VC-dim of spheres in $\mathbb{R}^d$ is the same as that of half-spaces (via an easy scaling argument) --- namely, $d+1$. Lemma 3.2.3 of Blumer et al. (1989) bounds the VC-dim of the $k$-fold union/intersection of a class of VC-dim $d$ by $$ 2kd\log_2(3k). $$ Thus, with $k=2$ and $d=3$, $2\cdot2\cdot3\log_2(6)\approx 31$ is an upper bound on the VC-dimension in the OP.
A. Blumer, A. Ehrenfeucht, D. Haussler, and M. K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. J. Assoc. Comput. Mach., 36(4):929–965, 1989. ISSN 0004-5411.
|
stackexchange-cstheory
|
{
"answer_score": 1,
"question_score": 0,
"tags": "machine learning, vc dimension"
}
|
Efficient transformation into CNF preserving entailment
Suppose you have two propositional formulas $\varphi$ and $\psi$, **not necessarily in CNF**. I want to convert them to 3CNF efficiently (hence introducing auxiliary variables) in such a way that $\varphi \models \psi$ if and only if $\varphi' \models \psi'$, where the latter are the transformed formulas, written in 3CNF.
The usual Tseitin encoding for Boolean formulas preserves satisfiability, but it does not preserve entailment, so it doesn't work. Is there any other known notion of translation that preserves this?
**Edit**. I need $\varphi'$ to depend exclusively on $\varphi$.
|
This is impossible in polynomial time unless P = NP. Such a transformation would give a reduction of the coNP-complete validity problem $\\{\psi:\top\models\psi\\}$ to the polynomial-time decidable problem $\top'\models\psi'$, where $\top'$ is a constant-size formula, and $\psi'$ is a CNF.
The best you can do is to reduce $\varphi\models\psi$ to $\varphi'\models\psi''$ where $\varphi'$ is a 3CNF (using the Tseitin transform) and $\psi''$ is a 3DNF (using the dual Tseitin transform, making sure that the extension variables introduced in $\varphi'$ and $\psi''$ are disjoint). You can’t make $\psi''$ a CNF by the argument above, and dually, you can’t make $\varphi'$ a DNF (if $\psi''$ depends only on $\psi$, not on $\varphi$).
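For reference, here is a minimal sketch (mine) of the satisfiability-preserving Tseitin transform mentioned above. Formulas are strings (variables) or tuples ('not', f), ('and', f, g), ('or', f, g), and the output is a CNF over integer literals with clauses of size at most 3:

from itertools import count

def tseitin(formula):
    ids, counter, clauses = {}, count(1), []

    def walk(f):
        if isinstance(f, str):
            if f not in ids:
                ids[f] = next(counter)
            return ids[f]
        if f[0] == 'not':
            return -walk(f[1])
        a, b = walk(f[1]), walk(f[2])
        v = next(counter)              # fresh extension variable
        if f[0] == 'and':              # clauses for v <-> (a and b)
            clauses.extend([[-v, a], [-v, b], [v, -a, -b]])
        else:                          # clauses for v <-> (a or b)
            clauses.extend([[-v, a, b], [v, -a], [v, -b]])
        return v

    clauses.append([walk(formula)])    # assert the root
    return clauses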
|
stackexchange-cstheory
|
{
"answer_score": 4,
"question_score": 3,
"tags": "lo.logic, polynomial time, boolean formulas"
}
|
Restriction of a convex function to {0, 1}^n
Suppose I have a real-valued _convex_ function $f$ on the unit hypercube $[0,1]^n$, and let $\bar{f}$ be its restriction to the integer points $\\{0,1\\}^n$. Does $\bar{f}$ satisfy any properties, or can any function on $\\{0,1\\}^n$ be obtained as a restriction of a convex function?
|
Any real valued function $g$ defined on $\\{0,1\\}^n$ can be extended to a convex function over $[0,1]^n$ (it is called the convex closure). See Dughmi's nice survey. The implication for your question is that indeed $\bar{f}$ will not have any specific properties.
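For small $n$, the convex closure can even be evaluated directly by linear programming over distributions on the vertices. Here is a sketch of mine (assuming scipy; it enumerates all $2^n$ vertices, so it is for illustration only):

from itertools import product
import numpy as np
from scipy.optimize import linprog

def convex_closure(g, x):
    # min E[g(V)] over distributions on {0,1}^n with E[V] = x.
    verts = list(product([0, 1], repeat=len(x)))
    c = [g(v) for v in verts]
    A_eq = np.vstack([np.array(verts, dtype=float).T,  # E[V] = x
                      np.ones(len(verts))])            # probabilities sum to 1
    res = linprog(c, A_eq=A_eq, b_eq=list(x) + [1.0], bounds=(0, 1))
    return res.fun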
|
stackexchange-cstheory
|
{
"answer_score": 6,
"question_score": 4,
"tags": "convex optimization, convex geometry, submodularity, convex hull"
}
|
Parameterized complexity of Hitting Set with slightly bigger parameter
The Hitting Set problem, when parameterized by the size $k$ of the hitting set, is **W** [2]-hard. Is it also **W** [2]-hard when parameterized by $k$ plus the number of subsets in the instance?
I explain in a bit more detail. A Hitting Set instance consists of a universe $U = \\{ u_1, \dots, u_n\\}$ and a set $S = \\{ S_1, \dots, S_m\\} \subseteq \mathcal{P}(U)$ together with a natural number $k$. A hitting set is a set $H \subseteq U$ of size $k$ such that for each $i \in [m]$, $H \cap S_i \neq \emptyset$. We know that Hitting Set parameterized by $k$ is **W** [2]-complete. Is it still **W** [2]-hard when parameterized by $k + |S|$?
|
This is FPT: by interchanging sets with elements in the usual way, this is just Set Cover with the universe size $n$ as the parameter, which is known to be FPT (see e.g. the parameterized algorithms book, Chapter 6).
|
stackexchange-cstheory
|
{
"answer_score": 2,
"question_score": 0,
"tags": "cc.complexity theory, reductions, parameterized complexity"
}
|
Vidick's proof of parallel DI-QKD
This question is based on the paper- <
As far as I understand, for this proof Vidick uses a quantum parallel repetition theorem for 3 players (Alice, Bob, and Eve), but the results in the anchored games paper and Lemma 4 of the paper are only for 2 players. Am I missing something?
|
It seems that the proof indeed relies on a 3 player parallel repetition result for anchored games. A proof sketch for multiplayer anchored games has been given in Sec 5.3 of the anchored games paper v1.
There seems to be a typo in Lemma 4 of the parallel DIQKD paper -- it should have been $\tau_{\eta, t}^\ast (G_\eta) \leq e^{-\Omega(\delta^9 n)}$ in the lemma.
|
stackexchange-cstheory
|
{
"answer_score": 0,
"question_score": 0,
"tags": "quantum computing, quantum information, cryptography"
}
|
Does double majoring with math in undergrad help one grasp TCS topics more easily?
I'm a CS major. However, a lot of TCS topics seem to be in the realm of pure math. Should I add a math major to complement understanding and for a future career in TCS?
|
Three benefits of math classes:
* Knowledge of particular mathematical topics that are useful in TCS. This is a bit specific to the circumstance, but of course it helps! Probability, combinatorics, algebra, sometimes analysis, number theory, logic, ....
* Mathematical maturity, general comfort with proofs and mathematical reasoning. Very important.
* Showing your qualifications on grad school applications.
But I think the specific classes you take and skills you gain are more important than whether you officially major in math or not.
|
stackexchange-cstheory
|
{
"answer_score": 2,
"question_score": 4,
"tags": "soft question"
}
|
Complexity of the Complete (3,2) SAT problem?
A complete $k$-CNF formula is a $k$-CNF formula which contains all clauses of size $k$ or lower that it implies.
Deciding the satisfiability of a complete $k$-CNF formula is clearly a tractable problem, since a complete $k$-CNF formula is satisfiable as long as it does not contain the empty clause. What happens when it is mixed with a 2-CNF formula?
Let us define the Complete (3,2) SAT problem: Given $F_3$, a complete 3-CNF formula, and $F_2$, a (complete) 2-CNF formula ($F_3$ and $F_2$ are defined on the same variables), is $F_3 \wedge F_2$ satisfiable?
What is the complexity of this problem ?
(The question is different as in the post Complexity of the (3,2)s SAT problem? where it concerned non complete formulas.)
|
Consider the standard reduction from 3-coloring to SAT: for each vertex $v \in V$ we introduce three variables, $v_R,v_G,v_B$, add a clause $(v_R \vee v_G \vee v_B)$, and clauses $(\lnot v_R \vee \lnot v_G)$, $(\lnot v_R \vee \lnot v_B)$, and $(\lnot v_G \vee \lnot v_B)$. Then for each edge $(u,v) \in E$ we add clauses $(\lnot v_R \vee \lnot u_R)$, $(\lnot v_G \vee \lnot u_G)$, and $(\lnot v_B \vee \lnot u_B)$.
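For concreteness, here is a small sketch of this construction in Python (the literal representation is ad hoc: a literal is a pair of a (vertex, color) variable and a polarity):

def coloring_to_sat(vertices, edges):
    clauses = []
    for v in vertices:
        clauses.append([((v, c), True) for c in "RGB"])      # some color
        clauses += [[((v, c1), False), ((v, c2), False)]     # at most one color
                    for c1 in "RGB" for c2 in "RGB" if c1 < c2]
    for u, v in edges:                                       # endpoints differ
        clauses += [[((u, c), False), ((v, c), False)] for c in "RGB"]
    return clauses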
Observe that the clauses of size 3 in the resulting CNF are disjoint, and therefore the CNF consisting of them is complete. Therefore deciding the satisfiability of $F_3 \wedge F_2$, where $F_3$ is a complete 3-CNF formula and $F_2$ is a 2-CNF formula is an NP-complete problem.
|
stackexchange-cstheory
|
{
"answer_score": 2,
"question_score": 2,
"tags": "cc.complexity theory, np hardness, complexity classes, sat"
}
|
CFG - How can I describe a language that dictates a word and its opposite?
I have this question from my Automata class and I am unsure if there's a way to do this. Assume u,v ∈ {0,1}* and that, at every position, the character of v is the opposite of the character of u at the same position.
### Example :
if u is 0011 , v is 1100
if u is 0011011010 , v is 1100100101
I was first going with L = { uv | u,v ∈ {0,1}* and v=(u')} but I am not sure if this describes what I am saying.
Please help me understand this. Thank you!
|
The term used to describe such a concept is "string complement". Your language can be defined as:
$L=\\{ uv | u, v \in \\{0,1\\}^* \wedge v=u' \\}$
Or simply:
$L=\\{ uu' | u \in \\{0,1\\}^* \\}$
|
stackexchange-cstheory
|
{
"answer_score": 1,
"question_score": 1,
"tags": "context free"
}
|
Are there survey papers in theoretical computer science?
Are there conferences or journals where we can publish surveys/literature-review papers related to theoretical computer science problems? If so, please provide a list of such conferences and journals.
I know there are many options in applied areas of computer science, but I have not seen this trend in theoretical computer science.
I work in computational algebra and haven't seen any survey papers so far.
|
Yes!
These survey series come to mind:
Foundations and Trends in TCS (many authors put a free version on their web page)
Theory of Computing Graduate Surveys
SIGACT News Complexity Column (and also sometimes other technical columns etc in SIGACT News)
Bulletin EATCS regularly has surveys and tutorials
To your more specific question, can you be even more specific? "Computational algebra" is a pretty big field. I recall seeing surveys on computational algebraic geometry, computational real algebraic geometry, computational group theory (several links at that page).
|
stackexchange-cstheory
|
{
"answer_score": 10,
"question_score": 9,
"tags": "reference request, survey"
}
|
What are the applications of Scott topology in theoretical computer science?
In the course of my work I came across the Scott topology, and I see that Scott-continuous functions show up in the study of models for lambda calculi. What I cannot understand is how this enriches the lambda calculus as we know it.
I'm searching for papers that give some applications of the Scott topology in the computability field, as I have not found anything related.
Hoping for help from this great community
|
Scott-continuity emerged when Dana Scott built the first model of untyped λ-calculus, while trying to prove that no such model can exist (since any such model $D$ needs to be, simplifying a bit, isomorphic to the function space $D \rightarrow D$, which is not possible set-theoretically, but turns out to be possible when you restrict your attention to computable functions).
Scott-continuity can be understood as a mathematically well-behaved approximation to computability.
[1] is a gentle introduction to the general area of order theory that Scott continuity emerged out of, and [2] is a reference article. [3] has a bit on domain-theory and Scott-continuity and might be the easiest introduction for computer scientists.
* * *
1. B. A. Davey, H. A. Priestley, _Introduction to Lattices and Order_.
2. S. Abramsky, A. Jung, _Domain theory_.
3. G. Winskel, _The Formal Semantics of Programming Languages: An Introduction._
|
stackexchange-cstheory
|
{
"answer_score": 3,
"question_score": 7,
"tags": "computability, lambda calculus, topology"
}
|
Question about "Free-ness" of Free SCWF
In _Categories with Families_ by Castellan et al., they introduce the concept of a free scwf as the correspondent of STLC with a base type. Seemingly, they define the free B-scwf as a synonym of the initial B-scwf.
My question is, since this is a "free" scwf, I thought there should be free and forgetful functors between the category of B-SCWF and something. Where can I find (the citation of) such adjoint functors?
$ is the Scwf defined in Proposition 4.
This is an instance of a general equivalence between left adjoints and collections of initial objects in certain slice categories, see the nlab for details.
|
stackexchange-cstheory
|
{
"answer_score": 3,
"question_score": 2,
"tags": "type theory, lambda calculus, ct.category theory, typed lambda calculus"
}
|
What is a "Covering Function"?
In Idris2, I will sometimes get an error telling me that a function "is not covering", which is apparently distinct from it not being total (and I do understand what a total function is). I have not been able to find a reference to a "covering function" anywhere on Wikipedia, Wolfram, or any Stack Exchange (except here). When I _do_ find references to it (in some Idris question), it is not explained what it actually means -- apparently everyone there just knows.
What I _do_ find is something called a "covering **space** ", which is related to a "covering map", but apparently a covering map is always continuous, so that can't be what Idris is talking about, since no function from ℕ to anything (for example) can be continuous.
|
I finally found an answer here.
@MaxNew was right; it's just a part of totality. A function definition is not covering if there is a possible input which has not been handled in the pattern matching.
|
stackexchange-cstheory
|
{
"answer_score": 1,
"question_score": 0,
"tags": "functional programming, function, definitions"
}
|
What is the computational power of the Calculus of Constructions?
The calculus of constructions (CoC) without `fix` is clearly not Turing complete, as the program that loops infinitely cannot be expressed in it. What I'm wondering: Is there a decidable problem that cannot be computed in the CoC?
I found Statman's theorem, which says that the simply typed lambda calculus is limited computationally (I think? I find it hard to understand), but I'm pretty sure the CoC should be more powerful than the simply typed lambda calculus. Are there any theoretical results on that?
|
There is no total language where all total computable $\mathbb{N} \to \mathbb{N}$ functions are definable. In a total language, an interpreter for the same language is not definable because if it were then general recursion could be recovered by diagonalization; see theorem 3.2 here. This is analogous to Gödel's second incompleteness theorem.
The range of definable total functions in a theory is called _proof-theoretic strength_. The calculus of constructions has the same strength as System F$\omega$. That is in turn at least as strong as System F, and probably strictly stronger (see cody's answer). System F has the same strength as second-order arithmetic. You can read about this in chapter 15 of Girard's Proofs and Types. Second-order arithmetic is in turn weaker than ZFC, but it's strong enough to be beyond the reach of ordinal analysis, which tries to measure proof-theoretic strength by ordinal notations.
EDIT: taking cody's remarks into account.
|
stackexchange-cstheory
|
{
"answer_score": 7,
"question_score": 2,
"tags": "type theory, lambda calculus, calculus of constructions"
}
|
Can a Turing machine quickly move to any position of a large string?
I hope this question is not too basic and I am not missing something dumb. But suppose we simulated a Turing machine on a long string $s$, where $|s| = 10^{100}$ for example. Then if we wanted to learn the value of $s_i$, the $i$th value in the string, could we do this in say time polynomial in the length of the string?
The issue I am having is differentiating between the theoretical construction of the Turing machine vs. real computers, which can for example index arrays in constant time due to their structure in memory. Could a TM obtain $s_i$, the $i$th value in the array, in time polynomial in $|s|$, regardless of the chosen value of $i$? Or would the head have to "slide over to $i$" with some cap on its speed, so it could not do this task efficiently?
|
This isn't a research-level question, so it's probably better on the normal Computer Science Stack Exchange.
But, to answer the thrust of your question: no, they cannot jump to the middle, and it can be viewed as a limitation of the model. You can use this fact to prove $\Omega(n^2)$ lower bounds on the language of palindromes for a single-tape TM, for example. To avoid this, there are various random access Turing machine models used in the literature.
|
stackexchange-cstheory
|
{
"answer_score": 0,
"question_score": -2,
"tags": "turing machines, search problem"
}
|
What's the state of research on automated theorem proving?
I'm interested in writing my undergraduate thesis on automated theorem proving, and I've been looking for some material to document myself on the topic.
I was introduced to automated and assisted theorem proving by reading a few books that describe the idea to not-necessarily-technical readers, but they were written between the 1970s and the 80s, and most technical books I am finding on the topic are from that same period. That's not to say that old books are not good (most math books I own are reprints of books from that very period); I'm just wondering whether or not the topic has been of any interest to researchers in the last few years.
If it hasn't, why do you think this is the case? And if it has, what do you think would be a good starting point for me to dive into it?
|
I would suggest you have a look at modern implementations of open-source theorem-proving frameworks, such as Lean and Coq. From there you can look into their bibliographies to find relevant manuscripts.
|
stackexchange-cstheory
|
{
"answer_score": 1,
"question_score": 6,
"tags": "automated theorem proving"
}
|
How can arbitrary combinational logic be done with just addition and multiplication?
Once when I was reading about SPDZ (a multi-party computation protocol), and once when I was reading about homomorphic encryption, it was taken for granted that since both addition and multiplication could be done, any other function could be derived from that (I am assuming this only includes combinational logic).
From this explanation of fully homomorphic encryption
> The combination of addition and multiplication allows arbitrary functions to be computed on encrypted data.
Other than the "product of sums", I've never heard of boolean operations being interchangeable with arithmetic operations, and I can't think of how how to derive boolean operations from arithmetic.
How can this be done? Can I optimize an algorithm by multiplying integers directly but also use boolean logic?
|
If you can do addition modulo 2 and multiplication modulo 2, you can implement XOR and AND gates, which are a universal basis for circuits, so you can implement any circuit.
As @holf points out, if you don't have modular arithmetic and are working with integers, alternatively you can implement NAND gates as follows: if you ensure that all inputs are 0 or 1, then $f(x,y) = 1-xy$ acts as a NAND gate and ensures that its output is 0 or 1. Since NAND gates are universal for circuits, you can implement any circuit in this way.
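As a toy illustration (a sketch of my own), with inputs restricted to 0/1:

def AND(x, y): return x * y                 # multiplication
def XOR(x, y): return (x + y) % 2           # addition mod 2
def NOT(x):    return 1 - x
def NAND(x, y): return 1 - x * y            # works over the plain integers too
def OR(x, y):  return NOT(AND(NOT(x), NOT(y)))   # via De Morgan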
|
stackexchange-cstheory
|
{
"answer_score": 3,
"question_score": -1,
"tags": "homomorphic encryption"
}
|
Quadratic lower bound
Consider three arrays $A,B,C$ of size $N$ consisting of integers. I want to verify the following constraint: for any two indices $0 \leq i,j < N$, $A[i] < A[j] \land B[i] < B[j] \implies C[i] < C[j]$. The trivial algorithm here does the verification in $O(N^2)$ time. Is there a sub-quadratic algorithm for this problem? Any insight on proving a super linear lower bound or a (near) linear time algorithm would be greatly appreciated.
|
One can also find an $O(n \log n)$ time algorithm in Jon Bentley, "Multidimensional Divide and Conquer", Communications of the ACM, April 1980.
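For what it's worth, here is a different route to the same $O(n \log n)$ bound (a sketch of mine, not the divide-and-conquer of the cited paper): sort by $A$ and sweep with a Fenwick tree over $B$-ranks that stores the maximum $C$ seen so far, so a violating pair ($A[i] < A[j]$, $B[i] < B[j]$, $C[i] \geq C[j]$) is detected by one prefix-maximum query per point.

def verify(A, B, C):
    n = len(A)
    rank = {b: r + 1 for r, b in enumerate(sorted(set(B)))}  # 1-based ranks
    m, NEG = len(rank), float("-inf")
    tree = [NEG] * (m + 1)

    def update(i, val):                 # Fenwick prefix-max update
        while i <= m:
            tree[i] = max(tree[i], val)
            i += i & -i

    def query(i):                       # max C over B-ranks 1..i
        best = NEG
        while i > 0:
            best = max(best, tree[i])
            i -= i & -i
        return best

    order = sorted(range(n), key=lambda i: A[i])
    k = 0
    while k < n:
        # group equal A-values so that only strictly smaller A is queried
        group = [order[k]]
        while k + 1 < n and A[order[k + 1]] == A[group[0]]:
            k += 1
            group.append(order[k])
        for j in group:
            if query(rank[B[j]] - 1) >= C[j]:
                return False
        for j in group:
            update(rank[B[j]], C[j])
        k += 1
    return True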
|
stackexchange-cstheory
|
{
"answer_score": 7,
"question_score": 12,
"tags": "ds.algorithms, lower bounds"
}
|
Containment: deterministic versus with probability one
As I was browsing the Complexity Zoo, I came across this statement:
> Relative to a random oracle, PH is strictly contained in PSPACE with probability 1 [Cai86].
What confused me was the addition of "with probability 1". What does that mean and why is the current formulation different from
> Relative to a random oracle, PH is strictly contained in PSPACE [Cai86].
A related question: is there a difference between saying something is deterministic or some process succeeds with probability one?
|
The latter would imply that the statement holds for _every_ random oracle; the former statement only asserts it is true for "most" random oracles, with some vanishingly small fraction that don't satisfy the claim.
~~For example: "a randomly chosen integer is non-zero with probability 1" is true, because the odds of picking 0 from all infinitely many integers is 0. But "a randomly chosen integer is non-zero" is false, because we could pick 0.~~
Edit: as pointed out in the comments, a correct example distribution would instead be a uniform distribution over $[0,1]$. A number selected from this distribution is non-zero with probability 1, but not all numbers in this distribution are non-zero.
|
stackexchange-cstheory
|
{
"answer_score": 4,
"question_score": 2,
"tags": "cc.complexity theory"
}
|
A Coq question : How to prove the image of the two same valued variables under a function are same?
I want to prove the following Coq theorem. However, I couldn't proceed. Please give me advice if possible. Thank you.
* * *
Require Import QArith.
Variable f : Q -> Q.
Theorem function (x y : Q) : x == y -> f x == f y.
Proof.
* * *
|
You can't do that.
You can actually define a function which doesn't respect Q's setoid structure.
Require Import QArith.
Goal exists (f : Q -> Q) (x y : Q), x == y /\ ~(f x == f y).
Proof.
exists (fun q => Qmake (Qnum q) 1).
exists (Qmake 2 1), (Qmake 4 2).
split.
- reflexivity.
- discriminate.
Qed.
You have to prove the well-definedness of each function. For example, the well-definedness of Qplus and Qle is provided in QArith as
Instance Qplus_comp : Proper (Qeq==>Qeq==>Qeq) Qplus.
Instance Qle_comp : Proper (Qeq==>Qeq==>iff) Qle.
By defining them as instances of `Proper`, you can use Generalized rewriting with those functions.
|
stackexchange-cstheory
|
{
"answer_score": 1,
"question_score": 0,
"tags": "coq"
}
|
Would the following be an acceptable part of an algorithm if used for prime factorization
Suppose I have some super fancy algorithm for prime factorization. I want to demonstrate its potential on a difficult case, like an RSA-sized number composed of two primes, $\space n=p_1p_2$. As far as I know, such two-prime semiprimes are considered the most difficult case. I want to demonstrate that it performs in a good runtime. Would it be considered cheating to hard-code into the algorithm an expression that, immediately after finding $p_1$, checks whether $n$ contains a $p_2$ such that $p_2= \frac{n}{p_1}$ and terminates if so?
Would this be okay for demonstration purposes? Would it fly in an RSA challenge? Is a provision for such difficult cases a faux pas in algorithm design?
|
It's not cheating. The last step of an algorithm can certainly be: compute $n/p_1$ and check whether that is an integer and is prime. That's an allowable step in an algorithm and can be computed efficiently.
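In code, that last step is one line of arithmetic plus a primality test. A sketch (assuming sympy for the primality check):

from sympy import isprime

def finish(n, p1):
    # Given one prime factor p1 of n, recover and verify the cofactor.
    if n % p1 == 0 and isprime(n // p1):
        return p1, n // p1
    return None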
RSA challenges allow you to do whatever you want to obtain a factorization, as long as you can implement it and it finishes running and gives you a result.
|
stackexchange-cstheory
|
{
"answer_score": 2,
"question_score": 1,
"tags": "ds.algorithms, factoring, primes"
}
|
What if NP = coNP?
Are there any major implications of NP = coNP (if true) the way there would be if P=NP? I'm thinking of real-world implications analogous to the encryption-pocalypse (excuse the drama) that would happen if P=NP.
|
This got a bit too long for a comment; I might edit it to provide a more coherent answer at a later point. There is this answer to "Is it possible to construct an encryption scheme for which breaking is NP-complete but there nearly always exists an efficient breaking algorithm" on crypto stackexchange. It argues that we want hard problems for encryption to be in both NP and coNP. So I would say we might have some hope of designing actually good cryptosystems from NP-complete problems, instead of having to design them from problems in that intersection. See here for what we know about the possibility of complete problems for the intersection.
|
stackexchange-cstheory
|
{
"answer_score": 2,
"question_score": 1,
"tags": "cc.complexity theory, np, p vs np"
}
|
Is the Church-Turing thesis a theorem? Conjecture? Axiom?
One thing I was never clear on when taking Computational Complexity in college is whether the Church-Turing "thesis" is (or can be) proven.
Is it..
* **A theorem?** If so, where's the proof?
* **A conjecture?** If so, why isn't considered one of the great open problems? This seems even more important than P=NP
* **An axiom?** If so, does that mean we can study mathematical systems where the thesis is _not_ true?
* * *
The wikipedia page calls it a "conjecture", but then goes on to say
> it cannot be formally proven, as the concept of effective calculability is only informally defined.
A statement which makes no sense to me. If we have a proof that the "thesis" is undecidable in some system, wouldn't that make it an axiom?
|
The Church-Turing thesis is not a theorem, conjecture, or axiom. For it to be one of these, it would need to be a mathematical statement that has the potential to have a rigorous proof. It does not.
The Church-Turing thesis is, in one common formulation:
> every effectively calculable function can be computed by a Turing machine.
The problem is that "effectively calculable" does not have a rigorous mathematical definition. You can give it one, and then you have a theorem, such as the following:
> every general recursive function can be computed by a Turing machine,
or
> every $\lambda$-definable function can be computed by a Turing machine,
but this doesn't show that there aren't other ways of effectively calculating functions that cannot be computed by a Turing machine.
The above two theorems, by the way, are what led to the proposal of the Church-Turing thesis.
|
stackexchange-cstheory
|
{
"answer_score": 7,
"question_score": 0,
"tags": "cc.complexity theory"
}
|
NP-hard problem which is easy on average
I have a feeling I read somewhere that the Hamiltonian circuit problem is NP-hard but easy on average, i.e., easy for a random instance. However, I cannot find a reference for that, nor an algorithm.
Are there NP-hard (NP-complete) problems which are easy on average?
|
This 1989 article by Dyer and Frieze answers the question directly:
The Solution of Some Random NP-hard Problems in Polynomial Expected Time.
|
stackexchange-cstheory
|
{
"answer_score": 4,
"question_score": 4,
"tags": "np hardness, average case complexity"
}
|
Are there research papers related to algorithmic fairness in theoretical computer science?
I have seen several articles related to algorithmic fairness in machine learning and AI, but I am not able to find research papers on algorithmic fairness in theoretical computer science. Kindly suggest some research articles, and also comment on the future of algorithmic fairness in theoretical computer science.
|
You could refer to the curated sessions/talks on Trustworthy ML
|
stackexchange-cstheory
|
{
"answer_score": 2,
"question_score": 2,
"tags": "reference request"
}
|
Can one find any solution to this matrix problem in polynomial time?
I am given an M * N (M > 1, N > 1) matrix with all the numbers hidden, but their row and column sums are visible.
For example, I am given this 3 * 3 matrix.

row[x] -= take
col[y] -= take
m[x][y] = take
Indeed, the invariant $\sum_x \text{row}[x]= \sum_y \text{col}[y]$ is preserved after each iteration of the loop, and when the inner loop finishes for some $x$ we can see that row$[x]=0$, which means (by the invariant) that when the outer loop finishes we have $\sum_y \text{col}[y]=0$.
In the case of negative numbers we can get back to non-negative numbers by virtually adding some constant $c$ to all matrix cells (i.e., adding $cN$ to every row sum and $cM$ to every column sum).
Note that if $\sum_x \text{row}[x] \neq \sum_y \text{col}[y]$ then the problem is unsolvable.
|
stackexchange-cstheory
|
{
"answer_score": 1,
"question_score": 0,
"tags": "matrices, np, polynomial time"
}
|
Defining normalization with respect to judgmental equality instead of reduction
In type theory with a type $\mathbb{N}$ of natural numbers (or some other base type such as booleans) and judgmental equality instead of reductions, canonicity is a meta-theoretical statement claiming that a closed term of type $\mathbb{N}$ is judgmentally equal to a unique numeral (and hopefully the metatheory is constructive or proves the existence of untyped lambda calculus expression that computes such a numeral).
Most presentations of strong normalization on the other hand require picking a directed beta reduction relation, not just an undirected judgmental equality, such as the $\rhd$ in Definition 2.1.2 in An Extended Calculus of Constructions by Luo.
Is there an equivalent characterization of strong normalization that can be stated without picking a notion of reduction for CoC in a similar manner to canonicity, i.e. can it be stated purely in terms of seemingly undirected judgmental equality?
|
You could define a predicate $N(t)$ whose intuitive meaning is “term $t$ is in normal form”, and prove a theorem stating that for every closed term $t$ there is precisely one term $t'$ such that $N(t')$ and $t \equiv t'$. This way you capture the notion of "normal form". You cannot really capture the "strong" in "strong normalization" because that specifically refers to sequences of reductions.
Of course, we could just appeal to the axiom of choice and pick one term from each equivalence class, and declare it to be “normal”. The real work begins once we start asking precisely _how_ the predicate $N$ is given. In lucky cases, one can specify $N(t)$ by a simple syntactic criterion, such as “$t$ does not contain any redexes”.
|
stackexchange-cstheory
|
{
"answer_score": 5,
"question_score": 1,
"tags": "type theory, calculus of constructions, normalization"
}
|
Has any computational complexity question been solved by the injury priority method, other than Post's problem?
As we know, many questions about Turing degrees have been settled by the priority method with injury. Has any computational complexity question been solved by the injury priority method, apart from Post's problem and Turing-degree questions?
I'm curious how the same or similar methods can be used to solve the parallel questions up and down the computational hierarchy.
BTW, has any question in computer science been solved by forcing, apart from the continuum hypothesis?
|
The priority method gets used a lot in computability theory - see some of the later chapters of Soare's book on computability.
Buhrman and Torenvliet use a resource-bounded priority method to build an oracle $A$ such that $NEXP^A \subseteq P^{NP^A}$.
Forcing is used in complexity theory in the construction of generic oracles. See, for example, Fenner-Fortnow-Kurtz-Li, "An oracle builder's toolkit". Generic oracles get used a lot.
|
stackexchange-cstheory
|
{
"answer_score": 4,
"question_score": 2,
"tags": "cc.complexity theory, set theory, recursion"
}
|
Number of stable matchings
In the stable marriage problem, is it possible to find an instance with $2^{n-1}$ stable matchings when $n$ is a power of 2 (or just even)? If yes, how? I know how to build an instance in which $2^{n/2}$ stable matchings can be obtained, but was wondering if the aforementioned number of stable matchings ($2^{n-1}$) can be obtained too.
|
Yes. Thurber showed [1,Theorem 5] that for all $n\geq 1$, the maximum number of stable matchings is at least $\frac{(2.28)^n}{(1+\sqrt{3})^{1+\log_2 n}}$.
If I'm not mistaken this is strictly greater than $2^n$ for all $n\geq 52$ (and of course asymptotically it's an exponential factor more).
[1] _Thurber, Edward G._ , **Concerning the maximum number of stable matchings in the stable marriage problem**, Discrete Math. 248, No. 1-3, 195-219 (2002). ZBL0997.05002.
|
stackexchange-cstheory
|
{
"answer_score": 5,
"question_score": 1,
"tags": "co.combinatorics, matching, bipartite graphs"
}
|
Dual of cut of embedded graph disconnects surface
Let $G$ be a graph that embedded on a surface of genus $g$, moreover the embedding is triangulated. Let $C$ be a collection of edges that forms a minimal edge cut for $G$. Let $C^*$ consist of the dual edges for edges in $C$. $C^*$ consists of vertex disjoint cycles in the dual $G^*$. How do we prove that cutting along the cycles in $C^*$ disconnects the surface? This holds for planar graphs by the Jordan curve theorem.
|
I assume that you require that all faces of $G$ are topological disks. After cutting along $C^*$, each face is a topological disk bounded by either a cycle of $G$, or a cycle that consists of two arcs (one on $G$ and one on $C^*$) with common endpoints (at the intersection between edges of $C$ and their duals), where the arc on $C^*$ lies on the boundary of the cut surface. In particular, the boundary of any face intersects $G$ in a single component.
Now, assume for a contradiction that the surface after cutting is still connected, and consider an edge $(u,v)\in C$. Then there exists a path $\pi$ on the cut surface connecting $u$ and $v$. By the above property of faces, we can snap $\pi$ to a path on $G-C$, showing that $u$ and $v$ lie in the same component of $G-C$. Therefore, $C-(u,v)$ is still an edge cut for $G$, contradicting minimality of $C$ and hence connectedness of the cut surface.
|
stackexchange-cstheory
|
{
"answer_score": 4,
"question_score": 2,
"tags": "reference request, graph theory, topological graph theory"
}
|
Check whether DFA accepts majority of words less than a cutoff with another DFA
### Question
Let $M$ be some DFA that reads integers in base $k$. Does there always exist some other DFA $M'$ that also reads integers in base $k$, where $M'(x)$ accepts if and only if $M$ accepts the majority of words less than $x$?
### Background and motivation
According to the cited paper, the above process works if we replace "majority" with "modulo $p$" for some constant $p$; that is, we can build a second DFA $M'$ where $M'(x)$ accepts if and only if $M$ accepts 0 mod $p$ words less than $x$. However, this is still in some sense a very "regular" task, in the sense that DFAs can calculate the number of 1s in a string mod $p$, so I am curious to see whether it extends to a simple problem that is not "regular" in the same way.
_Lecomte, P. B. A.; Rigo, M._ , **Numeration systems on a regular language**, Theory Comput. Syst. 34, No. 1, 27-44 (2001). ZBL0969.68095.
|
No. Consider the language $L$ of numbers whose binary representation starts with 10, except for the powers of 2. So the first few numbers in $L$ are 101, 1001, 1010, 1011, 10001, 10010, 10011, 10100, 10101, 10110, 10111, 100001, etc. It is easy to see that up to $2^n$, $L$ contains about $2^{n-1}-n$ numbers. So somewhere around $2^n+n$ the shift happens from no-majority to majority, and around $2^n-n$ from majority to no-majority. Anyhow, as $n$ can look like anything in binary, it is easy to show that the pumping lemma does not hold for the majority language $L'$.
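One can check these shifts empirically with a small script of my own: track the majority status of $L$ below $x$ and record where it flips.

def in_L(m):
    # starts with "10" in binary and is not a power of 2
    return bin(m)[2:].startswith("10") and (m & (m - 1)) != 0

flips, count, prev = [], 0, False
for x in range(1, 2 ** 13):
    maj = count > x / 2          # count == |L ∩ [0, x)|
    if maj != prev:
        flips.append((x, maj))
        prev = maj
    count += in_L(x)
print(flips)                     # the flips cluster around the powers of 2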
|
stackexchange-cstheory
|
{
"answer_score": 2,
"question_score": 11,
"tags": "automata theory, regular language"
}
|
On lattice and code isomorphism
We know deciding isomorphism between lattices or codes is difficult if the presentation is through arbitrary bases. What if the presentation of the lattice is through minimum bases? Likewise the corresponding problem for codes?
|
The reduction from graph isomorphism to linear code isomorphism (Petrank and Roth '97) has the property that the vectors used in the reduction are precisely the lowest-weight vectors, having weight 5, while all other vectors have weight at least 6. So even when given by minimum-weight vectors, Linear Code Equivalence is still GI-hard.
|
stackexchange-cstheory
|
{
"answer_score": 4,
"question_score": 1,
"tags": "graph isomorphism"
}
|
Polynomial-time solvable on series-parallel graphs but NP-hard on graphs of bounded treewidth
Is there a problem that meets the following conditions: it is polynomial-time solvable on series-parallel graphs but NP-hard on graphs of bounded treewidth?
|
The quadratic traveling salesperson problem takes as input a graph and a cost for each pair of edges, and asks for a Hamiltonian cycle minimizing the sum of costs of its pairs of edges (not just adjacent pairs). It is NP-hard on Halin graphs (so its decision problem is NP-complete on all graphs of treewidth 3); see:
Brad Woods, Abraham Punnen, and Tamon Stephen (2017), "A linear time algorithm for the 3-neighbour Travelling Salesman Problem on a Halin graph and extensions", _Discrete Optimization_ 26: 163–182, doi:10.1016/j.disopt.2017.08.005
However it is trivially solvable in polynomial time on series-parallel graphs because they are only Hamiltonian if they are biconnected outerplanar and in that case the Hamiltonian cycle is unique (it is the outer face of the outerplanar embedding).
|
stackexchange-cstheory
|
{
"answer_score": 8,
"question_score": 2,
"tags": "np hardness, treewidth"
}
|
Graph coloring with limit on number of times a color is used
Are there any results on coloring a graph using a limited number of each color? In other words, the decision problem would be: given a list of colors $C = (c_1, \dots, c_k)$, where each color $c_i$ is associated with a bound $b_i$ on the number of times it can be used (there can be at most $b_i$ nodes colored with color $c_i$), can you color the vertices of a graph so that no two adjacent vertices receive the same color? This decision problem is NP-hard when there are no constraints on the number of times a color is used, so it is also hard in this setting.
But has this problem been studied in graphs where chromatic number is easy such as interval graphs or perfect graphs? It is not clear to me that this problem is easy on graph classes where finding the chromatic number is polynomial time.
|
This WALCOM 2022 paper by Bandopadhyay _et al_. introduces the variant of Coloring (that they refer to as "Budgeted Graph Coloring") that you are looking for!
Here is a summary of their results:
* It is NP-Hard even for 3 colors when restricted to bipartite graphs.
* It can be solved in polynomial time when restricted to cluster graphs.
* It is NP-hard when restricted to split graphs (which subsumes the family of both interval graphs and perfect graphs) and co-cluster graphs.
They also provide some FPT results when the problem is parameterized by some common structural parameters such as vertex cover, clique modulator size, (cluster vertex deletion size, number of colors), and (cluster vertex deletion size, number of clusters).
|
stackexchange-cstheory
|
{
"answer_score": 3,
"question_score": 4,
"tags": "ds.algorithms, graph colouring"
}
|
Looking for information on Information Theory applied to image pixelation
I'm in seventh grade and am doing a science project about how age and gender affect people's ability to recognize pixelated images. For background research I have been reading about information theory, and the general topic seems to be related to mine (the entropy of more pixelated images is lower than that of less pixelated ones), but all the material I can find really talks about communication channels and bandwidths.
I was wondering if you guys can offer some pointers to articles that deal with this connection: how I can measure the entropy change, and what effect other information theory research has on this issue.
I'm not asking you to do my homework for me, just looking for some papers and articles I can use to read up and educate myself, or some other pointers on how to pursue this topic.
Thanks so much in advance.
|
Good luck on your project! Unfortunately I don't think information theory or theoretical computer science are going to be very useful or relevant to your project topic. Instead, your topic seems more to do with the characteristics of humans and human perception (e.g., biology and cognitive science) rather than the mathematical properties of information.
|
stackexchange-cstheory
|
{
"answer_score": 2,
"question_score": 0,
"tags": "it.information theory"
}
|
Number of permutations that satisfy a given set of comparisons
We are given a set of comparisons of the form `z[i] < z[j]` for various `i` and `j` and an unknown permutation `z` of length `n`.
We can assume those are transitively closed, or compute the closure relatively quickly by Floyd-Warshall.
Is there an efficient algorithm to determine the number of permutations compatible with the known comparisons? We can of course backtrack our way to the answer, but this would be quite slow.
The constraint we have looks like a forest of DAGs. It seems that by carefully counting the ways in which it can be collapsed into a line, we might get to the answer more directly.
|
As far as I can tell, your problem is equivalent to the following: given a partial order (represented by its comparability pairs, which forms a DAG), count how many linear extensions it has.
This problem is known to be #P-hard, see Brightwell and Winkler, "Counting Linear Extensions", Order, 1991. It is #P-hard even in quite restricted cases, e.g., <
|
stackexchange-cstheory
|
{
"answer_score": 12,
"question_score": 6,
"tags": "co.combinatorics, sorting, permutations, topological sorting"
}
|
Strongly polynomial time algorithm for shortest convex combination
Problem: Let $S$ be a finite set of vectors. Let $C$ be their convex hull. Compute $\operatorname{argmin}_{x \in C} \|x\|$.
Reference 1 gives an algorithm for this problem that is finite-time (Section 2). However, its worst-case time complexity is not given.
Reference 2 gives another algorithm for this problem, but it is not finite-time (Theorem 2).
Is there a strongly polynomial time algorithm for this problem?
References:
1. _Algorithm for a least-distance programming problem_. Philip Wolfe. Mathematical Programming Studies. Volume 1, March 1974, Pages 190-205.
2. _An algorithm for finding the shortest element of a polyhedral set with application to Lagrangian duality_. Mokhtar S Bazaraa, Jamie J Goode, Ronald L Rardin. Journal of Mathematical Analysis and Applications. Volume 65, Issue 2, September 1978, Pages 278-288.
|
It is known via a paper of De Loera, Haddock and Rademacher that a strongly polynomial time algorithm for finding a minimum norm point in a simplex implies a strongly polynomial time algorithm for general LP. We do not know whether LP has a strongly poly time algorithm - this is a fundamental open problem in optimization.
|
stackexchange-cstheory
|
{
"answer_score": 6,
"question_score": 3,
"tags": "time complexity, linear algebra, convex optimization, convex geometry, convex hull"
}
|
Communication complexity of correctly recovering 99% of a random bit string
Suppose Alice has a bit string of length $n$ where $n/2$ bits are chosen uniformly at random to be 1's; and the rest are 0's. Alice sends a message to Bob.
If Bob needs to reconstruct the bit string, then $\Omega(n)$ communication is needed.
However, suppose Bob only needs to reconstruct at least $0.99n$ bits correctly (without knowing which bits are correct). Would this lower bound still hold?
|
This is a basic exercise and is not research level. Hints:
How many possible answers from Bob would be accepted as valid? How many possible strings might Alice have chosen? From this, what can you conclude about the number of bits of entropy that must be communicated?
|
stackexchange-cstheory
|
{
"answer_score": 1,
"question_score": 0,
"tags": "communication complexity"
}
|
Algorithms for equivalence of 2 way finite automata (2DFA)
I'm interested in the computational complexity of deciding equivalence of 2DFAs.
It is known that converting 2DFA to DFA can incur a blow up in states. However I'm not sure whether this automatically tells us something about the complexity of 2DFA equivalence.
Which leads to my question: Is there a hardness result for 2DFA equivalence, like the PSPACE-completeness of language equivalence for NFAs and regular expressions? Conversely, I am interested in literature I might have missed that solves the equivalence problem algorithmically, directly on 2DFAs, without translating to DFAs first.
|
According to the answer to this question: <
The complexity of emptiness for 2-way DFAs is already PSPACE-complete, so equivalence is also PSPACE-hard (and membership in PSPACE follows by simulating the single-exponential translation to DFAs on the fly, in polynomial space).
|
stackexchange-cstheory
|
{
"answer_score": 4,
"question_score": 3,
"tags": "fl.formal languages, automata theory"
}
|