Prove the inequality $r+1 \leq \exp r^\varepsilon$ for any $\varepsilon>0$ for sufficiently large values of $r$
The following works. Choose an integer $n>1/\varepsilon$. Keeping a single term of the exponential series gives $\exp t>\frac{t^n}{n!}$ for all $t>0$, so $$\exp (r^\varepsilon)>\frac {r^{n\varepsilon}}{n!}> r+1$$ when $r$ is sufficiently large, since $n\varepsilon>1$.
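As a quick numerical sanity check (not part of the proof), one can tabulate both sides in Python; the crossover point grows as $\varepsilon$ shrinks:

import math

eps = 0.25
for r in (10, 10**3, 10**6, 10**9):
    lhs, rhs = math.exp(r**eps), r + 1
    print(r, lhs > rhs)
# False, False, True, True: for eps = 0.25 the crossover lies between 10**3 and 10**6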
Looking for a way to find the proportional growth rate in time for any given notation
We say that $f(n)=O(g(n))$ iff there exist constants $c>0$ and $n_0$ such that for all $n\ge n_0$ we have $0 \le f(n) \le c \cdot g(n)$. Also, recall that $y$ is proportional to $x$ simply if $y=kx$ for some constant $k$. Thus, suppose some algorithm had a running time of $f(n)=O(n^3)$. Then we can conclude that this algorithm's (time or space) growth rate is at most proportional to $n^3$. For example, it might be the case that $f(n)=4n^3$ or $f(n)=50n^3$.
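A tiny illustration in Python (the running-time function here is hypothetical): the ratio $f(n)/n^3$ settles near the proportionality constant.

def f(n):
    return 4 * n**3 + 7 * n + 2   # hypothetical running time, which is O(n^3)

for n in (10, 100, 1000, 10000):
    print(n, f(n) / n**3)         # ratio tends to the constant 4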
Real Analysis. Topologically homogeneous sets.
Hint on $X=[0,1]$: Both $X\backslash\{0\}$ and $X\backslash\{1\}$ are connected. Is this true for any other point of $X$? Consider $S^n$ to be the subset of $\mathbb{R}^{n+1}$ consisting of all points a unit distance from the origin of $\mathbb{R}^{n+1}$. Let $P,Q\in S^n$. In $\mathbb{R}^{n+1}$ consider the intersection of the plane $OPQ$ ($O$ being the origin of $\mathbb{R}^{n+1}$) and $S^n$. The intersection is a circle. Rotate $\mathbb{R}^{n+1}$ in the plane $OPQ$ (fixing its orthogonal complement) by the angle that moves $P$ to $Q$. Restricted to $S^n$ this will be a homeomorphism mapping $S^n$ onto $S^n$ taking $P$ onto $Q$. For $\mathbb{R}^n$ a simple translation by $\vec{PQ}$ will do.
How to graph the trigonometric function when period is more than the range?
Simply sketch lightly two vertical lines, each intersecting the $x$-axis: one line $x = \pi/2$ perpendicular to the $x$-axis, the other, also perpendicular, being $x = -\pi/2$. Then sketch the portion of the curve's period that lies between those vertical lines. Feel free to sketch in the whole period, lightly, but then darken the portion contained on and between the two lines. The points where the graph intersects the two vertical lines should be darkened in, too, to illustrate that they are endpoints which are included in your closed interval. From Wolfram Alpha: below is the full period of the curve $\;y = \sin\left(\frac x3\right) - 2$. And what follows is a graph of the two vertical lines at $x = \pi/2$ and $x = -\pi/2$, with the portion of the sine curve $\;y = \sin\left(\frac x3\right) - 2$ that lies between (and intersects) those lines. Note that the portion of the curve that lies within the given range resembles a line: if you look at the vertical line $x = 0$ as a common reference point for both graphs, you can see which portion of the curve is the focus of the second graph. Indeed, the curve sandwiched between our two vertical lines includes an inflection point of the given sine curve, which explains why it "looks like a straight line."
Prove the existence of a point not accounted for by mapping from N to R and deduce uncountability of R from this
Note: I will write $a_n$ in place of $a(n)$. So, following the hint, here is what we do. Start out by defining $I_0=[a,b]$. Now according to the hint we don't want $a_0$ to lie in $I_1$, so we need to pick a closed interval contained in $[a,b]$ which does not contain $a_0$. If $a_0$ is not in $[a,b]$, then that's easy: we can just take $I_1=[a,b]$ again. If however $a_0$ is in the interval, then we could try $$I_1=\left[\frac{a_0+b}{2},b\right]$$ This works according to the hint, because: provided $a_0<b$, we have $\frac{a_0+b}{2}<\frac{b+b}{2}=b$, or in other words this closed interval actually makes sense to define. Moreover, $a_0=\frac{a_0+a_0}{2}<\frac{a_0+b}{2}$, so $a_0\notin I_1$. Finally, both $a_0\geq a$ and $b>a$, so their average $\frac{a_0+b}{2}>a$, meaning that $I_1\subset I_0$. Well, it almost works. I forgot to account for the case where $a_0=b$. If that happens to be the case, we can always just use the other endpoint, defining instead: $$I_1=\left[a,\frac{a_0+a}{2}\right]$$ and this works just as well, so long as $a<a_0$. We can use this exact same idea to inductively define the sequence of nested intervals. So, let $I_0=[a,b]$. Suppose $I_n=[c,d]$, $c<d$ is defined already. If $a_n\notin I_n$, then let $I_{n+1}=I_n$. Otherwise, $c\leq a_n\leq d$. If $a_n=d$, then define $$I_{n+1}=\left[c,\frac{a_n+c}{2}\right]$$ and otherwise (so $a_n<d$) let $$I_{n+1}=\left[\frac{a_n+d}{2},d\right].$$ Now I will leave it to you to verify a couple of things: $a_n\notin I_{n+1}$; $I_{n+1}\subset I_n$; the endpoints of $I_{n+1}$ are not the same, provided the endpoints of $I_n$ are not (we did use the fact that $c<d$; otherwise it would be possible for $I_n=[a_n,a_n]$, and then we would be really stuck). By induction this gives us a sequence of closed, nested intervals $I_0\supset I_1\supset I_2\supset\cdots$, such that $a_n\notin I_{n+1}$. Now we consider the intersection $X=\cap_{n\in\mathbb{N}} I_n$. By Cantor's Intersection Theorem (which is what I presume the book is referring to), this intersection is nonempty. However, $a_n\notin X$ for all $n\in\mathbb{N}$. Why? Because if it were in the intersection, then it would have to lie in every single interval, yet $a_n\notin I_{n+1}$. So, we have a nonempty set containing no elements of $\{a_n\mid n\in\mathbb{N}\}$. Therefore the mapping was not surjective, which was what we wanted to show.
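For concreteness, here is a small Python sketch of the inductive step (the sample values stand in for an arbitrary hypothetical start of a sequence; the real argument of course needs the whole infinite sequence):

def next_interval(c, d, a):
    # shrink [c, d] to a closed subinterval avoiding the point a
    if a < c or a > d:
        return (c, d)              # a is already outside
    if a == d:
        return (c, (a + c) / 2)    # cut away the right endpoint
    return ((a + d) / 2, d)        # otherwise cut away a and everything left of it

c, d = 0.0, 1.0
sample = [0.5, 0.75, 0.0, 1.0, 0.8125]
for a in sample:
    c, d = next_interval(c, d, a)
    assert not (c <= a <= d)       # a_n is excluded from I_{n+1}
    print(c, d)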
Transversality and empty intersections.
Two submanifolds $M$ and $N$ of $Y$ are said to intersect transversally if for every $x\in M\cap N$ we have $T_xM+T_xN = T_xY$. This is of course vacuously true if $M\cap N = \emptyset$. It doesn't even make sense to talk about $T_xM$ or $T_xN$ if we are considering $x\in M\cap N=\emptyset$.
Variance of $\overline{X}_n^2$
Thanks for showing your work so far. Hint: Write $$E[\overline{X}_n^4]={1\over n^4}\sum E[X_i X_j X_k X_\ell]$$ where all four indices run from $1$ to $n$. The expectations $E[X_i X_j X_k X_\ell]$ differ depending on whether $i,j,k,\ell$ are all distinct, one pair, two pairs, three of a kind, or four of a kind.
Test for linear dependence of 3 matrices
Hint: Show the linear (in)dependence of the vectors $$(1,1,0,0), (0,1,0,1), (0,0,1,1), (1,0,1,0).$$
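A quick numerical check of the hint (assuming NumPy): the matrix with these vectors as rows has rank $3$, so the vectors are linearly dependent; indeed $(1,0,1,0)=(1,1,0,0)-(0,1,0,1)+(0,0,1,1)$.

import numpy as np

V = np.array([[1, 1, 0, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 1],
              [1, 0, 1, 0]])
print(np.linalg.matrix_rank(V))   # 3 < 4, so the vectors are linearly dependent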
A weak converse of $AB=BA\implies e^Ae^B=e^Be^A$ from "Topics in Matrix Analysis" for matrices of algebraic numbers.
The proof comes from Wermuth's "Two remarks on matrix exponential" (DOI 10.1016/0024-3795(89)90554-5). It seems to me that it is enough to assume that no two distinct eigenvalues of $A$ or $B$ differ by an integer multiple of $2\pi i$ (which is true if, but not only if, $\pi$ is transcendental with respect to their entries). This is of course true if $A$ and $B$ have algebraic entries. The basic idea is to somehow reverse the exponential function, and express $A$ and $B$ as power series (in fact, polynomials) in their exponentials. Let $m(\lambda)=\prod_j (\lambda-\lambda_j)^{\mu_j}$ be the minimal polynomial of $A$. Then by assumption the $e^{\lambda_j}$ are all different, so by Hermite's interpolation theorem we can find a polynomial $f$ such that for $g=f\circ \exp$ we have that $g(\lambda_j)=\lambda_j$, and if $\mu_j>1$, then $g'(\lambda_j)=1$ and $g^{(l)}(\lambda_j)=0$ for $2\leq l< \mu_j$. Then we have $g(A)=A$ (this part I don't really see, but apparently it's common knowledge; it's probably a corollary of Jordan's theorem on normal form), but $g(A)=f(e^A)$, so $A=f(e^A)$, and similarly for some polynomial $h$ we have $B=h(e^B)$, so $AB=f(e^A)h(e^B)=h(e^B)f(e^A)=BA$.
Let $a_n$ be a convergent sequence. Prove that if $a_n\geq a$ for all but finitely many $n$, then $\lim a_n\geq a$.
You have the right idea. Suppose $A=\lim_{n\to \infty}a_n$ and $a_n\geq a$ for all but finitely many $n$, but $A<a$. Then $a-A>0$, so by def'n of a limit we have, for all but finitely many $n$, that $|a_n-A|<a-A.$ But $|a_n-A|<a-A\implies a_n<A+(a-A)=a,$ which only holds for finitely many $n.$ Writing all the details is not just for style. You can confirm that you are right when you think you are.
Can't do basic integral
$$\displaystyle\int \cfrac{1}{\sqrt{-cx^3}} dx= \displaystyle\int (-cx^3)^{-\frac12}dx= \displaystyle\int (-c)^{-\frac12} x^{-\frac{3}{2} }dx= -2 (-c)^{-\frac12} x^{-\frac{1}{2} } +k=\frac{-2}{\sqrt{-cx}}+k $$ Sure, this is equal to your other answer. Let me rearrange: $$-\frac{2x}{\sqrt{-cx^3}}+k=-\frac{2x\cdot x^{-1}}{\sqrt{-cx^3}\cdot x^{-1}}+k=-\frac{2}{\sqrt{-cx^3}\cdot\sqrt{x^{-2}}}+k=-\frac{2}{\sqrt{-cx^3\cdot x^{-2}}}+k=-\frac{2}{\sqrt{-cx}}+k$$ So your answer can be reduced a bit to this.
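A symbolic spot check (assuming SymPy, and taking $c<0$, $x>0$ so the integrand is real) should print 0:

import sympy as sp

x = sp.symbols('x', positive=True)
c = sp.symbols('c', negative=True)
antideriv = -2 / sp.sqrt(-c * x)
print(sp.simplify(sp.diff(antideriv, x) - 1 / sp.sqrt(-c * x**3)))   # 0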
The minimum edge cover of a tree is at least the maximum degree
Let $v$ be a vertex of maximum degree $\Delta$ of the tree $T$, and let $N(v)$ be the set of neighbors of $v$. If some edge of $T$ joined two vertices $u,w$ of $N(v)$, then $v-u-w-v$ would be a cycle, which cannot occur in a tree. So no edge covers two vertices of $N(v)$ at once, and each edge cover needs at least $|N(v)|=\Delta$ distinct edges to cover $N(v)$.
Frame bundle is parallelizable - Kobayashi
A connection form $\omega$ on $L(M)$ is a $\mathfrak{gl}(n,\mathbb{R})$-valued 1-form on $L(M)$ (with $n=\dim M$) satisfying $$ (i)\quad \omega_p(p\cdot \xi) = \xi\quad \forall \xi\in\mathfrak{gl}(n,\mathbb{R}), p\in L(M) \\ (ii)\quad R_g^*\omega = \operatorname{Ad}_{g^{-1}}\omega \quad \forall g\in \operatorname{GL}(n,\mathbb{R}). \qquad $$ The Lie algebra $\mathfrak{gl}(n,\mathbb{R})$ is spanned by the matrices $E^i_{\ j}$ with 1 in the $ij$th position, and zeros elsewhere, and so we can write $\omega = \sum_{i,j=1}^n\omega^i_{\ j}E^i_{\ j}$, where the $\omega^i_{\ j}$ are now $\mathbb{R}$-valued 1-forms on $L(M)$. Property (i) above ensures that they don't vanish on $L(M)$. These forms are related to the Christoffel symbols $\Gamma^i_{kj}$ as follows: any coordinate patch $\phi:U\subset M\to\mathbb{R}^n$ induces a section $s_\phi:U\to L(M)$ by $s_{\phi}(x) = \left(\frac{\partial}{\partial x_1}\Big\vert_x,\ldots,\frac{\partial}{\partial x_n}\Big\vert_x\right)$. Then $s_\phi^*\omega^i_{\ j}$ will be a 1-form on $U\subset M$, and we can write $s_\phi^*\omega^i_{\ j} = \sum_{k=1}^n\Gamma^i_{kj}\,dx^k$ for some functions $\Gamma^i_{kj}:U\to\mathbb{R}$. I would suggest you read Kobayashi and Nomizu - Foundations of Differential Geometry - Volume I for most of this material, especially Chapter III, Section 2.
Tripling function and its periodic points
Edit: In light of the closure of the question, I should have first asked what efforts you have made. If $x$ is periodic there is nothing to prove, so assume that $x$ is not periodic. No matter what 'subdomain' $x=\frac{p}{q}$ falls into, $F(x)$ is still in the set $$\{t\cdot \frac{1}{q}\,:\,t=0,1,2,\dots q\}.$$ Prove this. Apply the tripling function $q+1$ times. Now the set of $q+2$ iterates (including $x$) lies in a set of $q+1$ elements. What can you conclude?
Boolean Algebra Syntax
It's asking for the size of $\neg(X \cup Y \cup Z)=\neg X \cap \neg Y \cap \neg Z$ by de Morgan's Law. This is $6!-|X \cup Y \cup Z|$. We can use Inclusion-Exclusion to find $|X \cup Y \cup Z|$, namely $$|X \cup Y \cup Z|=|X|+|Y|+|Z|-|X \cap Y|-|X \cap Z|-|Y \cap Z|+|X \cap Y \cap Z|.$$ The number of permutations with A in the 1 position is $5!$. The number of permutations with B in the 2 position is $5!$. The number of permutations with C in the 4 position is $5!$. Hence $|X|=|Y|=|Z|=5!$. The number of permutations with A in the 1 position and B in the 2 position is $4!$. The number of permutations with A in the 1 position and C in the 4 position is $4!$. The number of permutations with B in the 2 position and C in the 4 position is $4!$. Hence $|X \cap Y|=|X \cap Z|=|Y \cap Z|=4!$. The number of permutations with A in the 1 position, B in the 2 position and C in the 4 position is $3!$. Hence $|X \cap Y \cap Z|=3!$. So, using Inclusion-Exclusion, we obtain $$6!-3 \times 5!+3 \times 4!-3!$$ as the number of non-clashing permutations.
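A brute-force cross-check in Python (assuming, as the counts above suggest, permutations of 6 distinct letters with A, B, C forbidden from positions 1, 2, 4 respectively):

from itertools import permutations

count = sum(1 for p in permutations("ABCDEF")
            if p[0] != "A" and p[1] != "B" and p[3] != "C")
print(count)                       # 426
print(720 - 3*120 + 3*24 - 6)      # 6! - 3*5! + 3*4! - 3! = 426 as well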
Comparing plots question
Let $S(1)=\sum_{n=1}^{2014} x_n$. Now, for $n \in \{1,2,...,2014\}$ we have $x_{2014+n}<x_{2014}x_n$. Hence, $S(2)=\sum_{n=1}^{4028} x_n < S(1)+a^{2014}S(1)$. Now $x_{4028} < {x_{2014}}^2=a^{4028}$, and for $n \in \{1,2,...,2014\}$ we have $x_{4028+n}<x_{4028}x_n$. Hence, $S(3)=\sum_{n=1}^{6042} x_n < S(1)+a^{2014}S(1)+a^{4028}S(1)$. Now, again $x_{6042}<x_{4028}x_{2014}<a^{6042}$. Repeating the process to infinity, we get that: $$\displaystyle S(\infty)=\sum_{n=1}^{\infty}x_n<\sum_{n=0}^{\infty}S(1)a^{2014n}=S(1)+a^{2014}S(1)+a^{4028}S(1)+...$$ Using the fact that $S(1)$ is a real number, we have a geometric series that converges to $$\frac{S(1)}{1-a^{2014}}$$ Now $\displaystyle y_N=\sum_{n=1}^{N} x_n$ is an increasing sequence which is bounded above by $S(\infty)$, so converges.
Given $p \equiv q \equiv 1 \pmod 4$, $\left(\frac{p}{q}\right) = 1$, is $N(\eta) = 1$ possible?
My Theorem 1 is Theorem 11.5.5 on page 279 of Alaca and Williams. My Theorem 2 is Theorem 11.5.7 on page 286 of Alaca and Williams. It turns out Theorem 2 is due to Dirichlet, 1834. I keep forgetting why $x^2 + xy - k y^2$ dominates $u^2 - (4k+1)v^2.$ One step: take $x = u-v, y = 2v.$ Then $$ x^2 + xy - k y^2 = u^2 - 2 uv + v^2 + 2 u v - 2 v^2 - 4 k v^2 = u^2 - (4k+1) v^2. $$ THEOREM 1: With prime $p \equiv 1 \pmod 4,$ there is always a solution to $$ x^2 - p y^2 = -1 $$ in integers. The proof is from Mordell, Diophantine Equations, pages 55-56. PROOF: Take the smallest integer pair $T>1,U >0$ such that $$ T^2 - p U^2 = 1. $$ We know that $T$ is odd and $U$ is even. So, we have the integer equation $$ \left( \frac{T+1}{2} \right) \left( \frac{T-1}{2} \right) = p \left( \frac{U}{2} \right)^2. $$ We have $$ \gcd \left( \left( \frac{T+1}{2} \right), \left( \frac{T-1}{2} \right) \right) = 1. $$ Indeed, $$ \left( \frac{T+1}{2} \right) - \left( \frac{T-1}{2} \right) = 1. $$ There are now two cases, by unique factorization in integers: $$ \mbox{(A):} \; \; \; \left( \frac{T+1}{2} \right) = p a^2, \; \; \left( \frac{T-1}{2} \right) = b^2 $$ $$ \mbox{(B):} \; \; \; \left( \frac{T+1}{2} \right) = a^2, \; \; \left( \frac{T-1}{2} \right) = p b^2 $$ Now, in case (B), we find that $(a,b)$ are smaller than $(T,U),$ but $T \geq 3, a > 1,$ and $a^2 - p b^2 = 1.$ This is a contradiction, as our hypothesis is that $(T,U)$ is minimal. As a result, case (A) holds, with evident $$p a^2 - b^2 = \left( \frac{T+1}{2} \right) - \left( \frac{T-1}{2} \right) = 1, $$ so $$ b^2 - p a^2 = -1. $$ THEOREM 2: With primes $p \neq q,$ with $p \equiv q \equiv 1 \pmod 4$ and Legendre $(p|q)=(q|p) = -1,$ there is always a solution to $$ x^2 - pq y^2 = -1 $$ in integers. The proof is from Mordell, Diophantine Equations, pages 55-56. PROOF: Take the smallest integer pair $T>1,U >0$ such that $$ T^2 - pq U^2 = 1. $$ We know that $T$ is odd and $U$ is even. So, we have the integer equation $$ \left( \frac{T+1}{2} \right) \left( \frac{T-1}{2} \right) = pq \left( \frac{U}{2} \right)^2. $$ We have $$ \gcd \left( \left( \frac{T+1}{2} \right), \left( \frac{T-1}{2} \right) \right) = 1. $$ There are now four cases, by unique factorization in integers: $$ \mbox{(1):} \; \; \; \left( \frac{T+1}{2} \right) = a^2, \; \; \left( \frac{T-1}{2} \right) = pq b^2 $$ $$ \mbox{(2):} \; \; \; \left( \frac{T+1}{2} \right) = p a^2, \; \; \left( \frac{T-1}{2} \right) = q b^2 $$ $$ \mbox{(3):} \; \; \; \left( \frac{T+1}{2} \right) = q a^2, \; \; \left( \frac{T-1}{2} \right) = p b^2 $$ $$ \mbox{(4):} \; \; \; \left( \frac{T+1}{2} \right) = pq a^2, \; \; \left( \frac{T-1}{2} \right) = b^2 $$ Now, in case (1), we find that $(a,b)$ are smaller than $(T,U),$ but $T \geq 3, a > 1,$ and $a^2 - pq b^2 = 1.$ This is a contradiction, as our hypothesis is that $(T,U)$ is minimal. In case $(2),$ we have $$ p a^2 - q b^2 = 1. $$ $$ p a^2 \equiv 1 \pmod q, $$ so $a$ is nonzero mod $q,$ then $$ p \equiv \left( \frac{1}{a} \right)^2 \pmod q. $$ This contradicts the hypothesis $(p|q) = -1.$ In case $(3),$ we have $$ q a^2 - p b^2 = 1. $$ $$ q a^2 \equiv 1 \pmod p, $$ so $a$ is nonzero mod $p,$ then $$ q \equiv \left( \frac{1}{a} \right)^2 \pmod p. $$ This contradicts the hypothesis $(q|p) = -1.$ As a result, case (4) holds, with evident $$pq a^2 - b^2 = \left( \frac{T+1}{2} \right) - \left( \frac{T-1}{2} \right) = 1, $$ so $$ b^2 - pq a^2 = -1. 
$$ The viewpoint of real quadratic fields and norms of fundamental units is more concerned with $x^2 + xy - k y^2,$ where $4k+1 = pq$ in the second theorem. However, we showed above that the existence of a solution to $u^2 - (4k+1)v^2 = -1$ gives an immediate construction for a solution to $x^2 + xy - k y^2=-1,$ namely $x=u-v, y=2v.$ EXTRA David Speyer reminded me of something Kaplansky wrote out for me, years and years ago. From David's comments, I now understand what Kap was trying to show me. If $x^2 + x y - 2 k y^2 = -1,$ then $(2x+y)^2 - (8k+1)y^2 = -4. $ This is impossible $\pmod 8$ unless $y$ is even, in which case $(x + \frac{y}{2})^2 - (8k+1) \left( \frac{y}{2}\right)^2 = -1.$ There is more to it when we have $x^2 + x y - k y^2 = -1$ with odd $k.$ Take $$ u = \frac{ 2 x^3 +3 x^2 y + (6k+3)x y^2 + (3k+1)y^3}{2}, $$ $$ v = \frac{3 x^2 y + 3 x y^2 + (k+1)y^3}{2}. $$ $$ u^2 - (4k+1) v^2 = -1, $$ since $$ u^2 - (4k+1) v^2 = \left( x^2 + x y - k y^2 \right)^3. $$
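The closing cubic identity is easy to check symbolically, e.g. with SymPy:

import sympy as sp

x, y, k = sp.symbols('x y k')
u = (2*x**3 + 3*x**2*y + (6*k + 3)*x*y**2 + (3*k + 1)*y**3) / 2
v = (3*x**2*y + 3*x*y**2 + (k + 1)*y**3) / 2
lhs = u**2 - (4*k + 1)*v**2
rhs = (x**2 + x*y - k*y**2)**3
print(sp.expand(lhs - rhs))   # 0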
The number $ \frac{(m)^{(k)}(m)_k}{(1/2)^{(k)} k!}$
\begin{eqnarray*}& &\frac{2^{2k}(m)^{(k)}(m)_k}{(2k)!}\\ &=&\frac{2^{2k}(m-k+1)(m-k+2)\cdots (m-1)(m)(m)(m+1)\cdots (m+k-2)(m+k-1)}{(2k)!} \end{eqnarray*} Now we write one of the $m$ as $\frac{1}{2}[(m-k)+(m+k)]$ and distribute, and the last expression becomes $$2^{2k-1}\left[\frac{(m+k-1)(m+k-2)\cdots(m-k) }{(2k)!}+\frac{(m+k)(m+k-1)\cdots (m-k+1)}{(2k)!}\right]$$ which is equal to $$2^{2k-1}\left[{m+k-1\choose 2k}+{m+k\choose 2k}\right],$$ an integer.
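A quick numerical confirmation of the identity in Python (using exact arithmetic; here rising $(m)^{(k)}=m(m+1)\cdots(m+k-1)$ and falling $(m)_k=m(m-1)\cdots(m-k+1)$):

from fractions import Fraction
from math import comb, factorial, prod

def check(m, k):
    rising = prod(range(m, m + k))            # (m)^(k)
    falling = prod(range(m - k + 1, m + 1))   # (m)_k
    lhs = Fraction(2**(2*k) * rising * falling, factorial(2*k))
    rhs = 2**(2*k - 1) * (comb(m + k - 1, 2*k) + comb(m + k, 2*k))
    return lhs == rhs

print(all(check(m, k) for m in range(1, 12) for k in range(1, m + 1)))   # True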
Proving sum of recurrence sequence converges to $2^{-55}$
The characteristic equation $$24r^3=26r^2-9r+1$$ has the solutions $$r=\frac{1}{2},\frac{1}{3},\frac{1}{4},$$ so $$a_n=x\left(\frac{1}{2}\right)^n+y\left(\frac{1}{3}\right)^n+z\left(\frac{1}{4}\right)^n.$$ Determine $x,y,z$ using the initial conditions: $$a_0=46=x+y+z$$ $$a_1=8=x/2+y/3+z/4$$ $$a_2=1=x/4+y/9+z/16$$ $$\fbox{$x=4,\ y=-54,\ z=96$}$$ $$a_n=4\left(\frac{1}{2}\right)^n-54\left(\frac{1}{3}\right)^n+96\left(\frac{1}{4}\right)^n$$ $$\sum\limits_{k=3}^{\infty}a_k=\sum\limits_{k=3}^{\infty}\left(4\left(\frac{1}{2}\right)^k-54\left(\frac{1}{3}\right)^k+96\left(\frac{1}{4}\right)^k\right)$$ $$\sum_{k=3}^{\infty}4\left(\frac{1}{2}\right)^k=4\sum_{k=3}^{\infty}\left(\frac{1}{2}\right)^k=4\cdot\frac{\left(\frac{1}{2}\right)^3}{1-\frac{1}{2}}=1$$ $$54\sum_{k=3}^{\infty}\left(\frac{1}{3}\right)^k=54\cdot\frac{\left(\frac{1}{3}\right)^3}{1-\frac{1}{3}}=3$$ $$96\sum_{k=3}^{\infty}\left(\frac{1}{4}\right)^k=96\cdot\frac{\left(\frac{1}{4}\right)^3}{1-\frac{1}{4}}=2$$ $$\sum\limits_{k=3}^{\infty}a_k=1-3+2=0$$
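A short Python check: the recurrence corresponding to the characteristic equation is $24a_n=26a_{n-1}-9a_{n-2}+a_{n-3}$, and we can compare it against the closed form and confirm the tail sum vanishes.

a = [46.0, 8.0, 1.0]
for n in range(3, 60):
    a.append((26*a[-1] - 9*a[-2] + a[-3]) / 24)

closed = lambda n: 4*(1/2)**n - 54*(1/3)**n + 96*(1/4)**n
print(max(abs(a[n] - closed(n)) for n in range(60)))   # ~0 up to floating point
print(sum(a[3:]))                                      # ~0, matching 1 - 3 + 2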
Prove or disprove $(6x,10)=(2x,10)$
$$6x=3\cdot\underline{2x}+0\cdot \underline{10}\in(2x,10)$$ $$2x=2\cdot\underline{6x}+(-x)\cdot\underline{10}\in(6x,10)$$ Since each generator of either ideal lies in the other ideal, the two ideals are equal.
Can the mass of a proton be measured through algebra?
I admittedly didn't read the category theory-based sections carefully (mostly because I don't know nearly enough category theory to actually judge their results fairly), but the end results are what you're asking about anyway, so let's talk about those (i.e. section III). Unfortunately, all of the author's calculations are too far off from the actual values of things for the theory to hold any water. The calculated age of the universe (18.18 Gyr, given without uncertainty) is significantly different from the measured age ($13.799\pm.021$ Gyr). Since this age is also used in the calculation of $G$, those results are also thrown into question. The authors explain this away as bias based on faulty assumptions about constancy of constants, but they cannot do the same for later calculations. The authors report a .0007% discrepancy between the calculated mass of the proton and its measured counterpart, or a relative error of $7\times 10^{-5}$. The mass of the proton has been measured to a precision of $2\times 10^{-8}$, so they are reporting a discrepancy that is one thousand times larger than experimental error. They report no biases here, claiming that the closeness to the experimental result proves their theory correct. Instead of confirming their theory, this unforgivably large discrepancy damns it. Similarly, the calculated masses of the muon and tau lepton (which they call the "electron $\mu$" and "electron $\tau$") are also at odds with experimental results. The authors obtain a value of $105.9$ MeV/$c^2$ for the muon mass, whereas the actual measured muon mass is $105.6583745\pm.0000024$ MeV/$c^2$. The discrepancy here is on the order of $3\times 10^{-3}$, which is one hundred thousand times larger than the experimental uncertainty of $2\times 10^{-8}$, an even worse discrepancy than before. The tau lepton mass is calculated by the authors to be $1.82146$ GeV/$c^2$, whereas the measured value is $1.77682\pm .00016$ GeV/$c^2$. Relative error on the calculated value is $2.5\times 10^{-2}$, and the relative uncertainty on the measured value is $9\times 10^{-5}$. Once again, the discrepancy between theory and experiment is hundreds of times larger than the uncertainty in experiment. My verdict? This theory does not predict anywhere near the correct values for the physical constants, even when the authors said it did. Therefore, it cannot be an accurate description of reality.
Simplified is 0? $\log_{3} 9x^4 - \log_{3}(3x)^2 $
Use two properties: $$ \log_{a}{bc} = \log_{a}{b}+ \log_{a}{c}$$ and $$ \log_{a}{(b^c)} = c\cdot\log_{a}{b}.$$ And it seems it is a typo and your answer is right.
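Worked out with those two properties (note $(3x)^2=9x^2$): $$\log_{3} 9x^4 - \log_{3}(3x)^2 = \left(\log_3 9 + 4\log_3 x\right) - \left(\log_3 9 + 2\log_3 x\right) = 2\log_3 x,$$ which is $0$ only at $x=1$, consistent with the suspected typo.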
Convergence - Numerical Analysis
For the first: $x_k=1+2^{-k}=1+\frac{1}{2^k}$, so $\lim_{k \to +\infty}x_k=L=1$ and $$\lim_{k\to +\infty}\left|\frac{x_{k+1}-1}{x_k-1}\right|=\frac{1}{2}=m,$$ i.e. linear convergence with rate $\frac12$. For the second: $x_k=1+\frac{1}{2^{2^k}}$, so $\lim_{k\to+\infty}x_k=1$ and $$\lim_{k\to+\infty}\left|\frac{x_{k+1}-1}{(x_k-1)^2}\right|=1,$$ i.e. quadratic convergence. qed.
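The two rates are easy to see numerically in Python (exact arithmetic avoids underflow):

from fractions import Fraction

x = lambda k: 1 + Fraction(1, 2**k)        # linear: error ratio -> 1/2
y = lambda k: 1 + Fraction(1, 2**(2**k))   # quadratic: e_{k+1}/e_k^2 -> 1

for k in range(1, 5):
    print(float((x(k+1) - 1) / (x(k) - 1)),       # 0.5
          float((y(k+1) - 1) / (y(k) - 1)**2))    # 1.0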
Time complexity from an arithmetic series
The sum $1+2+3+\cdots+n$ is a triangular number; there is a well known formula for them, which you seem to have rediscovered. Well done! More generally, what you have is an arithmetic series (and then a trailing $+1$). The sum of an arithmetic series can be computed as the number of terms, here $n$, times the arithmetic mean of the first and last terms, here $\frac{1+n}2$. (This formula is sometimes known as Gauss's trick, due to a -- probably apocryphal -- story that Gauss used it to calculate $1+2+3+\cdots +100=5050$ as a schoolchild).
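In code, the closed form replaces the loop entirely (trailing $+1$ from the question included):

n = 100
assert sum(range(1, n + 1)) + 1 == n * (1 + n) // 2 + 1
print(n * (1 + n) // 2)   # 5050, Gauss's schoolchild sum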
Let $P: \mathbb{R}^2 \to \mathbb{R}$. To be more precise, $P(x,y)=x$. Is there a closed set $A$ in $\mathbb{R}^2$ s.t. $P(A)$ is not a closed set in $\mathbb{R}$?
Take $A=\left\{(x,y)\in\mathbb R^2\,\middle|\,x>0\wedge y=\frac1x\right\}$. It is closed, being the intersection of the closed hyperbola $\{(x,y)\mid xy=1\}$ with the closed half-plane $\{(x,y)\mid x\geq 0\}$, but $P(A)=(0,\infty)$ is not.
Complex differential equation
With $z'=e^{it}\bar z$ you also get by conjugation $\bar z'=e^{-it}z$ and thus for the second derivative $$z'' = ie^{it}\bar z+e^{it}\bar z'=iz'+z.$$ This second order linear ODE has as characteristic polynomial $$ \lambda^2-iλ-1 =\left(λ-\frac i2\right)^2-\frac34 =\left(λ-\frac{i-\sqrt3}2\right)\left(λ-\frac{i+\sqrt3}2\right) $$ which allows you to construct the solution. $$ z=e^{it/2}\left(c_1e^{\sqrt3 t/2}+c_2e^{-\sqrt3 t/2}\right)\\ $$ You have to find a relation between the integration constants that restricts the general solution to the solutions of the original equation. \begin{align} z'&=e^{it/2}\left(\frac12(i+\sqrt3)c_1e^{\sqrt3 t/2}+\frac12(i-\sqrt3)c_2e^{-\sqrt3 t/2}\right) \\ z'-e^{it}\bar z&=e^{it/2}\left(\frac12\Bigl[(i+\sqrt3)c_1-2\bar c_1\Bigr]e^{\sqrt3 t/2}+\frac12\Bigl[(i-\sqrt3)c_2-2\bar c_2\Bigr]e^{-\sqrt3 t/2}\right) \end{align} This implies $$\Bigl[(\sqrt3+1)+(\sqrt3-1)i\Bigr]c_1=\Bigl[(\sqrt3+1)-(\sqrt3-1)i\Bigr]\bar c_1$$ where the right side is the conjugate of the left and thus both sides are real, so that $$c_1=A\,\Bigl[(\sqrt3+1)-(\sqrt3-1)i\Bigr]$$ and similarly $$c_2=B\,\Bigl[(\sqrt3-1)-(\sqrt3+1)i\Bigr]$$ with real constants $A,B\in \Bbb R$. As the components have absolute values $e^{\pm\sqrt3/2\,t}$, you get that all non-trivial solutions are unbounded over $\Bbb R$.
On the log of the Riemann zeta function.
Your mistake starts with "suppose by contradiction that..." The opposite of "something is true for all $x$" is NOT "something is false for all $x$". Your statement should be $\exists x$ such that $\pi (x) > Li(x) + \sqrt x \log x$. But then, since this is not true for all $x$ you don't get the integral inequality anymore.
Let $Z$ be a continuous random variable and let $M = \max{\{Z,1-Z\}}$. What is the density of $M$?
\begin{align*} F_M(t)=P(\max{\{Z,1-Z\}}\le t)&=P(Z\le t, 1-Z\le t)=P(Z\le t, 1-t\le Z)\\&=P(1-t\le Z\le t)=\begin{cases}0, \text{if } 1-t>t\\[0.2cm]\int_{1-t}^tf_z(x)dx, \text{if } 1-t\le t\end{cases}\\[0.2cm]&=\begin{cases}0, \text{if } t<1/2\\F_Z(t)-F_Z(1-t), \text{if } t\ge 1/2\end{cases} \end{align*} Hence, $$f_M(t)=f_Z(t)+f_Z(1-t)$$ for $t\ge 1/2$ and $f_M(t)=0$, otherwise.
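For a concrete example, take $Z$ uniform on $(0,1)$: then $f_M(t)=f_Z(t)+f_Z(1-t)=2$ on $(1/2,1)$, which a quick simulation confirms (Python):

import random

samples = [max(z, 1 - z) for z in (random.random() for _ in range(10**5))]
print(min(samples), max(samples))                          # ~0.5 and ~1.0
print(sum(1 for m in samples if m < 0.75) / len(samples))  # ~0.5, the integral of 2 over (0.5, 0.75)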
Integral of delta function over a small interval around zero
Yes it is correct since the delta function is zero outside of $(0^-,0^+)$ and the integral with $\Delta x$ reduces to the same form as before: $$\int_{-\Delta x}^{\Delta x}\delta(x)dx=\int_{0^-}^{0^+}\delta(x)dx=1$$
How to cleverly solve this kind of equation?
A general method that works for any size $n$ and is not prone to errors is to use a computer algebra system (such as Mathematica) to directly solve the system for any size $n$. Given that this is a well-posed finite linear system of equations, I don't see any clever tricks; if you want to solve such a system, you (or the computer) have to do the tedious work of solving it using elimination of variables until you get the result. Whether you call this Gaussian elimination or LU decomposition, you have to substitute the equations into each other and keep track of your operations without error in order to arrive at the correct result. In much more complicated problems (such as solving PDEs such as the wave or heat equation or Maxwell's equations), once you reduce things to a finite system of linear equations, you let the computer solvers take over. The fact that your problem is of this simple class to begin with leads me to believe there really aren't any clever tricks to be played here. For the approach, I transcribed the two equations you presented into Mathematica, then set it up to explicitly calculate all the equations for $i,j\in\{0,1\ldots n\}$, enforced the boundary conditions for $i=n+1$ and $j=n+1$, then told it to solve the equations. For $n=2$, the solution is: $$ a_{0,0} = -\frac{p^8-2 p^7+p^6-p^4+p^3-1}{p^9-4 p^8+6 p^7-4 p^6+3 p^4-3 p^3+2} $$ A happy consequence is that all the unknowns are solved for simultaneously by Mathematica's algorithms. Here is a plot of $a_{0,0}$ as a function of $p$. And here is the Mathematica code I used:

(*setup the right hand side functions*)
a[i_,j_] := p Subscript[b, i+1,j] + (1-p) Subscript[b, 0,j]
b[i_,j_] := p Subscript[a, i,j+1] + (1-p) Subscript[a, i,0]

n=2; (*works for any order n*)

(*set the variables equal to the right hand sides, enforce the boundary conditions, and collect into one list*)
aEqs = Table[Subscript[a, i,j]==a[i,j],{i,0,n},{j,0,n}]/.{Subscript[a, i_,n+1]->0,Subscript[b, n+1,j_]->1}//Flatten[#,1]&;
bEqs = Table[Subscript[b, i,j]==b[i,j],{i,0,n},{j,0,n}]/.{Subscript[a, i_,n+1]->0,Subscript[b, n+1,j_]->1}//Flatten[#,1]&;
Eqs = Join[aEqs,bEqs];

(*collect all the variables into one list*)
aVars = Table[Subscript[a, i,j],{i,0,n},{j,0,n}]//Flatten[#,1]&;
bVars = Table[Subscript[b, i,j],{i,0,n},{j,0,n}]//Flatten[#,1]&;
Vars = Join[aVars,bVars];

(*Run the solver*)
Soln = Subscript[a, 0,0]/.Solve[Eqs,Vars][[1]]//Simplify;
TraditionalForm[Soln]
Plot[Soln,{p,0,1}]
If $T:X\to X$, $X$ is a Banach space, and $T$ linear, maps closed sets onto closed sets. Can I say that $T$ has a closed graph?
(Note that you are asking a different question than the one you linked to; here you are asking if $T$ maps closed sets to closed sets, does $T$ have a closed graph? In the other question you are asking if $T$ has a closed graph, does it map closed sets to closed sets?) First note that if $T:X\to Y$ is a linear operator between normed spaces that sends closed sets to closed sets, then the kernel of $T$ is either $0$ or all of $X$ (i.e. $T=0$). For suppose $x\notin\ker(T)$ and $y\in\ker(T)$ with $\|y\|>\|x\|$. Then $A=\{\frac1n x + 100 n^2 y\mid n\in\Bbb N\}$ is discrete in $X$, hence closed. But its image is $\{\frac1n T(x)\mid n\in\Bbb N \}$, which converges to $0$, but $0$ does not lie in $T(A)$, contradiction. So your map must either be the zero map or be injective. If it's the zero map it is continuous; if it is injective then $T(X)$ is a closed subspace of $Y$. Then $T: X\to T(X)$ is a bijective map sending closed sets to closed sets, so it is an open map. If $X,Y$ are Banach then $T(X)$ and $X$ are Banach, so $T: X\to T(X)$ is an open map between Banach spaces, hence an isomorphism and as such continuous. Continuous implies that the graph is closed.
Do These Primitive Definitions of Convergence Over the Natural Numbers Appear in Greek Mathematics?
Yes, this is a very important and well-understood notion. Your definition is more concisely written as $$\lim_{n\rightarrow\infty}{f(n)\over g(n)}=1,$$ which in turn is generally abbreviated "$f\sim g$." Here I'm taking the limit over naturals, so $f$ and $g$ are understood as functions with domain $\mathbb{N}$; we can just as easily define $\sim$ for functions on the rationals, reals, complexes, or etc. Some basic properties are described here, and you may also be interested in other varieties of asymptotic comparison. As to its history, I'm unaware of any treatment of growth rates in "ancient" mathematics, Greek or otherwise - so far as I know, the earliest such investigations occurred in the $1800$s in the context of analytic number theory (e.g. the prime number theorem is a result in asymptotic analysis) - but it's hard to prove a negative, and I could very easily be wrong.
Decouple a system of quadratic equations
If your $f_i$ all satisfy $f_i(0) = 0$, then yes, you can. Divide both sides of the $i$th equation by $x_i$, and bring the $x_i$ term over, leaving only the constant term on the right. Then you have a linear equation in the $x_i$, and assuming that it isn't degenerate, there is a unique solution. However, there will also be additional solutions to the original equation: choose any subset of the $x_i$ to set to $0$. Say that $k$ of them are to be $0$, and the rest non-zero. Set the $k$ values of $x_i$ to $0$ in the original equations, and discard the equations that are now "$0 = 0$". With the remaining equations, you can divide by $x_i$ and proceed as described before. If any of the $f_i$ does not satisfy $f_i(0) = 0$, then I know of no simple solution to the problem.
Linear Algebra Explanation required
The inverse of the matrix $$ \begin{pmatrix} 1 & -1 \\ 1 & 2 \end{pmatrix} $$ is $$ \frac{1}{3}\begin{pmatrix} 2 & 1 \\ -1 & 1 \end{pmatrix} $$ Since the matrix associated to the linear map with respect to the given basis on the domain and the canonical basis on the codomain is $$ \begin{pmatrix} 3 & 6 \end{pmatrix} $$ the matrix associated to it with respect to the canonical basis on the domain is $$ \begin{pmatrix} 3 & 6 \end{pmatrix} \begin{pmatrix} 1 & -1 \\ 1 & 2 \end{pmatrix}^{\!-1}= \begin{pmatrix} 3 & 6 \end{pmatrix} \frac{1}{3}\begin{pmatrix} 2 & 1 \\ -1 & 1 \end{pmatrix}= \begin{pmatrix} 1 & 2 \end{pmatrix} \begin{pmatrix} 2 & 1 \\ -1 & 1 \end{pmatrix}= \begin{pmatrix} 0 & 3 \end{pmatrix} $$
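Numerically (assuming NumPy):

import numpy as np

B = np.array([[1, -1], [1, 2]])
row = np.array([[3, 6]])
print(row @ np.linalg.inv(B))   # [[0. 3.]]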
Does a subspace of a finite dimensional vector space have a unique complement
Hint Let $\{0\} \subsetneq W \subsetneq V$, $\{e_1, \dots, e_n\}$ be a basis of $V$ where $\{e_1, \dots, e_m\}$ ($1 \le m <n$) is a basis of $W$. Then the linear subspaces $A,B$ generated respectively by $\{e_{m+1}, \dots, e_n\}$ and $\{e_{m+1}+e_1, \dots, e_n+e_1\}$ are different, though both are complements of $W$.
Use of undecidability
We do not know it in some sense, but taking a Platonic stance we actually know it for sure. Assume that there exists a totality of standard natural numbers, say $\mathfrak{N}$. Suppose $\varphi$ is the Goldbach conjecture, which is a $\Pi_1$ sentence (begins with a universal unbounded quantifier followed by a formula expressing a recursive property or relation). Finally suppose that you managed to show that neither $PA\vdash\varphi$ nor $PA\vdash\neg\varphi$ (where $PA$ is first-order Peano arithmetic). However, as a Platonist I believe that either $\varphi$ is true about $\mathfrak{N}$ or it is false about it. Assume it is false, which entails that its negation must be true. But $\neg\varphi$ is equivalent to a $\Sigma_1$ formula (truly) asserting existence of a natural number which is not a sum of two primes: $\exists_x\psi(x)$. Take this (standard) number, say $n$, and substitute $\overline{n}$ for $x$ to obtain $\psi(\overline{n})$ which asserts a recursive property of $n$. From representability of recursive properties you obtain that $PA\vdash\psi(\overline{n})$, so $PA\vdash\exists_x\psi(x)$, so $PA\vdash\neg\varphi$, contrary to the fact that $\varphi$ is undecidable. (The method described applies to all $\Pi_1$ sentences and uses so-called $\Sigma_1$ completeness of $PA$.) Of course the hard part is proving undecidability of $\varphi$, and as it was said by Robert Israel in the comments above, this would probably be a longer way than challenging the Goldbach conjecture directly. Moreover, being a Platonist is not enough as well, since you must engage some (stronger than $PA$) theory in which you can carry out a proof of independence of $\varphi$. Thus reception of a proof of this kind would probably rely on the nature of the methods applied in showing undecidability of the conjecture. Last but not least, constructivists could attack the assumption about a definite truth value of the conjecture or (maybe even more likely) the part in which one infers existence of $n$ from a premiss saying: not every natural number... .
Question on limits containing logarithm
$$\begin{align*}&\log a=\left(n+\frac1{\log n}\right)\log n=n\log n+1\\{}\\&\log b=n\log\left(n+\frac1{\log n}\right)\end{align*}$$ and now using some Taylor series: $$n\log\left(n+\frac1{\log n}\right)=n\log n+n\log\left(1+\frac1{n\log n}\right)=$$ $$=n\log n+n\left(\frac1{n\log n}-\frac1{2n^2\log^2 n}+\ldots\right)=n\log n+\frac1{\log n}+\mathcal O\left(\frac1{n\log n}\right)$$ Thus: $$\log\frac ab=\log a-\log b=1-\frac1{\log n}+\mathcal O\left(\frac1{n\log n}\right)\xrightarrow[n\to\infty]{}1$$ so that $$\frac ab=e^{\log\frac ab}\xrightarrow[n\to\infty]{}e$$
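A numerical sanity check of $a/b\to e$ (here $a=n^{\,n+1/\log n}$ and $b=\left(n+\frac1{\log n}\right)^n$, computed via logarithms to avoid overflow):

import math

for n in (10**2, 10**4, 10**6):
    log_a = (n + 1/math.log(n)) * math.log(n)
    log_b = n * math.log(n + 1/math.log(n))
    print(math.exp(log_a - log_b))
# 2.19..., 2.45..., 2.53...: increases slowly toward e = 2.718...,
# as expected since log(a/b) = 1 - 1/log(n) + O(1/(n log n))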
Probability of drawing a card
If $10$ cards were removed and someone else has already drawn $5$, there are $37$ cards left. If you are asking "What is the probability of drawing a particular card which I know is still remaining in the deck?" the answer is $1/37$. If you are asking "What is the probability of drawing a particular card, given that I don't know whether it has been drawn yet?" the answer is $1/52$, as just knowing that cards have been removed doesn't give you any usable information. You can think about it more clearly as "What is the probability that the 16th card in the deck is something specific?" That does not depend at all on what the first 15 cards are, unless you actually know what the first 15 cards are. If the question is, "If I am playing poker, what is the probability that I will draw a card" the answer is $0$ or $1$, depending on whether you have anted up or not.
Transforming quadratic parametric curve to implicit form
To begin with, you are dealing with the "great old theory" of elimination. In this case, the answer is: yes, it is always possible to eliminate parameter $t$ between polynomial expressions and obtain an implicit polynomial expression. Let us show it on an example, with a parametric curve defined by these equations: $$\tag{1}\begin{cases}x=t^2+t+1\\y=t^3-1\end{cases}$$ If you use a Computer Algebra System such as Mathematica, the elimination process is implemented under the following form: Eliminate[{x==t^2+t+1,y==t^3-1},t] giving the following implicit equation: $$\tag{2}x^3-3x^2-3xy-y^2=0$$ as its result. But there is more to say. You need for this to be familiar with resultants (see remark below) (https://en.wikipedia.org/wiki/Resultant). The theorem you are looking for is the fact that the implicit equation can be obtained as the resultant of the $\color{red}{2}$nd degree polynomial $t^2+t+(1-x)$ (I was tempted to write $=0$...) and the $\color{red}{3}$rd degree polynomial $t^3-(1+y)$, which is the $5 \times 5$ parametric determinant ($5 = \color{red}{2+3}$): $$d=\left|\begin{array}{ccccc}1& 1& (1-x)& 0 &0 \\ 0& 1& 1& (1-x)& 0\\ 0& 0& 1& 1& (1-x)\\ 1& 0& 0& -(1+y)& 0\\ 0& 1& 0& 0& -(1+y)\end{array}\right|=0$$ (one writes $\color{red}{3}$ times the $\color{red}{2}$nd degree polynomial, then $\color{red}{2}$ times the $\color{red}{3}$rd degree one, with a right shift for every "carriage return"). The expansion of determinant $d$ gives back (up to a certain unimportant constant factor) implicit equation (2). This example can evidently be extended to the general case. Remark 1: The excellent reference (Cox, Little, O'Shea "Ideals, Varieties and Algorithms") given by @Jan-Magnus Økland can be found online. See pages 155-162 devoted to Sylvester's resultant. Remark 2: the main idea behind the concept of resultant is that it expresses a necessary and sufficient condition between parameter(s) present in the coefficients of 2 equations for these equations to have at least one common root. Let us take an example: consider a quadratic polynomial $at^2+bt+c$ (with $a\ne0$) and its derivative $2at+b$. They have a common root $t_0$, which is a double root, iff the following resultant is zero: $$\delta=\left|\begin{array}{ccc}a& b& c \\ 2a& b& 0\\ 0& 2a& b\end{array}\right|=-a(b^2-4ac)$$ This value is $0$ iff $b^2-4ac=0$, which is the classical criterion for a quadratic equation to have a double root. We have thus found back the concept of discriminant by using the resultant!
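With SymPy the same elimination reads as follows; it should recover the implicit equation (2) up to sign and constant factor:

import sympy as sp

t, x, y = sp.symbols('t x y')
R = sp.resultant(t**2 + t + 1 - x, t**3 - 1 - y, t)
print(sp.factor(R))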
Uniform Distribution; Bus Arrival; 2 independent variables
Well the answer depends on $a$ and $b$. Let $X$ be the variable denoting his arrival in minutes past 6pm. That is $X\sim Uniform(0,60)$ There are 3 events where the downtown bus is taken. $\{X\leq a\}$, $\{b<X\leq 30+a\}$ and $\{X>30+b\}$. Therefore $$P(\text{John visits his girlfriend})=P(X\leq a)+P(b<X\leq 30+a)+P(X>30+b)$$ Using that the CDF is given by $P(X\leq x)=\frac{x}{60}$, we have $$P(\text{John visits his girlfriend}) = \frac{a}{60} + \frac{30+a-b}{60} + 1- \frac{30+b}{60}= 1 + \frac{a-b}{30} $$ Likewise we can compute $$P(\text{John visits his mom})=1-P(\text{John visits his girlfriend})=\frac{b-a}{30}$$ In a total of 30 days, the expected number of visits to his mom will be $$30P(\text{John visits his mom})=b-a$$
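A Monte Carlo check with illustrative values $a=5$, $b=15$ (so the expected number of visits to his mom in 30 days should be $b-a=10$):

import random

a, b, N = 5, 15, 10**5
girlfriend = 0
for _ in range(N):
    x = random.uniform(0, 60)
    if x <= a or b < x <= 30 + a or x > 30 + b:
        girlfriend += 1
p_mom = 1 - girlfriend / N
print(p_mom, 30 * p_mom)   # ~1/3 and ~10 expected visits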
How many values does $\sqrt{\sqrt{i}}$ have?
Your list of four solutions only has two - as has been pointed out you listed two of them twice. (In the original post, anyway...) But there are four roots. First, $i=e^{i\pi/2}=e^{i5\pi/2}$ gives two values for $\sqrt i$, namely $e^{i\pi/4}$ and $e^{i5\pi/4}$. Each of these has two square roots: Since $e^{i\pi/4}=e^{i9\pi/4}$ it has the two square roots $e^{i\pi/8}$ and $e^{i9\pi/8}$. Similarly $e^{i5\pi/4}=e^{i13\pi/4}$ has the two square roots $e^{i5\pi/8}$ and $e^{i13\pi/8}$. For a total of four values of $\sqrt{\sqrt i}$, namely $$e^{i\pi/8},e^{i5\pi/8},e^{i9\pi/8},e^{i13\pi/8}.$$ NOTE Others have suggested that although a complex number has two square roots, the notation $\sqrt z$ refers to only one of them. I think it's wrong to put it that way; that notation refers to only one of them if we have clearly stated in advance which "branch" we're referring to. But Wolfram is wrong in any case. By my lights, $i$ has two square roots, each of which has two square roots, for a total of four as above. But if we are saying that the notation $\sqrt z$ refers to one square root, then Wolfram should say there's only one value for $\sqrt{\sqrt i}$, namely the one square root of the one square root. I think four is the right number. With that convention there's only one; there's no sensible way to give exactly two values, as WA does. You can't believe everything they tell you.
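Numerically, each of the four values indeed returns $i$ when raised to the fourth power (Python):

import cmath

roots = [cmath.exp(1j * k * cmath.pi / 8) for k in (1, 5, 9, 13)]
for r in roots:
    print(r**4)   # each ~ 1j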
$Rad(I)$ is an ideal of $R$
Denote by $J$ this radical of $I$. If $a\in J$ and $b\in R$, then $a^n\in I$ for some $n>0$, so that $(ba)^n = b^n a^n \in I$ (using commutativity) because $I$ is an ideal, so that $ba\in J$. Now, if $a,b\in J$, whose respective powers $a^n$ and $b^m$ are in $I$, then by commutativity we have $(a+b)^{n+m} = \sum_{k=0}^{n+m} {n+m \choose k} a^k b^{n+m-k}$ by the well-known binomial formula, and this is in $I$ because in the sum, each term is in $I$: for each $k$, either $k\geq n$ or $n+m-k\geq m$, which ensures that one of the two factors of the term is in $I$, so that the whole term is in $I$ as $I$ is an ideal. This shows that $a+b\in J$, so that $J$ is an ideal of $R$.
GCD computations in $\mathbb{Z}[i]$
The Euclidean algorithm gives: $$ 85/(1+13i)=1/2-13i/2\approx -6i, \ 85=(-6i)(1+13i)+(7+6i), $$ $$ (1+13i)/(7+6i)=1+i, \ 1+13i=(1+i)(7+6i), $$ Hence the gcd is $7+6i$. When going through the Euclidean algorithm, you divide and take a nearest (Gaussian) integer, as you would over the (rational) integers.
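The algorithm is easy to run in Python with complex numbers, rounding each quotient to a nearest Gaussian integer:

def g_div(a, b):
    # quotient a/b rounded to a nearest Gaussian integer
    q = a / b
    return complex(round(q.real), round(q.imag))

def g_gcd(a, b):
    while b != 0:
        a, b = b, a - g_div(a, b) * b
    return a

print(g_gcd(85 + 0j, 1 + 13j))   # (7+6j), up to the units 1, -1, i, -i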
Proving $\int_0^\infty \log\left (1-2\frac{\cos 2\theta}{x^2}+\frac{1}{x^4} \right)dx =2\pi \sin \theta$
We start off by some $x\rightarrow \frac{1}{x}$ substitutions while differentiating under the integral sign: $$I(\theta)=\int_0^\infty \log \left(1-2\frac{\cos 2\theta}{x^2}+\frac{1}{x^4} \right)dx\overset{x\rightarrow \frac{1}{x}}=\int_0^\infty \frac{\ln(1- 2\cos(2\theta) x^2 +x^4)}{x^2}dx$$ $$I'(\theta)=4\int_0^\infty \frac{\sin(2\theta)}{x^4-2\cos(2\theta)x^2+1}dx\overset{x\rightarrow \frac{1}{x}}=4\int_0^\infty \frac{\sin(2\theta)x^2}{x^4-2\cos(2\theta)x^2+1}dx$$ Now summing up the two integrals from above gives us: $$\Rightarrow 2I'(\theta)=4\int_0^\infty \frac{\sin(2\theta)(1+x^2)}{x^4-2\cos(2\theta)x^2+1}dx=4\int_0^\infty \frac{\sin(2\theta)\left(\frac{1}{x^2}+1\right)}{x^2+\frac{1}{x^2}-2\cos(2\theta)}dx$$ $$\Rightarrow I'(\theta)=2\int_0^\infty \frac{\sin(2\theta)\left(x-\frac{1}{x}\right)'}{\left(x-\frac{1}{x}\right)^2 +2(1-\cos(2\theta))}dx\overset{\large x- \frac{1}{x}=t}=2\int_{-\infty}^\infty \frac{\sin(2\theta)}{t^2 +4\sin^2 (\theta)}dt$$ $$=2 \frac{\sin(2\theta)}{2\sin(\theta)}\arctan\left(\frac{t}{2\sin(\theta)}\right)\bigg|_{-\infty}^\infty=2\cos(\theta) \cdot \pi$$ $$\Rightarrow I(\theta) = 2\pi \int \cos(\theta) d\theta =2\pi \sin \theta +C$$ But $I(0)=0$ (see J.G.'s answer), thus: $$I(0)=0+C\Rightarrow C=0 \Rightarrow \boxed{I(\theta)=2\pi\sin(\theta)}$$
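A numerical spot check (assuming mpmath; the interval is split at $x=1$ to help the quadrature with the logarithmic singularity at $0$):

import mpmath as mp

theta = mp.mpf('0.7')
f = lambda x: mp.log(1 - 2*mp.cos(2*theta)/x**2 + 1/x**4)
print(mp.quad(f, [0, 1, mp.inf]))   # ~ 4.0478
print(2*mp.pi*mp.sin(theta))        # 2*pi*sin(0.7) ~ 4.0478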
Why should one use the quotient rule instead of the power rule to differentiate a quotient?
There are two reasons why the quotient rule can be superior to the power rule plus product rule in differentiating a quotient: It preserves common denominators when simplifying the result. If you use the power rule plus the product rule, you often must find a common denominator to simplify the result. The tradeoff here is between more calculus/less algebra, and less calculus/more algebra. As I find the calculus to be less error-prone than the algebra, I would opt for the more calculus/less algebra approach. There are a few, a very few, integrals that succumb to the quotient rule (in reverse). If you are very familiar with the quotient rule, you can sometimes spot these integrals. If you find the quotient rule difficult to remember, here's a mnemonic device that can help you out: "low, dee-high minus high, dee-low over the square of what's below". There are countless variations on this mnemonic device, but I think the quotient rule is worth keeping in your arsenal, despite being, technically, unnecessary. You should have it for the same reason you should have the quadratic formula memorized, even though, if you know how to complete the square, you don't need the quadratic formula: it's faster and less error-prone.
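For instance, for $\frac{\sin x}{x}$ the mnemonic gives directly $$\frac{d}{dx}\,\frac{\sin x}{x} = \frac{x\cos x - \sin x}{x^2},$$ whereas the power-rule route $\frac{d}{dx}\left(x^{-1}\sin x\right) = x^{-1}\cos x - x^{-2}\sin x$ still needs the common denominator $x^2$ to reach the same form.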
The reals as an algebra over the rationals
Sure: take any noncommutative division algebra over $\mathbb{R}$, like the quaternions. They inherit a $\mathbb{Q}$-algebra structure from the embedding $\mathbb{Q}\rightarrow \mathbb{R}$, and they're clearly infinite dimensional, since $\mathbb{R}$ is already infinite dimensional.
Prove: the ratio between the areas of $ABC$ and $AB'C'$ is $AB'\cdot\frac{AC'}{(AC \cdot AB)}$
Here is how I would go, with a bit of dot products. We denote by $AB$ the length of segment $AB$ and by $\vec{AB}$ the vector going from $A$ to $B$. Using the $\frac{1}{2}\text{base}\times\text{height}$ formula, the area of triangle $ABC$ is given by $\left\lvert \frac{AB}{2} \left( AC - \frac{\vec{AB}\cdot \vec{AC}}{AB} \right) \right\rvert=\left\lvert \frac{AB}{2} \left(\left( \frac{\vec{AC}}{AC} - \frac{\vec{AB}}{AB} \right)\cdot \vec{AC} \right) \right\rvert$; similarly we can write the area of $AB'C'$ as $\left\lvert \frac{AC'}{2} \left(\left( \frac{\vec{AB'}}{AB'} - \frac{\vec{AC'}}{AC'} \right)\cdot \vec{AB'} \right) \right\rvert$. Now observe that the vectors $\frac{\vec{AC}}{AC} - \frac{\vec{AB}}{AB}$ and $\frac{\vec{AB'}}{AB'} - \frac{\vec{AC'}}{AC'}$ are equal; let's denote this vector by $\vec{n}$. Finally observe that $\frac{\vec{n}\cdot \vec{AB'}}{\vec{n}\cdot\vec{AC}} = \frac{AB'}{AC}$ since $\vec{AB'}$ and $\vec{AC}$ are co-linear. The ratio you seek is now \begin{align*} \frac{\left\lvert \frac{AC'}{2} \left(\vec{n}\cdot \vec{AB'} \right) \right\rvert}{\left\lvert \frac{AB}{2} \left(\vec{n}\cdot \vec{AC} \right) \right\rvert}&= \frac{AC' \cdot AB'}{AB \cdot AC} \end{align*} which almost matches your formula, up to an extra prime in the numerator. If you were talking about the dot product then please add some \vec for vectors.
Does the sequence $\{\sin(en)\}$ converge or diverge?
The sequence $(\sin rn)_{n\in\mathbb N}$ converges if and only if $r$ is a multiple of $\pi$ (in which case it is the constant sequence $0,0,0,\ldots$). If $r$ is any rational-but-not-integral multiple of $\pi$, then the sequence is periodic but not constant, and therefore doesn't converge. If $r/\pi$ is irrational, then values in the sequence are dense in $(-1,1)$ -- in particular there are infinitely many elements in $(-1,-\frac12)$ and also infinitely many elements in $(\frac12,1)$, so it cannot converge in this case either.
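One can watch the failure to converge numerically (Python); the running minimum and maximum of $\sin(en)$ head toward $-1$ and $1$:

import math

vals = [math.sin(math.e * n) for n in range(1, 10**5)]
print(min(vals), max(vals))   # approach -1 and 1, so the sequence cannot converge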
Why is $\{\{c\}:c \in \mathbb{R} \}$ not a generator of the Borel $\sigma$ field in $\mathbb{R}$?
The set $\mathcal{A}$ of all subsets of $\Bbb R$ that are either countable or have a countable complement, forms a $\sigma$-algebra. It clearly contains $\mathcal{S}:=\{\{c\}: c \in \Bbb R\}$, all elements being finite. So $\sigma(\mathcal{S}) \subseteq \mathcal{A}$ by minimality and in fact we can easily show equality. But the main point is that $\mathcal{A} \neq \text{Bor}(\Bbb R)$, as e.g. $(0,1)$ is not countable nor has it a countable complement. So $(0,1) \notin \sigma(\mathcal{S})$. It is true that singleton sets are Borel, so all members of $\sigma(\mathcal{S})$ are Borel too. But we can only "reach" countable sets with countable unions, and then complements give us the other type, and no more.
calculating partial derivatives at the origin when they are not defined.
The formula you obtained only shows that the derivative is likely to not be continuous at $(0,0)$, but you can't evaluate it at $(0,0)$. You have $$ \frac{\partial f}{\partial x}(0,0)=\lim_{h\to0}\frac{f(h,0)-f(0,0)}h=\lim_{h\to0}\frac{0-0}h=0. $$ Similar for the other partial derivative.
Solutions cannot cross
If they do cross at $t=t^*$, then both are solutions to some initial value problem with the same initial value at $t^*$. This all supposes that the ODE satisfies at least the local Lipschitz condition.
The derivative of a function is square integrable assuming Fourier transform dominated
The following gives the main idea. \begin{eqnarray*} |F(k)| &\leqslant &\frac{1}{(1+k^{2})^{2}}\Rightarrow F(k)\in L^{1}\cap L^{\infty } \\ |kF(k)| &\leqslant &|k|\frac{1}{(1+k^{2})^{2}}\Rightarrow H(k)=kF(k)\in L^{1}\cap L^{\infty } \end{eqnarray*} It follows that $g(x)\in L^{1}$ and $g(x)\in L^{2}\cap L^{\infty }$, so $g(x)\in L^{1}\cap L^{\infty }$. Also $H(k)\in L^{1}\cap L^{2}$ so $$h(x)=\int dk\exp [-ikx]H(k)\in L^{2}\cap L^{\infty } $$ But $$ h(x)=\int dk\exp [-ikx]kF(k)=i\partial _{x}\int dk\exp [-ikx]F(k)=i\partial _{x}g(x) $$
Example of a recursive set $S$ and a total recursive function $f$ such that $f(S)$ is not recursive?
Take any set $A$ that is recursively enumerable but not recursive (e.g. the halting problem). It is enumerated by some total recursive function $f(x)$. But $\mathbb{N}$ is recursive while $f(\mathbb{N}) = A$ is not.
$f:\mathbb{R}^4 \to \mathbb{R}$ is a Constant when $Xf = Yf = 0$ for Vector Fields $X,Y$
The flow of $X$ is $f_u(x,y,z,t)=(x+u,y+uz,z,t)$, the flow of $Y$ is $g_u(x,y,z,t)=(x,y,z+ux,t+u)$, and the flow of $[X,Y]$ is $h_u(x,y,z,t)=(x,y-ux,z+u,t)$. Let $(x,y,z,t)\in\mathbb{R}^4$; then $f_{-x}(x,y,z,t)=(0,y-xz,z,t)$, $g_{-t}(0,y-xz,z,t)=(0,y-xz,z,0)$, $h_{-z}(0,y-xz,z,0)=(0,y-xz,0,0)$. This implies that $f(x,y,z,t)=f(0,y-xz,0,0)$. This implies that $\pmatrix{{{\partial f}\over{\partial x}} \cr{{\partial f}\over{\partial y}} \cr {{\partial f}\over{\partial z}} \cr {{\partial f}\over{\partial t}}}=$ $\pmatrix{{{\partial f}\over{\partial x}} &{{\partial f}\over{\partial y}} & {{\partial f}\over{\partial z}} & {{\partial f}\over{\partial t}}}$ $\pmatrix{0 &0&0&0\cr -z&1&-x&0\cr 0&0&0&0\cr 0&0&0&0}$ We deduce that ${{\partial f}\over{\partial x}}=-z{{\partial f}\over{\partial y}}$, ${{\partial f}\over{\partial z}}=-x{{\partial f}\over{\partial y}}$ and ${{\partial f}\over{\partial t}}=0$. The fact that $x{{\partial f}\over{\partial z}}+{{\partial f}\over{\partial t}}=0$ implies that $x{{\partial f}\over{\partial z}}=0$ and ${{\partial f}\over{\partial z}}=0$. The fact that ${{\partial f}\over{\partial z}}-x{{\partial f}\over{\partial y}}=0$ implies that ${{\partial f}\over{\partial y}}=0$; similarly you deduce that ${{\partial f}\over{\partial x}}=0.$
Find the domain of convergence of the series $\sum_{n=1}^{\infty} \frac{n! x^{2n}} {n^n (1+x^{2n})}$
For all $x\in \mathbb{R}$, you have $$0\le \dfrac{x^{2n}}{1+x^{2n}} < 1$$ This means: $$0 \le \sum_{n\ge 1} \dfrac{n!x^{2n}}{n^n(1+x^{2n})} < \sum_{n\ge 1} \dfrac{n!}{n^n}$$ Since it is bounded and increasing as $|x|$ increases, if the RHS converges, you have convergence over all reals. By the ratio test: $$\dfrac{\left(\tfrac{(n+1)!}{(n+1)^{n+1}}\right)}{\left(\tfrac{n!}{n^n}\right)} = \dfrac{(n+1)n^n}{(n+1)^{n+1}} = \left(\dfrac{n}{n+1}\right)^n = \left( 1-\dfrac{1}{n+1}\right)^n \to \dfrac{1}{e} < 1$$ This implies the series converges absolutely.
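Numerically, the dominating series $\sum n!/n^n$ converges quickly (Python, exact arithmetic):

from fractions import Fraction
from math import factorial

terms = [Fraction(factorial(n), n**n) for n in range(1, 25)]
print(float(terms[-1] / terms[-2]))   # ~0.376, heading to 1/e ~ 0.368
print(float(sum(terms)))              # partial sum ~1.8799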
Exact functor on inclusion
If $A \to B$ is a monomorphism, then you have an exact sequence $0 \to A \to B.$ Since $F$ is exact, you get the exactness of $0 \to F(A) \to F(B)$ which means that $F(A) \to F(B)$ is a monomorphism.
Projecting $512\times 1$ vector to lower dimensions
It sounds like what you're looking for is something along the lines of the "eigenfaces" application of PCA. I recommend you apply PCA to some set of "training data", which consists of $M$ column-vectors of length $N$. Let's say that $A$ is the training data matrix, and $\mu$ is the "average" matrix, i.e. $\mu = \frac 1{M} Aee^T$ where $e = (1,\dots,1)^T$. Let $u_1,u_2,u_3$ denote the first $3$ principal components, i.e. the first $3$ columns of $U$ in the SVD $(A - \mu) = U\Sigma V^T$. For your new measurement, a single $N \times 1$ vector $v$, the $3 \times 1$ vector $(u_1^T v, u_2^Tv,u_3^Tv)$ can be thought of as your projection that (in some sense) retains information about the data; moreso than any other projection onto $\Bbb R^3$. For more information on that, I would recommend looking at Bishop's "Pattern Recognition and Machine Learning".
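A minimal sketch of that projection (assuming NumPy; the data here are random stand-ins with $N=512$, $M=100$):

import numpy as np

N, M = 512, 100
A = np.random.randn(N, M)             # training data: M column-vectors of length N
mu = A.mean(axis=1, keepdims=True)    # column of the row-wise averages, as in the mean matrix
U, S, Vt = np.linalg.svd(A - mu, full_matrices=False)
W = U[:, :3]                          # first 3 principal directions u_1, u_2, u_3

v = np.random.randn(N, 1)             # a new measurement
projection = W.T @ v                  # the 3x1 vector (u_1^T v, u_2^T v, u_3^T v)
print(projection.ravel())             # (one often also centers v by subtracting mu first)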
Show that the stopping moment $\tau$ is finite, find the distribution of $X_{\tau}$, calculate $\mathbb{E}e^{\tau}$.
Hints:
1. Conclude from $\mathbb{E}(X_t)=1$ and $\text{var}(X_t)=1$ that $X_0=1$ almost surely.
2. Use the optional stopping theorem to show that $$\mathbb{E}((X_{t \wedge \tau}-1)^2) = \mathbb{E}(e^{t \wedge \tau})-1. \tag{1}$$
3. Using the continuity of the sample paths, prove that $|X_{t \wedge \tau}| \leq 3$.
4. Conclude from $(1)$ and the monotone convergence theorem that $$\mathbb{E}(e^{\tau})< \infty.$$ In particular, $\tau<\infty$ almost surely.
5. Show that $(X_t)_{t \geq 0}$ is a martingale. Apply once more the optional stopping theorem and the dominated convergence theorem to deduce that $$\mathbb{E}(X_{\tau})=\mathbb{E}(X_0)=1. \tag{2}$$ Since $\tau<\infty$ almost surely, it follows from the continuity of the sample paths that $X_{\tau} \in \{0,3\}$. Hence, $(2)$ can be written as $$3 \mathbb{P}(X_{\tau}=3) + 0 \cdot \mathbb{P}(X_{\tau}=0)=1.$$ Using $\mathbb{P}(X_{\tau}=3)+\mathbb{P}(X_{\tau}=0)=1$, calculate both probabilities.
6. Letting $t \to \infty$ in $(1)$ gives $$\mathbb{E}((X_{\tau}-1)^2)=\mathbb{E}(e^{\tau})-1.$$ Use Step 5 to calculate the left-hand side explicitly.
Is this a Positive Definite Matrix
Counterexample already with $\,1\times 1\;$ blocks: $$R=\begin{pmatrix}1&2\\2&1\end{pmatrix}$$
When does $\|\int{K_{j}(x,y)f(y)dy}\|\rightarrow 0$ implies $\lim_{j\rightarrow \infty}K_{j}(x,y)=0$ a.e.?
Take $K_j(x,y):=x\sin(jy)\chi_{(0,1)}(y)\chi_{(0,1)}(x)$: using an approximation argument and the Riemann-Lebesgue lemma, we get that $\left\lVert \int K_j(x,y)f(y)\,\mathrm dy\right\rVert\to 0$ for each $f\in\mathbb L^1(\mathbb R^n)$. But we don't have the almost everywhere convergence.
How should an unbounded integrable function behave on a bounded set?
There is no need for the redefinition. $f(x) = \frac1{\sqrt x}$ is continuous on $(0,1)$ (an open set). The function is unbounded on $(0,1)$ and integrable with integral $$\int_0^x f(t) \;\mathrm dt = 2\sqrt x$$
$[F(\alpha, \beta): F]=mn$ if $m,n$ are coprime
Yes, $\deg(\alpha) = m$ means that the minimum polynomial of $\alpha$ has degree $m$. It is equivalent to $[F(\alpha):F] = m$. To proceed, use the fact that in a tower of (algebraic) extensions $F \subseteq K \subseteq L$, $[L:F] = [L:K] [K:F]$.
Real Analysis, Folland Problem 6.1.14 $L^p$ spaces
For $1\leq p<\infty$, $$\Vert Tf\Vert_p^p =\int |Tf|^p d\mu =\int |fg|^pd\mu\leq \Vert g\Vert_{\infty}^p \Vert f\Vert_p^p,$$ where we have used the fact that $|g| \leq \Vert g\Vert_{\infty}$ $\mu$-a.e. Take $p$th roots to get $$\Vert Tf\Vert_p \leq \Vert g\Vert_{\infty}\Vert f\Vert_p.$$ This is exactly what you wanted for $p$ finite. For $p=\infty$, first show that $\Vert fg\Vert_{\infty}\leq \Vert g\Vert_{\infty}\Vert f\Vert_{\infty}$, so then $\Vert Tf\Vert_{\infty}\leq \Vert g\Vert_{\infty}\Vert f\Vert_{\infty}.$ Again, this is what you want. Edit: An operator $T$ on a normed vector space $(X, \Vert \cdot \Vert)$ is bounded if there is $C>0$ such that $$\Vert Tx \Vert \leq C \Vert x\Vert$$ for all $x \in X$. Equivalently, $$\sup_{\Vert x\Vert \leq 1} \Vert Tx\Vert <\infty.$$ In this case, the norm is $L^p$ norm, and the problem asks you to show that $C$ can be chosen to be $\Vert g\Vert_{\infty}$.
Ways to Choose Three Adjacent Elements from a Set
Let us call the number of ways of choosing a subset of $n$ people such that there exist 3 people in the subset who are adjacent in the original set $a_n$. Call such subsets good. Let's try to find a recurrence for $a_n$. Take any good subset of the first $n-1$ people, and consider it both with the person $n$ added and without: both are distinct good subsets. So, this accounts for $2 a_{n-1}$ ways. The subsets not yet counted must have $n$ in a triplet, and in fact the triplet $n-2, n-1, n$ should be the only one inside the subset, otherwise it has been counted before. This amounts to $2^{n-4} - a_{n-4}$ ways, as we need the number of subsets of the first $n-4$ people not containing any such triplet (note that the element $n-3$ cannot be in the subset). So, we get the recurrence $$a_{n} = 2a_{n-1} + 2^{n-4} - a_{n-4}$$ with initial conditions $a_1=a_2=0, a_3=1, a_4=3$. As regards a general formula, a search in the OEIS tells us that the generating function is $$G(x) = \frac{x^3}{(1-2x)(1-x-x^2-x^3)}$$ and also gives a nicer looking recurrence: $$a_n = 3a_{n-1} - a_{n-2} - a_{n-3} - 2a_{n-4}$$
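A brute-force cross-check of the recurrence for small $n$ (Python):

from itertools import combinations

def brute(n):
    cnt = 0
    for r in range(3, n + 1):
        for s in combinations(range(1, n + 1), r):
            # sorted tuples: three consecutive integers appear adjacently
            if any(s[i] + 1 == s[i+1] and s[i+1] + 1 == s[i+2] for i in range(len(s) - 2)):
                cnt += 1
    return cnt

a = [0, 0, 0, 1, 3]   # a_0..a_4, with a_0 = 0
for n in range(5, 13):
    a.append(2*a[n-1] + 2**(n-4) - a[n-4])
print([brute(n) for n in range(1, 13)] == a[1:])   # True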
example reduced and divisible modules
Let $M$ be any module. If $\{N_i\}_{i\in I}$ is a family of divisible submodules of $M$ (that is, each $N_i$ is a submodule of $M$, and each $N_i$ is divisible), then $N=\langle N_i\rangle$ is also a divisible submodule of $M$: it is a submodule, and given $n_1+\cdots+n_k\in N$, with $n_j\in N_{i_j}$, and $r\in R$, there exist $m_j\in N_{i_j}$ such that $rm_j = n_j$, and then $r(m_1+\cdots + m_k) = n_1+\cdots + n_k$. Therefore, if $M$ is any module, then it has a largest divisible submodule (namely, the submodule generated by all divisible submodules; note that the trivial submodule is divisible, so this collection is not empty). Call it $M_d$. Claim 1. The only divisible submodule of $M/M_d$ is the trivial submodule. That is, $(M/M_d)_d = \{0\}$. Proof. Let $K$ be a submodule of $M$ that contains $M_d$ such that $K/M_d$ is divisible; let $k\in K$, and let $r\in R$. Then there exists $u+M_d$ such that $r(u+M_d) = (k+M_d)$, so $k = ru+x$ for some $x\in M_d$, with $u\in K$. Since $M_d$ is divisible, there exists $v\in M_d$ (hence in $K$) such that $rv= x$. Thus, $k=r(u+v)$, and $u+v\in K$. Hence $K$ is a divisible submodule of $M$, so $K\subseteq M_d\subseteq K$. That is, $K/M_d$ is the trivial submodule. Claim 2. $M/M_d$ is reduced. Proof. Let $A$ be a divisible module, and let $\varphi\colon A\to M/M_d$ be a module homomorphism. Since $\varphi(A)$ is divisible (a quotient of a divisible module is divisible), then by Claim 1 $\varphi(A)=\{0+M_d\}$. Thus, $\varphi$ is the trivial map. Hence, $M/M_d$ is reduced. This gives you lots of nontrivial reduced modules: just pick any module $M$ that is not divisible, and consider $M/M_d$.
What is expectation of $X$?
Let $X_i$ take value $1$ if $A'[i]=i$ and let it take value $0$ otherwise. Then: $$X=\sum_{i=1}^nX_i$$ Apply linearity of expectation, as suggested in the comment by Mees.
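For instance, assuming (this is my reading of the setup, not stated here) that $A'$ is a uniformly random permutation, each term has $E[X_i]=\Pr(A'[i]=i)=1/n$, so $E[X]=n\cdot\frac1n=1$; a short simulation agrees:

    import random

    # Hypothetical setup: A' is a uniformly random permutation of 0..n-1
    # and X counts the fixed points A'[i] = i.
    n, trials = 10, 100_000
    total = 0
    for _ in range(trials):
        perm = list(range(n))
        random.shuffle(perm)
        total += sum(perm[i] == i for i in range(n))
    print(total / trials)  # close to 1.0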
Does discrete topology have interior?
In the discrete topology (at least according to the definition I know of) every set is clopen, so the interior of $A$ is just $A$, just as the closure of $A$ is just $A$.
Show that there exists holomorphic $f$ such that $f^2=\frac{\sin z}{z}$
Your idea for the first one looks sound to me. Concerning the second part, taking negative values is not a real problem for the function $\sin z / z$: we can always choose a holomorphic branch of $\sqrt{\cdot}$ near any point other than the origin. What really matters are the zeros of $\sin z / z$. If $u(z) := \sin z / z$ has no zero inside $|z| < r$, then we can define $\log u(z)$ by $$ \log u(z) := \int_{0}^{z} \frac{u'(\xi)}{u(\xi)} \, d\xi, $$ where the contour of integration can be any nice curve from $0$ to $z$ inside the disc $|z| < r$. So we can define $\sqrt{u(z)}$ by $$ {\textstyle\sqrt{u(z)}} := \exp(\tfrac{1}{2}\log u(z)). $$ The zeros of $u(z)$ are exactly $z = \pm \pi, \pm 2\pi, \cdots$, and more importantly, all of them are simple. Thus there is no way to define $\sqrt{u(z)}$ near those zeros. These two observations show that the radius of convergence of $f(z)$ at $z = 0$ is exactly the modulus of the nearest zero of $u(z)$, which is $\pi$.
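One can also estimate the radius numerically from the Taylor coefficients (a sketch; the root test converges slowly, so the estimates only drift toward $\pi \approx 3.14159$):

    import math
    import sympy as sp

    z = sp.symbols('z')
    f = sp.sqrt(sp.sin(z) / z)
    ser = sp.series(f, z, 0, 30).removeO()
    for k in (10, 20, 28):                 # even orders; odd coefficients vanish
        c = abs(ser.coeff(z, k))
        print(k, float(c) ** (-1.0 / k))   # root-test estimates of the radius
    print(math.pi)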
How to find vector answers over GF(2)?
Addition modulo $2$ is the (possibly more) familiar XOR operation on bits; we have $1 + 1 = 0$, and $x + 0 = x = 0 + x$ for all $x \in GF(2)$. However, there's something that may not be obvious when it comes to expressions like $x + y + z$. For sums with more than two terms, by definition we must add one pair at a time; $x + y + z = (x + y) + z$, and this can be viewed as a sequence of binary (meaning, having two inputs) XOR operations if you like, where the exact parenthesization doesn't matter, since $+$ is associative. So for example, $1 + 1 + 1 = (1 + 1) + 1 = 0 + 1 = 1$ (and more generally, a sum is $1$ if and only if an odd number of inputs are $1$). This might be a bit unusual, depending on how you think about XOR with more than two inputs (see for example this answer on EE.SE). Evidently some define this as a sequence of binary XOR operations, as we do here, while others output $1$ if and only if exactly one input is $1$. I hadn't given much thought to the latter viewpoint, and am not sure whether these differing views of multi-input XOR came up in practice, or in what fields one definition might be more common than the other. So with only two inputs, addition modulo $2$ acts exactly as you'd expect, but you may need some (mental) recalibration with more than two inputs.
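A minimal illustration in Python of multi-input XOR as iterated binary XOR, i.e. as addition modulo $2$:

    from functools import reduce
    from operator import xor

    bits = [1, 1, 1, 0, 1]          # four 1s, so even parity
    print(reduce(xor, bits))        # 0: chained binary XOR
    print(sum(bits) % 2)            # 0: the same thing as addition mod 2
    print((1 + 1 + 1) % 2)          # 1: an odd number of 1s gives 1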
Is this theorem valid only for parallelograms?
This theorem only works for objects where the area is the base times the height. Since a triangle's area is half the base times the height, its area will always be half the area of such a shape. Though parallelograms are the most conventional shape with this property, you could also create many wacky shapes whose area is the base times the height. For example, start with a rectangle (or parallelogram) and cut out any random shape on one of the horizontal sides and paste it onto the other. You now have a wacky shape whose area is base times height.
Numerical solution for coupled first-order ODEs
We can decouple the equations by defining the functions $F=A+B$ and $G=A-B$. Then $\frac{dF}{dz}=-2K(z)F,\qquad\frac{dG}{dz}=0$, so that $F(z)=F_0\exp\left[-2\int_0^z K(t)dt\right],\qquad G(z)=G_0,$ from which follows $A(z)=\frac{1}{2}F_0\exp\left[-2\int_0^z K(t)dt\right]+\frac{1}{2}G_0, \qquad B(z)=\frac{1}{2}F_0\exp\left[-2\int_0^z K(t)dt\right]-\frac{1}{2}G_0.$ In order to determine $F_0$ and $G_0$, we use the boundary conditions $\left(\xi = e^{-2\int_0^L K(t)dt}\right)$ $A(0) = \frac{1}{2}F_0+\frac{1}{2}G_0 = 1,\qquad B(L) = \frac{1}{2}F_0\xi-\frac{1}{2}G_0 = 1$, which yield $F_0=\frac{4}{\xi +1}, \qquad G_0=\frac{2(\xi-1)}{\xi+1}.$ Use the numerical method of your choice to compute the integral $\int_0^z K(t)dt$.
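A sketch of the final numerical step, with an assumed coupling $K(z)$ (the choices of $K$ and $L$ below are purely illustrative; swap in your own):

    import numpy as np
    from scipy.integrate import quad

    K = lambda z: 1.0 / (1.0 + z**2)
    L = 2.0

    I = lambda z: quad(K, 0.0, z)[0]      # \int_0^z K(t) dt
    xi = np.exp(-2.0 * I(L))

    F0 = 4.0 / (xi + 1.0)
    G0 = 2.0 * (xi - 1.0) / (xi + 1.0)

    A = lambda z: 0.5 * F0 * np.exp(-2.0 * I(z)) + 0.5 * G0
    B = lambda z: 0.5 * F0 * np.exp(-2.0 * I(z)) - 0.5 * G0

    print(A(0.0), B(L))  # both should equal the boundary values 1, 1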
Considerations about real polynomial with complex numbers
For a) and b), think about what must be true of the complex roots for the polynomial $p(x)$ to be real. Hint: $p(x)$ is invariant under complex conjugation, hence so is the right hand side. What does this say about the roots? For c), try expanding the right hand side.
A function that is both open and closed but not continuous
Let $X$ and $Y$ be topological spaces on the same underlying set (with at least two elements), where $Y$ carries the discrete topology and $X$ the indiscrete topology. Then the identity map $X\to Y$ is open and closed, since every subset of $Y$ is both open and closed; but it is not continuous, since singletons are open in $Y$ and not in $X$.
Finding the local and absolute extrema of a polynomial on a closed finite interval
For $x\in(0,5)$ we have $$y' = 3x^2 - 18x + 24=0 \implies x_1=2,\; x_2=4 \implies f(x_1)=18,\; f(x_2)=14,$$ and since $$y'' = 6x - 18 \implies f''(x_1)<0,\; f''(x_2)>0,$$ while on the boundary $$x_0=0,\; x_3=5 \implies f(x_0)=-2,\; f(x_3)=18.$$ Therefore $x_1$ is a point of local maximum, $x_2$ is a point of local minimum, $x_0$ is the point of absolute minimum $(-2)$, and $x_1$ and $x_3$ are points of absolute maximum $(18)$.
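For a quick check, the polynomial can be reconstructed from $y'$ and $f(0)=-2$ as $y=x^3-9x^2+24x-2$; a short SymPy verification:

    import sympy as sp

    x = sp.symbols('x')
    # reconstructed from y' = 3x^2 - 18x + 24 and f(0) = -2
    f = x**3 - 9*x**2 + 24*x - 2

    crit = sp.solve(sp.diff(f, x), x)              # [2, 4]
    vals = {c: f.subs(x, c) for c in crit + [0, 5]}
    print(vals)  # {2: 18, 4: 14, 0: -2, 5: 18}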
Looking for a boolean puzzle
Here's an example of formulating a simple "knights and knaves" puzzle using Boolean expressions. Consider the "Fork in the Road" puzzle: John and Bill are standing at a fork in the road. John is standing in front of the left road, and Bill is standing in front of the right road. One of them is a knight and the other a knave, but you don't know which. You also know that one road leads to Death, and the other leads to Freedom. By asking one yes–no question, can you determine the road to Freedom? Boolean variables: $L$: the left road leads to Freedom $J$: John is the knight You decide to ask John a question of the form "If I asked Bill $\ldots$, would he say Yes?" So new variables: $Q$: the true answer to the question to ask Bill $R$: Bill's answer to that question $S$: John's answer to your question Then: $$ \eqalign {R &= (Q \cap \neg J) \cup (\neg Q \cap J)\cr S &= (R \cap J) \cup (\neg R \cap \neg J)}$$ Simplify to: $S = \neg Q$. So you can make $Q = \neg L$ (i.e. ask John "If I asked Bill 'does the right road lead to Freedom', would he say Yes?"), and take John's answer as indicating whether the left road leads to Freedom.
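The simplification $S=\neg Q$ can be confirmed by exhausting the four truth assignments:

    from itertools import product

    # Verify S = not Q over all assignments of Q and J.
    for Q, J in product([False, True], repeat=2):
        R = (Q and not J) or (not Q and J)   # Bill's (possibly lying) answer
        S = (R and J) or (not R and not J)   # John's answer about Bill
        assert S == (not Q)
    print("S is always the negation of Q")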
Find the remainder when $2^{2013}$ is divided by $17$.
You don't even need the binomial theorem, you may just exploit the fact that $2^4$ and $17$ differ by one: $$2^{2013}= 2\cdot(2^4)^{503} \equiv 2\cdot(-1)^{503} \equiv -2 \equiv \color{red}{15}\pmod{17}. $$
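A one-line check with Python's built-in modular exponentiation:

    print(pow(2, 2013, 17))  # 15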
Find $\alpha \in \mathbb{R}$ such that the second derivative of a function exists at $x=0$.
Taylor at second order means that you have already calculated the second derivative. Just calculate the second derivative on the left and on the right, and make them equal. $$\left(e^{-\frac 1x}\right)''=-\frac{2x-1}{x^4}e^{-\frac 1x}$$ This term goes to $0$ as $x\to 0^{+}$. For the other side the second derivative is $$-\sin x+2\alpha+\frac{1}{(x+1)^2}$$ The limit at $0$ is $2\alpha+1$. So $2\alpha+1=0$ should give you the answer.
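A SymPy check of the displayed second derivative and its one-sided limit:

    import sympy as sp

    x = sp.symbols('x')
    lhs = sp.diff(sp.exp(-1/x), x, 2)
    # difference with the closed form above should vanish
    print(sp.simplify(lhs - (-(2*x - 1)/x**4 * sp.exp(-1/x))))  # 0
    print(sp.limit(lhs, x, 0, '+'))                             # 0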
Solve $z=px+qy+p^2+q^2$ by Charpit's Method
The given equation is \begin{equation} F=z-px-qy-p^2-q^2=0. \end{equation} Charpit's auxiliary equations are \begin{align*} & \dfrac{dp}{F_x+pF_z}=\dfrac{dq}{F_y+qF_z}=\dfrac{dz}{-pF_p-qF_q}=\dfrac{dx}{-F_p}=\dfrac{dy}{-F_q}\\ \implies & \dfrac{dp}{-p+p(1)}=\dfrac{dq}{-q+q(1)}=\dfrac{dz}{-p(-x-2p)-q(-y-2q)}=\dfrac{dx}{-x-2p}=\dfrac{dy}{-y-2q}\\ \implies & \dfrac{dp}{0}=\dfrac{dq}{0}=\dfrac{dz}{px+qy+2p^2+2q^2}=\dfrac{dx}{-x-2p}=\dfrac{dy}{-y-2q} \end{align*} First fraction implies, $dp=0\implies p=a$. Similarly, let $q=b.$ Now, $dz=pdx+qdy\implies dz=adx+bdy\implies z=ax+by+c$. Putting the value of $z$ in the given equation, $ax+by+c=ax+by+a^2+b^2\implies c=a^2+b^2$ Therefore, $z=ax+by+a^2+b^2$. This is the complete solution.
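A quick SymPy check that the complete solution satisfies the original equation $z=px+qy+p^2+q^2$:

    import sympy as sp

    x, y, a, b = sp.symbols('x y a b')
    z = a*x + b*y + a**2 + b**2
    p, q = sp.diff(z, x), sp.diff(z, y)                # p = a, q = b
    print(sp.simplify(z - (p*x + q*y + p**2 + q**2)))  # 0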
Probability Question about Eyes
Hint: two different people are selected from the group of ten. In how many ways can you select those two people from the four who do not have brown eyes? In other words, suppose the group is labeled $$\{B1, B2, B3, B4, B5, B6, N1, N2, N3, N4\}.$$ Then how many ways can you choose two different people from the $N$ subgroup? Next, how many ways can you choose two different people from the entire group?
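If it helps to see the numbers, the hinted counts can be evaluated directly (assuming the question asks for the probability that neither chosen person has brown eyes):

    from math import comb

    print(comb(4, 2), comb(10, 2), comb(4, 2) / comb(10, 2))
    # 6 45 0.1333..., i.e. 2/15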
The classification theorem for modules over $\mathbb{Z}[i]/(5)$
$R=\mathbb Z[i]/(5)$ is isomorphic to $\mathbb Z[i]/(2-i)\times\mathbb Z[i]/(2+i)$. But $\mathbb Z[i]/(2-i)$ and $\mathbb Z[i]/(2+i)$ are finite fields with $5$ elements, and this allows us to say that $R\simeq\mathbb F_5\times\mathbb F_5$. Thus any finitely generated $R$-module is a finitely generated $\mathbb F_5\times\mathbb F_5$-module. Let $M$ be a finitely generated $\mathbb F_5\times\mathbb F_5$-module. Then $M=(1,0)M\oplus(0,1)M$, and $(1,0)M$ and $(0,1)M$ are finitely generated $\mathbb F_5$-modules. As a consequence, $M\simeq\mathbb F_5^r\times\mathbb F_5^s$. Remark. This question raises a harder problem: what can be said about the structure of finitely generated modules over principal ideal rings (which are not necessarily domains!)? For a good start see this topic.
Why $\int_0^\infty x\delta (y-x)dx=y$?
Mathematical approach. The Schwartz distribution $\delta(y-x)\;dx$ is a measure: the unit point mass at the point $y$. Let's write $\epsilon_y$ for that unit mass. So we have $$ \int_0^\infty x\delta(y-x)\;dx \qquad\text{is notation meaning}\qquad \int_0^\infty x\;\epsilon_y(dx). $$ If $y>0$, we get the answer $y$. If $y<0$, we get the answer $0$. If $y=0$, then this is ambiguous: for the integral from $0$ to $\infty$, do we mean the set $[0,\infty)$ or the set $(0,\infty)$? A mathematician would write either $$ \int_{[0,\infty)} x\;\epsilon_y(dx)\qquad\text{or}\qquad \int_{(0,\infty)} x\;\epsilon_y(dx) $$ instead of something ambiguous.
Harish Chandra isomorphism:Invariant polynomial functions
Note that $\Lambda$ is an additive subgroup of $H^*$ containing all roots, so $\Lambda$ spans $H^*$. Let $\{\lambda_1, \dots, \lambda_l \}$ be a basis for $H^*$. Then $F[\lambda_1, \dots, \lambda_l] = B(H^*)$, where $F$ is the base field of characteristic zero. Denote by $P^l_d$ the subspace of $F[\lambda_1, \dots, \lambda_l]$ consisting of the homogeneous elements of degree $d$. We can prove the following claim by induction on $l$. Claim. $P^l_d$ is spanned by $\{\eta^d : \eta \in P^l_1 \}$. Proof. The case $l=1$ is trivial. Suppose $l \geq 2$ and let $d \geq 1$ be given. Fix $\lambda \in P^{l-1}_1$. It suffices to show that $\lambda^k \lambda_l^m \in \mbox{span}\{\eta^d : \eta \in P^l_1\}$ for all nonnegative integers $k, m$ such that $k+m=d$. Consider the $F$-vector subspace $W$ generated by $\mathfrak{B}=\{\lambda^d,~\lambda^{d-1}\lambda_l, ~\dots~, ~ \lambda \lambda^{d-1}_l, ~\lambda^d_l\}$. By construction, $\dim W= d+1$. Now consider the $d+1$ vectors $\lambda^d,(\lambda + \lambda_l)^d, \dots, (\lambda + d\lambda_l)^d$ of $W$. With respect to the basis $\mathfrak B$, the $(d+1) \times (d+1)$ matrix having $(\lambda+i\lambda_l)^d$ as its $(i+1)$-th column is $$ \begin{bmatrix} 1 & 1 & 1& \cdots & 1 \\ \binom{d}{1}0 & \binom{d}{1}1 & \binom{d}{1}2& \cdots &\binom{d}{1}d \\ \binom{d}{2}0^2 & \binom{d}{2}1^2 & \binom{d}{2}2^2&\cdots & \binom{d}{2}d^2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \binom{d}{d}0^d & \binom{d}{d}1^d & \binom{d}{d}2^d &\cdots & \binom{d}{d}d^d \end{bmatrix} $$ whose determinant is $\prod_{i=1}^d \binom{d}{i} \prod_{0 \leq i < j \leq d}(j-i) \neq 0$. (See the determinant of the Vandermonde matrix.) In other words, the $(\lambda + i\lambda_l)^d$ are linearly independent, and so they form a basis for $W$. Now $$\mathfrak B \subset W = \mbox{span} \{(\lambda + i\lambda_l)^d : 0 \leq i \leq d\} \subset \mbox{span}\{\eta^d : \eta \in P^l_1 \}$$ as desired. $\blacksquare$ Alternatively, we can prove the result by Jacobi's bialternant formula; see my other answer. That approach does not use induction.
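As a concrete sanity check of the determinant formula (a sketch for $d=5$; the entry in row $k$, column $i$ is $\binom{d}{k}\, i^k$ for $0\leq k,i\leq d$):

    import sympy as sp
    from math import comb, prod

    d = 5
    M = sp.Matrix(d + 1, d + 1, lambda k, i: comb(d, k) * i**k)
    det = M.det()
    formula = (prod(comb(d, k) for k in range(1, d + 1))
               * prod(j - i for i in range(d + 1) for j in range(i + 1, d + 1)))
    print(det, formula, det == formula)  # both 86400000, True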
Why do metric spaces that produce the same topology have different number theoretical difficulties?
The topology doesn't care about exact distance values in the metric; it only cares about what points are close to what others in a broad sense. So for a question where the exact distance values are important, you wouldn't expect the answer to stay the same under different metrics, even if they induce the same topology.
Existence of convergent sequence implies convergence of function as $t\to\infty$
Let $u(t)=\sin(t\pi)f$, where $f$ is a fixed non-zero element of $H_0^{1}(\Omega)$, and take $t_n=n$. Then $u(t_n)=0$ for every $n$, so the sequence $(u(t_n))$ converges, but $u(t)$ has no limit as $t\to\infty$.
Limit of the sum of functions and limit of the product of a function and a constant
C) I am assuming that you are considering a limit of infinity to be a valid limit. Let $x_0 = 0$ and $f(x)=1/x^2$, which has a limit of infinity. Let $g(x)$ be $1$ for $x>0$ and $0$ for $x\le0$. D) If we assume $c$ is not $0$, then for any sequence approaching $x_0$ from the left or the right, we can divide each term by $c$ and obtain a new sequence converging to the limit of $c\,f(x)/c$.
Eigenvalues of the Schur complement
Yes, from the Haynsworth inertia additivity formula $$ \text{In}(M)=\text{In}(C)+\text{In}(\underbrace{A-BC^{-1}B^T}_{M~\setminus~ C})=\text{In}(A)+\text{In}(\underbrace{C-B^TA^{-1}B}_{M~\setminus~ A}) $$ where $\text{In}()$ denotes the inertia (ordered set of the count of positive, negative and zero eigenvalues of a matrix).
Show $m^p+n^p\equiv 0 \mod p$ implies $m^p+n^p\equiv 0 \mod p^2$
From little Fermat, $m^p \equiv m \pmod p$ and $n^p \equiv n \pmod p$. Hence, $p$ divides $m+n$, i.e. $m+n = pk$. Note that $p$ must be odd here (for $p=2$ the claim fails, e.g. $m=n=1$), so that $(-n)^p + n^p = 0$. Then $$m^p + n^p = (pk-n)^p + n^p = p^2 M + \dbinom{p}{1} (pk) (-n)^{p-1} + (-n)^p + n^p\\ = p^2 M + p^2k (-n)^{p-1} \equiv 0 \pmod {p^2}$$
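An exhaustive check for a small odd prime (a sanity test, not a proof):

    p = 7
    for m in range(1, p * p):
        for n in range(1, p * p):
            if (m**p + n**p) % p == 0:
                assert (m**p + n**p) % (p * p) == 0
    print("verified for all m, n below", p * p)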
Combinations within different elements
I offer a possible approach: the number of solutions equals the coefficient of $x^N$ in the polynomial $\prod_j \left(1+x+x^2+\cdots+x^{K_j}\right)$. A Mathematica implementation (if you don't mind) might be

    solutions[n_, ks_] := Coefficient[Times @@ ((Total["x"^Range[0, #, 1]]) & /@ ks), "x"^n]

or, more efficiently, capping each $K_j$ at $n$ first:

    solutions[n_, ks_] := Coefficient[Times @@ ((Total["x"^Range[0, #, 1]]) & /@ (If[# > n, n, #] & /@ ks)), "x"^n]

(the pattern variable is a lowercase n, since N is a built-in Mathematica symbol). For your examples, solutions[100, {25, 60, 100, 200}] gives 92781, while solutions[20, {10, 100, 1000, 2500, 20, 10}] gives 49126.
Wave equation: how to derive the form $u(x,t)=f(x+ct)+g(x-ct)$
Let $\xi \equiv x - ct$ and $\eta \equiv x + ct$. Then \begin{align} \frac{\partial}{\partial x} &= \frac{\partial\xi}{\partial x}\,\frac{\partial}{\partial\xi} + \frac{\partial\eta}{\partial x}\,\frac{\partial}{\partial\eta} = \frac{\partial}{\partial\xi} + \frac{\partial}{\partial\eta}, \\[3mm] \frac{\partial}{\partial t} &= \frac{\partial\xi}{\partial t}\,\frac{\partial}{\partial\xi} + \frac{\partial\eta}{\partial t}\,\frac{\partial}{\partial\eta} = -c\,\frac{\partial}{\partial\xi} + c\,\frac{\partial}{\partial\eta}, \end{align} \begin{align} \frac{\partial^{2}}{\partial x^{2}} &= \frac{\partial^{2}}{\partial\xi^{2}} + 2\,\frac{\partial^{2}}{\partial\xi\,\partial\eta} + \frac{\partial^{2}}{\partial\eta^{2}}, \\[3mm] \frac{\partial^{2}}{\partial t^{2}} &= c^{2}\left(\frac{\partial^{2}}{\partial\xi^{2}} - 2\,\frac{\partial^{2}}{\partial\xi\,\partial\eta} + \frac{\partial^{2}}{\partial\eta^{2}}\right), \end{align} so that $$ \frac{\partial^{2}}{\partial x^{2}} - \frac{1}{c^{2}}\,\frac{\partial^{2}}{\partial t^{2}} = 4\,\frac{\partial^{2}}{\partial\xi\,\partial\eta}. $$ Hence $$ 0 = \frac{\partial^{2}u}{\partial x^{2}} - \frac{1}{c^{2}}\,\frac{\partial^{2}u}{\partial t^{2}} = 4\,\frac{\partial^{2}u}{\partial\xi\,\partial\eta} \quad\Longrightarrow\quad \frac{\partial u}{\partial\eta} = F(\eta) \quad\Longrightarrow\quad u = \underbrace{\int F(\eta)\,{\rm d}\eta}_{\equiv\, f(\eta)} + g(\xi), $$ that is, $$\color{#44f}{\large u = f(x + ct) + g(x - ct)}$$
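One can confirm with SymPy that every function of this form solves the wave equation, for arbitrary twice-differentiable $f$ and $g$:

    import sympy as sp

    x, t, c = sp.symbols('x t c')
    f, g = sp.Function('f'), sp.Function('g')
    u = f(x + c*t) + g(x - c*t)
    print(sp.simplify(sp.diff(u, x, 2) - sp.diff(u, t, 2) / c**2))  # 0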
Applying Open Mapping Theorem
I suspect there must be a simpler proof for Banach spaces, but as I don't see one, here is what I came up with, using a faint recollection of the proof of Schwartz' lemma (simplified, since we're dealing with Banach spaces, not Fréchet spaces): By the open mapping theorem, we know that there is a $C > 1$ such that $$\bigl( \forall y \in Y\bigr)\bigl(\exists x \in X\bigr)\bigl(\lVert x\rVert \leqslant C\cdot \lVert y\rVert\land y = F(x)\bigr).\tag{1}$$ Without loss of generality, assume that $K$ is contained in the (closed, not important) unit ball of $Y$ and nonempty. Since $K$ is compact, there is a finite set $M_1 = \{ y_{1,\nu} : 1 \leqslant \nu \leqslant n_1\}$ such that $$K \subset \bigcup_{\nu=1}^{n_1} B_{1/C}(y_{1,\nu}).$$ For every $1\leqslant \nu \leqslant n_1$, choose an $x_{1,\nu} \in X$ with $\lVert x_{1,\nu}\rVert \leqslant C$ and $F(x_{1,\nu}) = y_{1,\nu}$. Let $L_1 = \{ x_{1,\nu} : 1 \leqslant \nu \leqslant n_1\}$. For $k \geqslant 2$, if $M_1,\dotsc,M_{k-1}$ and $L_1,\dotsc,L_{k-1}$ have already been constructed, let $M_k = \{y_{k,\nu} : 1 \leqslant \nu \leqslant n_k\}$ be a finite set such that $$K \subset \bigcup_{\nu = 1}^{n_k} B_{1/C^k}(y_{k,\nu}).$$ For each $\nu$, choose a $\mu_k(\nu) \in \{ 1,\dotsc,n_{k-1}\}$ such that $\lVert y_{k,\nu} - y_{k-1,\mu_k(\nu)}\rVert < C^{1-k}$, and by $(1)$ a $z_{k,\nu} \in X$ with $\lVert z_{k,\nu}\rVert \leqslant C\cdot \lVert y_{k,\nu} - y_{k-1,\mu_k(\nu)}\rVert$ and $F(z_{k,\nu}) = y_{k,\nu} - y_{k-1,\mu_k(\nu)}$. Let $x_{k,\nu} = x_{k-1,\mu_k(\nu)} + z_{k,\nu}$ and $L_k = \{x_{k,\nu} : 1 \leqslant \nu \leqslant n_k\}$. By construction, $F(L_k) = M_k$, and hence, setting $L_\ast = \bigcup\limits_{k=1}^\infty L_k$, we see that $$F(L_\ast) = F\left(\bigcup_{k=1}^\infty L_k\right) = \bigcup_{k=1}^\infty F(L_k) = \bigcup_{k=1}^\infty M_k$$ is dense in $K$. $L_\ast$ is also totally bounded: Let $\varepsilon > 0$ be arbitrary. Choose a $k \geqslant 1$ such that $C^{-k} < \varepsilon(C-1)$. Then $$L_\ast \subset \bigcup_{j=1}^{k+2}\bigcup_{\nu=1}^{n_j} B_{\varepsilon}(x_{j,\nu}).\tag{2}$$ That is clear for $x \in L_m$ where $m \leqslant k+2$, so let $x'_m\in L_m$ where $m > k+2$. By construction, there is an $x'_{m-1} \in L_{m-1}$ with $\lVert x'_m-x'_{m-1}\rVert \leqslant C\cdot \lVert F(x'_m) - F(x'_{m-1})\rVert$ and $\lVert F(x'_m) - F(x'_{m-1})\rVert \leqslant C^{1-m}$, so $\lVert x'_m-x'_{m-1}\rVert \leqslant C^{2-m}$. If $m-1 > k+2$, again by construction, there is an $x'_{m-2} \in L_{m-2}$ with $\lVert x'_{m-1} -x'_{m-2}\rVert \leqslant C^{3-m}$. We thus obtain a finite sequence $(x'_{r})_{k+2 \leqslant r \leqslant m}$ with $\lVert x'_r - x'_{r+1}\rVert \leqslant C^{1-r}$ and $x'_r \in L_r$, whence $$\lVert x'_m - x'_{k+2}\rVert \leqslant \sum_{r=k+2}^{m-1} \lVert x'_{r+1} - x'_r\rVert \leqslant \sum_{r=k+2}^{m-1} C^{1-r} < \frac{1}{C^{k}}\frac{1}{C-1} < \varepsilon,$$ showing $(2)$. Since $\varepsilon > 0$ was arbitrary, $L_\ast$ is totally bounded. Hence $L := \overline{L_\ast}$ is compact, and therefore $$F(L) \subset \overline{F(L_\ast)} = K$$ is a dense compact subset of $K$, i.e. $F(L) = K$.
Multivariable Calculus Linear Approximation
$$ df = \frac{\partial f}{\partial x} \,dx+ \frac{\partial f}{\partial y}\, dy. \tag{the total differential} $$ $$ \Delta f \approx \frac{\partial f}{\partial x} \,\Delta x+ \frac{\partial f}{\partial y}\, \Delta y. $$ $$ f(0+\Delta x,0+\Delta y) \approx f(0,0) + \Delta f. $$ \begin{align} f(0,0) & =1 \\ \Delta x & = 0.01 \\ \Delta y & = -0.02. \end{align}
Formal Proofs of Logic
Without delving into proper Fitchean formalism, you can proceed as follows. 1) The premises say that $R$ is transitive and $R$ is symmetric. The conclusion says that (for all $x$) if $x$ bears $R$ to anything, then it bears $R$ to itself. Suppose $x, y$ are such that $R(x,y)$. By 2., $R(x,y) \to R(y,x)$, hence modus ponens applied to this with the previous step yields $R(y,x)$. By 1., $R(x,y) \wedge R(y,x) \to R(x,x)$, so $R(x,x)$. Because $x,y$ were arbitrary, $\forall x R(x,x)$. 2) Given 1. and 2. By 2. there is something $b$ such that $P(b) \wedge Q(b)$, so $P(b)$. In 1., taking $x = b$, we get $P(b) \to Q(a)$; thus, by modus ponens, $Q(a)$.
Open balls in $p$-adic numbers.
As in any metric space, you can define an open ball of radius any real number. But if the metric only takes values that are powers of $p$, we might as well restrict attention to balls of radius a power of $p$. I suspect that this was precisely the point the video was trying to make.
Constructing a sequence of sets
What you want is to construct sets $A_k$ with $\lambda(A_k)=1$ such that every $x\in\mathbb{R}$ belongs to infinitely many $A_k$. Let $B_{2n}=[n,n+1]$ and $B_{2n+1}=[-n-1,-n]$. Now, arrange the $B_k$'s in a grid, like this $$\begin{array}{cccccc} B_0&B_0&B_0&\dots&B_0&\dots\\ B_1&B_1&B_1&\dots&B_1&\dots\\ B_2&B_2&B_2&\dots&B_2&\dots\\ \vdots&\vdots&\vdots&\ddots&\vdots&\dots\\ B_k&B_k&B_k&\dots&B_k&\dots\\ \vdots&\vdots&\vdots&\ddots&\vdots&\dots\\ \end{array}$$ and assign $A_k$ to the $B_n$ in the $k$-th cell as you traverse this grid diagonally, in a zig-zag pattern (as in the standard enumeration of $\mathbb{N}\times\mathbb{N}$). Since each row repeats $B_n$ forever, every $B_n$ is used infinitely often, so every $x\in\mathbb{R}$ lies in infinitely many $A_k$.
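Here is a small sketch of that traversal: the generator yields, for $k=0,1,2,\dots$, the row index $n$ such that $A_k=B_n$, and every $n$ recurs infinitely often:

    def diagonal_rows():
        """Yield the row index n of B_n as the grid is traversed
        one anti-diagonal at a time, so every n occurs infinitely often."""
        d = 0
        while True:
            for n in range(d + 1):
                yield n
            d += 1

    gen = diagonal_rows()
    print([next(gen) for _ in range(15)])
    # [0, 0, 1, 0, 1, 2, 0, 1, 2, 3, 0, 1, 2, 3, 4]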
Is $f \circ g$ differentiable at $a$ iff $g$ is differentiable at $a$ and $f$ is differentiable at $g(a)$?
Consider the functions $$f : \mathbb R \to \mathbb R, f(x) = \begin{cases} 0 & x \le 0 \\ x & x \ge0 \end{cases}$$ $$g : \mathbb R \to \mathbb R, g(x) = \begin{cases} x & x \le 0 \\ 0 & x \ge0 \end{cases}$$ These functions are continuous everywhere and differentiable everywhere except at $a = 0$: $g$ is not differentiable at $0$, and $f$ is not differentiable at $g(0) = 0$. But $f \circ g = 0$, which is differentiable at $a = 0$.
An Expression for Nabla in Cylindrical Coordinates
You can analyse it in this way. Let $u=u_\rho \hat e_\rho + u_\phi \hat e_\phi + u_z \hat e_z$, $\nabla=\frac{\hat e_\rho}{h_\rho} \frac{\partial}{\partial \rho} + \frac{\hat e_\phi}{h_\phi} \frac{\partial}{\partial \phi} + \frac{\hat e_z}{h_z} \frac{\partial}{\partial z}$, and $\psi=\psi'(x,y,z)=\psi(\rho,\phi,z)$, where the symbols have their usual meanings. Hence we have $$(u\cdot\nabla)\psi=\frac{u_\rho}{h_\rho} \frac{\partial \psi}{\partial \rho} + \frac{u_\phi}{h_\phi} \frac{\partial \psi}{\partial \phi} + \frac{u_z}{h_z} \frac{\partial \psi}{\partial z}.$$ For cylindrical coordinates the scale factors are $h_\rho=1$, $h_\phi=\rho$, $h_z=1$, so $$(u\cdot\nabla)\psi=u_\rho \frac{\partial \psi}{\partial \rho} + \frac{u_\phi}{\rho} \frac{\partial \psi}{\partial \phi} + u_z \frac{\partial \psi}{\partial z}.$$ Hope my answer is clear enough.
Compute the optimal point $x^*$ and $f^* = f(x^*)$?
Hint $$ f(x) = \max_i\{a_i^{\top}x+b\} $$ $$ f(x) = \min_i\{a_i^{\top}x+b\} $$
Confusion with definition of irreducible.
If the definition of unity you have helpfully put in your comments is applied in the integers, consider the number $7=1\times 7=7\times 1=(-7)\times (-1)=(-1)\times (-7)$. It clearly has factorisations involving $-1$, and $-1$ is a unit but not a unity: note that $-1\times 7=-7\neq 7$. For the integers, therefore, we already need a definition of irreducible which goes beyond "unity" to the beginnings of the idea of "unit" - we need to include $-1$ as well as $1$, otherwise the definition does not work as we would like and every line needs a comment on $-1$ as an exception. The idea of unit is generalised in other contexts to all elements with a multiplicative inverse. The point is that the definition is not useful unless we pick out the units in this way.
Conditional Probability that sum of dice is even
Let $A$ be the event $X+Y$ is even, let $B$ be the event $X$ is odd, and let $C$ be the event $X$ is even. By the Law of Total Probability we have $$\Pr(A)=\Pr(A\mid B)\Pr(B)+\Pr(A\mid C)\Pr(C).\tag{1}$$ It looks as if you calculated $\Pr(A\mid B)$ and $\Pr(A\mid C)$ correctly. They are both $1/2$. But $\Pr(B)=\Pr(C)=1/2$. Substituting in (1) we get $\Pr(A)=(1/2)(1/2)+(1/2)(1/2)=1/2$. There are many other ways to compute the probability that $X+Y$ is even. We can do it the long way, by counting the number of ordered pairs $(x,y)$ where $x+y$ is even, and $1\le x\le 6$, $1\le y\le 6$, and dividing by $36$. A much simpler way related to the answer above is that whatever the first roll is, the probability the second roll results in an even sum is $1/2$.
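The "long way" is easy to carry out by machine:

    pairs = [(x, y) for x in range(1, 7) for y in range(1, 7)]
    even = sum((x + y) % 2 == 0 for x, y in pairs)
    print(even, len(pairs), even / len(pairs))  # 18 36 0.5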
How to compute $a \ast b \ast c$ or more generally $a_1^{b_1} a_2^{b_2} a_3^{b_3} \cdots$?
Write $$(ab)^n = \underbrace{ababab \ldots ab}_{n {\text{ times}}}$$ Then $$(ba)^n = \underbrace{bababab \ldots ba}_{n {\text{ times}}} = (\underbrace{bababab \ldots ba}_{n {\text{ times}}})bb^{-1} = b(\underbrace{ababab \ldots ab}_{n {\text{ times}}})b^{-1} = b(ab)^nb^{-1}.$$ If $(ab)^n = e$, then the line above gives $$(ba)^n = b(ab)^nb^{-1} = beb^{-1} = e.$$ Suppose we were to replace $e$ with any element $z \in Z$ that commutes with every other element of $G$. Then what would we have?
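A tiny illustration in $S_3$ (permutations as index tuples) that $ab$ and $ba$ have the same order; the helper names below are of course just for this sketch:

    def compose(p, q):
        # (p o q)(i) = p[q[i]]
        return tuple(p[i] for i in q)

    def order(p):
        e = tuple(range(len(p)))
        q, n = p, 1
        while q != e:
            q, n = compose(q, p), n + 1
        return n

    a = (1, 0, 2)   # the transposition (0 1)
    b = (0, 2, 1)   # the transposition (1 2)
    print(order(compose(a, b)), order(compose(b, a)))  # 3 3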