Proving that any lower bounded set has an infimum
The statement in your title is equivalent to the axiom of completeness. If you have a definition of the real numbers, for instance as Dedekind cuts of the rationals, then you can prove it.
How do I know that $\operatorname{rank}(A+B)\ge\operatorname{rank}(A)-\operatorname{rank}(B)$?
You can deduce it from the first fact: $$\operatorname{rank}(A) \le \operatorname{rank}(A+B) + \operatorname{rank}(-B) = \operatorname{rank}(A+B) + \operatorname{rank}(B) \implies \\ \operatorname{rank}(A) - \operatorname{rank}(B) \le \operatorname{rank}(A+B).$$
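Not in the original answer, but the inequality is easy to sanity-check numerically; a Python sketch using numpy (`matrix_rank` computes the rank via an SVD with a default tolerance):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    A = rng.integers(-2, 3, size=(4, 4))
    B = rng.integers(-2, 3, size=(4, 4))
    # rank(A) - rank(B) <= rank(A+B), as derived above
    assert np.linalg.matrix_rank(A + B) >= np.linalg.matrix_rank(A) - np.linalg.matrix_rank(B)
print("inequality held in all trials")
```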
vectors -- is this a recognizable quantity?
You can simplify it as the scalar product $(u+v, u-v)$. Geometrically you can say that it measures the orthogonality of these two vectors (i.e. this is equal to zero iff $u+v$ and $u-v$ are orthogonal).
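Concretely, expanding gives $(u+v)\cdot(u-v)=\|u\|^2-\|v\|^2$, so the product vanishes exactly when $u$ and $v$ have the same length. A quick Python illustration (not part of the original answer):

```python
import numpy as np

u = np.array([3.0, 4.0])            # |u| = 5
v = np.array([5.0, 0.0])            # |v| = 5
# (u+v).(u-v) = |u|^2 - |v|^2, so it vanishes exactly when u and v have equal length
print(np.dot(u + v, u - v))         # 0.0
print(np.dot(u, u) - np.dot(v, v))  # 0.0
```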
Absolute value definition
You should train yourself to read formulas without reference to specific variables. By the definition $$ |x|=\begin{cases} x & x\ge0 \\[4px] -x & x<0 \end{cases} $$ it is meant that the absolute value of a number is the number itself if it is greater than or equal to $0$, and the negative of the number if it is less than $0$. You should also avoid using a variable with two different meanings in the same statement. It seems that you want to see when $|2x-4|=-(2x-4)$. According to the definition, this happens if and only if $2x-4<0$, or $2x-4=0$. Why the second case? Because $0=-0$. On the other hand, if $2x-4>0$, then we cannot have $(2x-4)=-(2x-4)$, because one term is positive and the other one is negative. One might make the initial definition more symmetric by declaring $$ |x|=\begin{cases} x & x>0 \\[4px] 0 & x=0 \\[4px] -x & x<0 \end{cases} $$ but you can also note that $$ |x|=\begin{cases} x & x>0 \\[4px] -x & x\le0 \end{cases} $$ would be a completely equivalent definition.
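A small Python illustration of this case analysis (not part of the original answer):

```python
def abs_piecewise(x):
    """The two-case definition: x itself if x >= 0, its negative otherwise."""
    return x if x >= 0 else -x

# |2x - 4| equals -(2x - 4) exactly when 2x - 4 <= 0, i.e. x <= 2
for x in [0, 2, 3]:
    print(x, abs_piecewise(2 * x - 4) == -(2 * x - 4))
# 0 True, 2 True (since 0 = -0), 3 False
```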
Prove that G is subgroup of $S_\mathbb R$
By definition, the symmetric group $S_\mathbb{R}$ is the set of all bijections from $\mathbb{R}$ to $\mathbb{R}$. For each $n \in \mathbb{Z}$, let $f_n\colon\mathbb{R}\rightarrow\mathbb{R}$ be defined by $f_n(x) = x + n$. Let $G = \{f_n \mid n \in \mathbb{Z}\}$. Outline: Note that $f_0$ is the identity map on $\mathbb{R}$. Show that $f_m \circ f_n = f_{m+n}$. Deduce that $G$ is closed under composition. What is the result of $f_n \circ f_{-n}$ and $f_{-n} \circ f_{n}$? Deduce that each $f_n$ is bijective. In addition, you get that each $f_n$ has an inverse in $G$. Conclude that $G$ is a subgroup of $S_\mathbb{R}$.
Find numbers of the form $\frac{n(n+1)}{2}$ such that the digits of the number are all same
Let $T_n$ be the $n^{\text{th}}$ triangular number. Modulo $10$, $T_n$ is $0,1,3,5,6$ or $8$, which rules out repdigits made of the digits $2,4,7$ or $9$. Clearly $8$ itself is not a triangular number, and modulo $100$, $T_n$ is never $88$, so no longer repdigit of $8$s works either. Therefore $8$ does not work, and the only digits that occur are $1,3,5,6$ (as shown in the comments).
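A quick brute-force check (not in the original answer; a Python sketch assuming repdigits of at most 9 digits) lists the repdigit triangular numbers, confirming that the digits occurring are exactly $1,3,5,6$:

```python
# Brute-force: which repdigits up to 999,999,999 are triangular numbers?
triangular = {n * (n + 1) // 2 for n in range(1, 200_000)}  # covers all 9-digit numbers
repdigits = [int(str(d) * k) for d in range(1, 10) for k in range(1, 10)]
print(sorted(r for r in repdigits if r in triangular))  # [1, 3, 6, 55, 66, 666]
```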
PAQ=LU decomposition
The question on the determinant is more or less answered in the comments. If you use the convention that $L$ has ones on the diagonal, then $\det(A)=\pm \det(U)$, where the sign comes from $\det(P)\det(Q)=\pm 1$. Moreover $\det(U)=u_{11}\cdots u_{nn}$. This means you can calculate $|\det(A)|$ easily, and you can also see whether $A$ is singular just by looking at the diagonal elements of $U$. One has $PAQ=LU$, so $PA=LUQ^{-1}$, hence $P=LUQ^{-1}A^{-1}$. Now let $x_i$ be the columns of $A^{-1}$ and $p_i$ the columns of $P$; then one needs to solve $p_i=LUQ^{-1}x_i$ for $i\in \{1,\dots,n\}$. This is easily done by setting $Q^{-1}x_i=y_i$, solving $p_i=LUy_i$, and then recovering $x_i=Qy_i$. Finally notice that both $Q$ and $P$ are orthogonal (permutation) matrices, hence $Q^{-1}=Q^T$.
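SciPy exposes partial-pivoting LU ($A = PLU$, i.e. $Q=I$) rather than full pivoting, but that already illustrates the determinant observation; a sketch (not part of the original answer):

```python
import numpy as np
from scipy.linalg import lu

# det L = 1 (unit diagonal) and det P = +-1, so |det A| = |u_11 * ... * u_nn|
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
P, L, U = lu(A)                   # A = P @ L @ U
print(abs(np.prod(np.diag(U))))   # |det A| read off the diagonal of U
print(abs(np.linalg.det(A)))      # same value
```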
Automorphism extension property of Galois extensions
Take an algebraic closure $\overline{L}$ of $L$. Note that since $L/K$ is Galois, it is algebraic, separable and normal over $K$. Any element $\sigma \in \operatorname{Gal}(M|K)$ gives us a homomorphism $\sigma: M \rightarrow \overline{L}$ (just compose with the inclusion $M \hookrightarrow \overline{L}$). Since $L$ is algebraic over $M$, $\sigma$ can be extended to a homomorphism $\overline{\sigma}: L \rightarrow \overline{L}$ (this is proved, for example, in Keith Conrad's online notes, Zorn's Lemma and Some Applications, II). Note that $\overline{\sigma}$ is a $K$-homomorphism. As $L$ is a normal, algebraic extension of $K$, this means that $\operatorname{im}(\overline{\sigma}) \subset L$. But since $L/K$ is algebraic, any $K$-endomorphism $\overline{\sigma}: L \rightarrow L$ must be an automorphism (Lang's Algebra, Chapter 5, Proposition $2.1$). Then $\overline{\sigma}$ is the desired extension.
Integer multiplication vs. "multiple" notation in abstract algebra
Integers are elements in a field. For example, $1$ is the multiplicative identity, $2=1+1$, $3=1+1+1$, $4=1+1+1+1$, $5=1+1+1+1+1$, and so on. Also, $-1$ is the additive inverse of $1$, $-2$ is the additive inverse of $2$, $-3$ is the additive inverse of $3$, and so on. Therefore, you can treat them like regular elements and add, subtract, and multiply equations by integers. However, with division you have to be careful, because you might accidentally divide by $0$. For example, in $\Bbb{Z}_2=\{0, 1\}$, $2=1+1=0$, so you can't divide by $2$ because $2=0$. This can seem odd at first, but over time you will become wary of division: whenever you divide by an integer, check the characteristic of the field and make sure you are not dividing by $0$. Notice that $2ab=ab+ab$ because of the distributive property: $$2ab=(1+1)ab=1(ab)+1(ab)=ab+ab$$ Similar logic applies to multiplying by other integers, which is why your book's "multiple" notation is consistent with this definition of an integer.
After solving for eigenvalues, how do you solve for eigenvectors if your matrix has free variables?
Since eigenvectors are not unique, this will happen every time you calculate eigenvectors. What you can do, in order to solve this underdetermined system, is to set $x_1 = s$, $x_2 = t$ and express $x_3$ in terms of these two parameters. If you do this, you will end up with a vector depending on these two parameters, or, separating the parameters, an eigenvector which is a linear combination of two linearly independent vectors with $s,t$ as coefficients: $v= sv_1 + tv_2$, i.e. your eigenspace is $\operatorname{span} \{v_1,v_2\}$.
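As an illustration (a sympy sketch, with a hypothetical matrix chosen so that the eigenvalue $2$ leaves two free variables):

```python
import sympy as sp

# (A - 2I)x = 0 has two free variables; nullspace() returns one basis
# vector per free variable, spanning the eigenspace.
A = sp.Matrix([[2, 0, 0],
               [0, 2, 0],
               [0, 0, 3]])
basis = (A - 2 * sp.eye(3)).nullspace()
print(basis)
# [Matrix([[1],[0],[0]]), Matrix([[0],[1],[0]])]: every eigenvector for
# lambda = 2 is s*v1 + t*v2.
```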
Computational complexity of unknotting problem?
There is a difference between the following two problems. (1) Decide if a given abstract graph $G$ admits a linkless embedding in $\mathbb{R}^3$. [This can be solved in polynomial time, as you observe. The authors of the cited paper call this the "flat and linkless embedding" problem.] (2) Decide if a given embedding of an abstract graph $G$ into $\mathbb{R}^3$ is linkless. [This is the subject of Conjecture 1.1 of the cited paper. The authors prove this is solvable in polynomial time, relative to an oracle for the "link" problem.] Basically, in (2) you are given a fixed embedding that you have to understand. This is a lot more information than just the abstract graph, and appears to make the problem harder.
Representation of Complex-Valued Functions Isomorphic to Left Regular Representation
The left regular representation is a map $\rho^L:G\to GL(\mathbb{C}G)$, defined on the basis $\{e_x\mid x\in G\}\subset\mathbb{C}G$ by $$\rho^L(g)e_x=e_{gx}.$$ On the other hand, there is a basis of coordinate functions $\{\delta_x:G\to\mathbb{C}\mid x\in G\}\subset V$, where $$ \delta_x(h)=\begin{cases}1&\mbox{if }x=h\\0&\mbox{otherwise.}\end{cases} $$ Now, since $$ (g.\delta_x)(h)=\delta_x(g^{-1}h)=\begin{cases}1&\mbox{if }x=g^{-1}h\\0&\mbox{otherwise.}\end{cases} $$ we deduce that $g.\delta_x=\delta_{gx}$ (since $x=g^{-1}h\iff gx=h$). Therefore, we obtain a representation $\rho:G\to GL(V)$ given by $\rho(g)\delta_x=\delta_{gx}$. Let $T:\mathbb{C}G\to V$ be the linear map $T(e_x)=\delta_x$. Then, $T$ is the intertwiner you are looking for.
Diagonalization of symplectic matrix
Here is an overview of a direct and elementary argument proving the assertion.

General facts: A linear map $L : \mathbb{R}^n \to \mathbb{R}^n$ is diagonalizable over $\mathbb{R}$ if and only if there is a basis of $\mathbb{R}^n$ made of eigenvectors of $L$. In any such 'eigenbasis', $L$ is read as a diagonal matrix $D$ constituted of the eigenvalues of $L$. If $M$ is the matrix which represents $L$ in a given basis, and if $B$ is the matrix constituted of the readings in the given basis of the vectors in an eigenbasis, then $M = BDB^{-1}$. Hence our goal is to prove that given $M \in \mathrm{Sp}(2n, \mathbb{R})$ diagonalizable over $\mathbb{R}$, we can find a symplectic eigenbasis $B$ for $\mathbb{R}^{2n}$.

Observation 1: Let $M \in \mathrm{Sp}(2n, \mathbb{R})$. If $\lambda$ is an eigenvalue of $M$, then so is $\lambda^{-1}$. Hint: Matrices $M$ and $M^T$ always have the same spectrum, as they have the same characteristic polynomial. If $M$ is symplectic and if $J = \left( \begin{array}{cc} 0 & -I \\ I & 0 \end{array} \right)$, then $M^{-1} = J^{-1}M^T J$, so $M^{-1}$ and $M^T$ have the same spectrum.

Observation 2: Let $M \in \mathrm{Sp}(2n, \mathbb{R})$ be a matrix diagonalizable over $\mathbb{R}$. Consider two (not necessarily different) eigenvalues $\lambda$ and $\mu$ of $M$ such that $\lambda \mu \neq 1$, and eigenvectors $v$ and $w$ of $M$ associated to the eigenvalues $\lambda$ and $\mu$ respectively. If $\omega$ denotes the standard symplectic form on $\mathbb{R}^{2n}$, then we have $\omega(v, w) = 0$. Hint: compute $\omega(Mv,Mw)$ in two different ways.

Claim 1: Let $M \in \mathrm{Sp}(2n, \mathbb{R})$ be a matrix diagonalizable over $\mathbb{R}$. If $\lambda \neq \pm 1$ is an eigenvalue of $M$, then the sum of eigenspaces $E_{\lambda} \oplus E_{\lambda^{-1}}$ is a symplectic subspace of $(\mathbb{R}^{2n}, \omega)$.

Observation 3: Let $M \in \mathrm{Sp}(2n, \mathbb{R})$ be a matrix diagonalizable over $\mathbb{R}$. If $-1$ is an eigenvalue of $M$, then its multiplicity is even. If $1$ is an eigenvalue of $M$, then its multiplicity is even. Hint: $M$ has an even number of eigenvalues (counting multiplicities). Claim 1 implies that there is (counting multiplicities) an even number of eigenvalues different from $\pm 1$, hence an even number of eigenvalues equal to $\pm 1$. But $\mathrm{det}(M) = 1$.

Claim 2: Let $M \in \mathrm{Sp}(2n, \mathbb{R})$ be a matrix diagonalizable over $\mathbb{R}$. If $\lambda = \pm 1$ is an eigenvalue of $M$, then the eigenspace $E_{\lambda}$ is a symplectic subspace of $(\mathbb{R}^{2n}, \omega)$.

Assertion/corollary: Let $M \in \mathrm{Sp}(2n, \mathbb{R})$ be a matrix diagonalizable over $\mathbb{R}$. Then $\mathbb{R}^{2n}$ admits a symplectic basis constituted of eigenvectors of $M$. In other words, given the standard symplectic basis, the matrix $B$ constituted of the readings of the eigenvectors of $M$ in that standard basis is a symplectic matrix.
Finding the median and modes of a discrete uniform distribution
If the cumulative distribution function is strictly increasing, you can define the median $M$ unambiguously as $$M=F_X^{-1}(0.5)$$ in which $F_X$ is the cumulative distribution function of your variable $X$. But this definition does not make sense for a discrete distribution, since the cumulative distribution function is not injective, and hence not invertible. A possible solution is to define a pseudo-inverse $F_X^{-}$ as follows: $$F_X^{-}(y)=\inf\{x \in \mathbb{R}\mid F_X(x)\geq y\}$$ You can now define the median as $F_X^-(0.5)$. If you apply this definition to the uniform discrete distribution on the numbers $\{1,2,3,4,5\}$, you get $3$ as a result, while applying it to the die ($n=6$) also gives you $3$. Maybe you're not very satisfied with that last result. So let's try another definition. We now define $F_X^{+}$ as $$F_X^{+}(y)=\sup\{x \in \mathbb{R}\mid F_X(x)\leq y\}$$ The median now becomes $F_X^{+}(0.5)$. Let's see what that gives us for our previous distributions. For the uniform discrete distribution on $\{1,2,3,4,5\}$ we still get $3$, but for the die we now get $4$. Still not happy, I suppose? But wait, can't we just take the mean of those two? In fact, you can do even more: you can define $$F_X^{\alpha}(y)=\alpha F_X^{-}(y)+(1-\alpha)F_X^{+}(y)$$ for any $\alpha \in [0,1]$. It is now clear that if we pick $\alpha=0.5$, we get exactly what we wanted: in the even case, the median will be $\frac{n+1}{2}$.
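For illustration, here is a Python sketch of the two pseudo-inverses for a uniform discrete distribution; `median_alpha` is a hypothetical helper, not part of the answer:

```python
import numpy as np

def median_alpha(values, alpha=0.5):
    """alpha*F^-(0.5) + (1-alpha)*F^+(0.5) for a uniform distribution
    on the sorted list `values`."""
    values = np.asarray(values, dtype=float)
    cdf = np.arange(1, len(values) + 1) / len(values)
    f_minus = values[np.searchsorted(cdf, 0.5, side='left')]   # inf{x : F(x) >= 1/2}
    f_plus = values[np.searchsorted(cdf, 0.5, side='right')]   # sup{x : F(x) <= 1/2}
    return alpha * f_minus + (1 - alpha) * f_plus

print(median_alpha([1, 2, 3, 4, 5]))     # 3.0
print(median_alpha([1, 2, 3, 4, 5, 6]))  # 3.5
```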
Solve non-linear differential equation with boundary conditions
Let's try another change of variable: $u=\dfrac{1}{cf}$. Then $$u'=-\frac{f'}{cf^2},\qquad u''=\frac{2f'^2}{cf^3}-\frac{f''}{cf^2}=\frac{2f'^2-ff''}{cf^3}.$$ Thus we get $$uu''-u'^2=\frac{2f'^2-ff''}{c^2f^4}-\frac{f'^2}{c^2f^4}=\frac{f'^2-ff''}{c^2f^4}=1.$$ The new equation is $$uu''-u'^2=1$$ According to this link: How to solve $1+y'^2=yy''$? If we differentiate this and integrate, we get $u=Ae^{kx}+Be^{-kx}$. Substituting back into the initial equation [...] I get $4ABk^2=1$; let's call $\lambda=2Ak$, which is arbitrary. We get $u=\frac{1}{2k}(\lambda e^{kx}+\frac{1}{\lambda}e^{-kx})$ and finally $$f=\frac{2k}{c\,(\lambda e^{kx}+\frac{1}{\lambda}e^{-kx})}.$$ Note: there is a lot of questionable stuff done in the way $u$ is solved, so we may have missed some solutions, but since we substituted the ones we found back into the initial equation, we are at least sure that what we found is correct. Addendum Jan 02: The substitution $u=\frac{1}{cf}$ does not pose a problem, since $f$ already appears in a denominator in the original expression. The only delicate point comes from the resolution of $uu''-u'^2=1$, because we differentiate (assuming $u$ is three times differentiable). We also divide by $uu''$, but this is not an issue because $uu''=1+u'^2>0$. As stated by Han de Bruijn, we then get a linear ODE, so the uniqueness theorem applies. Now $u''$ has an expression that depends only on $f,f',f''$, so $u'''$ exists if $f'''$ exists; but $f''$ itself has an expression that depends only on $f,f'$, so we can carry on differentiating. By the same process, if one supposes $f$ is $C^1$, then it becomes automatically $C^\infty$, and so does $u$. Consequently we can be quite confident that we found all solutions that are at least $C^1$ and do not vanish.
In a simple undirected graph, does the combination of cycles to form a new cycle ALWAYS yield a cycle larger than its components?
Start with a $4$-cycle. Now add one diagonal edge, and subdivide that edge into $n$ edges by adding $n-1$ vertices. You now have two $(n+2)$-cycles that share $n$ edges; adding them will produce the original $4$-cycle no matter how big $n$ is. Here is an example with $n=4$.

```
    *
   /|\
  / * \
 /  |  \
*   *   *
 \  |  /
  \ * /
   \|/
    *
```
Lattice points on a line y = -x + p visible from the origin
It is not true: $(p,0)$ is on the line, and it is hidden by $(1,0)$. So is $(2p,-p)$, which is hidden by $(2,-1)$.
Prove that a complete hypercube graph is connected. i.e. there is a path between every pair of vertices.
OK, let's fix a natural $n$. Let $X$ be a set with $n$ elements. The $n$-dimensional hypercube graph is defined as $G = (V, E)$, where $V = \mathcal{P}(X)$ is the power set of $X$, and $E$ consists of all the pairs of subsets $A, B \subseteq X$ that differ by exactly one element. For any $A, B \subseteq X$, let us denote by $d(A, B)$ the number of elements in the symmetric difference of $A$ and $B$: $d(A, B) = |A \triangle B|$. Let us prove by induction on $k$ the following statement: for every $A, B$ such that $d(A, B) = k$, there is a path in $G$ from vertex $A$ to vertex $B$. Clearly, if we manage to prove this for all nonnegative integers $k$, we will prove that $G$ is connected. The base of induction is easy: for $k=0$, if $d(A, B) = 0$, then $A \triangle B = \varnothing$, so $A=B$ and there is an empty path from $A$ to itself. Now the inductive step. Suppose we have proved the statement for some $k$, and $d(A, B) = k + 1$. We need to show that there is a path in $G$ from $A$ to $B$. The set $A \triangle B$ contains $k+1$ elements, so it is nonempty. Pick any $x \in A \triangle B$ and set $C = A \triangle \{x\}$. $A$ and $C$ differ by a single element, so there is an edge between $A$ and $C$. It remains to show that there is a path from $C$ to $B$. Observe that $$C \triangle B = (A \triangle \{x\}) \triangle B = (A \triangle B) \triangle \{x\} = (A \triangle B) \setminus \{x\}.$$ Then $d(C, B) = |(A \triangle B) \setminus \{x\}| = (k+1)-1 = k$, and by the induction hypothesis there is a path from $C$ to $B$. The proof is complete. PS: I don't see any connection with complete bipartite graphs. Yes, the hypercube graph is bipartite, but so what?
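The induction is constructive: it toggles one element of $A \triangle B$ at a time. A short Python sketch of the resulting path (illustrative, not from the answer):

```python
def hypercube_path(A, B):
    """A path from vertex A to vertex B in the hypercube graph on subsets,
    toggling the elements of the symmetric difference one at a time."""
    path = [frozenset(A)]
    for x in A ^ B:                  # d(A,B) = |A symdiff B| steps
        path.append(path[-1] ^ {x})  # adjacent vertices differ in one element
    return path

for v in hypercube_path(frozenset({1, 2}), frozenset({2, 3})):
    print(sorted(v))  # e.g. [1, 2] -> [2] -> [2, 3] (step order may vary)
```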
Interpreting a Quotient Group ($D_8/\langle r^2\rangle$) in 2 Distinct Ways
When multiplying two complexes (subsets) $A,B \subset G$, the result is: $$ AB=\{ab\mid a \in A,b \in B\} $$ so for your problem case: $$ \{r,r^3\}^2 = \{rr,rr^3,r^3r,r^3r^3\} =\{r^2,1,1,r^2\}=\{1,r^2\} $$
Trigonometric Equation, quadratic using two functions
Hint Use $\sin^2 x = 1 - \cos^2 x$ to get: $$4 - 4\cos^2 x + 9\cos x - 6 = 0$$ Use the substitution $u = \cos x$ and solve the quadratic equation.
What line are inverse functions on the complex plane reflected over?
Since we have more dimensions to deal with when working with functions of a complex variable, you cannot expect there to be a reflection in a line, but if you consider the 4-dimensional space needed to draw a "graph" of a function $w=f(z)$, then you should find that you get something similar by doing a reflection in the plane $w=z$. The terminology can be a bit confusing: for a graph of $w=f(z)$, you need 2 real dimensions, or one complex dimension, for the domain ($z$), and 2 real dimensions, or one complex dimension, for the range ($w$). It is because of the 4-dimensional nature of things here that we tend not to attempt to draw graphs of complex functions of a complex variable, but rather work with things like mappings between the $z$ plane and the $w$ plane, as they are easier to visualise.
Cofinal $\Sigma_0$-embeddings between transitive models of $\mathrm{ZFC}^-$
As I've said in an earlier comment, I figured out how to prove this shortly after posting my question. Since I don't particularly like my solution, I've postponed answering my own question in the hope someone else would post a more elegant answer first. Now that that seems unlikely, let me settle this question by providing a sketch: The key to my proof is the proof of the Reflection Principle (as can be found in Kunen's Set Theory book, for example) -- the statement alone doesn't help, since $\pi[\mathrm{Ord}]$ may not contain an $N$-definable club (or any club, for that matter). First let's suppose that $(M; \in) \models \mathrm{ZFC}^-$. Let $\vec{x} \in M$ and $\phi$ be such that $(M; \in) \models \phi[\vec{x}]$. Fix your favorite $M$-definable, monotone, cofinal sequence $(M_{\alpha} \mid \alpha \in M \cap \mathrm{Ord})$ such that $\phi$ and all its subformulae are absolute between $M$ and all the $M_\alpha$ $(\dagger)$. For $\alpha \in M \cap \mathrm{Ord}$ let $N_\alpha = \pi(M_{\alpha})$. What threw me off at first is that $(N_{\alpha} \mid \alpha \in M \cap \mathrm{Ord})$ typically doesn't inherit all the nice additional properties of the $M_\alpha$-sequence, but it certainly remains monotone, cofinal and, for all $\alpha$ and all $\vec{y} \in M_{\alpha}$, we have $$ \begin{align*} (M; \in) \models \phi[\vec{y}] & \iff (M_{\alpha}; \in) \models \phi[\vec{y}] \\ & \iff (N_{\alpha}; \in) \models \phi[\pi(\vec{y})]. \end{align*} $$ Now use this to conclude, via an induction on the complexity of $\phi$, that $(N; \in) \models \phi[\pi(\vec{x})]$. If, on the other hand, $(N; \in) \models \mathrm{ZFC}^-$, basically the same proof works $(\ddagger)$. This time start with an $N$-definable, monotone, cofinal sequence $(N_\alpha \mid \alpha \in N \cap \mathrm{Ord})$ of transitive sets such that $\phi$ and all its subformulae are absolute between all the $N_\alpha$ and $N$, and consider the pullback $$ (\pi^{-1}[N_{\alpha}] \mid \alpha \in N \cap \mathrm{Ord}). $$ This sequence, in my mind, looks even nastier than the one before, but luckily it still consists of transitive sets $(\Diamond)$ (which helps in dealing with bounded formulae) that combined are cofinal in $M$, and we still get, for $\alpha \in N \cap \mathrm{Ord}$ and $\vec{y} \in \pi^{-1}[N_{\alpha}]$, that $$ \begin{align*} (\pi^{-1}[N_\alpha]; \in) \models \phi[\vec{y}] & \iff (N_{\alpha}; \in) \models \phi[\vec{y}] \\ & \iff (N; \in) \models \phi[\vec{y}]. \end{align*} $$ A similar induction as before then finishes the proof. $(\dagger)$ This exists by the Reflection Principle -- which is provable in $\mathrm{ZFC}^-$ -- and we could, if we wanted to, impose more requirements on this sequence (e.g. choose it to be continuous or to be a club-subsequence of $(V_\alpha^M \mid \alpha \in M \cap \mathrm{Ord})$). $(\ddagger)$ We would like to have a cofinal sequence $(M_{\alpha} \mid \alpha \in M \cap \mathrm{Ord})$ such that $(\pi(M_{\alpha}) \mid \alpha \in M \cap \mathrm{Ord})$ is monotone, continuous, cofinal and $N$-definable, to apply the Reflection Principle and pull the statement back via $\pi$. But we can't have that -- not even if we knew that $(M; \in) \models \mathrm{ZFC}^-$. $(\Diamond)$ To see that they are transitive, it seems easiest to look at $(\pi[M]; \in)$ as a $\Sigma_0$-substructure of $(N; \in)$ and view $\pi$ as the Mostowski collapse. Then $N_\alpha \cap \pi[M]$ collapses to the transitive set $\pi^{-1}[N_\alpha]$.
Borel Measurable Set Related to Sections
A part of the statement of Fubini's theorem asserts that for any Borel measurable set $E\in\mathcal{B}(\mathbb{R}^{n+1})$, $$ x\mapsto \lambda(E^x) $$ is a Borel measurable function. Note that $$ E^x \cap I = \left(E \cap [\mathbb{R}^{n}\times I] \right)^x. $$ Hence, it easily follows that $X_I$ is Borel measurable as a level set of a Borel measurable function. It is more subtle if we consider a Lebesgue measurable $E$. We can show that $$ x\mapsto \lambda(E^x) $$ is defined for almost every $x\in\mathbb{R}^n$ and it admits a measurable version. Thus, your set $X_I$ is Lebesgue measurable in this case.
Outer automorphism of A4 via automorphism of a free group
So, as stated in the comments, the solution was really at hand and mine was half a false problem. If we take $\alpha\in \operatorname{Aut}(F)$ such that $x^\alpha=x$ and $y^\alpha=xy$, it induces on $A_4$ an automorphism fixing every element of order $2$, which is possible for an inner automorphism: conjugation by any element of the Klein $4$-group in $A_4$ does exactly this. In this case $\alpha$ induces the inner automorphism given by conjugation by $(13)(24)$. That is to say, $\alpha$ is not a total mistake, but it is not what we are looking for. If we instead take $\beta\in \operatorname{Aut}(F)$ such that $x^\beta=x^{-1}$ and $y^\beta=y^{-1}$, it fixes $R$ (in fact $(x^2)^\beta=x^2$, $(y^3)^\beta=y^{-3}$ and $((xy)^3)^\beta=x^{-1}y^{-1}x^{-1}y^{-1}x^{-1}y^{-1}=(yx)^{-3}=((xy)^{-3})^{x^{-1}}\in R$). In fact, $\beta$ induces on $A_4$ the conjugation by $(13)$, and so is what we were looking for. A little remark: if we define $\gamma$ such that $x^\gamma=x$ and $y^\gamma=y^{-1}$, it would induce exactly the same automorphism on $A_4$, provided that $x^2$ belongs to $R$, which makes $x$ and $x^{-1}$ essentially the same when talking about the presented group.
Representation Theory group element as a vector
It can be confusing to think of the group element $g$ and the vector $\mathbf{g}$ as the "same thing." If you like, choose a basis $v_0, v_1, v_2, v_3$ which spans a representation determined by $g v_0 = v_1, g^2 v_0 = v_2, g^3 v_0 = v_3$, etc. Here is one perspective that may be confusing now, but will be useful if you continue on in representation theory. If you view the underlying vector space of the regular representation as functions from $G = C_4$ to $\mathbb{C}$ (i.e. just an assignment of a number to each group element), then $G$ acts on the space of functions on itself by right translation: $g \cdot f(x) = f(x g)$. This is the regular representation, and you can view $\mathbf{g}$ as the function taking the value $1$ at $g$ and $0$ at all the other elements (the "delta function at $g$").
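As a concrete sketch (assuming $C_4$ is written additively as $\{0,1,2,3\}$; not part of the original answer), the regular representation can be realized with permutation matrices:

```python
import numpy as np

n = 4  # C_4 = {0, 1, 2, 3} under addition mod 4

def rho(g):
    """Regular representation of C_4: permutation matrix with rho(g) e_x = e_{g+x}."""
    M = np.zeros((n, n))
    for x in range(n):
        M[(g + x) % n, x] = 1.0
    return M

# homomorphism property: rho(g) rho(h) = rho(g + h)
assert np.array_equal(rho(1) @ rho(3), rho(0))
print(rho(1))  # the 4x4 cyclic-shift matrix
```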
Finding the number of non-decreasing traversals of a grid
Replace each diagonal step by an up-step, and you get a path by rook moves from $\langle 1,1\rangle$ to $\langle n-m+1,m\rangle$. This path consists of $m-1$ up-steps and $n-m$ right-steps, for a total of $n-1$ steps. The steps can be taken in any order, and the path is completely determined when we know which steps are up-steps, so there are $\binom{n-1}{m-1}$ possible paths. On your $8\times 4$ board, for instance, there are $\binom73=35$ such paths. Note that the transformation from one of your paths to a rook path is reversible simply by replacing each up-step by a diagonal step, so it really does define a bijection between your paths on an $n\times m$ board on and rook paths on an $(n-m+1)\times m$ board.
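A quick cross-check in Python (`rook_paths` is a hypothetical brute-force counter, not from the answer), confirming the $\binom{n-1}{m-1}$ count and the $35$ for the $8\times 4$ board:

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def rook_paths(right, up):
    """Count paths made of `right` right-steps and `up` up-steps in any order."""
    if right == 0 or up == 0:
        return 1
    return rook_paths(right - 1, up) + rook_paths(right, up - 1)

n, m = 8, 4
# (n-m) right-steps and (m-1) up-steps, matching binom(n-1, m-1)
print(rook_paths(n - m, m - 1), comb(n - 1, m - 1))  # 35 35
```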
Strange Notation in Linear Algebra
First of all, $V^*=\{T\colon V\longrightarrow\mathbb{R}\,|\,T\text{ is linear}\}$. Besides, it is not correct to write $w\equiv T(x)$. What happens here is that, for every linear map $T\colon V\longrightarrow\mathbb{R}$, there is one and only one vector $w$ such that $T=\langle w,\cdot\rangle$, which means that $$(\forall v\in V):T(v)=\langle w,v\rangle.$$
Every set is contained in a $G_\delta$ set of the same Hausdorff dimension
Hint: This is easy from the definitions. First let's simplify one of the definitions a bit. Define $C_\alpha(E)$, sometimes called the Hausdorff capacity of $E$, by $$C_\alpha(E)=\inf\left\{ \sum r_j^\alpha: E\subset\bigcup B(x_j,r_j)\right\}.$$ It's important to note that $C_\alpha(E)$ is not the same as the Hausdorff measure $H_\alpha(E)$. But (exercise:) $C_\alpha(E)=0$ if and only if $H_\alpha(E)=0$. So we can use $C_\alpha$ in place of $H_\alpha$ in defining the Hausdorff dimension. Now assume the Hausdorff dimension of $E$ is $\alpha$. Lemma: If $\beta>\alpha$ and $\epsilon>0$ there exists an open set $V$ with $E\subset V$ and $C_\beta(V)<\epsilon$. So there is a sequence of open sets $V_n$ with $E\subset V_n$ and $C_{\alpha+1/n}(V_n)<1/n$. Let $V=\bigcap V_n$. Show that $C_\beta(V)=0$ for every $\beta>\alpha$; hence the dimension of $V$ is less than or equal to $\alpha$. Technicality: At some point you'll probably need to note this: If $\beta>0$, $r_j\ge0$ and $\sum r_j^\beta<1$ then $r_j<1$, hence $$\sum r_j^\gamma\le\sum r_j^\beta\quad(\gamma>\beta).$$
Which are the common points of a cone and the tangent hyperplane to this cone?
In general, YES, there are others. As an example, take $$A= \operatorname{diag}(1, -1,\; 1,-1),\; u=(1,1,0,0)^T,\; x= (0,0,1,1)^T.$$ It is easy to check that the equations are satisfied and that $x$ and $u$ are not parallel. Here is how I found this counterexample: Take any $x \in \Bbb R^n$. Let $$\{\lambda_1,\ldots,\lambda_n\}=\sigma(A),$$ that is, the set of eigenvalues of $A$, and let $$\{v_1,\ldots,v_n\}$$ be an orthonormal basis of eigenvectors, that is, $$\|v_i\|=1,\; v_i^Tv_j=0 \textrm{ if } i\neq j,\; Av_i=\lambda_i v_i.$$ These vectors are well defined since $A$ is symmetric and hence diagonalizable. Since the $v_i$'s are a basis of $\Bbb R^n,$ there exist scalars $\alpha_i,\beta_i$ for $i=1,\ldots,n$ such that: $$u= \sum_{i=1}^n \alpha_i v_i,\; x=\sum_{i=1}^n \beta_i v_i.$$ The conditions $$u^TAu= x^TAx= u^TAx=0$$ are now equivalent to $$0=\sum_{i=1}^n \alpha_i^2 \lambda_i=\sum_{i=1}^n \beta_i^2 \lambda_i=\sum_{i=1}^n \alpha_i\beta_i \lambda_i.$$ Given the values of $\alpha_i,$ we see that $x$ satisfies the equations if and only if the $\beta_i$'s satisfy the above system, which has 2 equations and $n$ variables. This made me realize that there are indeed other vectors $x$ that do not lie in $\operatorname{span}\{u\}.$ Hope this helps.
distance between two $\mathrm{SU}(2)$ matrices?
In short, I recommend two changes: adjust the phase of the product $ab$ rather than handling $a$ and $b$ separately, and take the absolute value of the argument of arccos. Also, we can make things slightly faster by noting that $\det(A^*B)$ has magnitude $1$, which means that $\det(A^*B)^{-1/2} = \overline{\det(A^*B)^{1/2}}$, where $\bar z$ denotes the complex conjugate of $z$. Here are the distances from $I$ for some common gates, as measured by the metric defined below: $X,Y,Z,H$: $\pi/2$; $S$: $\pi/4$. Here's a derivation/interpretation of the resulting "distance" (a metric over $U(2)/U(1)$). Let $A,B$ denote our unitary matrices, and let $A^*$ denote the conjugate transpose of $A$. If we consider $A^*B$ to be our "difference rotation", it suffices to define the distance of $A^*B$ from the identity matrix $I$. Note that $A^*B$ has eigenvalues $e^{i\theta_1},e^{i \theta_2}$ for some angles $\theta_1,\theta_2$, and we have $\theta_1 = \theta_2$ if and only if $A^*B$ is a multiple of the identity matrix. With that in mind, $|\theta_1 - \theta_2|$ gives us an angle corresponding to the "distance" of $A^*B$ from the identity. We further stipulate for this derivation that $\theta_1, \theta_2$ are such that $|\theta_1 - \theta_2| < 180^\circ$. If we multiply $A^*B$ by one of the values of $\det(A^*B)^{-1/2}$ to get $M = \frac{A^*B}{\det(A^*B)^{1/2}}$, we shift these angles to either $0^\circ \pm \theta$ or $180^\circ \pm \theta$, where $\theta = \frac{|\theta_1 - \theta_2|}{2}$. In the first case, the trace of $M$ is positive. In the second, the trace is negative. In either case, the trace of $M$ is real. In the first case, we find that $$ \operatorname{tr}(M) = e^{i \theta} + e^{-i \theta} = 2 \cos \theta = 2 \cos \frac{|\theta_1 - \theta_2|}{2}. $$ In this case, we could take our distance to be $\arccos(\frac 12\operatorname{tr}(M)) = \frac 12 |\theta_1 - \theta_2|$. In the second case, we find that $$ \operatorname{tr}(M) = e^{i(\pi +\theta)} + e^{i (\pi - \theta)} = -[e^{i \theta} + e^{-i \theta}] = -2 \cos \theta = -2 \cos \frac{|\theta_1 - \theta_2|}{2}. $$ In this case, we could take our distance to be $\arccos(-\frac 12 \operatorname{tr}(M)) = \frac 12 |\theta_1 - \theta_2|$. Putting everything together, we find that $$ d(A,B) = \arccos\left(\left|\frac{\operatorname{tr}(A^*B)}{2 \det(A^*B)^{1/2}}\right| \right) $$ (where $|x|$ denotes the absolute value of $x$) gives us a measure of the difference between two elements of $SU(2)$, up to a constant phase, which can be interpreted as an angle (from $0^\circ$ to $90^\circ$).
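For reference, here is a Python sketch of this final formula (the function `dist` and the particular gate matrices are illustrative, not from the original post); it reproduces the gate distances quoted above:

```python
import numpy as np

def dist(A, B):
    """arccos(|tr(A^*B)| / |2 det(A^*B)^{1/2}|); insensitive to global phase."""
    M = A.conj().T @ B
    c = abs(np.trace(M)) / abs(2 * np.sqrt(np.linalg.det(M) + 0j))
    return np.arccos(min(1.0, c))  # clamp against rounding error

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1, 1j])
for name, g in [("X", X), ("H", H), ("S", S)]:
    print(name, dist(I, g) / np.pi)  # 0.5, 0.5, 0.25 (in units of pi)
```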
How to specify a vector valued function from one-dimension to one-dimension?
"...it would essentially look like a mapping from real numbers to real numbers" Yes, it would. All vectors spaces of dimension $n$ are isomorphic. Since there can only be one linearly independent vector in a 1D vector space, a 1D vector is described by a single real number, say $\lambda$. Mappings between this 1D vector space and a different 1D vector space are just mappings from real numbers to real numbers, changing the value of $\lambda $ assigned to each 1D vector.
Why is such sum of cosines always zero?
This is valid for any regular polygon. It can be viewed physically: the sum of cosines or sines of equi-spaced forces is zero, by finding the resultant using a statics force-polygon vector diagram. Or mathematically: the formula for the sum of $n$ equi-spaced sines (interval $\alpha = 2\pi/n$), which can be derived by summing terms from De Moivre's theorem as a geometric progression, is $$ \frac{\sin (n \alpha/2)}{ \sin (\alpha/2)}\cdot \sin ( \text{average angle}) $$ and the sum of $n$ equi-spaced (interval $\alpha$) cosines is $$ \frac{\sin (n \alpha/2)}{ \sin (\alpha/2)}\cdot \cos ( \text{average angle}) $$ The quantity in the numerator is $\sin(n\alpha/2) = \sin \pi$, which vanishes.
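A quick numerical check in Python (an illustrative sketch, not part of the answer):

```python
import numpy as np

n = 7
theta = 0.3  # arbitrary starting angle; the angles are vertices of a regular 7-gon
angles = theta + 2 * np.pi * np.arange(n) / n
print(np.cos(angles).sum())  # ~ 0 (up to floating-point rounding)
print(np.sin(angles).sum())  # ~ 0
```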
A few questions about "small" transitive sets.
Assume $\{a\}$ is transitive. As you have shown that $\emptyset\in\{a\}$, we immediately have $a=\emptyset$ and $\{a\}=1$. Assume $\{a,b\}$ is transitive with $a\ne b$. As you have shown, $\emptyset$ is one of these, so assume wlog. $a=\emptyset$ and $b\ne\emptyset$. By transitivity, $b\subseteq \{\emptyset,b\}$. Using Foundation, we find $b\notin b$, hence $b=\emptyset$ (contradicting $b\ne a$) or as last option $b=\{\emptyset\}$. Hence $\{a,b\}=2$.
Mixed Strategy subgame perfect equilibrium
Hint: the probability must make player 2 indifferent between playing either strategies $p$ and $q$, or strategies $p$ and $r$.
Finding value of a summation series.
First ignore the restriction $i \ne j \ne k$. The result is $\left( \frac{1}{1 - \frac{1}{3}} \right)^3 = \frac{27}{8}$, since this is simply the geometric series $\sum_{i=0}^\infty \frac{1}{3^i}$ raised to the power 3. Now subtract the three series that come from the conditions $i=j, i=k, j=k$. These are each equal to $\frac{1}{1 - \frac{1}{3}} \cdot \frac{1}{1 - \frac{1}{9}} = \frac{27}{16}$. Finally add back two times the series coming from the terms with $i = j = k$, since it has been subtracted three times and should only be subtracted once. This series is equal to $\frac{1}{1 - \frac{1}{27}} = \frac{27}{26}$. The result is $$ \frac{27}{8} - 3 \cdot\frac{27}{16} + 2 \frac{27}{26} \approx 0.38942\dots $$
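A direct numerical check (a Python sketch; the indices start at $0$, matching the geometric series above, and "$i \ne j \ne k$" is read as all three indices distinct):

```python
# Truncate the tails: 3**-40 is negligible
N = 40
total = sum(3.0 ** -(i + j + k)
            for i in range(N) for j in range(N) for k in range(N)
            if i != j and j != k and i != k)
exact = 27/8 - 3 * 27/16 + 2 * 27/26
print(total, exact)  # both ~ 0.38942
```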
Commutative property of ring addition
Perhaps the comment refers to the fact that in order to generalize rings to structures with noncommutative addition, we cannot simply delete the axiom that addition is commutative, since, in fact, other axioms force addition to be commutative (Hankel, 1867 [1]). The proof is simple: apply both the left and right distributive law in different order to the term $\rm\:(1\!+\!1)(x\!+\!y),\:$ viz. $$\rm (1\!+\!1)(x\!+\!y) = \bigg\lbrace \begin{eqnarray}\rm (1\!+\!1)x\!+\!(1\!+\!1)y\, =\, x + \color{#C00}{x\!+\!y} + y\\ \rm 1(x\!+\!y)\!+1(x\!+\!y)\, =\, x + \color{#0A0}{y\!+\!x} + y\end{eqnarray}\bigg\rbrace\Rightarrow\, \color{#C00}{x\!+\!y}\,=\,\color{#0A0}{y\!+\!x}\ \ by\ \ cancel\ \ x,y$$ Thus commutativity of addition, $\rm\:x+y = y+x,\:$ is implied by these axioms: $(1)\ \ *\,$ distributes over $\rm\,+\!:\ \ x(y+z)\, =\, xy+xz,\ \ (y+z)x\, =\, yx+zx$ $(2)\ \, +\,$ is cancellative: $\rm\ \ x+y\, =\, x+z\:\Rightarrow\: y=z,\ \ y+x\, =\, z+x\:\Rightarrow\: y=z$ $(3)\ \, +\,$ is associative: $\rm\ \ (x+y)+z\, =\, x+(y+z)$ $(4)\ \ *\,$ has a neutral element $\rm\,1\!:\ \ 1x = x$ In order to state this result concisely, recall that a SemiRing is that generalization of a Ring whose additive structure is relaxed from a commutative Group to merely a SemiGroup, i.e. here the only hypothesis on addition is that it be associative (so in SemiRings, unlike Rings, addition need not be commutative, nor need every element $\rm\,x\,$ have an additive inverse $\rm\,-x).\,$ Now the above result may be stated as follows: a semiring with $\,1\,$ and cancellative addition has commutative addition. Such semirings are simply subsemirings of rings (as is $\rm\:\Bbb N \subset \Bbb Z)\,$ because any commutative cancellative semigroup embeds canonically into a commutative group, its group of differences (in precisely the same way $\rm\,\Bbb Z\,$ is constructed from $\rm\,\Bbb N,\,$ i.e. the additive version of the fraction field construction). Examples of SemiRings include: $\rm\,\Bbb N;\,$ initial segments of cardinals; distributive lattices (e.g. subsets of a powerset with operations $\cup$ and $\cap$; $\rm\,\Bbb R\,$ with + being min or max, and $*$ being addition; semigroup semirings (e.g. formal power series); formal languages with union, concat; etc. For a nice survey of SemiRings and SemiFields see [2]. See also Near-Rings. [1] Gerhard Betsch. On the beginnings and development of near-ring theory. pp. 1-11 in: Near-rings and near-fields. Proceedings of the conference held in Fredericton, New Brunswick, July 18-24, 1993. Edited by Yuen Fong, Howard E. Bell, Wen-Fong Ke, Gordon Mason and Gunter Pilz. Mathematics and its Applications, 336. Kluwer Academic Publishers Group, Dordrecht, 1995. x+278 pp. ISBN: 0-7923-3635-6 Zbl review [2] Hebisch, Udo; Weinert, Hanns Joachim. Semirings and semifields. $\ $ pp. 425-462 in: Handbook of algebra. Vol. 1. Edited by M. Hazewinkel. North-Holland Publishing Co., Amsterdam, 1996. xx+915 pp. ISBN: 0-444-82212-7 Zbl review, AMS review
Given that $(g \circ f)$ is invertible, can we conclude that $f$ and $g$ are invertible?
No. For example, for any set $X$ with at least two points and $x\in X$, the obvious maps $f:\{x\} \to X$ and $g:X \to \{x\}$ satisfy $g\circ f=\mathrm{Id}_{\{x\}}$, but it is easy to see that neither $f$ nor $g$ is invertible.
Value of $\psi\left(\frac{1}{2}\right)$
Note the simple fact that $$\lim_{N\to \infty} \sum_{k=1+\lfloor{\frac{N-1}{2}}\rfloor}^{N} \frac{2}{2k+1} =\lim_{n\to\infty} \int_{n}^{2n} \frac{2}{2x+1} \ dx= \log(2)$$ Q.E.D.
Canonical triangle for diagonal map in a triangulated category
Yes, that’s right. In any additive category the diagonal is a split monomorphism which is a kernel for $\pi_1-\pi_2$. Split short exact sequences in triangulated categories are distinguished triangles, with trivial connecting maps, and in fact give the only examples of monomorphisms and epimorphisms of any kind in triangulated categories.
Convert clock time to one number?
Just convert the total time to seconds. The most seconds is clearly the "largest" time. $$ T=3600h+60m+s $$
Regular permutations
Interleave the entries from each cycle to create one big cycle that starts with the first element of each cycle, then (in the same ordering of cycles) the second element of each cycle, then the third element of each cycle, etc.
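For example (a Python sketch of the interleaving; the cycles $(1\,2\,3)(4\,5\,6)$ are an illustrative choice):

```python
def interleave(cycles):
    """Merge equal-length cycles into one big cycle: first entries of each
    cycle, then second entries, and so on."""
    return [x for group in zip(*cycles) for x in group]

big = interleave([(1, 2, 3), (4, 5, 6)])
print(big)  # [1, 4, 2, 5, 3, 6]
# The square of the 6-cycle (1 4 2 5 3 6) sends 1->2->3->1 and 4->5->6->4,
# i.e. it recovers the original pair of 3-cycles.
```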
First five terms of the sequence $a_n=2/e^n$
Looks good to me. As for worrying about whether $n$ is a natural number or a rational number, I wouldn't. Unless otherwise stated, (I believe) it is always assumed that a sequence is a map from the naturals to some other set. Hence, you are correct in evaluating the first five terms of your sequence as $$\frac{2}{e}, \frac{2}{e^2}, \dots ,\frac{2}{e^5}$$ The terms of the sequence cannot be simplified further because $e^n$ is irrational for integers $n>0$, so you can either leave your answer with the exponent in the denominator, or flip it up to the numerator with a negative exponent. However, the directions on the page you linked to specify to leave your answer with positive exponents only, so I wouldn't make the change.
$\frac{1}{{1 + {\left\| A \right\|} }} \le {\left\| {{{(I - A)}^{ - 1}}} \right\|}$
The answer by @MotylaNogaTomkaMazura shows why this holds if $\|\cdot\|$ is submultiplicative and satisfies $\|I\|=1$. However, if it is not submultiplicative, this need not be true. Consider, e.g., the $\max$-norm $$ \|X\|:=\max_{i,j}|x_{ij}|. $$ Clearly, $\|\cdot\|$ is a matrix norm (the matrix analogue of the vector $\infty$-norm). Also, $\|I\|=1$. But with $$ X=\pmatrix{0&1\\1&1},\quad Y=\pmatrix{1&1\\0&1}, $$ $$ \|XY\|=2\not\leq 1=\|X\|\|Y\|, $$ so $\|\cdot\|$ is not submultiplicative. With $$ A=\frac{1}{5}\left(\begin{array}{rr}-3&4\\-4&-3\end{array}\right), $$ we have $\|A\|=4/5<1$. But $$ \frac{1}{1+\|A\|}=\frac{5}{9}\not\leq\frac{1}{2}=\|(I-A)^{-1}\|. $$
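The counterexample is easy to verify numerically (a Python sketch, not part of the answer):

```python
import numpy as np

maxnorm = lambda M: np.abs(M).max()
A = np.array([[-3.0, 4.0], [-4.0, -3.0]]) / 5
lhs = 1 / (1 + maxnorm(A))                    # 5/9 ~ 0.556
rhs = maxnorm(np.linalg.inv(np.eye(2) - A))   # 1/2
print(lhs, rhs, lhs <= rhs)                   # 0.555..., 0.5, False
```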
Solving a quadratic complex equation
OK, here is my idea, just a setup as you requested: Multiply by $-i$ to get $z^2+(-2-2i)z+2-2i(\sqrt{3}-1)=0$. Keeping the $z$ terms on the left, we have $z^2+(-2-2i)z=-2+2i(\sqrt{3}-1)$. Completing the square: $(z-1-i)^2=-2+2i(\sqrt{3}-1)+(-1-i)^2$. Working out the right-hand side and setting $z-1-i=t$, we get the equation $t^2=-2+2i\sqrt{3}$. Can you figure out the argument and modulus of the complex number on the right and go from there to extract the roots for $t$? (Got to go now :) )
If $a>1$ and $f\in L^a([0,1])$, show that $\|f\|_a\to \|f\|_1$
The Dominated Convergence Theorem does apply to sequences. But you can apply it to any sequence $a_n$ with $a_n\to1$. So you are proving that $\varphi(a_n)\to\varphi(1)$ for every sequence $\{a_n\}$ with $a_n\to1$, where $\varphi(a)=\int |f|^a$. This implies that $\varphi$ is continuous (continuity on a metric space can be tested on sequences). Now you still need to deal with the case where $f$ is not bounded.
Verifying if a function is Lipschitz
Here is another approach: Choose $x,y$. If both lie in $K^\circ$ or neither lie in $K^\circ$ then since $1,u$ are Lipschitz we see that $v$ satisfies a Lipschitz condition. So, suppose $x \in K^\circ$, $y \in \bar{\Omega} \setminus K^\circ$. Let $\phi(t) = x+t(y-x)$, and let $T=\sup_{t \in [0,1]} \{ t | \phi(t) \in K^\circ \}$. Since $K$ is closed, we see that $T \in (0,1)$, and continuity shows that $v(\phi(T)) = u(\phi(T))=1$. Then \begin{eqnarray} |v(x)-v(y)| &\le & |v(x)-v(\phi(T))|+ |v(\phi(T))-v(y)| \\ &=& |v(\phi(T))-v(y)| \\ &=& |u(\phi(T))-u(y)| \\ &\le& L \|\phi(T)-y\| \\ &=& L (1-T) \|x-y\| \\ &\le& L \|x-y\| \end{eqnarray}
Longest shortest path in an undirected unweighted graph
All considered graphs are finite, simple and undirected. Let $G=(V,E)$ be such a graph on $n$ vertices. The minimum length of the paths connecting two vertices $v_x,v_y \in V$ is called the distance between $v_x$ and $v_y$ and is denoted by $d(v_x, v_y)$. If $G$ is disconnected and $v_x$ and $v_y$ are not in the same component, we define $d(v_x,v_y)=\infty$. The $n \times n$ graph distance matrix $D=(d_{xy})$ consists of all graph distances from vertex $v_x$ to vertex $v_y$. The maximum value of all entries of $D$ is called the diameter of $G$. Example: Given $G=(V,E)$, let $V=\{v_1,v_2,\cdots,v_n\}$ and $E=\{(v_1,v_2),(v_2,v_3), (v_2,v_5),(v_2,v_4)\}$. The $(x,y)^{th}$ entry of the distance matrix $D$ of $G$ is the distance $d(v_x,v_y)$, yielding $$ \begin{pmatrix} 0 & 1 & 2 & 2 & 2 \\ 1 & 0 & 1 & 1 & 1 \\ 2 & 1 & 0 & 2 & 2 \\ 2 & 1 & 2 & 0 & 2 \\ 2 & 1 & 2 & 2 & 0 \end{pmatrix}$$ Now, finding the diameter using the distance matrix $D$ is easy. Question: How do we calculate $D$? We could use the Floyd-Warshall algorithm, which runs in $O(n^3)$. But since our graphs are undirected and unweighted, we can do better. There is an algorithm by Chan solving our problem for an unweighted and undirected graph with $n$ vertices and $m$ edges in $O(nm)$. Yamane and Kobayashi implemented an algorithm for unweighted graphs in general. Their paper contains a small list of known algorithms solving the same problem. You can find a preprint on arXiv.
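For completeness, the simple $O(nm)$ approach (one breadth-first search per vertex) is easy to implement; a Python sketch, assuming the graph is connected:

```python
from collections import deque

def diameter(adj):
    """Diameter of a connected unweighted undirected graph, given as a dict
    {vertex: iterable of neighbours}; one BFS per vertex, O(n*m) overall."""
    best = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))
    return best

# the star graph from the example above
adj = {1: [2], 2: [1, 3, 4, 5], 3: [2], 4: [2], 5: [2]}
print(diameter(adj))  # 2
```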
Let $R$ be a relation on a set $A$. Show that if $R \circ R \subseteq R$, then $R$ is transitive
You should start with $(x,y) \in R$ and $(y,z) \in R$ and from these two statements show that $(x,z) \in R$. This shows transitivity. Now, the two statements show that indeed $(x,z) \in R \circ R$ by definition (using $y$ as intermediate). We have the assumption that $R \circ R \subseteq R$, so we know that $(x,z) \in R$ as required, as $(x,z)$ is in the left hand side.
What exactly is a linear extension ( in intuitive terms)?
Standard lexicographic order, $L_1$, and colexicographic order, $L_2$, defined by $$ \begin{aligned} (a,b)L_1(c,d)&\Longleftrightarrow a<c\text{ or } (a=c\text{ and }b\le d),\\ (a,b)L_2(c,d)&\Longleftrightarrow b<d\text{ or } (b=d\text{ and }a\le c), \end{aligned} $$ are linear orders. When applied to your set $X$, these reduce to $$ \begin{aligned} (a,b)L_1(c,d)&\Longleftrightarrow a\le c,\\ (a,b)L_2(c,d)&\Longleftrightarrow b\le d, \end{aligned} $$ since the cases $a=c$ and $b=d$ only arise when $(a,b)=(c,d)$. You can check that both are linear extensions of $R$, that is, that $(a,b)R(c,d)\Longrightarrow (a,b)L_1(c,d)$ and $(a,b)R(c,d)\Longrightarrow (a,b)L_2(c,d)$. You can also check that the intersection of $L_1$ and $L_2$ is $R$.
Partial Derivatives vs Implicit Differentiation
The chain rule says: $$ dG = \frac{\partial G}{\partial x} dx + \frac{\partial G}{\partial y}dy. $$ If the point $(x,y)$ moves along a level set of $G$, then we have $dG=0$. Hence $$ \frac{dy}{dx} = \frac{-\partial G/\partial x}{\phantom{-}\partial G/\partial y} = -\frac{2xy^4 - 12x^3 y}{4x^2y^3 -3x^4} $$ and then we can cancel an $x$. Now let's try implicit differentiation: $$ x^2y^4 - 3x^4y = 0. $$ $$ 2x y^4 + 4x^2 y^3 \frac{dy}{dx} - 12x^3y - 3x^4\frac{dy}{dx} =0. $$ Push the two terms not involving the derivative to the other side; then pull out the common factor, which is the derivative; then divide both sides by the other factor. We get $$ \frac{dy}{dx} =\frac{12x^3y - 2xy^4}{4x^2y^3 - 3x^4} $$ and it's the same thing.
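Both computations are easy to confirm symbolically; a sympy sketch (not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)
G = x**2 * y**4 - 3 * x**4 * y
# differentiate G(x, y(x)) = 0 with respect to x and solve for y'
dydx = sp.solve(sp.diff(G, x), sp.diff(y, x))[0]
# matches (12x^3 y - 2x y^4)/(4x^2 y^3 - 3x^4) after cancelling the common x
print(sp.simplify(dydx))
```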
Minimum value of $\frac{ax^2+by^2}{\sqrt{a^2x^2+b^2y^2}}$
Note first that if $x^2+y^2=1$, then $$(a+b)(ax^2+by^2)=a^2x^2+b^2y^2+ab(x^2+y^2)=p^2+ab,$$ where $p=\sqrt{a^2x^2+b^2y^2}$. Now, assuming $ab>0$, $$\dfrac{p^2+ab}p\ge2\sqrt{p\cdot\dfrac{ab}p}=2\sqrt{ab}$$ by the AM-GM inequality, so the given expression is at least $\dfrac{2\sqrt{ab}}{a+b}$, with equality when $p=\sqrt{ab}$.
Finding y-intercept of linear equation with two points.
Yes it is. As you say, we can find the slope: $$m=\frac{y_2-y_1}{x_2-x_1}.$$ Thus, $$y_1=mx_1+b=\frac{y_2-y_1}{x_2-x_1}x_1+b,$$ so $$y_1-\frac{y_2-y_1}{x_2-x_1}x_1=\frac{x_2y_1-x_1y_2}{x_2-x_1}=b.$$
Determining trigonometric limit
I will assume you want to find $$\lim_{x\rightarrow\infty}e^x\tan^{-1} x$$ (since the question was not clearly stated, and I cannot edit since another edit is pending). $\tan^{-1} x$ will go toward $\frac{\pi}{2}$ (inverse trig is different than trig, and $\tan^{-1} x\not=\frac{1}{\tan x}$). $e^x$ goes toward $\infty$, so the limit is $\infty$.
if $f(x,y)=\phi(x-cy)+\phi(x+cy)$ then $f_{22}=c^2f_{11}.$
The task is to show that a $C^2$ function $\phi$ can be used to build a solution to the wave equation $$\partial_{yy} f(x,y) = c^2 \partial_{xx} f(x,y)\\ f(x,y) = \phi(x-cy) + \phi(x+cy)$$ Not sure why you write $f_{11}$ (like matrix indexing). To prove this, just differentiate using the chain rule: $$\begin{align*} \partial_{yy} f(x,y) & = \partial_y ( -c\phi'(x-cy) + c\phi'(x+cy) ) \\ & = (-c)^2 \phi''(x-cy) + c^2 \phi''(x+cy) \\ & = c^2 (\phi''(x-cy) + \phi''(x+cy)) \\ &= c^2 \partial_x (\phi'(x-cy) + \phi'(x+cy)) \\ & = c^2 \partial_{xx} (\phi(x-cy) + \phi(x+cy)) \\ & =c^2 \partial_{xx} f(x,y) \end{align*}$$
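The same computation can be checked symbolically; a sympy sketch (illustrative, not part of the answer):

```python
import sympy as sp

x, y, c = sp.symbols('x y c')
phi = sp.Function('phi')
f = phi(x - c*y) + phi(x + c*y)
# f_yy - c^2 f_xx should be identically zero
print(sp.simplify(sp.diff(f, y, 2) - c**2 * sp.diff(f, x, 2)))  # 0
```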
A normal operator with no real roots
For example, take any invertible skew-symmetric real operator: $$ A=\pmatrix{0&-1\\1&0} $$ will do.
homomorphism between smooth algebraic groups of the same dimension
Yes, this is true. That said, the covering part is not in general true since $f$ needn't be faithfully flat (I'll give an example of this later). So, let's assume that $G'$ is reduced. Indeed, note that $\ker(f)$ being a closed subscheme of $G$ which is Noetherian is finite. Moreover, we have that we have a factorization $$G\to G/\ker(f)\hookrightarrow G'$$ where the first map is the canonical quotient map and the second map is a closed immersion (e.g. see [Mil, Theorem 5.39]). Note though that since $\ker(f)$ is finite we have that $G/\ker (f)$ has the same dimension as $G$ (e.g. see [Mil, Proposition 5.23]) and thus $G/\ker(f)\hookrightarrow G'$ is a closed immersion where both objects have the same dimension. This implies, first of all, that the image of $G/\ker(f)$ has topological image an irreducible component of $G'$ which, since $G'$ is connected and thus irreducible (e.g. see [Mil, Corollary 1.35]), this implies that $G/\ker (f)$ has the same underlying set of $G'$. Since $G'$ is reduced this implies that $G/\ker (f)\to G'$ is an isomorphism and thus $G'\cong G/\ker (f)$ so that $G$ covers $G'$. Note though that the inclusion $\{e\}\hookrightarrow \mu_p$ (where $\{e\}$ denotes the trivial group scheme) is an injective map where both have the same dimension, but is not a covering (in the sense that it's a quotient map with finite central kernel). [Mil] Milne, J.S., 2017. Algebraic groups: The theory of group schemes of finite type over a field (Vol. 170). Cambridge University Press.
A Question from Mathematical circles Book
I don't know if I understood the problem correctly, but this could be a solution: https://i.stack.imgur.com/K3Fze.jpg
Show that the p-Sylow subgroup is normal in $G$
Definition: Let $A$ act on $G$ via automorphisms. The action of $A$ on $G$ is said to be Frobenius if $\phi(g)\neq g$ for every nonidentity $\phi\in A$ and nonidentity $g\in G$. Now let $A=\langle\phi\rangle$; then clearly $A$ acts on $G$ via automorphisms. It is given that $\phi(g)\neq g$. Thus, we need to show that $\phi^2(g)\neq g$ for nonidentity $g\in G$. Assume $\phi^2(g)= g$. Then $\phi^4(g)=\phi^2(g)=g$, and since $\phi^3=I$ we have $\phi^4(g)=\phi(g)$, so $$\phi(g)=g,$$ a contradiction. So it is a Frobenius action, which means that the Frobenius kernel $G$ is nilpotent (Thompson's theorem). Hence $G$ has a normal Sylow $p$-subgroup. http://for.mat.bham.ac.uk/P.J.Flavell/research/publications/frobenius.pdf
find singular point and investigate its type for x^2 y" + (e^x-1)y' +(sin ^2 x)y=0 and Specify the answer form of y1(x), y2(x)
Near $x=0$, $e^x-1 \approx x$ and $\sin x \approx x$, so one can write the given ODE as $$x^2y''+xy'+x^2y=0 \implies y''+\frac{1}{x}y'+y=0.$$ So $x=0$ is a regular singular point. By putting $y=x^m$, we get the indicial equation $m^2=0$, so both indicial roots are zero. By the Frobenius method, $y_1(x)$ and $y_2(x)$ will be infinite series which are well behaved at $x=0$.
Confused on the relationship between Chi-square, its CDF, and p-value.
First question. You are right about being able to use software instead of tables of the chi-squared distribution. For example, if df = 9 and the chi-squared statistic is 20.16, you could look at a chi-squared table to see that $20.16 > 19.02,$ where 19.02 cuts area 0.025 from the upper tail of $\mathrm{Chisq}(\mathrm{df} = 9)$. You would reject at that 2.5% level. If you wanted a P-value, you could use software to find the probability of the chi-squared statistic being greater than 20.16. In R this is computed as follows, where pchisq is the CDF of a chi-squared distribution:

```r
1 - pchisq(20.16, 9)
## 0.01695026
```

Thus the P-value (probability of a value more extreme than 20.16) is about 0.017. Some software will give you the P-value automatically.

Second question. As far as binning is concerned, you are right that in some instances there are alternative possible ways of binning. You do not want so many bins that the expected counts in each bin get smaller than about 5, otherwise the approximation of the chi-squared statistic by the chi-squared distribution is not good. Given that restriction, it is usually better to use more bins rather than fewer. Also notice that the df of the chi-squared distribution depends directly on the number of $bins$ used, not on the overall number of $events$ counted. (I do not understand what you say about 'approximately Gaussian' in this context.)

Examples: Here is an example in which we simulate 60 rolls of a fair die, so that we expect 10 instances of each face. The observed numbers of each face are tabulated. Finally, a chi-squared test that the die is fair has a chi-squared goodness-of-fit statistic of 3.0 and a P-value of 70% (consistent with a fair die).

```r
face = sample(1:6, 60, rep=T)  # simulate 60 rolls of fair die
table(face)
## face
##  1  2  3  4  5  6
##  9  6 12 10 10 13
chisq.test(table(face))  # default is equal probabilities
## Chi-squared test for given probabilities
## data:  table(face)
## X-squared = 3, df = 5, p-value = 0.7
```

In the test, the default is that faces have equal probabilities unless some other probability vector is specified. The test procedure chisq.test finds the P-value as follows (and rounds):

```r
1 - pchisq(3, 5)
## 0.6999858
```

In our second example, we simulate 600 rolls of a die that is heavily biased in favor of faces 4, 5, and 6 (see the prob vector). Here the null hypothesis that the die is fair is soundly rejected, with an extremely small P-value.

```r
face = sample(1:6, 600, repl=T, prob=c(1,1,1,2,2,2)/9)
table(face)
## face
##   1   2   3   4   5   6
##  59  67  80 123 135 136
chisq.test(table(face))  # default is test for 'fair' die
## Chi-squared test for given probabilities
## data:  table(face)
## X-squared = 62.2, df = 5, p-value = 4.263e-12
```
Measure of Elementary Sets Proof
$$\sum_{i = 1}^k m(B_i) = \sum_{i = 1}^k \sum_{j = 1}^{k'} m(B_i\cap B'_j) = \sum_{j = 1}^{k'} \sum_{i = 1}^k m(B_i\cap B'_j) = \sum_{j = 1}^{k'} m(B'_j)$$
Linear Second order Differential operator proof questions
Suppose $L$ is a linear operator.

Question 1. Remark: There is something wrong. $L(f) = 0\cdot L(g) \implies L(f) = L(0\cdot g) \implies L(f) = L(0)$ (by linearity), and from $f=0$ you concluded $L(f)=L(0) \implies f=0$; but that is not the definition of injective. The definition of injective is: $L(f)=L(g)$ if and only if $f=g$, and you didn't show this. Proof: if $\ker L = \{0\}$, assume $f \neq g$ and $L(f)=L(g)$; then $L(f-g)=0$, so $f-g\in \ker L$, hence $f-g=0$, a contradiction.

Question 2. Remark: You have misunderstood the definition of eigenvalue. Definition: $\lambda$ is an eigenvalue of $L$ if for some vector $f\neq 0$, $L(f)=\lambda f$. Proof: It's easy to show $f\in \ker(L-\lambda I)$, hence $\ker(L-\lambda I) \neq \{0\}$.

Question 3. Remark: Good! In fact, it can be deduced from Question 2, since $\det(L)=0$ is equivalent to $\ker L \neq \{0\}$ (a basic theorem in linear algebra). The information that $L$ is a second-order differential operator is not needed here.
Eigenvectors of real normal endomorphism
The normality condition is irrelevant. If $\lambda$ is a real eigenvalue of the matrix $A$, take an eigenvector $\mathbf{v}$ and write it as $\mathbf{a}+i\mathbf{b}$, where $\mathbf{a}$ and $\mathbf{b}$ are vectors with real coefficients. Then $$ A\mathbf{v}=A\mathbf{a}+iA\mathbf{b} $$ so $$ \lambda\mathbf{a}+i\lambda\mathbf{b}=A\mathbf{a}+iA\mathbf{b} $$ and equating real and imaginary parts you get $$ A\mathbf{a}=\lambda\mathbf{a},\qquad A\mathbf{b}=\lambda\mathbf{b} $$ so you find a "real" eigenvector, because one among $\mathbf{a}$ and $\mathbf{b}$ must be non zero. If $\lambda$ is not real, you can apply conjugation: if $\mathbf{v}$ is an eigenvector, then $A\mathbf{v}=\lambda\mathbf{v}$, so also $$ A\bar{\mathbf{v}}=\bar{\lambda}\bar{\mathbf{v}} $$ This shows also that the map $\mathbf{v}\mapsto\bar{\mathbf{v}}$ is a bijection between the eigenspaces relative to $\lambda$ and $\bar{\lambda}$. It's not a linear map, but easy considerations show that the two eigenspaces have the same dimension (a linear dependency relation in one space translates into a linear dependency relation in the other, with conjugate coefficients).
How to find this angle? triangle
Let $AD=x$ and let the angle $BCD=y$. Then by the law of sines we get $$\frac{\sin(100^{\circ}-y)}{\sin(80^{\circ})}=\frac{\sin(80^{\circ}-y)}{\sin(20^{\circ})}.$$ Can you solve this?
Using the generating function $\zeta(s)/\zeta(2s)$ for the squarefree integers,how do you get the density result $6/\pi^2$?
If we start with $\mathbb{N}^+$ and perform a sieve by removing all the multiples of $2^2,3^2,5^2,\ldots,p^2$, by the inclusion-exclusion principle we get a set with density $$ \left(1-\frac{1}{2^2}\right)\cdot\left(1-\frac{1}{3^2}\right)\cdot\left(1-\frac{1}{5^2}\right)\cdot\ldots\cdot\left(1-\frac{1}{p^2}\right)$$ so, with some care in managing the error terms (the intersection of nested sets, all of them with natural density, may not have a natural density) we have that the density of square-free numbers is given by $$ \prod_{p}\left(1-\frac{1}{p^2}\right) $$ where Euler's product for the $\zeta$-function gives that $$ \prod_{p}\left(1-\frac{1}{p^s}\right)^{-1} = \zeta(s) $$ holds for any $s>1$. Taking $s=2$ we have that the density of square-free numbers is $\frac{1}{\zeta(2)}=\frac{6}{\pi^2}$. For the same reason the density of the square-free numbers among the odd integers is given by $$ \left(1-\frac{1}{3^2}\right)\cdot\left(1-\frac{1}{5^2}\right)\cdot\ldots = \frac{4}{3}\cdot\frac{1}{\zeta(2)} = \frac{8}{\pi^2}$$ so the density of the odd square-free numbers is $\frac{4}{\pi^2}$.
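The density claim is easy to check numerically by sieving out multiples of squares; a Python sketch (illustrative, not part of the answer):

```python
import math

# Sieve out multiples of d^2 and compare the density with 6/pi^2
N = 10**6
squarefree = [True] * (N + 1)
for d in range(2, math.isqrt(N) + 1):
    for m in range(d * d, N + 1, d * d):
        squarefree[m] = False
print(sum(squarefree[1:]) / N)  # 0.607926
print(6 / math.pi**2)           # 0.6079271...
```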
Which number remains alive?
The first case is similar to the Josephus problem, and in the general situation we can use the following MATLAB code to get the remaining number.

First case:

```matlab
function RemainNum = JosephusProblem(n, m)
% n people stand in a circle; starting from person 1, they count
% from 1 to m circularly, and whoever says m is killed; repeat
% until one person remains.
A = (1:n)';
for k = 1:n-1
    A = circshift(A, length(A) - mod(m+1, length(A)) + 1);
    A = A(1:end-1);
end
RemainNum = A;
end
```

Here $n$ is the total number of people in the circle; starting from person #1, people count from 1 to $m$ circularly, the person who says $m$ gets killed, and this process does not end until there is only one person left; the function then returns this lucky survivor as RemainNum. So in this case $m=2$: people count like $1,2,1,2,\dots$ and the persons who say 2 get killed.

Second case: Let $f(n)$ be the lucky number in this case; then I guess that $$f(n) = \begin{cases} 1, &\text{if } n=1\\ n-1, & \text{if } n \text{ is even} \\ n-2, & \text{if } n \text{ is odd.} \end{cases}$$ Since each round kills at most 2 persons, and 1 person is killed only when there are exactly 2 persons left: when $n$ is even, only 2 persons remain after $\frac{n-2}{2}$ rounds of killing and person $n-1$ gets the sword, so $f(n) = n-1$ in this case; when $n$ is odd, after $\frac{n-3}{2}$ rounds there are 3 persons left, and person $n-2$ gets the sword and kills the other two, so $f(n)= n-2$ in this case.
What exactly is "the unit interval with two origins"?
Let $I=[0,1]$. This construction is precisely the quotient space $$X=(I\times \{0\}\sqcup I\times \{1\})/{\sim}$$ where $(x,1)\sim (x,0)$ if and only if $x\ne 0$. Then this quotient space "looks" like a line with two origins: namely $(0,1)$ and $(0,0)$, or $0'$ and $0$ if you wish. More intuitively, we take the disjoint union of two unit intervals, and then we "glue" all of the corresponding points together except for the origins. In this way, we get a "unit interval" with two origins. In general, when we make these sorts of "quotient" constructions, we define the open sets to be precisely those sets that make the quotient map continuous. In particular, $U\subset X$ is open if and only if $\pi^{-1}(U)$ is open in $I\times \{0\}\sqcup I\times \{1\}$. Here $\pi: (I\times \{0\}\sqcup I\times \{1\})\to X$ is the map which sends a point to its equivalence class. So, this lets us see that the basic open sets around the point $0'$ in this interval with two origins are those of the form $[0', k)$ for $k\in (0,1)$. Indeed, then, a neighborhood basis for $0'$ should look like a neighborhood basis for $0:$ i.e., it is comprised of open sets of the form $[0',\epsilon)$ with $\epsilon\in (0,1)$.
For which $d<0$ is $\mathbb Z[\sqrt{d}]$ a Euclidean Domain?
First of all, note that $\mathbb{Z}[\sqrt{d}] = \{ a + b\sqrt{d} \mid a,b \in \mathbb{Z} \}$ is not always equal to the usual ring of quadratic integers $\mathcal{O}(\sqrt{d})$, the ring of algebraic integers of $\mathbb{Q}(\sqrt{d})$. If $d \equiv 1 \pmod{4}$, then $\mathcal{O}(\sqrt{d}) = \{ a + b \frac{1 + \sqrt{d}}{2} \mid a,b \in \mathbb{Z}\} = \{ a' + b' \sqrt{d}\}$, with either both $a', b'$ integers, or both $a',b'$ an integer plus $\frac{1}{2}$. I will use the norm $N(a + b \sqrt{d}) = a^2 - d b^2 = (a - b\sqrt{d})(a + b\sqrt{d})$, which is clearly multiplicative. For $\mathcal{O}(\sqrt{d})$ with $d<0$, one gets a Euclidean domain exactly for $d = -11,-7,-3,-2,-1$; but if you consider $\mathbb{Z}[\sqrt{d}]$, it is indeed just $d = -1,-2$. This can actually be seen geometrically. These integers form a rectangular grid, and for any (not necessarily integer) point $c$ in the plane there is always a grid point $q$ with $$N(c-q)\le N\!\left(\tfrac{1}{2} + \tfrac{1}{2}\sqrt{d}\right)=\tfrac{1}{4}+\tfrac{|d|}{4}.$$ For $d = -1,-2$ this bound equals $\frac{1}{2}$ and $\frac{3}{4}$ respectively, both less than $1$. If we now divide a number $a = a_1 + a_2 \sqrt{d}$ by $b = b_1 + b_2 \sqrt{d}$, then we get a rational point $c = c_1 + c_2 \sqrt{d}$ in the plane, and there must exist an integer point $q$ with $N(c-q)<1$. We thus have $$ \frac{a}{b} = c_1 + c_2 \sqrt{d} = q_1 + q_2 \sqrt{d} + r_1 + r_2 \sqrt{d} $$ with $q_1,q_2$ integers, and $r_1,r_2 \in [-\frac{1}{2},\frac{1}{2}]$. Now $$ a = bq + br $$ and since $N(br) = N(b)N(r) < N(b)$, the Euclidean property is satisfied. For $d = -3,-5,\dots$, on the other hand, the point $\frac{1+\sqrt{d}}{2}$ lies at the center of a lattice cell, so $N\left(\frac{1+\sqrt{d}}{2} - q\right)\ge \frac{1}{4}+\frac{|d|}{4}\geq 1$ for every integer point $q$. It is thus not possible to write $1 + \sqrt{d}$ as $2q + r$ with $N(r) < N(2) = 4$, which contradicts the definition of a Euclidean domain with respect to this norm.
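To make the rounding argument concrete, here is a small Python sketch (the representation of elements as integer pairs is my own choice, not from the question) that performs division with remainder in $\mathbb{Z}[\sqrt{d}]$ for $d=-1,-2$ and checks $N(r)<N(b)$ on random inputs:

```python
import random
from fractions import Fraction

def norm(x, d):
    a, b = x
    return a * a - d * b * b          # N(a + b*sqrt(d)) = a^2 - d*b^2

def divmod_quadratic(a, b, d):
    """Divide a by b in Z[sqrt(d)]; elements are integer pairs (x, y)."""
    a1, a2 = a
    b1, b2 = b
    nb = norm(b, d)
    # exact rational quotient a/b = a * conj(b) / N(b)
    c1 = Fraction(a1 * b1 - d * a2 * b2, nb)
    c2 = Fraction(a2 * b1 - a1 * b2, nb)
    # round each coordinate to the nearest lattice point
    q = (round(c1), round(c2))
    # r = a - b*q, using (q1 + q2 rd)(b1 + b2 rd) = q1 b1 + d q2 b2 + (q1 b2 + q2 b1) rd
    r = (a1 - (q[0] * b1 + d * q[1] * b2), a2 - (q[0] * b2 + q[1] * b1))
    return q, r

for d in (-1, -2):
    for _ in range(1000):
        a = (random.randint(-50, 50), random.randint(-50, 50))
        b = (random.randint(-50, 50), random.randint(-50, 50))
        if b == (0, 0):
            continue
        q, r = divmod_quadratic(a, b, d)
        assert norm(r, d) < norm(b, d), (d, a, b, q, r)
print("Euclidean property verified for d = -1, -2")
```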
How to solve it using the vector method, not using products of vectors
Let $A,B,C,D$ be the vertices of the parallelogram with direct orientation, meaning that $$\tag{0}\vec{AB}=\vec{DC}, \ \ \ \ \ \ \vec{AD}=\vec{BC}$$ (draw a figure!) The line joining point $A$ to the midpoint $I$ of $BC$ has $$\tag{1}\vec{AI}=\vec{AB}+\frac12 \vec{BC}$$ as its direction vector. We have to show that the point $H$ defined by $$\tag{2} \vec{AH}=\frac23 \vec{AI}$$ is such that $$\tag{3}\vec{AH}=a\vec{AB} + (1-a) \vec{AD}$$ for a certain positive value $a$ ($H$, being on segment $BD$, is a weighted average of $B$ and $D$). It shouldn't be difficult now, using (1), (2), (3) and the second equation in (0), to find such an $a$.
A sequence defined through an arc tangent and arc cotangent
$f(x) = \arctan\frac{1}{x}-\operatorname{arccot}\frac{1}{x}$. Since $\arctan(u)+\operatorname{arccot}(u) =\frac{\pi}{2}$, this becomes $f(x) = 2\arctan\frac{1}{x}-\frac{\pi}{2}$. If $f(x_n) = \frac1{n}$, then $2\arctan\frac{1}{x_n} =\frac{\pi}{2}+\frac1{n}$, or $$\frac{1}{x_n} =\tan\left(\frac{\pi}{4}+\frac1{2n}\right) =\frac{\tan\frac{\pi}{4}+\tan\frac1{2n}}{1-\tan\frac{\pi}{4}\tan\frac1{2n}} =\frac{1+\tan\frac1{2n}}{1-\tan\frac1{2n}},$$ so, using $\tan\frac1{2n}=\frac1{2n}+O\!\left(\frac1{n^3}\right)$, $$x_n =\frac{1-\tan\frac1{2n}}{1+\tan\frac1{2n}} =1-\frac{2\tan\frac1{2n}}{1+\tan\frac1{2n}} =1-\frac{\frac1{n}+O\!\left(\frac1{n^3}\right)}{1+\frac1{2n}+O\!\left(\frac1{n^3}\right)} =1-\frac1{n}+O\!\left(\frac1{n^2}\right) \to 1.$$
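A quick numerical check of the two claims, namely $f(x_n)=\frac1n$ and $x_n\approx 1-\frac1n$ (just a sanity sketch):

```python
import math

def f(x):
    # arccot(u) = pi/2 - arctan(u), so f(x) = 2*arctan(1/x) - pi/2
    return 2 * math.atan(1 / x) - math.pi / 2

for n in (10, 100, 1000):
    t = math.tan(1 / (2 * n))
    x_n = (1 - t) / (1 + t)
    print(n, x_n, f(x_n) - 1 / n, 1 - 1 / n)   # f(x_n) = 1/n, and x_n ~ 1 - 1/n
```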
Random walk. Finding a distribution
We compute the distribution of $\sigma=\inf\{k\geqslant1\mid S_k=0\}$ in a more general setting, then we show that this answers your question. Assume that $P(\xi=1)=p$ and $P(\xi=-1)=q$ with $q=1-p$, for some $p$ in $(0,1)$. Then the Markov property at time $1$ shows that $\sigma=1+\sigma_\downarrow$ with probability $p$ and $\sigma=1+\sigma_\uparrow$ with probability $q$, where $\sigma_\downarrow$ denotes the first hitting time of $0$ starting from $1$ and $\sigma_\uparrow$ denotes the first hitting time of $0$ starting from $-1$. Another application of the Markov property at time $1$, together with the homogeneity of the transition probabilities of $(S_k)$, shows that $\sigma_\downarrow=1$ with probability $q$ and $\sigma_\downarrow=1+\sigma_\downarrow'+\sigma_\downarrow''$ with probability $p$, where $\sigma_\downarrow'$ and $\sigma_\downarrow''$ are independent copies of $\sigma_\downarrow$. Likewise, $\sigma_\uparrow=1$ with probability $p$ and $\sigma_\uparrow=1+\sigma_\uparrow'+\sigma_\uparrow''$ with probability $q$, where $\sigma_\uparrow'$ and $\sigma_\uparrow''$ are independent copies of $\sigma_\uparrow$. In terms of generating functions, these decompositions translate as follows. Consider, for some fixed $|s|<1$, $$g=E(s^\sigma)\qquad g_\uparrow=E(s^{\sigma_\uparrow})\qquad g_\downarrow=E(s^{\sigma_\downarrow})$$ Then $$g=s(pg_\downarrow+qg_\uparrow)$$ while $$g_\downarrow=qs+ps(g_\downarrow)^2\qquad g_\uparrow=ps+qs(g_\uparrow)^2$$ Solving the latter yields $$g_\downarrow=\frac{1-\sqrt{1-4pqs^2}}{2ps}\qquad g_\uparrow=\frac{1-\sqrt{1-4pqs^2}}{2qs}$$ where one chooses the roots of the quadratics with a minus sign to guarantee that $g_\downarrow\leqslant1$ and $g_\uparrow\leqslant1$. Finally, these formulas for $g_\downarrow$ and $g_\uparrow$ yield $$g=1-\sqrt{1-4pqs^2}$$ Thus, for every $k\geqslant1$, $$P(\sigma=2k)=\frac2k\binom{2k-2}{k-1}(pq)^k$$ In terms of Catalan numbers (see here for quite a few first values), this reads $$P(\sigma=2k)=2C_{k-1}(pq)^k$$ The random time you consider is $$\sigma_8=\min\{9,\sigma\}$$ hence $P(\sigma_8=2k)=P(\sigma=2k)$ for every $k$ in $\{1,2,3,4\}$, that is, $$P(\sigma_8=2)=2pq\qquad P(\sigma_8=4)=2(pq)^2$$ $$P(\sigma_8=6)=4(pq)^3\qquad P(\sigma_8=8)=10(pq)^4$$ and $$P(\sigma_8=9)=1-\sum\limits_{k=1}^4P(\sigma=2k)$$
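Here is a short Monte Carlo sketch for the symmetric case $p=q=\frac12$ (so $pq=\frac14$), comparing empirical frequencies of $\sigma_8$ with the formulas above; the sample sizes are arbitrary:

```python
import random
from math import comb

def sample_sigma8(p=0.5):
    """First return time to 0 of the +-1 walk, truncated at 9 steps."""
    s = 0
    for k in range(1, 10):
        s += 1 if random.random() < p else -1
        if s == 0:
            return k
    return 9

p = q = 0.5
trials = 200_000
counts = {}
for _ in range(trials):
    t = sample_sigma8(p)
    counts[t] = counts.get(t, 0) + 1

total = 0.0
for k in (1, 2, 3, 4):
    exact = 2 * comb(2 * k - 2, k - 1) // k * (p * q) ** k   # 2*C_{k-1}*(pq)^k
    total += exact
    print(2 * k, counts.get(2 * k, 0) / trials, exact)
print(9, counts.get(9, 0) / trials, 1 - total)
```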
Extending a $k$-Lipschitz function
An alternate explicit construction: First, you can continuously extend to the closure $\bar{Y}$ using the Lipschitz condition. Then, since $\bar{Y}$ is closed, for every $x\in \mathbb{R}\setminus \bar{Y}$ lying between two points of $\bar Y$ one can find $x_- = \max \left(\bar{Y}\cap \{ y < x\}\right)$ and $x_+ = \min\left(\bar{Y}\cap \{y > x\}\right)$. Then just linearly interpolate: $$ g(x) = f(x_-) + \frac{f(x_+) - f(x_-)}{x_+ - x_-} (x - x_-) $$ (If $\bar Y$ is bounded above and $x>\max\bar Y$, or bounded below and $x<\min\bar Y$, extend with the constant value at the nearest endpoint; this clearly stays $k$-Lipschitz.) But let me explain Leonid Kovalev's comment. Notice that, fixing some arbitrary $x' \in Y$, we have for any fixed $x\in\mathbb{R}$ $$ f(y) - f(x') + k|x-y| \geq f(y) - f(x') + k|x' - y| - k|x-x'| $$ by the triangle inequality. But using the $k$-Lipschitz property you have that $$ f(y) - f(x') + k|x' - y| \geq 0 $$ so $$ f(y) - f(x') + k|x-y| \geq -k|x-x'| $$ where the right-hand side is independent of $y$. Or, in other words, $$ f(y) + k|x-y| \geq f(x') - k|x-x'|, $$ so the expression you want to take the infimum of (in $y\in Y$) is bounded from below by some constant, and hence the infimum exists.
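Leonid Kovalev's formula $g(x)=\inf_{y\in Y}\left(f(y)+k|x-y|\right)$ is easy to test numerically. Below is a minimal sketch for a finite $Y$ with made-up $1$-Lipschitz data; it checks that $g$ extends $f$ and is itself $k$-Lipschitz:

```python
def extend_lipschitz(Y, f, k):
    """McShane-type extension: g(x) = min over y in Y of f(y) + k*|x - y|."""
    def g(x):
        return min(f[y] + k * abs(x - y) for y in Y)
    return g

# hypothetical 1-Lipschitz data on Y = {0, 1, 3}
Y = [0.0, 1.0, 3.0]
f = {0.0: 0.0, 1.0: 0.5, 3.0: -1.0}
k = 1.0
g = extend_lipschitz(Y, f, k)

assert all(abs(g(y) - f[y]) < 1e-12 for y in Y)        # g extends f
xs = [i / 10 for i in range(-50, 51)]
assert all(abs(g(a) - g(b)) <= k * abs(a - b) + 1e-12
           for a in xs for b in xs)                     # g is k-Lipschitz
print(g(2.0))                                           # evaluate the extension off Y
```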
complex analysis-roots of constant polynomial function
In general, it is known that there is no closed form in radicals for the roots of an arbitrary polynomial; closed forms do exist for all polynomials of degree less than $5$, and for some polynomials of higher degree, but not for all of them. Essentially this means that the only general method to find the roots of polynomials is to approximate them. Here is an interesting way to "solve" polynomials. Start with: $$x^2-x-1=0.$$ By the quadratic formula one of the roots is $$\frac{1+\sqrt{5}}{2}.$$ But what if I didn't know the quadratic formula (or, in general, what if I didn't have a formula)? Well, solve for $x$ instead: $$x=1+\frac{1}{x}.$$ Since $x=1+\frac{1}{x}$ we must have $$x=1+\cfrac{1}{1+\cfrac{1}{x}}.$$ Since $x=1+\frac{1}{x}$ we must also have $$x=1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{x}}}.$$ Continuing to infinity we arrive at $$x=1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{\ddots}}}}}$$ and the more interesting consequence is that $$1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{\ddots}}}}}=\frac{1+\sqrt{5}}{2}. $$ They must be the same, since they are both a positive root of a polynomial that has only one positive root. At the beginning we stated that not all polynomials are solvable by radicals; another way of making that statement is to say that not all of these "continued approximation expressions" will simplify to a nice radical expression. However, the creepy "continued approximation expressions" can always be found. One could approach the polynomial above in the same way and find one of the roots to be: $$z=\sqrt[100]{w-1-\sqrt[100]{w-1-\sqrt[100]{w-1-\sqrt[100]{w-1-\sqrt[100]{\cdots}}}}}$$ In this case the expression might not converge unless we pick an appropriate starting point, but when the expression above does converge it must be a root of the polynomial, since it satisfies the equation. If the above answer is upsetting, then the more upsetting consequence is that every method provided to you to find a root will reduce to some "approximation formula" like the one shown above, unless (of course) the expression "magically" reduces over radicals and we just didn't see how it does. Besides very tedious, inefficient Galois theory, there is no known simple method of determining when an expression will reduce over radicals. :-(
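The "continued approximation" above is just a fixed-point iteration, and a few lines of Python confirm its limit (a sketch):

```python
# fixed-point iteration for x = 1 + 1/x, i.e. the continued fraction above
x = 1.0
for _ in range(40):
    x = 1 + 1 / x
print(x)                    # 1.618033988749895
print((1 + 5 ** 0.5) / 2)   # the positive root of x^2 - x - 1, same value
```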
Cardinality of infinite product?
It turns out this does have a nice answer. I originally had a much more complicated proof, and almost immediately afterward, a simple one was given to me by Douglas Ulrich. The answer is that for any limit ordinal $\alpha$, $\prod_{\beta<\alpha} \beth_\beta=\beth_{\alpha+1}$. What follows is a proof in more detail than is maybe necessary. The essential reason is that $\prod_i 2^{\kappa_i}=2^{\sum_i \kappa_i}$, which is easily verified. This gives us the following: $\prod_{\beta<\alpha}\beth_\beta=\prod_{\beta<\alpha}2^{\beth_\beta}$; this is seen by injecting each side naturally into the other. Left to right uses $\beth_\beta\leq 2^{\beth_\beta}$ factor by factor, while right to left holds because $2^{\beth_\beta}=\beth_{\beta+1}$, and since $\alpha$ is a limit ordinal each $\beth_{\beta+1}$ already occurs as a factor of the left-hand product. Then $\prod_{\beta<\alpha}2^{\beth_\beta}=2^{\sum_\beta\beth_\beta}$. Clearly $\sum_\beta\beth_\beta=\beth_\alpha$, so this last is $2^{\beth_\alpha}=\beth_{\alpha+1}$, as desired.
Sturm Separation Theorem
The key here is that $y_1$ is a solution of the 2nd order ODE $(py')' + ry = 0$, whose solution space is two-dimensional. So $y_1$ is uniquely determined by the values of $y_1(a)$ and $y_1'(a)$; the trivial solution is the one with both equal to zero. I'm inclined to think that it should indeed read $y_1(b)=0$, otherwise the line about choosing the interval $(a,b)$ such that $y_1$ is positive on it would not make sense. Essentially, we are looking at two consecutive zeros of $y_1$.
Why two symbols for the Golden Ratio?
The Golden Ratio or Golden Cut is the number $$\frac{1+\sqrt{5}}{2}$$ which is usually denoted by phi ($\phi$ or $\varphi$), but also sometimes by tau ($\tau$). Why $\phi$: Phidias (Greek: Φειδίας) was a Greek sculptor, painter, and architect, so $\phi$ is the first letter of his name. The symbol $\phi$ ("phi") was apparently first used by Mark Barr at the beginning of the 20th century in commemoration of the Greek sculptor Phidias (ca. 490-430 BC), who a number of art historians claim made extensive use of the golden ratio in his works (Livio 2002, pp. 5-6). Why $\tau$: the golden ratio or golden cut is sometimes named after the Greek word τομή, meaning "cut" or "section", so again the first letter is taken: $\tau$. Source: The Golden Ratio: The Story of Phi, the World's Most Astonishing Number by Mario Livio; MathWorld
Inverse of block anti-diagonal matrix
Assuming all the blocks are invertible, this is the answer: with $$ A = \begin{pmatrix} & & & A_1 \\ & & A_2 & \\ & \dots & & \\ A_d & & & \end{pmatrix}, \qquad B = \begin{pmatrix} & & & A_d^{-1} \\ & & A_{d-1}^{-1} & \\ & \dots & & \\ A_1^{-1} & & & \end{pmatrix}, $$ we have $AB=I$: block $(i,i)$ of the product is $A_i A_i^{-1} = I$, and all off-diagonal blocks vanish. Hence $A^{-1}=B$.
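A quick numerical check of this block formula (the block sizes and the random blocks are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 3, 2                                                # 3 blocks, each 2x2
blocks = [rng.standard_normal((m, m)) for _ in range(d)]   # A_1, ..., A_d

A = np.zeros((d * m, d * m))
B = np.zeros((d * m, d * m))
for i in range(d):
    j = d - 1 - i                                          # anti-diagonal position
    A[i*m:(i+1)*m, j*m:(j+1)*m] = blocks[i]
    B[i*m:(i+1)*m, j*m:(j+1)*m] = np.linalg.inv(blocks[d - 1 - i])

print(np.allclose(A @ B, np.eye(d * m)))                   # True
```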
Symmetries of a dodecahedron vs $\mathrm{GL}(2,\mathbb C)$
Because $PSU(2)$ is not a subgroup of $GL(2,\mathbb C)$. It is a quotient of a subgroup of $GL(2,\mathbb C)$.
Structure sheaf of $\operatorname{Proj} S$ in terms of compatible stalks
Yes, see Hartshorne's book, Section II.5. It's like for affine schemes, but localizations are replaced by homogeneous localizations.
Proving that the Bayes optimal predictor is in fact optimal
First the inequality: temporarily writing $p_x$ for $\mathbb P(y=0\mid x)$, note that $$\mathbb E[|h(x)-y| \mid x] = h(x)p_x+(1-h(x))(1-p_x)$$ is a convex combination of $p_x$ and $1-p_x$, so it must be at least $\min(p_x,1-p_x)=\mathbb P(f(x)\neq y\mid x).$ As for conditional probabilities: there is an abstract definition of "a conditional expectation," and these can often be shown to exist as Radon–Nikodym derivatives. In your case, $\mathbb P(y=0\mid x)$ is a conditional expectation of the indicator of the event $\{y=0\}$, and it is a measurable function $\mathcal X\to [0,1]$. It can be constructed as the Radon–Nikodym derivative of the measure $A\mapsto\mathbb P(x\in A,\,y=0)$ on $\mathcal X$ with respect to the distribution of $x$.
Finding the determinant of a linear transformation
First let me explain the meaning of the determinant of a linear operator: If $V$ is a finite-dimensional vector space, $n := \dim V$, and $T : V \to V$ is linear, in order to compute the determinant of $T$ choose any basis $v_1,v_2,\dots,v_n$ of $V$, and write each of the vectors $T(v_1),T(v_2),\dots,T(v_n)$ as a linear combination of the vectors $v_1,v_2,\dots,v_n$, like this: $$T(v_j) = a_{1j}v_1 + a_{2j}v_2 + \cdots + a_{nj}v_n \quad \textrm{for $j$ in $\{1,\dots,n\}$.}$$ Now, $$\det T = \det \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}.$$ In your specific example, do it the easy way: choose $$v_1 = \begin{pmatrix} 1&0\\0&0\end{pmatrix}, \ v_2 = \begin{pmatrix} 0&1\\0&0\end{pmatrix}, \ v_3 = \begin{pmatrix} 0&0\\1&0\end{pmatrix} \textrm{ and } v_4 = \begin{pmatrix} 0&0\\0&1\end{pmatrix}.$$
Equivalency of two statements of Dirichlet Fourier series conditions
You are correct that not both sets of conditions are "Dirichlet conditions." Wolfram is accurate. Wikipedia is wrong. The Dirichlet conditions would not have included anything about bounded variation because that concept came decades after Dirichlet's formulation. Bounded variation was introduced by Jordan (of Jordan curve fame) around 1880. Dirichlet died in 1859. Wikipedia entries are not scrutinized all that carefully.
planar curve has infinitely many Bertrand mates
A Bertrand mate of a planar curve $C$ is obtained by displacing each point of $C$ a fixed distance along the normal at that point, producing a parallel curve; parallels of a planar curve share its normal lines. Since the constant offset distance can be chosen freely, each choice gives a different parallel, so a planar curve has infinitely many Bertrand mates.
Morphism of quasi affine varieties giving algebra homomorphism and back.
The problem with quasi-affine varieties is that, well, they are not affine. You should think of an affine variety as one that is "determined by its global regular functions", and this is not the case for arbitrary quasi-affine varieties. Here is a classic example: let $X = \mathbb{A}^2 \setminus \{ (0,0) \}$. Then I claim $\mathcal{O}(X) \cong k[x,y]$, where the map $\mathcal{O}(\mathbb{A}^2) \to \mathcal{O}(X)$ induced by the inclusion $X \subset \mathbb{A}^2$ is an isomorphism. I highly recommend you try to prove this if you are not familiar with the example. It is often thought of as an algebraic version of the Hartogs phenomenon. This example shows that the answer to Question 1 is no. For Question 2, I don't know of any purely algebraic description, and while the category you suggest is useful, it does not give information about the quasi-affine variety $X$; rather, it tells you about the affine variety $\mathrm{Spec}{(\mathcal{O}(X))}$. Note: when you say subsets, I assume you mean locally closed subsets for the Zariski topology, since an arbitrary subset of $\mathbb{A}^n$ need not have the structure of an algebraic variety.
Suppose $f$ is a real-valued function defined on $[1,\infty)$ with $f(1)=1$. Suppose, moreover, that $f$ satisfies...
To correct your integral inequality: you have $$ f(x) = 1+\int_1^x\frac{\mathrm dt}{t^2+f^2(t)}\le 1+\int_1^x\frac{\mathrm dt}{t^2+1}=1+F(x)-F(1)$$ for $x\ge 1$, where $F(x)$ is an antiderivative of $x\mapsto\frac1{1+x^2}$. Note that "$\le$" comes into play because $f(t)\ge f(1)=1$ (the integrand is positive, so $f$ is increasing), which makes the denominator $t^2+f^2(t)\ge t^2+1$. Now look up in your repository of standard integrals what $F$ might be. Hint: We may expect that $F(1)$ and $\lim_{x\to+\infty}F(x)$ somehow involve something with $\pi$.
How to evaluate $\int^\infty_{-\infty} \frac{dx}{4x^2+4x+5}$?
Hints: $$(1)\;\;\;\;\int\limits_{-\infty}^\infty\frac{dx}{4x^2+4x+5}=\int\limits_{-\infty}^\infty\frac{dx}{4\left(x+\frac{1}{2}\right)^2+4}=\frac{1}{4}\int\limits_{-\infty}^\infty\frac{dx}{1+\left(x+\frac{1}{2}\right)^2}$$ $$(2)\;\;\;\text{ For a differentiable function}\;f(x)\;,\;\;\int\frac{f'(x)}{1+f(x)^2}dx=\arctan(f(x))+C\ldots$$
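A quadrature check of the value these hints lead to (a sketch, assuming SciPy is available):

```python
import math
from scipy.integrate import quad

val, err = quad(lambda x: 1 / (4 * x**2 + 4 * x + 5), -math.inf, math.inf)
print(val, math.pi / 4)   # both ~0.7853981634
```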
proving $a^2_{n+1}-a^2_{n}=a_{n+1} \cdot a_{n} $ is a Geometric progression
Divide both sides by $a_n^2$ and define $b=\frac{a_{n+1}}{a_n}$; you should find that the resulting equation is particularly simple. (And in particular, that $b$ is a constant independent of $n$, as the notation suggests.) (ETA: As Jack D'Aurizio notes in a comment, this isn't quite a complete solution - in fact, the original claim is false without the restriction that we take $a_n\geq 0$ for all $n$, as otherwise one could choose 'alternating' solutions of the quadratic. With that restriction, this proof goes through smoothly.)
What's the distribution of CDF(X) when X follows a normal distribution
Pick $p\in [0, 1]$. What is the probability that $CDF(X)\leq p$? It is the same as that of $X\leq CDF^{-1}(p)$, which is $p$ by definition of the CDF. Thus $CDF(X)$ follows the uniform distribution on $[0, 1]$. This holds for any random variable with a continuous CDF (the inversion step as written uses a strictly increasing CDF, but continuity is all that is really needed).
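A simulation makes this concrete (a sketch assuming SciPy/NumPy; any continuous distribution would do):

```python
from scipy.stats import norm, kstest

x = norm.rvs(size=100_000, random_state=0)
u = norm.cdf(x)                   # CDF(X), the probability integral transform
print(u.mean(), u.var())          # ~0.5 and ~1/12, the U[0,1] moments
print(kstest(u, "uniform"))       # large p-value: consistent with U[0,1]
```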
Isomorphism between a quotient group and the 2-torsion subgroup
The result is true: first, note that we can write $G$ as the direct sum of its $2$-part and its prime-to-$2$ part: $$G = G_2 \oplus G_{2'},$$ where $$G_2 = \{g\in G\mid g\text{ has order a power of }2\},\qquad G_{2'}=\{g\in G\mid g\text{ has order relatively prime to }2\}.$$ Indeed, if the order of $g$ is $2^mq$ with $q$ odd, then we can express $1 = \alpha 2^m + \beta q$ for some integers $\alpha$ and $\beta$. Thus, $g=(g^q)^{\beta}(g^{2^m})^{\alpha}$, so $g\in\langle g^q\rangle\langle g^{2^m}\rangle$, and $g^q$ has order $2^m$, $g^{2^m}$ has order prime to $2$. Since $G_{2'}[2]=\{1\}$ and $2G_{2'}=G_{2'}$, the isomorphism on the $2'$ part is trivial: map everything to $1$. For the $2$-part, write $G_2$ as a direct sum of cyclic groups of order a power of $2$: $$G_2 = C_{2^{a_1}}\oplus\cdots\oplus C_{2^{a_m}},$$ then $2G_2 = 2C_{2^{a_1}}\oplus\cdots\oplus 2C_{2^{a_m}}$, and $G_2[2]=C_{2^{a_1}}[2]\oplus\cdots\oplus C_{2^{a_m}}[2]$. So it suffices to establish the isomorphism in the case of a cyclic group of order a power of $2$. And if $C_{2^m}$ is cyclic of order $2^m$, then $C_{2^m}[2]$ is cyclic of order $2$, $2C_{2^m}$ is cyclic of order $2^{m-1}$, and of course we have that $C_{2^m}/2C_{2^m}\cong C_2\cong C_{2^m}[2]$. The map that sends the generator $x$ to $x^{2^{m-1}}$ (the generator of $C_{2^m}[2]$) has kernel $\langle x^2\rangle = 2C_{2^m}$. Unfortunately, this isomorphism is not canonical/natural once we try to pull it back to $G$ itself, because there are many different ways of writing the decomposition of $G_2$ as a direct sum of cyclic groups of power-of-$2$ order. I think it is best to think of it as being in the same "family" as the following theorem: Theorem. If $G$ is a finite abelian group, and $H$ is a subgroup, then $G$ has a subgroup that is isomorphic to $G/H$. One can prove this, but there is no "universal" definition of an isomorphism that one can make and that will "always work" to give the isomorphism. It is rather "accidental", as it were. For example, one can also give the isomorphism by invoking the Structure Theorem: write $G$ as a direct sum of cyclic groups, $$G\cong C_{m_1}\oplus C_{m_2}\oplus\cdots\oplus C_{m_n}$$ where $1\lt m_1|m_2|\cdots|m_n$. Let $x_i$ be the generator of the $i$th cyclic factor. Then define $f\colon G\to G$ by $$f(x_i) = x_i^{m_i/\gcd(m_i,2)}.$$ Again, this maps to $G[2]$, with kernel $2G$: an element $g$ lies in the kernel if and only if its $i$th component is $x_i^a$ and $m_i|m_ia/\gcd(m_i,2)$, if and only if $\gcd(m_i,2)=1$ (in which case $x_i\in 2G$) or $\gcd(m_i,2)=2$ and $2|a$ (in which case $x_i^a\in 2G$), so every component of $g$ lies in $2G$, hence $g$ lies in $2G$, and conversely. However, note that we cannot make this a general definition; if you attempt to define $$\varphi(x) = x^{m/\gcd(m,2)}, \quad m=|x|$$ then this is not a homomorphism. For example, take $G=C_2\oplus C_4$, generated by $x$ and $y$; then $xy$ and $y$ both have order $4$, so we would have $\varphi(xy) = (xy)^2 = y^2$, $\varphi(y) = y^2$; but $(xy)y=xy^2$ has order $2$, so $\varphi(xy^2)=xy^2$, yet $\varphi(xy)\varphi(y) = y^2y^2 = 1$.
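For a concrete check of $G/2G\cong G[2]$ at the level of cardinalities, here is a brute-force sketch (the sample groups are arbitrary):

```python
from itertools import product

def check(orders):
    """For G = C_{m1} x ... x C_{mk}, compare |G[2]| with |G/2G|."""
    G = list(product(*(range(m) for m in orders)))
    # 2-torsion: elements g with 2g = 0
    torsion = [g for g in G if all(2 * gi % m == 0 for gi, m in zip(g, orders))]
    # the subgroup 2G, as a set of doubled elements
    doubled = {tuple(2 * gi % m for gi, m in zip(g, orders)) for g in G}
    return len(torsion), len(G) // len(doubled)

print(check([2, 4, 3]))    # (4, 4): both equal 2^(number of even factors)
print(check([8, 5]))       # (2, 2)
```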
On the Converse of Schur's Lemma for Modules
The first half of the argument works for all $R$-modules ($R$ commutative or not): if $\text{End}_{R}(M)$ is a division ring, then every $\varphi \in \text{End}_{R}(M)$ that is not the $0$ map has an inverse $\psi$. Then $\varphi$ is injective, since $\varphi(m)=0$ gives $m = \psi(\varphi(m)) = \psi(0) = 0$. And for all $m \in M$, $\psi(m)$ is defined, so $\varphi(\psi(m)) = m$ (inverses are two-sided) and $m \in \text{img}(\varphi)$. Therefore every nonzero $\varphi$ is an $R$-module isomorphism. However, this does not imply that $M$ is simple; the converse of Schur's lemma fails in general. The tempting argument (if $M$ had a nontrivial submodule $N$, then $M \to M/N \to M$ would be an endomorphism with nontrivial kernel) breaks down because there need not exist any embedding $M/N \to M$. A standard counterexample: $\mathbb{Q}$ as a $\mathbb{Z}$-module has $\operatorname{End}_{\mathbb{Z}}(\mathbb{Q}) \cong \mathbb{Q}$, a field, yet $\mathbb{Q}$ is not a simple $\mathbb{Z}$-module (e.g. $\mathbb{Z} \subset \mathbb{Q}$ is a proper nonzero submodule).
Characterization of a closable linear operator
We say that an operator is closed if it has a closed graph. For a (possibly unbounded) operator $T$ from $H$ to $K$ with domain $D(T)\subseteq H$, this means that the graph $\{(x,Tx) \mid x\in D(T)\}$ is closed in $H \times K$. Explicitly: whenever $x_n \to x$ in $H$ with $x_n\in D(T)$ and $Tx_n \to y$ in $K$, then $x\in D(T)$ and $y=Tx$. An operator is closable if the closure of its graph is still the graph of an operator; for this it is enough to check the condition at the single point $0$, namely that $x_n\to 0$ and $Tx_n\to y$ force $y=0$, and then the closure of $T$ is obtained from the closed graph in the typical way.
Can a matrix be positive semidefinite, even though it has negative leading principle minors?
Zero is nonnegative, and for the correct value of $a$ we can have $-a^2=0$ (namely $a=0$). Therefore a real value of $a$ exists for which the matrix is positive semidefinite. By contrast, a positive definite matrix would require the leading principal minors to be strictly positive rather than possibly zero, so positive definiteness is not possible for any real $a$ in this case.
Can at most $3$ distinct primes divide $n^3-n$, for infinitely many $n$?
Case 1: $n$ is a power of $2$. Then, with finitely many exceptions (as proved in the previous question), neither $n-1$ nor $n+1$ will be a power of $3$. Hence, each of them has a prime factor that is neither $2$ nor $3$. Now prove that those factors can't be the same (note that $\gcd(n-1,n+1)$ divides $2$), and you're done. Case 2: $n$ is a power of $3$. Similar to the previous case. Case 3: $n$ has a prime factor that is neither $2$ nor $3$. The only way this could work is if one of $\{n-1,n+1\}$ is a power of $2$, and the other a power of $3$. A variation of the previous question should prove that there are only finitely many such possibilities.
Describe the Galois Group of a field extension
The Galois group of a field extension $E/F$ is the group of automorphisms that fix the base field. That is, $\mathrm{Gal}(E/F)$ is formed as follows: $$ \mathrm{Gal}(E/F) = \{ \sigma \in \mathrm{Aut}(E) \mid \sigma(f) = f \; \forall \; f \in F \} $$ So you are actually fairly limited with regards to this group. First off, you can notice that the only elements that can be permuted around are the elements that are adjoined to form the splitting field. From this you can see that if you adjoin $n$ elements, you will have at most $n!$ elements in this group, for that's the maximum number of permutations of $n$ things. In other words, if you adjoin $n$ elements then $$ \mathrm{Gal}(E/F) \subset S_n $$ where $S_n$ is the symmetric group on $n$ elements. Naturally, the maximum number of elements one would have to adjoin to a base field for a degree $n$ polynomial is $n$, so for any degree $n$ polynomial over a base field $F$ we will have at most $n$ elements adjoined and thus $$ \mathrm{Gal}(E/F) \subset S_n $$ By no means are any of the above proofs of these facts, but hopefully they provide intuition. In order to prove these facts you have to dive into how splitting fields are only unique up to isomorphism. Another neat fact is that if $p(x)$ is a separable polynomial then $$ \mid \mathrm{Gal}(E/F) \mid = [E : F] $$ Now let's address the question at hand. You're working with a base field of $\mathbb{Q}$ and with $p(x) = x^2 - 3 \in \mathbb{Q}[x]$. Note that this is degree $2$, so that $$ \mathrm{Gal}(E/\mathbb{Q}) \subset S_2 $$ where $E$ is the splitting field of $p(x)$. From this you can see that $\mathrm{Gal}$ has at most $2$ elements. Note that it cannot have $1$ element, for there is a nontrivial automorphism here (exchanging $\pm \sqrt{3}$), so $\mathrm{Gal}$ must be all of $S_2$. Another way to go about this is to notice that $p(x)$ is separable, so we have $$ \mid \mathrm{Gal}(E/F) \mid = [E : F] = 2 $$ (note that showing $[E : F] = 2$ takes some explanation), so we get the only group with two elements, which is isomorphic to $S_2$. Lastly, from an entirely intuitive perspective, we can notice that we need to adjoin two elements to reach the splitting field, namely $\pm \sqrt{3}$, so the automorphisms can only permute these two elements. Naturally we have the identity in this group, an automorphism that sends $$ \sqrt{3} \to \sqrt{3} $$ but notice that we can also have another element that sends $$ \sqrt{3} \to -\sqrt{3} $$ for this last automorphism preserves the structure of the splitting field (in higher degree polynomials this can in fact fail; finding the Galois group of an arbitrary higher-degree polynomial turns out to be an incredibly deep subject, at least in my opinion. There is a lot to take into account that at the surface doesn't appear to be an issue). A few key points to keep in mind that would have helped me when first starting to learn this stuff: (1) $\mathrm{Gal}(E/F)$ must fix the base field. I cannot stress this enough. This appears to be the basic misunderstanding you have (based on how you asked the question). It cannot, whatsoever, by definition, permute anything in the base field (ever). (2) $\mathrm{Gal}(E/F) \subset S_n$; it is not necessarily the case that $\mathrm{Gal}(E/F) = S_n$. For me, at first this seemed a little weird; why can't we just permute any of the roots?
A good example to think about would be the polynomial $(x^2 -3)(x^2-2)$. We cannot send $\sqrt{3} \to \sqrt{2}$, for then the intermediate fields here would become messed up; that is, an automorphism fixing $\mathbb{Q}$ must send $\sqrt{3}$ to another root of $x^2-3$, and in some sense when you send $\sqrt{3} \to \sqrt{2}$ you're implying that $\sqrt{2}$ is also a root of $x^2-3$, but it actually is not, so you cannot have this automorphism in your Galois group. This is a hard topic in abstract algebra. It may appear like a simple question at first, but it takes some thought! Don't get too bogged down by all the notation and definitions. At some point I'm sure you'll gain an intuitive understanding for this very difficult subject. Hopefully this helps; let me know if you need more explanation!
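As a concrete check that $\sqrt{3}\mapsto-\sqrt{3}$ really does preserve the field structure, here is a small sketch; elements of $\mathbb{Q}(\sqrt3)$ are represented as pairs $(a,b)$ standing for $a+b\sqrt3$ (my own encoding, just for the test):

```python
import random

# (a, b) represents a + b*sqrt(3); sigma is conjugation sqrt(3) -> -sqrt(3)
def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):
    a, b = x
    c, d = y
    return (a * c + 3 * b * d, a * d + b * c)

def sigma(x):
    return (x[0], -x[1])   # elements with b == 0 (the base field Q) are fixed

for _ in range(1000):
    x = (random.randint(-9, 9), random.randint(-9, 9))
    y = (random.randint(-9, 9), random.randint(-9, 9))
    assert sigma(add(x, y)) == add(sigma(x), sigma(y))   # additive
    assert sigma(mul(x, y)) == mul(sigma(x), sigma(y))   # multiplicative
print("conjugation is an automorphism fixing Q")
```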
Dual of the formula for Euclidean triangles
You may derive a similar expression for hyperbolic triangles by combining the hyperbolic analogue of Heron's formula $$ \tan\frac{\Delta}{4}=\sqrt{\tanh\frac{s}{2} \tanh\frac{s-a}{2}\tanh\frac{s-b}{2}\tanh\frac{s-c}{2}},\qquad s=\frac{a+b+c}{2} \tag{1}$$ or the expression for the area in terms of the angular defect $$ \Delta = \pi-(A+B+C)\tag{2}$$ together with the hyperbolic laws of sines and cosines: $$ \frac{\sin A}{\sinh a}=\frac{\sin B}{\sinh b}=\frac{\sin C}{\sinh c},\qquad \cosh c=\cosh a \cosh b-\sinh a\sinh b\cos C\tag{3}$$ Given $a,b,C$, you may derive $c$, then $s$, from the law of cosines, then plug $a,b,c,s$ into $(1)$. You can do the same in the Euclidean context.
Is $f_n(x)=\frac{x}{n}\sin\frac{x}{n}$ uniformly convergent
The functions converge pointwise to $0$ for all $x\in \mathbb{R}$, because for every $x\in \mathbb{R}$ $$|f_{n}(x)|=\Bigg| \frac{x}{n} \cdot \sin \Big( \frac{x}{n} \Big) \Bigg| \leq \Big| \frac{x}{n} \Big| \rightarrow_{n\to \infty}0. $$ However, the convergence is not uniform, because for, e.g., $\epsilon=1$ and every $n\in \mathbb{N}$ we can choose $x_0 := \frac{n\pi}{2}$: $$ f_{n}(x_0)= \frac{ \frac{n\pi}{2} }{n} \cdot \sin \Big( \frac{ \frac{n\pi}{2} }{n} \Big) = \frac{\pi}{2}\sin\frac{\pi}{2} = \frac{\pi}{2} >1. $$ So $f_n\rightarrow 0$ pointwise but not uniformly.
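The witness $x_0=\frac{n\pi}{2}$ is easy to check numerically (a quick sketch):

```python
import math

def f(n, x):
    return (x / n) * math.sin(x / n)

# the sup-norm never drops: at x0 = n*pi/2 the value is exactly pi/2
for n in (1, 10, 100, 10_000):
    print(n, f(n, n * math.pi / 2))   # ~1.5707963 > 1 for every n
```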
The exercise from L. Breiman, "Probability", page 76
$$S_{n+1}=S_n + X_{n+1}$$ $$S_{n+2}=S_{n+1} + X_{n+2}$$ $$\vdots$$ so that, since $X_{n+1},X_{n+2},\dots$ are independent of $(X_1,S_n)$, $$ E(X_1 \mid S_n, S_{n+1}, \dots )= E(X_1\mid S_n,X_{n+1},X_{n+2}, \dots) =E(X_1\mid S_n)$$ By symmetry, i.e. exchangeability of the $X_i$, $$E(X_1\mid S_n) =E(X_j\mid S_n) , \quad j \in \{1,2,\dots,n\}$$ $$n E(X_1\mid S_n) =\sum_{j=1}^n E(X_j\mid S_n) = E(S_n\mid S_n)=S_n $$ $$ E(X_1 \mid S_n, S_{n+1}, \dots )=E(X_1\mid S_n) = \frac{S_n}{n}$$
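A simulation for i.i.d. $\pm1$ steps illustrates the identity $E(X_1\mid S_n)=\frac{S_n}{n}$ (a sketch; the parameters are arbitrary):

```python
import random
from collections import defaultdict

n, trials = 5, 400_000
stats = defaultdict(lambda: [0.0, 0])          # S_n -> [running sum of X_1, count]
for _ in range(trials):
    xs = [random.choice((-1, 1)) for _ in range(n)]
    s = sum(xs)
    stats[s][0] += xs[0]
    stats[s][1] += 1

for s in sorted(stats):
    total, cnt = stats[s]
    print(s, round(total / cnt, 3), s / n)     # empirical E(X_1 | S_n = s) vs s/n
```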
Flows on a manifold as a group action.
The answer is no, and the reason may be concisely stated: for any flow $\phi: \Bbb R \times M \to M \tag 1$ each $\phi_t: M \to M, \; \forall x \in M \; \phi_t(x) = \phi(t, x) \tag 2$ is homotopic to the identity map, via a homotopy $H(s, t, x) = \phi_{st}(x), \; s \in [0, 1]; \tag 3$ it is easily seen that $H(0, t, x) = \phi_0(x) = x, \; H(1, t, x) = \phi_t(x). \tag 4$ For many $M$ there are, however, diffeomorphisms $\psi:M \to M$ which are not homotopic to the identity; an example is provided by the antipodal map $\psi(x) = -x$ on $S^{2n}$ for any positive integer $n$.
2 classes in the same classroom each with 100 seats and the same 100 students, find the probability that no one has the same seat for both classes
The problem is known as derangements, usually presented as passengers with tickets randomly assigned seats on a plane, or guests receiving the wrong hats. The solution is an application of the inclusion-exclusion principle. Obviously the total number of seatings is $100!$. How many do we need to exclude to get all the 'wrong' ones? Count seatings where at least one specified student keeps the same seat: there are $\binom{100}{1}$ ways to pick that student, and for each there are $99!$ seatings of the others. After you've subtracted this from $100!$, you need to add back the over-counted cases where two specified students keep their seats, $\binom{100}{2} \times 98!$, and so on. Can you take it from here?
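Carrying the inclusion-exclusion to the end gives the alternating sum $\sum_{k=0}^{100}\frac{(-1)^k}{k!}$ for the probability, which is easy to evaluate exactly (a sketch):

```python
from fractions import Fraction
import math

n = 100
# inclusion-exclusion collapses to: P(no fixed seat) = sum_{k=0}^{n} (-1)^k / k!
p = sum(Fraction((-1) ** k, math.factorial(k)) for k in range(n + 1))
print(float(p))        # ~0.3678794412
print(1 / math.e)      # the limiting value 1/e
```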
$\lim_{(x,y)\to (0,0)} \frac{\sin(x^2+y^2)}{x^2+y^2}$
Hint: Change to polar coordinates and use a well-known limit. One does not even need to go to polar coordinates, but that is a useful move in general when the denominator is $x^2+y^2$. (Your solution is correct.)