Show that $X \sim Y$ implies $E[f(X)] = E[f(Y)]$ in any subset of $\Omega$
Is this what you mean? No, it need not be the same over every set. Imagine we roll two dice. Let $X$ be the first outcome, $Y$ the second. The outcomes are independent; they are identically distributed, $X \sim Y$. Of course (as noted) on the whole space $$ \mathbb E[X] = \mathbb E[Y] = \frac{7}{2} $$ Now consider the set $A = \{X = 1\}$. Then $$ \mathbb E[X\mathbf1_A] = 1\cdot\frac{1}{6}=\frac{1}{6},\qquad \mathbb E[Y\mathbf1_A] = \frac{1}{6}\cdot\frac{7}{2} = \frac{7}{12} $$
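An exact enumeration in Python confirms the two values:

    from fractions import Fraction

    # All 36 equally likely outcomes of two dice: X is the first, Y the second.
    outcomes = [(x, y) for x in range(1, 7) for y in range(1, 7)]
    p = Fraction(1, 36)

    # A = {X = 1}: compare E[X 1_A] with E[Y 1_A].
    E_X_on_A = sum(p * x for (x, y) in outcomes if x == 1)
    E_Y_on_A = sum(p * y for (x, y) in outcomes if x == 1)
    print(E_X_on_A, E_Y_on_A)  # 1/6 and 7/12, as computed above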
Kronecker delta - can I change one index and not another one in the same expression?
Even though the Kronecker delta is sometimes treated like an index substitution symbol, it is actually defined as $$ \delta_j^k = \begin{cases} 1, & j=k\\ 0, & j\neq k \end{cases} $$ Now consider what happens when summing over one of those indices $$ \sum_\mu \delta^\mu_\eta (\partial_\mu g_{\eta\nu})=\partial_\eta g_{\eta\nu} $$ since this is the only non-zero summand. If we sum over the other index as well, then $$ \sum_\eta\sum_\mu \delta^\mu_\eta (\partial_\mu g_{\eta\nu})=\sum_\eta \partial_\eta g_{\eta\nu}=\partial_1 g_{1\nu}+\partial_2 g_{2\nu}+\dotsb $$ which is the same result you would get for summing over $\eta$ first and then over $\mu$. Observe that in both cases this is different from $\partial_\eta g_{\mu\nu}$. Finally, as I alluded to in my comment and as Branimir Ćaćić made explicit, in Einstein notation you sum only over those indices that appear both as an upper and a lower index. The other indices are considered constant throughout the summation.
Show that any subset of a metric space is open
$iii)$ With the function $d_2$ you defined as a "metric" on the set of integrable functions on $[0,1]$, you have that both the characteristic function of the $0$-singleton, $\chi_{\{0\}}$, and the constant function $0$ are integrable; moreover $d_2(\chi_{\{0\}},0)=0$, but they are not equal. $iv)$ Each singleton in a metric space is closed, and finite unions of closed sets are still closed, so in a finite metric space every set is closed (hence every set is open too).
Stopping the Coronavirus puzzle
Claim: On an $n$ by $n$ grid, if there are fewer than $n$ squares initially infected, then the infection will not spread to the entire region. Define an edge of a square to be a frontier edge if one side of the edge is infected but the other side is uninfected. (The region outside the entire $n$ by $n$ grid is considered to always be uninfected.) Key lemma: As the infection propagates, the number of frontier edges can never increase. Proof of key lemma: Whenever the infection spreads to a new square, at least two of its neighbors were already infected, hence you lose at least two frontier edges and gain at most two. End of proof. Proof of claim: Suppose the infection spreads to the entire region. At that time, the number of frontier edges is $4n$ (the entire outer edge of the board). By the key lemma, the number of initial frontier edges must be at least $4n$. Since each initially infected square contributes at most $4$ frontier edges, there must have been at least $n$ initial squares infected. Put another way, if there were fewer than $n$ squares initially infected, then the infection will not propagate to the entire region. (By the way, there are many initial configurations of size $n$ that lead to the whole board becoming infected, not just the diagonals.)
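A small Python sketch of the argument (assuming the usual rule that a square becomes infected once at least two of its neighbours are): it spreads random initial infections to a fixpoint and checks that the frontier-edge count never increases along the way.

    import random

    def frontier_edges(infected):
        # Count edges with an infected square on exactly one side; the outside
        # of the grid counts as uninfected, so border sides are included.
        deltas = ((1, 0), (-1, 0), (0, 1), (0, -1))
        return sum((r + dr, c + dc) not in infected
                   for (r, c) in infected for (dr, dc) in deltas)

    def spread(infected, n):
        # Repeatedly infect every square with at least two infected neighbours.
        deltas = ((1, 0), (-1, 0), (0, 1), (0, -1))
        infected = set(infected)
        while True:
            new = [(r, c) for r in range(n) for c in range(n)
                   if (r, c) not in infected
                   and sum((r + dr, c + dc) in infected for (dr, dc) in deltas) >= 2]
            if not new:
                return infected
            for sq in new:  # key lemma: each new infection cannot raise the count
                before = frontier_edges(infected)
                infected.add(sq)
                assert frontier_edges(infected) <= before

    n = 8
    random.seed(1)
    cells = [(r, c) for r in range(n) for c in range(n)]
    for _ in range(200):
        final = spread(random.sample(cells, n - 1), n)
        assert len(final) < n * n  # n-1 seeds never infect the whole board
    print("frontier-edge count was monotone in every trial")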
If f is not surjective then f|C is not surjective.
It is easier to prove the contrapositive of this statement: If $f\restriction_C$ is surjective, then $f$ is surjective. Given any $b \in B$, there exists $a \in C$ such that $f\restriction_C(a) = b$, hence $f(a) = b$, and $f$ is surjective. $\blacksquare$ If you don't believe in contraposition, you can reverse the logic of the above proof: Suppose that $f$ is not surjective; then $\exists b \in B\mid \forall a \in A, f(a) \neq b$. Hence $\forall a \in C, f\restriction_C(a) \neq b$, so $f\restriction_C$ is not surjective. $\blacksquare$
nth derivative of a finite amount of composite functions
For the following it is easier to begin with functions which have no constant term, so say $$ \begin{eqnarray} f_1(x) &=&a_1 x + b_1 x^2 + c_1 x^3 + ... \\ f_2(x) &=&a_2 x + b_2 x^2 + c_2 x^3 + ... \\ ... \\ f_n(x) &= &a_n x + b_n x^2 + c_n x^3 + ... \\ \end{eqnarray} $$ This allows us to use the concept of "Carleman" or "Bell" matrices for the composition of functions. We define a Carleman matrix for each of the above functions, for instance $F_k$ for $f_k(x)$, such that the columns contain the coefficients of the consecutive powers of $f_k$, beginning with those of $f_k(x)^0, f_k(x), f_k(x)^2,...$ It is obvious that we deal with infinite-sized matrices! But the matrices come out to be triangular, so we can later operate with them. For instance, $F_1$ looks like $$ F_1 = \small \begin{bmatrix} 1 & . & . & . & . & \cdots \\ 0 & a_1 & . & . & . & \cdots \\ 0 & b_1 & a_1^2 & . & . & \cdots \\ 0 & c_1 & 2 a_1 b_1 & a_1^3 & . & \cdots \\ 0 & d_1 & b_1^2+2 c_1 a_1 & 3 a_1^2 b_1 & a_1^4& \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix} $$ We see in the columns the coefficients of the powers of the formal power series of $f_1$. Now with a "Vandermonde" vector $V(x)=[1,x,x^2,x^3,...]$ the dot product $V(x) \cdot F_1$ gives the evaluation of the function $f_1(x)$ and of all its (nonnegative) powers: $$ V(x) \cdot F_1 = [1,f_1(x),f_1(x)^2,f_1(x)^3,...] = V(f_1(x)) $$ The clue here is that the result is again of "Vandermonde" type, so we can proceed with the next function, and so on. Then the composition of the functions $g(x) = f_3 \circ f_2 \circ f_1 (x)$ simply has the Carleman matrix $G$ given by $$ G = F_1 \cdot F_2 \cdot F_3 $$ To experiment with this it is easy, in the software Pari/GP, to define appropriate vector and matrix functions for some finite size $n$:

    f1(x)= 'a1*x + 'b1*x^2+'c1*x^3+'d1*x^4+ O(x^5) \\ actually you would
    f2(x)= 'a2*x + 'b2*x^2+'c2*x^3+'d2*x^4+ O(x^5) \\ give some concrete values
    f3(x)= 'a3*x + 'b3*x^2+'c3*x^3+'d3*x^4+ O(x^5) \\ for the symbols 'a,'b,...

    V(x,n)=vector(n,r,x^(r-1))
    Carleman(x,n)=matrix(n,n,r,c,...)

where the latter function can accept a function written as a truncated power series, and then

    F1 = Carleman(f1(x))
    F2 = Carleman(f2(x))
    F3 = Carleman(f3(x))

    G = F1 * F2 * F3
    print(Ser(G[,2])) \\ shows the notation of the power series of g(x)

and approximate results can be computed, if the series converge well and the size of the matrices and vectors is sufficient, just by

    x = 0.5 \\ give some initial value which gives reasonable approximation
    y1 = V(x)*F1[,2]  \\ shows evaluation of f1 at some value x
    y2 = V(y1)*F2[,2] \\ shows evaluation of f2 at the value y1=f1(x)
    y3 = V(y2)*F3[,2] \\ shows evaluation of f3 at the value y2=f2(f1(x))

    g = V(x) * G[,2] \\ g and y3 should be identical

(If you want to keep things symbolic then reduce the matrix size to 4 or 5 - the composition creates a huge pile of terms!) It is hopeless to try to write out the resulting coefficients for an indeterminate number of functions $f_k(x)$, but for the composition of, say, 3 functions, the top left part of $G = F_1 \cdot F_2 \cdot F_3$ looks like $$ \Tiny \begin{array}{cccc} 1 & . & . & . \\ 0 & a_3 a_2 a_1 & . & . \\ 0 & a_3 a_2 b_1+a_3 a_1^2 b_2+a_2^2 a_1^2 b_3 & a_3^2 a_2^2 a_1^2 & . \\ 0 & (2 a_3 a_1 b_2+2 a_2^2 a_1 b_3) b_1+2 a_2 a_1^3 b_3 b_2+(c_3 a_2^3+c_2 a_3) a_1^3+c_1 a_3 a_2 & 2 a_3^2 a_2^2 a_1 b_1+2 a_3^2 a_2 a_1^3 b_2+2 a_3 a_2^3 a_1^3 b_3 & a_3^3 a_2^3 a_1^3 \end{array} $$ The $n$-th derivatives of $g(x)$ are now in the second column, for instance $ \displaystyle \qquad g'(x) = a_3 a_2 a_1 \\ \qquad g''(x)/2! = a_3 a_2 b_1+a_3 a_1^2 b_2+a_2^2 a_1^2 b_3 \\ \qquad g'''(x)/3! = (2 a_3 a_1 b_2+2 a_2^2 a_1 b_3) b_1+2 a_2 a_1^3 b_3 b_2+(c_3 a_2^3+c_2 a_3) a_1^3+c_1 a_3 a_2 \\ $ Additional remark: in the case of self-composition, where all the $f_k(x)$ are the same function, we have functional iteration, and for this I've done a scheme for indeterminate iteration height (but this was not asked for in the OP).
What's the sum of $\sum \limits_{k=1}^{\infty}\frac{t^{k}}{k^{k}}$?
Let's define : $\displaystyle f(t)=\sum_{k=1}^{\infty}\frac{t^{k}}{k^{k}} $ then as a sophomore's dream we have : $\displaystyle f(1)=\sum_{k=1}^{\infty}\frac 1{k^{k}}=\int_0^1 \frac{dx}{x^x}$ (see Havil's nice book 'Gamma' for a proof) I fear that no 'closed form' is known for this series (nor for the integral). Concerning an asymptotic expression for $t \to \infty$ you may (as explained by Ben Crowell) use Stirling's formula $k!\sim \sqrt{2\pi k}\ (\frac ke)^k$ to get : $$ f(t)=\sum_{k=1}^{\infty}\frac{t^k}{k^k} \sim \sqrt{2\pi}\sum_{k=1}^{\infty}\frac{\sqrt{k}(\frac te)^k}{k!}\sim \sqrt{2\pi t}\ e^{\frac te-\frac 12}\ \ \text{as}\ t\to \infty$$ EDIT: $t$ was missing in the square root Searching more terms (as $t\to \infty$) I got : $$ f(t)= \sqrt{2\pi t}\ e^{\frac te-\frac 12}\left[1-\frac 1{24}\left(\frac et\right)-\frac{23}{1152}\left(\frac et\right)^ 2-O\left(\left(\frac e{t}\right)^3\right)\right]$$ But in 2001 David W. Cantrell proposed the following asymptotic expansion for the gamma function (see too here and the 1964 work from Lanczos) : $$\Gamma(x)=\sqrt{2\pi}\left(\frac{x-\frac 12}e\right)^{x-\frac 12}\left[1-\frac 1{24x}-\frac{23}{1152x^2}-\frac{2957}{414720x^3}-\cdots\right]$$ so that we'll compute : $$\frac{f(t)}{\Gamma\left(\frac te\right)}\sim \sqrt{t}\left(\frac {e^2}{\frac te-\frac 12}\right)^{\frac te-\frac 12}$$ and another approximation of $f(t)$ is : $$f(t)\sim \sqrt{t}{\Gamma\left(\frac te\right)}\left(\frac {e^2}{\frac te-\frac 12}\right)^{\frac te-\frac 12}$$
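A quick numerical check of the leading term, summing the series in log space (the series converges quickly since $k^k$ eventually dominates $t^k$):

    import math

    def f(t, terms=5000):
        # sum of t^k / k^k, computed as exp(k log t - k log k) to avoid overflow
        return sum(math.exp(k * (math.log(t) - math.log(k)))
                   for k in range(1, terms))

    for t in (10, 50, 200):
        leading = math.sqrt(2 * math.pi * t) * math.exp(t / math.e - 0.5)
        print(t, f(t) / leading)  # ratio tends to 1 as t grows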
How do I calculate the new matrix from the new basis?
$$T\begin{pmatrix}0&0\\1&1\end{pmatrix}:=\begin{pmatrix}\;\;2&\;\;2\\-4&-4\end{pmatrix}=\color{red}{-5}\begin{pmatrix}0&0\\1&1\end{pmatrix}+\color{red}2\begin{pmatrix}1&0\\0&0\end{pmatrix}+\color{red}{(-1)}\begin{pmatrix}\;\;0&0\\-1&1\end{pmatrix}+\color{red}2\begin{pmatrix}0&1\\0&1\end{pmatrix}$$ and that's how you get the wanted matrix's first column. Continue from here applying $\;T\;$ to the other members of that basis and writing the result as a linear combination of that same basis For (c): write that vector as a linear combination of the basis: $$\begin{pmatrix}\pi&3\\\sqrt2&0\end{pmatrix}=a\begin{pmatrix}0&0\\1&1\end{pmatrix}+b\begin{pmatrix}1&0\\0&0\end{pmatrix}+c\begin{pmatrix}\;\;0&0\\-1&1\end{pmatrix}+d\begin{pmatrix}0&1\\0&1\end{pmatrix}$$ For example, it must clearly be that $\;b=\pi\;$ (why?), etc.
Proving $\arctan x > \frac x{1+x^2}, \forall x >0$ with a helper function
Your proof is not valid because $f''(0) = 0$ does not imply that $x = 0$ is not an extremum. As already said in the other answers, one can easily see that $f' > 0$ and therefore $f$ is strictly increasing. But you can prove the inequality also by applying the Mean Value Theorem directly to $\arctan$, without a helper function. For $x > 0$: $$ \arctan(x) = \arctan(0) + (x- 0)\arctan'(c) \quad \text {for some } c \in (0, x) \\ = \frac {x}{1 + c^2} > \frac {x}{1 + x^2} \, . $$
Proof/Counterexample: If $z$ is a complex number and $z\notin \mathbb Q$, then $\mathbb Q(z)=\mathbb Q(z^3,z^5)$.
No, you are way off base. Your reasoning doesn't make much sense (see my comments above). In fact you start off on the wrong foot. The trivial inclusion is $\mathbb{Q}(z^3,z^5) \subseteq \mathbb{Q}(z)$. Indeed, it's clear that both $z^3$ and $z^5$ can be written as rational fractions of $z$: they're both polynomials in $z$. To prove the other inclusion $\mathbb{Q}(z) \subseteq \mathbb{Q}(z^3,z^5)$, you need to prove that $z$ can be written in terms of $z^3$ and $z^5$ using only addition, multiplication, division, and multiplication by a rational number. You can do that without using the representation $z=a+bi$, it's purely formal: since $5 \times 2 - 3 \times 3 = 1$, you find that: $$z = z^1 = z^{5 \times 2 - 3 \times 3} = \frac{(z^5)^2}{(z^3)^3}.$$
SVD decomposition in Weighted Least Square
The weirdness is due to the fact that $X$ is not always invertible, that is, $\Sigma^{-1}$ does not always exist. Your line of reasoning is correct if $X$ is invertible. This may be comprehended alternatively as follows: when $X$ is invertible, there always exists an optimal solution $c^{*} \triangleq X^{-1}b$ such that $Xc^{*} = b$. Consequently, it is also true that $WXc^{*} = Wb$ for all $W$. Therefore, $c^{*}$ is the solution for both LS and WLS. If $X$ is not invertible, this is not generally true. An exception is when $W = k I$ for some nonzero constant $k \in \mathbb{C}$, in which case we weight everything equally.
How is one supposed to carry sign in Lie brackets? Is it bilinearity?
Yes, the Lie bracket is bilinear. So in particular $[x \frac{\partial}{\partial y}, -z \frac{\partial}{\partial y}] = -[x \frac{\partial}{\partial y}, z \frac{\partial}{\partial y}]$.
A way to find this shaded area without calculus?
The area is equal to the difference between the areas of two lenses. It is easy to find the area of lenses like the one I did in this question before: How to find the shaded area
Probability of having a disease
You want $P(D|+) = \frac{P(+|D)P(D)}{P(+)}=\frac{P(+|D)P(D)}{P(+|D)P(D) + P(+|\sim D)P(\sim D)}$
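For instance, with hypothetical numbers (the prevalence $P(D)=0.01$, sensitivity $P(+|D)=0.95$ and false-positive rate $P(+|{\sim}D)=0.05$ below are assumptions, not values from the question):

    # Hypothetical inputs -- replace with the numbers from the actual problem.
    p_D = 0.01           # prevalence P(D)
    p_pos_D = 0.95       # sensitivity P(+|D)
    p_pos_notD = 0.05    # false positive rate P(+|~D)

    p_pos = p_pos_D * p_D + p_pos_notD * (1 - p_D)  # total probability P(+)
    print(p_pos_D * p_D / p_pos)                    # P(D|+) ~ 0.161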
A die is thrown two times. $A$ is an event, where sum of obtained numbers is less than 6, and $B$ - that both numbers are even. What is $P(A\mid B)$?
Hint: it would be easier to use $$p(A\mid B) = \frac{p(A\cap B)}{p(B)}$$
Showing a set is a subset of another set
We need to show that the following holds: $(A∪B)⊆(A∪B∪C)$ Proof: Since by definition the above statement means that $(x∈A∨x∈B) \rightarrow (x∈A∨x∈B∨x∈C)$, we now have a new goal. Since our new goal has the form of a conditional, we start by assuming that the antecedent is true, that is, we assume that $(x∈A∨x∈B)$ is true. Then if $(x∈A∨x∈B)$ is true so is $(x∈A∨x∈B)∨x∈C$ (this is formalized in natural deduction as the disjunction introduction rule). This shows that $(x∈A∨x∈B) \rightarrow ((x∈A∨x∈B)∨x∈C)$ and, therefore, that $(A∪B)⊆(A∪B∪C)$ holds as required. In my personal opinion, it is always good practice not to omit parentheses when your statement starts getting longer (it avoids ambiguity and makes the unique parsing of a formula explicit). Let's take for instance the formula you just stated (parentheses are now explicit): $(x∈A∨x∈B) \rightarrow (x∈A∨x∈B∨x∈C)$ This has an easier reading than the one omitting them: $x∈A∨x∈B \rightarrow x∈A∨x∈B∨x∈C$ doesn't it?
Finding a generating set
Obtain the solution for $2x+3y = 5z$ where $x,y \in \mathbb{Z}$ using Bezout's lemma. A trivial solution is given by $$(x,y) = (z,z)$$ where $z \in \mathbb{Z}$. Hence, all the solutions to $2x+3y = 5z$ are given by $$(x,y,z) = (z+3a,z-2a,z)$$ where $a,z \in \mathbb{Z}$.
What is the formal definition of ordered tree?
Let $T$ be such a tree and let us write $xTy$ for the unique path in $T$ between vertices $x$ and $y$. Let us denote by $r$ the root vertex of $T$. We can impose a partial ordering on $V(T)$ by letting $x \leq y$ iff $x \in rTy$, for all $x,y \in V(T)$. In such a structure every leaf node is a maximal element of $T$. However, we can endow our tree structure with different orderings, e.g. horizontal orderings. Have a look at this collection of different partial orderings for rooted trees on Wikipedia.
Normalized partial sums of normal random variables are dense in $\mathbb{R}$
In fact we don't even have to assume that the $X_i$ are normally distributed. Just assume they have mean zero and variance $1$. Fix some $a<b \in \Bbb R$. By reverse Fatou's Lemma we see that $$P\big( \frac{S_n}{\sqrt{n}} \in [a,b] \;\;\text{ for infinitely many $n$ } \big) \geq \limsup_{n \to \infty} P\big(\frac{S_n}{\sqrt{n}} \in [a,b] \big) = P\big(Z \in [a,b] \big)>0$$ where $Z$ is normally distributed, and the equality after the limsup follows by the central limit theorem. On the other hand, note that the event $E_{a,b} :=\big\{ \frac{S_n}{\sqrt{n}} \in [a,b] \;\;\text{ for infinitely many $n$ } \big\}$ is an exchangeable event and therefore the Hewitt-Savage 0-1 Law together with the above computation implies that $P(E_{a,b})=1$. Finally, note that $$P\big( \{ S_n/\sqrt{n}: n\in \Bbb N\} \text{ is dense in } \Bbb R \big) = P \bigg( \bigcap_{\substack{a<b \\ a,b \in \Bbb Q}} E_{a,b} \bigg) = 1$$
Limit of a floor sum
It works if you replace your $ A_n $ with the following: $$ A_n = \dfrac {(x-1) + (2x - 1) + ... + (nx - 1)} {n^2} = \dfrac {(x + 2x + ... + nx) - n} {n^2} = C_n - \dfrac {1} {n}$$ Then $$ \lim_{n \to \infty}A_n = \lim_{n \to \infty} \left(C_n - \dfrac {1}{n}\right) =\lim_{n \to \infty} C_n = \dfrac {x}{2} $$
Conditional Probability Question
I don't think you can find a fixed number for $P(D|(A\cap B))$ but you should be able to put bounds on it. Since $P(A \cap D)=0.18$ and $P(B \cap D)=0.19$, you have $P(B^c \cap D) = 0.01$ and so $0.17 \le P(A \cap B \cap D) \le 0.18$, and $P(A \cap B \cap D^c) \le P(A \cap D^c) =0.16$. So $$ \frac{17}{33} \le P(D|(A\cap B)) \le 1$$ and these bounds are tight.
Continuous functions define a topology?
The first statement can be patched up using functions into the Sierpinski Space $\{0,1\}$ with topology $\{\varnothing , \{1\}, \{0,1\}\}$. Since a continuous function $f: X \to \{0,1\}$ can be identified with the open set $f^{-1}(1)$ we see the continuous functions into the Sierpinski space are the same thing as the open subsets of $X$.
Maximum principle for minimal hypersurfaces
I think that the following argument should give an answer to my own question. We prove the following theorem: Let $ f_{1}:M_{1}\rightarrow \mathbb{R}^{n} $ and $ f_{2}:M_{2}\rightarrow \mathbb{R}^{n} $ be two minimal immersed hypersurfaces. Suppose $ M_{1} $ complete. If $ f_{1}(M_{1})\cap f_{2}(M_{2}) $ contains at least a point where $ f_2(M_2) $ lies locally on one side of $ f_1(M_1) $, then $ f_{2}(M_{2})\subseteq f_{1}(M_{1}) $. In the proof we use the following two results: (Lemma 1) Let $ \Omega\subseteq \mathbb{R}^{n} $ be a bounded connected open set and let $ u_{1}, u_{2}:\Omega \rightarrow \mathbb{R} $ be two (smooth) solutions of the minimal equation. Suppose $ u_{1}, u_{2} \in C^{0}(\overline{\Omega}) $. Then $ v=u_{1}-u_{2} $ satisfies an equation of the form $ \sum _{ij}a_{ij}\frac{\partial^{2}v}{\partial x_{i} \partial x_{j}}+ \sum_{i}b_{i}\frac{\partial v}{\partial x_{i}}=0 $. Moreover it holds that $\sum _{ij}a_{ij}(x)\xi_{i}\xi_{j}\geq \mu^{2}|\xi|^{2} $ for every $ x \in \Omega $ and $ \xi \in \mathbb{R}^{n} $. As an immediate consequence of Lemma 1 and the Maximum principle for elliptic equations we obtain: (Lemma 2) Let $ \Omega \subseteq \mathbb{R}^{n} $ be a bounded open neighbourhood of the origin and let $ u_{1},u_{2}:\Omega\rightarrow \mathbb{R} $ be two solutions of the minimal equation such that $ u_{1}(0)=u_{2}(0) $ and $ u_{1} \leq u_{2} $ in $ \Omega $. Then $ u_{1}=u_{2} $. (Proof of the Theorem) From Lemma 2 we easily conclude that there exist two open subsets $ V \subseteq M_{2} $ and $ U \subseteq M_{1} $ such that $ f_{1}(U)=f_{2}(V) $. Now let $ \mathscr{S} $ be the collection of open subsets of $ M_{2} $ such that for each $ V \in \mathscr{S} $ there exists an open subset $ U \subseteq M_{1} $ such that $ f_{1}(U)=f_{2}(V) $. Clearly $ \mathscr{S}\neq \varnothing $. Moreover observe that if $ f_{2}(V)=f_{1}(U) $ then $ f_{2}(\overline{V})\subseteq f_{1}(\overline{U}) $. In fact let $ q \in \partial V $ and let $ q_{n} \in V $ be such that $ q_{n}\rightarrow q $. Let $ p_{n}\in U $ be such that $ f_{1}(p_{n})=f_{2}(q_{n}) $. Since $ f_{2}^{-1}\circ f_{1}:U \rightarrow V $ is an isometry we have that $ \{p_{n} \} $ is a Cauchy sequence in $ M_{1} $. Since $ M_{1} $ is complete there exists $ p \in M_{1} $ such that $ p_{n}\rightarrow p $. Therefore $ p \in \overline{U} $ and $ f_{1}(p)=f_{2}(q) $. Let $ V_{0} \in \mathscr{S} $ be a maximal set and let $ U_{0}\subseteq M_{1} $ be such that $ f_{1}(U_{0})=f_{2}(V_{0}) $. If $ \overline{V}_{0}=M_{2} $ then $ f_{2}(M_{2})=f_{2}(\overline{V}_{0}) \subseteq f_{1}(\overline{U}_{0})\subseteq f_{1}(M_{1}) $. We now suppose $ \overline{V}_{0}\subsetneq M_{2} $ and we obtain a contradiction. Let $ q \in \partial V_{0} $ and let $ p \in \partial U_{0} $ be such that $ f_{1}(p)=f_{2}(q)=x $. Note that $ T_{p}(M_{1})=T_{q}(M_{2}) $ (as subspaces of $ T_{x}\mathbb{R}^{n} $). Let $ u_{1}, u_{2}:\Omega \rightarrow \mathbb{R} $ be the functions representing $ f_{1} $ and $ f_{2} $ in a neighbourhood of $ x $ ($ \Omega \subseteq \mathbb{R}^{n-1} $ is a bounded open subset). From Lemma 1 the difference function $ v=u_{1}-u_{2} $ is a solution of a linear elliptic equation. Moreover, since $ f_{1}(U_{0})=f_{2}(V_{0}) $, there exists $ \Omega' \subseteq \Omega $ such that $ v|_{\Omega'}=0 $. By unique continuation, $ v=0 $ in $ \Omega $. Therefore we can find $ V' \in \mathscr{S} $ such that $ V_{0} \subsetneq V' $, contradicting the maximality of $ V_{0} $.
Is it possible to apply cofactor expansion to a $2$x$2$ matrix? Why is the determinant of a constant seemingly one?
You've got the wrong idea about what the cofactor is. In the cofactor expansion $$\det\begin{bmatrix}12&3\\16&5\end{bmatrix}=12\det(5)-3\det(16),$$ the cofactors are $\det(5)$ and $-\det(16)$. The coefficients $12$ and $3$ are not part of the cofactors. There's a detailed discussion here.
Given a Brownian motion $W$, and $k \in (a,b)$, I'm trying to find the distribution of $W(k)$ in terms of $W(b)$, $W(a)$, and $k$
To see what's happening, set $X = \frac{W(b) + W(a)}{2}$ and $Y = W(k) -X$. I agree that your computation shows $Y \sim \frac{1}{2} N(0,b-a)$, regardless of the value of $k$. And the distribution of $X$ certainly doesn't depend on $k$. But this does not imply that the distribution of $W(k) = X+Y$ is the same for all $k$, because the covariance of $X$ and $Y$ does depend on $k$. (I will leave you to compute it.) Put another way, to conclude that the distribution of $W(k)$ was the same for all $k$, you'd need to show that the joint distribution of $(X,Y)$ was the same for all $k$. But you've only shown that the marginal distributions are the same for all $k$.
Minimize $\lceil k / j \rceil (j+1) + 2^j$ as a function of $j$, where $1 \le j \le k$.
An investigation of the desmos.com graph https://www.desmos.com/calculator/x66xvqgff3 indicates that for each value of $k$ the minimum value of \begin{equation} P(j,k)=\left\lceil\dfrac{k}{j} \right\rceil(j+1)+2^j \end{equation} occurs when $j=1.$ So \begin{equation} \min\left\{\left\lceil\dfrac{k}{j} \right\rceil(j+1)+2^j{\large\vert}1\le j\le k,\,\,j,k\in\mathbb{Z}\right\}=2k+2 \end{equation} For any fixed value of $j\,$, $P$ is a non-decreasing function of $k$ with minimum value when $k=j$. And for each increase in the value of $j$, the minimum value of $P$ at $k=j$ increases. Thus for any given value of $k$, the minimum value of $P$ occurs when $j=1$.
How to determine if a quintic polynomial is solvable by radicals
The only general method for determining the number of real roots of a polynomial without solving the equation is Sturm's theorem. Its application is rather laborious, since it requires computing a sequence of polynomials and evaluating a table of their sign changes in the interval in which we search for the roots. There is also a variant of the theorem (proved by J.M. Thomas) that allows one to determine the multiplicity of the roots. You can find some examples of the use of this method in: B.E. Meserve, Fundamental Concepts of Algebra, pag. 160...
morphism between sheaves that is an isomorphism in local sections of a basis
Your assumptions immediately imply that your morphism is an isomorphism on each stalk, since any point lies in some basic open set and passing to a stalk is functorial. So the answer is yes.
Show that $T\mid (X_1=0) \sim \text{Binomial}((n-1)m,p)$
I think you mean this: $X_1, \ldots, X_n$ is a random sample of size $n$ from the binomial distribution with parameters $m$ and $p$, $T = \sum_{i=1}^n X_i$, and you want to know the conditional distribution of $T$ given $X_1 = 0$. Given $X_1 = 0$, $T = \sum_{i=2}^n X_i$, the sum of $n-1$ independent random variables, each with this same binomial($m, p$) distribution. Such a sum has binomial distribution with parameters $m(n-1)$ and $p$.
Use of arithmetic progression to find the sum of numbers
For guidance, I'll work a general problem: How can I find the sum of numbers divisible by 17 between 432 and 6789? I need to find the first element, the last element and the number of elements. The first element $a_S =17 \left\lceil \frac{432}{17} \right\rceil = 17\times 26 = 442$ The last element $a_E = 17 \left\lfloor \frac{6789}{17} \right\rfloor = 17\times 399 = 6783$ Number of elements $N= 399-26+1 = 374$ Sum: $$S=\frac{N(a_S+a_E)}{2}=\frac{374(442+6783)}{2} = 1351075$$ Note: $\lceil x\rceil$ means round $x$ up to the nearest integer, and $\lfloor x\rfloor$ means round down.
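A brute-force check of the worked example in Python:

    # Multiples of 17 between 432 and 6789, checked directly.
    multiples = [m for m in range(432, 6790) if m % 17 == 0]
    print(multiples[0], multiples[-1], len(multiples), sum(multiples))
    # 442 6783 374 1351075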
Showing that a sequence converges, in distribution, to a normal r.v.
The Lindeberg CLT can be applied to this sequence of random variables. Hint: figure out a convenient lower bound for the cumulative variance $s_n^2=\sum_{k=1}^n \sigma_k^2$
How can I find the coordinates of a point which is the reflection of a point about a line in 3D
Assume a point on the line in parametric form and use the fact that the line from it to $P$ is perpendicular to the given line. Remember the formula for perpendicular direction ratios? Then apply the mid-point theorem. What is the mid-point of what here?
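A sketch of these hints in Python (names and example values are mine): find the foot $M$ of the perpendicular from $P$ to the line, then the mid-point relation $M=(P+P')/2$ gives the reflection $P'$.

    import numpy as np

    def reflect_about_line(P, A, d):
        # Reflect point P about the line through A with direction d (3D).
        P, A, d = map(np.asarray, (P, A, d))
        t = np.dot(P - A, d) / np.dot(d, d)  # makes P - (A + t d) perpendicular to d
        M = A + t * d                        # foot of the perpendicular
        return 2 * M - P                     # mid-point theorem: M = (P + P')/2

    # Example: reflecting (1, 0, 0) about the z-axis gives (-1, 0, 0).
    print(reflect_about_line((1, 0, 0), (0, 0, 0), (0, 0, 1)))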
Properties of triangles with integer sides and area
Must all of the terms be even? Yes, as Arthur suggested, Heron's formula states that if the sides of the triangle are $a$, $b$, and $c$, with semiperimeter $s$, then the area is $$A = \sqrt{s(s-a)(s-b)(s-c)}.$$ If the semiperimeter were not an integer, then $A$ would not be rational; thus the semiperimeter is an integer and the perimeter is even. Are there infinitely many even numbers that do not appear in the sequence? Yes! For example, $2p$ does not appear in the sequence for any prime $p$. By Heron's formula, in order for the area $A$ to be an integer, $s(s-a)(s-b)(s-c)$ must be a perfect square. With $s=p$ prime, this forces $p$ to divide one of the other factors, say $s-a$; since $0 \le s-a < p$, we get $s-a=0$, i.e. $a = p = s$, so $b+c=a$. However, this violates the triangle inequality of Euclidean geometry. In any case, I created OEIS sequence A305703, which lists the even numbers that do not appear in the sequence. Are there infinitely many terms $a(n)=a(n−1)+6?$ I don't know. But Zhang's proof of bounded gaps between primes implies that $a(n)=a(n-1) + k$ for some $k$. How many such triangles exist for a given n? (e.g. There are at least 4 non-congruent triangles with a perimeter of a(6)=36 and integer area.) I don't know, but I've added this to the OEIS as A305717. It looks like this may not have a lot of structure. What is the asymptotic growth of such multiplicity? This is a good question for the aforementioned OEIS sequence. What is the ratio of right vs isosceles vs scalene triangles with perimeter less than N? Again, I don't know, but this could also spin off into a few good OEIS sequences. Can all triangles with integer perimeter and area be described with integer coordinates? (e.g. the 7-15-20 triangle can be described with (0,0),(12,16),(0,7).) Yes! Paul Yiu had a nice article in the March 2001 edition of The American Mathematical Monthly called Heronian Triangles Are Lattice Triangles with a neat proof.
Why is finite-ness so important in proofs?
The more explicit the proof, the clearer the image we have of the objects involved in it. For example, if you want to prove that a function $T\colon X\to Y$ extends to a function from some larger $X'$ with certain properties, having a complete description of the extension is better than having used an abstract theorem to prove the existence of the extension. Finite sets are great this way, because of their finiteness they are easy to deal with by recursion: when you remove an element of a finite set, it becomes strictly smaller. Not so for infinite sets in general. Proofs are also finite, which makes sense, since proofs are something we want to think of as objects we could possibly write on a piece of paper. When we have to make finitely many steps, even if we are using somehow abstract logical rules, we can still think of them as being somewhat descriptive. For example, suppose that $X_1,\dots,X_n$ are non-empty sets. Then there is a function $f$ such that $f(i)\in X_i$. The easy way to prove this is to repeat the existential instantiation rule: $X_1$ is not empty, i.e. $\exists x(x\in X_1)$, so we can add a new symbol $x_1$ and declare $x_1\in X_1$. Rinse and repeat until $x_n$ is given, and define $f(i)=x_i$. If the family of sets is infinite we can no longer do this. In that case we need to use the axiom of choice, which simply asserts the existence of $f$. But we have no idea what this $f$ might be. And while the existential instantiation is somehow abstract and mysterious, we can at least grasp it conceptually as an iterative process, compared to the use of the axiom of choice which just instantly creates this function.
Proof in the sequent calculus
I'll formalize Apostolos' argument with Natural Deduction in "sequent-calculus-style" [see Dirk van Dalen, Logic and Structure, (5th ed - 2013), pages 35, 88 and 91 for the rules]. These are the rules for quantifiers : $$(\forall I) \, \, {\Gamma \vdash \varphi(x) \over \Gamma \vdash \forall x \varphi(x)}\ x \notin FV(\psi) \, \text{for all} \,\, \psi \in \Gamma$$ $$(\forall E) \, \, {\Gamma \vdash \forall x \varphi(x) \over \Gamma \vdash \varphi(t)}\ t \, \text{free for} \, x \, \text{in} \,\, \varphi$$ $$(\exists I) \, \, {\Gamma \vdash \varphi(t) \over \Gamma \vdash \exists x \varphi(x)}\ t \, \text{free for} \, x \, \text{in} \,\, \varphi$$ $$(\exists E) \, \, {\Gamma, \varphi(x) \vdash \psi \over \Gamma, \exists x \varphi(x) \vdash \psi}\ x \notin FV(\psi) \, \text{and} \, x \notin FV(\gamma) \,\, \text{for all} \,\, \gamma \in \Gamma$$ Proof From the rule of assumption [i.e. $\Gamma \vdash \varphi$, if $\varphi \in \Gamma$] applied to axiom (c) we derive, following Apostolos' proof, by multiple applications of ($\forall$ E) : (1) $\Gamma \vdash ∃w(x≤w∧y≤w)$ (2) $\Gamma \vdash ∃u(y≤u∧z≤u)$ (3) $\Gamma \vdash ∃a(u≤a∧w≤a)$ (4) $\Gamma \vdash x \le w$ --- from (1) by ($\land$ E) and ($\exists$ E) Note : By ($\land$ E) we have : $$\Gamma \vdash x≤w∧y≤w \over \Gamma \vdash x≤w$$ Applying "weakening" [i.e.if $\Gamma \vdash \varphi$, then $\Gamma \cup \Delta \vdash \varphi$], we get : $$\Gamma \vdash x≤w∧y≤w \over \Gamma, x≤w∧y≤w \vdash x≤w$$ Thus, by ($\exists$ E) we have : $$\Gamma \vdash x≤w∧y≤w \over \Gamma, ∃w(x≤w∧y≤w) \vdash x≤w$$ But by (1) : $\Gamma \vdash ∃w(x≤w∧y≤w)$, we conclude (4) : $\Gamma \vdash x≤w$. end note. In the same way, we have : (5) $\Gamma \vdash w \le a$ --- from (3) Thus : (6) $\Gamma \vdash x \le w \land w \le a$ --- from (4) and (5) by ($\land$ I). (7) $\Gamma \vdash x \le a$ --- from axiom (b) with multiple applications of ($\forall$ E) and (6) and ($\rightarrow$ E). Repeating the "procedure" (4)-(7) with $y \le w$ (from (1)) and again with $z \le u$ (from (2)) we get : (8) $\Gamma \vdash y \le a$ (9) $\Gamma \vdash z \le a$. Now we apply ($\land$ I) twice with (7), (8) and (9) to get : (10) $\Gamma \vdash (x \le a \land y \le a) \land z \le a$. Thus by ($\exists$ I) : (11) $\Gamma \vdash \exists w[(x \le w \land y \le w) \land z \le w]$ and finally, by multiple applications of ($\forall$ I) : (12) $\Gamma \vdash \forall x \forall y \forall z \exists w[(x \le w \land y \le w) \land z \le w]$.
Show that if 14 numbers are taken from 1 to 25, at least one of them is a multiple of another
Label $13$ pigeonholes with the odd numbers $1,3,5,...,25$. In the pigeonhole labelled $2i+1$, put the numbers $(2i+1)\times 2^j \;\; (j=0,1,2,...)$. Then: each positive integer $\le 25$ occurs in exactly one pigeonhole; if two numbers are in the same pigeonhole, then one is a multiple of the other. Which is all we need.
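The pigeonholes are easy to list explicitly, e.g. in Python:

    # Pigeonhole a number by its odd part: m = (odd part) * 2^j.
    def odd_part(m):
        while m % 2 == 0:
            m //= 2
        return m

    holes = {}
    for m in range(1, 26):
        holes.setdefault(odd_part(m), []).append(m)

    print(len(holes))  # 13 holes, so any 14 numbers put two in one hole
    for label, members in sorted(holes.items()):
        print(label, members)  # within a hole, each entry divides the larger ones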
Proof of an inequality by induction: $\sum_{k=1}^n \frac{1}{(k+1)\sqrt k}<2$
Note that $$\frac1{\sqrt{k}}-\frac1{\sqrt{k+1}}=\frac1{\sqrt{k}\sqrt{k+1}(\sqrt{k}+\sqrt{k+1})}\ge\frac1{2(k+1)\sqrt{k}}.$$ Summing over $k$ and telescoping, $$\sum_{k=1}^n \frac1{(k+1)\sqrt k}\le 2\sum_{k=1}^n\left(\frac1{\sqrt k}-\frac1{\sqrt{k+1}}\right)=2\left(1-\frac1{\sqrt{n+1}}\right)<2.$$
The $\sigma$-algebra generated by $f^{-1}\mathcal{Z}$, where $\mathcal{Z}$ generates $\mathcal{A}$
Yes you are on the right track. So you have $\mathcal{A}\subset \{F\subset Y:f^{-1}F\in\mathcal{A}'\}$. This means that $f^{-1}\mathcal A\subset\mathcal A'$. Conversely, since $\mathcal Z\subset\mathcal A$, we clearly have $f^{-1}\mathcal Z\subset f^{-1}\mathcal A$. Since $f^{-1}\mathcal A$ is a sigma algebra and $\mathcal A'$ is the smallest sigma algebra which contains $f^{-1}\mathcal Z$, we have $\mathcal A'\subset f^{-1}\mathcal A$. You just showed that $f^{-1}\mathcal A\subset\mathcal A'\subset f^{-1}\mathcal A$. Therefore, $f^{-1}\mathcal A=\mathcal A'$. The statement is true. PS : There is no need to say "By contradiction" in the beginning since the proof is "direct".
How to become really interested in math?
My impression is that people who are interested in solving puzzles and games also tend to be interested in math, if it is ever taught to them in a way that appeals to their creativity. It seems to me there is a distinct possibility that you could potentially already be interested in math, but you are feeling discouraged because your environment leads you to compare yourself to people who are more skilled at it than you are (at least for now). This would be emotionally difficult not just for you, but perhaps for most people. If you can work on solving easier problems than Problems 3 or 6 from the IMO, but which are still difficult enough to require creativity, there may be enough enjoyment in it to motivate you to study and improve. But constantly feeling that you are in competition with people like the ones you mentioned won't help. Remember that, although it is true that a good number of successful mathematicians participated in contest math in their youth, a similar proportion never did. So withdrawing from math competitions and focusing on learning math in a way that avoids forcing you to compare yourself constantly with others could be a way out of this conundrum, and won't prevent you from anything you might want to do later. Nonetheless, I would leave open the possibility that you aren't terribly interested in math - that's quite all right. But if you find that you are very interested in a field that relies heavily on math, like physics or computer science, it's true that you will need to find the motivation, somehow, to become competent in math. I would say, in that case, in addition to taking steps to make math feel less competitive for you, you could read a book like What Is Mathematics? by Courant and Robbins, to get a taste of various areas of math, presented in an accessible way. Alternatively, you could start studying math on your own in university-level textbooks (with no focus on high school contest material).
Inverse of $\sum^n_{i=1} 1/x_i$ convex?
If I've not made any mistakes in my calculations: if $n \geq 2$, then $$ \partial_1 f(x) = -\frac{1}{\left( \sum_{i=1}^n x_i^{-1} \right)^2} \cdot \left(- \frac{1}{x_1^2} \right) = \frac{1}{ \left(1 + \sum_{i \neq 1} \tfrac{x_1}{x_i} \right)^2 }, \\ \partial_1^2 f(x) = - \frac{2}{\left(1 + \sum_{i \neq 1} \tfrac{x_1}{x_i} \right)^3} \cdot \left( \sum_{i \neq 1} x_i^{-1} \right) < 0 \text{ on } \mathbb{R}_{++}^n $$ so all Hessians will have a negative entry in the upper left corner, and hence they can't be positive (semi-)definite. Hence $f$ is not convex for $n \geq 2$. EDIT: It becomes even less mystical when considering restrictions of the function at specific values. Let $n = 2$ and $y = 1$, then $f(x,1) = \left(\tfrac{1}{x} +1 \right)^{-1}$ is easily plotted to not be convex.
Simple first-order linear differential equation
The general form of a first order linear D.E. is $$y'+P(x)y=Q(x)$$ so that $P(x)=-\frac{4}{3}$ and $Q(x)=-4$ $$\rho=e^{\int P(x)dx}=e^{-\frac{4}{3}x}$$ $$\rho\, y=\int \rho Q(x)dx=\int e^{-\frac{4}{3}x}(-4)dx=3e^{-\frac{4}{3}x}+C$$ hence $$y=3+Ce^{\frac{4}{3}x}$$
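As a check, SymPy reproduces this general solution:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # y' - (4/3) y = -4, i.e. P(x) = -4/3 and Q(x) = -4 as above
    ode = sp.Eq(y(x).diff(x) - sp.Rational(4, 3) * y(x), -4)
    print(sp.dsolve(ode))  # Eq(y(x), C1*exp(4*x/3) + 3)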
How to prove given relation on set is partial order?
The poset properties are inherited directly from $A$. What's more difficult is that every chain in $C_a$ has an upper bound in $C_a$. It has an upper bound in $A$, since $A$ has that property, but that upper bound needs to be in $C_a$.
Determine the adjoint of a linear operator
If $f\in D(A^*)$, then the form $g\longmapsto \int_0^1 fg''$ is bounded, so by the Riesz Representation Theorem there exists $f''\in L^2$ such that $$ \int_0^1 fg''=\int_0^1f''g. $$ So $f$ has a second weak derivative. And conversely, if $f$ has a second weak derivative, then $f\in D(A^*)$. So $$ \langle A^*f,g\rangle=\langle f,Ag\rangle=\int_0^1 fg''=\int_0^1f''g=\langle f'',g\rangle. $$ Thus $A^*$ is the operator that maps $f\longmapsto f''$ (in the weak sense).
On a polynomial with roots in $[1,3]$ that are also of the form $2+\csc\theta$
Note that $|\csc\theta|\ge1$, and hence $2+\csc\theta$ must lie outside $(1,3)$. Yet two distinct $2+\csc\theta$ values are the roots of the given quadratic, which is given to have roots within $[1,3]$. Hence the roots are 1 and 3 and the quadratic is $$(x-1)(x-3)=x^2-4x+3=(x-2)^2-1$$ Thus $|b+c|=|-4+3|=1$ and $\min(x^2+bx+c)=-1$.
Find the number of zeros of $f(z) = 2 + z^4 + e^{iz}$
Let us choose the comparison function $g(z)=z^4+2$. Now we need to verify that on the boundary $|f(z)-g(z)|<|g(z)|$. This is true on the circular arc $r=2$, since there $|e^{iz}|<2$ while $|z^4+2|>14$. Now we have to show that the inequality is true on the real segment as well: there $|e^{iz}|=1$ whereas $|z^4+2| \ge 2$. So by Rouché's theorem $f$ has the same number of zeros as $z^4+2$, which is two.
How can I simplify this quadratic optimization?
Your constraints are that $-C \|x\|_1 \leq x \leq C \|x\|_1$ componentwise. You can transform the $\|x\|_1$ by introducing new variables $x_+$ and $x_-$, both $\geq 0$ and splitting $x = x_+ - x_-$. With these new variables, $\|x\|_1 = \sum_{b \in B} (x_+^b + x_-^b)$. So your constraints become $$ -C \sum_{b \in B} (x_+^b + x_-^b) \leq x \leq C \sum_{b \in B} (x_+^b + x_-^b), $$ $$ x = x_+ - x_-, \quad (x_+, x_-) \geq 0. $$ In order to force one of $x_-$ or $x_+$ to be zero, you could add the complementarity constraint $$ x_- \cdot x_+ = 0 \quad \text{(or $\leq 0$)}. $$ The result is a quadratic program with complementarity constraints (QPCC or QPEC). There are various approaches to solve it, including certain interior-point methods. You need a specialized method to solve the problem in this form because it is degenerate, i.e., it does not satisfy the Mangasarian-Fromowitz constraint qualification condition.
How do I obtain the Jacobi theta function?
I see it as an exercise on Fourier series. $f(x)=\sum_{n=-\infty}^{\infty}e^{-(n+x)^2}$ is even (consider $x\mapsto-x$ together with $n\mapsto-n$), periodic (of period $1$), and smooth, so it has a Fourier expansion: \begin{align}f(x)&=\frac{a_0}{2}+\sum_{k=1}^{\infty}a_k\cos 2\pi kx,\\a_k&=2\int_0^1 f(x)\cos 2\pi kx~dx=2\sum_{n=-\infty}^\infty\int_0^1 e^{-(n+x)^2}\cos 2\pi kx~dx\\\color{gray}{[n+x=y]}&=2\sum_{n=-\infty}^\infty\int_n^{n+1}e^{-y^2}\cos 2\pi ky~dy=2\int_{-\infty}^\infty e^{-y^2}\cos 2\pi ky~dy,\end{align} with a well-known integral remaining (evaluated using complex integration, or power series for $\cos$, or even Feynman's trick).
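Using the value $\int_{-\infty}^\infty e^{-y^2}\cos 2\pi ky\,dy=\sqrt\pi\,e^{-\pi^2k^2}$ of that remaining integral, a short numerical check in Python shows the two sides agree:

    import math

    def f(x, N=50):
        # direct evaluation of the periodized Gaussian
        return sum(math.exp(-(n + x) ** 2) for n in range(-N, N + 1))

    def fourier(x, K=5):
        # a_k = 2 sqrt(pi) exp(-pi^2 k^2); a_0/2 = sqrt(pi)
        s = math.sqrt(math.pi)
        for k in range(1, K + 1):
            s += (2 * math.sqrt(math.pi) * math.exp(-(math.pi * k) ** 2)
                  * math.cos(2 * math.pi * k * x))
        return s

    for x in (0.0, 0.25, 0.7):
        print(f(x) - fourier(x))  # ~0 to machine precision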
Write and solve an inequality
Denote by $c(n)$ the number of pieces of candy Ryan has at the end of day $n$. Define, according to the given conditions, $$ c(0)=70, \quad c(n) = 70 - 4n. $$ It turns out that at the end of day 12 Ryan has 22 pieces of candy left; at the end of day 13 Ryan has only 18 pieces of candy left. We can find this by solving these simultaneous inequalities: $$ 70-4d\ge20, $$ $$ 70-4(d+1)<20. $$ Here day $d$ is the last day when Ryan has at least 20 pieces of candy left. From these inequalities we indeed find $$d=12.$$ Day 12 is the last day when Ryan has at least 20 pieces of candy (at the end of the day).
Least-squares problem with quadratic equality constraint
We have a homogeneous linear system in $x \in \mathbb R^n$ $$A x = 0_m$$ where $A \in \mathbb R^{m \times n}$. We also have constraints on the last $n_2$ entries of $x$. Hence, we write $$\begin{bmatrix} A_1 & A_2\end{bmatrix} \begin{bmatrix} x_1\\ x_2\end{bmatrix} = 0_m$$ where $A_1 \in \mathbb R^{m \times n_1}$ and $A_2 \in \mathbb R^{m \times n_2}$. We would like to minimize $\|Ax\|_2$ subject to the equality constraint $\|x_2\|_2 = 1$. Hence, we have the following quadratically constrained quadratic program (QCQP) $$\begin{array}{ll} \text{minimize} & \|A_1 x_1 + A_2 x_2\|_2^2\\ \text{subject to} & \|x_2\|_2^2 = 1\end{array}$$ Let the Lagrangian be $$\mathcal{L} (x_1, x_2, \lambda) := \begin{bmatrix} x_1\\ x_2\end{bmatrix}^T \begin{bmatrix} A_1^T A_1 & A_1^T A_2\\ A_2^T A_1 & A_2^T A_2\end{bmatrix} \begin{bmatrix} x_1\\ x_2\end{bmatrix} - \lambda (x_2^T x_2 - 1)$$ Taking the partial derivatives and finding where they vanish, we obtain three equations $$A_1^T A_1 x_1 + A_1^T A_2 x_2 = 0_{n_1} \qquad \qquad A_2^T A_1 x_1 + A_2^T A_2 x_2 = \lambda x_2 \qquad \qquad x_2^T x_2 = 1$$ Assuming that $A_1$ has full column rank, then $A_1^T A_1$ is invertible. From the 1st equation, we have $$x_1 = -(A_1^T A_1)^{-1} A_1^T A_2 x_2$$ and $$A_1 x_1 = - \underbrace{A_1 (A_1^T A_1)^{-1} A_1^T}_{=:P_1} A_2 x_2 = -P_1 A_2 x_2$$ where $P_1$ is the projection matrix that projects onto the column space of $A_1$. Note that $I_m - P_1$ is the projection matrix that projects onto the left null space of $A_1$, which is orthogonal to the column space of $A_1$. As $I_m - P_1$ is a projection matrix, $(I_m - P_1)^2 = I_m - P_1$, which we will use. Hence, $$\begin{array}{rl} \|A_1 x_1 + A_2 x_2\|_2^2 &= \|(I_m - P_1) A_2 x_2\|_2^2\\\\ &\geq \sigma_{\min}^2 ((I_m - P_1) A_2) \|x_2\|_2^2\\\\ &= \sigma_{\min}^2 ((I_m - P_1) A_2)\\\\ &= \lambda_{\min} (A_2^T (I_m - P_1)^2 A_2)\\\\ &= \lambda_{\min} (A_2^T (I_m - P_1) A_2)\end{array}$$ where we used the equality constraint $\|x_2\|_2^2 = 1$. Thus, we conclude that the minimum is attained at the intersection of the eigenspace of the minimum eigenvalue of $A_2^T (I_m - P_1) A_2$ with the unit Euclidean sphere in $\mathbb R^{n_2}$. $x_2$ is a normalized eigenvector of $A_2^T (I_m - P_1) A_2$. It is also a (normalized) right singular vector of $(I_m - P_1) A_2$. Once we have $x_2$, we obtain $x_1$ via $x_1 = -(A_1^T A_1)^{-1} A_1^T A_2 x_2$. We can arrive at the same conclusion using a different approach. Using $A_1 x_1 = - P_1 A_2 x_2$, the 2nd equation produced by the vanishing gradient of the Lagrangian then becomes $$A_2^T (I_m - P_1) A_2 x_2 = \lambda x_2$$ Thus, $(\lambda, x_2)$ is an eigenpair of $A_2^T (I_m - P_1) A_2$. Which eigenpair? The eigenpair corresponding to the minimum eigenvalue of $A_2^T (I_m - P_1) A_2$, and in which the eigenvector has unit $2$-norm. In MATLAB, use function inv to invert matrices and function eig to find eigenvalues and eigenvectors. If $A_1$ does not have full column rank, then use function pinv to compute the pseudoinverse of $A_1^T A_1$. Lastly, there are several errors in Huang & Barth [PDF]. For example, in equation (7), the Lagrangian function is obviously incorrectly typed, as the $2$-norms are missing.
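A NumPy sketch of this recipe (assuming, as above, that $A_1$ has full column rank; swap in the pseudoinverse otherwise):

    import numpy as np

    def constrained_ls(A1, A2):
        # minimize ||A1 x1 + A2 x2||_2 subject to ||x2||_2 = 1
        m = A1.shape[0]
        G = np.linalg.inv(A1.T @ A1)
        P1 = A1 @ G @ A1.T                    # projector onto col(A1)
        M = A2.T @ (np.eye(m) - P1) @ A2
        w, V = np.linalg.eigh(M)              # eigenvalues in ascending order
        x2 = V[:, 0]                          # unit eigenvector for lambda_min
        x1 = -G @ A1.T @ A2 @ x2
        return x1, x2

    rng = np.random.default_rng(0)
    A1, A2 = rng.normal(size=(20, 5)), rng.normal(size=(20, 3))
    x1, x2 = constrained_ls(A1, A2)
    print(np.linalg.norm(x2))                 # 1.0
    print(np.linalg.norm(A1 @ x1 + A2 @ x2))  # the attained minimum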
Can more than one hamiltonian graph have the same set of hamiltonian paths?
Here’s a trivial counterexample. Neither of these non-isomorphic graphs has a Hamilton path: o---o---o o | | o o---o---o | | o o Any bijection between the vertices, however, matches up their (empty) sets of Hamilton paths.
Converting image plane coordinates to world coordinates such that they lie in a given plane
Turns out the math is sound. The problem was due to image dimension conventions across python packages i.e. I had one library which mapped (y,x) to (width, height) and another which mapped (x,y) to (width, height). Correcting for this removed the offset.
Why is $f(x) \equiv 0 \;\bmod \; p$ for all $p \in \mathbb{P}$?
$x^2+x\equiv0\pmod p\implies p\mid x(x+1)$; but since $p$ is a prime, $p\mid x$ or $p\mid(x+1)$, giving two solutions: $x\equiv0\pmod p$ or $x\equiv-1\pmod p$.
Partial derivative problem on absolute value function
When $2x^2-y>0$, $f(x,y)=2x^2-y$, and $$f'_x=4x, f'_y=-1, f''_{xx}=4,f''_{xy}=f''_{yx}=0,f''_{yy}=0.$$ When $2x^2-y<0$, $f(x,y)=-2x^2+y$, and $$f'_x=-4x, f'_y=1, f''_{xx}=-4,f''_{xy}=f''_{yx}=0,f''_{yy}=0.$$ When $2x^2-y=0$, $f'_x$ does not exist except at $x=0$ where $f'_x=0$, $f'_y$ does not exist, $f''_{xx}$ does not exist, $f'_{xy}=0$, $f''_{yx}$ does not exist, $f''_{yy}$ does not exist.
Is for a curve existence of partial derivatives equivalent to differentiability?
Your proof is correct. However, you should note that the trick fails when the domain has more than one dimension, because you cannot move the denominator inside the limit by dividing: it would be a vector, not a scalar.
Calculating syzygies with Macaulay2
I think the problem here is understanding Macaulay2's notation. Let's take an example:

    i1 : R = QQ[x,y,z]

    o1 = R

    o1 : PolynomialRing

    i7 : I = ideal(random(2,R), random(3,R), random(2,R));

    o7 : Ideal of R

    i8 : betti res I

                0 1 2 3
    o8 = total: 1 3 3 1
             0: 1 . . .
             1: . 2 . .
             2: . 1 1 .
             3: . . 2 .
             4: . . . 1

Here the last table gives us all the numerical information about the resolution of some ideal in $\mathbb Q[x,y,z]$. The way to read this is as follows: in the first syzygy there are two generators of degree $2$ and one of degree $3=2+1$. In the second syzygy there is one generator of degree $4=2+2$ and two of degree $5=3+2$. In general, the number of generators of degree $i$ in the $j$th syzygy is found by reading the entry in column $j$, row $i-j$ of the Betti table.
Is differentiation of zero, zero?
$$f'(x)=\lim_{h\to 0}\dfrac{f(x+h)-f(x)}h=\lim_{h\to0}\dfrac0h=0.$$ Geometrically speaking, the graph of $f\colon x\mapsto0$ is a horizontal line, so its slope at each point is zero, hence its derivative is equal to zero everywhere. From another perspective, $f\colon x\mapsto0$ is a constant function, it doesn't vary, so its rate of change is zero.
Arithmetic Series versus Arithmetic Progression
The original AIME version of this question, which can be found at https://gogangsa.com/338, says arithmetic sequence, not arithmetic series. It's not clear how or why the AoPS version changed it, but people do slip up from time to time. The OP is quite correct to question the interpretation of the wording in the version they came across.
Why don't we expand the whole equation using polynomial interpolation?
Because it makes it very easy to check that the proposed expression indeed solves the interpolation problem: every term is zero at all but one interpolation point.
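A tiny Python illustration with made-up nodes: each Lagrange basis term is $1$ at its own node and $0$ at the others, so the check is immediate.

    from functools import reduce

    xs = [1.0, 2.0, 4.0]   # hypothetical interpolation points
    ys = [3.0, -1.0, 5.0]  # hypothetical values

    def basis(j, x):
        # ell_j(x) = prod over m != j of (x - x_m) / (x_j - x_m)
        return reduce(lambda a, b: a * b,
                      ((x - xs[m]) / (xs[j] - xs[m])
                       for m in range(len(xs)) if m != j))

    for j in range(len(xs)):
        print([round(basis(j, xi), 12) for xi in xs])  # rows of the identity

    p = lambda x: sum(ys[j] * basis(j, x) for j in range(len(xs)))
    print([p(xi) for xi in xs])  # [3.0, -1.0, 5.0]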
checking if a function is injective without the so called 'horizontal line'
A continuous function $f:[a,b]\rightarrow\mathbb{R}$ is injective on $[a,b]$ iff $f$ is strictly monotone on $[a,b]$. Also, a function $f:[a,b]\rightarrow I\subseteq\mathbb{R}$ is surjective iff $Im(f)=I$ (the image matches the codomain).
A question on metamathematics of forcing.
No, $\text{ZFC}^+$ does not prove $\text{Con}(\text{ZFC})$. It does not prove that $M$ is a model of ZFC. For each individual axiom $\varphi$ of ZFC, it proves that $M\vDash\varphi$. But it does not prove the single sentence $\forall\varphi\in\mathrm{ZFC} (M\vDash\varphi)$ which is what it would mean to prove that $M$ is a model of ZFC. (Note that by the completeness theorem, this means it is possible to have a model of $\text{ZFC}^+$ in which the sentence $\forall\varphi\in\mathrm{ZFC} (M\vDash\varphi)$ is false. How can this be? The reason is that in this sentence, $\mathrm{ZFC}$ refers to the set in your model which encodes the set of axioms of ZFC. If the natural numbers of your model are nonstandard, this set will have elements that do not actually correspond to true axioms of ZFC (since their length is actually a nonstandard natural number). So, even though $M$ satisfies every actual axiom of ZFC, your model may not think that $M$ satisfies every element of the set it calls ZFC.)
Combinatorial Inequality
(combinatorial proof of RHS) Multiply the RHS by $\frac{n^n}{n^n}$: $$ \frac{(2n)!} { n! n!} \leq \frac{2^n n^n} { n!}.$$ The LHS is the number of ways of picking $n$ objects out of $2n$ objects. For the RHS, note that $2^n n^n = (2n)^n$ counts the strings of $n$ objects, each picked out of the $2n$ objects, possibly with repetition. Each way of picking $n$ objects out of $2n$ objects corresponds to the $n!$ repetition-free strings obtained by ordering those objects, so $n!\binom{2n}{n} \leq (2n)^n$ and hence $LHS \leq RHS$. Since for $n\geq 2$ some strings repeat an object, the inequality is then strict: $LHS < RHS$. (algebraic proof of RHS) Multiplying by $\frac{n^n}{n^n}$, we want to show that $$ \frac{(2n)!} { n! n!} \leq \frac{2^n n^n} { n!}.$$ This is obvious, since $$ \frac{ (2n) ! } { n!} = \prod_{i=1}^n (n+i) \leq (2n)^n = 2^nn^n.$$ For $n\geq 2$ the factors $n+i$ with $i<n$ are strictly less than $2n$, so the inequality is strict: $LHS < RHS$.
Cardinality of the set of clopen subsets of a topological space
I don’t have an answer to the general question, but I can answer it for the specific spaces mentioned. The clopen algebra of the Cantor space is the free Boolean algebra on $\omega$ generators, which has cardinality $\omega$. For $n\in\omega$ let $B_n=\{n\}\times\omega^\omega$. Then $\bigcup_{n\in A}B_n$ is a clopen subset of $\omega^\omega$ for each $A\subseteq\omega$, so the Baire space has at least $2^\omega$ clopen sets. On the other hand, $\omega^\omega$ is second countable, so it has only $2^\omega$ open sets and therefore precisely $2^\omega$ clopen sets.
Probability proof of a function with independent uniformly distributed variables
Your third calculation $$P\left[\{\left( Y_0 \lor Y_1 \right)\oplus \left( Y_0 \land Y_3 \right) \}=0\right]=\frac{1}{4}\cdot\frac{3}{4}+\frac{3}{4}\cdot\frac{1}{4}=\frac{3}{8}$$ is incorrect. It is true that $$P(A\oplus B=0)=P(A=0)P(B=0) + P(A=1)P(B=1)$$ when $A$ and $B$ are independent, but here $Y_0\lor Y_1$ is not independent of $Y_0\land Y_3$, since $Y_0$ is involved in both of these. (You have the same problem with your final line.) I get $$P\left[\{\left( Y_0 \lor Y_1 \right)\oplus \left( Y_0 \land Y_3 \right) \}=0\right]=\frac12$$ by tediously counting the possibilities. This may be the most direct way to work out the final probability (which I find is $1/2$, as claimed): evaluate the truth of the assertion on all 16 possibilities (aka "inspection").
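The tedious counting is quick to script; enumerating the $16$ equally likely values of $(Y_0,Y_1,Y_2,Y_3)$ in Python:

    from itertools import product

    favorable = sum(1 for y0, y1, y2, y3 in product((0, 1), repeat=4)
                    if ((y0 | y1) ^ (y0 & y3)) == 0)
    print(favorable / 16)  # 0.5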
Can you expand induction proofs to the real numbers?
To add on to Rob Arthan's answer: the induction principle does not work if you put the quantifiers differently, that is: for all $x$ there is some $\epsilon > 0$ such that if $S_x$ holds, then also $S_y$ for all $y\in \mathbb{R}$ with $\lvert x-y\rvert < \epsilon$. The problem is that now the $\epsilon$ you get might get smaller and smaller when you try to go away from $x_0$ and you don't get very far. A counterexample could be the statement $$S_x = x \in (-1,1),$$ which is certainly true for $x = 0$ and around each $x\in (-1,1)$ you can find a small interval that is still completely contained in $(-1,1)$.
Proof for $\sum_{r=1}^{n}r(r!)=\sum_{r=1}^{n}[(r+1)!-r!]=(n+1)!-1$
$$\sum_{k=n_0}^{N}{(a_{k+1}-a_k)}=a_{N+1}-a_{n_0}$$ $\underline{Derivation}$ One can prove this by induction. Here's another way. $$\sum_{k=n_0}^{N}{(a_{k+1}-a_k)}= \sum_{k=n_0}^{N}a_{k+1}-\sum_{k=n_0}^{N}{a_k}$$ $$=\sum_{k=n_{0}+1}^{N+1}a_k-\sum_{k=n_0+1}^{N}{a_k}-a_{n_0}$$ $$=a_{N+1}-a_{n_0}+\sum_{k=n_0+1}^{N}{a_k}-\sum_{k=n_0+1}^{N}{a_k}$$ $$=a_{N+1}-a_{n_0}$$ Applying this with $a_r = r!$ and $r(r!)=(r+1)!-r!$ gives $\sum_{r=1}^{n}r(r!)=(n+1)!-1!=(n+1)!-1$.
prove that a black-box multivariable problem is convex or concave
If your function is a black-box one, then by definition you have no access to its explicit form and so no way to assess its convexity or concavity. But since you mention interior-point methods (IPM), I wonder whether your function is really a black-box one: IPM are second-order methods, i.e. they require the Hessian. I guess some can use a Hessian approximation and even a gradient approximation, but then you lose all the advantages. Moreover, when you mention global optimality, well, this follows from convexity assumptions, not from IPM itself. In fact there are IPM designed for non-convex problems. For instance you can have a look at the Ipopt solver. So when you say people usually use IPM, I suspect that you have a convex problem in closed form that you can solve using one of the many available solvers, for instance MOSEK (ok, I am biased, it's the company I work for). Most of them provide trial or academic licenses. If you stick with black-box or derivative-free methods, I might suggest you search for NOMAD.
Is there any convention regarding the order of operation of the binary modulo operator?
Using $\bmod$ as binary operation is actually quite rare in mathematics, so authors who use it will usually not assume any knowledge about precedence rules on the part of readers; they will prefer to put in possibly redundant parentheses if necessary. Therefore it will be hard to deduce from usage any hard evidence about the assumed precedence rules. Nevertheless I think most would agree at least that $\bmod$ binds more strongly than addition or subtraction, but not more strongly than multiplication. This does not fix rules completely, but at least it implies that $a+b\bmod n$ reads $a+(b\bmod n)$ while $a\cdot b\bmod n$ reads $(a\cdot b)\bmod n$. [Though it does not play a very important role in the current discussion, one should note that what counts for precedence is the symbols used, not their meaning, as an expression must be read (parsed) before any meaning can be attached to it. So I should not have just said "multiplication", but rather either "operator $\cdot$" or "juxtaposition", and it would be conceivable to give those two a different precedence.] Of course programming languages must fix rules of precedence, and I'm pretty sure that most languages consider $\bmod$ to have the same precedence as division, which almost always is the same as for multiplication (at least when the latter is written using an operator symbol, not by juxtaposition), while associating left like most other operators. I know one book that spends some time on this operation, Concrete Mathematics, whose section 3.4 has title "Mod, the binary operation". This section gives some interesting evidence. It contains an equation $$ x = \lfloor x\rfloor + x \bmod 1 $$ for $x\in\Bbb R$, from which one can deduce that $\bmod$ binds stronger than addition. Indeed the sentence following the equation says Notice that parentheses aren't needed in this formula; we take $\bmod$ to bind more tightly than addition or subtraction. A bit later, equation $(3.23)$ reads $$ c(x\bmod y) = (cx)\bmod(cy) $$ from which as such nothing can be concluded, other than that $\bmod$ does not obviously bind stronger than multiplication written as juxtaposition (otherwise the LHS could be "$cx\bmod y$" instead) nor obviously weaker (otherwise the RHS could be "$cx\bmod cy$" instead). But in fact the authors write after this equation: (Those who like $\bmod$ to bind less tightly than multiplication may remove parentheses from the right side here, too.) Note that while programming languages usually have no precedence level between additive and multiplicative operations, mathematical notation does seem to have such a level; indeed it seems, as I argued in this answer, that large operators like $\sum$ and $\prod$ as well as things like $\lim$ all have this precedence level towards their right (being unary operators, they have no precedence at all to their left). It would then be perfectly acceptable to allow $\bmod$ to share this precedence level, and the authors of Concrete Mathematics are apparently willing to admit (but not impose) this point of view. If one should decide so, I think it would be logical to also accept division written with "$/$" at this level, so that $ab/cd$ can mean what at first glance it seems to mean.
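Python is a concrete instance of the programming-language convention: the operator % has the same precedence as * and /, associates left, and binds more tightly than +.

    a, b, n, c = 7, 10, 6, 3
    print(a + b % n)      # 11: parsed as a + (b % n)
    print((a + b) % n)    # 5
    print(c * b % n)      # 0: parsed as (c * b) % n
    print(c * (b % n))    # 12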
Vector Field vs. Gradient Field?
The "usual" result is that this is impossible: it's a direct consequence of the Hodge decomposition of vector fields, and can be derived by assuming that $$F = \nabla \phi = \nabla \times G$$ and noticing that $$\langle F, F\rangle = \langle \nabla \phi, \nabla \times G\rangle = -\langle \phi, \nabla \cdot (\nabla \times G)\rangle = \langle \phi, 0\rangle = 0,$$ so that $F=0$. Here we have used the fact that the gradient and divergence are adjoint operators, which can be proven using integration by parts: \begin{align*} \langle \nabla \phi, v\rangle &amp;= \int_{\mathbb{R}^3} \nabla \phi \cdot v\,dA \\ &amp;= \int_{\mathbb{R}^3} \left[\nabla \cdot (\phi v) - \phi \nabla \cdot v\right]\,dA\\ &amp;= - \langle\phi, \nabla \cdot v \rangle, \end{align*} where the first term of the second line vanishes by Stokes's theorem, provided that $\|p\|^2\phi(p) v(p)$ vanishes as $\|p\|\to \infty.$ However since you have not said anything about the behavior of $F$ as $\|p\|\to\infty$, the "usual" result does not necessarily apply. $$F = (0,0,1)$$ is both the gradient of $$\phi = z$$ and the curl of $$G = \frac{1}{2}(-y, x, 0).$$
Question about equivalent equations
To make things clear, let's denote $a = f(x), b = g(x), c = h(x)$; then $$a^{1/3}+b^{1/3} = c$$ $$(a^{1/3}+b^{1/3})^3 = a + 3a^{2/3}b^{1/3} + 3a^{1/3}b^{2/3} + b = a+3a^{1/3}b^{1/3}(a^{1/3} + b^{1/3}) + b =$$ $$= a+3a^{1/3}b^{1/3}c + b = a+3(ab)^{1/3}c + b =c^3.$$ Therefore $$(a+3(ab)^{1/3}c + b)^{1/3} = c = a^{1/3} + b^{1/3}$$ or $$ a+ b =c^3-3(ab)^{1/3}c = c(c^2-3(ab)^{1/3}).$$ Because you are mixing these equivalent forms of the same relation, you can get some messy-looking equations, but that does not mean they are wrong.
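A quick numerical spot-check of the key identity, here for a few arbitrary positive values (cube roots of negatives would need the real branch):

```python
# Verify (a^(1/3) + b^(1/3))^3 == a + b + 3*(a*b)^(1/3)*c
# with c = a^(1/3) + b^(1/3), for sample positive a and b.
for a, b in [(8.0, 27.0), (2.0, 5.0), (0.3, 11.0)]:
    c = a**(1/3) + b**(1/3)
    lhs = c**3
    rhs = a + b + 3*(a*b)**(1/3)*c
    assert abs(lhs - rhs) < 1e-9, (a, b)
print("identity checked")
```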
AM-GM-HM Relationship
Yes, given the AM, GM and HM there's no other way to get their relation, so that is the only possible equation relating the means. You are correct.
Question on geometric distribution
The value $0.01$ in the problem is probably a typo for $0.1$.
Riemann-Stieltjes integral with respect to a discontinuous function
Since $f$ is increasing, the lower and upper sums for the partition $\{0,(1- \epsilon),1\}$ are $$L(P,f,h) = f(0)(h(1-\epsilon) - h(0)) + f(1-\epsilon)(h(1) - h(1-\epsilon))\\=0 \cdot (0 - 0) + (1-\epsilon)^2(1 - 0)= (1-\epsilon)^2 $$ and $$U(P,f,h)= f(1-\epsilon)(h(1-\epsilon) - h(0)) + f(1)(h(1) - h(1-\epsilon)) \\ = (1-\epsilon)^2 (0 - 0) + (1)^2(1 - 0)= 1.$$ The integral must lie between the lower and upper sums, $$(1-\epsilon)^2 \leqslant \int_0^1 f \, dh \leqslant 1.$$ Since the limits as $\epsilon \to 0$ of the LHS and RHS are both $1$, it follows that the integral has the value $1$. Thus, $$\int_0^1f \, dg = \int_0^1 x^2\, d(3x) + \int_0^1 f \, dh = 3 \int_0^1 x^2 \, dx + 1 = 2.$$ In general, to compute the value of the integral with Riemann-Stieltjes sums, you should take the limit of a sequence of sums corresponding to partitions $P_n$ with $\|P_n\| \to 0 $ as $n \to \infty$. Taking $P_n = \{0,\frac{1}{n},\frac{2}{n}, \ldots, \frac{n-1}{n},1\}$, we have $$S(P_n,f,g)= \sum_{k=1}^{n-1} \left(\frac{k}{n} \right)^2\left(3\frac{k}{n} - 3\frac{k-1}{n} \right)+ (1)^2 \left(4 - 3\frac{n-1}{n} \right) \\ = \frac{3}{n^3}\sum_{k=1}^{n-1}k^2+ 1 + \frac{3}{n},$$ and we have $$\int_0^1 f \, dg = \lim_{n \to \infty}\left(\frac{3}{n^3}\sum_{k=1}^{n-1}k^2+ 1 + \frac{3}{n}\right) = 3\cdot \frac{1}{3} + 1 = 2.$$
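The last limit is easy to watch numerically; a minimal sketch (with $f(x)=x^2$ and $g(x)=3x$ for $x<1$, $g(1)=4$, as above):

```python
# Riemann-Stieltjes sums S(P_n, f, g) for f(x) = x^2 and
# g(x) = 3x on [0, 1), g(1) = 4 (a jump of size 1 at x = 1).
def g(x):
    return 3*x if x < 1 else 4.0

def f(x):
    return x*x

def rs_sum(n):
    # right-endpoint tags on the uniform partition {0, 1/n, ..., 1}
    return sum(f(k/n) * (g(k/n) - g((k-1)/n)) for k in range(1, n + 1))

for n in [10, 100, 1000, 10000]:
    print(n, rs_sum(n))   # tends to 2
```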
Power Tower modulo sequence
I attempted working this out for the given sequence by brute force, using the observation that solutions are only determined $\bmod 10^{n-1}$. I found that the sequence $$a_n=9,09,109,2109,02109,902109,8902109$$ solves increasingly longer initial segments of the pattern (that is, $8902109\uparrow\uparrow n\equiv b_n\pmod{10^{n-1}}$ for $n\le 7$ where $b_n$ is the sequence in the OP), but at this point the pattern ends; there are no solutions that extend $8902109$ and hence no solution exists, not even as a $10$-adic limit. I'm not very strong on the theory here, so if someone could verify that this is a valid method of proof I'd appreciate it.
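For reference, the tower residues can be computed with the usual Euler-totient lift, after which the digit-by-digit search is a routine loop. A minimal sketch (the list of target residues $b_n$ from the OP is not reproduced here; note the lift is exact only once the true exponent exceeds $\log_2 m$, which holds for the sizes involved in this search but not for tiny bases at small heights):

```python
from sympy import totient

def tet_mod(a, h, m):
    # a^^h mod m via the lift a^x = a^((x mod phi(m)) + phi(m)) (mod m),
    # exact whenever the true exponent is at least log2(m)
    if m == 1:
        return 0
    if h == 1:
        return a % m
    t = int(totient(m))
    return pow(a, tet_mod(a, h - 1, t) + t, m)

# e.g. checking the 7-digit candidate against residues b_2, ..., b_7
# (`targets` is the hypothetical list of the OP's residues, indexed by n):
# all(tet_mod(8902109, n, 10**(n-1)) == targets[n] % 10**(n-1)
#     for n in range(2, 8))
```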
Given dividend and divisor, can we know the length of nonrepeating part and repeating part?
Given the reduced rational number $\frac{p}q$, you are seeking the smallest number of the form $10^n\left(10^r-1\right)$ which is divisible by $q$. Then there are $n$ non-repeating digits and $r$ repeating digits. In particular, it only depends on $q$, not $p$ (assuming they are relatively prime.) That would make your answer for $13/92$ wrong. So, if we write $q=2^a5^bq'$ where $\gcd(q',10)=1$, then we have $n=\max(a,b)$, and $r$ equal to the order of $10$ modulo $q'$, which will be a divisor of $\phi(q')$. It is elementary that if we know $n,r$ then $10^n(10^r-1)\frac{p}{q}$ must be an integer - that is the grade-school method for figuring out the value of a repeating decimal. For example, if $p=1,q=5\cdot 37$, then $n=1$ and $r$ is the smallest value such that $10^r-1$ is divisible by $37$, which turns out to be $3$. And that's what we get: $$\frac{1}{5\cdot 37} = 0.0\overline{054}$$ Another example: $p=5,q=2^3\cdot 3\cdot 7$. Then $n=3$ and $q'=21$. That means that $r$ must be a divisor of $\phi(21)=12$. Actually, we can show it must be a divisor of $6$, and is $6$ since $1/7$ has a repeating block of length $6$. And, indeed: $$\frac{5}{168} = 0.029\overline{761904}$$
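In code, this recipe reads off directly (a sketch; the multiplicative order is computed by brute force here):

```python
def decimal_lengths(q):
    """Return (n, r): the number of non-repeating and repeating digits
    of p/q in lowest terms (independent of p)."""
    n = 0
    for d in (2, 5):
        e = 0
        while q % d == 0:
            q //= d
            e += 1
        n = max(n, e)
    # q is now q' with gcd(q', 10) = 1; r is the order of 10 mod q'
    if q == 1:
        return n, 0
    r, t = 1, 10 % q
    while t != 1:
        t = (t * 10) % q
        r += 1
    return n, r

print(decimal_lengths(5 * 37))        # (1, 3)  ->  1/185 = 0.0(054)
print(decimal_lengths(2**3 * 3 * 7))  # (3, 6)  ->  5/168 = 0.029(761904)
```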
$x_1^3+3x_2+3x_3$ in terms of the roots $x_1$, $x_2$, $x_3$ of $x^3−3x−15=0$
$3(x_1+x_2+x_3)=0$ since the polynomial has no $x^2$ term, so $3x_2+3x_3=-3x_1$. Moreover, $x_1$ is a root, so $x_1^3-3x_1=15$. Thus, $$ x_1^3+3x_2+3x_3=x_1^3-3x_1=15. $$
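A quick numerical confirmation, for any labeling of the roots:

```python
import numpy as np

roots = np.roots([1, 0, -3, -15])   # roots of x^3 - 3x - 15
for i in range(3):
    x1 = roots[i]
    x2, x3 = np.delete(roots, i)
    # the expression evaluates to 15 no matter which root plays x_1
    assert abs(x1**3 + 3*x2 + 3*x3 - 15) < 1e-8
print("all three labelings give 15")
```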
A small question on fourier series
Because you assumed that the series from which you derived it could be differentiated term by term, when in fact it could not.
basic question on integration
$$X=\int\Big(\dfrac{dX}{dl}\Big) \,dl $$ $$Y=\int\Big(\dfrac{dY}{dl}\Big) \,dl$$ $$X+Y=\int\Big(\dfrac{d(X+Y)}{dl}\Big) \,dl$$ $$\implies \int \Big(\dfrac{dX}{dl}\Big) \,dl+\int\Big(\dfrac{dY}{dl}\Big) \,dl=\int\Big(\dfrac{d(X+Y)}{dl}\Big) \,dl$$ Now put $$\Big(\dfrac{dX}{dl}\Big)=A$$$$\Big(\dfrac{dY}{dl}\Big)=B$$ Finally $$ \int(A + B) \,dl = \int A \,dl+ \int B \,dl $$
How does canonical form relate to a particular $\xi$ and $\eta$?
We form the characteristic equation $$ (dt)^2 - dx\, dt -20 (dx)^2 = 0. $$ Solving it with respect to $dt$ gives $dt = -4\, dx$ or $dt = 5\, dx$, so we have $t + 4 x = \operatorname{const}_1$ and $t - 5 x = \operatorname{const}_2$. These lines are the characteristics of your equation. So the change of variables $\xi = t + 4 x$ and $\eta = t - 5 x$ leads to the second canonical form $\partial_\xi\partial_\eta u = 0$. We can conclude that $u(x, t) = \phi(t + 4 x) + \psi(t - 5 x)$ is the general solution of your equation, where $\phi$ and $\psi$ are some twice continuously differentiable functions.
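One can check the general solution symbolically. The second-order equation consistent with this characteristic relation is (presumably, since the original equation is not quoted here) $u_{xx} + u_{xt} - 20\,u_{tt} = 0$; a SymPy sketch:

```python
import sympy as sp

x, t = sp.symbols('x t')
phi, psi = sp.Function('phi'), sp.Function('psi')

u = phi(t + 4*x) + psi(t - 5*x)
# residual of the (assumed) PDE u_xx + u_xt - 20*u_tt = 0
residual = sp.diff(u, x, 2) + sp.diff(u, x, t) - 20*sp.diff(u, t, 2)
print(sp.simplify(residual))   # 0
```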
What is the formal problem caused by interpreting dx as an infinitesimal?
The Archimedean Property forbids the existence of infinitesimal values. It states: $$ (\forall x > 0)(\exists N \in \mathbb{N})\left( \frac{1}{N} < x \right). $$ If $ x $ is an infinitesimal, then the above statement isn’t true. Hence, if you want the Archimedean Property to hold, then you can’t have infinitesimals. Note that the Archimedean Property is a characteristic property of $ \mathbb{R} $, being one of the axioms of the real-number system. Note: There are real-number systems with infinitesimals. Have a look at non-standard analysis with the set of hyperreal numbers. It’s thus possible to have infinitesimals, and there is no formal problem with them. However, you’ll lose the Archimedean Property once you impose the existence of infinitesimals...
Time to make n pancakes on a pan that takes k.
For $n = 4$ and $k = 3$, call the pancakes $1,2,3,4$, each needing frying on both sides $a$ and $b$. For 20 seconds each, do the following batches of 3: (1a 2a 3a) (4a 1b 2b) (3b 4b 1a) (2a 3a 4a) (1b 2b 3b) (4b 1a 2a) (3a 4a 1b) (2b 3b 4b). So in your example with $n=4$ and $k = 3$, the answer is $8 \times 20$ seconds $=$ 2 minutes 40 seconds. If $n \ge k$, you can always use a similar pattern: list the $2nk$ 'fries' in a $2n \times k$ rectangular array, each row being a batch lasting $1/k$ minutes, for a minimum time of $2n \times 1/k = 2n/k$ minutes. If $n < k$, just put all the pancakes in at once and cook for 2 minutes. If you add the restriction of needing each side to be cooked continuously for a minute, the results are similar. With $n > k$, the number of minutes needed is the smallest integer not less than $2n/k$; rounding up is required if the last batch isn't full. For $n = 4$ and $k = 3$, cook each of these batches for 1 minute: (1a 2a 3a) (4a 1b 2b) (3b 4b)
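The cyclic pattern generalizes: list the $2n$ sides, repeat that list $k$ times, and chop it into groups of $k$. Consecutive entries are then distinct as long as $2n \ge k$, and each side's $k$ sub-fries land in $k$ different batches. A small sketch (function name my own):

```python
def pancake_schedule(n, k):
    """Batches of size <= k covering each of the 2n sides k times,
    valid when 2n >= k (each batch then holds distinct sides)."""
    sides = [f"{i}{s}" for i in range(1, n + 1) for s in "ab"]
    slots = sides * k                           # every side appears k times
    return [slots[j:j + k] for j in range(0, len(slots), k)]

for batch in pancake_schedule(4, 3):            # 8 batches of 20 seconds each
    print(batch)
```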
What is a 2-sided inverse?
If I give you an arbitrary function $f:A\to B$ then it might turn out there exists $h:B\to A$ such that $f\circ h={\rm id}_B$ or $g:B\to A$ such that $g\circ f={\rm id}_A$. In such a case we say $h$ is a right inverse, and that $g$ is a left inverse. It might happen that there exists a $j$ which is both a right and a left inverse for $f$, and in such a case we merely say $j$ is an (actually the) inverse of $f$, and denote it by $f^{-1}$. In particular, injective functions are those which have left inverses, or which are said to be left cancellable ($fh=fg\implies h=g$), and surjective functions are those which have right inverses (using the axiom of choice), or which are said to be right cancellable ($gf=hf\implies g=h$).
embeddings of projective spaces into Euclidean spaces
A necessary condition for a closed $n$-manifold $M$ to (smoothly) embed or immerse into $\mathbb{R}^{n+1}$ is that its tangent bundle $T$ becomes trivial after adding a single line bundle $L$ (namely the normal bundle of the embedding). This condition is sufficient for an immersion by Hirsch-Smale theory, but the question of embedding is more delicate. The condition that $T \oplus L$ is trivial implies that $w(T) w(L) = 1$. When $M = \mathbb{RP}^n$, recall that the total Stiefel-Whitney class of $\mathbb{RP}^n$ is $(1 + \alpha)^{n+1}$ where $\alpha \in H^1(\mathbb{RP}^n, \mathbb{F}_2)$ is a generator. If $w_1(L) = 0$ then we need $(1 + \alpha)^{n+1} = 1$, or equivalently ${n+1 \choose k}$ even for $1 \le k \le n$. By Kummer's theorem this happens iff $n+1$ is a power of $2$, so the only candidates are $\mathbb{RP}^{2^k - 1}$. Similarly, if $w_1(L) = \alpha$ then we need $(1 + \alpha)^{n+2} = 1$, and this happens iff $n+2$ is a power of $2$, so the only candidates are $\mathbb{RP}^{2^k - 2}$. This table of embeddings and immersions of real projective spaces shows that $\mathbb{RP}^3$ and $\mathbb{RP}^7$ don't embed into $\mathbb{R}^4$ and $\mathbb{R}^8$ respectively, and a 1963 theorem of James rules out $\mathbb{RP}^{2^k - 1}$ for sufficiently large $k$ (I think $k \ge 4$ but I haven't checked). Also, $\mathbb{RP}^6$ doesn't embed into $\mathbb{R}^8$ and $\mathbb{RP}^{14}$ doesn't immerse into $\mathbb{R}^{21}$, so prospects seem poor for this case as well. For $\mathbb{CP}^n$ we can instead compute the total Pontryagin class, which is $(1 + \alpha^2)^{n+1}$, where $\alpha \in H^2(\mathbb{CP}^n, \mathbb{Z})$ is a generator. The condition that $T \oplus L$ is trivial implies that this vanishes, which is only possible for $n = 1$: for $n \ge 2$ the first Pontryagin class is $(n+1) \alpha^2 \in H^4(\mathbb{CP}^n, \mathbb{Z})$ which doesn't vanish. Hence for $n \ge 2$ we find that $\mathbb{CP}^n$ cannot immerse into $\mathbb{R}^{2n+1}$; this argument in fact shows that the smallest possible codimension of an immersion is $4$ (because this is the smallest codimension for which the normal bundle can have a nontrivial Pontryagin class). Presumably a similar computation works for the total Pontryagin class of $\mathbb{HP}^n$ but I'm less familiar with this case.
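The binomial-coefficient condition is easy to confirm by machine; a quick check that $\binom{n+1}{k}$ is even for all $1 \le k \le n$ exactly when $n+1$ is a power of $2$:

```python
from math import comb

def all_even(m, n):
    # is C(m, k) even for all 1 <= k <= n ?
    return all(comb(m, k) % 2 == 0 for k in range(1, n + 1))

for n in range(1, 64):
    if all_even(n + 1, n):
        print(n)   # 1, 3, 7, 15, 31, 63, ... i.e. n + 1 a power of 2
```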
where would a triangle point be located?
If your triangle has base $b$ and height $h$, then you have one point at $(0,0)$, one at $(b\cos(\alpha),b\sin(\alpha))$ for some angle $\alpha$, and the third point at $(\frac{b\cos(\alpha)}{2}\mp h \sin(\alpha),\frac{b\sin(\alpha)}{2}\pm h \cos(\alpha))$. Alternatively, if you know the base $b$ and the hypotenuse $H$, you have the internal angle $\beta$ at the base given by $$H\cos(\beta)=b/2,$$ $$\beta=\arccos\left(\frac{b}{2H}\right),$$ and then you have your points at $(b\cos(\alpha),b\sin(\alpha))$ and $(H\cos(\alpha\pm\beta),H\sin(\alpha\pm\beta))$.
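For concreteness, a small helper implementing the first parametrization (the function name and the `sign` argument, which picks the side of the base the apex lies on, are my own):

```python
from math import cos, sin

def triangle_points(b, h, alpha, sign=+1):
    """Vertices of a triangle with base b rotated by angle alpha,
    apex at height h above the midpoint of the base."""
    p0 = (0.0, 0.0)
    p1 = (b * cos(alpha), b * sin(alpha))
    p2 = (b * cos(alpha) / 2 - sign * h * sin(alpha),
          b * sin(alpha) / 2 + sign * h * cos(alpha))
    return p0, p1, p2

print(triangle_points(2.0, 1.0, 0.0))   # ((0, 0), (2, 0), (1, 1))
```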
understanding of an application of the Zorn's lemma
For the sake of an answer, I would like to repeat @Daniel Fischer's comment here: They might use a form of Zorn's lemma that states "If $(X,≤)$ is a nonempty inductively ordered set, then for every $x∈X$ there is a maximal element $m$ of $X$ with $x≤m$". Then the proof is correct. If their version of Zorn's lemma is the usual one that states only existence of maximal elements, their proof is not strictly correct.
Seating arrangement round table
Define a graph $G$ as follows. The vertices are the $p$ places at the table. Two vertices are joined by an edge if there is exactly one place between them. So $G$ is a $2$-regular graph with $p$ vertices and $p$ edges; the task is to assign the ladies to independent (i.e. non-adjacent) vertices at the table. If $p$ is odd, then $G$ is just a $p$-cycle; a seating arrangement is always possible if $p_1\lt\frac p2$. If $p=10$ then $G=2C_5$ (two $5$-cycles), so a seating arrangement is possible for up to $4$ ladies, no more. You can take it from there.
Prove that there are infinitely many integers of the form $111\ldots111$ that are multiple of $n$.
Take $n$ consecutive such numbers and try subtracting one from another; see what kind of differences you get. Note also that $n$ and $10$ are relatively prime. Also note that once you find one such number, you can make infinitely many more. It's just a pigeonhole problem.
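If you want to watch the pigeonhole at work, a few lines suffice (a sketch assuming, as in the problem, that $\gcd(n,10)=1$):

```python
def repunit_multiple(n):
    """Length of the smallest repunit 11...1 divisible by n
    (terminates because gcd(n, 10) = 1, by the pigeonhole argument)."""
    r, k = 1 % n, 1
    while r != 0:
        r = (10 * r + 1) % n    # append another digit 1
        k += 1
    return k

print(repunit_multiple(7))   # 6: 111111 = 7 * 15873
print(repunit_multiple(3))   # 3: 111 = 3 * 37
```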
Proving an inequality with n positive real numbers
We write the inequality as $$\sum_{i=1}^n\frac{1}{a_i+1}\leq \frac{n-1}{2}+\frac{1}{a_1a_2\cdots a_n+1}. \quad (1)$$ For $n=2$ the inequality becomes $$\frac{1}{a_1+1}+\frac{1}{a_2+1} \leqslant \frac{1}{2} + \frac{1}{a_1a_2+1}, \quad (2)$$ or $$\frac{(a_1-1)(a_2-1)(a_1a_2-1)}{2(a_1+1)(a_2+1)(a_1a_2+1)} \geqslant 0,$$ which holds under the problem's hypothesis (for instance, when $a_i \geqslant 1$ every factor in the numerator is nonnegative). Now, let $x=a_1a_2 \cdots a_k$ and suppose $(1)$ is true for $n=k$, so that $$\sum_{i=1}^k\frac{1}{a_i+1} \leqslant \frac{k-1}{2}+\frac{1}{x+1}. \quad (3)$$ We will show that it is also true for $n=k+1.$ Indeed, setting $y = a_{k+1}$ and using $(3)$ and then $(2)$ (applied to $x$ and $y$), we have $$\sum_{i=1}^{k+1}\frac{1}{a_i+1} = \sum_{i=1}^k\frac{1}{a_i+1} + \frac{1}{y+1} \leqslant \frac{k-1}{2}+\frac{1}{x+1} + \frac{1}{y+1}$$ $$\leqslant \frac{k-1}{2}+\frac{1}{2}+\frac{1}{xy+1} = \frac{k}{2}+\frac{1}{a_1a_2\cdots a_{k+1}+1}.$$ Done.
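A quick random spot-check of the base case $(2)$ — assuming, as the factored form suggests, a hypothesis like $a_i \geqslant 1$ (the inequality can fail for arbitrary positive reals, e.g. $a_1=\tfrac12$, $a_2=3$):

```python
import random

# spot-check inequality (2) for a_1, a_2 >= 1, where every factor
# of the factored numerator is nonnegative
for _ in range(10000):
    a1, a2 = random.uniform(1, 50), random.uniform(1, 50)
    lhs = 1/(a1 + 1) + 1/(a2 + 1)
    rhs = 0.5 + 1/(a1*a2 + 1)
    assert lhs <= rhs + 1e-12
print("ok")
```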
Trouble showing convergence/divergence for alternating series
Observe that, by the sum-to-product identity, $$\sin\frac{(n+1)\pi}{3n+4}-\sin\frac{n\pi}{3n+1}=2\sin\frac\pi{2(3n+1)(3n+4)}\cdot\cos\frac{(6n^2+8n+1)\pi}{2(3n+1)(3n+4)}$$ In the expression on the right the angle of the sine is in the first quadrant and thus its sine is positive, while the angle of the cosine lies strictly between $0$ and $\;\pi/2\;$ for any values of $\;n\;$ (why? This is a nice exercise in quadratic functions in natural numbers: $6n^2+8n+1<(3n+1)(3n+4)$ always), so its cosine is positive as well. Hence the difference is positive and your sequence $\sin\frac{n\pi}{3n+1}$ is increasing; in fact it increases to $\sin\frac\pi3=\frac{\sqrt3}2\neq0$. Since the terms of the series do not tend to $0$, the alternating series diverges....
Show that $\int_{-\pi}^\pi ~f(x) \cos (nx) \mathrm{d}\mu(x)$ converges to $0$
First, let's examine the functions $C_n(x)=\cos (nx)$. For large $n$, the graph of this function consists of many cosine waves of small common period. If you integrate this function over $[-\pi,\pi]$, you'll, of course, obtain 0. Now if $I$ is any interval in $[-\pi,\pi]$, for $n$ big, the graph of $C_n$ over $I$ will consist of many cosine waves of small common period together with a portion of a cosine wave near each endpoint of $I$. Here, $\int_I C_n$ will be the same as integrating $C_n$ just "near the endpoints" of $I$ (the middle portion will integrate to 0). But as $n$ grows large, the measure of those portions gets small, and as a result, $\lim\limits_{n\rightarrow\infty} \int_I C_n =0$. Thus, for any interval $I$ and for any number $a$, we have $$\lim_{n\rightarrow\infty} \int_I a\cdot C_n =0.$$ Using the above result and linearity of integration, we can show that if $g$ is a function of the form $$g(x)=\sum_{i=1}^m a_i \chi_{I_i},{\text{ where } }\ I_j\cap I_k\ \buildrel{j\ne k}\over=\ \emptyset{\text{ and }}\bigcup_{i=1}^m I_i=[-\pi,\pi],$$ then $$\lim_{n\rightarrow\infty} \int_{[-\pi,\pi]} g C_n =0.$$ Thus, the theorem is true for any step function in $L_1$. The general result follows from the fact that the step functions are dense in $L_1$ (that is, given $f\in L_1$ and $\epsilon>0$, there is a step function $g$ with $\Vert f-g\Vert_{L_1}<\epsilon$). I can provide more details here if you like, just let me know.
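To see the endpoint effect quantitatively: over any interval $[a,b]$ the integral has the closed form $(\sin(nb)-\sin(na))/n$, so it decays like $1/n$; e.g.:

```python
from math import sin

a, b = 0.3, 1.7   # an arbitrary subinterval of [-pi, pi]
for n in [1, 10, 100, 1000]:
    val = (sin(n*b) - sin(n*a)) / n   # exact value of the integral of cos(nx) over [a, b]
    print(n, val)                     # |val| <= 2/n, tending to 0
```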
Can someone please explain how the answer is 48.
First seat Edward & Bella. There are two of them and exactly three possible places (relative to the other two couples, who sit together): $$ ? == ? == ?$$ So there are $3 \cdot 2 = 6$ ways of seating Bella & Edward before we seat the other two couples. Now seat the other two couples. There are two places left for the couples ($2$ ways to decide which couple takes which place), and within each couple the two people can switch seats, hence there are $6 \cdot 2 \cdot (2 \cdot 2) = 48 $ arrangements.
Proving the Riesz Representation Theorem for $\ell^p$.
As you said, all that remains is to show that $\sum_n b_n e_n \in l^q$, i.e. that $\sum_n c_n e_n \in l^p$. As you also said, $\sum_{n=1}^N c_n e_n \in l^p$ and $\sum_{n=1}^N |c_n|^p =\sum_{n=1}^N |b_n|^{(q-1)p}= \sum_{n=1}^N |b_n|^{q}$. Moreover: $$T\left(\sum_{n=1}^N c_n e_n \right)=\sum_{n=1}^N c_n T\left(e_n \right)=\sum_{n=1}^N c_n b_n=\sum_{n=1}^N |b_n|^{q-1+1}.$$ But (and this is where you use the continuity of $T$) there exists $C$ ($C=\|T\|_{(l^p)^*}$) such that for all $u \in l^p$: $$|T(u)| \leq C \|u\|_{l^p},$$ so: $$\sum_{n=1}^N |b_n|^{q}=\left|T\left(\sum_{n=1}^N c_n e_n \right) \right| \leq C \left( \sum_{n=1}^N |c_n|^p \right)^\frac{1}{p}=C \left(\sum_{n=1}^N |b_n|^{q} \right)^\frac{1}{p},$$ i.e.: $$\left(\sum_{n=1}^N |b_n|^{q} \right)^{\left(1-\frac{1}{p}\right)}=\left(\sum_{n=1}^N |b_n|^{q} \right)^\frac{1}{q} \leq C.$$ Taking $N \to + \infty$ you obtain: $$\left(\sum_{n=1}^\infty |b_n|^{q} \right)^\frac{1}{q} \leq C.$$
Derivative and integral of $u(f(t),g(t))/v(f(t),g(t))$
Let $a(t):=u(f(t),g(t))$, $b(t):=v(f(t),g(t))$ and $h(t):=\frac{a(t)}{b(t)}$. You are looking for $h'(t)$, which can be computed with the quotient rule. For that you need $a'(t)$ and $b'(t)$, which come from the multivariate chain rule. For example: $a'(t)=u_x(f(t),g(t))f'(t)+u_y(f(t),g(t))g'(t)$. Your turn!
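SymPy will grind out the combined quotient-plus-chain rule if you want to check your hand computation; a sketch with unspecified functions:

```python
import sympy as sp

t = sp.symbols('t')
u, v = sp.Function('u'), sp.Function('v')
f, g = sp.Function('f'), sp.Function('g')

h = u(f(t), g(t)) / v(f(t), g(t))
# the result combines the quotient rule with the chain-rule terms
# u_x*f' + u_y*g' and v_x*f' + v_y*g' discussed above
print(sp.diff(h, t))
```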
How to prove this isomorphism of k algebras.
You could find the inverse of $\phi$. Try $\psi:k[t]\to k[x,y]/\langle y-x^2\rangle$ as the $k$-algebra map with $\psi(t)=x$ (or strictly speaking the coset $x+\langle y-x^2\rangle$). Then $\phi(\psi(t))=\phi(x)=t$, $\psi(\phi(x))=\psi(t)=x$ and $\psi(\phi(y))=\psi(t^2)=x^2$, but $x^2=y$ in the quotient ring $k[x,y]/\langle y-x^2\rangle$. So $\psi$ and $\phi$ are inverses.
Isolated Singularity Behavior
You do not need L'Hospital's rule. All you have to know is that the zeroes $z_k = k\pi$ of $\sin z$ have order $1$, because $\sin' z_k = \cos z_k \ne 0$. This means that $\sin z = (z - z_k) \cdot f_k(z)$ with a holomorphic function $f_k : \mathbb C \to \mathbb C$ such that $f_k(z_k) \ne 0$. This shows that $\lim_{z \to 0} \dfrac{z^2}{\sin z} = \lim_{z \to 0} \dfrac{z}{f_0(z)} = 0$. Thus you have a removable singularity at $0$. For $k \ne 0$ you have $\dfrac{z^2}{\sin z} = \dfrac{z^2}{f_k(z)} \cdot \dfrac{1}{z - z_k}$, which shows that there is a pole of order $1$ at $z_k$.
Solving $\frac{1}{x} + \frac{1}{x+1} < \frac{2}{x+5}$
When multiplying we need to take the sign into account; in this case your steps are not correct, since you are assuming the factors are positive. As an alternative that avoids multiplication, we have $$\frac{1}{x} + \frac{1}{x+1} < \frac{2}{x+5} \iff\frac{1}{x} + \frac{1}{x+1} - \frac{2}{x+5}<0 \iff \frac{9x+5}{x(x+1)(x+5)}<0$$ then study the sign of each factor.
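SymPy confirms the resulting solution set (a sketch):

```python
import sympy as sp

x = sp.symbols('x', real=True)
ineq = 1/x + 1/(x + 1) < 2/(x + 5)
print(sp.solve_univariate_inequality(ineq, x, relational=False))
# expected: Union(Interval.open(-5, -1), Interval.open(-5/9, 0)),
# i.e. the sign chart of (9x+5)/(x(x+1)(x+5))
```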
Grothendieck's axiom for abelian categories: AB5 and AB4, understanding a proof from Popescu's book
Special thanks to Captain Lama in the comment section! Let $(l_{I\setminus F})_i\colon X_i\to \coprod_{i \in I\setminus F} X_i$. Form a coproduct $(\coprod_{i \in F} X_i)\sqcup (\coprod_{i \in I\setminus F} X_i)$ together with canonical injections $q_1\colon \coprod_{i \in F} X_i \to (\coprod_{i \in F} X_i)\sqcup (\coprod_{i \in I\setminus F} X_i)$ and $q_2 \colon \coprod_{i \in I\setminus F} X_i \to (\coprod_{i \in F} X_i)\sqcup (\coprod_{i \in I\setminus F} X_i)$. Then, by the associativity theorem for coproducts, $(\coprod_{i \in F} X_i)\sqcup (\coprod_{i \in I\setminus F} X_i)$ is a coproduct of the family $(X_i)_{i \in I}$ together with the canonical injections $q_i\colon X_i \to (\coprod_{i \in F} X_i)\sqcup (\coprod_{i \in I\setminus F} X_i)$ where $q_i = q_1\circ (l_F)_i$ if $i \in F$ and $q_i = q_2\circ (l_{I\setminus F})_i$ otherwise. By uniqueness of the coproduct, there is a unique isomorphism $u$ such that $l_i = u\circ q_i = u\circ q_1\circ (l_F)_i$ for all $i \in F$ and $l_i = u\circ q_2 \circ (l_{I\setminus F})_i$ for all $i \in I\setminus F$. In particular, $v_F\circ (l_F)_i = l_i = u\circ q_1 \circ (l_F)_i$ for all $i \in F$, so $v_F = u\circ q_1$ is a monomorphism, as coproduct injections are monomorphisms in abelian categories.
Quantifying the angle metric on the Grassmannian in terms of the norm on the exterior power
$\newcommand{\R}{\mathbb{R}} \newcommand{\U}{\mathbf{U}} \newcommand{\W}{\mathbf{W}} \newcommand{\A}{\mathbf{A}} \newcommand{\tr}{\mathrm{tr}} \newcommand{\rank}{\mathrm{rank}} $ Choosing an arbitrary basis, we may assume $V=\R^n$. We will show that the mapping \begin{align} \mathbb{S}\big(\bigwedge\nolimits^k_s(\R^n)\big)&\to Gr_k(\R^n)\\ \sigma=u_1\wedge\ldots\wedge u_k &\mapsto\operatorname{span}\{u_1,\ldots,u_k\}=\{v\in\R^n\colon v\wedge\sigma=0\} \end{align} is continuous w.r.t. the angle metric on the Grassmannian and the induced inner product metric on the exterior power. The following lemma implies this assertion. Here $\mathbb{S}(X)$ denotes the set of elements in $X$ of unit length and $\bigwedge\nolimits^k_s(\R^n)$ is the set of simple $k$-vectors. Let $U,W\in Gr_k(\R^n)$ and choose orthonormal bases $u_1,\ldots,u_k$ and $w_1,\ldots,w_k$ of $U$ and $W$, respectively. Let further $\U,\W\in\R^{n\times k}$ be the matrices with columns $(u_i)$ and $(w_i)$, respectively. Finally, define the unit $k$-vectors $\sigma=u_1\wedge\cdots\wedge u_k$ and $\tau=w_1\wedge\cdots\wedge w_k$. Lemma \begin{align} d(U,W) \leq \sqrt{k}\left\|\sigma-\tau\right\|_{\bigwedge^k(\R^n)} \end{align} Proof First, note that by the Cauchy–Schwarz inequality \begin{align} \det(\U^T\W)=\langle\sigma,\tau\rangle_{\bigwedge^k(\R^n)} \leq \|\sigma\|_{\bigwedge^k(\R^n)}\|\tau\|_{\bigwedge^k(\R^n)}=1. \tag{1}\label{eq:CS_ineq} \end{align} Secondly, the Euclidean operator norm of a linear operator $A$ is bounded by its Frobenius norm, i.e. $\|A\| \leq \|A\|_F$. This follows from the fact that the Euclidean norm and the Frobenius norm of a vector coincide and the submultiplicativity of the Frobenius norm. Therefore, \begin{align} d(U,W)^2 &= \left\| P_U-P_W \right\|^2 \leq \left\| P_U-P_W \right\|_F^2 =\left\| P_U\right\|_F^2 +\left\|P_W \right\|_F^2 -2\left\langle P_U,P_W \right\rangle_F. \end{align} Using that $P_U=\U\U^T$ and $P_W=\W\W^T$ are idempotent, we get that $\left\| P_U\right\|_F^2=\rank(P_U)=k$ and also $\left\| P_W\right\|_F^2=k$. Thus, \begin{align} \left\| P_U-P_W \right\|_F^2 &=2k-2\tr(\U\U^T\W\W^T) =k\big(2-\tfrac2k \tr((\U^T\W)^T\U^T\W)\big). \end{align} Due to the inequality of the arithmetic and geometric mean, it holds that \begin{align} \sqrt[k]{\det(\A)} \leq \tfrac1k \tr(\A) \end{align} for any symmetric positive-semidefinite matrix $\A\in\R^{k\times k}$ (because the trace is the sum and the determinant the product of the nonnegative eigenvalues of $\A$). If $\det(\A)\leq1$ then for any $k\geq2$ \begin{align} \sqrt{\det(\A)} \leq \tfrac1k \tr(\A). \tag{2}\label{eq:AGM} \end{align} Therefore we can estimate using \eqref{eq:CS_ineq} and \eqref{eq:AGM} for $\A=(\U^T\W)^T\U^T\W$ \begin{align} \left\| P_U-P_W \right\|_F^2 &\leq k\big(2-2\sqrt{\det((\U^T\W)^T\U^T\W)}\big) =k\big(2-2\det(\U^T\W)\big). \end{align} Note that we can always arrange for $\det(\U^T\W)$ to be positive by interchanging two columns of, say, $\U$. This is possible because we investigate the unoriented Grassmannian. Finally, direct computation shows that \begin{align} \|\sigma-\tau\|_{\bigwedge^k(\R^n)}^2=\|\sigma\|_{\bigwedge^k(\R^n)}^2+\|\tau\|_{\bigwedge^k(\R^n)}^2 - 2 \langle\sigma,\tau\rangle_{\bigwedge^k(\R^n)} = 2-2\det(\U^T\W), \end{align} which concludes the proof. The next statement is not really needed but is included here for completeness.
Corollary For arbitrary simple $k$-vectors $\sigma,\tau$ that span the subspaces $U$ and $W$ respectively, it holds that \begin{align} d(U,W) \leq \sqrt{k}\left\|\tfrac{\sigma}{\left\|\sigma\right\|_{\bigwedge^k(\R^n)}}- \tfrac{\tau}{\left\|\tau\right\|_{\bigwedge^k(\R^n)}}\right\|_{\bigwedge^k(\R^n)} \leq \sqrt{k}\frac{\left\|\sigma-\tau\right\|_{\bigwedge^k(\R^n)}}{\left\|\sigma\right\|_{\bigwedge^k(\R^n)}\left\|\tau\right\|_{\bigwedge^k(\R^n)}} \end{align} Proof If $\sigma=a_1\wedge\cdots\wedge a_k$ for an arbitrary basis $(a_1,\ldots,a_k)$ of $U$, then we can change basis to an orthonormal basis $(u_1,\ldots,u_k)$, e.g. via the QR-decomposition $\A=\U\mathbf{R}$. Then $\sigma=\det(\mathbf{R})u_1\wedge\cdots\wedge u_k$ and $\left\|\sigma\right\|_{\bigwedge^k(\R^n)}=\det(\mathbf{R})$. Then use the lemma for $\sigma/\left\|\sigma\right\|_{\bigwedge^k(\R^n)}$ and $\tau/\left\|\tau\right\|_{\bigwedge^k(\R^n)}$. The second inequality follows from the fact that $2ab\leq a^2+b^2$ for any real numbers $a,b$.
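A numerical sanity check of the lemma, using $\|\sigma-\tau\|^2 = 2-2\det(\U^T\W)$ from the proof (with the sign of the determinant fixed, as there, by a column swap — here, by taking the absolute value):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 7, 3
for _ in range(1000):
    U, _ = np.linalg.qr(rng.standard_normal((n, k)))   # orthonormal basis of U
    W, _ = np.linalg.qr(rng.standard_normal((n, k)))   # orthonormal basis of W
    d = np.linalg.norm(U @ U.T - W @ W.T, 2)           # gap metric ||P_U - P_W||
    det = abs(np.linalg.det(U.T @ W))
    rhs = np.sqrt(k) * np.sqrt(max(2 - 2*det, 0.0))    # sqrt(k) * ||sigma - tau||
    assert d <= rhs + 1e-9
print("lemma holds on all samples")
```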
Compute the variance for x:
Given what you've posted here, your professor's calculation of $\mathbb E[X^2] = 6.7$ does indeed appear to be incorrect.
Prove two parallel lines intersect at infinity in $\mathbb{RP}^3$
$\newcommand{\Reals}{\mathbf{R}}$Here's a "parametric" way to think of it: When you write "last coordinate $0$", presumably you're thinking of $\Reals^{3}$ embedded in $\Reals^{4}$ as $(x, y, z, 1)$. Take a non-zero direction $v = (a, b, c)$ in $\Reals^{3}$. A pair of parallel lines in $\Reals^{3}$ can be parametrized by \begin{align*} \ell_{1}: &\quad (x_{1} + at, y_{1} + bt, z_{1} + ct, 1) \sim \tfrac{1}{t}(x_{1} + at, y_{1} + bt, z_{1} + ct, 1), \\ \ell_{2}: &\quad (x_{2} + at, y_{2} + bt, z_{2} + ct, 1) \sim \tfrac{1}{t}(x_{2} + at, y_{2} + bt, z_{2} + ct, 1). \end{align*} Distribute the division by $t$, and let $|t| \to \infty$: both lines tend to the same point $[a : b : c : 0]$ at infinity.
Unbounded strictly increasing non-Cauchy sequence
Correct. Given a strictly increasing sequence, if it is bounded then it converges and is therefore Cauchy. Therefore, if a strictly increasing sequence is not Cauchy, then it is not bounded. I think this is a simpler justification.