title | upvoted_answer
---|---
Does there exist a positive integer $n$ such that $P^n = I$, where $P$ is a rotation matrix? | $P^n$ corresponds to a rotation by angle $2\pi qn$ radians, i.e. $qn$ full turns, so $P^n = I$ exactly when $qn$ is an integer. $qn$ can be an integer for some positive integer $n$ only when $q$ is rational. |
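As a quick numerical illustration (my own sketch; the rational value $q = 1/8$ is an assumed example), a rotation by $1/8$ of a full turn satisfies $P^8 = I$:

```python
import numpy as np

# Rotation by angle 2*pi*q with q = 1/8 (rational): P^8 should equal I.
q = 1 / 8
theta = 2 * np.pi * q
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(np.linalg.matrix_power(P, 8), np.eye(2)))  # True
```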
Polar decomposition should be a diffeomorphism, right? | I think I've proved a lemma which lets me finish this off. [Relevance: Note the squaring map $GL_+ \to GL_+$ is a bijection, since every positive matrix has a unique positive square root. The derivative of the squaring map at $a \in GL_+$ is $b \mapsto ba +ab : SA \to SA$, so the following lemma implies that the square root function is a diffeomorphism of $GL_+$.]
Lemma: If $a \in GL_+$, then $b \mapsto ab + ba : SA \to SA$ is invertible.
Proof: There exists an orthonormal basis $\xi_1,\ldots,\xi_n$ of $\mathbb{C}^n$ with respect to which $a$ diagonalizes: $a \xi_i = \lambda_i \xi_i$. Moreover, since $a$ is positive and invertible, $\lambda_i > 0$ for all $i$. For each $i,j$ denote by $e_{ij}$ the rank-1 matrix $\xi_i \xi_j^T$. We have a basis for $SA$ consisting of the $n + 2\binom{n}{2} = n^2$ self-adjoint matrices
\begin{align*}
e_{ii} && 1 \leq i \leq n \\
e_{ij} + e_{ji} && 1 \leq i < j \leq n \\
i(e_{ij} - e_{ji}) && 1 \leq i < j \leq n.
\end{align*}
Each of these is an eigenvector of $b \mapsto ab + ba$ with corresponding eigenvalues
\begin{align*}
2 \lambda_i && 1 \leq i \leq n \\
\lambda_i + \lambda_j && 1 \leq i < j \leq n \\
\lambda_i + \lambda_j && 1 \leq i < j \leq n.
\end{align*}
By positivity of the original eigenvalues, none of these new eigenvalues is zero. Hence, $b \mapsto ab + ba : SA \to SA$ is a (diagonalizable) invertible transformation. QED.
OK, now to finish things off. Suppose that $(u,p) \in U \times GL_+$. Suppose that $(a,b) \in T_{(u,p)}(U \times GL_+)$. That is, $u^*a$ is anti-self-adjoint and $b$ is self-adjoint. Suppose $DF_{(u,p)}(a,b) = ub + ap =0$. Multiplying by $u^*$ on the left and $p^{-1}$ on the right gives $bp^{-1} + u^* a = 0$. Taking the adjoint of the preceding gives $p^{-1} b - u^* a = 0$. Adding the last two equations gives $p^{-1} b + b p^{-1} = 0$. Applying the lemma with $a$ replaced by $p^{-1}$ shows that $b = 0$. It follows easily that $a=0$ as well, so we are done.
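As a numerical sanity check of the lemma (my own sketch, real symmetric case for simplicity): under vectorization the map $b \mapsto ab + ba$ is represented by the Kronecker sum $I\otimes a + a^{T}\otimes I$, whose eigenvalues are exactly the sums $\lambda_i + \lambda_j$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random positive definite a, as in the lemma (real symmetric case).
m = rng.normal(size=(4, 4))
a = m @ m.T + 4 * np.eye(4)

# Matrix of b -> ab + ba under vectorization: the Kronecker sum.
L = np.kron(np.eye(4), a) + np.kron(a.T, np.eye(4))

lam = np.linalg.eigvalsh(a)
sums = np.sort([li + lj for li in lam for lj in lam])
print(np.allclose(np.sort(np.linalg.eigvalsh(L)), sums))  # True
print(np.linalg.matrix_rank(L) == 16)                     # True: invertible
```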
Any feedback/additional perspectives would be welcome. |
Derivative of the solution of a linear program | This is known as sensitivity analysis. If you have a non-degenerate optimal basic feasible solution, it is relatively simple to find derivatives of the optimal BFS or the optimal objective value with respect to changes in b or c. Changes to A can also be analyzed, but this is somewhat more complicated.
If the optimal BFS is degenerate, these derivatives may not exist.
See just about any textbook on linear programming.
Here's a brief explanation.
First, put your LP into standard form by adding slack variables to eliminate the inequality constraints:
$\min c^{T}x$
subject to
$Ax=b$
$x \geq 0$
Here $A$ is a matrix of size $m$ by $n$ with rank $m$.
If there is a unique and non-degenerate optimal basic feasible solution, then the variables in $x$ can be split up into a vector $x_{B}$ of $m$ basic variables and a vector $x_{N}$ of $n-m$ nonbasic variables.
Let $B$ be the matrix obtained by taking the columns of $A$ that are in the basis, and let $A_{N}$ consist of the remaining columns of $A$. Similarly, let $c_{B}$ and $c_{N}$ be the coefficients in $c$ corresponding to basic and nonbasic variables.
You can now write the problem as
$\min c_{B}^{T}x_{B}+c_{N}^{T}x_{N}$
subject to
$Bx_{B}+A_{N}x_{N}=b$
$x \geq 0$.
In the optimal basic solution, we solve for $x_{B}$ and write the problem as
$\min c_{B}^{T}B^{-1}b + (c_{N}^{T}-c_{B}^{T}B^{-1}A_{N})x_{N} $
subject to
$x_{B}=B^{-1}b-B^{-1}A_{N}x_{N} $
$x \geq 0.$
An important optimality condition that the simplex method ensures is that
$r_{N}^{T}=c_{N}^{T}-c_{B}^{T}B^{-1}A_{N} \geq 0.$
If the solution is also dual non-degenerate, then $r_{N}>0$. We'll need to assume that as well.
In the optimal basic solution, we set all of the variables in $x_{N}$ to $x_{N}^{*}=0$ and get the values of the basic variables from $x_{B}^{*}=B^{-1}b$. By assumption, this optimal basic feasible solution is non-degenerate, meaning that $B^{-1}b$ is strictly greater than 0. Small changes to $b$ won't change $r_{N}$ and won't violate $x_{B} \geq 0$, so the solution will remain optimal after small changes to $b$.
Now, it should be clear that
$\frac{\partial x_{B}^{*}}{\partial b}=B^{-1}$
and
$\frac{\partial x_{N}^{*}}{\partial b}=0.$
Small changes in $c$ won't change $x_{B}$ at all, and will not violate $r_{N} \geq 0$. Thus the solution will remain optimal, and $x_{B}$ won't change, although the optimal objective value will change. Thus
$\frac{\partial x_{B}^{*}}{\partial c}=0$
and
$\frac{\partial x_{N}^{*}}{\partial c}=0.$
If the assumptions of non-degeneracy are violated then these derivatives may simply not exist!
In a similar way, you can analyze changes in $A_{N}$ or in $B$.
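Here is a small numerical sketch of the $b$-sensitivity (my own toy LP; SciPy's `linprog` is used for the solves, and the basis $\{x_1, x_2\}$ is read off from the optimal solution):

```python
import numpy as np
from scipy.optimize import linprog

# Standard-form LP: min c^T x  s.t.  A x = b, x >= 0 (slacks included).
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

sol = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 4)
basis = [0, 1]                 # optimal basic variables (both positive)
Binv = np.linalg.inv(A[:, basis])

# Finite-difference check of dx_B/db_1 against the first column of B^{-1}.
eps = 1e-6
sol2 = linprog(c, A_eq=A, b_eq=b + np.array([eps, 0.0]),
               bounds=[(0, None)] * 4)
fd = (sol2.x[basis] - sol.x[basis]) / eps
print(np.allclose(fd, Binv[:, 0], atol=1e-4))  # True
```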
This stuff is discussed in V. Chvatal's Linear Programming among many other textbooks on the subject. |
What does $\lim_{\epsilon\to 0} \frac{\zeta(1+\epsilon) + \zeta(1-\epsilon)}{2} =\gamma$ really mean? | Consider the function $f(x) = \dfrac{1}{x-1} + c,$ which has a singularity (simple pole with residue $1$, if you like) at $x = 1$ and is well-defined everywhere else.
A trivial computation gives $\lim_{\epsilon \to 0} \dfrac{ f(1+ \epsilon) + f(1 - \epsilon) }{2} = c.$
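The same averaging can be checked numerically for $\zeta$ itself (a sketch using mpmath; `euler` is Euler's constant $\gamma$):

```python
from mpmath import zeta, euler, mpf

# (zeta(1+eps) + zeta(1-eps))/2 -> gamma; the error shrinks like O(eps^2).
for k in (2, 4, 6):
    eps = mpf(10) ** -k
    print(k, (zeta(1 + eps) + zeta(1 - eps)) / 2 - euler)
```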
So the equation you ask about says that in a neighborhood of $s = 1$,
the function $\zeta(s)$ looks like $$\dfrac{1}{s-1} + \gamma + \text{higher order terms in } (s-1).$$ |
Can you suggest some challenging calculus questions? | Here are two that I've seen on this network.
Problem 1. Show that
$$
\sum_{n=1}^{\infty}\big(\frac{1}{4n+1}-\frac{1}{4n}\big)=\frac{1}{8}(\pi-8+6\ln 2)
$$
Problem 2. Evaluate the indefinite integral
$$
\int e^{\sin x}(x\cos x-\sec x\tan x)dx
$$ |
Logical Expression and truth table | In your truth table, make 7 columns S, P, Q, $\lnot S$, S and P, $\lnot S$ and Q, and finally (S and P) or ($\lnot$ S and Q)
The last column corresponds to what you want:
(S and P) OR ($\lnot$ S and Q) |
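A few lines of Python can print the whole table (my own sketch; the T/F rendering is just a convention):

```python
from itertools import product

header = ["S", "P", "Q", "~S", "S&P", "~S&Q", "(S&P)|(~S&Q)"]
print(" | ".join(header))
for S, P, Q in product([True, False], repeat=3):
    row = [S, P, Q, not S, S and P, (not S) and Q,
           (S and P) or ((not S) and Q)]
    print(" | ".join("T" if v else "F" for v in row))
```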
Find a solution of $x\frac{dy}{dx} = y^2 -y$ that passes through the point (1/2, 1/2) | (S)he is right. Note that if you set: $$\frac{1}{y(y-1)}=\frac{A}{y-1}+\frac{B}{y}$$ for some proper values of $A$ and $B$, then you'll have:
$$\frac{1}{y(y-1)}=\frac{(A+B)y-B}{y(y-1)}\longrightarrow A+B=0,~~B=-1$$ |
Determine whether or not $H$ is a subgroup of $G$ (assume that the operation of $H$ is the same as that of $G$) | First of all, regarding additive closure-
$$\log(a)+\log(b) = \log(ab) = 0 + \log(ab) = \log(1) + \log(ab) \in H+S$$
As for the inverse, functional inverse and inverse as a subgroup are very different things. Here, the operation is addition, and the additive identity is $0$. So, given an element $\log(a)$ in $H$, additive inverse should be $\log(b)\in H$, such that
$$ \log(a)+\log(b) = \log(ab) = 0 = \log(1)$$
As $a>0$ and $a\in \mathbb{Q}$, $\log(\frac{1}{a})$ is the required inverse. |
What is the derivative of $f(x)=\ln x$? | $$
y=\ln(x)\Rightarrow x=e^y \Rightarrow 1=y'e^y \Rightarrow y'=\frac{1}{e^y}\Rightarrow y'=\frac{1}{x}
$$ |
Example of a Zariski sheaf which is not representable? | You just have to take the constant sheaf defined by an infinite set. |
$\infty$-categories definition disambiguation | The intuitive idea of an $\infty$-category is a category-like structure where you have morphisms between morphisms between morphisms between morphisms and so on. That is, you have ordinary objects and morphisms, but you also have "2-morphisms" between parallel 1-morphisms (think natural transformations between functors, or homotopies between maps), and "3-morphisms" between parallel 2-morphisms, and so on. An $(\infty,1)$-category is then a structure of this sort where all the $k$-morphisms for $k>1$ are invertible. The basic motivating example is the $(\infty,1)$-category of spaces, where objects are (nice) topological spaces, morphisms are continuous maps, 2-morphisms are homotopies between maps, 3-morphisms are homotopies between homotopies, 4-morphisms are homotopies between homotopies between homotopies, and so on. Since every homotopy has an inverse (just reverse it), all the $k$-morphisms for $k>1$ are invertible.
So, Lurie's definition of "$\infty$-category" is actually just modeling this notion of $(\infty,1)$-category, not more general $\infty$-categories in which you can have non-invertible morphisms of all dimensions. You should think of an element of $S_2$ as not just a commuting triangle in the ordinary categorical sense, but a triangle which commutes up to a given homotopy. That is, you have objects $A$, $B$, and $C$, maps $A\to B$, $B\to C$, and $A\to C$, and a $2$-morphism ("homotopy") between the composition $A\to B\to C$ and the morphism $A\to C$. Similarly, higher-dimensional simplices are diagrams which commute up to given higher-dimensional morphisms.
In the case that you have an ordinary category, you consider it as an $(\infty,1)$-category by saying there are no $k$-morphisms for $k>1$ other than identity morphisms. So in this case all the higher morphisms in our simplices are identities, and the diagrams actually literally commute. So in that case, the simplicial set is just the usual nerve of the category. |
Graphs with pairs of vertices connected by multiple edges | These are known as multigraphs. |
How can I establish the inverse function? | There is no inverse function to $f(x)=x^{4}-3x^{3}+4x^{2}-6x+4$. This is because $f(x)$ has two $x$ values for one $y$ value, which means $f^{-1}(x)$ will have two $y$ values for one $x$ value. Since this does not pass the vertical line test, the inverse of $f(x)=x^{4}-3x^{3}+4x^{2}-6x+4$ is not a function.
If you want to get the inverse of $f(x)=x^{4}-3x^{3}+4x^{2}-6x+4$ on a specific domain so that the inverse is a function, one way is to use the tedious quartic formula, which expresses $f^{-1}(x)$ in terms of $x$. |
How does one calculate the norm $||f_n||_{\infty} = sup_{x \in [0,1]} |f_n(x)|$? | The norm $\|f\|_\infty$ is basically the largest value of $|f(x)|$
(as long as $f$ is continuous, which it is here). Here $f(x)\ge0$.
On $[0,1/2n]$ the largest value of $f(x)$ is at $x=1/2n$ where $f(x)=2n$.
I reckon the largest value on $[1/2n,1/n]$ is also $2n$.
Putting $f$ into the formula for $\|f\|_1$ gives
$$\|f\|_1=\int_0^{1/2n}4n^2x\,dx+\int_{1/2n}^{1/n}(4n-4n^2x)\,dx$$
which I'm sure you can do. Alternatively, just look at the graph
and write down the answer. |
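For what it's worth, SymPy confirms the computation symbolically (a sketch; the two integrands are read off from the formula above):

```python
import sympy as sp

n, x = sp.symbols('n x', positive=True)
I1 = sp.integrate(4*n**2*x, (x, 0, 1/(2*n)))
I2 = sp.integrate(4*n - 4*n**2*x, (x, 1/(2*n), 1/n))
print(sp.simplify(I1 + I2))  # 1, for every n
```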
Prove $\sum_{n=2}^{\infty}{\frac{1}{n\ln (n)}}$ diverges | Group the terms into blocks of size $2^n$. The sum in each block will be bigger than
$$ 2^{n} \frac{1}{2^{n+1} \log{2^{n+1}}} = \frac{1}{2(n+1)\log 2},$$ so it behaves like the series $\sum\frac{1}{2\ln(2)(n+1)}$, which diverges (hope you can use this fact) |
Sum involving zeta functions | One may observe that
$$
\zeta(n)=1+\frac1{2^n}+\frac1{3^n}+\cdots,\quad n>1,
$$
gives
$$
\lim_{n \to \infty}\zeta(n)=1,
$$ then
$$
\lim_{n \to \infty}{\left(\frac{(n-1)\zeta(n)}{4n-1}\right)}=\frac14 \times 1\neq0.
$$
Your series, as written, is divergent. |
The definition of a normal subgroup | It's logically sound.
Let $G$ be a group, $N$ a normal subgroup of $G$.
$N$ is the kernel of the canonical homomorphism $G \rightarrow G/N$.
Conversely let $f\colon G \rightarrow G'$ be a group homomorphism.
$Ker(f)$ is a normal subgroup of $G$. |
Prove that if $f:\mathbb{R\to R}$ is a monotonically increasing function then for all $a$, $\{x:f(x)>a\}$ is an interval. | As Mlazhinka Shung Gronzalez LeWy commented, you strictly only showed that teh set is contained in a certain interval.
Also, I personally find the long sequence of if-nots confusing.
Instead I'd go like this: Let $S=\{x:f(x)>a\,\}$. If $S=\Bbb R$ or $S=\emptyset$, we are done. Hence we may assume that $S\ne \emptyset$ and there exists $x_0$ with $x_0\notin S$. Then $x_0$ is a lower bound for $S$: If $x<x_0$ then $f(x)\le f(x_0)\le a$. Hence $\lambda:=\inf S$ exists. This implies $S\subseteq [\lambda,\infty)$.
If $x>\lambda$, then there exists $y\in S$ with $\lambda<y<x$, hence $f(x)\ge f(y)>a$, i.e., $x\in S$. So $(\lambda,\infty)\subseteq S$.
We conclude that $S=(\lambda,\infty)$ or $S=[\lambda,\infty)$, depending on whether $\lambda\in S$ or not. |
How to classify singularities in a given complex function and how to compute the residues of the same function. | You have several things wrong. The singularities may come from
Points where the denominator vanishes: $\cot z-1=0$ when $z=\dfrac\pi4+k\,\pi$. These are poles of order $2$.
Points where the denominator is not defined. This happens when $z=k\,\pi$, where $\cot z$ has a pole of order one. Since
$$
\lim_{z\to k\pi}\frac{z^4}{(\cot z-1)^2}=0,
$$
they are removable singularities. Moreover, they are zeroes of order $2$, except $z=0$, which is of order $6$. |
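A quick symbolic check of the order of the zero at $z=0$ (a sketch with SymPy):

```python
import sympy as sp

z = sp.symbols('z')
f = z**4 / (sp.cot(z) - 1)**2
# The expansion at z = 0 starts at z**6: a zero of order 6.
print(sp.series(f, z, 0, 8))
```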
Convergence of $\sum_{n=1}^\infty\left(1+\frac{2}{n}\right)^{n^3+n^2+1} \mathrm{e}^{-2n^2}$ | Since
$$
\log(1+x)=x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\mathcal{O}(x^5), \quad \lvert x\rvert<1,
$$
we have that
$$
\log\left(1+\frac{2}{n}\right)=\frac{2}{n}-\frac{2}{n^2}+\frac{8}{3n^3}-\frac{4}{n^4}+{\mathcal O}(n^{-5}).
$$
In particular, using the Taylor expansion theorem, we obtain that the ${\mathcal O}(n^{-5})$
term is of the form $n^{-5}a_n$, where $a_n$ is a bounded sequence.
Then we have that
$$
\log a_n=\log \left(\left(1+\frac{2}{n}\right)^{n^3+n^2+1} \left(\frac{1}{e^2}\right)^{n^2}\right) \\
=(n^3+n^2+1)\left(\frac{2}{n}-\frac{2}{n^2}+\frac{8}{3n^3}-\frac{4}{n^4}+{\mathcal O}(n^{-5})\right)-2n^2\\=
\frac{2}{3}+\frac{2}{3n}+{\mathcal O}(n^{-2})
$$
Hence $a_n=\mathrm{e}^{2/3+2/3n+{\mathcal O}(n^{-2})}\to\mathrm{e}^{2/3}$ and thus the series diverges. |
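A direct numerical check that $\log a_n \to 2/3$ (a sketch; `log1p` avoids loss of precision in $\log(1+2/n)$):

```python
import math

# log a_n = (n^3 + n^2 + 1) * log(1 + 2/n) - 2 n^2 should approach 2/3.
for n in (10, 100, 1000):
    print(n, (n**3 + n**2 + 1) * math.log1p(2 / n) - 2 * n**2)
```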
Determining posterior gaussian distribution having marginalised over hyperparameters. | Should we instead be viewing this as taking an average of gaussian random variables?
Yes. See Doucet & Johansen 2008 - A tutorial on Particle Filtering and Smoothing section 3.1; they make that explicit in their discussion of MC methods and use notation that stops this confusion from arising in the first place. |
Minimize $x + z$ subject to $x^2 + y^2 = 1$ and $y^2+z^2 = 4$ | Yes, your solution is correct, I just corrected a small typo in the notations of $f(x, y, z)$. |
Prove min(L) = all words in L that do not have any proper prefix in L | Hint: Consider a deterministic automaton; if you happen to be at the accepting state, anything that comes later should send you to the "trash" state. |
An Inequality Involving $\min(x, y)$ | My approach would have been:
$|xy-x_{0}y_{0}|=|(xy-xy_{0})+(xy_{0}-x_{0}y_{0})|\le|x||y-y_{0}|+|y_{0}||x-x_{0}|$
$<|x|\big(\frac{\epsilon}{2(|x_{0}|+1)}\big)+|y_{0}|\big(\frac{\epsilon}{2(|y_{0}|+1)}\big)=\frac{|x|}{|x_{0}|+1}\frac{\epsilon}{2}+\frac{|y_{0}|}{|y_{0}|+1}\frac{\epsilon}{2}<\frac{|x|}{|x_{0}|+1}\frac{\epsilon}{2}+\frac{\epsilon}{2}$
Then noting that $|x|-|x_{0}|\le|x-x_{0}|<1$ (using that $|x-x_{0}|<1$ since it is bounded by the min of $1$ and $\frac{\epsilon}{2(|x_{0}|+1)}$) so $|x|<|x_{0}|+1$. So the above is bounded by
$<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon$ |
Banach's Fixed Point theorem - banach vector space | Since $U$ is closed in $V$ (which is complete), then $U$ must be $\mathbf{complete}$( this is the key fact in the solution to this problem). Suppose $U \neq \varnothing$. Hence, there exists $u_0 \in U$. We define a sequence as follows:
$u_1 = T u_0 $
$u_2 = T u_1 = T^2 u_0 $
...
$u_m = T^m u_0 $
Next, we show $(u_m)$ is Cauchy. First of all, notice
$$ ||u_{m+1} - u_m || = ||T u_m - T u_{m-1} || \leq c || u_m - u_{m-1} || \leq ... \leq c^m ||u_1 - u_0||$$
Hence, for $n > m $, we have (by triangle inequality)
$$ ||u_m - u_n || \leq ||u_m - u_{m+1} || + || u_{m+1} - u_{m+2} || + ... + ||u_{n-1} - u_n || \leq (c^m + c^{m+1} + ... + c^{n-1} ) ||u_1 - u_0 || =_{why?} c^m \frac{1 - c^{n-m}}{1-c} ||u_1-u_0|| < \frac{c^m}{1-c} ||u_1 - u_0 ||$$
Notice, you can make the last expression as small as you want. You should verify this. This implies that $(u_m)$ is a cauchy sequence. Since $U$ is complete, there exists $u \in U$ such that $u_m \to u $. Now, I leave it to you to show that $u$ is the fixed point and is unique.
Added: To show $u$ is indeed the fixed point, we use the triangle inequality:
$$ ||Tu -u|| \leq ||Tu - u_n || + ||u_n - u || = ||Tu - T u_{n-1} || + ||u_n - u || \leq c||u - u_{n-1} || + ||u_n - u|| $$
The right-hand side tends to $0$ as $n \to \infty$, since $u_n \to u$. Hence
$$ ||Tu - u || = 0 $$
which implies $Tu = u$ |
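A toy instance of the iteration (my own sketch: $T(u)=\cos u$ is a contraction on the closed set $U=[0,1]$, since $|T'| \le \sin(1) < 1$ there):

```python
import math

u = 1.0                    # u_0 in U = [0, 1]
for _ in range(100):       # u_{m+1} = T(u_m)
    u = math.cos(u)
print(u, math.cos(u) - u)  # fixed point ~0.739085, residual ~0
```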
Order of a cyclic group | The order of a group $G$ is indeed the number of elements in it. The order of a subgroup $H$ generated by $(12)$ in the symmetric group $G=S_3$, say, is two, because we have $H=\{(1), (12)\}$, with $(12)^2=(1)$. Similarly the subgroup generated by $(15)(34)$ in $S_5$ has only two elements. The cyclic subgroup generated by $(134)(25)$ has order $lcm(2,3)=6$, since the order of $(134)$ is $3$, and the order of $(25)$ is $2$, and $gcd(2,3)=1$, and the two cycles commute. Note that $S_5$ is not cyclic, we need at least two generators, e.g. $(12345)$ and $(12)$. |
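These orders are easy to confirm with SymPy (a sketch; SymPy's permutations act on 0-indexed points, but the cycle type, and hence the order, is the same):

```python
from sympy.combinatorics import Permutation

print(Permutation([[1, 2]], size=5).order())     # a transposition: 2
print(Permutation([[1, 3, 4], [2, 5]]).order())  # 3-cycle * 2-cycle: 6
```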
Show that $\sum\limits_{i=0}^{n/2} {n-i\choose i}2^i = \frac13(2^{n+1}+(-1)^n)$ | One way is by considering generating functions.
Denoting $\displaystyle a_n = \sum_{k=0}^{n} 2^k{n-k\choose k}$
(where we use the convention $\displaystyle \binom{n}{m} = 0$ for $m > n$)
$\displaystyle \begin{align}
\sum_{n=0}^\infty a_nx^n
&=\sum_{n=0}^\infty x^n\sum_{k=0}^n2^k\binom{n-k}{k} =\sum_{k=0}^\infty\sum_{n=k}^\infty2^kx^n\binom{n-k}{k}=\sum_{k=0}^\infty\sum_{n=0}^\infty2^kx^{n+k}\binom{n}{k}\\
&=\sum_{k=0}^\infty2^kx^k\frac{x^k}{(1-x)^{k+1}}=\frac{1}{1-x}\sum_{k=0}^\infty2^k\left(\frac{x^2}{1-x}\right)^k\\
&=\frac{1}{1-x}\frac{1}{1-\frac{2x^2}{1-x}}\\
&=\frac{1}{1-x-2x^2}\end{align}$
Using partial fractions to decompose:
$\displaystyle \frac{1}{1-x-2x^2} = \frac{1}{3}\left(\frac{2}{1-2x}+\frac{1}{1+x}\right) = \sum_{n=0}^{\infty} \frac{2^{n+1}+(-1)^n}{3}x^n$
Thus, $\displaystyle a_n = \frac{2^{n+1}+(-1)^n}{3}$ |
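A brute-force check of the closed form (a sketch; `math.comb(n, k)` returns $0$ for $k > n$, matching the convention above):

```python
from math import comb

for n in range(12):
    a_n = sum(2**k * comb(n - k, k) for k in range(n // 2 + 1))
    assert 3 * a_n == 2**(n + 1) + (-1)**n
print("a_n = (2^(n+1) + (-1)^n)/3 verified for n = 0..11")
```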
Recurrence relation $f(n)=5f(n/2)-6f(n/4) + n$ | You can only define $f(2^k)$. (Try to define $f(3)$. It is impossible.) Hence make the transformation
$$
n=2^k, \quad \text{and define}\quad g(k)=f(2^k).
$$
Then for $g$ you have that
$$
g(0)=2, \quad g(1)=1\quad \text{and}\quad g(k+2)=5g(k+1)-6g(k)+2^{k+2}.
$$
If the $2^{k+2}$ term were not there, then $g$ would have to be of the form $g(k)=c_12^k+c_23^k$. With this additional term the general solution of the recursive relation is of the form $g(k)=c_12^k+c_23^k+c_3k2^k$. Just find the values of the constants.
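Working out the constants (my own computation: substituting the particular term $c_3 k 2^k$ into the recurrence forces $c_3 = -2$, and the initial values then give $c_1 = c_2 = 1$), one can check the closed form against the recurrence:

```python
def g_closed(k):
    # g(k) = c1*2^k + c2*3^k + c3*k*2^k with c1 = 1, c2 = 1, c3 = -2.
    return 2**k + 3**k - 2*k*2**k

g = [2, 1]  # g(0), g(1)
for k in range(20):
    g.append(5*g[-1] - 6*g[-2] + 2**(k + 2))

assert all(g[k] == g_closed(k) for k in range(22))
print("closed form matches the recurrence")
```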
Is it possible to rearrange this for c? $\frac{\sin(\pi-a-b-c)}{\sin(a)}=\frac{\sin(c)}{\sin(b)}$ | Notice that $\sin(\pi-x)=\sin(x)$, which allows us to simplify a bit. We then have the sum of angles formula, which states that $\sin(x+y)=\sin(x)\cos(y)+\cos(x)\sin(y)$. This simplifies things down a bit:
$$\sin(\pi-a-b-c)=\sin(a+b+c)=\sin(a+b)\cos(c)+\cos(a+b)\sin(c)$$
Now multiply the original equation by $\sin(a)$ and divide by $\sin(c)$ to get
$$\sin(a+b)\cot(c)+\cos(a+b)=\frac{\sin(a)}{\sin(b)}$$
Subtract $\cos(a+b)$ from both sides:
$$\sin(a+b)\cot(c)=\frac{\sin(a)}{\sin(b)}-\cos(a+b)$$
And divide by $\sin(a+b)$.
$$\cot(c)=\frac1{\sin(a+b)}\left(\frac{\sin(a)}{\sin(b)}-\cos(a+b)\right)$$
And then take inverse cotangent to find $c$. |
Prove the sum $\sum_{i=1}^n a_{\sigma_i}$ computed over all permutations σ of the set {1, 2, …, n} remains unchanged. | The way I'd go about proving this is as follows:
Let $(1, 2, \ldots, n)$ be the order we'd normally find $a_i$ in and $(\sigma_1, \sigma_2, \ldots, \sigma_n)$ the order after our permutation has been applied. Then let $\sigma_j$ correspond with $1$. If we now exchange $\sigma_j$ and $\sigma_{j-1}$, we'll keep the same sum (addition is commutative). Therefore, we can move $\sigma_j$ to the first place in $j-1$ steps and get a new permutation which has the same sum. We can then repeat this process for all terms to "revert" the permutation made and end up with our original sum.
Therefore it must be that $$\sum_i a_i=\sum_i a_{\sigma_i}$$ |
why showing that $\mathcal{R} \cong \mathbb{C}$ as rings implies that $\mathcal{R}$ is a field? | The point of an isomorphism is that the two objects share all structural properties, and being a field is a structural property.
Also remember that an isomorphism between two rings is a map that is one-to-one and onto and preserves the operations. |
Number of maximum consecutive heads | For $k>n$ the first probability is obviously $0$ and for $k\le n$,
$$
\mathsf{P}(l_n\ge k)=\mathsf{P}(X_n=1,X_{n-1}=1,\ldots, X_{n-k+1}=1)=2^{-k}
$$
by independence. As for the second probability, $L_n$ is the length of the longest run of heads in $n$ trials. Its distribution can be calculated recursively, that is
$$
\mathsf{P}(L_n\ge k)=1-2^{-n}S_n(k-1),
$$
where $S_n(j)$ is the number of sequences of length $n$ in which the longest run of heads does not exceed $j$ and is given by
$$
S_n(k)=\cases{\sum_{i=0}^k S_{n-i-1}(k), & $k<n$, \\ 2^n, & $k\ge n$.}
$$ |
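The recursion is straightforward to run (a sketch in Python; exact probabilities via `Fraction`):

```python
from functools import lru_cache
from fractions import Fraction

@lru_cache(maxsize=None)
def S(n, k):
    # Number of length-n sequences whose longest run of heads is <= k.
    if k >= n:
        return 2**n
    return sum(S(n - i - 1, k) for i in range(k + 1))

n = 10
for k in range(1, 5):
    print(k, Fraction(2**n - S(n, k - 1), 2**n))  # P(L_n >= k)
```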
Why are there several axiom systems for propositional logic? | There are indeed many presentations of classical propositional logic1.
If complexity was measured purely in terms of number of axioms, then yes, Mendelson's axiom system would be more "economical". We could be even more "economical" with Meredith's system: $$((((A\to B)\to(\neg C\to\neg D))\to C)\to E)\to((E\to A)\to(D\to A))$$ Just rolls right off the tongue. While minimality is often a driver, people do still need to discover more minimal systems. (Indeed, the above is not the most minimal system if we include the complexity of the axiom and not just the number.) There's also the question of accepting the axioms. Ideally, we want the axioms to be "self-evident" or at least easy to understand intuitively. Maybe it's just me, but Meredith's axiom does not leap out at me as something that should obviously be true, let alone sufficient to prove all other classical tautologies.
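For what it's worth, a brute-force check that Meredith's axiom is at least a classical tautology (a quick sketch):

```python
from itertools import product

def imp(p, q):  # material implication
    return (not p) or q

# ((((A->B)->(~C->~D))->C)->E)->((E->A)->(D->A))
print(all(
    imp(imp(imp(imp(imp(A, B), imp(not C, not D)), C), E),
        imp(imp(E, A), imp(D, A)))
    for A, B, C, D, E in product([False, True], repeat=5)
))  # True
```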
Minimality is, however, not the "whole point" of axiomatic systems. You mention another reason: sometimes you actually do want to prove things at which point it is better to have a richer and more intuitive axiomatic system. You may argue that we can just derive any theorems we want to use from some minimal basis, and then forget about that basis. This is true but unnecessary complexity if we have no other reason for considering this minimal basis. When we compare different styles of proof system (e.g. Hilbert v. Sequent Calculus v. Natural Deduction), translations between them (especially into Hilbert-style systems) can involve a lot of mechanical complexity. That complexity can sometimes be significantly reduced by a careful choice of axioms.
For the Laws of the Excluded Middle (LEM) and Non-contradiction, the first thing you'd need to do is define the connectives. You can't prove $\neg(P\land\neg P)$ in a system that doesn't have $\land$. Given $\neg$ and $\to$ as primitives, standard definitions of $\land$ and $\lor$ are $P\land Q:\equiv \neg(P\to\neg Q)$ and $P\lor Q:\equiv\neg P\to Q$. With these definitions (or others), then yes, the LEM and Non-contradiction can both be proven in the systems you mention and any other proof system for classical propositional logic. Your concern here is an illustration that we often care about which axioms we have and not just that they're short and effective.
This also leads to another reason why we might want a certain presentation. We may want that presentation to line up with other, related logics. As you're starting to realize, it is ill-defined to say something like "intuitionistic propositional logic (IPL) is classical propositional logic (CPL) minus the LEM". When people say things like this, they are being sloppy. However, "CPL is IPL plus LEM" is unambiguous. Any presentation of IPL to which we add LEM is a presentation of CPL. For that presentation, it makes sense to talk about removing LEM. It doesn't make sense to talk about removing an axiom without a presentation of axioms that contains that axiom. It is also quite possible to have a presentation of CPL containing LEM that becomes much weaker than IPL when LEM is removed. In fact, you'd expect this because a presentation of IPL with LEM added is likely to be redundant because things which are intuitionistically distinct become identifiable when LEM is added. The story is the same for paraconsistent logics.
While it isn't as much of a driver for Hilbert-style proof systems, for natural deduction and sequent calculi the concerns of structural proof theory often push for more axioms (or rules of inference, rather). For example, many axioms in a Hilbert-style proof system mix together connectives, e.g. as Meredith's does above. This means we can't understand a connective on its own terms, but only by how it interacts with other connectives. A driving force in typical structural presentations is to characterize connectives by rules that don't reference any other connectives. This better reveals the "true" nature of the connectives, and it makes the system more modular. It becomes meaningful to talk about adding or removing a single connective allowing you to build a logic à la carte. In fact, structural proof theory motivates many constraints on what a proof system should look like such as logical harmony.
1 And that link only considers Hilbert-style proof systems. |
A "simple, detailed" reference for Topological groups, which is sufficient for number-theoretic reasons. | If your final goal is just studying infinite Galois theory, I believe Milne's book on Galois theory is fine(as fas as I could remember, he treats Krull topology in this book). For other purposes such as studying Lie groups or improving your knowledge in commutative algebra, these following books are highly recommended:
J.Rotman, An Introduction to Algebraic Topology: you could take a glance at some last pages of chapter $3$, The Fundamental Group, where the author defines the notions of topological groups and the so-called $H$-spaces. Roughly, $H$-spaces are topological spaces behaving like topological groups. Next, you may wish to have more insightful treatments of these objects; let us turn to chapter $11$, which is also very readable, where Rotman introduces the categorical approach to group objects, $H$-groups, $H'$-groups in a general category. In this point of view, a topological group is simply a group object in $\mathbf{Top}_{*}$ or $\mathbf{hTop}_{*}$.
M.Atiyah, An Introduction to Commutative Algebra, chapter $10$. In this book, Atiyah gives a short introduction about the motivation of considering topological groups. The method of taking completion arises from purely number theoreic aspect ($p$-adic integers,...) and expressed to be extremely useful in commutative algebra.
Loring Tu, An Introduction to Manifolds, chapter $4$ about Lie groups and Lie algebras. I strongly believe Tu presents the whole book in the clearest way, along with lots of explicit, detailed examples, including $\mathrm{GL}_n(\mathbb{R})$. It is true that every mathematician should know at least a little theory of Lie groups, as they naturally occur everywhere in mathematics, even in theoretical physics. With this idea in mind, I strongly believe Tu's book is undoubtedly a good choice.
Topological groups are very well-known objects; especially if you are an algebraically inclined student, you should know them like the back of your hand. For instance, consider $\mathrm{GL}_n(\mathbb{R})$: this is an example of the so-called Lie groups, which means that the group operations are all $C^{\infty}$. This is even stronger than asserting it to be just a topological group, which only requires the group operations to be continuous. A Lie group (or a topological group) has the advantage of being an infinitesimal group, that is, all other elements are, in a sense, infinitesimally close to the identity element, so all your problems with the group are transferred to an open neighborhood of the identity element. Back to $\mathrm{GL}_n(\mathbb{R})$: as a set, it is just
$$\mathrm{GL}_n(\mathbb{R}) = \left \{A = [a_{ij}]\in \mathbb{R}^{n^2} \mid \mathrm{det}(A) \neq 0 \right \}$$
Since the $\mathrm{det}$ map is continuous, $\mathrm{GL}_n$ is an open set, therefore, a submanifold of $\mathbb{R}^{n^2}$. The product of two matrices $A=(a_{ij}),B = (b_{jk})$ reads
$$(AB)_{ik} = \sum_{j=1}^n a_{ij}b_{jk}$$
which is obviously $C^{\infty}$ as this is just a polynomial in the entries of $A$ and $B$. Similarly, the entries of the inverse $A^{-1}$ can be expressed in terms of those of $A$; consequently, the inverse map is also infinitely differentiable. |
What are the relations between complex numbers and visual representation on $\mathbb{R^2}$? | If the polynomial has real coefficients, then negative discriminant means two complex roots that are conjugates of each other (reflections across the real axis).
If the polynomial has complex coefficients, then it is meaningless to compare the discriminant with zero. |
sub-Markovianity and extensions of $L^2$ semigroup contractions to $L^p$ | By assumption, $(T_t)_{t \geq 0}$ is a contraction on $L^2(X,m)$. If we can show that $(T_t)_{t \geq 0}$ extends to a contraction on $L^{\infty}(X,m)$, then it follows from the Riesz-Thorin interpolation theorem that $(T_t)_{t \geq 0}$ extends to a contraction on $L^p(X,m)$ for any $p \geq 2$. The following proof assumes that $(X,m)$ is a $\sigma$-finite measure space.
First, we show that $T_t$ is positive, i.e. $$f \in L^2(X,m), f \geq 0 \implies T_t f \geq 0. \tag{1}$$ If $f \in L^2(X,m) \cap L^{\infty}(X,m)$, this follows directly from the sub-Markov property. For general $f \in L^2(X,m)$, $f \geq 0$, we set $f_n := f \wedge n$. Then $f_n \in L^2(X,m) \cap L^{\infty}(X,m)$ and $f_n \uparrow f$. Note that $T_t f_n$ is increasing in $n$. Moreover, as $$\|T_t f_n-T_t f\|_{L^2} \leq \|f_n-f\|_{L^2} \to 0$$ we can choose a subsequence $(T_t f_{n(k)})_k$ such that $T_t f_{n(k)} \to T_t f$ almost everywhere as $k \to \infty$. By the monotonicity, $T_t f_n \uparrow T_t f$ almost everywhere. In particular, $$T_t f = \lim_{n \to \infty} T_t f_n \geq 0$$ as $T_t f_n \geq 0$.
Note that $(1)$ implies
$$f,g \in L^2(X,m), f \leq g \implies T_t f \leq T_t g, \tag{2}$$
i.e. $T_t$ is monotone.
Now let $f \in L^{\infty}(X,m)$, $f \geq 0$. Choose a sequence $(f_n)_{n \in \mathbb{N}} \subseteq L^2(X,m) \cap L^{\infty}(X,m)$ such that $f_n \uparrow f$ (e.g. $f_n := 1_{A_n} f$ where $m(A_n) <\infty$ and $A_n \uparrow X$). By $(2)$, $T_t f_n$ is increasing (in $n$). Therefore, $$T_t f := \sup_{n \in \mathbb{N}} T_t f_n$$ exists and is well-defined. (Well-defined means that $T_t f$ does not depend on the approximating sequence $(f_n)_{n \in \mathbb{N}}$; this follows again from the monotonicity.) In particular, by the sub-Markov property,
$$|T_t f| = T_t f \leq \sup_{n \in \mathbb{N}} T_t f_n \leq \sup_{n \in \mathbb{N}} f_n = f,$$
i.e. $T_t$ is a contraction on $L^{\infty}_+(X,m)$. For general $f \in L^{\infty}(X,m)$, we extend $T_t$ by linearity $$T_t f := T_t(f^+)- T_t(f^-)$$ where $f^+$ ($f^-$) denotes the positive (negative) part of $f$. Then
$$|T_t f| \leq \max\{T_t(f^+),T_t(f^-)\} \leq \max\{f^+,f^-\} \leq |f|$$
as $T_t(f^+) \geq 0$ and $T_t(f^-) \geq 0$. This shows that the so-defined operator is an $L^{\infty}(X,m)$-contraction. |
Example of homeomorphism of $S^1$ which behaves badly relatively to the Lebesgue measure | Such maps abound and, moreover, they appear very naturally. Recall that a map of two measure spaces is said to have Luzin's Property N if it sends each set of zero measure to a set of zero measure. In your setting (of homeomorphisms $f: S^1\to S^1$) Luzin's Property N is equivalent to absolute continuity of $f^{-1}$. Examples of homeomorphisms $S^1\to S^1$ which are not absolutely continuous appear naturally in the Teichmuller Theory: Suppose that $S_1, S_2$ are two compact hyperbolic surfaces of the same genus, $S_i=H^2/\Gamma_i$, where $\Gamma_i< PSL(2,R)$ is a Fuchsian subgroup. Then there exists a quasi-symmetric homeomorphisms $f: S^1\to S^1$ equivariant with respect to the isomorphism $\Gamma_1\to \Gamma_2$ (induced by a homeomorphism $S_1\to S_2$).
Theorem. $f$ is an element of $PSL(2,R)$ if and only if it is absolutely continuous.
See this paper by Agard: "Remarks on the boundary mapping for a Fuchsian group". In fact, Agard proves much more: If $f$ has nonzero derivative at one point, then $f\in PSL(2,R)$. |
Horizontal Asymptote & Range | No. A simple example would be $f(x) = 1/x$ for $x > 0$ and $f(x) = 1+x$ for $x \le 0$.
Then $y = 0$ is a horizontal asymptote, but $0$ is in the range. |
Showing $||L|| = \sup_{x,\, y\, \in H,\, x,\, y\, \neq 0} \frac{|\langle Ax,y\rangle|}{||x||\cdot ||y||}.$ | Note that $$|\langle Ax, y\rangle |\le \|Ax\|\cdot \|y\|.$$ So,
$$\frac{|\langle Ax, y\rangle |}{\|x\|\cdot \|y\|}\le \frac{\|Ax\|}{\|x\|}.$$
Thus
$$||L|| = \sup_{x,y \in H, x,y \neq 0} \frac{|\langle Ax, y\rangle|}{||x||\cdot ||y||}\le \sup_{x\in H, x \neq 0}\frac{\|Ax\|}{\|x\|}. $$
On the other hand, taking $y = Ax$ (for $Ax \neq 0$) gives the reverse inequality, hence equality. |
Summation of Cosine Series | $$\sum_{k=0}^{n}\cos\left(a+kd\right) = \dfrac{\sin\frac{(n+1)d}{2}}{\sin \frac{d}{2}} \cdot \cos\left(a + \dfrac{nd}{2}\right)$$
That will work for the summation from $0$ to $n$.
A very basic proof, if you want one: substitute $n = t-1$ and then use the summation formula for cosine and sine up to $n-1$ terms. |
Question about $T_n = n + (n-1)(n-2)(n-3)(n-4)$ | Your reasoning is correct. You can generalize it to say that
$$T_n = (an + b) + (n-1)(n-2)\cdots(n-k)$$
is an arithmetic progression for $1 \le n \le k$, for any integer $k \ge 1$. This is because, similarly, the product evaluates to $0$ for these values of $n$, to leave
$$T_n = an + b$$
This remainder essentially defines an arithmetic progression with common difference $a$. |
Intuition for why a convex set with empty interior lies in an affine set | A convex set contains the line segment between any two points therein. If it is not contained in a proper affine subspace, it contains $n+1$ affinely independent points, hence an $n$-simplex, which has nonempty interior. |
Compact preimage of a point by C¹ function | HINT
Consider a closed ball containing the preimage of the point. Assume that neither of the other two sets is contained in this ball. Take a point outside where the value is > c , another where the value is < c, and join them by a path not intersecting the ball. |
Is this field extension finite? | This is a well-known consequence of a result known as Zariski's lemma (Wikipedia link).
The proof from Wikipedia is reproduced below: |
How to prove an event is impossible to happen? | We usually define our probability problem as a triplet $\Omega$, $\Sigma$, $\mu$ where
$\Omega$ is the sample space
$\Sigma$ is the $\sigma$-algebra of our experiments, it contains all the possible events.
$\mu : \Sigma \rightarrow \mathbb{R}^+$ is the probability measure that maps an event to its probability.
The probability of an event $A\in \Sigma$ is $\mu(A)$, but this event may happen with probability zero. What you want to prove is that the set $A\notin \Sigma$. You must be careful when you define $\Sigma$ to be minimal in some sense. |
How to get the Max Likelihood Estimators for $\theta_1$ & $\theta_2$ | First of all you have to evaluate whether $P(\text{Daisy})+P(\text{Rose})+P(\text{Sunflower})=1$. Thus the equation which you have to check is $$\theta_1+(1-\theta_1)\theta_2+(1-\theta_1)(1-\theta_2)=1$$
Is this true if $\theta_1,\theta_2 \in \mathbb R$ ?
If the equation above holds then indeed $X$ is a multinomial distributed random variable with $p_1=\theta_1, p_2=(1-\theta_1)\theta_2, p_3=(1-\theta_1)(1-\theta_2)$ and the pmf
$$f_{X}(x_1,x_2,x_3) = \displaystyle {n! \over x_1!\cdot x_2! \cdot x_3!} \cdot p_1^{x_1}\cdot p_2^{x_2}\cdot p_3^{x_3},$$
$\text{when } \sum\limits_{i=1}^3 x_i=n \ \text{and} \ 0<p_i<1 \ \forall \ i\in \ \{1,2,3\}$ |
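Assuming the identity does hold (the left side indeed simplifies to $1$ for all $\theta_1,\theta_2$), the maximizers can then be found symbolically (a sketch with SymPy; the output should be $\hat\theta_1 = x_1/(x_1+x_2+x_3)$ and $\hat\theta_2 = x_2/(x_2+x_3)$):

```python
import sympy as sp

t1, t2, x1, x2, x3 = sp.symbols('theta1 theta2 x1 x2 x3', positive=True)

# Log-likelihood of the multinomial, dropping the constant n!/(x1!x2!x3!).
logL = (x1*sp.log(t1) + x2*sp.log((1 - t1)*t2)
        + x3*sp.log((1 - t1)*(1 - t2)))

print(sp.solve([sp.diff(logL, t1), sp.diff(logL, t2)], [t1, t2], dict=True))
# [{theta1: x1/(x1 + x2 + x3), theta2: x2/(x2 + x3)}]
```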
What is the difference between a sequence of functions $(f_n)$ and a sequence of functions $f_n(x)$? | A sequence is any map whose domain is the natural numbers, that is, it is a function $x\colon \mathbb N\to \text{Somewhere}$. The name of the sequence is $x$ and the image of each element $n\in \mathbb N$ is $x(n)$, but often abbreviated as $x_n$. It is common to denote $x$ by $(x_n)_{n\in \mathbb N}$.
In this case you have $(f_n)_{n\in \mathbb N}$, where presumably $f_n$ are functions whose domain and image are subsets of $\mathbb R$. If $x\in \mathbb R$, the notation $(f_n(x))_{n\in \mathbb N}$ is not a sequence of functions, it's a regular sequence where $x$ is acting out as a parameter. The correct notation is $(f_n)_{n\in \mathbb N}$.
It should be noted that the notation $(f_n)_{n\in \mathbb N}$ yields some ambiguity because $f$ is denoting two different things here. One of them is the sequence whose image of an element $n\in \mathbb N$ is determined by $f(n)=f_n$, it is a sequence. The other one is the function $x\mapsto \lim \limits_{n\to \infty}(f_n(x))$, the pointwise convergence function. In this context the first meaning of $f$ given is usually abandoned in favor of the latter.
Could someone help me with I suppose an intuitive explanation of the difference?
Intuitively, for some people, there is no difference. The authors mean the same with $(f_n(x))_{n\in \mathbb N}$ as they do with $(f_n)_{n\in \mathbb N}$. The use of the (actually inaccurate) $(f_n(x))_{n\in \mathbb N}$ is to remind the reader that the $f_n$ are 'functions of $x$', i.e. functions of one real variable. |
Irreducibility of $x^5 - 6x^3 +2x^2 - 4x +5$ in $\mathbb Q[x]$ | COMMENT. There is no linear factor. If $f(x)=x^5 - 6x^3 +2x^2 - 4x +5$ and $f$ is reducible, then $f(x)=g(x)h(x)$ with, say, the degrees of $g$ and $h$ respectively equal to $3$ and $2$. It follows that if $f(n)$ is prime then $g(n)$ or $h(n)$ is equal to $\pm1$. We can exhibit nine prime values of $f(n)$:
$$f(0)=5\\f(1)=-2\\f(-2)=37\\f(2)=-11\\f(-4)=-587\\f(4)=661\\
f(-6)=-6379\\f(-8)=-29531\\f(12)=238709\\$$ What else? |
The left creation operators on the Fock space | Well, $S_i(S_j1)=S_i(e_j)=e_i\otimes e_j$ and
$S_j(S_i1)=e_j\otimes e_i$. In $H\otimes H$, $e_i\otimes e_j\ne e_j
\otimes e_i$ unless $i= j$, so if $i\ne j$ then $S_iS_j\ne S_j S_i$. |
Issue with relatively simple integral | Setting $x=a\tan \theta$, we have $$dx=\frac{ad\theta}{\cos^2\theta},(a^2+x^2)^{3/2}=\left(\frac{a^2}{\cos^2\theta}\right)^{3/2}=\frac{a^3}{\cos^3\theta}.$$
So,
$$\int_{-\infty}^{\infty}\frac{dx}{(a^2+x^2)^{3/2}}=\int_{-\pi/2}^{\pi/2}\frac{(ad\theta)/(\cos^2\theta)}{a^3/(\cos^3\theta)}=\int_{-\pi/2}^{\pi/2}\frac{\cos\theta}{a^2}d\theta.$$
I'm sure that you can take it from here. |
Showing continuity of a function that depends on another continuous function. | The proof, in general, is correct. It is generally visible from it that you understand why $M$ is continuous and are able to prove it.
There are only two small issues I could find. The first is in your analysis of the first case, where you needlesly complicated your proof, making it confuzing. The $\delta_1$ you found in the second paragraph of the first case actually tels you that every value of $x$, so long as $|x-x_0|<\delta_1$, shares the property that $f(x)<M(x_0)$. There is nothing limiting you to the right side of $x_0$, therefore, this $\delta_1$ is all you need to finish the first case of your proof.
The second issue is an inconsistency in writing when you wrote the last paragraph. There, you wrote that you "define" $M(x):=f(s)$ for som $f\in [x_0, x]$. Since $M$ is already defined, you cannot define it again, so I believe what you tried to say here is that you know that $M(x)$ is equal to $f(s)$ for some $s$. It would be nice to show why this is so. |
Functions on Congruence Classes | If $[a]_{mn}=[b]_{mn}$ then $mn\ |\ b-a$, so in particular $m \mid b-a$ and $n \mid b-a$. Thus the map $f$ is well-defined. Now, if $m=6$ and $n=10$ then $f(30)=0$ but $30\not\equiv 0 \pmod{60}$, hence $f$ is not injective. On the other hand, since both groups have the same size, a function is injective if and only if it is surjective. Therefore, $f$ is not surjective either. Using the same idea we can handle the general case. |
summation of series by telescoping series method (feedback needed) | What you have done is correct. Now it is straightforward that $$\lim_{n \to \infty} -\ln 2 + \ln(n+2)= -\ln 2 + \lim_{n \to \infty}\ln(n+2)=+\infty$$ since $\ln$ is a monotone increasing function.
If you need to prove that $\ln n$ is unbounded you need the following: $$\ln' n=\dfrac{1}{n}>0$$ so that $\ln$ is monotone increasing. Moreover $\ln 2>\ln 1=0$. Now, take $M\in \mathbb R$, arbitrarily large. Then there exists $m \in \mathbb N$ such that $$M<m \ln 2$$ (since $\ln 2>0$) or equivalently $M< \ln 2^m$. Therefore for any $n>2^m$ you have that $$M<\ln 2^m <\ln n$$ from which you can conclude that $\ln n$ is unbounded since $M$ was arbitrarily large. |
Bessel process hits zero? | I think I can answer my own question so here's a shot.
First, recall that the Brownian motion $B$ almost surely hits every positive number. In fact, $\limsup_{t \to \infty} t^{-\frac{1}{2}} B_{t} = \infty$ almost surely.
Now $X$ can be written as $X_{t} = \int_{0}^{t} \frac{a}{X_{s}} \, ds + B_{t}$ and if $t < T$, then this yields $X_{t} \geq B_{t}$.
Therefore,
$$\mathbb{P} \left\{T = \infty, \, \, \limsup_{t \to \infty} X_{t} = \infty \right\} = \mathbb{P}\{T = \infty\}.$$
However, I already argued that $\lim_{t \to \infty} X_{t} = 0$ almost surely. Consequently, the left-hand side is zero, which implies $\mathbb{P}\{T < \infty\} = 1$. |
How to determine the expected value of the $f(x,y)$? | For $y=1$ it is the average of $0,1,...,x-1$, giving $(x-1)/2$. This is obvious, but it is the first case, and it can be written as
$$(1/x)\sum_{x_2=0}^{x-1}x_2.$$
For $y=2$ it is
$$(1/x)\sum_{x_2=1}^{x-1}(1/x_2)\sum_{x_3=1}^{x_2-1}x_3,$$
which comes to
$(x-1)(x-2)/(4x).$
For even as small as $y=3$, the similar iterated sum didn't evaluate to anything nice when put into Maple; the result involved the $\Psi$ function and the constant $\gamma.$
But the summation (not closed form) version for $y=3$ is obtained by stringing along one more variable:
$$(1/x)\sum (1/x_2) \sum (1/x_3) \sum x_4,$$ where each sum goes from 1 to one less than the next outer variable.
Given the intricacy of maple's closed form for the $y=3$ case, I would be surprised by a closed form for the case of general $y$. |
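The iterated sums are easy to evaluate exactly (my own sketch; `E(x, y)` mirrors the sums above using exact rationals):

```python
from functools import lru_cache
from fractions import Fraction

@lru_cache(maxsize=None)
def E(x, y):
    # y = 1: average of 0..x-1; otherwise average E(x2, y-1) over x2 < x.
    if y == 1:
        return Fraction(sum(range(x)), x)
    return Fraction(sum(E(x2, y - 1) for x2 in range(1, x)), x)

x = 10
print(E(x, 1), Fraction(x - 1, 2))              # both 9/2
print(E(x, 2), Fraction((x - 1)*(x - 2), 4*x))  # both 9/5
print(E(x, 3))                                  # no nice closed form
```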
Let $x > 0$. Prove that the value of $\int^{x}_{0} \frac{1}{1+t^2}dt + \int^{\frac{1}{x}}_{0} \frac{1}{1+t^2}dt$ does not depend on $x$. | In the second integral, let $t=1/u$ to get
$$\int_0^{1/x}\frac1{1+t^2}\ dt=\int_x^\infty\frac1{1+(1/u)^2}\frac{du}{u^2}=\int_x^\infty\frac1{1+u^2}\ du$$
Add it to the first integral to get
$$I=\int_0^\infty\frac1{1+t^2}\ dt=\frac\pi2$$
which does not depend on $x$. |
Orthogonal Projection onto a Variation of the Unit Simplex | This is basically Projection onto the Simplex with some modifications.
The problem is given by:
$$
\begin{alignat*}{3}
\arg \min_{x} & \quad & \frac{1}{2} \left\| x - y \right\|_{2}^{2} \\
\text{subject to} & \quad & 0 \leq {x}_{i} \leq \alpha \\
& \quad & \boldsymbol{1}^{T} x = 1
\end{alignat*}
$$
The problem is valid for $ \alpha \geq \frac{1}{n} $ otherwise the constraint $ \boldsymbol{1}^{T} x = 1 $ isn't feasible.
For $ \alpha \geq 1 $ the problem matches the Projection onto the Simplex, as the upper bound cannot be an active constraint (well, it is for $\alpha = 1$, but then it is equivalent to the equality constraint together with non-negativity).
The Lagrangian in that case is given by:
$$ \begin{align}
L \left( x, \mu \right) & = \frac{1}{2} {\left\| x - y \right\|}^{2} + \mu \left( \boldsymbol{1}^{T} x - 1 \right)
\end{align} $$
The trick is to leave the non-negativity constraint implicit.
Hence the Dual Function is given by:
$$ \begin{align}
g \left( \mu \right) & = \inf_{0 \leq {x}_{i} \leq \alpha} L \left( x, \mu \right) \\
& = \inf_{0 \leq {x}_{i} \leq \alpha} \sum_{i = 1}^{n} \left( \frac{1}{2} { \left( {x}_{i} - {y}_{i} \right) }^{2} + \mu {x}_{i} \right) - \mu && \text{Component wise form}
\end{align} $$
Taking advantage of the component-wise form, the solution is given by:
$$ \begin{align}
{x}_{i}^{\ast} = { \left( {y}_{i} - \mu \right) }_{0 \leq \cdot \leq \alpha}
\end{align} $$
Where the solution includes the inequality constraints by projecting onto the box $ \mathcal{B} = \left\{ x \mid 0 \leq {x}_{i} \leq \alpha \right\} $.
The solution is given by finding the $ \mu $ which satisfies the constraint (pay attention: since the above was an equality constraint, $ \mu $ can take any value and is not limited to non-negative values as $ \lambda $ would be).
The objective function (From the KKT) is given by:
$$ \begin{align}
0 = h \left( \mu \right) = \sum_{i = 1}^{n} {x}_{i}^{\ast} - 1 & = \sum_{i = 1}^{n} { \left( {y}_{i} - \mu \right) }_{0 \leq \cdot \leq \alpha} - 1
\end{align} $$
The above is a piecewise-linear function of $ \mu $.
Since the function is continuous yet not differentiable (due to its piecewise nature), we must use derivative-free methods for root finding. One could use the Bisection Method, for instance.
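Here is a small self-contained sketch of the same scheme in Python (my own translation, not the author's MATLAB code; names are my own, the bracketing bounds for $\mu$ use that $h$ is continuous and nonincreasing, and feasibility requires $\alpha \geq 1/n$):

```python
import numpy as np

def project_capped_simplex(y, alpha, iters=100):
    # Find mu with h(mu) = sum(clip(y - mu, 0, alpha)) - 1 = 0 by bisection,
    # then x* = clip(y - mu, 0, alpha). Assumes alpha >= 1/len(y).
    lo = np.min(y) - alpha          # h(lo) = n*alpha - 1 >= 0
    hi = np.max(y)                  # h(hi) = -1 < 0
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.clip(y - mu, 0.0, alpha).sum() > 1.0:
            lo = mu
        else:
            hi = mu
    return np.clip(y - 0.5 * (lo + hi), 0.0, alpha)

x = project_capped_simplex(np.array([0.9, 0.5, -0.2, 0.1]), alpha=0.6)
print(x, x.sum())  # [0.6, 0.4, 0, 0], sum = 1
```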
MATLAB Code
I wrote a MATLAB code which implements the method with Bisection Root Finding. I verified my implementation vs. CVX. The MATLAB code is accessible in my StackExchange Mathematics Q3972913 GitHub Repository. |
Not understanding how to do this logic question | First think about it informally. If $C$ is true, then $A$ is true, and if $C$ is false, then $B$ is true; since $C$ must be true or false, $A$ or $B$ must be true. What we’ve used here besides the given hypotheses is the fact that $C\lor\neg C$ is a tautology. The problem now is to convert this into a more formal argument. The details will depend on just what formalism you’re using. I’ll use $\equiv$ to indicate that two propositional expressions are logically equivalent, and $\Rightarrow$ to indicate that one entails another. I’ll start with the two givens, which are essentially that $C\to A$ and $\neg C\to B$ are true, and the tautology $C\lor\neg C$, and use fairly standard manipulations to arrive $A\lor B$ in a way that more or less mimics the informal argument.
$$\begin{align*}
(C\lor\neg C)\land(C\to A)\land(\neg C\to B)&\equiv\Big(\big(C\land(C\to A)\big)\lor\big(\neg C\land(C\to A)\big)\Big)\\
&\qquad\land(\neg C\to B)\\
&\Rightarrow\Big(A\lor\big(\neg C\land(C\to A)\big)\Big)\land(\neg C\to B)\\
&\equiv\big(A\land(\neg C\to B)\big)\lor\Big(\big(\neg C\land(C\to A)\big)\land(\neg C\to B)\Big)\\
&\Rightarrow A\lor\Big(\big(\neg C\land(\neg C\to B)\big)\land(C\to A)\Big)\\
&\Rightarrow A\lor\big(B\land(C\to A)\big)\\
&\equiv(A\lor B)\land\big(A\lor(C\to A)\big)\\
&\Rightarrow A\lor B
\end{align*}$$ |
non-constant coefficient Differential Equation with Dirac delta (unsure how to properly write the solution) | Thanks to @quarague for helping get to this answer!
So the expressions:
$$-\int_0^M\delta_{g^{-1}(\zeta)}(y) * \frac{e^{\int_y^M\frac{g'(r)}{r-g(r)}dr}}{y-g(y)}dy = \frac{-e^{\int_{g^{-1}(\zeta)}^M\frac{g'(r)}{r-g(r)}dr}}{g^{-1}(\zeta)-\zeta}$$
are only valid if the Dirac measure's fixed element ($g^{-1}(\zeta)$ in our case) is in the set the integral is taken over, in this case $[0,M]$; otherwise the integral will be $0$. So we need $g^{-1}(\zeta) \in [0,M]$.
So basically the expression can be written:
$$-\int_0^M\delta_{g^{-1}(\zeta)}(y) * \frac{e^{\int_y^M\frac{g'(r)}{r-g(r)}dr}}{y-g(y)}dy = 1_{0 \leq g^{-1}(\zeta) \leq M}(M)\frac{-e^{\int_{g^{-1}(\zeta)}^M\frac{g'(r)}{r-g(r)}dr}}{g^{-1}(\zeta)-\zeta} $$ and since we use the condition $g^{-1}(\zeta) > 0$, we can rewrite $1_{0 \leq g^{-1}(\zeta) \leq M}(M)$ as $1_{g^{-1}(\zeta) \leq M}(M)$, and therefore we have as final answer:
$$a(M) = a(0)*e^{\int_0^M\frac{g'(y)}{y-g(y)}dy} -\frac{e^{\int_{g^{-1}(\zeta)}^M\frac{g'(r)}{r-g(r)}dr}}{g^{-1}(\zeta)-\zeta}*1_{M \geq g^{-1}(\zeta)}(M)$$ |
Minimal Polynomial of Algebraic Number | By the division algorithm, there are $v(x), r(x)\in\mathbb{Q}[x]$ such that $g(x)=m(x)v(x)+r(x)$ with either $r(x)=0$ or $0\le\deg r<\deg m$. Since $m(\xi)=g(\xi)=0$, we have $$r(\xi)=g(\xi)-m(\xi)v(\xi)=0.$$ Note that if $r(x)\neq 0$, we can find some $\ell\in\mathbb{Q}$ such that $\frac{1}{\ell}\cdot r(x)\in\mathbb{Q}[x]$ is monic with $\deg\frac{1}{\ell}r<\deg m$ which has $\xi$ as a root, contradiction (since $m$ is the minimal polynomial of $\xi$). Thus, $g(x)=m(x)v(x)\in\mathbb{Q}[x]$. Thus, by Gauss' Lemma, there exist $m_0(x), v_0(x)\in\mathbb{Z}[x]$ such that $g(x)=m_0(x)v_0(x)$ and $m_0(x)=\alpha m(x)$ for some $\alpha\in\mathbb{Q}_{>0}$. WLOG, let the leading coefficient of $m_0(x)$ be positive (otherwise we can just consider $g(x)=(-m_0(x))(-v_0(x))$). Since $g$ is monic, $m_0$ is as well, so comparing coefficients of $m_0(x)$ and $\alpha m(x)$ yields $\alpha=1\implies m=m_0\in\mathbb{Z}[x]$, as desired. |
An optimal control problem with fixed final time and free final state | You do not necessarily need knowledge of optimal control. The question asks you to drive the system as close as possible to the origin in one time unit.
Now, if we look at the differential equations we see that they are decoupled and we can solve them as follows
$$\dot{x}_1=-u(t)x_1\implies x_1(t) = x_1(t=0)\exp\left[-\int_{\tau=0}^{t}u(\tau)~d\tau \right]$$
$$\dot{x}_2=u(t)x_2\implies x_2(t)=x_2(t=0)\exp\left[\int_{\tau=0}^{t}u(\tau)~d\tau \right]$$
Now, we set time to $t=1$ to obtain the position in which we terminate after one time unit.
$$x_1(1) = x_1(t=0)\exp\left[-\int_{\tau=0}^{1}u(\tau)~d\tau \right]$$
$$x_2(1)=x_2(t=0)\exp\left[\int_{\tau=0}^{1}u(\tau)~d\tau \right]$$
Instead of minimizing the distance to the origin we will minimize the square of the distance to the origin.
$$D^2(u)=x_1^2(1)+x_2^2(1)=x^2_1(0)\exp\left[-2\int_{\tau=0}^{1}u(\tau)~d\tau \right]+x^2_2(0)\exp\left[2\int_{\tau=0}^{1}u(\tau)~d\tau \right]$$
$$=\dfrac{x^2_1(0)+x^2_2(0)\left(\exp\left[2\int_{\tau=0}^{1}u(\tau)~d\tau \right]\right)^2}{\exp\left[2\int_{\tau=0}^{1}u(\tau)~d\tau \right]}$$
Now, introduce the new variable
$$\tilde{u}=\exp\left[2\int_{\tau=0}^{1}u(\tau)~d\tau \right]$$
and our goal will be to minimize
$$D^2(\tilde{u})=\dfrac{x^2_1(0)+x^2_2(0)\tilde{u}^2}{\tilde{u}}$$
by solving
$$\dfrac{d}{d\tilde{u}}D^2=0.$$
After solving this for $\tilde{u}>0$ (the solution has to be positive because the exponential function is always positive) you will have to solve
$$\tilde{u}=\exp\left[2\int_{\tau=0}^{1}u(\tau)~d\tau \right]$$
$$\implies 0.5\ln\tilde{u}=\int_{\tau=0}^{1}u(\tau)~d\tau $$
for $u(t)$. The solution of such an integral equation can be obtained by an ansatz function (e.g. constant function, linear function, ...) or by using the solution to the Fredholm integral equation. Note that a constant $u(t)=u_0$ is enough for this problem if we do not consider the input constraints.
An additional observation is that the ratio of the differential equations is given by
$$\dfrac{dx_2}{dx_1}=-\dfrac{x_2}{x_1}$$
$$\implies \dfrac{dx_2}{x_2}=-\dfrac{dx_1}{x_1}$$
$$\implies \ln x_2 - \ln x_2(0)=-\ln x_1+\ln x_1(0)$$
$$\implies x_2x_1 = x_2(0)x_1(0)$$
$$\implies x_2 = \dfrac{x_1(0)x_2(0)}{x_1}$$
These equations are hyperbolas ... What can you conclude from this? This is a constraint for the squared distance to the origin. We can reformulate the problem as a constrained optimization with Lagrange multipliers $$\text{minimize: } f(x_1,x_2)=x_1^2+x_2^2$$
$$\text{subject to: } x_1x_2 = x_1(0)x_2(0)$$
Or simply replace $x_2 = \dfrac{x_1(0)x_2(0)}{x_1}$ in $f(x_1,x_2)$ to obtain $f(x_1)$ and maximize this by standard school calculus. |
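The unconstrained minimization over $\tilde u$ can be done symbolically (a sketch with SymPy; positivity of the symbols selects the positive root):

```python
import sympy as sp

u, x10, x20 = sp.symbols('u x10 x20', positive=True)

D2 = (x10**2 + x20**2 * u**2) / u          # squared distance, as above
crit = sp.solve(sp.diff(D2, u), u)
print(crit)                                 # [x10/x20]
print(sp.simplify(D2.subs(u, crit[0])))     # minimal value: 2*x10*x20
```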
Question on symplectic geometry | This question was asked and answered on MathOverflow. I have replicated the accepted answer by user17945 below.
It just looks like a basic application of the chain rule to the immediately preceding equation. Maybe it's the notation that's confusing you; the previous equation has the form
$$
(\partial_2F)(x,y) + (\partial_1 G)(y,z) = 0.
$$
Considering the left hand side as a function of three independent variables $(x,y,z)$, and differentiating in the direction $(\delta x, \delta y, \delta z)$, gives
$$
\partial_1\partial_2 F\, \delta x + (\partial_2\partial_2F + \partial_1\partial_1G)\,\delta y + \partial_2\partial_1 G\,\delta z = 0,
$$
which is equation $(A.2)$. This gives a constraint that must be satisfied by the three variations $(\delta x, \delta y, \delta z)$. |
Why do I get infinity when I compute the Weingarten Map of the cone? | Your flaw seems to be in thinking that $$\left(\frac{\partial u}{\partial x}\right)^{-1} = \frac{\partial x}{\partial u}.$$
When dealing with partial derivatives you can't just "flip fractions" like this - instead the correct relationship is between the full Jacobian matrices. In a simplified $2\times 2$ case we have $$\frac{\partial(u,v)}{\partial(x,y)} = \left( \frac{\partial(x,y)}{\partial(u,v)} \right)^{-1}$$
where both sides are $2 \times 2$ matrices and the inverse is the matrix inverse. |
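A concrete illustration with polar coordinates (my own sketch): the full Jacobians are mutual inverses, while "flipping" a single partial derivative gives the wrong answer.

```python
import sympy as sp

r, t = sp.symbols('r t', positive=True)
x, y = r*sp.cos(t), r*sp.sin(t)

J = sp.Matrix([[sp.diff(x, r), sp.diff(x, t)],
               [sp.diff(y, r), sp.diff(y, t)]])   # d(x,y)/d(r,t)

# The matrix inverse is d(r,t)/d(x,y): its (1,1) entry is cos(t),
# whereas naively flipping dx/dr = cos(t) would give 1/cos(t).
print(sp.simplify(J.inv()))
```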
If $f$ and $f\circ h$ are nonconstant and real-analytic, with $h\in C^\infty$, does it follow that $h$ is also real-analytic? | If we add that $f'$ vanishes nowhere, then the answer is yes.
It can be checked locally. As $f$ is real-analytic, it extends analytically to a neighbourhood of $h(I)$ in $\mathbb C$ - we assume that $f$ is real analytic on an interval $J$, with $h(I)\subset J$.
Let now $h(t_0)=w_0\in h(I)$.
As $f'(w_0)\ne 0$, $f$ possesses an analytic inverse $F$ in a neighbourhood $D(w_0,r)\subset \mathbb C$ of $w_0$, and hence $h=F\circ g$ for $t\in I$ such that $h(t)\in (w_0-r,w_0+r)$; therefore $h$ is analytic in a neighbourhood of $t_0$.
However, if $f'$ vanishes, then it DOES NOT hold. For example $h(x)=\lvert x\rvert$ and $f(x)=x^2$. |
Sets of null (Lebesgue)-measure and sigma compacts | The answer is no. There exists a counterexample. The detailed answer for the related question is here.
A short answer from Andreas Blass:
A closed set of Lebesgue measure zero has empty interior. So a countable union of such sets, an $F_\sigma$ of measure zero, is meager (also called "first Baire category"), and so are all its subsets. But there are Lebesgue null sets that are not meager, for example, the set of those numbers in $[0,1]$ whose binary expansion does not have asymptotically half zeros and half ones (i.e., those numbers whose binary expansions violate the strong law of large numbers). |
Mustering $\sum_{k=1}^{\infty} \frac{(-1)^kH_{2k}}{k^2}$ with complex series | Recall
$$\int_0^1 x^{n-1}\ln(1-x)dx=-\frac{H_n}{n}\tag1$$
where if we replace $n$ by $2n$ then multiply both sides by $-\frac{2(-1)^n}{n}$ and sum up over $n$ we have
$$\sum_{n=1}^\infty\frac{(-1)^nH_{2n}}{n^2}=-2\int_0^1 \frac{\ln(1-x)}{x}\sum_{n=1}^\infty\frac{(-x^2)^n}{n}dx=2\int_0^1\frac{\ln(1-x)\ln(1+x^2)}{x}dx$$
$$\overset{IBP}{=}-2\ln(2)\zeta(2)+4\int_0^1\frac{x\text{Li}_2(x)}{1+x^2}dx$$
$$=-2\ln(2)\zeta(2)+4\int_0^1\frac{x}{1+x^2}\left(-\int_0^1\frac{x\ln(y)}{1-yx}dy\right)dx$$
$$=-2\ln(2)\zeta(2)+4\int_0^1\ln(y)\left(-\int_0^1\frac{x^2}{(1+x^2)(1-yx)}dx\right)dy$$
$$=-2\ln(2)\zeta(2)+4\int_0^1\ln(y)\left(\frac{\pi}{4}\frac{1}{1+y^2}+\frac{\ln(2)}{2}\frac{y}{1+y^2}+\frac{\ln(1-y)}{y}-\frac{y\ln(1-y)}{1+y^2}\right)dy$$
$$=-2\ln(2)\zeta(2)+4\left(-\frac{\pi}{4}G-\frac{\ln(2)}{16}\zeta(2)+\zeta(3)-\int_0^1\frac{y\ln(y)\ln(1-y)}{1+y^2}\right)$$
$$=4\zeta(3)-\frac94\ln(2)\zeta(2)-\pi G-4\int_0^1\frac{y\ln(y)\ln(1-y)}{1+y^2}dy$$
Divide both sides by $4$
$$\sum_{n=1}^\infty\frac{(-1)^nH_{2n}}{(2n)^2}=\zeta(3)-\frac9{16}\ln(2)\zeta(2)-\frac{\pi}4 G-\int_0^1\frac{y\ln(y)\ln(1-y)}{1+y^2}dy\tag2$$
For the latter integral, differentiate both sides of $(1)$ with respect to $n$ we have
$$\int_0^1 x^{n-1}\ln(x)\ln(1-x)dx=\frac{H_n}{n^2}+\frac{H_n^{(2)}}{n}-\frac{\zeta(2)}{n}$$
Replace $n$ by $2n$ then multiply both sides by $(-1)^n$ and consider the summation we have
$$\sum_{n=1}^\infty\frac{(-1)^nH_{2n}}{(2n)^2}+\sum_{n=1}^\infty\frac{(-1)^nH_{2n}^{(2)}}{2n}+\frac12\ln(2)\zeta(2)=\int_0^1 \frac{\ln(x)\ln(1-x)}{x}\sum_{n=1}^\infty (-x^2)^ndx$$
$$=-\int_0^1\frac{x\ln(x)\ln(1-x)}{1+x^2}dx\tag3$$
Solving $(2)$ and $(3)$ yields
$$\boxed{\sum_{n=1}^\infty\frac{(-1)^nH_{2n}^{(2)}}{2n}=\frac{\pi}{4}G+\frac1{16}\ln(2)\zeta(2)-\zeta(3)}$$
To get your sum, exploit the identity
$$-\ln(1-x)\text{Li}_2(x)=2\sum_{n=1}^\infty\frac{H_n}{n^2}x^n+\sum_{n=1}^\infty\frac{H_n^{(2)}}{n}x^n-3\sum_{n=1}^\infty\frac{x^n}{n^3}$$
Replace $x$ by $i$ then consider the real parts of the two sides we have
$$-\Re\{\ln(1-i)\text{Li}_2(i)\}=2\Re\sum_{n=1}^\infty\frac{H_n}{n^2}i^n+\Re\sum_{n=1}^\infty\frac{H_n^{(2)}}{n}i^n-3\Re\sum_{n=1}^\infty\frac{i^n}{n^3}$$
use the fact that $$\Re\sum_{n=1}^\infty i^n f(n)=\sum_{n=1}^\infty (-1)^n f(2n)$$
Thus,
$$-\Re\{\ln(1-i)\text{Li}_2(i)\}=2\sum_{n=1}^\infty\frac{(-1)^nH_{2n}}{(2n)^2}+\sum_{n=1}^\infty\frac{(-1)^nH_{2n}^{(2)}}{2n}-3\sum_{n=1}^\infty\frac{(-1)^n}{(2n)^3}$$
where $\Re\{\ln(1-i)\text{Li}_2(i)\}=\frac{\pi}{4}G-\frac1{16}\ln(2)\zeta(2)$ and $\sum_{n=1}^\infty\frac{(-1)^n}{n^3}=-\frac34\zeta(3)$
Substituting these two results, along with the result for $\sum_{n=1}^\infty\frac{(-1)^nH_{2n}^{(2)}}{2n}$, gives
$$\boxed{\sum_{n=1}^\infty\frac{(-1)^nH_{2n}}{(2n)^2}=\frac{23}{64}\zeta(3)-\frac{\pi}{4} G}$$ |
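As a sanity check, here is a short numeric verification of the boxed result (an illustrative Python sketch; the constants are hard-coded approximations of $G$ and $\zeta(3)$):

import math

G = 0.9159655941772190      # Catalan's constant
zeta3 = 1.2020569031595943  # zeta(3)

H = 0.0          # running value of H_{2n}
s = [0.0, 0.0]   # partial sums of the series
for n in range(1, 200001):
    H += 1.0/(2*n - 1) + 1.0/(2*n)
    s.append(s[-1] + (-1)**n * H / (2*n)**2)

# average two consecutive partial sums to damp the alternating tail
print((s[-1] + s[-2]) / 2, 23/64*zeta3 - math.pi/4*G)   # both ~ -0.28733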
At which points is: $f(x) = \lim_{n \to \infty} \frac{x^{2n}-1}{x^{2n}+2}$ discontinuous? | You can explicitly determine what $f(x)$ is for $x \in\mathbb{R}$. For example, if $|x|<1$, then $x^{2n}\to 0$, whence
$$
\frac{x^{2n}-1}{x^{2n}+2}\to \frac{-1}{2}.
$$
If $|x|>1$, then $1/x^n\to 0$ whence
$$
\frac{x^{2n}-1}{x^{2n}+2}=\frac{1-x^{-2n}}{1+2x^{-2n}}\to 1.
$$
If $x=1, -1$, the limit is zero. Hence
$$
f(x)=\begin{cases}
-1/2 &\quad |x|<1\\
1&\quad |x|>1\\
0&\quad |x|=1.
\end{cases}
$$
You should be able to take it from here. |
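If you want to see this numerically first, here is a quick illustrative check (plain Python; $n=50$ is already large enough for these sample points):

def f(x, n=50):
    return (x**(2*n) - 1) / (x**(2*n) + 2)

for x in (0.5, -0.3, 1.0, -1.0, 1.5, -2.0):
    print(x, f(x))
# |x| < 1 gives about -0.5, |x| = 1 gives 0, |x| > 1 gives about 1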
decomposition of $\mathbb{C}[A_3],\mathbb{R}[A_3]$ and $\mathbb{F}_{p}$ into simple algebras | If $F$ is any field, then $F[A_3]$ is $F[x]/(x^3 - 1)$. By the Chinese remainder theorem this decomposes as follows:
If $F$ has characteristic $\neq 3$ and $x^2 + x + 1$ is irreducible over $F$, then $F[x]/(x^3 - 1) \cong F \times F[x]/(x^2 + x + 1)$.
If $F$ has characteristic $\neq 3$ and $x^2 + x + 1$ is reducible over $F$, then $F[x]/(x^3 - 1) \cong F^3$.
If $F$ has characteristic $3$, then $F[x]/(x^3 - 1) \cong F[x]/(x - 1)^3 \cong F[x]/x^3$.
In particular your answers are correct. More generally, CRT can be used to describe $F[G]$ where $G$ is a finite abelian group. The nonabelian case is harder. |
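For instance, one can watch the three cases happen with a quick sympy computation (illustrative; `factor` with the `modulus` keyword factors over $\mathbb F_p$):

from sympy import symbols, factor

x = symbols('x')
for p in (5, 7, 3):
    print(p, factor(x**3 - 1, modulus=p))
# p = 5: linear times irreducible quadratic  ->  F x F[x]/(x^2+x+1)
# p = 7: three distinct linear factors       ->  F^3
# p = 3: a repeated linear factor, cubed     ->  F[x]/(x-1)^3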
Formula using Fibonacci numbers | If I see a recurrence relation where $a_{n+1}$ depends on $a_n$ as a linear fraction, I will write $a_n$ as a ratio $\frac{p_n}{q_n}$ for two other sequences $(p_n)$ and $(q_n)$ to be determined, then simplify the relation and see what I can get.
For the recurrence relation at hand, we have
$$\frac{p_{n+1}}{q_{n+1}} = a_{n+1} = \frac{1}{1+a_n} = \frac{q_n}{q_n + p_n}$$
If the two sequences $(p_n)$, $(q_n)$ satisfy
$$\begin{cases}
p_{n+1} &= q_n\\
q_{n+1} &= q_n + p_n
\end{cases}
\quad\implies\quad
\begin{cases}
p_{n+1} &= q_n\\
q_{n+1} &= q_n + q_{n-1}
\end{cases},
\quad\text{ for }n > 1
$$
then $\frac{p_n}{q_n}$ will be a solution of original recurrence relation.
Notice that the recurrence relation for $q_n$ is the one for the Fibonacci numbers.
One should be able to express $q_n$ and hence $p_n$ in terms of Fibonacci numbers. Since $a_1 = 1$, we can take
$$p_1 = q_1 = 1 \quad\iff\quad q_0 = q_1 = 1$$
Now $F_1 = F_2 = 1$, which suggests we pick
$$\begin{cases}p_n &= F_n,\\ q_n &= F_{n+1}\end{cases}
\quad\iff\quad
a_n = \frac{F_n}{F_{n+1}}
$$
Up to this point, we haven't proved $a_n$ is given by the above expression. We only have an ansatz for what $a_n$ should be. By direct substitution, we can verify
that this ansatz does satisfy the original recurrence relation.
$$a_1 = \frac{F_1}{F_2} = 1\quad\text{ and }\quad
a_{n+1} = \frac{F_{n+1}}{F_{n+2}} = \frac{F_{n+1}}{F_{n+1} + F_{n}} = \frac{1}{1 + \frac{F_n}{F_{n+1}}} = \frac{1}{1 + a_n}$$ |
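For what it's worth, here is a tiny Python check of this ansatz against the recursion (illustrative; the indexing assumes $F_1=F_2=1$):

a = 1.0               # a_1
Fp, Fc = 1, 1         # F_1, F_2
for n in range(1, 21):
    assert abs(a - Fp / Fc) < 1e-12
    a = 1 / (1 + a)             # a_{n+1}
    Fp, Fc = Fc, Fp + Fc        # F_{n+1}, F_{n+2}
print("a_n = F_n / F_{n+1} verified for n = 1..20")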
(From Milne) Splitting field over a finite field. | Here are some ideas for you to mull over... and understand and prove:
=== Prove that the set of all roots of $\;x^{p^r}-x\in\Bbb F_p[x]\;$ (these roots "live" in some algebraic closure of $\;\Bbb F_p\;$ , or if you prefer: in some extension field of $\;\Bbb F_p\;$) is a field $\;\Bbb F_{p^r}\;$ with $\;p^r\;$ elements under usual addition and multiplication modulo $\;p\;$ .
=== Show $\;\dim\left(\Bbb F_{p^r}\right)_{\Bbb F_p}=r\;$ and deduce that $\;\Bbb F_{p^r}=\Bbb F_p(\alpha)\;$ , for some element $\;\alpha\in\Bbb F_{p^r}\;$ whose minimal polynomial in $\;\Bbb F_p[x]\;$ has degree $\;r\;$
=== In general, suppose that for some $\;g(x)\in\Bbb Z[x]\;$ we have that $\;h(x):=g(x)\mod p\in\Bbb F_p[x]\;$ is irreducible. Then $\;g(x)\;$ is irreducible in $\;\Bbb Z[x]\;$ and thus also (Gauss...?) in $\;\Bbb Q[x]\;$ |
How are these expressions equal? | $64=7+57$, so $64\cdot 8^{2k+1}=(7+57)\cdot 8^{2k+1}=7\cdot 8^{2k+1}+57 \cdot 8^{2k+1}$. |
Is N the cts image of the Sorgenfrey line? | The OP seems to have a pretty good handle on the question and maybe just needs to think it through to be fully satisfied.
But here is a slightly different approach which makes it even more clear (to me) that the OP's method will work. This was borne out of the remark I made (to myself) upon reading the OP's clarification that the natural numbers include zero (you're damn right they do, by the way): namely, it certainly doesn't matter, because any two countably infinite discrete spaces are homeomorphic.
Taking that one step further, it is clearly enough to realize $\mathbb{Z}$ with the discrete topology as a continuous image of the Sorgenfrey line: having done this, compose with any homeomorphism (i.e., bijection!) from $\mathbb{Z}$ to $\mathbb{N}$. (Or, in fact, with any surjection, as the OP has done.) For this, literally take the greatest integer function. The preimage of any given basis element -- i.e., a singleton set $\{n\}$ -- is the half-open interval $[n,n+1)$. I don't keep too much information about the Sorgenfrey line in my head, but I'm pretty sure those sets are open! |
Some Questions About Chess | For question 1
The basic idea of a computer chess player is that it assigns a value (a piece weight) to each piece, and in the same way a value (a move weight) to each possible move of a piece.
With these values it decides which move is best.
For example, if both the opponent's queen and a pawn can be captured on your next move, the program suggests capturing the queen (the piece weight of a queen is higher than that of a pawn).
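A minimal sketch of this piece-value idea in Python (the values are the conventional ones; the function and names are illustrative, not taken from any particular engine):

PIECE_VALUES = {'pawn': 1, 'knight': 3, 'bishop': 3, 'rook': 5, 'queen': 9}

def best_capture(capturable):
    # choose the capture with the highest piece value
    return max(capturable, key=PIECE_VALUES.get)

print(best_capture(['pawn', 'queen']))   # -> 'queen'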
For question 2
It is not so simple to explain! It needs deep knowledge of the subject and of programming concepts (follow some Google links!). |
Intersection of ideals is zero then $R$ is Noetherian | Consider the ring homomorphism $$\phi : R \to \bigoplus_{j=1}^{n}R/I_j$$ $$r \mapsto (r+I_1, \ldots , r + I_n )$$
We see that $\ker(\phi) = \cap_j I_j = 0 $ , and this implies that $R$ is isomorphic to a submodule of $\bigoplus_{j=1}^{n}R/I_j$.
A finite direct sum of noetherian modules is noetherian, thus $\bigoplus_{j=1}^{n}R/I_j$ is noetherian, and then also $R$ is noetherian as a submodule of a noetherian module. |
The existence of a tuple of integers | I would say that the tuple $ x $ is perpendicular to $ a $.
In general, ignoring tuples with all zero elements, $x = (a_n, a_n, \ldots, a_n, -\sum_{i=1}^{n-1} a_i)$ is an integer tuple solution of $a\cdot x = 0$ for any integer tuple $a = (a_1, a_2, a_3,\ldots,a_n)$. There are, of course, infinitely many other integer tuples $x$ perpendicular to $a$.
And just for fun, for any integer tuple $A_0$ of length $N$, you can always find $N-1$ integer tuples $A_1,A_2,\ldots,A_{N-1}$ which are all perpendicular to $A_0$ and are also all mutually perpendicular to each other. |
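A quick illustrative check of the construction in Python (the tuple a here is arbitrary):

a = [3, -1, 4, 1, 5]
x = [a[-1]] * (len(a) - 1) + [-sum(a[:-1])]   # (a_n, ..., a_n, -(a_1+...+a_{n-1}))
print(sum(ai * xi for ai, xi in zip(a, x)))   # 0, so a.x = 0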
Prove that this function is bounded | OK, a second trick is needed (but it actually finishes the problem). It is nice and simple enough that it's probably what the authors intended by a "Book" solution.
Let $f(x) = x \log(2) - \log(1+x)$. We want to show that $S(x) = f(x) + f(x^2) + f(x^4) + \dots$ is bounded. Because $f(0)=f(1)=0$ and $f$ is differentiable, we can find a constant $A$ such that $|f(x)| \leq Ax(1-x) = Ax - Ax^2$. The sum of this bound over the powers $x^{2^k}$ is telescopic.
Notice that the role of $\log(2)$ was to ensure that $f(1)=0$. |
Determine the Size of a Test Bank | This can be solved by 'capture-recapture' or 'mark-recapture' methods of estimating population size. One person is the 'capture' and the other is the 'recapture'. The 'Chapman' estimator (see Wikipedia on 'mark recapture') in this case is $\hat N_C = (30 + 1)(30 + 1)/(7 + 1) -1 \approx 119.$ Based on a hypergeometric model, this estimator is nearly unbiased. The Wikipedia article gives two methods for finding a corresponding confidence interval.
The older and simpler 'Lincoln-Peterson' estimator is simply $\hat N = 30^2/7 \approx 128.$ It gives an infinite value if there happen to be no repeated questions. Thus $E(\hat N)$ does not exist, and one cannot discuss the unbiasedness of this estimator.
Addendum: The comments and the answer by @GregoryGrant are using
the Lincoln-Peterson estimator, which is the maximum likelihood
estimator, based on knowledge that there are 7 coincidences. Here is some
relevant R code and a figure:
N = 100:150
like = choose(30,7)*choose(N-30, 30-7)/choose(N, 30)
N[like==max(like)] # value of N that maximizes 'like'
## 128
plot(N, like, pch=20); abline(v=128, lty="dotted")
Note: Here is one method to get an analytic solution for the maximum: Let
$f(N|7) = {30 \choose 7}{N-30 \choose 23}/{N \choose 30}.$
Then look at $f(N|7)/f(N-1|7),$ simplifying it with lots
of cancellation. Then notice the behavior of the ratio. |
Solving for form of CDF that satisfies $G^t(x) = G(t^{-\theta} x)$ | Given the argument $t^{-\theta}x$, one might try the substitution $x\mapsto y^{-\theta}$, giving
$$G^t(y^{-\theta}) = G((ty)^{-\theta}).$$
Suppose we define a new function
$$H(x) := G(x^{-\theta}).$$
Then our equation becomes simpler: $H^t(y) = H(ty)$.
Now, what function has the property that raising it to a power $t$ is equivalent to multiplying the argument by $t$? An exponential has this property, because $(a^b)^c = a^{bc}$ for $a,b,c\in\mathbb{R},\;a\geq 0$. So $H(x) = \alpha^x$ for some $\alpha\gt 0$.
Working backwards now, we see $G(x) = H(x^{-1/\theta}) = \alpha^{x^{-1/\theta}}.$
The function $x^{-1/\theta}$ is a decreasing one so $\alpha^{x^{-1/\theta}}$ is increasing (which we require for a CDF) if $0\lt\alpha\lt 1$.
We could say instead that $1\lt \alpha$ and $G(x) = \alpha^{-x^{-1/\theta}}.$
When $x\rightarrow\infty$ we have $G(x)\rightarrow 1$ and when $x\rightarrow 0$ we have $G(x)\rightarrow 0$, which is compatible with its being a CDF.
The choice $\alpha=e$ is fine (and arguably the most natural), but any $\alpha\gt 1$ is valid. |
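A one-line numeric spot-check of the functional equation for this $G$ (illustrative; $\theta$, $t$, $x$ are arbitrary positive test values):

import math

theta, t, x = 2.0, 3.5, 1.7
G = lambda x: math.exp(-x**(-1.0/theta))   # the case alpha = e
print(G(x)**t, G(t**(-theta) * x))         # the two sides agree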
Proof of Weak Law of Large Numbers with non-zero covariance | Since $\left\lvert\operatorname{Cov}(X_i,X_j)\right\rvert\leq (i-j)^{-4}\leq 1$, summing over all $n(n-1)$ ordered pairs $(i,j)$ with $i\neq j$ gives the easy upper bound $n(n-1)<n^2$. |
Quantifier difference notation | The first one is equivalent to $∀xPx → ∀yPy$, which is always true.
The second one is not equivalent to the first one, and is not always true. Consider the following counter-example: "if there is a number that is even, then every number is even".
To have an insight about the difference, consider what happens to the first one with the same interpretation used above : domain $\mathbb N$ and predicate symbol $P(x)$ interpreted with "$x$ is Even".
We have that $\forall y P(y)$ is False (because it is not true that every natural is Even).
But also $P(1)$ is False.
Hence $P(1) \to \forall y P(y)$ is True (because $\text F \to \text F$ is $\text T$), and thus:
$\exists x (Px \to \forall y P(y))$
is True.
See the so-called Drinker paradox.
And see also this post for proofs of the validity of the formula. |
Locus of the midpoint of Hyperbola | $$\implies Q(a\csc\theta,b\cot\theta)$$
If $R(h,k)$ is the midpoint,
$$\dfrac{2h}a=\sec\theta+\csc\theta=\dfrac{\sin\theta+\cos\theta}{\sin\theta\cos\theta}$$
$$\dfrac{2k}b=\tan\theta+\cot\theta=\dfrac1{\cos\theta\sin\theta}\iff\cos\theta\sin\theta=?$$
On division, $$\cos\theta+\sin\theta=\dfrac{\dfrac{2h}a}{\dfrac{2k}b}=\dfrac{bh}{ak}$$
Now use $(\cos\theta+\sin\theta)^2=1+2\cos\theta\sin\theta$ |
How do you show that every non-constant real valued harmonic function on $\mathbb{R}^n$ has a zero? What about $\mathbb{R}^2\setminus\{0\}$? | If it is non-constant, then it can be neither bounded from above nor from below.
You can use the short proof from here. In particular, it must have a zero.
On $\mathbb{R}^3\setminus\{0\}$ you can take $1/\sqrt{x^2+y^2+z^2}$. (Careful: $1/\sqrt{x^2+y^2}$ is not harmonic on $\mathbb{R}^2\setminus\{0\}$; in the plane, the radial harmonic functions are of the form $a+b\log r$.) |
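A quick symbolic check of these claims (an illustrative sympy sketch):

from sympy import symbols, sqrt, diff, simplify

x, y, z = symbols('x y z')
u2 = 1/sqrt(x**2 + y**2)           # candidate in R^2
u3 = 1/sqrt(x**2 + y**2 + z**2)    # candidate in R^3
print(simplify(diff(u2, x, 2) + diff(u2, y, 2)))                    # nonzero
print(simplify(diff(u3, x, 2) + diff(u3, y, 2) + diff(u3, z, 2)))   # 0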
How to find orthonormal frame of given metric? | I don't know how to do it directly. Probably using Gram-Schmidt Algorithm ?. In my experience, you may write down the metric as $ds^2 = g_{ij} dx^i dx^j$. Rearrange the terms gives
\begin{equation}
ds^2 = dx^2 + dy^2 + (wdy+ydx-dt)^2 +(dw+tdy)^2 + (dz+tdx)^2
\end{equation}
take the coframe field
$$\theta^1 = dx, \quad \theta^2=dy,\quad \theta^3=wdy+ydx-dt, \quad \theta^4=dw+tdy, \quad \theta^5 = dz+tdx$$
Then use transformation law for the basis vector to get the orthonormal frame. If $\theta^{\mu}=e^{\mu}_i dx^i$ and let $[E^{i}_{\mu}]$ be the inverse of $[e^{\mu}_i]$, then the orthonormal frame is $e_{\mu} = E_{\mu}^i \partial_i$.
$\textbf{EDIT:}$
I feel a bit uneasy about finding the orthonormal frame without Gram–Schmidt, so here I present the result using that algorithm. Because $\{\partial_x,\partial_y, \partial_z, \partial_w, \partial_t\}$ is a local frame, the GS method gives an orthonormal frame $\{E_i\}_{i=1,\dots, 5}$ as
$$
E_i = \frac{\partial_i - \sum_{j=1}^{i-1} (\partial_i \cdot E_j) E_j}{|\partial_i - \sum_{j=1}^{i-1} (\partial_i \cdot E_j) E_j|}
$$
$$
E_1 = \partial_t, \quad E_2=\partial_w,\quad E_3 = \partial_z, \quad E_4 = \partial_y+w\partial_t-t \partial_w, \quad E_5 = \partial_x-t\partial_z +y \partial_t
$$
By the way, in this case Gram–Schmidt is actually faster than my earlier construction. |
Find the distribution with the following Laplace transform. | You just need to compute a convolution. If $X$ is a random variable having pdf: $$ f_X(x) = \frac{1}{2}e^{\frac{1-x}{2}}\cdot \mathbb{1}_{[1,+\infty)}(x) $$
then, for every $t>-\frac{1}{2}$,
$$ \mathbb{E}[e^{-tX}]=\frac{e^{-t}}{1+2t} $$
as wanted. |
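One can confirm this numerically (an illustrative check with scipy; $t=0.7$ is an arbitrary value with $t>-\frac12$):

import math
from scipy import integrate

t = 0.7
val, _ = integrate.quad(lambda x: math.exp(-t*x) * 0.5 * math.exp((1 - x)/2), 1, math.inf)
print(val, math.exp(-t) / (1 + 2*t))   # the two numbers agree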
Does iterating $x \cdot \sin(\frac 1 x) + x$ near $0$ approach $0$? | For convenience, I shall assume for the sake of continuity that
$f(0)=0$.
The sequence can
diverge to infinity; this happens precisely for $x>\frac1\pi$
or converge to a fixed point $x_\infty$ of $f$ (that is, $x_\infty=0$ or $x_\infty=\frac1{k\pi}$).
or converge to a periodic cycle, such as points where $f(f(x))=x$ but $f(x)\ne x$.
or behave chaotically.
In particular, the second bullet point tells us that there are points that do not converge to $0$ in every neighbourhood of $0$. |
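A quick numeric spot-check of the fixed points in the second bullet (illustrative Python):

import math

f = lambda x: x * math.sin(1.0/x) + x
for k in (1, 2, 5):
    x = 1.0 / (k * math.pi)
    print(x, f(x) - x)   # ~ 0 up to floating point, so f(x) = x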
Under what conditions is $J\cdot M$ an $R$-submodule of $M$? | Based on your comment describing the context of the situation, (thank you for that, by the way) it's clear that the text intends to use the standard definition of the product:
$JM:=\{\sum_{i\in I} j_i m_i\mid j_i\in J,\ m_i\in M,\ I \text{ a finite index set}\}$
This is what is intended when looking at any sort of ideal- or module-wise product in ring theory, precisely because the group-theoretic version ($HK:=\{hk\mid h\in H, k\in K\}$) does not produce an acceptable result in the presence of both $+$ and $\cdot$ operations. |
Is there a $\sigma$-algebra on $\mathbb{R}$ strictly between the Borel and Lebesgue algebras? | In ZFC the cardinality of $\frak B(\Bbb R)$ is $2^{\aleph_0}$ while the cardinality of $\frak L(\Bbb R)$ is $2^{2^{\aleph_0}}$. By that virtue alone there are plenty of other sets in between.
If you wish to take $\frak G(\Bbb R)$ to be some sort of a $\sigma$-algebra, and not just any set, let $\cal A\subseteq\frak L(\Bbb R)$ such that $|{\cal A}|<2^{2^{\aleph_0}}$, and consider $\frak G(\Bbb R)$ to be the $\sigma$-algebra generated by $(\frak B(\Bbb R)\cup\cal A)$. If we took $\cal A$ such that $\cal A\nsubseteq\frak B(\Bbb R)$ this would be a $\sigma$-algebra which strictly contains the Borel sets and is strictly contained in the Lebesgue measurable sets.
Edit:
Some time after Byron's comment below I realized that indeed this may not be accurate. I suddenly realized that I cannot be certain that $\frak L(\Bbb R)$ is not generated by a set of less than $2^{2^{\aleph_0}}$ many elements. It's not that bad, though. One can still bound it with some certainty:
We require that $|{\cal A}|^{\aleph_0}<2^{2^{\aleph_0}}$. If $|{\cal A}|\leq\frak c$ then this is indeed true, however one can come up with models where this need not be true for any subset of $\cal P(\Bbb R)$.
For what it's worth, I asked a question on MathOverflow some time ago, but did not receive any answer regarding this question yet.
I still believe this is true, though.
There is a class of sets called analytic sets which are defined to be the continuous image of Borel sets (in fact $G_\delta$ sets are enough, but it turns out to be the same thing). It is a theorem that analytic sets properly contain the Borel sets and they are Lebesgue measurable.
Therefore the complement of analytic (co-analytic) sets are also Lebesgue measurable sets (they also contain all the Borel sets, and an amazing result is that a set is Borel if and only if it is both analytic and co-analytic.)
If you consider the $\sigma$-algebra generated by the union of all analytic and co-analytic sets, you will find yourself still inside the Lebesgue measurable universe, but with a strictly larger family of sets.
Further reading on how $\sigma$-algebras are born:
The $\sigma$-algebra of subsets of $X$ generated by a set $\mathcal{A}$ is the smallest sigma algebra including $\mathcal{A}$
Measurable Maps and Continuous Functions
Cardinality of Borel sigma algebra |
number of irreducible representations | I'm not sure what Serre's proof is but here's one way: Let $M$ be a simple $G$-module over a field $k$. You want to show that $\dim M \leq |G|/|H|$.
As simple $H$-modules are one dimensional (because $H$ is abelian), there is a nonzero $m \in M$ such that $H\cdot m \subseteq km$. Then $kG\cdot m$ has dimension at most $|G|/|H|$. But $M$ is a simple $G$-module, so $kG\cdot m = M$. |
A version of the product rule | Note that the first formula works due to $$ \log (fg) = \log f + \log g $$
All we have to do is change the sign in front of $\log g$, hence $$ K ( f, g) = \log f - \log g = \log\frac fg $$ will do the trick. |
Find all positive integers satisfying $\frac{2^n+1}{n^2} =k $ | This problem is extremely famous in the world of Olympiad mathematics (it appeared as IMO 1990, Problem 3). It is famous because it is also very easily solvable by the Lifting the Exponent lemma. If you don't know about this lemma yet, then read this article; you will not regret it.
Otherwise, see this. |
Faà di Bruno's formula for $C^k$ Banach-valued functions. | The proof of your second statement is actually pretty easy using induction. The base case $k=0$ is true based on elementary arguments. Now, suppose inductively that the result is true for any $k \geq 0$. We'll show it's true for $k+1$. Note that by the chain rule,
\begin{align}
D(g \circ f)_x &= (Dg)_{f(x)} \circ Df_x.
\end{align}
Now, the following three maps:
\begin{align}
\begin{cases}
K:\mathcal{B}(Y;Z) \times \mathcal{B}(X;Y) \to \mathcal{B}(X;Z) \qquad &(T,S) \mapsto T \circ S \\\\
\iota_1:\mathcal{B}(Y;Z) \to \mathcal{B}(Y;Z) \times \mathcal{B}(X;Y) \qquad &T \mapsto(T,0) \\\\
\iota_2:\mathcal{B}(X;Y) \to \mathcal{B}(Y;Z) \times \mathcal{B}(X;Y) \qquad &T \mapsto(0,T)
\end{cases}
\end{align}
$K$ is the "composition map", and $\iota_1, \iota_2$ are the "canonical inclusions." Note that $K$ is a continuous bilinear map, and hence $C^{\infty}$ (the third derivative vanishes identically), and $\iota_1, \iota_2$ are continuous linear maps, and hence $C^{\infty}$ (their second derivatives vanish). With this, we can write:
\begin{align}
D(g \circ f)_x &= K\left( Dg_{f(x)}, Df_x\right) \\
&= K\bigg( [\iota_1 \circ (Dg) \circ f](x) + [\iota_2 \circ Df](x)\bigg) \\
&= \bigg[K \circ \left(\iota_1 \circ (Dg) \circ f + \iota_2 \circ Df \right) \bigg](x)
\end{align}
Or as an equality of functions, we can write:
\begin{align}
D(g \circ f) &= K \circ \bigg( \iota_1 \circ (Dg) \circ f + \iota_2 \circ Df\bigg) \tag{$*$}
\end{align}
By the induction hypothesis, $f$ and $g$ are $C^{k+1}$, so $Df, Dg$ are $C^k$. As explained above, the maps $K, \iota_1, \iota_2$ are all $C^{\infty}$. Thus, in $(*)$, we have expressed $D(g \circ f)$ as a sum and composition of functions all of which are at least $C^k$. By the induction hypothesis, it follows that $D(g \circ f)$ is $C^k$, but this means exactly that $g \circ f$ is $C^{k+1}$. This completes the induction.
Very often, to show smoothness of maps between Banach spaces, the quickest way is to define such auxiliary maps, defined on a larger space, which we already know to be smooth. Then, after some practice, it becomes unnecessary to introduce them explicitly, and you can just "see", for example, directly from the equation $D(g \circ f)_x = Dg_{f(x)} \circ Df_x$ that the RHS is smooth "as a function of $x$".
For example, in a Banach algebra $A$ (such as $\mathcal{B}(X,Y)$ with "multiplication" being composition of linear maps), let $U$ be the open set of all invertible elements of the algebra $A$ (the fact that this set is open shouldn't be too hard to prove). Consider the inversion mapping $\psi: U \to U$, $\psi(a) = a^{-1}$. By a direct "difference-estimate", one can show that $\psi$ is differentiable on $U$, with derivative given by
\begin{align}
D \psi_a(h) &= -a^{-1}\cdot h \cdot a^{-1} \\
&= - \psi(a) \cdot h \cdot \psi(a).
\end{align}
Notice that this is a type of "differential equation" for the function $\psi$ (we have the derivative on the LHS and the function on the RHS). By similar trickery in the induction process, one can prove that $\psi$ is actually $C^{\infty}$. |
Basic Probability Question Conditional Probability | Your edit seems like a valid solution. For larger values of $n$, the computation gets much trickier.
In general, if we have $n$ hats, the probability that no one gets their own hat can be obtained
in many ways. Two such methods are to take
$$\sum_{i=0}^{n} {(-1)^i}\cdot{\frac{1}{i!}}$$
In our case, $n$ is $3$ so we get
$$\sum_{i=0}^{3} {(-1)^i}\cdot{\frac{1}{i!}}=1-1+{\frac{1}{2}}-{\frac{1}{6}}={\frac{1}{3}}$$
Alternatively, this probability can be obtained by
$$\frac{[\frac{n!}{e}]}{n!}$$
where $[\frac{n!}{e}]$ is the closest integer to $\frac{n!}{e}$
In our case, $n$ is $3$ so we get
$$\frac{[\frac{3!}{e}]}{3!} = {\frac{[2.207]}{6}}={\frac{1}{3}}$$
Finally, as $n \rightarrow \infty$ the probability that no one gets their own hat
approaches $\frac{1}{e}$ and it approaches this quite quickly.
Consider $n=9$. Then $$\frac{[\frac{9!}{e}]}{9!} \approx .3678792 \approx \frac{1}{e} \approx .3678794$$ |
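Here is a small illustrative Python check of both formulas:

import math

def p_no_match(n):
    # alternating-sum formula for the derangement probability
    return sum((-1)**i / math.factorial(i) for i in range(n + 1))

for n in (3, 9):
    print(n, p_no_match(n), round(math.factorial(n)/math.e) / math.factorial(n))
# n = 3 gives 1/3; n = 9 is already ~ 1/e ~ 0.3678794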
Proof by induction that $ 169 \mid 3^{3n+3}-26n-27$ | Hint $\ $ Conceptually, the induction is simply the first two terms of BT = Binomial Theorem:
${\rm mod}\,\ 26^2\!:\ \color{#c00}{(1\!+\!26)^{n+1}}\!\equiv 1\!+\!(n\!+\!1)26\equiv \color{#0a0}{26n\!+\!27}\,\Rightarrow\, 13^2\mid26^2\mid \color{#c00}{27^{n+1}}\!\color{#0a0}{-\!26n\!-\!27}$
Remark $ $ For completeness, below is the simple inductive proof of the first two terms of BT.
$\begin{align}{\rm mod}\,\ \color{#c00}{a^2}\!:\,\ (1+ a)^{\large n}\ \, \ \ \equiv&\,\ \ 1 + na\qquad\qquad\,\ \ \ \ \ {\rm i.e.}\ \ P(n)\\
\Rightarrow\ \ (1+a)^{\color{}{\large n+1}}\! \equiv &\ (1+na)(1 + a)\quad\ \ \ \ {\rm by}\ \ 1+a \ \ \rm times\ prior\\
\equiv &\,\ \ 1+ na+a+n\color{#c00}{a^2},\, \ \text{but }\ \color{#c00}{a^2\equiv 0}\ \ \rm so\\[1pt]
\equiv &\,\ \ 1\!+\! (n\!+\!1)a\qquad\ \ \ \ \ \ {\rm i.e.}\ \ P(\color{}{n\!+\!1})
\end{align}$
Generalization $ $ Using the above idea it is easy to generalize, e.g. from this answer
$\!\begin{align}\rm{\bf Theorem}\ \ \forall n\in\Bbb N\!:\ d\mid f(n) = a^n\! + b\:\!n + c &\rm \iff d\mid \color{blue}{(a\!-\!1)^2},\, \color{brown}{a\!+\!b\!-\!1},\, \color{darkorange}{1\!+\!c}\\ &\rm \iff d\mid f(0),\,f(1),\,f(2)\end{align}$ |
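A brute-force check of the original divisibility claim (illustrative Python):

print(all((3**(3*n + 3) - 26*n - 27) % 169 == 0 for n in range(100)))   # True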
Does there always exist an even $m$ that is a multiple of exactly $n$ of the numbers $1$, $2$, ..., $2n$? | This is a proof for $n\ge93$.
Let $A_p=\{1\le k\le 2n:p|k\}$. For primes $p,q>\sqrt{2n}$, $A_p\cap A_q=\emptyset$ because $pq>2n$.
If we find some nice set of primes $Q=\{q_1,q_2,\cdots, q_k\}$ ($\sqrt{2n}<q_1< q_2<\cdots<q_k\le2n$) that $\displaystyle\sum_{t=1}^k|A_{q_t}|=n$, setting $\displaystyle m=\prod_{\substack{p\le 2n\\p\notin Q}}p^{\left[\log_p 2n\right]}$ works.
To show that such set $Q$ exists, we use two lemmas.
Lemma 1. For $x>185$, $$\sum_{p>\sqrt{x}}\left[\frac xp\right]>\frac x2$$
Proof)
From this, we have $$\left|\sum_{p\le x}\frac1p-\log\log x-B\right|\le\frac1{10\log^2x}+\frac4{15\log^3x}$$ for $x\ge10372$. And according to Wikipedia, $\pi(x)<1.3\frac{x}{\log x}$ for $x\ge17$. Using this we can get $$\sum_{p>\sqrt{x}}\left[\frac xp\right]>\sum_{p>\sqrt{x}}\frac xp-\pi(x)\ge x\log2-1.3\frac{x}{\log x}-\frac{15}{2\log^2x}-\frac{12}{5\log^3x}>\frac x2$$ And computation shows this lemma is true for $x>185$.
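As a sanity check, here is a short numeric verification of Lemma 1 for a few values of $x$ (illustrative; it uses sympy's primerange):

from sympy import primerange

def S(x):
    return sum(x // p for p in primerange(int(x**0.5) + 1, x + 1))

for x in (186, 500, 5000):
    print(x, S(x), x/2, S(x) > x/2)   # the inequality holds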
Lemma 2. For $n\ge49$, $$\sqrt{2n}\le\pi(2n)-\pi(n)$$
Proof)
This is just a modification of Erdos's proof of Bertrand's postulate.
$$\frac{4^n}{2n}\le\binom{2n}{n}<(2n)^\sqrt{2n}4^{2n/3}(2n)^{\pi(2n)-\pi(n)}$$
Applying logarithm on both sides,
$$\pi(2n)-\pi(n)\ge \frac{\log 4}{3\log(2n)}n -\sqrt{2n}-1$$
For $n>5000$, $\frac{\log 4}{13}\sqrt {2n}>\log 2n$ holds. So we can get
$$\pi(2n)-\pi(n)\ge \frac{\log 4}{3\log(2n)}n -\sqrt{2n}-1>\frac 76\sqrt{2n}-1>\sqrt{2n}$$ for all $n>5000$.
Computation shows that lemma 2 is also true for $49\le n\le5000$.
Let $Q_0$ be the set of all primes $p$ in the range $\sqrt {2n}<p\le 2n$. Then we have $\displaystyle\sum_{p\in Q_0}|A_p|>n$ from Lemma 1. Now we run a process of discarding primes from $Q_0$, starting from the smallest element. Let $q_l$ be the smallest element of $Q_l$ and set $Q_{l+1}:=Q_l-\{q_l\}$. Then there exists an $s$ such that $$\sum_{p\in Q_{s+1}}|A_p|< n\le\sum_{p\in Q_s}|A_p|$$
Since $$\sum_{p\in Q_s}|A_p|-n< |A_{q_s}|=\left[\frac{2n}{q_s}\right]\le\sqrt{2n}\le\pi(2n)-\pi(n)$$ from Lemma 2, we can pick $\displaystyle\sum_{p\in Q_s}|A_p|-n$ primes in the range $n<p\le 2n$ (each with $|A_p|=1$), given that $q_s\le n$. If we put $Q$ as $Q_s$ minus those picked primes, we have $\displaystyle\sum_{p\in Q}|A_p|=n$ and we are done. So it only remains to show $q_s\le n$.
Assume that $q_s>n$. Then we have $$n\le\sum_{p\in Q_s}|A_p|=\sum_{p\in Q_s}1=|Q_s|\le\pi(2n)-\pi(n)=\pi(2n-1)-\pi(n)<n$$ which is a contradiction.
Remark. Note that the arguments made in this proof, except for Lemma 1, are elementary. Lemma 1 can also be shown for sufficiently large $x$ by Mertens' theorem, but that is not an effective bound, because my calculation gave it as true only for roughly $x>10^{30}$...
To wrap it up, we can prove the statement by elementary means for all sufficiently large $n$ but to give the complete proof using this argument, we need a powerful effective bound. |
show that $x_n$ converges to root of alpha | Since you showed in the first two parts that the sequence is bounded and monotonic, you know it has a limit $x.$ Taking $n\to\infty$ in the recursion, you have
$$x=\frac{1}{2}\left(x+\frac{\alpha}{x}\right).$$
Now just solve the equation for $x.$ |
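Numerically, the recursion (Heron's method) converges very fast; an illustrative run with $\alpha=7$:

alpha, x = 7.0, 1.0
for _ in range(8):
    x = 0.5 * (x + alpha / x)
print(x, alpha**0.5)   # both ~ 2.6457513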
Why do siamese magic squares have real eigenvalues, symmetric around zero? | Partial answer (explaining but not proving why the non-trivial eigenvalues come in pairs with opposite signs).
It should not be too hard to prove that the construction yields a matrix with the property that rotating it by 180 degrees gives "an entrywise complementary magic square", i.e. the sum of $A=S_m$ and its rotated copy $\tilde{A}$ is the matrix with all entries equal to $m^2+1$.
In other words
$$
A_{i,j}=m^2+1-A_{m+1-i,m+1-j}
$$
for all $i,j$. Let $B=(A-\tilde{A})/2$. It follows that the row and column sums of $B$ are all zero, and that
$$B_{i,j}=-B_{m+1-i,m+1-j}\qquad(*)$$ for all $i,j$. When $m=5$ we have
$$
B=\left(
\begin{array}{ccccc}
4 & 11 & -12 & -5 & 2 \\
10 & -8 & -6 & 1 & 3 \\
-9 & -7 & 0 & 7 & 9 \\
-3 & -1 & 6 & 8 & -10 \\
-2 & 5 & 12 & -11 & -4 \\
\end{array}
\right).
$$
If $u=(x_1,\ldots,x_m)$ is an eigenvector of $B$ belonging to eigenvalue $\lambda$, then it is easy to show that $\tilde{u}=(x_m,x_{m-1},\ldots,x_1)$ is an eigenvector of $B$ belonging to eigenvalue $-\lambda$. Namely
the assumption is that for all $i$ we have
$$
\sum_j B_{i,j}x_j=\lambda x_i.
$$
Taking $(*)$ into account this implies that
$$
\sum_j B_{i,j}x_{m+1-j}=-\sum_j B_{m+1-i,m+1-j}x_{m+1-j}=-\lambda x_{m+1-i}
$$
as required.
Clearly $B$ commutes with the all ones matrix $\mathbf{1}$ (the products are zero in either direction, because row and column sums of $B$ vanish). So if $B$ is diagonalizable, then it is simultaneously diagonalizable with $\mathbf{1}$. We can take advantage of the known eigenvalues of $\mathbf{1}$: the all ones vector spans the 1-dimensional eigenspace with $\lambda=m$, and its orthogonal complement $V$ (the zero sum subspace of $\Bbb{R}^m$) is the $(m-1)$-dimensional eigenspace belonging to $\lambda=0$.
The all ones vector belongs to eigenvalue $0$ of $B$, so all the eigenvectors of $B$ belonging to non-zero eigenvalues are in $V$. This is important because
$$
A=\frac{m^2+1}2\mathbf{1}+B.
$$
If $u$ belongs to eigenvalue $\lambda\neq0$ of $B$, then $\mathbf{1}u=0$ and thus $Au=\lambda u$. So the non-zero eigenvalues of $B$ are also eigenvalues of $A$ and the claim follows.
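For what it's worth, here is a quick numerical check for $m=5$ (an illustrative numpy sketch; $A$ is the standard $5\times5$ Siamese square, consistent with the matrix $B$ displayed above):

import numpy as np

A = np.array([[17, 24,  1,  8, 15],
              [23,  5,  7, 14, 16],
              [ 4,  6, 13, 20, 22],
              [10, 12, 19, 21,  3],
              [11, 18, 25,  2,  9]])
B = (A - np.rot90(A, 2)) / 2
print(np.round(np.sort(np.linalg.eigvals(A).real), 4))   # 65 plus +/- pairs
print(np.round(np.sort(np.linalg.eigvals(B).real), 4))   # 0 plus the same +/- pairs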
Missing:
Why are the eigenvalues of $B$ all real and distinct (so that $0$
has multiplicity one, and that $B$ is diagonalizable)?
Prove that the Siamese construction implies $(*)$. |
Find $\lim\limits_{x \to 0} \frac{1}{x}\int_0^x{(1 + \sin2t)^{1/t}dt}$ | Using l'Hôpital's rule and the FTC (the limit of the average equals the limit of the integrand), we have
$$ \lim_{x \to 0 } (1 + \sin 2x )^{1/x} = \lim_{x \to 0} e^{ \frac{1}{x} \ln (1 + \sin 2x )} = e^A$$
where
$$ A = \lim_{x \to 0 } \frac{ \ln(1 + \sin 2x) }{x} $$
by continuity of $x \mapsto e^x$. To evaluate $A$, we again use l'Hôpital's rule:
$$ A = \lim_{x \to 0 } \frac{ 2 \cos 2x }{1 + \sin 2x } =2 $$
Thus, the answer is $\boxed{ e^2} $ |
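The original limit can also be checked numerically (an illustrative scipy sketch):

import math
from scipy import integrate

g = lambda t: (1 + math.sin(2*t))**(1/t)
for x in (0.1, 0.01, 0.001):
    val, _ = integrate.quad(g, 0, x)
    print(x, val / x)   # tends to e^2 ~ 7.389056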
Find a function $f$ such that $\lim_{x\to{}0}{f(x^2)}$ exists, but $ \lim_{x\to{}0}{f(x)}$ does not. | $$\begin{align}f\colon \Bbb R\setminus\{0\}&\to \Bbb R\\x&\mapsto \frac x{|x|}\end{align}$$ |
Is a coherent locally free sheaf isomorphic to its dual? | As Jesko says, already for line bundles there are problems. The easiest example is probably $\mathscr{O}_{\mathbf{P}^n_k}(1)$ and its dual $\mathscr O(-1)$. The first has lots of nonzero global sections, but the second has none; they can't even be non-canonically isomorphic.
One interesting thing to think about, which I saw pointed out in a book review by Kollár, is that in differential geometry one can choose a metric on any vector bundle and use that to identify $E \simeq E^\vee$. So this is a good example of how algebraic geometry differs. |
Intersection of an exponential function and a line | As qbert already commented, welcome to the wonderful world of Lambert function !
As the Wikipedia page will show you, a series of transformations would lead to
$$x=-W\left(-\frac{1}{m}e^{-\frac{b}{m}}\right)-\frac{b}{m}$$ where $W(z)$ is the solution of $z=W(z)\, e^{W(z)}$.
Now, the problem is: how many solutions?
Basically, you are looking for the zero(s) of the function
$$f(x)=e^x-mx-b$$ $$f'(x)=e^x-m$$ $$f''(x)=e^x >0\qquad \forall x$$ The first derivative can vanish only if $m>0$, namely at $x_*=\log(m)$.
$$f(x_*)=-b+m(1- \log (m))$$ So, if $f(x_*) <0$, you would get two roots corresponding to different branches of Lambert function. |
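For a concrete check, scipy's lambertw gives both real branches, and hence both roots when $f(x_*)<0$ (illustrative; $m=3$, $b=1$ is such a case):

import numpy as np
from scipy.special import lambertw

m, b = 3.0, 1.0
for k in (0, -1):                        # the two real branches of W
    x = (-lambertw(-np.exp(-b/m)/m, k) - b/m).real
    print(x, np.exp(x) - (m*x + b))      # residual ~ 0, so e^x = m x + b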