How to prove a regression analytically while having a complicated integral? | $$\int_0^H \frac{e^{-(r^2+h^2)} + \frac{\operatorname{erfc}\sqrt{r^2+h^2}}{ \sqrt{r^2+h^2}}}{r^2+h^2}dh \,\stackrel{u=r^2+h^2}{=} \,\int_{a}^{a+b} \frac{e^{-u} + \frac{\operatorname{erfc}\sqrt u}{\sqrt u}}{2u}\frac{du}{\sqrt{u-a}}$$
where we have simply let $a=r^2$ and $b=H^2$.
We now assume that $b \gg a$, and notice that our integral decays extremely quickly. We can thus estimate this well as
$$\int_a^{\infty} \frac{e^{-u} + \frac{\operatorname{erfc}\sqrt u}{\sqrt u}}{2u\sqrt{u-a}}du \;\stackrel{\text{Mathematica}}{=}\; \frac{\left(\pi- 2\pi^{1/2}\right)\operatorname{erfc}\left(a^{1/2}\right)}{2a^{1/2}}+\frac{e^{-a}}{a}$$ |
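As a quick numerical sanity check of the claimed closed form, here is a short Python sketch (assuming SciPy is available; the substitution $u=a+s^2$ removes the integrable endpoint singularity, and $a=0.5$ is an arbitrary test value):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

a = 0.5  # arbitrary test value, a = r^2 > 0

# After u = a + s^2, du/sqrt(u - a) = 2 ds, so the integrand becomes smooth.
def integrand(s):
    u = a + s**2
    return (np.exp(-u) + erfc(np.sqrt(u)) / np.sqrt(u)) / u

numeric, _ = quad(integrand, 0, np.inf)
closed = (np.pi - 2 * np.sqrt(np.pi)) * erfc(np.sqrt(a)) / (2 * np.sqrt(a)) + np.exp(-a) / a
print(numeric, closed)  # the two values should agree if the closed form is right
```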
Is $(p \to q) \to r$ logically equivalent to $p \to (q \to r)$? | No, compare $(p\implies p)\implies r$, which is equivalent to $r$, and $p\implies (p\implies r)$, which is equivalent to $p\implies r$. |
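For a claim like this, a brute-force truth table settles it; a minimal Python sketch (`imp` encodes material implication):

```python
from itertools import product

def imp(a, b):
    # material implication: a -> b is (not a) or b
    return (not a) or b

for p, q, r in product([False, True], repeat=3):
    lhs = imp(imp(p, q), r)
    rhs = imp(p, imp(q, r))
    if lhs != rhs:
        print(p, q, r, lhs, rhs)  # the rows with p = False, r = False differ
```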
How to define the sign of a function | You want to determine the sign of $y$ given $x$, correct? In other words, you want to figure out when $y > 0$, which means, as you said, that we have to solve the inequality
$$\arctan \frac{x + 1}{x - 3} + \frac{x}{4} > 0.$$
The first step in solving this inequality is finding its "edges"—where the inequality goes from true to false, or vice versa. It turns out that there are only two places where this can happen: where the equation
$$\arctan \frac{x + 1}{x - 3} + \frac{x}{4} = 0$$
is true, and where the expression
$$\arctan \frac{x + 1}{x - 3} + \frac{x}{4}$$
is undefined. So, begin by making a list of all the points where the equation is true, along with all the points where that expression is undefined. You will use this list to determine the sign of $y$ given $x$ at all points. |
second-order indescribable - inaccessible cardinals | If $\kappa$ is not inaccessible, then it is either singular or not a strong limit. Then you can describe it by letting $A$ encode an ordinal $\alpha$ and a cofinal map $\alpha\to\kappa$ in the first case, or $P(S)$ for some $S\in V_\kappa$ and a surjective map $P(S)\to \kappa$ in the second case. We can write a sentence that says something like "the ordinal exists and is the domain of the map" for the singular case and observe this sentence must fail when we restrict to any smaller $V_\lambda.$ This is a first order sentence, so certainly $\Sigma_1^1.$
Conversely, if $\kappa$ is inaccessible, and $(V_\kappa,\in, A)\models\exists X\varphi(X),$ let $X\subseteq V_\kappa$ be a witness. We can construct an elementary submodel $(V_\lambda,\in, A\cap V_\lambda, X\cap V_\lambda)\prec (V_\kappa,\in, A, X)$ for some $\lambda<\kappa$ in the same manner in which the reflection theorem is proved. |
Epsilon-delta proof of an absolute value function's limit for positive case | You can eliminate the possibility of $x - 2 \ge 0$ by forcing $\delta \le 4$, since
$$|x + 2| < \delta \le 4 \implies -4 < x + 2 < 4 \implies -8 < x - 2 < 0.$$
So, your choice of $\delta$ could be
$$\delta = \min \lbrace \varepsilon, 4 \rbrace.$$ |
Showing the set with a supremum has an increasing sequence converging to that supremum. | I am not convinced that your sequence converges to $u$.
You have picked an epsilon and formed a sequence between $u-\epsilon$ and $u$.
How do you know that the sequence converges to $u$?
We know the sequence will converge to a number $l$ such that $u-\epsilon <l\le u$, but how do we know that $l=u$ ?
Why don't you pick the terms of your sequence between $u-1/n$ and $u$? |
Trouble understanding limit of a sequence of complex numbers, James Brown | First of all, it is certainly true that $z_n \to -2$ as you have observed. What the book is trying to say is that the principal logarithm of $z_n$ does not tend to the principal logarithm of $-2$. The principal logarithm is defined using the principal argument, which is always assumed to be a number in $(-\pi, \pi]$. For a point like $-2+it$ with $t$ positive and close to $0$ the argument is less than and close to $\pi$. On the other hand, for a point like $-2-it$ with $t$ positive and close to $0$ the argument is greater than and close to $-\pi$. Hence the limit of $\arg(z)$ as $z \to -2$ does not exist. |
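A two-line numerical illustration (a sketch; NumPy's `angle` returns the principal argument in $(-\pi,\pi]$):

```python
import numpy as np

# approach -2 from just above and just below the real axis
for t in [1e-2, 1e-6]:
    print(np.angle(-2 + 1j * t), np.angle(-2 - 1j * t))  # near +pi and near -pi
```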
Does every pyramid have an inradius? | This will only be true for triangular pyramids (pyramids based on a triangle, rather than a square as for Egyptian pyramids). It is the sphere of the largest radius, or the insphere, contained inside the pyramid. The proof is a first variation argument. |
Fundamental group of complement of $n$ lines through the origin in $\mathbb{R}^3$ | There is a deformation retraction of ($\mathbb{R}^3$ minus $n$ lines through the origin) to (the unit sphere with $2n$ points removed). The $2n$ points are the intersections of the lines with the sphere, the deformation retraction is along the rays from the origin.
As a result, the fundamental group is actually $F_{2n-1}$, not $F_n$. |
Probability on entering direction of a simple random walk | The $\frac14$ limit is of course true, but the conjectured error term seems to be off by a log; the first-order correction can be computed exactly using discrete potential theory. This post will not be self-contained; I refer to these lecture notes for details on the background claims.
Given a function $h:\mathbb{Z}^2\to \mathbb{R}$, define its discrete Laplacian
$$
\Delta h(x):=\frac14 \sum_e (h(x+e)-h(x)),
$$
the sum being over the four neighbours of $x$. If $\Omega\subset \mathbb{Z}^2$ and $v\in \Omega$, let $G_\Omega(\cdot ,v):\mathbb{Z}^2\to \mathbb{R}$ be the Green's function, that is, the unique function satisfying $-\Delta G_\Omega(\cdot,v)=\delta_{\cdot,v}$ in $\Omega$, and $G_\Omega(\cdot,v)\equiv 0$ outside $\Omega$. It is easy to see that if the random walk starts from $x$, then we have $$\mathbb{P}(X_{\tau}=v,X_{\tau-1}=v+e)=G_{\Omega\setminus\{v\}}(x;v+e),$$ where $\tau=\min\{t:X_t\notin\Omega \setminus\{v\}\}$. Also,
$$
G_{\Omega\setminus\{v\}}(\cdot;v+e)=-\frac{G_\Omega(v,v+e)}{G_\Omega(v,v)} G_{\Omega}(\cdot;v)+G_\Omega(\cdot,v+e),
$$
because the right-hand side satisfies the defining conditions of Green's function in $\Omega\setminus\{v\}$.
Now, we will plug into the above identity a family $\Omega_R$ of growing domains, so that the rescaled $\Omega_R$ converge: $$R^{-1}\Omega_R\to\Omega,\quad R\to\infty,$$ say, in the sense of Hausdorff. As explained in the lecture notes, if $h_R$ solves the discrete Dirichlet problem $\Delta h_R\equiv 0$ in $\Omega_R$, $h_R(x)=\varphi(xR^{-1})$ outside $\Omega_R$ for a continuous $\varphi$, then $h_R(Rx)=h(x)+o(1)$, uniformly over compact subsets of $\Omega$, where $h$ solves the (usual) Dirichlet problem in $\Omega$ with boundary conditions $\varphi$. In fact, all the "discrete derivatives" of $h_R$ also converge to the corresponding derivatives of $h$.
Moreover, as explained in the notes, we can construct the discrete full plane Green's function, or discrete analog of the logarithm: the unique function $G_0(\cdot):\mathbb{Z}^2\to\mathbb{R}$ with the properties $-\Delta G_0(\cdot)=\delta(\cdot)$, $G_0(0)=0$, and $G_0(x)=-\frac{1}{2\pi}\log|x|+c+O(|x|^{-2})$ as $|x|\to\infty$. We can write
$$
G_{\Omega_R}(x,v)=G_0(x-v)+\tilde{G}_{\Omega_R}(x,v),
$$
where $\tilde{G}$ solves the discrete Dirichlet problem in $\Omega_R$ with boundary data $\varphi(x)=-G_0(x-v)=\frac{1}{2\pi}\log|x-v|-c+O(R^{-2})=\frac{1}{2\pi}\log R+\frac{1}{2\pi}\log\frac{|x-v|}{R}-c+O(R^{-2})$. From the above remark on convergence of solutions to the Dirichlet problem, we deduce that
$$
G_{\Omega_R}(x,v)=G_0(x-v)+\frac{1}{2\pi}\log R - c + \tilde{g}_\Omega(xR^{-1};vR^{-1})+o(1),
$$
where $\tilde{g}(\cdot,\hat{v})$ solves the Dirichlet problem in $\Omega$ with boundary data $\frac{1}{2\pi}\log |\cdot-\hat{v}|$. Note that if $x$ and $v$ are at distance of order $R$ from each other, this can be written as
$$
G_{\Omega_R}(x,v)=g_\Omega(x,v)+o(1),
$$
where $g_\Omega$ is the Green's function of $\Omega$. What is more, the remark about convergence of discrete derivatives, together with symmetry of Green's function, implies that if $x$ and $v$ are at distance of order $R$ from each other and the boundary, then
$$G_{\Omega_R}(x,v+e)=G_{\Omega_R}(x,v)+R^{-1}\nabla_{2} g(x,v)\cdot e+o(R^{-1}),$$ where $\nabla_{2}$ denotes the gradient in the second argument. Plugging everything together, we arrive at
$$
\frac{G_\Omega(v,v+e)}{G_\Omega(v,v)}=\frac{-\frac14+\frac{1}{2\pi}\log R-c+\tilde{g}_\Omega(vR^{-1};vR^{-1})+o(1)}{\frac{1}{2\pi}\log R-c+\tilde{g}_\Omega(vR^{-1};vR^{-1})+o(1)}=1-\frac{2\pi}{4\log R}+O\left(\frac{1}{(\log R)^2}\right),
$$
and
$$
G_{\Omega\setminus\{v\}}(x;v+e)=\frac{2\pi}{4\log R}g_\Omega(x,v)+R^{-1}\nabla_2 g_\Omega(x,v)\cdot e+o((\log R)^{-1})+o_e(R^{-1}),
$$
where $o(\cdot)$ does not depend on $e$ and $o_e(\cdot)$ is allowed to depend on $e$. It follows that the event $S_R>T_v$ has probability
$$
\frac{2\pi}{\log R}\,g_\Omega(x,v)+ o((\log R)^{-1}),
$$
and the conditional probability to enter $v$ through the move $v+e\to v$ is
$$
\frac{1}{4}+\frac{\log R}{2\pi R}\,\frac{\nabla_2g_\Omega(x,v)\cdot e}{g_\Omega(x,v)}+o\left(\frac{\log R}{R}\right).
$$ |
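The exact identity $\mathbb{P}(X_{\tau}=v,X_{\tau-1}=v+e)=G_{\Omega\setminus\{v\}}(x;v+e)$ can be checked numerically by solving the discrete Dirichlet problem directly. A minimal sketch in Python (assuming NumPy; the square domain, its half-width $R=15$, the target $v=(0,0)$ and the start $x=(7,4)$ are arbitrary test choices):

```python
import numpy as np

R = 15
pts = [(i, j) for i in range(-R, R + 1) for j in range(-R, R + 1)]
omega = [p for p in pts if p != (0, 0)]      # Omega \ {v}, with v = (0, 0)
idx = {p: k for k, p in enumerate(omega)}
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

# -Delta G = delta_pole on Omega \ {v}, with G = 0 outside.
n = len(omega)
A = np.eye(n)
for p, k in idx.items():
    for e in steps:
        q = (p[0] + e[0], p[1] + e[1])
        if q in idx:
            A[k, idx[q]] -= 0.25

x = (7, 4)
vals = []
for e in steps:                               # pole at each neighbour v + e
    b = np.zeros(n)
    b[idx[e]] = 1.0
    G = np.linalg.solve(A, b)
    vals.append(G[idx[x]])                    # P(X_tau = v, X_{tau-1} = v + e)

total = sum(vals)
print([round(v / total, 4) for v in vals])    # conditional entries, each near 1/4
```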
Prove $\sum_{k= 0}^{n} k \binom{n}{k} = n \cdot 2^{n - 1}$ using the binomial theorem | You can avoid calculus altogether by first using the identity $k\binom{n}k=n\binom{n-1}{k-1}$:
$$\begin{align*}
\sum_{k=0}^nk\binom{n}k&=\sum_{k=1}^nk\binom{n}k\\
&=\sum_{k=1}^nn\binom{n-1}{k-1}\\
&=n\sum_{k=0}^{n-1}\binom{n-1}k\\
&=n\sum_{k=0}^{n-1}\binom{n-1}k1^k1^{n-k}\\
&=n(1+1)^{n-1}\\
&=n2^{n-1}
\end{align*}$$
Added: I prefer a combinatorial proof, but that doesn’t use the binomial theorem. For completeness I’ll include one, though I’m pretty sure that I’ve posted one before. You have a pool of $n$ players, and you want to choose a team of one or more players, one of whom you will designate as captain. If you choose a team of $k$ players, there are $\binom{n}k$ ways to pick the team, and then there are $k$ ways to pick the captain, so there are $k\binom{n}k$ captained teams of $k$ players; summing over $k$ gives the total number of captained teams. On the other hand, you can pick the captain first (in $n$ different ways) and then pick any subset of the remaining $n-1$ players to fill out the team. Since there are $2^{n-1}$ subsets of any set of $n-1$ things, there are altogether $n2^{n-1}$ ways to pick a captained team. |
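Both counts are easy to sanity-check numerically; a one-line sketch with `math.comb`:

```python
from math import comb

n = 8
print(sum(k * comb(n, k) for k in range(n + 1)), n * 2 ** (n - 1))  # both 1024
```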
Decomposition of a finite dimensional representation of $C^*$ algebra | If you had infinitely many direct summands, the image would be infinite-dimensional, contradicting your assumption. |
Number Theory involving Pythagorean triplets | You can rewrite
$$n^2+(n-1)^2=k^2$$
to get the negative Pell equation:
$$m^2 - 2k^2 = -1$$
where $m=2n-1$.
If I'm not mistaken, the solutions for $m$ are $1$, $7$, $41$, $239$, $1393$, $8119$, $47321$, $275807$, $1607521$, $9369319$, ...
This is given by the formula:
$$m_i = \frac{1+\sqrt{2}}{2} (3+2\sqrt{2})^i + \frac{1-\sqrt{2}}{2} (3-2\sqrt{2})^i$$
That second term is very small (and negative), so you can just use the first term and round it down to an integer.
$$m_i = \left\lfloor\frac{1+\sqrt{2}}{2} (3+2\sqrt{2})^i\right\rfloor$$ |
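A quick integer check (a sketch; the solutions also satisfy the recurrence $m_{i+1}=6m_i-m_{i-1}$, which avoids floating point entirely):

```python
from math import isqrt

# m solves m^2 - 2k^2 = -1; recover k from the Pell equation and n from m = 2n - 1.
m_prev, m = 1, 7
for _ in range(8):
    k = isqrt((m * m + 1) // 2)
    n = (m + 1) // 2
    print(m, k, n, m * m - 2 * k * k == -1, n * n + (n - 1) ** 2 == k * k)
    m_prev, m = m, 6 * m - m_prev
```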
Every irreducible recurrent Markov chain has a positive recurrent state? | First of all, 6.6.1 in Durrett says
$$\frac{1}{n}\sum_{m=1}^n p^m(x,y) \to \frac{1}{E_yT_y} 1_{\{T_y<\infty\}}.$$
Second of all, the sum over $y$ does not necessarily commute with the limit if there are infinitely many $y$'s. The essence of this statement is that for each $y$ and each $n$, we have some value $a_n(y)$, and $a_n(y) $ converges to some $a(y)$ as $n \to \infty$. You want to conclude that
$$\sum_{ y} a_n(y) \to \sum_{y} a(y)$$
but this is not necessarily the case.
For instance, suppose that the $y$'s take on values in $\mathbb{N}$ and $a_n(y) = \frac{1}{n+y}$. Then $a_n(y) \to 0$ for each $y$, but for any fixed value of $n$ we have
$$\sum_y a_n(y) = \frac{1}{n+1}+\frac{1}{n+2}+\frac{1}{n+3}+\cdots = \infty.$$ |
Prove or disprove the following statement using the definition of big-Θ: | This is correct, for $Ω$ as well. |
About continuous functions on $p$-adic fields | If $f$ is continuous $\mathbb{Q}_p \to X$ then $f \circ \phi$ is continuous $\mathbb{C}_p\to X$ and extends $f$, where $\phi$ sends $x \in \mathbb{C}_p$ to the closest point in $\mathbb{Q}_p$:
let $u(x) = \sup_{t \in \mathbb{Q}_p} v_p(x-t)$ and $$\phi(x) = \cases{ x \text{ if } x\in \mathbb{Q}_p \\ 0 \text{ if } v_p(x)=u(x) \not \in \mathbb{Z} \\ p^{v_p(x)} \min \{n \in \mathbb{Z}_{\ge 0}, v_p(x-p^{v_p(x)} n) = u(x)\} \text{ otherwise}}$$
Since $v_p(x-\phi(x)) = u(x)$ then $v_p(\phi(x)-\phi(y)) \ge v_p(x-y)$ |
Possible values of $\frac{1}{x}+\frac{1}{y}+\frac{1}{z}$ given $x+y+z=1$ | (1) Consider the function $f(x)=x^3-x^2-2x+2=(x-1)(x^2-2)$. It has three real roots, and $x+y+z=-{1\over -1}=1$, ${1\over x}+{1\over y}+{1\over z}=-{2\over-2}=1$.
(2) Consider the function $f(x)=x^3-x^2-40x+20$. Setting the derivative equal to $0$, we get that $x=4$ and $x=-{10\over3}$ are two local extrema. When $x=4$, $f(x)<0$; when $x=-{10\over 3}$, $f(x)>0$; so there are three real roots. We have $x+y+z=1$ and ${1\over x}+{1\over y}+{1\over z}=-{-40\over20}=2$.
(3) Consider the function $f(x)=x^3-x^2-40x+{40\over3}$. We still have the same local extrema as (2) and when $x=4, f(x)<0$ and when $x=-{10\over 3}, f(x)>0$ so there are three real roots. We have $x+y+z=1$ and ${1\over x}+{1\over y}+{1\over z}=-{-40\over{40\over3}}=3$. |
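All three cubics are easy to verify numerically; a sketch with NumPy (each line prints whether all roots are real, their sum, and the sum of their reciprocals):

```python
import numpy as np

for coeffs in [[1, -1, -2, 2], [1, -1, -40, 20], [1, -1, -40, 40 / 3]]:
    r = np.roots(coeffs)
    print(np.all(np.abs(r.imag) < 1e-9), r.sum().real, (1 / r).sum().real)
```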
meridian of a surface of revolution | In my text, a surface of revolution is written as ${\bf x} = (r(t)\cos\theta, r(t)\sin\theta, z(t))$. Then the meridians are the $t$-curves, and the circles of latitude are the $\theta$-curves. We get the $t$-curves by fixing $\theta$, and vice versa.
We know that a curve ${\bf \gamma}$ is a geodesic if its second derivative is everywhere normal to the surface along ${\bf \gamma}$. Thus, what you should do is calculate ${\bf x_1}$ and ${\bf x_2}$, the partial derivatives of the surface with respect to $t$ and $\theta$, and then show that $\langle{\bf \gamma}'',{\bf x_i}\rangle=0$. This implies that $\gamma''$ is normal to the surface, since the normal vector is ${\bf n} = {\bf x_1}\times{\bf x_2}$, and thus ${\bf \gamma}$ is a geodesic.
For the $t$-curves, you just have to prove that $\langle{\bf \gamma}'',{\bf x_i}\rangle=0$.
For the $\theta$-curves, you should calculate $\langle{\bf \gamma}'',{\bf x_i}\rangle$ and determine what condition would make that inner product $0$.
I believe the condition should be that $r'=0$, meaning that the function $r(t)$ is at a local maximum or minimum. This makes sense, as we know that the equator of a sphere is a geodesic, and the equator can be viewed as a $\theta$-curve for the surface of revolution of a half-circle. |
Use Implicit Function Theorem to show that J(x,y)=0 | Differentiating $h(u,v)=0$ with respect to $x$ and $y$, we have:
\begin{equation}
\frac{\partial{h}}{\partial{u}}\frac{\partial{u}}{\partial{x}} + \frac{\partial{h}}{\partial{v}}\frac{\partial{v}}{\partial{x}}=0
\end{equation}
and
\begin{equation}
\frac{\partial{h}}{\partial{u}}\frac{\partial{u}}{\partial{y}} + \frac{\partial{h}}{\partial{v}}\frac{\partial{v}}{\partial{y}}=0
\end{equation}
If we denote the system above by $A$, then it is straightforward to see that
\begin{equation}
\det(A)= \frac{\partial{(u,v)}}{\partial{(x,y)}}
\end{equation}
So, it would be sufficient to prove that $\det(A)=0$ $\forall (x,y) \in R$. This will guarantee that the Jacobian is exactly $0$ too. Note that:
If A has only the trivial solution, then (in $R$):
$$\frac{\partial{h}}{\partial{u}}=\frac{\partial{h}}{\partial{v}} = 0$$
This is not possible because, according to the hypothesis, $h$ is a non constant function. So at least one of these derivatives is not $0$.
$\therefore$ $A$ has a nontrivial solution
$\therefore$ $\det(A)=0$
\begin{equation}
\therefore
J=\frac{\partial{(u,v)}}{\partial{(x,y)}}=0
\end{equation}
$\forall (x,y) \in R$ |
Computational discrete $\log$ for known $x$ | If $x_1$ were known so would $y = y_2^{x_1}$ be. So both should be unknown (which is true if DL is hard).
And if it's hard to compute $g^{x_1\cdot x_2}$ from $y_1$ and $y_2$ alone the same holds for $y =\frac{g^{x_1x_2}}{g^{x_1}}$, because $y_1 \cdot y = g^{x_1 x_2}$ and $y_1$ is given, and multiplication is feasible. The same holds for $y = g^{x_1 x_2}\cdot g^{x_1}$, as we can divide by $y_1 = g^{x_1}$. Division by an integer is also easy (supposing $z$ is indeed in the multiplicative group in which we work). |
Proofs of VC theorem | It is an old question... for the record, the shortest proof I know goes via a lemma of Alain Pajor.
Gil Kalai credits Noga Alon while R. P. Anstee, Lajos Rónyai, and Attila Sali credit others.
P.S. I realize a proof is also in the Wikipedia article on Sauer–Shelah lemma. |
Why do we have circles for ellipses, squares for rectangles but nothing for triangles? | This is a question about linguistics and psychology and teaching, not really about mathematics.
We have special words for things we refer to often. Circles come up way more often than ellipses so it's convenient (and clear historically) that they have their own word. "Square" is much nicer than "equilateral rectangle" and requires a lot less cognitive processing.
I spend a fair amount of time in K-5 classrooms, so I've some experience with the questions you raise.
Yes, kids in elementary school are confused by the fact that a square is a rectangle. So are some elementary school teachers. That's a problem with trying to impose correct formal mathematics on informal everyday speech. It happens a lot - this is an instance (in a way) of whether "or" means "and or" or "or but not and". In mathematics it's always the former. In daily life, sometimes one sometimes the other.
One problem I have with elementary school "geometry" is its focus on categorizing and naming things and its paucity of theorems - or at least observations of properties. I wish kids were taught to notice that the diagonals of a parallelogram bisect each other, or that the medians of a triangle meet at a point, way before they encounter proofs.
And yes, this should be migrated to math education SE. |
Combinatorial Proof of Identity | This is not a valid identity! For example,
$$\binom{7}{6}=7 \neq 7\binom{6}{6}+1\binom{6}{5}=7+6=13$$
Then, you have
$$\begin{align*}
&(k+1)\binom {n-1}{k}+(n-k)\binom {n-1}{k-1}\\
&\qquad= (k+1)\left(\frac{n-2k}{n}\binom{n}{k}+ \binom {n-1}{k-1}\right)+(n-k)\binom {n-1}{k-1}\\
&\qquad=\frac{k(n-2k)}{n}\binom{n}{k}+k\binom {n-1}{k-1}+\frac{n-2k}{n}\binom{n}{k}+ \binom {n-1}{k-1}+n\binom {n-1}{k-1}-k\binom {n-1}{k-1}\\
&\qquad=(n-2k)\binom{n-1}{k-1}+\frac{n-2k}{k}\binom{n-1}{k-1}+(n+1)\binom{n-1}{k-1}\\
&\qquad=\left(n-2k+\frac{n-2k}{k}+n+1\right)\binom{n-1}{k-1}\\
&\qquad=\frac{2nk+n-k-2k^2}{k}\binom{n-1}{k-1}\\
&\qquad=\frac{2nk+n-k-2k^2}{n}\binom{n}{k}
\end{align*}$$
Setting this equal to $\binom{n}{k}$ forces $2nk+n-k-2k^2=n$, hence $k=0$ or $k = \frac{2n-1}{2}=n-\frac12$.
The last solution is not valid, as $n \in \Bbb{N} \Rightarrow n-\frac12=k \notin \Bbb{N}$, so $k!$ is not defined. The solution $k=0$ is not acceptable either, as $(-1)!$ doesn't exist; hence there are no solutions in $\Bbb{N}$. |
How do you check relations mathematically? | We are proving: if $A_i\subset B_i$ then $\bigcup_iA_i\subset\bigcup_iB_i$.
Take $x\in\bigcup_iA_i$ and deduce that there is at least some $A_k$ such that
$x\in A_k$. So
$$x\in A_k\subset B_k\subset\bigcup_iB_i.$$
Hence $\bigcup_iA_i\subset\bigcup_iB_i$. |
suppose $G$ is strongly regular graph srg$(n,k,\lambda,\mu)$,prove that $k\geq 2\lambda -\mu +3 $. | We can suppose that $\mu\geq 1$, therefore there exist three vertices $u_1,u_2,u_3$ such that
$u_2u_3$ is a non-edge, but $u_1u_2$ and $u_1u_3$ are edges. Denote $a$ the number of vertices which are common neighbours of the chosen vertices $u_1,u_2,u_3$. Clearly, $a\leq \mu-1$. (Minus one, because $u_1$ is a common neighbour of $u_2$ and $u_3$.)
For example, after drawing a picture, it is easy to see that the set $B$ containing vertices, which are joined to $u_1$, but are not neighbours of $u_2$ nor $u_3$ has cardinality equal to $k-(a+2(\lambda-a)+2)$.
Obviously, $|B|\geq 0$.
Thus,
$$k\geq a + 2(\lambda - a) + 2$$
and using $a \leq \mu-1$ we, finally, obtain
$$k \geq 2\lambda +3-\mu.$$ |
Cauchy sequence and bijective function | Your argument is circular and the claim is false.
In the same spirit as this answer, which is only missing the detail of $X$ being complete, call $R_n=\{n\}\times[0,\infty)$, $C_n=\{(x,y)\in\Bbb R^2\,:\, (x-n)^2+(y-1)^2=1\}$, $L=\Bbb R\times\{0\}$ and consider the metric space $X\subseteq \Bbb R^2$ $$X=L\cup\left(\bigcup_{n\ge 0} R_{-4n}\right)\cup\left(\bigcup_{n\ge 1} C_{4n}\right)$$
Then consider this map $$f(x,y)=\begin{cases} (x+4,y)&\text{if }(x,y)\notin R_{0}\\ \left(x+4+\sin\left(4\arctan y\right), 1-\cos(4\arctan y)\right)&\text{if }(x,y)\in R_0\end{cases}$$
In other words, $f$ translates everything on the right by four, except the last closed ray $R_0$, which is wound counterclockwise around the first circle $C_4$. You can check $X$ is closed in $\Bbb R^2$, and thus complete, and that the function $f:X\to X$ is uniformly continuous and bijective, but $f^{-1}$ is not continuous. |
Understanding John Lee's proof of the Transversality Homotopy Theorem. Restriction of a smooth map to a coordinate does not change the differential? | A submersion has full rank. Suppose
$$
F : R^k \times R \to R^p
$$
is differentiable, and $F'(a, b)$ has rank $p$. Then $F$ is a submersion at $(a, b)$, right?
Now let $G(x) = F(x, b)$. Then $G$ is a map from $R^k \to R^p$.
Suppose I tell you that $G'(a)$ has rank $k$. What can you tell me about the rank of $F'(a, b)$? Well, it's at least the rank of $G'(a)$ because
$$
F'(a, b) = \pmatrix{G'(a) & * \\ * & *}
$$
so at least $k$ of its columns (the ones containing $G'(a)$) are linearly independent.
In Lee's case, we know that $G'(a)$ has rank $p$ (for every choice of $b$!). Hence $F'(a,b)$ has rank $p$ for every $(a,b)$.
So that's the proof for euclidean space, but for your manifold, everything is locally euclidean, so the chain rule finishes out the argument. |
If $a^{7!} +b^{8!} +c^{9!} +d^{10!} =x$ where a,b,c and d are natural numbers that are not multiples of 10, the..... | Notice $4$ divides all of $7!, 8!, 9!,10!$.
Euler's theorem says that if $\gcd(k,10) = 1$ then $k^4 \equiv 1 \pmod {10}$ so the last digit of $k^{v!}$ is $1$ if $k$ is an odd number that doesn't end with $5$.
If you don't know Euler's theorem or modular arithmetic, notice that if $k = 10w + v$ where $v=\pm 1, \pm 3$ then $k^4 = 10^4w^4 + 4*10^3w^3v + 6*10^2w^2v^2 + 4*10w*v^3 + v^4$, so the last digit of $k^4$ is the same as the last digit of $v^4$, and $v^4 =1$ or $v^4 = 81$. So the last digit is one. So $k^{4m}$ will have the last digit of $1$ if $k$ ends with $1, 3, 10-3=7, $ or $10-1=9$.
The other things to worry about are if $k$ is even or $k$ ends in $5$.
If $k$ is ends with $5$ then $k^m$ ends with $5$ (that's obvious isn't it?)
And if $k$ is even... well, by the Chinese remainder theorem $k^w\equiv 0 \pmod 2$ and, by Euler's theorem, $k^4\equiv 1 \pmod 5$, so $k^4 \equiv 6\pmod {10}$.
If you don't know Euler's th or CRT... well,... if $k =2j$ is even then $k^4 = 2^4 j^4 = 16*j^4$. If $j$ is odd and not a multiple of $5$ then $j^4$ ends with $1$ and $k^4$ ends with $6$. If $j$ is even just repeat: $j = 2l$ and $k^4 = 16*16*l^4$ and that ends with $6$ if $l$ is odd and if $l$ is even repeat as often as necessary.
So you have the last digits are $1, 5$ or $6$.
So we can have $4$ ones and end with $4$.
We can have $3$ ones and a five and end with $8$
We can have $3$ ones and a six and end with $9$.
We can have $2$ ones and $2$ fives and end with $2$ and .... so on.
Edit: an improved way of enumerating the possible sums $\bmod 10$
Once you render the units digit of each term as $\in\{1,5,6\}$, you can consider the sum separately $\bmod 5$ and $\bmod 2$.
In $\bmod 5$ the residue matches the number of $1$s plus the number of $6$s in the sum. This can be any of $0,1,2,3$ or $4$.
In $\bmod 2$ you can have either residue $0$ or $1$ by swapping a $1$ for a $6$ or vice versa, provided that at least one of these is included. But that requires an overall residue of $1,2,3$, or $4\bmod 5$ from the above. The combination with residue $0\bmod 5$, $5+5+5+5$, does not allow this swapping and thus forces a residue of $0\bmod 2$.
So all combinations of residues $\bmod 2$ and $\bmod 5$ can occur except $(1\bmod 2, 0\bmod 5)$, allowing all units digits except $5$. |
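A brute-force confirmation over one representative base per allowed last digit (a sketch; `pow(x, e, 10)` reduces the huge exponents modulo $10$):

```python
from itertools import product
from math import factorial

exps = [factorial(k) for k in (7, 8, 9, 10)]
bases = range(1, 10)  # one representative for each allowed last digit

digits = {sum(pow(x, e, 10) for x, e in zip(abcd, exps)) % 10
          for abcd in product(bases, repeat=4)}
print(sorted(digits))  # every digit except 5
```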
Evaluate $\int _0^1\:\frac{2-x^2}{(1+x)\sqrt{1-x^2}} \, dx$ | Let $x=\sin t$. Then,
$$\int _0^1 \frac{2-x^2}{\left(1+x\right)\sqrt{1-x^2}}dx
=\int _0^1\left(\frac{1}{\left(1+x\right)\sqrt{1-x^2}}
+\frac{\sqrt{1-x^2}}{1+x}\right)dx$$
$$=\int_0^{\pi/2} \left( \frac{1}{1+\sin t}+1-\sin t\right)dt$$
$$=\left(-\frac{\cos t}{1+\sin t} + t + \cos t\right)_0^{\pi/2}=\frac\pi2$$ |
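A one-line numerical check (a sketch assuming SciPy; `quad` copes with the integrable endpoint singularity at $x=1$):

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: (2 - x**2) / ((1 + x) * np.sqrt(1 - x**2)), 0, 1)
print(val, np.pi / 2)  # both approximately 1.5707963
```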
Calorimetry Question | $Q = C\,\Delta T$
$Q = 24.48$ kJ
$C = 440$ kJ/°C
$\Delta T = {}$?
$24.48 = 440\,\Delta T$
$\Delta T = 24.48/440 \approx 0.0556$
Therefore the change in temperature is approximately $0.0556$ degrees Celsius (equivalently, kelvins). |
Example of a characteristic zero local ring with a quotient of positive characteristic | An example will be the ring $\mathbb{Z}[x]$ localized at $(x,2)$, so $\mathbb{Z}[x]_{(x,2)}$. An important fact here that makes this a reasonable example to come up with is that the localization of a ring $R$ at a prime ideal $P$ will be a local ring $R_P$, the max ideal being $P_P$, and furthermore the prime ideals of $R_P$ will all be of the form $Q_P$ for some prime ideal $Q$ of $R$ that is contained in $P$. So for our particular example, we're looking at the chain of prime ideals $(0) \hookrightarrow (2) \hookrightarrow (2,x) \hookrightarrow \mathbb{Z}[x]$. The ideal $(2)_{(2,x)}$ will be prime in $\mathbb{Z}[x]_{(x,2)}$, and since $\mathbb{Z}[x]_{(x,2)}$ is still unital, the quotient of $\mathbb{Z}[x]_{(x,2)}$ by $(2)_{(2,x)}$ will have characteristic $2$.
The ring $\mathbb{Z}_{(2)}[\![x]\!]$ mentioned in the question is an example too, for nearly the same reason: the ideal $(x,2)$ is maximal and $(2)$ is prime. |
Determine the points on the parabola $y=x^2 - 25$ that are closest to $(0,3)$ | Just set up a distance squared function:
$$d(x) = (x-0)^2 + (x^2-25-3)^2 = x^2 + (x^2-28)^2$$
Minimize this with respect to $x$. It is easier to work with the square of the distance rather than the distance itself because you avoid the square roots which, in the end, do not matter when taking a derivative and setting it to zero. |
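A quick symbolic check of this recipe (a sketch using SymPy):

```python
import sympy as sp

x = sp.symbols('x', real=True)
d = x**2 + (x**2 - 28)**2            # squared distance to (0, 3)
crit = sp.solve(sp.diff(d, x), x)    # [0, -sqrt(110)/2, sqrt(110)/2]
print([(c, sp.simplify(d.subs(x, c))) for c in crit])
# x = +-sqrt(110)/2 gives d = 111/4 (the minimum); x = 0 gives d = 784
```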
Linear matrix equation involving $\sum_i A_i X B_i$ | Each summand in the summation sign is symmetrised and $C$ is symmetric. Hence $X$ must be symmetric and the equation is equivalent to
$$
X+\sum_i\left(A_iXB_i+B_i^TXA_i^T+B_iXA_i+A_i^TXB_i^T\right)=C,
$$
which can be rewritten as (see Wikipedia)
$$
\left[I+\sum_i\left(B_i^T\otimes A_i+A_i\otimes B_i^T+A_i^T\otimes B_i+B_i\otimes A_i^T\right)\right]\operatorname{vec}(X)=\operatorname{vec}(C).
$$
Call the matrix inside the pair of square brackets $M$. The equation $M\operatorname{vec}(X)=\operatorname{vec}(C)$ is solvable if and only if $MM^+\operatorname{vec}(C)=\operatorname{vec}(C)$, where $M^+$ denotes the Moore-Penrose pseudoinverse of $M$. In case it is solvable, $\operatorname{vec}(X)=M^+\operatorname{vec}(C)$ is always a solution. |
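A small randomized check of this vectorization (a sketch with NumPy; `vec` stacks columns, hence `order='F'`, and the sizes are arbitrary test choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
As = [rng.standard_normal((n, n)) for _ in range(2)]
Bs = [rng.standard_normal((n, n)) for _ in range(2)]

S = rng.standard_normal((n, n))
X_true = S + S.T                                    # symmetric test solution
C = X_true + sum(A @ X_true @ B + B.T @ X_true @ A.T
                 + B @ X_true @ A + A.T @ X_true @ B.T
                 for A, B in zip(As, Bs))

M = np.eye(n * n) + sum(np.kron(B.T, A) + np.kron(A, B.T)
                        + np.kron(A.T, B) + np.kron(B, A.T)
                        for A, B in zip(As, Bs))

X = (np.linalg.pinv(M) @ C.flatten(order='F')).reshape(n, n, order='F')
print(np.allclose(M @ X.flatten(order='F'), C.flatten(order='F')))  # True
```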
Prove $A_1, A_2, \ldots,A_{\ell-1}, A_{\ell+1}\ldots \ldots, A_{m} $ in $\mathbb R^m$ are linearly independent viewed as vectors in $\mathbb R^{m-1}$? | This is not true. Let $m=3$ and $A_1=(0,2,3),~ A_2=(1,1,1),~ A_3=(0,1,1)$. Then
\begin{align}
\det(A_1,A_2,A_3)=\begin{vmatrix}
0 & 1 & 0\\
2 & 1 & 1\\
3 & 1 & 1
\end{vmatrix}
=1\neq 0.
\end{align}
If we delete the first components of $A_2$ and $A_3$, the system $\{A_2,A_3\}$ won't be linearly independent in $\mathbb R^2$.
Let $A_1=(1,2,3),~A_2=(0,1,1),~A_3=(-1,1,1)$. Then
$$B=\begin{bmatrix}
1&0&-1\\ 2&1&1\\ 3&1&1
\end{bmatrix}\quad \text{and}\quad
B^{-1}=\begin{bmatrix}
0&-1&1\\ 1&4&-3\\ -1&-1&1
\end{bmatrix}.$$
The first row of the matrix $B^{-1}[A_2,A_3]$ consists only of zeros, so the system $\{A_2,A_3\}$, after deleting the first components, is not linearly independent. |
Probability of at least one 2-hop neighbor | Let $X_1,X_2,\dots X_n$ be the sets of neighbours of $A_1,A_2,\dots,A_n$.
We need to calculate the probability that each of these events happens: $|X_1\cap X_j|=1$.
If we condition on the size of $X_1$ being equal to $k$, then the events are independent.
The probability that $|X_1\cap X_2|=1$ is clearly $kp(1-p)^{k-1}$.
The probability that $X_1$ has size $k$ is $\binom{m}{k}p^k(1-p)^{m-k}$
So the answer is $p\sum\limits_{k=0}^m k\binom{m}{k}p^k(1-p)^{m-k}(1-p)^{k-1}=p(1-p)^{m-1}\sum\limits_{k=0}^mk\binom{m}{k}p^k$.
Notice that $f(x)=(x+1)^m=\sum\limits_{k=0}^m\binom{m}{k}x^k$ so $pf'(p)=\sum\limits_{k=0}^m k\binom{m}{k}p^k$.
Hence what we want is $p^2(1-p)^{m-1}f'(p)$ |
Derivatives of implicit functions | $$F(x)=\left(\frac{1}{3x-f(x)}\right)^4=\frac{1^4}{(3x-f(x))^4}=(3x-f(x))^{-4}$$
Then,
$$\frac{d}{dx}F(x)=-4\cdot(3x-f(x))^{-5}\cdot(3-f'(x))$$ |
Symmetric difference of sets and convergence in integration. | We have $\def\norm#1{\left\|#1\right\|_1}\def\abs#1{\left|#1\right|}$
\begin{align*}
\abs{\int_{A_n} f_n \, dm - \int_A f\, dm} &\le \abs{\int_{A_n} (f_n- f)\, dm} + \abs{\int_{A_n} f\, dm - \int_A f \, dm}\\
&\le \int_{A_n} \abs{f_n-f}\, dm + \int_{A_n \Delta A} \abs{f}\, dm\\
&\le \norm{f_n -f} + \norm{f \cdot \chi_{A_n \Delta A}}\\
&\to 0
\end{align*}
Where $\chi_{A_n \Delta A}$ denotes the characteristic function. For the second term on the second to last line, note that $\chi_{A_n \Delta A}f \to 0$ almost everywhere and $\abs f$ is an integrable bound, hence the dominated convergence theorem gives $\norm{f \chi_{A_n \Delta A}} \to 0$. |
Computing the index $\left(\mathbb Z\left[\frac{1+\sqrt{5}}{2}\right]:\mathbb Z \left[\sqrt{5}\right]\right)$? | Let $\theta=\frac{1+\sqrt{5}}{2}$.
Then $\mathbb Z[\sqrt{5}]=\mathbb Z 1 + \mathbb Z 2\theta$ and $\mathcal O = \mathbb Z 1 + \mathbb Z \theta$.
Therefore, $\left(\mathcal O:\mathbb Z [\sqrt{5}]\right)= 2$.
Equivalently, write
$$
\pmatrix{1 \\ \sqrt 5}
=
\pmatrix{\hphantom{-}1 & 0 \\ -1 & 2}
\pmatrix{1 \\ \frac{1+\sqrt{5}}{2}}
$$
and note that the determinant of the matrix is $2$. |
Sample space probability | HINT
Let $B$ be a discrete uniform random variable, with values in $\{1,2,3,\ldots,365\}$. Each person's birthday is then one of these.
You have a group of $n$ people, and hence $n$ independent birthdays $B_1, B_2, \ldots, B_n$. How many pairs $(i,j)$ can you find such that $B_i=B_j$? |
Prove that a directed graph with no cycles has at least one node of indegree zero | Suppose that there exists a (finite) graph with no cycles in which there are no nodes of indegree $0$. Then each node has indegree $1$ or higher. Pick any node; since its indegree is $1$ or higher, we can go to one of its parent nodes. This node also has indegree $1$ or higher, so we can keep doing this procedure; since there are only finitely many nodes, we eventually arrive at a node we have already visited. This proves that there exists a cycle, which contradicts our initial assumption. So we have proved that every directed graph with no cycles has at least one node of indegree zero. |
Complicated Planar Geometry | Take the diameter of the circumcircle to be $2$. Let $$\angle MAC = \angle MBC = \alpha, \angle MBA = \angle MCA = \beta, \angle BMC = \angle BMA = \angle BAC = \angle ABC = \gamma.$$ Observe that $$\alpha + \beta + 2\gamma = 180^\circ$$
Then, by the law of sines, we have $$MC = \sin \alpha, MA = \sin \beta, MB = \sin(\beta + \gamma), AB = \sin \gamma.$$
now, $$\begin{align}MB^2 - AB^2 &= \sin^2(\beta+\gamma) - \sin^2\gamma \\
&=(\sin(\beta+\gamma)-\sin\gamma)(\sin(\beta+\gamma)+\sin \gamma)\\
&=2\cos(\gamma + \beta/2)\sin (\beta/2)2\sin(\gamma + \beta/2)\cos (\beta/2)\\
&=\sin\beta\sin(2\gamma + \beta)\\
&=\sin\beta\sin \alpha\\
&=MA \cdot MC \end{align}$$ |
$G$ is a nonabelian finite group, then $|Z(G)|\leq \frac{1}{4}|G|$ | Hint: If $G/Z(G)$ is cyclic, then $G$ is abelian. |
Finding the homogeneous system | Since
$$
\det\begin{bmatrix} 1 & 2 & 3 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \\ \end{bmatrix}=-1 \neq 0
$$
in $\mathbb{Z}_5$, the vectors $(1,2,3)$, $(1,1,0)$ and $(1,1,1)$ are linearly independent. Thus, $\mathrm{span}(B)=(\mathbb{Z}_5)^3$.
Thus, your task is to find a matrix $A$ such that $A\mathbf{x}=\mathbf{0}$ for all $\mathbf{x} \in \mathrm{span}(B)$ [i.e., for all $\mathbf{x} \in (\mathbb{Z}_5)^3$]. (The important bit here is the "for all".) |
What does 'factors through' mean in this context? | If $p(z)$ has no roots at all then it defines a map $\mathbb C \to \mathbb C - \{0\}$. Therefore, in this case we can write $p\vert_{S^1(R)}$ as a composition
$$
S^1(R) \stackrel{i}{\to} \mathbb C \stackrel{p(z)}{\to} \mathbb C - \{0\}
$$
where the first map is the inclusion. Since $\mathbb C$ is contractible, the map $p(z)$ is nullhomotopic so that the composition, which is $p\vert_{S^1(R)}$, is also nullhomotopic.
The reason it's called factoring is that we've decomposed the original map $p\vert_{S^1(R)}$ into a composition $p\circ i$.
Note that even if $p$ has roots, $p\vert_{S^1(R)}$ is still $p \circ i$ but now the functions in the homotopy may have 0 as a value. |
Definition of isomorphism of graded rings | An isomorphism of graded rings (or anything else really) is a homomorphism with an inverse such that the inverse is also a homomorphism.
You can prove that a ring isomorphism that is a homomorphism of graded rings is an isomorphism of graded rings. To see this, think about where $f^{-1}$ maps $S_n$, or alternatively think about what it would mean for $f(R_n)$ to not equal $S_n$. |
Solving a 2nd order homogenous differential equation with complex coefficient | It's $\Delta= -(b^2+4ac)=i^2(b^2+4ac)$ and you can also use a change of variable:
$$a\ddot{x}(t)+ib\dot{x}(t)+cx(t)=0$$
Substitute $y=it$
$$i^2a\dfrac {d^2x}{dy^2}+i^2b\dfrac {dx}{dy}+cx=0$$
$$a\dfrac {d^2x}{dy^2}+b\dfrac {dx}{dy}-cx=0$$
Solve and substitute back $y=it$. |
Bounded stochastic process is uniformly bounded. | In general, without further assumptions, it is not possible to conclude uniform boundedness. For example, let $X_t(\omega)=Y(\omega)$ where $Y$ is an unbounded random variable (that is, $Y\notin \mathbb L^\infty$, or $\mathbb P(|Y|\gt n)>0$ for all $n$). For example, if $\Omega$ is $(0,1)$ endowed with the Lebesgue measure, take $Y\colon\omega\mapsto 1/\omega$. |
A set that is open in any metric space that contains it | The question makes sense only if $X$ is metrizable, in which case the answer is yes.
If $X\ne\varnothing$, fix $p\in X$. Let $Y_0=\{y_n:n\in\Bbb N\}$ be a set of distinct points not in $X$, and let $Y=X\cup Y_0$. Let $d$ be a metric on $X$, and define a metric $d_1$ on $Y$ as follows.
$$d_1(x,y)=\begin{cases}
d(x,y),&\text{if }x,y\in X\\
2^{-n},&\text{if }\{x,y\}=\{p,y_n\}\\
|2^{-n}-2^{-m}|,&\text{if }\{x,y\}=\{y_n,y_m\}\\
2^{-n}+d(x,p),&\text{if }x\in X\setminus\{p\}\text{ and }y=y_n\\
2^{-n}+d(y,p),&\text{if }y\in X\setminus\{p\}\text{ and }x=y_n\;.
\end{cases}$$
It’s not hard to check that $d_1$ is a metric on $Y$ that agrees with $d$ on $X$. However, $p\in\operatorname{cl}_YY_0$, so $X$ is not open in $Y$. (In case the details obscure what’s really going on, I’ve just added a simple sequence $\langle y_n:n\in\Bbb N\rangle$ converging to $p$.)
Added: The same idea works in general topological spaces. Just declare a set $V\subseteq Y$ to be open iff either it's an open subset of $X$ that does not contain $p$; it's a subset of $Y_0$; or $p\in V$, $V\cap X$ is open in $X$, and there is an $n_0\in\Bbb N$ such that $V\supseteq\{y_n: n\ge n_0\}$. (If you're familiar with quotient spaces and the one-point compactification, this $Y$ is homeomorphic to the quotient of $X\sqcup Y_0^*$, where $Y_0^*$ is the one-point compactification of the discrete space $Y_0$, obtained by identifying $p$ and the point at infinity in $Y_0^*$.) |
solution of a differential equation | The problem is that at $y=-1$, you're not allowed to divide by $1+y$.
Thus $y^\prime = (1+y)x$ means that either $1+y=0$ or $\frac{y^\prime}{1+y} = x$.
If you really insist on having a heuristic argument consider:
$$\ln|1+y| = \int \frac{\mathrm{d}y}{1+y} = \int x\,\mathrm{d}x = \frac{1}{2}x^2+c,$$
thus $y = -1\pm e^ce^{\frac{1}{2}x^2}$. Your solution is now given by picking $c=-\infty$, so that $e^c=0$. |
Jordan-Hölder composition series of the additive group $\mathbb{Z}/a\mathbb{Z}$ of integers modulo a | $C_a = \mathbb{Z}/a\mathbb{Z}$ is a cyclic group of order $a$, so every subgroup is cyclic of order that divides $a$. Suppose that $a=p_1\dots p_n$ (not necessarily distinct). Then you just pick the sequence of cyclic groups
$$ 1 \lhd C_{p_1} \lhd C_{p_1p_2} \lhd \dots \lhd C_{p_1p_2\dots p_{n-1}} \lhd C_a$$
In your example, this would correspond to the sequence $(2,2,3,3)$
$$ 1 \lhd C_2 \lhd C_{4} \lhd C_{12} \lhd C_{36}$$
which, if $C_a = \langle x \rangle$, corresponds to the groups generated by
$$ \langle x^{36} \rangle \lhd \langle x^{18}\rangle \lhd \langle x^{9}\rangle \lhd \langle x^{3}\rangle \lhd \langle x^1\rangle $$ |
Basis of Complex Eigenspace | We have the matrix
$$\begin{bmatrix}1&5\\-2&3\end{bmatrix}$$
We find the eigenvalues from $|A - \lambda I| = 0$ as
$$\lambda_{1,2} = 2 \pm 3i$$
We find the eigenvector of $\lambda_1 = 2 + 3i$, using the RREF of $[A - \lambda I]v_1 = 0$ as
$$\begin{bmatrix}
1 & \dfrac{-1 + 3 i}{2} \\ 0 & 0 \\ \end{bmatrix}v_1 = \begin{bmatrix}
0 \\ 0 \end{bmatrix}$$
Eigenvectors are not unique, and we can write (verify that each $v_1$ works)
$$v_1 = \begin{bmatrix} \dfrac{1 - 3 i}{2} \\ 1 \end{bmatrix}~~\text{or}~~ v_1 = \begin{bmatrix} 1 - 3 i \\ 2 \end{bmatrix}$$
For $\lambda_2's$ eigenvector, we just take the complex conjugate of $\lambda_1's$.
In other words, your result and the book's result are equivalent and both are correct. |
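A quick check with NumPy (a sketch; eigenvectors are only determined up to scale, so we test $Av=\lambda v$ directly):

```python
import numpy as np

A = np.array([[1, 5], [-2, 3]])
print(np.linalg.eigvals(A))                 # [2.+3.j  2.-3.j]

v1 = np.array([1 - 3j, 2])
print(np.allclose(A @ v1, (2 + 3j) * v1))   # True
```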
Find $5x^3+11y^3+13z^3=0$ integer solutions | Look at cubes modulo $13$ and show that $x$ or $y$ is divisible by $13$, then follow the proof for the equation $x^3 - 3y^3 - 9z^3 = 0$. |
Problem on the Cech Cohomology of a Sheaf over a Paracompact Space | There is indeed a mistake in the theorem. The correct version should be :
if $\dim\mathcal{N(U)}<n$, then $\check H^q(\mathcal{U,F})=0$ for all $q\geq n+1$.
In particular, if there exists a good cover $\mathcal{U}$ for $\mathcal{F}$ (that is, one such that any finite intersection $V=U_1\cap...\cap U_n$ satisfies $H^q(V,\mathcal{F})=0$ for all $q>0$) such that $\dim\mathcal{N(U)}<n$, then $\check H^q(\mathcal{F})=0$ for all $q\geq n+1$. |
Sobolev Spaces and Measure Theory | The points in the Sobolev space are not functions, they are equivalence classes of functions. In particular, you are allowed to alter your functions by sets of measure zero. So when you write $u \in W_0^{1,p}(\Omega)$, what you're actually doing is saying that $u$ is a representative of a class of functions in the Sobolev space. But I'm allowed to change $u$ on any set of measure 0 and not change the equivalence class. In particular, I can change the value of $u$ on any point of the form $(p,q)$ for rational $p,q$ and stay inside the equivalence class.
What does this mean for the problem? It means that even if the equivalence class of $u$ did happen to contain a function $v$ so that $\Omega_a(v)$ contains an open set and has positive measure, I could simply redefine $v(p,q) = a+1$ for every rational point $(p,q) \in \Omega_a(v)$ and this would be in the equivalence class of $u$. If $v'$ is this new, redefined function, then certainly $\Omega_a(v')$ has no open subsets... I've punched out all the rational points! But it also has positive measure, since I changed $v$ on a set of measure zero. So your statement does not hold for this function $v'$. |
How to find the expectation of a Poisson process related variable | Tried an approach that I thought would be simpler, but ended up with a bigger mess than I thought. Posting here in case someone can do something with it, but I'll probably delete it later...
$e^t Z = \sum_{k=1}^\infty e^{Y_k} \mathbf{1}_{\{Y_k \le t\}}$
By Tonelli's theorem, it suffices to compute $E[e^{Y_k} \mathbf{1}_{\{Y_k \le t\}}]$ and sum over $k$.
$$E[e^{Y_k} \mathbf{1}_{\{Y_k \le t\}} \mid N(t) = n] = E[e^{Y_k} \mid N(t) = n] \mathbf{1}_{\{k \le n\}}.$$
Conditioned on $N(t)=n$, the random variable $Y_k$ has the same distribution as the $k$th order statistic of $n$ i.i.d. $\text{Uniform}(0,1)$ random variables. (See Theorem 1.5 of these notes.) In particular, the conditional distribution is $\text{Beta}(k, n+1-k)$. Thus the expectation we want can be evaluated using the moment generating function. We have
$$E[e^{Y_k} \mathbf{1}_{\{Y_k \le t\}} \mid N(t) = n]
=\mathbf{1}_{\{k \le n\}}
\left(1 + \sum_{j=1}^\infty \frac{1}{j!} \prod_{r=0}^{j-1}\frac{k+r}{n+1+r}\right) \tag{1}$$
\begin{align}
E[e^{Y_k} \mathbf{1}_{\{Y_k \le t\}}]
&= \sum_{n=0}^\infty P(N(t)=n) E[e^{Y_k} \mathbf{1}_{\{Y_k \le t\}} \mid N(t) = n]
\\
&= \sum_{n=k}^\infty e^{-\lambda t}\frac{(\lambda t)^n}{n!}
\left(1 + \sum_{j=1}^\infty \frac{1}{j!} \prod_{r=0}^{j-1}\frac{k+r}{n+1+r}\right).
\end{align}
Then sum over all $k \ge 1$ and multiply by $e^{-t}$. |
Direct evaluation of complete elliptic integral | The first step towards the goal is to perform a rational transformation. As in the previous question, using the symmetry $\phi \to (2\pi - \phi)$ and the change of variables $t = \sin^2\left(\phi/2\right)$:
$$
\mathcal{I} = \int_0^{2 \pi} \frac{1-x \cos(\phi)}{\left(1-2 x \cos(\phi) + x^2\right)^{3/2}} \mathrm{d} \phi = \frac{2}{(1-x)^2} \int_0^1 \frac{1+\frac{ 2 x}{1-x} t}{1+\frac{4x}{(1-x)^2} t} \frac{\mathrm{d} t}{\sqrt{ t(1-t)\left( 1+\frac{4x}{(1-x)^2} t\right)}}
$$
Now, perform a rational substitution:
$$
t = \frac{1-x}{2} \frac{y+1}{1- x y} \qquad \text{with} \qquad \mathrm{d} t = \frac{1-x^2}{( 1-x y)^2} \frac{\mathrm{d} y}{2}
$$
which maps $0 <t<1$ into $-1<y<1$. With it:
$$
t (1-t) \left( 1+\frac{4x}{(1-x)^2} t\right) = \frac{(1+x)^2}{4 (1-x y)^4} (1-y^2) (1- x^2 y^2)
$$
and
$$
\frac{1+\frac{ 2 x}{1-x} t}{1+\frac{4x}{(1-x)^2} t} = \frac{1-x}{1+ x y}
$$
Combining, and using $1-x>0$ and $1-x y>0$:
$$
\mathcal{I} = \frac{2}{1-x} \int_{-1}^1 \frac{(1-x)^2}{(1+ x y)} \frac{\mathrm{d} y}{\sqrt{(1-y^2)(1-x^2 y^2)}} = 2(1-x) \int_{-1}^1 \frac{1}{1 + x y} \frac{\mathrm{d} y}{\sqrt{(1-y^2)(1-x^2 y^2)}}
$$
The integral above is reduced to the rational form with substitution $y = \operatorname{sn}(u| x^2)$, where $\operatorname{sn}(u|m)$ stands for the Jacobi elliptic sine function. Indeed:
$$
(1-y^2)(1-x^2 y^2) = \left( 1- \operatorname{sn}^2(u|x^2)\right)\left( 1- x^2 \operatorname{sn}^2(u|x^2)\right) = \operatorname{cn}^2(u|x^2) \operatorname{dn}^2(u|x^2)
$$
$$
\mathrm{d} y = \operatorname{cn}(u|x^2) \operatorname{dn}(u|x^2) \mathrm{d} u
$$
The substitution maps $-1<y<1$ into $-K(x^2) < u < K(x^2)$, where $K(x^2)$ is the complete elliptic integral of the first kind, and both $\operatorname{cn}(u|x^2) > 0$ and $\operatorname{dn}(u|x^2) > \sqrt{1-x^2} > 0$ on this interval:
$$
\mathcal{I} = \int_{-K(x^2)}^{K(x^2)} \frac{2(1-x) \mathrm{d u}}{1 + x \operatorname{sn}(u| x^2) } = \frac{2}{1-x^2} \left. \left(\operatorname{E}\left( \operatorname{am}(u|x^2), x^2\right) + x \frac{\operatorname{cn}(u|x^2) \operatorname{dn}(u|x^2) }{1+ x \operatorname{sn}(u|x^2)} \right) \right|_{-K(x^2)}^{K(x^2)}
$$
Since $\operatorname{cn}\left( \pm K(x^2)| x^2\right) = 0$, and $\operatorname{am}(\pm K(x^2)| x^2) = \pm \frac{\pi}{2}$ we arrive at the desired result:
$$
\mathcal{I} = \frac{4}{1-x^2} \operatorname{E}(x^2)
$$ |
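A numerical check of the final identity (a sketch assuming SciPy; `scipy.special.ellipe` uses the parameter convention $E(m)$ with $m=x^2$, matching the formula, and $x=0.3$ is an arbitrary test value):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipe

x = 0.3
num, _ = quad(lambda p: (1 - x * np.cos(p)) / (1 - 2 * x * np.cos(p) + x**2) ** 1.5,
              0, 2 * np.pi)
print(num, 4 / (1 - x**2) * ellipe(x**2))  # the two values should agree
```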
If $|f(x)| \leq Mx_0 \sup_{y\in[0, x_0]} |f(y)|$ then why is f the zero-function? | Note the inequality $|f(x)|< \sup\limits_{y\in [0, \frac{1}{M})} |f(y)|$ does not depend on $x$. If $M$ can be made arbitrarily large, then we would have $|f(x)|\leq 0$ and the result would follow.
It is difficult to claim that this is valid however without more information (see my comment above). |
Prove the complex polynomial $P(z,\bar{z})$ is zero if and only if all the coefficients are zero. | W.l.o.g. we may assume that $n \ge m$. Hence we may even assume $n = m$ (otherwise fill with coefficients $b_{m+1} = 0,\ldots, b_n = 0$). Thus for all $z$
$$F(z) = \sum_{j=0}^n (a_j z^j + b_j \bar{z}^j)= 0 .$$
Note that I suppressed $\bar z$ from $F(z, \bar z)$ because $\bar z$ is uniquely determined by $z$.
Write $z = e^{i\theta}x$ with $\theta, x \in \mathbb R$. Then
$$F(z) = \sum_{j=0}^n (e^{i\theta j}a_j + e^{-i\theta j}b_j) x^j= 0 .$$
Separating real and imaginary parts of $F(z)$ yields two real polynomials in the variable $x$ which are both $0$. This shows $\Re(e^{i\theta j}a_j + e^{-i\theta j}b_j) = 0$ and $\Im(e^{i\theta j}a_j + e^{-i\theta j}b_j) = 0$ for all $j$ and all $\theta$, hence
$$e^{i\theta j}a_j + e^{-i\theta j}b_j = 0 .$$
Taking $\theta = 0$ we get
$$a_j + b_j = 0. \tag{1}$$
For $j > 0$ we take $\theta = \pi/2j$ and get
$$ia_j - ib_j = 0, \text{ i.e. } a_j = b_j. \tag{2}$$
$(1)$ and $(2)$ show that $a_j = b_j = 0$ for $j > 0$. Thus
$$F(z) = a_0 + b_0 = 0 .$$
For $j = 0$ we cannot conclude that $a_0 = b_0 = 0$. In fact, we may take any $a_0 \in \mathbb C$ and $b_0 = -a_0$.
Update:
Daniel Schepler suggests in a comment to consider the more general
$$P(z) = \sum_{j,k} a_{jk}z^j \bar z^k .$$
This has an additional benefit: The constant term is $a_{00}$ and does not split as $a_0 + b_0$.
With $z = e^{i\theta}x$ we get
$$P(z) = \sum_{j,k} e^{i\theta (j - k)}a_{jk}x^{j+k}$$
and conclude that for all $n$ and $\theta$
$$\sum_{j+k=n} e^{i\theta (j - k)}a_{jk} = \sum_{j=0}^n e^{i\theta (2j - n)}a_{j(n-j)} = 0 .$$
Via multiplying with $e^{i\theta n}$ we see that this is equivalent to
$$\sum_{j=0}^n e^{i\theta 2j}a_{j(n-j)} = \sum_{j=0}^n (e^{2i\theta})^ja_{j(n-j)} = 0 .$$
That is, for each $\theta$ we have a linear equation for the $n+1$ variables $a_{0n},\ldots,a_{n0}$.
Let $\theta_l = \frac{\pi l}{2(n+1)}$ for $l = 0,\ldots,n$. Then the nodes $e^{2i\theta_l}$ are $n+1$ distinct points on the unit circle, and we get a system of $n+1$ linear equations for the $a_{0n},\ldots,a_{n0}$. The matrix of this system is the Vandermonde matrix $V = V(e^{2i\theta_0},\ldots,e^{2i\theta_n})$, whose determinant is $\prod_{0\le r < s \le n} (e^{2i\theta_s} - e^{2i\theta_r})$. Thus $\det V \ne 0$, which implies that $a_{0n} = \ldots = a_{n0} = 0$. |
A geometry problem hinting similarity of triangles . | The angle constraint gives that the circumcircle of $CDA$ is also tangent to the $CE$-line.
So the three quantities $d=CE$, $R=OE$, $\theta=\widehat{BCE}$ fix the above configuration: the circumcentre of $CDA$ is given by the intersection between the perpendicular bisector of $CD$ and the perpendicular to $CE$ through $C$. Now it is just a terribly painful trigonometry exercise to describe the lengths of $CD,DB,BA,AC$ in terms of $R,d,\theta$. Do that, then impose the constraints $AC=33$ and $BC=\frac{3}{5}AB+\frac{3}{5}AC$ to find the length of $CD$: the wanted configuration is reached at $\widehat{CBA}=90^\circ$. |
Prove that $n<2^n$ for every positive integer $n$ | Assume that $n<2^n$ is true.
We need to prove that $n+1<2^{n+1}$ is also true.
This is true because $n<2^n\Rightarrow n+1<2^n+1<2^n+2^n=2^n\times2=2^{n+1}$
($2^n>1$ for all $n\in\mathbb{Z^+}$) |
what is the probability that the contractor's estimate will be within 5 weeks of the true mean | How many weeks are there in a month? The answer will depend on that! We will use $4$, despite the fact that it is almost always wrong. So we take $5$ weeks to be $\frac{5}{4}$ months.
Let $\bar{X}$ be the sample mean. Since $\bar{X}$ is a sum of a large number of identically distributed independent random variables, it is reasonable to suppose that $\bar{X}$ has a close to normal distribution.
The standard deviation of $\bar{X}$ is $\frac{35}{\sqrt{250}}$. For brevity, call this $c$. Let $\mu$ be the true mean. Then
$$\Pr\left(|\bar{X}-\mu|\lt \frac{5}{4}\right)=\Pr\left(\left|\frac{\bar{X}-\mu}{c}\right|\lt \frac{5}{4c}\right).\tag{1}$$
The probability on the right of (1) is approximately $\Pr(|Z|\lt \frac{5}{4c})$, where $Z$ is standard normal. Now we can use tables, or software, to finish. |
Least degree of splitting field for $x^n-1$? | Since $x^n-1$ splits in $K$, all its roots are in $K$, so $x^n-1\mid x^{q^f-1}-1$
Here you are implicitly assuming that $x^n-1$ has distinct roots - if there are repeated roots, the divisibility cannot be concluded, since $x^{q^{f}-1} - 1$ is divisible by each linear factor only once. |
Counting binary words by runs | As I mentioned in a comment, you are not counting strings which end with a $1$. That is, letting $\mathcal B=\{\text{binary words ending in 0}\}\cup\{\epsilon\}$, and letting $B(x,y,z)=\sum_{\beta\in \mathcal B}x^{|\beta|_0}y^{|\beta|_1}z^{|\beta|_r}$, you have shown that
$$
B(x,y,z)= \sum_{k\geq 0} \sum_{m\geq k} \sum_{n\geq k} {{m}\choose{k}}{{n-1}\choose{k-1}} x^{m}y^{n}z^k
$$
However, there is a quick fix. You can note that
$$
xA(x,y,z)+1=B(x,y,z)
$$
corresponding the to the equation $\mathcal B=(\mathcal A\times \{0\})\cup \{\epsilon\}$. This gives you
$$
A(x,y,z)= \sum_{k\geq 0} \sum_{m\geq k} \sum_{n\geq k} {{m}\choose{k}}{{n-1}\choose{k-1}} x^{m-1}y^{n}z^k
$$
Extracting the $[x^my^nz^k]$ coefficient, you get $\binom{m+1}{k}\binom{n-1}{k-1}$.
Here is a different method which works; let
$\mathcal M=\{\text{binary words beginning with a }0\}\cup \{\epsilon\}$,
$\mathcal N=\{\text{binary words beginning with a }1\}$.
Then you get the mutual recurrence
$$
\begin{align}
\mathcal M &= \left(\bigcup_{i\ge1}0^i\times \mathcal N\right)\cup \{\epsilon,0,00,\dots\},\\
\mathcal N &= \left(\bigcup_{i\ge1}1^i\times \mathcal M\right)
\end{align}
$$
which, letting $M$ and $N$ be the corresponding generating functions, implies
$$
\begin{align}
M&=\frac{x}{1-x}N + \frac1{1-x}\\
N&=\frac{yz}{1-y}M
\end{align}
$$
You can then solve this system for $M$ and $N$. Conclude by noting $A=M+N$. |
Function of elements of functions is also a bijection. | $A_1$ and $B_2$ are disjoint, since $B_2$ is defined as having elements which are not in $A_1$. Therefore for any $n$, $h(n)$ is either in $A_1$ or $B_2$ but it's impossible for it to be in both. If $h(n)$ is in $A_1$, then $n$ is odd. If $h(n)$ is in $B_2$, then $n$ is even. |
Basic proof that Isogeny = group homomorphism without Riemann Roch? | This is proven for abelian varieties (of any dimension, the 1-dimensional case being elliptic curves) as Corollary 1 in section 4 of Mumford's Abelian Varieties. It is an immediate corollary of the "rigidity lemma" stated immediately prior:
Rigidity Lemma (Form I). Let $X$ be a complete variety, $Y$ and $Z$ any varieties, and $f\colon X \times Y \to Z$ a morphism such that for some $y_0 \in Y$, $f(X \times \{y_0\})$ is a single point $z_0$ of $Z$. Then there is a morphism $g\colon Y \to Z$ such that if $p_2\colon X \times Y \to Y$ is the projection, $f = g \circ p_2$.
In other words, if $f$ is constant on one fiber of the projection map $p_2$, then $f$ is constant on every fiber of $p_2$.
The proof is brief, so I'll just reproduce it here:
Proof. Choose any point $x_0 \in X$, and define $g\colon Y \to Z$ by $g(y) = f(x_0, y)$. Since $X \times Y$ is a variety, to show that $f = g \circ p_2$, it is sufficient to show that these morphisms coincide on some open subset of $X \times Y$. Let $U$ be an affine open neighbourhood of $z_0$ in $Z$, $F = Z - U$, and $G = p_2(f^{-1}(F))$; then $G$ is closed in $Y$ since $X$ is complete and hence $p_2$ is a closed map. Further $y_0 \notin G$ since $f(X \times \{y_0\}) = \{z_0\}$. Therefore $Y - G = V$ is a non-empty open subset of $Y$. For each $y \in V$, the complete variety $X \times \{y\}$ gets mapped by $f$ into the affine variety $U$, and hence to a single point of $U$. But this means that for
any $x \in X$, $y \in V$, $f(x, y) = f(x_0, y) = g \circ p_2(x, y)$, and this proves
our assertion.
In particular, if $f \colon X \to Y$ is a morphism of abelian varieties such that $f(0_X) = 0_Y$, then the morphism $\phi\colon X \times X \to Y$ defined for all $x_1, x_2 \in X$ by
$$\phi(x_1, x_2) = f(x_1 + x_2) - f(x_1) - f(x_2)$$
is constant with value $0_Y$ on $X \times \{0_X\}$ and $\{0_X\} \times X$. Applying the rigidity lemma, $\phi$ is constant everywhere on $X \times X$, which means that $f$ is a homomorphism.
The analogous relative statement for abelian schemes (over any base scheme) is Corollary 6.4 in Mumford, Fogarty, and Kirwan's Geometric Invariant Theory. The general idea of that proof is similar.
References:
Mumford, D. Abelian Varieties. Corrected reprint of the second (1974) edition. Tata Institute of Fundamental Research Studies in Mathematics, vol. 5. Hindustan Book Agency, New Delhi, 2008. ISBN: 978-81-85931-86-9. [MR 2514037]
Mumford, D., Fogarty, J., and Kirwan, F. Geometric Invariant Theory. Third edition. Ergebnisse der Mathematik und ihrer Grenzgebiete, vol. 34. Springer-Verlag, Berlin, 1994. ISBN: 3-540-56963-4. [MR 1304906] |
Step in a solution of $y^2 = x^3 - 2$ | Thanks to the encouragement from Sanchez, I have found the answer (making it community wiki):
Suppose $p$ is any prime dividing $y+\sqrt{-2}$. Then, $p$ divides $x^3$, and since $p$ is prime, $p$ divides $x$. Thus,
$p^3$ divides $x^3=(y+\sqrt{-2})(y-\sqrt{-2})$ (*)
Now, $p$ divides $y+\sqrt{-2}$. Also, $y+\sqrt{-2}$ and $y-\sqrt{-2}$ are relatively prime. So, $p$ cannot divide $y-\sqrt{-2}$. Since $p$ is prime, $p$ must be relatively prime to $y-\sqrt{-2}$, and consequently, $p^3$ is relatively prime to $y-\sqrt{-2}$. Using (*), we get that $p^3$ divides $y+\sqrt{-2}$, and as a result $y+\sqrt{-2}$ is a cube (up to a unit), as desired.
Here I have used two facts concerning UFDs:
1) If $a$ is relatively prime to $b$, then $a^3$ is relatively prime to $b$.
2) If $a$ divides $bc$, and furthermore, $a$ is relatively prime to $b$, then $a$ must divide $c$. |
Prove $e^\alpha = \lim_{n\to\infty}(1+\frac{\alpha}{n})^n$ from first principles. | If you already have the proof that
$\lim\limits_{n\to\infty} (1+\frac 1n)^n = e$
and don't need to prove this again, consider
$\lim\limits_{n\to\infty} (1+\frac an)^n.$
Let $n = am$ (so $m\to\infty$ as $n\to\infty$, assuming $a>0$). Then
$\lim\limits_{m\to\infty} (1+\frac 1m)^{am} =\lim\limits_{m\to\infty} ((1+\frac 1m)^{m})^a = e^a$ |
Can a Simple Random Walk be describe as the difference of 2 Bernoulli Variables? | If $Y$ and $Z$ have the same distribution then $EX=EY-EZ=0$ so $p =\frac 1 2$.
If $Y$ and $Z$ are independent then $Y-Z$ takes at least three values with positive probability so it cannot have the same distribution as $X$. |
Diagonals of power series in combinatorics | Exactly the obvious thing: if you are interested in some sequence $a_n$ and it is best expressed as $b_{n,n}$ for some two-parameter sequence $b_{n,m}$ whose generating function you know. For example, the sequence $a_n = {2n \choose n}$ is $b_{n,n}$ where $b_{m,n} = {m+n \choose m}$. This has bivariate generating function
$$B(x, y) = \sum_{m, n \ge 0} {m+n \choose m} x^m y^n = \frac{1}{1 - x - y}$$
and taking its diagonal gives
$$A(x) = \sum_{n \ge 0} {2n \choose n} x^n = \frac{1}{\sqrt{1 - 4x}}.$$ |
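A quick SymPy check of this diagonal (a sketch; expand $1/\sqrt{1-4x}$ and compare with $\binom{2n}{n}$):

```python
import sympy as sp

x = sp.symbols('x')
expansion = sp.series(1 / sp.sqrt(1 - 4 * x), x, 0, 6).removeO()
print([expansion.coeff(x, n) for n in range(6)])    # [1, 2, 6, 20, 70, 252]
print([sp.binomial(2 * n, n) for n in range(6)])    # the same list
```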
Convergence rate of Newton's method | For iterative methods, we have a fixed point formula in the form:
$$\tag 1 x_{n+1} = g(x_n)$$
The Newton iteration is given by:
$$\tag 2 \displaystyle x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
So, $(2)$ is of the form $(1)$.
Since $r$ is a root of $f(x) = 0, r = g(r)$. Since $x_{n+1} = g(x_n)$, we can write:
$$x_{n+1} - r = g(x_n) - g(r).$$
Lets expand $g(x_n)$ as a Taylor series in terms of $(x_n -r)$, with the second derivative term as the remainder:
$$g(x_n) = g(r)+g'(r)(x_n-r) + \frac{g''(\xi)}{2}(x_n-r)^2$$
where $\xi$ lies in the interval between $x_n$ and $r$. The first-order term drops out since
$$g'(r) = \frac{f(r)f''(r)}{[f'(r)]^2} = 0,$$
because $f(r) = 0$ ($r$ is a root). So we have:
$$g(x_n) = g(r) + \frac{g''(\xi)}{2}(x_n-r)^2.$$
Letting $x_n-r = e_n$, we have:
$$e_{n+1} = x_{n+1}-r = g(x_n) - g(r) = \frac{g''(\xi)}{2}(e_n)^2.$$
Each successive error term is proportional to the square of the previous error, that is, Newton's method is quadratically convergent.
Multiple Root
Following the same sort of reasoning, if $x_n$ is near a root $\xi$ of multiplicity $\delta \ge 2$, then:
$$f(x) \approx \frac{(x-\xi)^\delta}{\delta !}f^{(\delta)}(\xi)$$
$$f'(x) \approx \frac{(x-\xi)^{\delta-1}}{(\delta-1) !}f^{(\delta)}(\xi)$$
So we have:
$$\tag 3 x_{n+1} -\xi = x_n - \xi -\frac{f(x_n)}{f'(x_n)} = \left(\frac{\delta -1}{\delta}\right)(x_n - \xi)$$
The factor $\frac{\delta-1}{\delta}$ on the RHS of $(3)$ shrinks the error linearly, not quadratically, hence we have linear convergence.
You should be able to use this with your approach to clean up what you did.
I am confused about what you wrote after your derivation, but I am going to guess that you want to figure out the convergence rate for this $f(x)$.
We are given:
$$f(x) =x^2(x-1)$$
There are two roots to this equation at:
$x = 0$ (a double root)
$x = 1$ (a single root)
So, we would expect linear convergence at the double root and quadratic convergence at the single root.
The Newton iteration is given by:
$$x_{n+1} = x_n - \frac{(x_n-1) x_n^2}{x_n^2+2 (x_n-1) x_n}$$
For the first root, lets pick a starting point of $x = 0.1$, we get the following cycle:
$24$ steps to converge to the root $x = 5.341441708552285 \times 10^{-9}$ (yikes!)
For the second root, let's pick a starting point of $x = 1.4$; the iteration takes:
$6$ steps to converge to the root $x = 1.000000000000000$ (much better!)
Now, you would use the exact results and compare them numerically and show the convergence rates for each of the cases.
Note: one must choose a suitable starting point that will converge to one root or the other. Based on that initial selection, the rate is going to be quadratic when the algorithm converges to $1$ and linear when it converges to $0$. We pick a nearby starting point and see where we end up. You could also graph the function to get an idea about starting points. We typically do not know a priori which roots will give us which behavior. Obviously there is a range of starting points that converges to one root or the other; I am not sure how one would calculate that analytically, because in that case you might as well figure out the roots without numerical methods.
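If you want to see the two rates side by side, here is a minimal sketch (plain Python; the tolerance and iteration cap are arbitrary choices of mine):

```python
# Newton's method on f(x) = x^2 (x - 1); f'(x) = 3x^2 - 2x.
def newton(f, fprime, x, tol=1e-12, max_iter=100):
    for k in range(1, max_iter + 1):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x, k
    return x, max_iter

f = lambda x: x**2 * (x - 1)
fp = lambda x: 3 * x**2 - 2 * x

print(newton(f, fp, 0.1))  # double root at 0: many steps (linear rate)
print(newton(f, fp, 1.4))  # simple root at 1: a handful of steps (quadratic rate)
```

Printing the successive errors $e_n$ makes the contrast explicit: near $0$ they roughly halve each step (the factor $\frac{\delta-1}{\delta}=\frac12$ from $(3)$), while near $1$ the number of correct digits roughly doubles.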
Is it possible to prove dot product by the law of cosines? | The law of cosines works just as well for degenerate triangles as for non-degenerate ones. |
Does this fail to be a category? | There's something not written but implicit in your definition: in order for $\sigma f_1$ and $f_2$ to be equal, they must have the same domain, i.e. morphisms $f_1\to f_2$ only exist in your (prospective) category when $f_1$ and $f_2$ are both maps $Z\to A$ for the same $Z$.
That is, an element of $\mathrm{Hom}_{\mathsf{C}^*_A}(f_1,f_2)$ is a commutative triangle made up of the maps $f_1: Z\to A$, $f_2: Z\to A$ and $\sigma: A\to A$ (I don't know how to draw commutative diagrams on here, so you can do it yourself!). In particular, elements of the set $\mathrm{Hom}_{\mathsf{C}^*_A}(f_1,f_2)$ "remember" their domain and codomain $f_1$ and $f_2$ by definition: these elements aren't just maps $\sigma$, but maps $\sigma$ along with domain $f_1$ and codomain $f_2$. The domain and codomain are built into these commutative triangles.
In practice, people will usually write these maps as $\sigma\in \mathrm{Hom}_{\mathsf{C}^*_A}(f_1,f_2)$, but strictly speaking that's not quite true. A better notation, capturing all of the information in the commutative triangle, would be $(\sigma, f_1, f_2)\in\mathrm{Hom}_{\mathsf{C}^*_A}(f_1,f_2)$.
Long story short: if $\mathrm{Hom}_{\mathsf{C}^*_A}(f_1,f_2)$ and $\mathrm{Hom}_{\mathsf{C}^*_A}(g_1,g_2)$ intersect - say, the element $(\sigma,f_1,f_2)\in \mathrm{Hom}_{\mathsf{C}^*_A}(f_1,f_2)$ can also be written as $(\tau,g_1,g_2)\in \mathrm{Hom}_{\mathsf{C}^*_A}(g_1,g_2)$ - then the whole commutative triangle this element represents must be the same. That is, $\sigma = \tau, f_1 = g_1, f_2 = g_2$.
To address something you said about "naturality": there's nothing wrong with your category at all. I wouldn't call it "unnatural". It's just that it doesn't crop up all that often in mathematics. Aluffi's category $\mathsf{C}_A$ does crop up a lot, though; it's even got its own special name, the slice category. It's a particular way of viewing how objects in $\mathsf{C}$ behave "relative to" $A$. In this case, the notion of "being relative to $A$" is a part of the structure of an object $Z\to A$, and if you want to study objects with a certain standing relative to $A$, then it's natural to ask for the morphisms to preserve that structure. I think this was the sense in which Aluffi used the word. |
Entire extension of $f(x+y)=g(x)g(y)-h(x)h(y)$ | This is the proof I ended up coming up with for this question.
Assume we have that
$$
f(x+y)=g(x)g(y)-h(x)h(y)
$$
for all $x,y\in \mathbb{R}$, for some entire functions $g,h$. We will show that there exists an entire function $\tilde{f}$ such that
$$
\tilde{f}(z+w)=g(z)g(w)-h(z)h(w)
$$
for all $z,w\in \mathbb{C}$.
To see this we define the function $\tilde{f}$ by;
$$
\tilde{f}(z)= g(z)g(0)-h(z)h(0) \ \ \ z\in \mathbb{C}.
$$
Then by the algebra of differentiation $\tilde{f}$ is also differentiable on all of $\mathbb{C}$, hence $\tilde{f}$ is entire. Notice
$$
f(x) = g(x)g(0)-h(x)h(0)
$$
for all real $x$. So we have that $f(x)= \tilde{f}(x)$ for all $x\in \mathbb{R}$. Therefore, we can conclude
$$
\tilde{f}(x+y)=g(x)g(y)-h(x)h(y)
$$
for all $x,y\in \mathbb{R}$. Now we fix an arbitrary $y \in \mathbb{R}$ and consider $\tilde{f}(z+y)$ for $z\in \mathbb{C}$ (here we view $z$ as a variable). If $z \in \mathbb{R}$ we have
$$
\tilde{f}(z+y)=f(z+y) = g(z)g(y)-h(z)h(y)
$$
and so by the uniqueness theorem we can conclude that
$$
\tilde{f}(z+y)= g(z)g(y)-h(z)h(y)
$$
for all $z \in \mathbb{C}$ and since $y$ was arbitrary this also holds for all $y\in \mathbb{R}$. Now fix an arbitrary $z\in \mathbb{C}$ and consider $w \in \mathbb{C}$ (here we view $w$ as a variable). Then if $w \in \mathbb{R}$ we have
$$
\tilde{f}(z+w)= g(z)g(w)-h(z)h(w).
$$
By the uniqueness theorem we can then conclude that this holds for all $w \in \mathbb{C}$. Since $z$ was arbitrary we can also conclude that
$$
\tilde{f}(z+w)= g(z)g(w)-h(z)h(w)
$$
for all $z,w\in \mathbb{C}$. This however only gives existence. To see uniqueness suppose we have two entire functions $\tilde{f}$ and $\hat{f}$ such that
$$
\tilde{f}(z+w)= g(z)g(w)-h(z)h(w)\ \ \ \text{ and }\ \ \ \hat{f}(z+w)= g(z)g(w)-h(z)h(w)
$$
for all $z,w\in \mathbb{C}$. Then in particular if we restrict $z \in \mathbb{R}$ and $w =0$ we find that $\tilde{f}$ and $\hat{f}$ agree on the real line and thus by the uniqueness theorem $\tilde{f}$ and $\hat{f}$ are the same functions. |
How to use a character table to get the centre | Let $G$ be some finite group and $\text{irr}(G)$ the set of irreducible characters of $G$. For $\chi\in\text{irr}(G)$ define $\mathbf{Z}(\chi)=\left\{g\in G:|\chi(g)|=\chi(1)\right\}$ (called the center of the character). Then, it's a common fact that
$$\mathbf{Z}(G)=\bigcap_{\chi\in\text{irr}(G)}\mathbf{Z}(\chi)$$
So, if you are given a character table, then for each row you can look at the entries whose modulus matches the first entry of that row (assuming you are writing character tables with $\{1\}$ corresponding to the first column), and take the union over the conjugacy classes those entries sit below; call this union $Z_k$ if $k$ is the row we are in. Then, the above theorem says that $\mathbf{Z}(G)=Z_1\cap\cdots\cap Z_n$ if you have $n$ rows.
For proofs of the above statements you can see my blog post here, or for a more comprehensive source you can see Isaacs' Character Theory of Finite Groups.
EDIT: Thankfully Yemon Choi has pointed out that you were just asking how to obtain the center of the CHARACTER from the character table, and not the center of the group. This is implicitly stated in the second paragraph of the above. Namely, to find $\mathbf{Z}(\chi)$, locate the row corresponding to $\chi$, and then take the union of the conjugacy classes lying above the row entries whose modulus equals $\chi(1)$.
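As a toy illustration of the recipe (a sketch in Python with the character table of $S_3$ hardcoded; the class and character names are my own labels):

```python
classes = ['e', '(12)', '(123)']         # conjugacy class representatives of S_3
table = {
    'trivial':  [1,  1,  1],
    'sign':     [1, -1,  1],
    'standard': [2,  0, -1],
}

center = set(classes)
for name, row in table.items():
    # Z(chi): classes where |chi(g)| equals chi(1), the first entry of the row.
    Z_chi = {c for c, val in zip(classes, row) if abs(val) == row[0]}
    print(name, sorted(Z_chi))
    center &= Z_chi
print('Z(G) =', sorted(center))  # ['e']: the center of S_3 is trivial
```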
Coordinates of sector of circle | Same technique as in your other question here.
Let $A = (x,y)$ and let $B= (x_1, y_1)$. Then
$$
x_1 = \; x\cos(0.262) + y\sin(0.262) \\
y_1 = -x\sin(0.262) + y\cos(0.262) \\
$$
This assumes the center is again at $(0,0)$. |
System of quadratic Diophantine equations $x^2-xy+y^2=a^2$,$x^2-xz+z^2=b^2$,$y^2-yz+z^2=c^2$ | Dickson book (history of theory of numbers) volume 2 page 511 has solution.
$x=(n^2-1)(m^2-1)$
$y=(2n-1)(m^2-1)$
$z=(n^2-1)(2m-1)$
where $m=\dfrac{2(2q^2-pq-qv)}{3q^2-2pv+pq-2p^2}$
$(p,q,v)=((2n-1),(n^2-1),(n^2-n+1))$
For $n=3$, $(p,q,v)=(5,8,7)$
& $m=(4/7)$
$(x,y,z)=((-264),(-165),(56))$
$(a,b,c)=(231,296,199)$
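One can check the displayed triple directly (a quick Python sketch):

```python
# Verify that all three quadratic forms u^2 - uv + v^2 are perfect squares.
from math import isqrt

x, y, z = -264, -165, 56
for u, v in [(x, y), (x, z), (y, z)]:
    q = u*u - u*v + v*v
    r = isqrt(q)
    print(q, r, r*r == q)   # 53361 = 231^2, 87616 = 296^2, 39601 = 199^2
```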
Where does this result come in use? | This is a really important theorem. One consequence, for example, is that a power series $\displaystyle\sum_{n\in\mathbb N} a_nz^n$ is differentiable in its domain of convergence and $\displaystyle\frac{d}{dz}\sum_{n\in\mathbb N} a_n z^n = \sum_{n\in\mathbb N}\frac{d}{dz}a_nz^n = \sum_{n\in\mathbb N} na_nz^{n-1}$.
How to know if $\mathrm{Im} A = \mathbb R^n$ given a matrix? | Any $m\times n$ matrix can be thought of as describing a linear transformation from $\mathbb R^n$ to $\mathbb R^m$. The fact that $A$ is a $5 \times 6$ matrix tells us that its column space (or, image) will be a subspace of $\mathbb R^5$. Since $\dim \mathrm{Im}\, A = 4$, we know that this subspace has dimension four, but this is not the same as $\mathbb R^4$.
Consider the following subspace of $\mathbb R^3$:
$$
V=\mathrm{span}\, \left\{ \begin{bmatrix}1\\1\\0\end{bmatrix}, \begin{bmatrix}0\\1\\1\end{bmatrix}\right\}
$$
It is easy to see that the subspace $V$ is a plane situated in $\mathbb R^3$. However, this plane is clearly not the same as $\mathbb R^2$; it is a separate object altogether. The same is true for the column space of your matrix. Though it has four dimensions, it is not the "four-dimensional space," but rather a subsection of "five-dimensional space." |
Cartesian product of two normal subgroups | The proof is easier using conjugation:
Let $a' \in A', b' \in B'$, and let $(a,b)$ be any element of $A \times B$.
Then:
$(a,b)(a',b')(a,b)^{-1} = (a,b)(a',b')(a^{-1},b^{-1}) = (aa'a^{-1},bb'b^{-1})$.
Since $A' \lhd A$ and $B' \lhd B$, we have $aa'a^{-1} \in A'$, and $bb'b^{-1} \in B'$, thus:
$(a,b)(a',b')(a,b)^{-1} \in A'\times B'$, that is $A'\times B' \lhd A\times B$.
As an aside, in the direct product $A \times B$, the elements of $A$ and $B$ "don't interact", so proofs about the direct product often boil down to two separate proofs, one in $A$ and one in $B$.
Abuse of notation in relative homology theory | The isomorphism between the two definitions is natural in everything in sight (the pair $(X,A)$ and the coefficients —which in your case are just $\mathbb Z$) so for every diagram of pairs of spaces involving one of the two definitions there is a corresponding diagram involving the other one, and a big commutative diagram comparing the two with isomorphisms.
You will never run into any difficulties, provided you are careful not to do anything that is not given to you by the naturality of the situation.
The attempt to «fix» this or somehow mitigate it is understandable but hopeless. There is an immense number of such natural isomorphisms at play all the time, and taking all of them into account is just impossible. You have to develop a sixth sense which will allow you do not worry at all except when you have to worry. |
If $A$ is a rotation matrix, then $||Ax||=||x||$. | You didn't go wrong anywhere. You showed what the problem wanted you to show, and then some.
Note that sometimes the restrictions given in problems and exercises are necessary, and thus not using them means you're wrong somewhere. So you are right to be sceptical. But in this specific case the restriction $0\leq \theta\leq\pi$ is completely superfluous, so there is no issue.
Two-Sided confidence interval | See http://www.kean.edu/~fosborne/bstat/06evar.html where it is described how the distribution of the variance estimator is related to a chi-squared distribution and they tell you the number of degrees of freedom for the chi-squared. (It's crucial to assume your original distribution is normal). To get confidence intervals for such a chi-squared distribution, you can e.g. see Wikipedia page on chi-squared distribution and get the CDF. |
TV Distance between Bernoulli and Poisson | You've made a few mistakes here. First, $P(x)=p^x(1-p)^{1-x}$ only for $x=0$ and $x=1$, so this term shouldn't be part of the summation for $x>1$.
Second, you say, "the second term is $1-P(X=0)$". What random variable is $X$ supposed to be? It looks like you're assuming that there's random variable $X$ with:
$$P(X=x) = \left|p^x(1-p)^{1-x}-\frac{p^xe^{-p}}{x!}\right|$$
so you can use the identity $\sum_1^\infty P(X=x) = 1 - P(X=0)$, but there is no such random variable, because for this "probability", we have $\sum_0^\infty P(X=x) \neq 1$. This is true even if you fix the first problem and get rid of the erroneous $p^x(1-p)^{1-x}$ term for $x>1$.
That said, your overall approach isn't too bad. Try separating out the two cases where $x=0$ and $x=1$, noting that for $x>1$, only the Poisson probability is left:
$$\sum_{x=2}^\infty \left|\frac{p^xe^{-p}}{x!}\right|$$
where the absolute value has no effect, so you now have a summation of Poisson probabilities. |
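Putting the pieces together numerically (a sketch using the convention $d_{TV}=\frac12\sum_x|P(x)-Q(x)|$; the truncation depth is an arbitrary choice):

```python
from math import exp, factorial

def tv_bernoulli_poisson(p, terms=60):
    poisson = lambda x: p**x * exp(-p) / factorial(x)
    total = abs((1 - p) - poisson(0))                   # x = 0
    total += abs(p - poisson(1))                        # x = 1
    total += sum(poisson(x) for x in range(2, terms))   # only Poisson mass is left
    return total / 2

p = 0.3
print(tv_bernoulli_poisson(p), p * (1 - exp(-p)))  # the sum works out to p(1 - e^{-p})
```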
Is there a relation between $\ker T^x, x \in \Bbb N$ and the dimension of the space? | If $V$ is finite-dimensional, the pigeonhole principle shows there exist $m<m_1\in\mathbf N$ such that $\dim\ker T^m=\dim\ker T^{m_1}$. Therefore, since these kernels are linearly ordered, we have $\ker T^m=\ker T^{m+1}=\dots=\ker T^{m_1}$. Let's show by induction that, for any $k\ge 0$, we have
$$\ker T^m=\ker T^{m+1}=\dots=\ker T^{m+k}$$
The initialisation is true for all $k\le m_1-m$.
Suppose now the assertion is valid for some $k\ge m_1-m$, and consider a vector $v\in\ker T^{m+k+1}$; this implies $T(v)\in\ker T^{m+k}$, hence by the inductive hypothesis, $T(v)\in\ker T^{m}$, so that $v\in\ker T^{m+1}=\ker T^{m}$.
What do negative eigenvalues for Laplacian matrix, if possible, represent? | Assuming you are talking about the Laplacian matrix of a simple (undirected) graph, you were right: it never has negative eigenvalues. As such, negative eigenvalues of the Laplacian do not represent anything; they merely indicate that you made a mistake in computing the Laplacian or finding its eigenvalues. That's all one can say with the information you provided.
If you are somehow working with a directed graph, then it is not clear what the Laplacian even means (what are the diagonal entries?). Even if you overcome this obstacle in the definition, you will typically end up with complex eigenvalues since the matrix is no longer symmetric.
I suggest this: if you wrote your own code (or used built-in functions of a computer algebra system) to compute the eigenvalues, then you may post your code on the relevant Q&A Community (such as StackOverflow, Mathematica Stack Exchange or ASKSAGE). In that case, you may want to read the hints provided here to avoid your question being closed as off-topic. |
Bound 1D gaussian domain in the interval $[-3\sigma, 3\sigma]$ so it still is a probability density function | It seems you are not clear about what you want. To truncate any variable to a given range, you just restrict its density to that range and divide by its integral, so that it integrates to 1.
But if you want to generate a random variable that just "looks like" a gaussian, but has support on an interval, and its density is smooth, you can sum three (or more) uniforms. For example, if you sum three uniforms in $[-1,1]$, the result is a random variable that has support in $[-3,3]$, and its variance is $1$; you can multiply the result by $\sigma$ to get support $[-3 \sigma,3 \sigma]$ and standard deviation $\sigma$. The density is piecewise quadratic; it's continuous and differentiable (though not infinitely differentiable, of course).
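A quick simulation of that construction (a sketch; the sample size and $\sigma$ are arbitrary):

```python
import random

sigma = 2.0
samples = [sigma * sum(random.uniform(-1, 1) for _ in range(3))
           for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean)**2 for s in samples) / len(samples)
# Support stays inside [-3*sigma, 3*sigma]; variance is close to sigma^2.
print(min(samples), max(samples), var)
```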
Is $(a,b) \in \phi \equiv \phi(a,b)$? | I'm going to address this question as though we're building mathematics from the ground up. If the question is meant to assume a working knowledge of set theory from the outset, then ignore this answer.
Is $(a,b)\in\phi$ logically equivalent to $\phi(a,b)$?
Yes... ish. There are caveats. Technically, the terms "set," "relation" and "function" have distinct meanings that vary depending on context and your choice of foundations. In conversation, "relation," "binary predicate," and "set of pairs" tend to be regarded as the same thing. However, this assumes 1) set theory as a foundation, 2) that every set-theoretic object is a set.
Since "relation" and "function" are not necessarily set-theoretic notions, it is not necessary that $\phi$ be regarded as a set. For instance $\phi$ might be regarded as a function of type $A\times B\to\mathbf{bool}$ (that is a function from $A\times B$ to the type of truth values), a binary relation symbol (as in a first-order theory), or a 2-place predicate (note that predicate and relation are distinct concepts).
The last of these cases is important, as it relates closely to the comments.
...if we examine $A=\{x\mid p(x)\}$... is saying $x\in A$ equivalent to saying $p(x)$ is true, where $p(x)$ is the defining property for $x$ being an element of the set $A$?
In general, for arbitrary predicate $\psi$, the object $\{x\mid \psi(x)\}$ is not a set. This is made evident by Russell's Paradox. It is not difficult to see how the proposed equivalence can lead to a similar situation - take $\phi(a,b)\equiv a=b$, for instance (this leads to $(\phi,\phi)\in \phi$).
To clear things up, set-theorists will sometimes introduce [proper] classes to account for predicates that are not sets. As far as I know, set theories with classes regard every predicate as a class (though, not necessarily a set). In such set theories, the identity $\phi(a,b)\equiv(a,b)\in\phi$ always holds. |
Poisson process and uniform random variable | Let $N$ denote the number of cars that will be fully served, in other words, if $N=n$, then car $n+1$ is the first one ordering more petrol than the amount left at the station.
Changing slightly the notations and using the homogeneity of the problem,
$$
[N\geqslant n]=[\text{car}\ n\ \text{is fully served}]=[U_1+\cdots+U_n\leqslant2],
$$ where the random variables $U_n$ are i.i.d. and uniform on $(0,1)$. The mean time elapsed between two cars being served is $1/\lambda$, hence the mean time $\mathtt t$ before some petrol order cannot be satisfied is
$$
\mathtt t=\frac{E(N)+1}\lambda.
$$
Now, let $N_x$ denote the number of cars that will be fully served if the initial volume of petrol at the station is $x\cdot50$ liters. Then $N=N_2$ and, for every $x\geqslant0$, conditioning on the amount of petrol the first car is served and using the identity $P(U_1\leqslant x)=\min\{x,1\}$, one gets
$$
E(N_x)=\min\{x,1\}+\int_0^{\min\{x,1\}}E(N_{x-u})\,\mathrm du.
$$
Thus, if $x\geqslant1$ then
$$
E(N_x)=1+\int_0^1E(N_{x-u})\,\mathrm du=1+\int_{x-1}^xE(N_u)\,\mathrm du,
$$
while if $x\leqslant1$ then
$$
E(N_x)=x+\int_0^xE(N_{x-u})\,\mathrm du=x+\int_0^xE(N_u)\,\mathrm du.
$$
Let $n(x)=E(N_x)$. If $x\leqslant1$, then
$$
n'(x)=1+n(x),
$$
and $n(0)=0$ hence, for every $x$ in $(0,1)$,
$$
n(x)=\mathrm e^{x}-1.
$$
If $1\leqslant x\leqslant2$, then
$$
n'(x)=n(x)-n(x-1)=n(x)-(\mathrm e^{x-1}-1).
$$
Integrating this and using the initial condition $n(1)=\mathrm e-1$ yields, for every $x$ in $(1,2)$,
$$
n(x)=\mathrm e^x-1-\mathrm e^{x-1}(x-1),
$$
in particular,
$$
E(N)=n(2)=\mathrm e^2-1-\mathrm e,
$$
which yields
$$
\mathtt t=\frac{\mathrm e^2-\mathrm e}\lambda\approx\frac{4.67}\lambda.
$$ |
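A Monte Carlo check of $E(N)=\mathrm e^2-1-\mathrm e\approx 3.67$ (so $\mathtt t=(E(N)+1)/\lambda\approx 4.67/\lambda$); a sketch, with an arbitrary trial count:

```python
import random

def fully_served():
    # Count cars whose cumulative petrol orders (i.i.d. uniforms) stay <= 2.
    total, n = 0.0, 0
    while True:
        total += random.random()
        if total > 2:
            return n
        n += 1

trials = 200_000
print(sum(fully_served() for _ in range(trials)) / trials)  # ~ 3.67
```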
range of variation for a conditional entropy | Short answers are as follows:
\begin{align*}
\max H(X|Y)&=4,\\
\max H(Y|X)&=11,\\
\min H(X|Y)&=0,\\
\min H(Y|X)&=7.
\end{align*}
To show the first two equations, first note that $H(Y|X) \leq H(Y)=11$ and $H(X|Y) \leq H(X)=4$. Now, we give an example to show that they are achievable. Suppose that $X$ is a 4-bit random variable, where each bit is uniform and independent of the other bits, and $Y$ is an 11-bit random variable, where each bit is uniform and independent of the other bits, and also independent of $X$. This gives $H(X|Y)=4$ and $H(Y|X)=11$.
To show the last two relations, first notice that $H(X|Y)\geq 0$ and moreover
$$H(Y|X)=H(X,Y)-H(X)=H(X|Y)+H(Y)-H(X)=7+H(X|Y) \geq 7.$$
Now, we show that these bounds are tight. Suppose $Y$ is an 11-bit random variable, where each bit is uniform and independent of the other bits, and $X$ is equal to the first four bits of $Y$. Then you have $H(X|Y)=0$ and $H(Y|X)=7$.
Find the value of the following series. | HINT :
$$a\tan\beta+2a\cot(2\beta)=a\left(\tan\beta+2\cdot\frac{1-\tan^2\beta}{2\tan\beta}\right)=a\cot\beta.$$
So, one has
$$2^{14}\tan(2^{14}\theta)+2^{15}\cot(2^{15}\theta)=2^{14}\cot(2^{14}\theta)$$
$$2^{13}\tan(2^{13}\theta)+2^{14}\cot(2^{14}\theta)=2^{13}\cot(2^{13}\theta)$$
and so on. |
How do I combine two trigonometric waveforms that have the same $\omega t$ but different $\phi$ | Using the identities
$$\begin{align}
\cos(A-B) &= \cos A \cos B + \sin A \sin B \\
\sin(A+B) &= \sin A \cos B + \cos A \sin B
\end{align}$$
and defining
$$s_1 := \sin 1 \qquad c_1 := \cos 1 \qquad s_4 := \sin 4 \qquad c_4 := \cos 4$$
The initial equation becomes
$$\begin{align}
&\;c_1\cos 3t +s_1\sin 3t-2\left(\;c_4\sin 3t+s_4\cos 3t\;\right) \\
=&\;(s_1-2c_4)\sin 3t + (c_1-2s_4)\cos 3t \tag{1}
\end{align}$$
while the target equation (with an appended coefficient of $A$) is
$$A \sin \omega t \cos\phi + A\cos \omega t \sin\phi \tag{2}$$
Equating coefficients:
$$\begin{align}
A\cos\phi = s_1-2c_4 \tag{3} \\
A\sin\phi = c_1-2s_4 \tag{4}
\end{align}$$
Therefore,
$$\begin{align}
A^2 &= A^2\cos^2\phi + A^2\sin^2\phi \\
&= (s_1-2c_4)^2+(c_1-2s_4)^2 \\
&= \left(s_1^2+c_1^2\right) - 4\left(s_1 c_4+c_1 s_4\right) + 4\left(c_4^2+s_4^2\right) \\
&= 1 - 4\sin(1+4) + 4 \\
&= 5 - 4\sin 5 \\
\to A &= \sqrt{5-4\sin 5} = 2.972\ldots \tag{5}
\end{align}$$
and
$$
\tan\phi = \frac{A\sin\phi}{A\cos\phi}= \frac{c_1-2s_4}{s_1-2c_4}
\quad\to\quad \phi = \arctan\frac{\cos 1-2\sin 4}{\sin 1 - 2\cos 4} = 0.7628\ldots \tag{6}$$
Thus, we have
$$2.972\ldots\cdot\sin(3t+0.7628\ldots) \tag{$\star$}$$ |
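A numeric spot-check of $(\star)$ (a sketch; `atan2` is used instead of `arctan` so the quadrant is handled automatically, though here both agree):

```python
from math import sin, cos, atan2, sqrt

A = sqrt(5 - 4 * sin(5))
phi = atan2(cos(1) - 2 * sin(4), sin(1) - 2 * cos(4))
print(A, phi)  # ~2.972, ~0.7628
for t in (0.0, 0.5, 1.3, 2.7):
    lhs = cos(3*t - 1) - 2 * sin(3*t + 4)   # the initial waveform, per the identities above
    rhs = A * sin(3*t + phi)
    print(t, lhs, rhs)
```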
Joint density of Markov process as product of conditional densities. | The question might be based on a confusion between transition probabilities and transition densities.
In the first case the conditional distribution of $X(t+s)$ conditionally on $\mathcal F_s=\sigma(X(u);u\leqslant s)$ is such that
$$
P[X(t+s)=y\mid X(s)=x,\mathcal F_s]=p(s,t+s,x,y),
$$
for every pair of states $(x,y)$. Thus, every entry $p(s,t+s,x,y)$ is nonnegative and, for every $x$,
$$
\sum_yp(s,t+s,x,y)=1.
$$
This implies that, for every $x$, the set $\{y\mid p(s,t+s,x,y)\ne0\}$ is at most countable.
In the second case the conditional distribution of $X(t+s)$ conditionally on $\mathcal F_s$ is such that
$$
P[X(t+s)\in B\mid X(s)=x,\mathcal F_s]=\int_Bp(s,t+s,x,y)\mathrm dy,
$$
for every state $x$ and every (measurable) set $B$. Since the conditional distribution of $X(t+s)$ is absolutely continuous, for every $y$,
$$
P[X(t+s)=y\mid X(s)=x,\mathcal F_s]=0,
$$
but it may well happen that $\{y\mid p(s,t+s,x,y)\ne0\}$ is the whole state space, even when said state space is uncountable. |
Find a basis for $V$ for which it is the dual basis | The first dual basis vector should be $(a, b, c) \in \mathbb{R}^3$ such that $f_1(a, b, c) = 1$, and $f_2(a, b, c) = f_3(a, b, c) = 0$. Write this system out to get
$$
\begin{aligned}
a - 2b &= 1 \\
a + b + c &= 0 \\
b - 3c &= 0
\end{aligned}
$$
And solve. Now do the same for the second and third dual basis vectors. |
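The three systems can also be solved in one shot (a sketch assuming numpy): if $F$ has the coefficient rows of $f_1,f_2,f_3$, then $f_i(v_j)=\delta_{ij}$ reads $FV=I$, so the dual basis vectors are the columns of $F^{-1}$.

```python
import numpy as np

F = np.array([[1.0, -2.0, 0.0],    # f1(a, b, c) = a - 2b
              [1.0,  1.0, 1.0],    # f2(a, b, c) = a + b + c
              [0.0,  1.0, -3.0]])  # f3(a, b, c) = b - 3c
V = np.linalg.inv(F)
print(V)      # column j is the basis vector dual to f_{j+1}
print(F @ V)  # the identity, i.e. f_i(v_j) = delta_ij
```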
Convex function strictly increasing | Lemma. For all $u \lt v \lt w$ in $(a,b)$:
$$
\frac{f(v)-f(u)}{v-u} \le \frac{f(w)-f(v)}{w-v}
$$
This simply formalizes the intuition that adjacent chords on the graph of a convex function have non-decreasing slopes, and follows directly from the definition of convexity with $\lambda = \frac{w-v}{w-u} \in [0,1]$ and $1-\lambda=\frac{v-u}{w-u}\,$:
$$
\require{cancel}
\begin{align}
& f\big(\lambda u + (1-\lambda)w)\big) \le \lambda f(u) + (1-\lambda)f(w) \\
\iff \quad & f\left(\frac{(\cancel{w}-v)u+(v-\cancel{u})w}{w-u}\right) \le \frac{w-v}{w-u} f(u) + \frac{v-u}{w-u} f(w) \\
\iff \quad & f(v) \big((w-v)+(v-u)\big) \le (w-v)f(u) + (v-u)f(w) \\
\iff \quad & \big(f(v)-f(u)\big)(w-v) \le \big(f(w)-f(v)\big)(v-u)
\end{align}
$$
Proof. Since $f$ is increasing and not constant, there exists $c \in (a,b)$ such that $f(c) \gt f(a)$. For all $x_1,x_2$ such that $a\lt c\lt x_1\lt x_2\lt b$, applying the previous lemma twice gives:
$$
\frac{f(x_2)-f(x_1)}{x_2-x_1} \ge \frac{f(x_1)-f(c)}{x_1-c} \ge \frac{f(c)-f(a)}{c-a} \gt 0
$$
The above proves that $f$ is strictly increasing on $[c,b)$ since $x_1 \gt c \implies f(x_1) \gt f(c)$ and $x_2 \gt x_1 \gt c \implies f(x_2) \gt f(x_1)$. |
Encryption and decryption | $$A(Bx)=(AB)x,$$
hence
\begin{align}\begin{pmatrix}
2 & 11 \\
10 & 9 \\
\end{pmatrix}\begin{pmatrix}
4 & 1 \\
3 & 7 \\
\end{pmatrix}&=\begin{pmatrix} 8+33 & 2+77 \\40+27 & 10+63\end{pmatrix} \\
& = \begin{pmatrix} 8+6 & 2+2(27)+23 \\13+2(27) & 10+2(27)+9 \end{pmatrix} \\
&=\begin{pmatrix} 14 & 25 \\13 & 19\end{pmatrix}\end{align}
where all computations are done in $\mod 27$. |
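The same reduction in a couple of lines (a sketch in Python):

```python
A = [[2, 11], [10, 9]]
B = [[4, 1], [3, 7]]
C = [[sum(A[i][k] * B[k][j] for k in range(2)) % 27
      for j in range(2)] for i in range(2)]
print(C)  # [[14, 25], [13, 19]]
```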
Working in $S_6$ compute $(134) \cdot (12).$ | Observe that the first cycle is
$$\color{red}{1\to 3, \quad 3 \to 4, \quad 4\to 1}$$
and the second cycle is
$$\color{blue}{1\to 2, \quad 2\to 1}$$
When we compose the permutations, then we get
$$\color{red}{1\to 3, \quad 3 \to 4, \quad 4} \color{green}{\to}\color{blue}{ 2, \quad 2 \to 1}$$
where the green arrow marks the point at which we switch from one permutation to the other. Then, the product permutation is $(1 3 4 2)$.
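The same computation as code (a sketch; cycles are stored as dictionaries and, matching the walkthrough above, $(1\,3\,4)$ is applied first):

```python
def apply(cycle, x):
    return cycle.get(x, x)

c1 = {1: 3, 3: 4, 4: 1}   # (1 3 4)
c2 = {1: 2, 2: 1}         # (1 2)
product = {x: apply(c2, apply(c1, x)) for x in range(1, 7)}
print({x: y for x, y in product.items() if x != y})
# {1: 3, 2: 1, 3: 4, 4: 2} -- i.e. the cycle (1 3 4 2)
```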
27 different types of cereals. You are sent a random box out of these 27 six months in a row. Probability that you receive at least one repeated box? | Think about the complement. What is the probability all the boxes you receive will be different?
There are $27^6$ equally likely possibilities for which box you receive in which month.
There are only $27\cdot26\cdot25\cdot24\cdot23\cdot22=\frac{27!}{21!}$ of these in which all the boxes are different.
Can you take it from here? |
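If you want the number (a sketch; `math.perm` needs Python 3.8+):

```python
from math import perm

p_all_different = perm(27, 6) / 27**6
print(1 - p_all_different)  # ~0.45: probability of at least one repeated box
```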
What's $(1 2 3)(1 4 5)$? Everybody gives a different answer. | Technically, the book isn't mistaken but might be slightly misleading.
The order of the permutation $(1 2 3)(1 4 5)$ is equal to the order of the permutation $(1 4 5 3 2)$, but the disjoint cycle form of $(1 2 3)(1 4 5)$ is $(1 4 5 2 3)$.
Also, note that Wolfram|Alpha composes permutations from left to right; try inputting $(1 4 5)(1 2 3)$.
Why Do We Associate Quadratic Forms To Symmetric Matrices Rather Than Upper Triangular Matrices? | I think the reason is that the quadratic form associated with any matrix ${\bf M}$ is equal to the quadratic form associated to its symmetric part. To prove this, first consider that any matrix can be written as the sum of a symmetric and an anti-symmetric matrix as follows
$${\bf{M}} = {1 \over 2}\left( {{{\bf{M}}} + {\bf{M}}^T} \right) + {1 \over 2}\left( {{{\bf{M}}} - {\bf{M}}^T} \right) = {{\bf{M}}_S} + {{\bf{M}}_A}$$
Next, observe that
$$Q = {{\bf{x}}^T}{{\bf{M}}_A}{\bf{x}} = {\left( {{{\bf{x}}^T}{{\bf{M}}_A}{\bf{x}}} \right)^T} = {{\bf{x}}^T}{\bf{M}}_A^T{\bf{x}} = {{\bf{x}}^T}\left( { - {{\bf{M}}_A}} \right){\bf{x}} = - {{\bf{x}}^T}{{\bf{M}}_A}{\bf{x}} = - Q\,\,\,\, \to \,\,\,\,\,Q = 0$$
and finally
$${{\bf{x}}^T}{\bf{Mx}} = {{\bf{x}}^T}\left( {{{\bf{M}}_S} + {{\bf{M}}_A}} \right){\bf{x}} = {{\bf{x}}^T}{{\bf{M}}_S}{\bf{x}} + {{\bf{x}}^T}{{\bf{M}}_A}{\bf{x}} = {{\bf{x}}^T}{{\bf{M}}_S}{\bf{x}} + 0 = {{\bf{x}}^T}{{\bf{M}}_S}{\bf{x}}$$ |
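A quick numerical confirmation (a sketch assuming numpy; the size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
M_s = (M + M.T) / 2            # symmetric part
x = rng.standard_normal(4)
print(x @ M @ x, x @ M_s @ x)  # equal up to floating-point rounding
```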
At least ten language is spoken | It's more complicated than that, and you need to use Sperner's Theorem.
If you consider the set of languages spoken by each mathematician, you know that:
1) all these sets are different, and
2) none of the sets is a subset of another.
This makes it a Sperner family (antichain), and Sperner's theorem gives the maximum size of such a family: $\binom{n}{\lfloor n/2\rfloor}$ for subsets of an $n$-element set.