More than two matrices multiplied in tensor notation | The trick is
$$Q^i{}_k=A^i{}_jB^j{}_rC^r{}_k.$$
For the differentiation one uses the Leibniz rule (writing $M^i{}_k=A^i{}_jB^j{}_k$):
$$\frac{\partial M^i{}_k}{\partial x^u}=
\frac{\partial A^i{}_j}{\partial x^u}B^j{}_k
+A^i{}_j\frac{\partial B^j{}_k}{\partial x^u},$$
and then take $u=i$ for the contraction. |
When will the third radius be an integer? | You are asking when $r_3=\frac {r_1r_2}{(\sqrt {r_1} + \sqrt {r_2})^2}=\frac {r_1r_2}{r_1+2\sqrt {r_1r_2}+r_2}$ is an integer. Let $m=\gcd (r_1,r_2)$. To make $\sqrt{r_1r_2}$ an integer, we must have $r_1=mp^2, r_2=mq^2$, with $p,q$ coprime, giving $r_3=\frac {m^2p^2q^2}{mp^2+2mpq+mq^2}=\frac {mp^2q^2}{p^2+2pq+q^2}=\frac {m(pq)^2}{(p+q)^2}.$ Since $p+q$ is coprime to $p$ and $q$, we must have $(p+q)^2\mid m$. One example would be $p=1,q=2,m=9$, giving $r_1=9,r_2=36,r_3=4$. |
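As a quick numerical check of the parametrization above (a sketch in Python; the helper name and search range are my own choices):

```python
def r3(r1, r2):
    # Third radius from the answer: r1*r2 / (sqrt(r1) + sqrt(r2))^2
    return r1 * r2 / (r1 + 2 * (r1 * r2) ** 0.5 + r2)

# The example from the answer: p=1, q=2, m=9 gives r1=9, r2=36, r3=4.
print(r3(9, 36))  # 4.0

# Scan small pairs; every integer hit fits r1 = m*p^2, r2 = m*q^2 with (p+q)^2 | m.
for r1 in range(1, 50):
    for r2 in range(r1, 50):
        v = r3(r1, r2)
        if v > 0 and abs(v - round(v)) < 1e-9:
            print(r1, r2, round(v))
```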
A general formula for $(\frac{ds}{dt}\frac{d}{ds})^k$? | Maybe this rule can help you. See also this page. |
Why is $T/S$ isomorphic to $k^{\ast}$? (Remark 7.1.4 in Springer Linear Algebraic Groups) | Yes, since the only connected 1-dim algebraic groups are $G_a$ and $G_m$, and $G_a$ is unipotent, $G_m$ "semisimple". |
proving that a matrix has one solution for any $c_1,c_2$ | We know that the vector $\begin{bmatrix}0\\0\end{bmatrix}$ is the only solution to the homogeneous system, so we know that if $$\begin{bmatrix} a & b \\ c&d\end{bmatrix}\cdot\begin{bmatrix}e\\f\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}$$ then $e=f=0$.
Now, suppose there were two solutions to $$\begin{bmatrix} a & b \\ c&d\end{bmatrix}\cdot\begin{bmatrix}e_1\\f_1\end{bmatrix}=\begin{bmatrix}c_1\\c_2\end{bmatrix}$$ $$\begin{bmatrix} a & b \\ c&d\end{bmatrix}\cdot\begin{bmatrix}e_2\\f_2\end{bmatrix}=\begin{bmatrix}c_1\\c_2\end{bmatrix}$$
Then, we know that $$\begin{bmatrix} a & b \\ c&d\end{bmatrix}\cdot\begin{bmatrix}e_1-e_2\\f_1-f_2\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}$$ which is a contradiction. Hence, we know there is at most 1 solution.
Now, assume that there are no solutions to $$\begin{bmatrix} a & b \\ c&d\end{bmatrix}\cdot\begin{bmatrix}e\\f\end{bmatrix}=\begin{bmatrix}c_1\\c_2\end{bmatrix}$$That implies that $\exists k\in\mathbb{R}$ such that $k\cdot\begin{bmatrix}a\\c\end{bmatrix}=\begin{bmatrix}b\\d\end{bmatrix}$. Why? Because if not, the columns are independent, and two independent vectors in $\mathbb{R}^2$ must span $\mathbb{R}^2$. Hence, we know that $$\begin{bmatrix} a & b \\ c&d\end{bmatrix}\cdot\begin{bmatrix}-k\\1\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}$$Which again is a contradiction, so there must be at least one solution. Hence, we are done. |
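As a numerical illustration of the argument (my own sketch; the matrix and right-hand sides are arbitrary choices), a trivial kernel means the matrix is invertible, so every right-hand side has exactly one solution:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# The kernel is trivial exactly when det(A) != 0.
print(np.linalg.det(A))  # 5.0, nonzero

# Then every right-hand side (c1, c2) has exactly one solution.
for c in [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([3.0, -2.0])]:
    x = np.linalg.solve(A, c)
    print(c, "->", x, "residual:", np.linalg.norm(A @ x - c))
```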
Suppose that $\operatorname{rank}(x)=\alpha$, $\operatorname{rank}(y)=\beta$, and $\alpha \leq \beta$; prove that $\operatorname{rank}(x \cup y)=\beta$ | Note $z\in\operatorname{rank}(A)\iff \exists t\in A\colon z\in\operatorname{rank}(t)^+$.
Thus
$$ \begin{align}z\in\operatorname{rank}(x\cup y)&\iff \exists t\in x\cup y\colon z\in \operatorname{rank}(t)^+\\
&\iff \exists t\in x\colon z\in \operatorname{rank}(t)^+\lor \exists t\in y\colon z\in \operatorname{rank}(t)^+\\
&\iff z\in \operatorname{rank}(x)\lor z\in \operatorname{rank}(y)\\
&\iff z\in \operatorname{rank}(x)\cup \operatorname{rank}(y)\end{align}$$
In other words, rank commutes with finite union (in fact, with arbitrary union).
We conclude
$$ \operatorname{rank}(x\cup y)=\operatorname{rank}(x)\cup \operatorname{rank}(y)=\alpha\cup \beta=\beta.$$ |
Issue finding the models of a formula using the semantic tableaux method. | Yes, your tableau is correct.
To find a model, simply take any open branch and see what literals (atomic statements or negations of atomic statements) you find on that branch.
For example, the rightmost branch contains $q$ as a literal, meaning that you have a model by setting $q$ to true ... apparently it does not matter whether $p$ and $r$ are true or false: as soon as $q$ is true, the original sentence is true as well (it's always good to verify that fact ... if you find that this is not the case, then you know something went wrong with your tableau) |
Showing eigenvalues $\lambda_n\to(n+\frac{1}{2})^2\pi^2$ as $n\to\infty$ | This is most easily seen visually: The roots of $\mu=\tan\mu$ are the points of intersection of the line $y=x$ and $y=\tan x$. So you're essentially finished; just note that the branches of the function $\tan\mu$ lie near their asymptotes at $\mu=\pi(n+1/2)$ at these points.
You cannot solve that equation explicitly (it's transcendental), so the best you can do is provide the asymptotic behaviour: $\mu_n\sim \pi(n+1/2)$ as $n\to\infty$ (this notation also addresses the comments in the question about the limit notation not being accurate). |
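The asymptotics are easy to check numerically (a sketch; bracketing each root inside a single branch of $\tan$, with bracket endpoints of my own choosing):

```python
import numpy as np
from scipy.optimize import brentq

# On (n*pi, n*pi + pi/2), tan(mu) - mu runs from negative up to +infinity,
# so there is exactly one root, and it creeps toward the asymptote pi*(n + 1/2).
for n in range(1, 8):
    mu = brentq(lambda m: np.tan(m) - m, n * np.pi + 0.2, (n + 0.5) * np.pi - 1e-9)
    print(n, mu, (n + 0.5) * np.pi - mu)  # the gap shrinks as n grows
```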
Dimension of endomorphisms subspaces | For the second question, it seems easiest to first look at the difference between two such projectors. Such a function $f:V\to F$ vanishes on $F$ (both projectors are identity there) and has its image contained in$~F$, so in particular $f^2=0$. Conversely if $f$ has those properties and $p$ is a projector with $\operatorname{Im}p=F$, then $(p+f)^2=p^2+pf+fp+f^2=p+f$ (since $p^2=p$, $pf=f$, and the last two terms vanish), so $p+f$ is a projector, and its image is easily checked to be$~F$. Therefore the set of such differences$~f$ is a vector subspace$~S$ of the space of endomorphisms of$~V$, whose dimension is $\dim(V/F)\dim(F)$. The set$~\overline{\mathcal P}_F$ of projectors is a translate of this subspace$~S$ by some$~p$, which gives an affine space; the span of$~\overline{\mathcal P}_F$ contains $S$ and also$~p$, while $p\notin S$ (unless $\dim F=0$). The presence of $p$ increases the dimension by$~1$. All in all the dimension of the span of$~\overline{\mathcal P}_F$ is $\dim(V/F)\dim(F)+1$ (unless $\dim F=0$, in which case it is$~0$). |
Solving $\frac{x}{w} \geq 1 - 0.5^{1/(y-1)}$ for $y$. Messed up sign somewhere? | $\log_{1/2}$ is a decreasing function on $\mathbb{R}_+^*$ so the second inequality is false!
I suggest you to write $0.5^{1/(y-1)} $ as $e^{\ln(0.5)/(y-1)}$.
Here is what I have:
$$ \dfrac{x}{w} \geq 1-(1/2)^{1/(y-1)}$$
$$ e^{\ln(1/2)/(y-1)} \geq 1 - \dfrac{x}{w}$$
From there I assume $x/w <1$ else the inequality holds at least for all $y>1$.
$$ \dfrac{\ln{(1/2)}}{y-1} \geq \ln{\left(1-\dfrac{x}{w}\right)}$$
$$ \ln{(1/2)} \geq (y-1) \ln{\left(1-\dfrac{x}{w}\right)}$$
$$y \ln{\left(1-\dfrac{x}{w}\right)}\leq \ln(1/2)+\ln\left(1-\dfrac{x}{w} \right) $$
Then if we suppose $x/w>0$ it means $\ln(1-x/w)<0$ so we have:
$$ y \geq \dfrac{\ln(1/2) + \ln\left(1-\dfrac{x}{w}\right)}{\ln\left(1-\dfrac{x}{w}\right)} $$ |
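A quick sanity check of the final bound (a sketch; the test value $x/w=0.3$ is an arbitrary choice with $0<x/w<1$):

```python
from math import log

r = 0.3  # x/w, assumed to satisfy 0 < x/w < 1
y_min = (log(0.5) + log(1 - r)) / log(1 - r)
print(y_min)  # about 2.943

# The original inequality should hold for y >= y_min and fail below it:
for y in [y_min, y_min + 0.5, y_min - 0.5]:
    print(y, r >= 1 - 0.5 ** (1 / (y - 1)))  # True, True, False
```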
How to prove that for all $|z|<1$, $|(1-z)e^z-1|\leq |z|^2$? | There is a general phenomenon going on, which is encoded in Weierstrass's result about his elementary factors:
if we set $E_n(z) = (1-z)e^{z+z^2/2+\cdots+z^n/n}$, then $|E_n(z)-1|\leqslant |z|^{n+1}$ when $|z|\leqslant 1$.
To prove it, you can begin by noting that $$E_n'(z) = -z^ne^{z+z^2/2+\cdots+z^n/n}$$ and that if $t\geqslant 0$, we have $|E_n'(tz)|\leqslant -|z|^n E_n'(t)$ because $|\exp z|\leqslant \exp |z|$. From this it follows now that
$$\begin{align}
|E_n(z)-1| &= \left|z \int_0^1 E_n'(zt)dt \right| \\
&\leqslant -|z|^{n+1} \int_0^1 E_n'(t)dt\\
&=-|z|^{n+1}(E_n(1)-E_n(0)) \\
&= |z|^{n+1}
\end{align}$$
There is a second proof obtained by looking at the Taylor expansion: again, because the Taylor expansion of $E_n'$ starts at $z^n$, the expansion of $E_n-1$ starts at $z^{n+1}$, say $E_n(z) = 1+ \sum_{k>n} a_k z^k$ (because $E_n(0)=1$). Second, all the coefficients $a_k$ are negative from the expression of $E_n'$ and real so $|a_k| = -a_k$, and $E_n(1) = 0$ implies that $- 1 = \sum_{k>n} a_k$. Thus if $|z|\leqslant 1$ we have
$$|E_n(z)-1| \leqslant \sum_{k>n} |a_k||z|^k \leqslant -|z|^{n+1}\sum_{k>n}a_k = |z|^{n+1}$$ |
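Both proofs are easy to corroborate numerically (a sketch testing the bound at random points of the closed unit disk):

```python
import numpy as np

rng = np.random.default_rng(0)

def E(n, z):
    # Weierstrass elementary factor E_n(z) = (1 - z) * exp(z + z^2/2 + ... + z^n/n)
    return (1 - z) * np.exp(sum(z**k / k for k in range(1, n + 1)))

for n in [1, 2, 3, 5]:
    r = np.sqrt(rng.uniform(0, 1, 2000))   # random radii in [0, 1]
    t = rng.uniform(0, 2 * np.pi, 2000)
    z = r * np.exp(1j * t)
    ratio = np.abs(E(n, z) - 1) / np.abs(z) ** (n + 1)
    print(n, ratio.max())  # stays <= 1, up to rounding
```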
Expansion for $\cos(nx)$, with $n \in \mathbb{R}$ | I believe there is no such expansion. Indeed, any polynomial expression in $\cos(x)$ and $\sin(x)$ will be $2\pi$-periodic. But if $\alpha \in \mathbb{R}$ is not an integer, then $x \mapsto \cos(\alpha x)$ is not $2\pi$-periodic, hence it cannot be rewritten as a polynomial in $\cos(x)$ and $\sin(x)$.
Now, if you allow roots of polynomials then, by writing $\cos(x) = T_n(\cos(x/n))$ (with $T_n$ the Chebyshev polynomial), we can express $\cos(x/n)$ as a root of $T_n - \cos(x)$ (which is manageable for $n=2$ but gets complicated fast). |
Double integral over a parallelogram | hint: Let $O = (0,0), P = (2,4), Q = (5,6), S = (3,2)$. You can write equations for the lines: $\overline{OP}: 2x-y = 0, \overline{OS}: 2x-3y = 0, \overline{PQ}: 2x-3y = -8, \overline{SQ}: 2x-y = 4$. Now make the substitution: $u = 2x-y, v = 2x-3y$. Can you proceed to the next step? |
For what values of $(x,y)$ are $xy,x/y$ and $x-y$ all equal? | We want to solve $$\begin{align}xy &= x/y \\ xy &= x-y.\end{align}$$
If $x=0$, then the second equation becomes $0 = -y$, so $y=0$, which fails because then $x/y$ is undefined. So $x\ne 0$.
Then divide the first equation by $x$, giving $y=1/y$ so $y^2 = 1$. Then $y=\pm1$. Putting $y=1$ in the second equation we get $x = x- 1$, which is impossible. Putting $y=-1$ in the second equation we get $-x = x+1$, which makes $x=-\frac12$. So the only solution is $x=-\frac12, y=-1$ and the common value of $xy$, $x-y$, and $x/y$ is $\frac12$. |
Doubt in proof of Dual of the direct sum | Well, the two annihilator spaces are not disjoint since $0$ is in both of them, but their intersection is trivial: suppose that $\phi$ is in the annihilator of both $M$ and $N$. Then $\phi(v) = 0$ for all $v \in M$ and all $v \in N$, that is, for all $v \in M \oplus N = V$ by linearity, so $\phi = 0$. |
Find complex roots of quartic function $(3z + 1)(4z + 1)(6z + 1)(12z + 1) = 2$ | Let $z=\frac{x}{12}.$
Thus, $$(3z + 1)(4z + 1)(6z + 1)(12z + 1)-2=$$
$$=\left(\frac{x}{4}+1\right)\left(\frac{x}{3}+1\right)\left(\frac{x}{2}+1\right)(x+1)-2=$$
$$=\frac{1}{24}((x+4)(x+3)(x+2)(x+1)-48)=$$
$$=\frac{1}{24}((x^2+5x+4)(x^2+5x+6)-48)=$$
$$=\frac{1}{24}(x^2+5x+12)(x^2+5x-2)=$$
$$=(12z^2+5z+1)(72z^2+30z-1).$$
Can you end it now? |
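To finish, one solves the two quadratics; a quick numerical check (sketch):

```python
import numpy as np

roots = np.concatenate([np.roots([12, 5, 1]), np.roots([72, 30, -1])])
for z in roots:
    # Each root should satisfy the original equation (3z+1)(4z+1)(6z+1)(12z+1) = 2.
    print(z, (3*z + 1) * (4*z + 1) * (6*z + 1) * (12*z + 1))
```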
Conway Problem, Space of Analytic Functions | The statement is correct if $f_n \to f$ is understood as locally uniform
convergence. I would suggest showing that the following statements
are equivalent:
(1) $f_n \to f$ uniformly on each compact subset $K \subset G$.
(2) $f_n \to f$ uniformly on $\{\gamma\}$ for each closed rectifiable curve $\gamma$ in $G$.
(3) $f_n \to f$ uniformly on the boundary of every closed disk which is completely contained in $G$.
(4) Each point $z \in G$ has a neighborhood $U$ such that $f_n \to f$ uniformly on $U$.
by proving $(1) \Rightarrow (2) \Rightarrow (3) \Rightarrow (4) \Rightarrow (1) $.
Some of the implications are trivial.
For $(3) \Rightarrow (4)$ you can use Cauchy's integral formula. |
Suppose a function $f(x)$ defined on $(a,b)$ is integrable and continuous on $(a,b)$, does this imply that $f(x)$ is uniformly continuous? | Consider the function $f(x) = \frac{1}{\sqrt{x}}$ for $x \in (0,1)$. The function $f$ is continuous on $(0,1)$ and integrable over $(0,1)$ (as an improper Riemann integral at $0$).
$f$ is not uniformly continuous on $(0,1)$, because of the infinite discontinuity at $0$. Check using the Cauchy sequence $\{\frac{1}{n^2}\}$. |
Homomorphisms and exact sequences | Suppose $M' \rightarrow M \rightarrow M'' \rightarrow 0$ is exact.
Since $v$ is surjective, $Hom(M'', N) \rightarrow Hom(M, N)$ is injective.
Let $p\colon M \rightarrow N$ be a homomorphism such that $pu = 0$.
Let $y \in M''$.
There exists $x \in M$ such that $y = v(x)$.
Since $Im(u) = Ker(v) \subset Ker(p)$, $p(x)$ is independent of the choice of $x$.
Hence we get a map $q\colon M'' \rightarrow N$ such that $q(y) = p(x)$.
It is easy to see that $q$ is a homomorphism.
Since $qv = p$, $Ker(Hom(M, N) \rightarrow Hom(M', N)) \subset Im(Hom(M'', N) \rightarrow Hom(M, N))$.
The other inclusion is clear.
Conversely suppose $0 \rightarrow Hom(M'', N) \rightarrow Hom(M, N) \rightarrow Hom(M', N)$ is exact.
I will prove that $M' \rightarrow M \rightarrow M'' \rightarrow 0$ is exact.
Since $Hom(M'', N) \rightarrow Hom(M, N)$ is injective, $v$ is surjective.
So it suffices to prove that $Im(u) = Ker(v)$.
Since $0 \rightarrow Hom(M'', M'') \rightarrow Hom(M, M'') \rightarrow Hom(M', M'')$ is exact,
$vu = 0$. Hence $Im(u) \subset Ker(v)$.
Let $N = M/Im(u)$.
Let $p\colon M \rightarrow N$ be the canonical homomorphism.
Since $pu = 0$, there exists $q\colon M'' \rightarrow N$ such that $qv = p$.
Let $x \in Ker(v)$.
Since $0 = qv(x) = p(x), x \in Im(u)$.
Hence $Ker(v) \subset Im(u)$. |
Conformal map betwen unit disk and simply connected subset of $\mathbb{C}$ with positive derivative at $a$ | Note that certainly $\phi(a)\ne 0$. Hence appending a rotation solves the problem. |
Prove that if $ \ (A\cup B)^c = A^c \cup B^c$ then $ \ A = B$. | Here's a quick way to see it, I'll look over your proof in a second.
$(A \cup B)^c = A^c \cap B^c$ by DeMorgan, and similarly $A^c \cup B^c = (A \cap B)^c$. So your claim is equivalent to $(A \cup B)^c = (A \cap B)^c$, which after taking complements is the same as $A \cup B = A \cap B$, which obviously implies $A = B$.
Your proof seems to work on the same idea as this above, but it seems to be a little bit jumbled. When you say $x \in A \implies x \not \in B^c$, this is actually enough, since not being in $B^c$ is the same as being in $B$. So you actually already showed that $x \in A \implies x \in B$. This shows $A \subseteq B$. But the argument is totally symmetric, so you also can infer from analogous reasoning that $B \subseteq A$, which you then also state. It's just a matter of observing that $A \subseteq B$ and $B \subseteq A$ implies $A = B$. |
Hilbert basis of vector space | Consider all orthonormal systems $B$ containing $X$. By definition put $B_1\leq B_2$ if $B_1\subset B_2$. To prove that a maximal system (whose existence is guaranteed by Zorn's lemma) is a basis, consider its orthogonal complement. |
General Strategies for Convergence of Complex iterations. | This seems impossible.
For convergence to be possible, consider the expression for $b_{n+1}$ and set $a_{\infty}=b_{\infty}=c_0$. Then:
$$c_0 = \left( c_0^m -( c_0-c_0)^m \right)^{\frac 1 m}$$
which means :
$$c_0 = \left({c_0^m}\right)^{\frac 1 m}$$
But in general $\left({c_0^m}\right)^{\frac 1 m}$ has $m$ distinct complex roots and they all have equal standing.
Let's just take $m=2$ as an example.
The roots of $c_0^2=r^2 e^{i2\theta}$ will be :
$$z_{\pm} = \pm r e^{i \theta }$$
But these clearly don't converge.
So in general we can't expect convergence from these iterations. |
Are there any properties of a function that are valid when a function is defined on an open interval but not when it's defined on R? | Since open intervals $(a,b)$ are homeomorphic to $\Bbb{R}$ (both as topological spaces and linearly ordered spaces), every property dealing with continuity and order must hold in both cases, or none of them. Local properties (for example being locally Lipschitz, or locally bounded) behave in the same manner as well. So we need to look for global properties which deal with metric.
The following properties of a function $f$ are true whenever the domain is an open interval, but they need not be true when the domain is $\Bbb{R}$:
if $f$ is uniformly continuous, then it is bounded
if $f$ is bounded and continuous, then it is Lebesgue integrable
if $f$ is continuous and bounded, then its Riemann integral always converges |
How do I find the derived subgroup for this particular group? | If you are given a presentation, it's easy to understand the quotient by the derived subgroup: pretend that all of the generators commute, i.e., add $ab=ba$ as a relation. The presentation then becomes
$$ \langle a,b\mid a^{2n}=b^4=1,\,a^2b^2=1,\,a^2b^{-2}=1,(ab=ba)\rangle.$$
Any word that resolves to the trivial word in this group is in the derived subgroup.
If $n$ is even then this isn't $C_2\times C_2$. If $n$ is odd, then we see that $a^2=b^2$, and $b$ has order dividing $4$. Thus $a$ actually must have order dividing both $2n$ and $4$, i.e., $2$. Then $a^2=b^2$ shows that $b$ has order $2$, and both $a^2$ and $b^2$ lie in $G'$. |
Sum $S(n,c) = \sum_{i=1}^{n-1}\dfrac{i}{ci+(n-i)}$ | We have
$$S(n;c) = \sum_{k=1}^{n-1} \dfrac{k}{ck+(n-k)} = \sum_{k=1}^{n-1} \dfrac{k/n}{1+(c-1)k/n}$$
Hence,
$$\dfrac{S(n;c)}n = \sum_{k=1}^{n-1} \dfrac{k/n}{1+(c-1)k/n} \dfrac1n \sim \int_0^1 \dfrac{xdx}{1+(c-1)x} = \dfrac{(c-1)-\log(c)}{(c-1)^2} \text{ for }c \in (0,1)$$
Hence,
$$S(n;c) \sim \dfrac{(c-1)-\log(c)}{(c-1)^2} n \text{ for }c \in (0,1)$$
Better approximations can be obtained using Euler Maclaurin formula. |
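Comparing the exact sum with the asymptotic form (a sketch; the values of $c$ and $n$ are arbitrary):

```python
from math import log

def S(n, c):
    return sum(k / (c * k + (n - k)) for k in range(1, n))

def asymptotic(n, c):
    return ((c - 1) - log(c)) / (c - 1) ** 2 * n

for c in [0.2, 0.5, 0.9]:
    for n in [100, 1000, 10000]:
        print(c, n, S(n, c) / asymptotic(n, c))  # ratio tends to 1
```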
"Closed" form for $\sum \frac{1}{n^n}$ | Certainly you need to check out Sophomore's Dream. |
Finding Constant Term in Product of Series | This has no constant term, as it contains a factor $z_4^{-6 t}$. |
Finding the complement of a set | $B = \{ x \mid 4 \leq x \leq 7\}$ and $C^c = \{x \mid x < 2 \text{ or } x \geq 6\}$, so
$$B \cap C^c = \{x \mid 4 \leq x \leq 7 \text{ and } ( x < 2 \text{ or } x \geq 6)\} = \{6,7\}.$$ |
Orbit of a vector field on $\Bbb R^2$ | From the conditions on $f$ we easily get that $f$ is strictly increasing and that $f(y) < 0$ for $y < 0$ and $f(y) > 0$ for $y > 0$. Thus if we start at any point $p=(u,v) \in \mathbb R^2$ with $v \neq 0$, the $y$-coordinate of $\Phi_t(p)$ will tend to $\mp \infty$ as $t \to +\infty$ according to $v < 0$ or $v > 0$. However, it goes to $0$ as $t \to -\infty$, and then the $x$-coordinate of $\Phi_t(p)$ will tend, in principle, to exactly one of $-\infty$, $0$, $1$ or $+\infty$ depending on $u$ (in fact, it can go only to $-\infty$ or $1$) $-$ use here the fact that $(0,0)$ and $(1,0)$ are the only fixed points of the flow.
Thus we get that if we start at a point with non-zero $y$-coordinate, we can get (at best) one of the required points in the orbit closure.
So the only candidates for the points with orbit closures containing both $(0,0)$ and $(1,0)$ are the points on the $x$-axis. It is easy to check that the only orbit actually satisfying this closure condition is the orbit of any point $(x,0)$ with $x \in (0,1)$ (look also at the end of the paragraph above). Done. |
Proving that two crossbars of a bisector intersect the midpoint of one of the crossbars | Hint: You already showed that $D$ lies on the circumference of $ABC$.
Hence, apply the theorem which says that the feet of the perpendiculars from $D$ to the sides are collinear (the Simson line).
If you want to prove that, consider the cyclic quads $DEGC$ and $DFGB$. |
Eliminate $t$ from the equations: $x = \frac{1}{t} - t \, , \, y = \frac{1}{t^2} - 1$ | Assuming that the manipulations up till (4) are correct,
$$y^2 - x^2y=x^2$$
$$(y-\frac{x^2}{2})^2-\frac{x^4}{4}=x^2$$
$$(y-\frac{x^2}{2})^2=x^2+\frac{x^4}{4}$$
$y=\frac{x^2}{2}+\sqrt{x^2+\frac{x^4}{4}}$ or $y=\frac{x^2}{2}-\sqrt{x^2+\frac{x^4}{4}}$ |
Finite a.e. assumption of Egorov | This is exactly the problem - you don't have uniform convergence on any nonempty subset of $\Bbb{R}$.
Note that if a sequence $f_n$ converges uniformly on some nonempty set $A$, then it must satisfy the uniform Cauchy criterion, i.e. $||f_n - f_m||_{L^\infty(A)} \to 0$ as $n,m \to \infty$. However, taking e.g. $m=n+1$, we have $||f_n - f_m||_{L^\infty(A)} = 1$ for every $n$ and every nonempty $A$ (in your example, where $f_n = n$). |
Identify given function as Characteristic function | I presume that $\xi$ is a one-dimensional random variable.
Let us first consider the case when $\mathcal P$ is a discrete distribution with p.m.f. $p_k=\mathcal P(\{a_k\})$, $\sum_k p_k=1$. Then
$$
\varphi(t) = \mathbb Ee^{-t^2\xi^2/2}=\sum_{k} p_ke^{-t^2a_k^2/2}.
$$
This is the characteristic function of a product $\xi\eta$ of two independent r.v.'s, where the distribution of $\eta$ is standard normal.
For the general case, let $\mathcal P$ be the distribution of the random variable $\xi$, let the distribution of $\eta$ be standard normal, and let $\eta$ and $\xi$ be independent. Then
$$
\varphi_{\xi\eta}(t)=\mathbb Ee^{it\xi\eta}=\mathbb E\underbrace{\left(\mathbb E(e^{it\xi\eta}\mid \xi)\right)}_{\varphi_{\eta}(t\xi)}=\mathbb E\left(e^{-t^2\xi^2/2}\right)=
\int_{-\infty}^\infty e^{-t^2 x^2/2}\,\mathcal P(dx)
$$
Finally, $\varphi(t)$ is the characteristic function of a product of two independent random variables: one with distribution $\mathcal P$ and one with the standard normal distribution. |
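A Monte Carlo sanity check (a sketch; taking $\mathcal P$ to be an exponential distribution is an arbitrary choice of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
xi = rng.exponential(1.0, 500_000)     # xi ~ P (here exponential, as an example)
eta = rng.standard_normal(500_000)     # eta ~ N(0,1), independent of xi

for t in [0.5, 1.0, 2.0]:
    lhs = np.mean(np.exp(-t**2 * xi**2 / 2))   # phi(t) = E exp(-t^2 xi^2 / 2)
    rhs = np.mean(np.exp(1j * t * xi * eta))   # characteristic function of xi*eta
    print(t, lhs, rhs)  # lhs matches the real part of rhs; imaginary part ~ 0
```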
When log is written without a base, is the equation normally referring to log base 10 or natural log? | In mathematics, $\log n$ is most often taken to be the natural logarithm. The notation $\ln(x)$ is not seen frequently past multivariable calculus, since the logarithm base $10$ finds relatively little use.
This Wikipedia page gives a classification of where each definition, that is base $2$, $e$ and $10$, are used:
$\log (x)$ refers to $\log_2 (x)$ in computer science and information theory.
$\log(x)$ refers to $\log_e(x)$ or the natural logarithm in mathematical analysis, physics, chemistry, statistics, economics, and some engineering fields.
$\log(x)$ refers to $\log_{10}(x)$ in various engineering fields, logarithm tables, and handheld calculators. |
Let $A \in M_n $ be rank 1 PSD; $B = Q^{-\frac{1}{2}} A Q^{-\frac{1}{2}}$, $Q$ is PD. Then can be $B$ be unitarily diagonalized? | $B$ is symmetric since
$$
B^*=(Q^{-1/2}AQ^{-1/2})^*=(Q^{-1/2})^*A^*(Q^{-1/2})^*=Q^{-1/2}AQ^{-1/2}
$$
since $Q^{-1/2}$ and $A$ are symmetric. Hence, $B$ is unitarily diagonalisable.
$Q^{-1/2}$ is symmetric since, if $Q=U^*DU$, where $U$ is unitary and $D$ diagonal with positive elements, then $Q^{-1/2}=U^*D^{-1/2}U$.
Finally,
$$
\text{rank}(B)=\text{rank}(A)=1,
$$
since $Q^{-1/2}$ is invertible. |
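A numerical illustration (a sketch; the random matrices and the eigendecomposition route to $Q^{-1/2}$ are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

v = rng.standard_normal((n, 1))
A = v @ v.T                               # rank-1 PSD

M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)               # PD

w, U = np.linalg.eigh(Q)                  # Q = U diag(w) U^T, w > 0
Q_inv_half = U @ np.diag(w**-0.5) @ U.T   # symmetric square root of Q^{-1}

B = Q_inv_half @ A @ Q_inv_half
print(np.allclose(B, B.T))                # True: B is symmetric
print(np.linalg.matrix_rank(B))           # 1
```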
Ring map-integers proof | $\def\Z{\mathbb Z}$
First suppose that there is a ring morphism $\phi\colon\Z/n\Z \to \Z/m\Z$. Then $0= \phi(0) = \phi(n \cdot 1) = n \cdot \phi(1) = n$, hence $n + m\Z = 0$, which gives $m \mid n$.
If on the other hand $m \mid n$, define $\phi\colon \Z/n\Z \to \Z/m\Z$ by $\phi(a + n\Z) = a + m\Z$, this is well defined: If $a + n\Z = b+n\Z$, then $n \mid a-b$, hence $m \mid a-b$, so $a + m\Z = b+m\Z$. Obviously $\phi$ is a homomorphism. |
How do I rewrite this expression so that it can be used with the binomial theorem? | Here's an alternative approach you might want to consider:
$$\sum_{k=0}^n {n \choose k}5^{3n+k}(-6)^{2k-2} = \dfrac{5^{3n}}{36}\sum_{k=0}^n {n \choose k}5^{k}6^{2k} = \dfrac{5^{3n}}{36}\sum_{k=0}^n {n \choose k} 180^k = \dfrac{5^{3n}}{36}(180+1)^n = \dfrac{5^{3n}}{36}181^n = \dfrac{22625^{n}}{36}$$ |
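The identity can be verified for small $n$ with exact arithmetic (sketch):

```python
from math import comb
from fractions import Fraction

for n in range(1, 6):
    # (-6)^(2k-2) = 36^(k-1); Fraction keeps everything exact
    s = sum(comb(n, k) * 5**(3*n + k) * Fraction(36)**(k - 1) for k in range(n + 1))
    print(n, s == Fraction(22625**n, 36))  # True
```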
How many different possible permutations are there with k digits that add up to n? | Allocating units to digits, it's straight stars-and-bars until $n$ reaches $10$ - there will be ${n+k-1 \choose n}$ options. Each digit position is a class, and increments are added to each, so you need to find the classification options.
For example in the $k=3, n=4$ that you worked in your question, the possibilities are ${6 \choose 4} = 15$ (happily matching the cases you found).
Another example, $k=5, n=8$ would have ${12 \choose 8} = 495$ options.
For $n \ge 10$, you have an added constraint of maximum $9$ units per digit so you would need to subtract these outcomes away from the simple formula above. Each time $n$ reached another multiple of $10$ the adjustment would become more complex to account for the potential for multiple digits breaking the $9$-limit simultaneously.
As a relatively simple example in this case, take $k=5, n=12$. Then the previous calculation gives ${16 \choose 12} = 1820$ options, but some of these would imply digit values greater than $9$ . To find this excess we can preallocate $10$ units to one digit position and recalculate how many of these constraint-breaking options we make with the remaining $2$ units: ${6 \choose 2}= 15$. So the answer here involves removing those options for each digit, $1820 - 5 \times 15 = 1745$ valid options. |
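These counts can be brute-forced for small cases (sketch):

```python
from itertools import product

def count(k, n):
    # k digits, each 0..9, summing to n
    return sum(1 for digits in product(range(10), repeat=k) if sum(digits) == n)

print(count(3, 4))    # 15
print(count(5, 8))    # 495
print(count(5, 12))   # 1745, matching 1820 - 5*15
```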
Prove $\sum_{n=1}^{+\infty}{\left| C_n \right|}<+\infty $ | If I am not grossly mistaken, then I think it works as follows:
Extend $f$ to the whole of $I = [-\pi,\pi]$ by reflecting along the $y$-axis. This gives an even function $g\in C^1[-\pi,\pi].$ Then, $g'$ is an $L^2$ function with Fourier coefficients $c_n\sim O(n^{-1}).$ This means $g$ has Fourier coefficients that are of the order $O(n^{-2})$ and so it is absolutely summable. |
Simpson's Rule and other Newton-Cotes Formulas | But I'm not convinced we should always apply this rule any time we cut
into $2n$ intervals. Why not just throw out the uneven weighting and
use a few more sample points? If the weighting is so helpful, why not
use a more complicated weighting (like the various n-point rules
(Newton-Cotes formulas) ... )?
The problem with Newton-Cotes methods of high order is that it inherits the same sort of problems you see with using high-order interpolating polynomials. Remember that the Newton-Cotes quadrature rules are based on integrating interpolating polynomial approximations to your function over equally spaced points.
In particular, there is the Runge phenomenon: high-order interpolating functions are in general quite oscillatory. This oscillation manifests itself in the weights of the Newton-Cotes rules: in particular, the weights of Newton-Cotes quadrature rules for 2 to 8 points and 10 points (Simpson's is the three-point rule) are all positive, but in all the other cases, there are negative weights present. The reason for insisting on weights of the same sign for a quadrature rule is the phenomenon of subtractive cancellation, where two nearly equal quantities are subtracted, giving a result that has fewer significant digits. By ensuring that all the weights have the same sign, any cancellation that may occur in the computation is due to the function itself being integrated (e.g. the function has a simple zero within the integration interval) and not due to the quadrature rule.
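The sign pattern is easy to inspect (a sketch using SciPy's `newton_cotes`, which returns the weights of the closed rule of a given order):

```python
from scipy.integrate import newton_cotes

# Order r uses r+1 equally spaced points. Per the discussion above, negative
# weights first appear at 9 points (r = 8) and persist past 11 points (r = 10).
for r in range(1, 12):
    weights, _ = newton_cotes(r)
    sign = "all positive" if (weights > 0).all() else "has negative weights"
    print(r + 1, "points:", sign)
```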
The approach of breaking up a function into smaller intervals and applying a low-order quadrature rule like Simpson's is effectively the integration of a piecewise polynomial approximation. Since piecewise polynomials are known to have better approximation properties than interpolating polynomials, this good behavior is inherited by the quadrature method.
On the other hand, one can still salvage the interpolating polynomial approach if one no longer insists on having equally-spaced sample points. This gives rise to e.g. Gaussian and Clenshaw-Curtis quadrature rules, where the sample points are taken to be the roots of Legendre polynomials in the former, and roots (or extrema in some implementations) of Chebyshev polynomials in the latter. (Discussing these would make this answer too long, so I shall say no more about them, except that these quadrature rules tend to be more accurate than the corresponding Newton-Cotes rule for the same number of function evaluations.)
...is Simpson's rule so useful that calculus students should always use it for approximations?
As with any tool, blind use can lead you to a heap of trouble. In particular, we know that a polynomial can never have horizontal asymptotes or vertical tangents. It stands to reason that a polynomial will be a poor approximation to a function with these features, and thus a quadrature rule based on interpolating polynomials will also behave poorly. The piecewise approach helps a bit, but not much. One should always consider a (clever?) change of variables to eliminate such features before applying a quadrature rule. |
Prove $(7)$ is maximal in Gaussian Integers | The maximal ideals of $\mathbb Z[x]$ are of the form $(p, f(x))$ where $p$ is a prime and $f(x)$ is an irreducible polynomial mod $p$.
Given that description, since $\mathbb Z[i]/(7)\cong \mathbb Z[x]/(7, x^2+1)$, you can conclude that $(7)$ is a maximal ideal after confirming $x^2+1$ is irreducible mod $7$.
Notice by the same token $(2)$ is not a maximal ideal in the Gaussian integers. |
How do we evaluate the degree of $x$ using sine law? | You don't need trigonometry to solve for $x$. Observe that $BCE$ is an equilateral triangle, so $BE=BC$, so $BDE$ is an isosceles triangle. From this, it's easy to see that $x=100°$. |
Projection on the hyperplane $H: \sum x_i=0$ | When $H\subseteq\Bbb{R}^n$ is the hyperplane through the origin, and $\vec{n}$ is the normal of $H$, the orthogonal projection $p:\Bbb{R}^n\to H$ is given by the recipe
$$
p(\vec{x})=\vec{x}-\frac{\vec{x}\cdot\vec{n}}{||\vec{n}||^2}\vec{n}.
$$
You can verify the recipe by checking that i) $p(\vec{x})\perp\vec{n}$ for all $\vec{x}$ and ii) $p(\vec{x})=\vec{x}$ whenever $\vec{x}\perp\vec{n}$.
You then get the matrix $P$ by the usual process of calculating the images of your basis vectors. |
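For the specific hyperplane $\sum x_i=0$ the normal is $\vec n=(1,\dots,1)$, so the recipe amounts to subtracting the mean from every coordinate (sketch):

```python
import numpy as np

def project(x):
    # Orthogonal projection onto the hyperplane sum(x_i) = 0, normal n = (1,...,1)
    n = np.ones_like(x)
    return x - (x @ n) / (n @ n) * n   # equivalently: x - x.mean()

x = np.array([3.0, -1.0, 2.0, 4.0])
p = project(x)
print(p, p.sum())                  # the image sums to 0 (up to rounding)
print(np.allclose(project(p), p))  # projecting twice changes nothing

# Matrix form, from the images of the basis vectors: P = I - (1/n) * ones
P = np.eye(4) - np.ones((4, 4)) / 4
print(np.allclose(P @ x, p))       # True
```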
difference between $+\infty$ and $\infty$ | In the context of real Analysis we usually consider
\begin{align*}
\lim_{x \to \infty}f(x)\qquad\text{and}\qquad\lim_{x \to +\infty}f(x)
\end{align*}
to be the same. It has mainly to do with preserving the order of the real numbers when $\mathbb{R}$ is extended by the symbols $+\infty$ and $-\infty$. We look at two references:
Principles of Mathematical Analysis by W. Rudin.
Definition 1.23: The extended real number system consists of the real field $\mathbb{R}$ and two symbols $+\infty$ and $-\infty$. We preserve the original order in $\mathbb{R}$, and define
\begin{align*}
\color{blue}{-\infty < x < +\infty}\tag {1}
\end{align*}
for every $x\in\mathbb{R}$.
(he continues with:) It is then clear that $+\infty$ is an upper bound of every subset of the extended real number system, and that every nonempty subset has a least upper bound.
If, for example, $E$ is a nonempty set of real numbers which is not bounded above in $\mathbb{R}$, then $\sup E=+\infty$ in the extended real number system. Exactly the same remarks apply to lower bounds.
Now we look at certain intervals of real numbers introduced in
Calculus by M. Spivak.
(We find in chapter 4:) The set $\{x:x>a\}$ is denoted by $(a,\infty)$, while the set $\{x: x\geq a\}$ is denoted by $[a,\infty)$; the sets $(-\infty,a)$ and $(-\infty,a]$ are defined similarly.
(Spivak continues later on:) The set $\mathbb{R}$ of all real numbers is also considered to be an "interval" and is sometimes denoted by
\begin{align*}
\color{blue}{(-\infty,\infty)}\tag{2}
\end{align*}
The connection with limits is presented in chapter 5:
The symbol $\lim_{x\rightarrow\infty}f(x)$ is read "the limit of $f(x)$ as $x$ approaches $\infty$," or "as $x$ becomes infinite", and a limit of the form
\begin{align*}
\lim_{\color{blue}{x\rightarrow\infty}}f(x)
\end{align*}
is often called a limit at infinity.
(and later on:) Formally, $\lim_{x\rightarrow\infty}f(x)=l$ means that for every $\varepsilon>0$ there is a number $N$ such that, for all $x$,
\begin{align*}
\text{if }x>N\text{, then }|f(x)-l|<\varepsilon\text{.}
\end{align*}
and we find as exercise 36 a new definition and the following two out of three sub-points
Exercise 36: Define
\begin{align*}
&\lim_{\color{blue}{x\rightarrow-\infty}}f(x)=l\\
\\
&(b) \text{Prove that }\lim_{x\rightarrow\infty}f(x)=\lim_{x\rightarrow-\infty}f(-x)\text{.}\\
&(c) \text{Prove that }\lim_{x\rightarrow 0^{-}}f(1/x)=\lim_{x\rightarrow-\infty}f(x)\text{.}
\end{align*}
Conclusion: When looking at (1) and (2) together with Spivaks definition of limits we can conclude that $\infty$ and $+\infty$ are used interchangeably in the context of limits of real valued functions. |
Can we make any conclusion on the numerators of elements in the fraction division ring $D(x)$? | Being new to noncommutative algebra, one thing you may not realize is that $f^{-1}g$ is a member of the ring of left quotients, and $st^{-1}$ is a member of the right ring of quotients.
It is known that in this case the two rings of quotients are isomorphic, but one needs to be cautious before beginning to equate their elements. Let's say that $\theta$ is the isomorphism from the right ring of quotients to the left ring of quotients.
Then $f^{-1}g=\theta(xy^{-1})=\theta(x)\theta(y)^{-1}$, so you could set $s=\theta(x)$ and $t=\theta(y)$ to indeed get $st^{-1}$. But the wrinkle is that it is not immediately clear you have what you want in terms of the degrees of $f,g,s,t$. This indicates you would next consider whether or not $\theta$ restricted to $D[x]$ preserves degrees.
I hope this gets you started. |
subgroup of $\Bbb Q_p^×$ of index p contains $(\Bbb Q_p^×)^p$ as subgroup | If $G$ is any group (written multiplicatively) and $H$ is any normal subgroup of $G$ of index $n$, then $H$ must contain $G^n$.
This is because the quotient group $G/H$ has order $n$ and hence the $n$-th power of any element of $G/H$ is the identity element.
Translating back to $G$, we see that $g^n \in H$ for any $g \in G$. |
Non linear constraints | You are providing very little information.
In general, the difficulty of an optimization problem depends on whether we can establish general properties for the objective function and the constraints, or not. These properties depend, in turn and among other things, on the functional forms and on the domains specified.
Your objective function is affine, and so convex... But in order for convexity to even be defined, the domain of the function under examination must be a convex set.
The fact that the constraints are non-linear means that we must check whether they have a similar property - you would want them to be convex too, because then you would have a convex minimization problem, which is a well developed field, with many theoretical results and numerical algorithms (and one of its authoritative textbooks is officially free to download). But non-linearity does not exclude convexity. Here too the domain of the $x's$ must be a convex set, to be able to define convexity, i.e. to be able to even check whether it holds. Is it a convex set?
Also, if your constraints are many, a question related to the feasible set arises: what are the actual values the $k$-dimensional vectors of $x$'s are allowed to take, given the constraints? The "feasible set" is the set of values that remain as candidate minimizers after the constraints are applied to the initial domain.
Moreover, are you looking to solve the problem theoretically, or do the various coefficients involved have specific numerical values, so that you want to obtain a specific numerical solution and intend to plug it into a computer?
Perhaps I could expand on this answer (perhaps), if you provided more information, although sometimes, the exact form of the constraints is critical. |
What is the force acting on an object inside a spaceship? | Consider the forces acting on the particle alone. There is the tension and the weight of the particle. So using Newton's Law $$\underline{F}=m\underline{a}\Rightarrow T-mg=ma$$
Hence $T=mg+ma$ |
Limit of function doesn't exist | If $f(x)$ has a finite limit at $x_0$ then $\liminf_{x\to x_0} f(x)=\limsup_{x\to x_0} f(x)\in\mathbb{R}$. Hence if $f(x)$ does not have a finite limit at $x_0$ then we are in one of the following three cases:
1) $\limsup_{x\to x_0} f(x)=+\infty$;
2) $\liminf_{x\to x_0} f(x)=-\infty$;
3) $\limsup_{x\to x_0} f(x)=L \in \mathbb{R}$ and $\liminf_{x\to x_0} f(x)=l\in \mathbb{R}$ with $l<L$.
Now use the definition of $\limsup$ and $\liminf$ in order to find the required sequence $\{z_n\}_{n\geq 1}$:
1) there is a sequence $z_n\to x_0$ such that $f(z_n)\to+\infty$;
2) there is a sequence $z_n\to x_0$ such that $f(z_n)\to-\infty$;
3) for $\epsilon=(L-l)/3>0$, there is a sequence $u_n\to x_0$ such that $f(u_n)>L-\epsilon$ and there is a sequence $v_n\to x_0$ such that $f(v_n)<l+\epsilon$. Let $z_{2n}=u_n$ and $z_{2n-1}=v_n$, then $z_n\to x_0$ but $f(z_n)$ does not have a finite limit because
$$f(z_{2n})-f(z_{2n-1})>(L-\epsilon)-(l+\epsilon)=(L-l)/3>0.$$ |
Describing multivariable functions | So, you need two things:
$$ 4 - x^2 - y^2 \geq 0 $$
to make the square root work, and also
$$ x-y > 0 $$
to make the logarithm work.
You will be graphing two regions in the $xy$-plane, and your answer will be the area which is in both regions.
A good technique for graphing a region given by an inequality is to first replace the inequality by an equality. For the first region this means
$$ 4 - x^2 - y^2 = 0$$
$$ 4 = x^2 + y^2 $$
Therefore, we're talking about the circle of radius two centered at the origin. The next question to answer: do we want the inside or outside of that circle? To determine that, we use a test point: pick any point not on the circle and plug it into the inequality. I'll choose $(x,y) = (5,0)$. Note that this point is on the outside of the circle.
$$ 4 - x^2 - y^2 \geq 0 $$
$$ 4 - 5^2 - 0^2 \geq 0 $$
That's clearly false, so we do not want the outside of the circle. Our region is the inside of the circle. Shade that lightly on your drawing.
Now, do the line by the same algorithm. |
Fourier series expansion of $ x_1(t) = \sum _{-\infty}^{\infty} \Delta (t-2n) $ | You've got the wrong sign (lost a "$-$") when evaluating $$\left[(-t+1)\frac{e^{-jn\omega_0t}}{-jn\omega_0}\right]_0^1.$$ |
Domain decomposition | Your information states that the original domain $\Omega$ is replaced with two domains $\Omega_1$, $\Omega_2$ where $\bar{\Omega} = \bar{\Omega_1} \cup \bar{\Omega_2}$. That is quite general, and might allow some funny selections of the subdomains to yield $\Omega$, so I wonder if some of the other conditions ensure that only reasonable domains are chosen.
The first two new Poisson equations make sense, they require to solve the Poisson equation
$$
-\Delta u_i = f
$$
on each domain $\Omega_i$ and require to respect the boundary condition
$$
u_i = 0
$$ on the shared boundary $\Gamma \cap \partial \Omega_i$ of original and new domain.
The last two equations deal with stitching the solutions together the right way:
They ask that on the shared boundary of the new domains both solutions have the same values
$$
u_1 = u_2
$$
and that their first derivatives (along the normal of the boundary from their viewpoint) are the same.
$$
\partial_n u_1 = - \partial_n u_2
$$
That seems reasonable.
One interesting bit is that nothing is asked for the second order partial derivatives, so the requirements on orders zero and one seem enough.
I have yet to find an argument why this is sufficient, sorry.
At least the second order derivatives should be continuous that way, if I am not wrong.
The other interesting bit is that the identification only is required for the shared boundary, so it must enforce the identity on the whole shared area $\Omega_1 \cap \Omega_2$.
I believe this uniqueness property can be explained by looking at the difference function $v$ of the two (maybe different) solutions $u_1$ and $u_2$:
$$
v = u_1 - u_2
$$
It satisfies
$$
\Delta v = \Delta u_1 - \Delta u_2 = -f + f = 0
$$
so $v$ is a solution of a Laplace equation, thus it is a harmonic function which has the interesting mean value property.
From that one can infer that the solution on a domain is bounded by its minimum and maximum on the boundary (Maximum-Minimum Principle).
See for example P. Duchateau and D.W. Zachmann: Partial Differential Equations, Schaum's Outline.
The difference function vanishes on the boundary thanks to the boundary condition and the above implies that the difference vanishes within the shared domain as well. So the condition just on the boundary is sufficient already. |
Doob's submartingale theorem | Say $X_n = n$ almost surely. Then $\mathbb{E}|X_n| <\infty$, but $\sup_{n}\mathbb{E}|X_n| = \infty$.
In other words, we need the expectations of $|X_n|$ to be uniformly bounded. |
Integral domains examples | $(\mathbb Z/5\mathbb Z)[X]$
$\mathbb Z$ |
p odd prime. Prove that if $a\equiv b\pmod p$ then $a^p\equiv b^p\pmod p^2$. Then show $x^5+y^5=z^5$ has no integer solutions with $5\not\mid xyz$ | Write the Fermat equation of exponent $5$ in the form $x^5+y^5+z^5=0$.
We have $x^5\equiv x \pmod 5$, $y^5\equiv y \pmod 5$ and $z^5\equiv z \pmod 5$, hence $x+y+z\equiv 0\pmod 5$. Without loss of generality we may assume, because $5\nmid xyz$, that $x\equiv y\pmod 5$, hence $x^5\equiv y^5 \pmod{25}$, and $-z^5 \equiv x^5 + y^5 \equiv 2x^5 \pmod{25}$. However, the congruence $x \equiv y \pmod 5$ also implies that
$-z \equiv x + y \equiv 2x \pmod 5$ and $-z^5 \equiv 2^5 x^5 \equiv 32 x^5 \pmod{25}$.
Combining the two results and dividing both sides by $x^5$ yields a contradiction:
$2 \equiv 32 \pmod{25}$. |
probability(french card) | We'll need to define a few events of the cards missing in order to approach this problem.
Let's define the three possible events of the cards missing first:
$0$ aces may be missing. Let's call this event $X$.
$1$ ace may be missing. Let's call this event $Y$.
$2$ aces may be missing. Let's call this event $Z$.
Note the importance of defining the above events in the context of our problem.
Now let's jump into the calculations:
Note the following:
$$P(A) = P(A \cap X) + P(A \cap Y) + P(A\cap Z)$$
Now use the fact that
$$P(A \cap B) = P(A|B)\times P(B)$$
I'm leaving out the details for you to work out.
Cheers! |
Why should we consider $\sqrt{b/a}$ instead of $\frac{b}{a}$? | As pointed out by Daniel Fischer, I need to concern myself with $f(cx, cy, cz)$, not $c \cdot f(x, y, z)$ as I have done. So:
\begin{align}
f(\sqrt{b/a}x, \sqrt{b/a}y, \sqrt{b/a}z) &= (\sqrt{b/a}x)^2 + (\sqrt{b/a}y)^2 - (\sqrt{b/a}z)^2 \\
&= \frac{b}{a}x^2 + \frac{b}{a}y^2 - \frac{b}{a}z^2 \\
&= \left(\frac{b}{a}\right)(x^2 + y^2 - z^2)
\end{align}
And the rest is similar to the question. |
What method is there to denest square roots? | Hint:
Try to write $$\sqrt{4+2\sqrt3}=a+b\sqrt 3$$ and now square it. Try with the assumption that $a,b$ are integers. You get $$4+2\sqrt3 = a^2+3b^2 +2ab\sqrt{3}$$
So try with $a^2+3b^2 = 4$ and $2ab = 2$. |
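Here $a^2+3b^2=4$ and $ab=1$ give $a=b=1$, i.e. $\sqrt{4+2\sqrt3}=1+\sqrt3$; SymPy can confirm the denesting (sketch):

```python
from sympy import sqrt, sqrtdenest, expand

print(sqrtdenest(sqrt(4 + 2*sqrt(3))))  # 1 + sqrt(3)
print(expand((1 + sqrt(3))**2))         # 2*sqrt(3) + 4
```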
Is this math correct for changing the variable of integration so I can put a function in it? | $\int x(t)\,dt$ is, to my understanding, shorthand for $X(t)+c$, where $X(t)$ is some primitive of $x(t)$, i.e. $X'(t)=x(t)$. The expression $\int_0^t x(\tau)\,d\tau$ is a more rigorous way to write the primitive, since $\int_0^t x(\tau)\,d\tau=X(t)-X(0)$; so $c=-X(0)$.
The initial condition may or may not be set at $t=0$, but either way it fixes the value of the constant $X(0)$.
So the rigorous way to write the primitive is $\int_0^t x(\tau)\,d\tau$, which is clearer when many variables are at play, as happens in the general solution of your equation. The move you ask to clarify is $y(t) = e^{-(R/L)t}\int_0^t x(\tau)\,e^{(R/L)\tau}\,d\tau = \int_0^t x(\tau)\,e^{(R/L)(\tau-t)}\,d\tau$ (it's not the right equation, but it's simpler and thus better for thinking about the notation). The factor $e^{-(R/L)t}$ is constant from the point of view of the integrand in $\int_0^t x(\tau)\,e^{(R/L)\tau}\,d\tau$, so you can move it under the integral sign. Think of $t$ being $32$: you do the calculation with the number $32$ in two places and then move one $32$ as you need to; later with $12$, and so on. Obviously, the $t$ in the upper limit of the integral is as constant as the $t$ in any other place. |
relatively open sets | By the definition of a relatively open set, $U$ must be in $D$; but $D=(0,2]$ and $U=[1,2]$, so $U$ is not in $D$ — this is $D$ in $U$ — and your example is not a counterexample. |
Poisson equation in semi-infinite domain | The functions $\{ \cos(n\pi x/L) \}_{n=0}^{\infty}$ are the eigenfunction solutions of
$$
u'' = \lambda u,\;\;\; u'(0)=0,\; u'(L)=0.
$$
So these functions form an orthogonal basis of $L^2[0,L]$. And the functions $\{ \cos(sy)\}_{s \ge 0}$ are eigenfunctions on $[0,\infty)$. You should be able to represent any function $u \in L^2([0,L]\times[0,\infty))$ as
$$
u(x,y) = \sum_{n=0}^{\infty}\left(\int_{0}^{\infty}C_{u}(n,s)\cos(sy)ds\right)\cos(n\pi x/L),
$$
where $C(n,s)$ is determined in the usual way:
$$
C_{u}(n,s)= \frac{2}{L}\int_{0}^{L}\left(\frac{2}{\pi}\int_{0}^{\infty}u(x,y)\cos(sy)dy\right)\cos(n\pi x/L)dx.
$$
(For $n=0$ the outer constant must be adjusted to be $\frac{1}{L}$.) Then
$$
(-n^2\pi^2/L^2-s^2)C_{u}(n,s)=C_{F}(n,s)
$$
The $n=0$ term does not come into play for you because of the condition at $\infty$, which is good because you wouldn't be able to deal with the $n=0$ equation otherwise. So, you only have to consider the above for $n=1,2,3,\cdots$. Once you know $C_{F}(n,s)$--which may be computed from the above--the solution $u(x,y)$ is determined. |
Differentiating $3\int_0^te^uf(u)du$ | $$\dfrac{d}{dt} \left( \int_0^t F(x) dx\right) = F(t)$$ Hence, the answer is just $$3 e^t f(t)$$ |
Why is the modulus $|\cdot|$ not an order on $\mathbb{C}$? | This relation is not an order because it doesn't satisfy the anti-symmetry property:
Anti-symmetry: $\forall a,b \in C: a\le b \wedge b\le a \Rightarrow a = b$
Here's a counter-example: $-1 \le 1 \wedge 1 \le -1$ but $-1 \neq 1$. |
Joint pdf as a product of two independent functions with dependent domain | You would write your joint density $f_{XY}$ as
\begin{align*}
f_{XY}(x,y) = g(x)h(y) \textbf{1}_{\{|x|\leq 1, |y| \leq x^2\}},
\end{align*}
which can no longer be split up into a product of functions depending only on one variable (since you can't factor the indicator into two indicators). |
Can we show that $1_A\text E[X\mid\mathcal F]=1_AY\Leftrightarrow\forall F\in\mathcal F:\text E[1_A1_FX]=\text E[1_A1_FY]?$ | In general, (1) fails: let $\mathcal F$ be the $\sigma$-algebra containing $\emptyset$ and $\Omega$. Then (1) reads
$$
1_A\operatorname E\left[X \right]=1_AY\text{ almost surely}\Leftrightarrow \operatorname E\left[1_A X\right]=\operatorname E\left[1_A Y\right],
$$
which is not true if $A=\Omega$, $X$ and $Y$ have the same expectation but $Y$ is not almost surely constant. |
Number of ordered partitions of N into K distinct parts modulo P | Here is a simple combinatorial proof that does not use hyperplanes.
The number of vectors in $\mathbb F_p^n$ whose entries are pairwise different is $p(p-1)(p-2)\cdots(p-n+1)$. Given such a vector, $\def\x{{\bf x}}\x$, let $f(\x)$ be the vector obtained by adding one to each entry. As long as $n$ and $p$ are coprime, which occurs whenever $p>n$, the sums of the entries of the vectors $$\x,f(\x),f^2(\x),\dots,f^{p-1}(\x)$$ will each have a different remainder modulo $p$. Therefore, exactly one of them will have a sum whose remainder is zero. Since the set of all vectors with pairwise different entries are partitioned into groups like this, the number of such vectors whose sum is congruent to zero is $1/p$ times the total number, which is $(p-1)(p-2)\dots(p-n+1)$. |
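A brute-force check for small $p$ and $n$ (sketch):

```python
from itertools import permutations
from math import prod

def count(p, n):
    # vectors in F_p^n with pairwise distinct entries and zero sum mod p
    return sum(1 for v in permutations(range(p), n) if sum(v) % p == 0)

for p, n in [(5, 2), (5, 3), (7, 3), (7, 4)]:
    predicted = prod(p - j for j in range(1, n))  # (p-1)(p-2)...(p-n+1)
    print(p, n, count(p, n), predicted)           # the two counts agree
```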
What is wrong with the argument $1 = \lim_{n\to \infty} n/n = \lim_{n\to\infty} (1/n+1/n+\dotsb+1/n) = 0 $? | This is an excellent argument that we cannot in general find a limit by taking the limits of the parts of an expression.
When many students are first introduced to limit laws, they see their instructor go through a lot of complicated math in order to prove things that feel obvious. In this case, the relevant one is the addition law:
$$\lim_{x \to c}\left(f(x) + g(x)\right) = \lim_{x\to c}f(x) + \lim_{x\to c}g(x)$$
This seems obvious, right? A limit means "what number does this expression get close to". Of course $f(x) + g(x)$ would get close to the sum of whatever $f(x)$ gets close to and whatever $g(x)$ gets close to. So why does the instructor (or the textbook) spend half a page messing around with $\epsilon$s and $\delta$s to prove the law?
The answer is because of exactly the sort of thing you've pointed out. There are situations where the "intuitive" approach to limits stops working, essentially because infinity is hard. For those situations, we need to rely on the proof. Crucially, in this case, the proof relies on there being only two things added together. This means, if we want to adhere perfectly to the law as stated, we have to jump through hoops like this:
\begin{align*}
\lim_{x \to c} \left(f(x) + g(x) + h(x)\right) &=
\lim_{x \to c} \left(\left(f(x) + g(x)\right) + h(x)\right)\\
&= \lim_{x \to c}\left(f(x) + g(x)\right) + \lim_{x \to c}h(x)\\
&= \lim_{x \to c}f(x) + \lim_{x \to c}g(x) + \lim_{x \to c}h(x)
\end{align*}
We can do the same to deal with four, or five, or five hundred things added together. But how would we deal with $n$ things added together, when $n$ changes over the course of the limit? If we "peel off" one like I did above, there'd still be infinitely many left over. In other words, even with aggressive uses of this limit law, we can only handle sums of fixed size. One that "grows", like $\frac1n + \frac1n + \cdots + \frac1n$ does, can't be handled this way.
To summarize: Many of the limit laws feel like they're just saying "take the limit of the parts of the expression". This isn't true; in fact, they're saying "here is one precise way in which you can find a limit by using the limits of the parts". If you want to do something to a limit that isn't one of the standard limit laws, you're doing something special, which means you'll need to go back to the definition of the limit (or something similar) in order to make sure that what you're doing works. |
Limit in multivariable calculus | Let $t \in A:= \Bbb{R}\setminus\{2k\pi,2k\pi +\pi/2:k \in \Bbb{Z}\}$
$x\log(xy)=x\log{x} +x\log{y}$ thus for $x=r\cos{t}$ and $y=r \sin{t}$
we have that $f(r,t) \to 0$ as $r \to 0$ using the fact that $r\log{r} \to 0$ as $r \to 0$
Since $t$ is arbitrary in $A$ we conclude that the limit exists. |
Compactness of a complete DVR with finite residue field | If the valuation is not discrete, then its image is not a discrete set of the real numbers, say $a = |x|$ is a limit point. Note that we can assume $a=1$ and that we can assume that a sequence whose absolute values approximate $1$ is a sequence in $\mathring K$ (take any approximating sequence and invert those elements not contained in the valuation ring).
Then the open cover $$m = \bigcup_{n \in \mathbb N} \{y \in K ~|~ |y| > \frac{1}{n}\}$$ does not admit a finite subcover.
For the finiteness of the residue field, note that if $A$ is a complete system of representatives for $\mathring K/m$, we have $a-b \in \mathring K^*$ for any distinct $a,b \in A$, thus $|a-b|=1$. Hence $A$ is a discrete subset of a compact space and thus finite. |
(603 · 6004 + 60005) mod 6 is equal to? | ${\rm mod}\ 6\!:\,\ 6x\!+\!\color{#c00}y\equiv \color{#c00}y,\ $ so $\ (6i\!+\!\color{#c00}a)(6j\!+\!\color{#c00}b)+6k\!+\!\color{#c00}c \,\equiv\, \color{#c00}{ab}+\color{#c00}c\ $ by Basic Congruence Rules. Here $603\equiv 3$, $6004\equiv 4$ and $60005\equiv 5$, so the expression is $\equiv 3\cdot 4+5=17\equiv 5\pmod 6$. |
How does this follow from the theorem?[normed linear space] | Let $\alpha = \sup(A), \beta = \inf(B)$, and $\epsilon > 0$, then $\beta-\epsilon \notin B$. Hence, $\exists x_0 \in X$ such that
$$
\|T(x_0)\| > (\beta-\epsilon)\|x_0\|
$$
But then $x_0\neq 0$, so with $y_0 := x_0/\|x_0\|$, we have $\|y_0\| \leq 1$ and thus
$$
\alpha \geq \|T(y_0)\| > \beta-\epsilon
$$
This is true for every $\epsilon > 0$, so $\alpha \geq \beta$ |
Canadian Mathematical Olympiad 1987, Problem 4 | As stated in the comments:
The argument as written is not correct. The initial assumption, that no pair fires on each other, is not possible. The two people $A,B$ at minimal distance from each other must fire at each other. (of course the case where there is only one person is trivial).
Two ways to solve the problem:
Method I: consider that minimal pair $A,B$. We distinguish two cases (according to whether anyone else shoots at either $A$ or $B$).
Since the case $n=1$ is trivial it makes sense to proceed by induction. Let's assume we have a counterexample with minimal $n$ (we will derive a contradiction).
If nobody else shoots at $A,B$ then we can ignore that pair and focus on the $n-2$ remaining people. By the induction hypothesis, at least one of those stays dry and we are done.
If somebody else, $C$ say, shoots at one of them, say $A$, then at least two people shoot at $A$. It follows that the map $F: \{1,\cdots, n\}\to \{1,\cdots,n\}$ which maps the $i^{th}$ person to their target is not injective. Thus it can't be surjective and again we are done.
Method II (sketch). Suppose we had a collection with odd $n$ in which nobody stayed dry. Then consider the shooting pattern. Since it must be the case that everyone shoots at (and is shot at by) a unique person, the collection must break up into distinct closed loops. These can't all have length $2$ since the collection is odd. There must, in fact, be an odd loop of length $>2$. But consider the members of that loop. There must be a minimal distance between any two members of that loop and, as before, we quickly see that those two people can't shoot at anyone else in that loop. Thus the loop is not possible, and we are done. |
a problem involving trigonometry functions | Why not use $\tan 88^\circ = DC/50$, $\tan 84^\circ = DC/(y-50)$ and $\cos 88^\circ = y/x$. Now you have 3 variables, but also 3 equations. The first two equations alone you can use to find $y$ as follows:
$$\frac{y-50}{50}=\frac{\tan 88^\circ}{\tan 84^\circ} \; .$$
No need to find out what $DC$ is. |
Find a non-trivial function | Let $c$ be the finite fixed constant. Note that $Q_k = P(N \ge k) = \displaystyle\sum_{i = k}^{\infty}P(N = i)$.
So, by multiplying both sides of $\displaystyle\frac{1}{Q_k}\sum_{i=k}^{\infty} f^2(i)P(N=i) = c$ by $Q_k$, we get $\displaystyle\sum_{i=k}^{\infty} f^2(i)P(N=i) = \sum_{i = k}^{\infty}cP(N = i)$ for all $k \ge 0$.
Since this holds for all $k \ge 0$, it holds when $k$ is replaced by $k+1$, i.e. $\displaystyle\sum_{i=k+1}^{\infty} f^2(i)P(N=i) = \sum_{i = k+1}^{\infty}cP(N = i)$ for all $k \ge -1$.
Subtracting these two equations gives us $f^2(k)P(N = k) = cP(N = k)$ for all $k \ge 0$.
Thus, for each integer $k \ge 0$, we must have either $f^2(k) = c$ or $P(N = k) = 0$.
If $N$ is a random variable such that $P(N = k) > 0$ for all integers $k \ge 0$, then $f^2(k)$ must be constant. |
Some conclusions about $Alt(\omega)$ | Hint: Suppose that $v_i = v_j$. Let $\tau$ denote the transposition which switches the $i$th and $j$th elements. $v_i = v_j$ means that
$$
\omega(v_{1},..,v_{k}) = \omega(v_{\tau(1)},..,v_{\tau(k)})
$$
and for any $\sigma$, we have
$$
\omega(v_{\sigma(1)},..,v_{\sigma(k)}) = \omega(v_{\sigma (\tau (1))},..,v_{\sigma(\tau(k))})
$$
However, note that $sgn(\sigma \circ \tau) = -sgn(\sigma)$. |
Calculating the integral $ \int_0^t \int_{-s}^s f(y) \,dy\,ds$ . | HINT: first consider
$$ \int_{-s}^{s} f(y) dy $$
see if you can write an expression for this in "two pieces", i.e. over two intervals for $s$ |
Examples for existence proofs using the parity of fixed points of involutions | A more general fact is that the number of fixed points of the action of a finite $p$-group on a finite set is congruent $\bmod p$ to the cardinality of the set itself. This is used, for example, to prove that finite $p$-groups have nontrivial center and it is also part of the proof of the Sylow theorems.
While this isn't an existence proof, it can also be used to give a short proof of Fermat's little theorem: given a positive integer $a$, consider the action of $\mathbb{Z}/p\mathbb{Z}$ by cyclic permutation on the set of words of length $p$ on an alphabet of size $a$. There are $a$ fixed points and $a^p$ words, hence $a^p \equiv a \bmod p$. |
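The orbit counting can be made concrete for small values (a sketch; the helper name and the values $a=3$, $p=5$ are my own choices):

```python
from itertools import product

def orbit_sizes(a, p):
    # sizes of the rotation orbits on words of length p over an alphabet of size a
    seen, sizes = set(), []
    for w in product(range(a), repeat=p):
        if w in seen:
            continue
        orbit = {w[i:] + w[:i] for i in range(p)}
        seen |= orbit
        sizes.append(len(orbit))
    return sizes

a, p = 3, 5
sizes = orbit_sizes(a, p)
print(sizes.count(1), a)                # the fixed points are the a constant words
print(all(s in (1, p) for s in sizes))  # every other orbit has size exactly p
print((a**p - a) % p)                   # 0, i.e. a^p = a (mod p)
```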
find if a function is injective or surjective, involving a set of all functions as well as the power set of the domain. | Suppose $f_1\neq f_2$
Then that implies that there must be at least one specific value of $a_0\in A$ for which $f_1(a_0)\neq f_2(a_0)$
Since $f~:~A\to\{0,1\}$ this implies that one of two things are true:
Case 1: $f_1(a_0)=1$ and $f_2(a_0)=0$
Case 2: $f_1(a_0)=0$ and $f_2(a_0)=1$
Without loss of generality, suppose it was the first case.
Then $g(f_1)\ni a_0$ but $g(f_2)\not\ni a_0$ implying $g(f_1)\neq g(f_2)$
The remaining case is analogous.
We learned then that $f_1\neq f_2\implies g(f_1)\neq g(f_2)$. By contrapositive, that is the same as saying $g(f_1)=g(f_2)\implies f_1=f_2$, the very definition of what it means to be injective.
As noted, if two finite sets have the same cardinality then a function between them is injective if and only if it is also surjective, so we learn that $g$ must also be surjective.
We could prove surjectivity without this however.
Suppose $K\in\mathcal{P}(A)$. We wish to prove that there exists some $f_K~:~A\to\{0,1\}$ such that $g(f_K)=K$.
Indeed, if we define $f_K~:~A\to\{0,1\}$ as the following:
$$f_K(x)=\begin{cases}0&\text{if}~x\notin K\\1&\text{if}~x\in K\end{cases}$$
then we trivially have $g(f_K)=K$ by construction, thereby proving that $g$ is surjective. |
Tangent with Geometry | If one assumes that $x$ in your first equation stands for the height of the slide, then the equation is correct. The second equation (for $y$) is not needed. |
How to prove by induction that $a^{2^{k-2}} \equiv 1\pmod {2^k}$ for odd $a$? | For the case $k=3$, you cannot choose $l$. You must argue that $4l(l+1)$ is a multiple of $8$, which can be done by arguing that $l(l+1)$ must be even.
Your induction step seems ok, but it could be written more simply as
$$
a^{2^{n+1-2}}-1 = a^{2^{n-2}\cdot 2}-1=(a^{2^{n-2}}-1)(a^{2^{n-2}}+1)
$$
and use that $a^{2^{n-2}}+1$ is even. |
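A quick numeric check of the statement (not part of the induction, just reassurance):

```python
for k in range(3, 13):
    for a in range(1, 2 ** k, 2):      # every odd residue mod 2^k
        assert pow(a, 2 ** (k - 2), 2 ** k) == 1
```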
Number of invertible 0-1 matrices | Your guess is almost right ($k$ should start at $0$). This is a particular case of the order of the general linear group over a field $F$.
In your case here is how the proof goes. You first want a non-zero column for your matrix. You have $$2^n-1$$ options to pick such a first column. Then, you want to pick a column that is linearly independent from your first column, so you have a total of $2^n$ possible options for this column minus the multiples of the first column, in this case $2$. For the third column you again have to subtract the linear combinations of the first two columns, in this case $2^2$. Doing so you obtain: $$(2^n-1)(2^n-2)(2^n-2^2)\cdots(2^n-2^{n-1})$$Hope this helps.
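Here is a brute-force check of this count for small $n$, with matrices over the field of two elements encoded as tuples of row bitmasks (a sketch):

```python
from itertools import product

def gf2_rank(rows, n):
    # Standard XOR linear basis: basis[i] holds a vector whose leading bit is i.
    basis, rank = [0] * n, 0
    for row in rows:
        for i in reversed(range(n)):
            if not (row >> i) & 1:
                continue
            if basis[i]:
                row ^= basis[i]        # clear bit i and keep reducing
            else:
                basis[i] = row
                rank += 1
                break
    return rank

def formula(n):
    out = 1
    for k in range(n):
        out *= 2 ** n - 2 ** k
    return out

for n in [2, 3]:
    count = sum(1 for rows in product(range(2 ** n), repeat=n)
                if gf2_rank(rows, n) == n)
    assert count == formula(n)         # 6 for n = 2, 168 for n = 3
```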
logarithmic inequality with different bases | Rewrite the inequality as
$${1\over4}\left({\log5\over\log4}+{\log6\over\log5}+{\log7\over\log6}+{\log8\over\log7}\right)\gt{11\over10}$$
Now apply the AM-GM inequality:
$${1\over4}\left({\log5\over\log4}+{\log6\over\log5}+{\log7\over\log6}+{\log8\over\log7}\right)\ge\sqrt[4]{\log8\over\log4}=\sqrt[4]{3\over2}$$
It remains to observe that
$${3\over2}={15000\over10000}\gt{14641\over10000}=\left({11\over10}\right)^4$$
How many ways can you distribute $10$ identical balls into $5$ distinguishable boxes so that the sum total of balls in the first two boxes equals $6$? | Hint:
For problems of identical balls into distinguishable boxes, you must be familiar with stars and bars:
The problem needs to be broken into $2$ parts: $6$ balls in the first two boxes, $4$ balls in the next three, apply stars and bars for each, and then apply the multiplication principle.
For the first part, taking zero balls in boxes to be permissible, the number of ways = $\binom{6+2-1}{2-1} = 7$, you can verify the formula for this simple case by enumeration.
Proceed..... |
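A brute-force enumeration confirming the two-part count (a sketch):

```python
from itertools import product
from math import comb

count = sum(1 for boxes in product(range(11), repeat=5)
            if sum(boxes) == 10 and boxes[0] + boxes[1] == 6)
# 7 ways for the first two boxes, times C(4+3-1, 3-1) = 15 for the last three.
assert count == comb(6 + 2 - 1, 2 - 1) * comb(4 + 3 - 1, 3 - 1) == 105
```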
How many solutions are there to $x_1 + x_2 + ... + x_5 = 21$? | The system:
$\begin{cases} x_1+x_2+x_3+x_4+x_5=21\\ 0\leq x_1\leq 3\\ 1\leq x_2<4\\ 15\leq x_3\\ 0\leq x_4\\ 0\leq x_5\end{cases}$
can be rewritten via a change of variables as:
$\begin{cases} y_1+y_2+y_3+y_4+y_5=5\\ 0\leq y_1\color{blue}{\leq 3}\\
0\leq y_2\color{blue}{\leq 2}\\
0\leq y_3\\
0\leq y_4\\
0\leq y_5\end{cases}$
Do you see how?
Let $x_1=y_1, x_2=y_2+1, x_3=y_3+15$ and $x_4=y_4$ and $x_5=y_5$. Each of the conditions translate directly as above, including $y_1+y_2+y_3+y_4+y_5=x_1+x_2-1+x_3-15+x_4+x_5=21-1-15=5$
Now, approach via inclusion-exclusion on the first two terms. If we were to ignore the upper bounds, how many solutions are there?
$\binom{5+5-1}{5-1}=\binom{9}{4}$
How many of those solutions violate the first upper bound condition?
That would be when $y_1>3$, i.e. $4\leq y_1$. Making another change of variable, this would be $z_1+z_2+\dots+z_5=1$, $0\leq z_i$, for a total of $\binom{5}{4}=5$
How many of those solutions violate the second upper bound condition?
Approach similarly: $y_2\geq 3$ gives $z_1+z_2+\dots+z_5=2$, for a total of $\binom{6}{4}=15$
How many violate both upper bound conditions simultaneously?
That would be with $x_1\geq 4, x_2\geq 4, x_3\geq 15$ implying that $x_1+x_2+x_3+x_4+x_5\geq x_1+x_2+x_3\geq 23$ and couldn't equal $21$, so none violate both.
Letting $|S|$ be the amount which ignore both upper bound conditions, $|A_1|$ be the amount which violate the first upper bound condition, $|A_2|$ be the amount which violate the second upper bound condition, we have the number of solutions which violate none of the upper bound conditions as:
$$|S|-|A_1|-|A_2|+|A_1\cap A_2|$$
$126 - 5 - 15 + 0 = 106$ |
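A direct enumeration agrees (a sketch):

```python
count = sum(1
            for x1 in range(0, 4)      # 0 <= x1 <= 3
            for x2 in range(1, 4)      # 1 <= x2 < 4
            for x3 in range(15, 22)    # 15 <= x3
            for x4 in range(0, 22)     # 0 <= x4
            if 21 - x1 - x2 - x3 - x4 >= 0)   # x5 is determined; needs >= 0
assert count == 106
```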
How to get the following upper and lower bounds of $E[\sup_{n\geq 0} M_n]$? | For the lower bound notice that on $A^{c}$, $\sup_n M_n \geq 1/2$; and on $A$, by Lévy's zero–one law, $\sup_n M_n =1$. Therefore, $$E (\sup_n M_n) \geq \frac{1}{2}\cdot \frac{1}{2} +\frac{1}{2}\cdot 1=\frac{3}{4}$$
From Doob's maximal inequality $ \mathbb{P}(\sup_n M_n \geq t )\leq \min(1,\frac{1}{2t})$ for all $t \in [0,1]$. Therefore,
\begin{align}
E (\sup_n M_n) &\leq \int_{0}^{1}\min(1,\frac{1}{2t})\mathop{dt} \\
&=\frac{1+\ln (2)}{2}
\end{align} |
Time complexity of writing a number as power of 2's | We assume that initial number has $d$ digits. The test for parity takes constant time, while the divisions by $2$ can be done in linear time $O(d)$ (halve the digits from left to right and carry to the right when odd). As this is repeated $d$ times, the global complexity is $O(d^2)=O(\log^2n)$.
[The length of the number indeed decreases on every division, but the net effect is only to halve the number of operations so that the complexity remains $O(d^2)$.] |
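For concreteness, a minimal Python sketch of the algorithm being analyzed (assuming the goal is the binary expansion of $n$). Python's built-in integers hide the digit-level cost, but the loop structure matches the accounting above:

```python
def powers_of_two(n):
    # Repeatedly test parity (constant time) and halve (O(d) digit work),
    # collecting the exponents of the powers of 2 that sum to n.
    exponents, k = [], 0
    while n > 0:
        if n % 2 == 1:
            exponents.append(k)
        n //= 2
        k += 1
    return exponents

assert powers_of_two(13) == [0, 2, 3]  # 13 = 2^0 + 2^2 + 2^3
```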
Proving input-to-state stability for a set of equilibria | I would suggest to look at the literature on ecological and epidemiological systems as many of those systems have this structure. In fact, this model is the model of a reaction network with mass-action kinetics. So, you may also look at the literature on that topic.
Also, if you are actually working on that, you may need to explicitly consider the positivity of the variables. For instance, you may take the Lyapunov function $V(x,y)=x+y$. In this case, you have $\dot{V}(x,y)=-by$ and you can prove the asymptotic stability of the continuum of equilibrium points. Perhaps, you can use a variation of this Lyapunov function for your purpose. |
How to calculate: $ \lim \limits_{x \to 0^+} \frac{\int_{0}^{x} (e^{t^2}-1)dt}{{\int_{0}^{x^2} \sin(t)dt}} $ | Starting from user1337's answer, consider $$A=\frac{e^{x^2}-1}{2x\sin(x^2)}$$ when $x$ is small.
Now use Taylor expansions $$e^y=1+y+\frac{y^2}{2}+O\left(y^3\right)$$ $$\sin(y)=y-\frac{y^3}{6}+O\left(y^4\right)$$ and replace $y$ by $x^2$ so $$A=\frac{x^2+\frac{x^4}{2}+\cdots}{2x\Big(x^2-\frac{x^6}{6} +\cdots\Big)}=\frac{x^2\Big(1+\frac{x^2}{2}+\cdots\Big)}{2x^3\Big(1-\frac{x^4}{6} +\cdots\Big)}=\frac 1 {2x} \times\frac{1+\frac{x^2}{2}+\cdots}{1-\frac{x^4}{6} +\cdots}$$ When $x\to 0$, the last term goes to $1$ and then $A$ behaves as $\frac{1}{2x}$, so the limit is $+\infty$.
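A quick numeric check that $A$ indeed tracks $\frac{1}{2x}$:

```python
import math

for x in [1e-1, 1e-2, 1e-3]:
    A = math.expm1(x * x) / (2 * x * math.sin(x * x))  # expm1 avoids cancellation
    print(x, A, 1 / (2 * x))           # the last two columns agree more and more
```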
Maximum principle of a differential Equation | We have $-g''(x) = f(x) - g(x)$, so applying your proposition with "$f(x)$" = $f(x) - g(x)$ gives
$$
\|g\|_\infty \le \frac{1}{8}\|f - g\|_\infty \le \frac{1}{8}\|f\|_\infty + \frac{1}{8}\|g\|_\infty,
$$
where we also use the triangle inequality. Rearranging gives $\|g\|_\infty \le \|f\|_\infty/7$.
Now suppose that $g_1$ and $g_2$ are two solutions, then $g_1-g_2$ solves the equation $(g_1-g_2)(x)-(g_1-g_2)''(x) = f(x) - f(x) = 0$, so using our inequality with "$f(x)$" $=0$ gives $\|g_1 - g_2\|_\infty \le \|0\|_\infty/7 = 0$. Thus $\|g_1 - g_2\|_\infty = 0$ so $g_1 = g_2$. Uniqueness is shown. |
How to find the length of a segment in a quadrilateral made by two triangles? | The trick about this problem is that the given angles and lengths show $ABCD$ is inscribed in a circle. This is because
$$AD=DE \Longrightarrow \widehat{DAC}=\widehat{DEA}$$
$$\widehat{DEA} = \widehat{CEB}$$
$$BC=CE \Longrightarrow \widehat{CEB}=\widehat{CBD}$$
$$\Longrightarrow \widehat{DAC} = \widehat{CBD}$$
This is enough to show that the four points $A,B,C \& D$ are on the same circle.
What follows is interesting. From the given figure:
$$\widehat{BCA} = \widehat{ACD} =\alpha$$
And now that we know the points are on the same circle, we can see that
$$ \widehat{ABD} = \widehat{ACD} \quad \& \quad \widehat{ADB} = \widehat{ACB}$$
So:
$$ \widehat{ABD} = \widehat{ADB} \Longrightarrow AD=AB$$
Now, the problem tells us $AB=6$ and $AD=DE$ , so we have $DE=6$, and subtracting from the given length of $BD$ we see that $BE=5$. |
How to prove the uniqueness of the solution of $ax+b=0$? | A standard way of showing that a certain object is unique is to assume that you have two objects that satisfy the desired properties, and deduce that they must be equal (when we say "two objects", we mean two "names", but which may refer to the same object).
In the case of the solutions to the equation $ax+b=0$, you have to distinguish two cases: if $a=0$, then the equation either has no solutions (if $b\neq 0$), or it has infinitely many solutions (if $b=0$).
So uniqueness really only exists when $a\neq 0$. The uniqueness is based on the following fact about real numbers:
For any real numbers $r$ and $s$, if $rs=0$, then $r=0$ or $s=0$.
Once you have that:
Claim. If $a\neq 0$, then there is at most one solution to $ax+b=0$.
Proof. Suppose that both $x$ and $y$ are solutions. We aim to show that $x=y$. Since $x$ is a solution, $ax+b=0$. Since $y$ is a solution as well, $ay+b=0$. That means that $ax+b=ay+b$. Adding $-b$ to both sides we conclude that $ax=ay$. Adding $-ay$ to both sides, we obtain $ax-ay = 0$. Factoring out $a$, we have $a(x-y)=0$. Since the product is $0$, then $a=0$ or $x-y=0$. Since $a\neq 0$ by assumption, we conclude that $x-y=0$, so $x=y$. Thus, if $x$ and $y$ are both solutions, then $x=y$, so there is at most one solution. $\Box$
Note that this argument works in the context of the real numbers, or other kinds of "numbers" where $rs=0$ implies $r=0$ or $s=0$. There are other situations where this is not the case. For example, if you work with "integers modulo 12" ("clock arithmetic", where $11+3 = 2$), then $2x+8 = 0$ has many different solutions $0\leq x\lt 12$: one solution is $x=2$ (since $2(2)+8 = 4+8=12=0$ in clock arithmetic), and another solution is $x=8$ (since $2(8)+8 = 16+8=24 = 0$ in clock arithmetic).
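A one-line check of the clock-arithmetic example:

```python
print([x for x in range(12) if (2 * x + 8) % 12 == 0])   # [2, 8]
```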
$\sin x+\sqrt{5} \cos x$ in a form $c\cdot \sin (x+d)$ | Divide by $c$ on both sides to have $$\frac1c\sin(x)+\frac{\sqrt{5}}{c}\cos(x)=\sin(x+d)$$
You would have the angle addition rule for sine here if
$$\begin{align}
\cos(d)&=\frac1c\\
\sin(d)&=\frac{\sqrt5}{c}
\end{align}$$
Square both sides of both equations and add corresponding sides together to give yourself a quadratic equation in $c$. From there, for each of the two solutions for $c$ (which are $\pm\sqrt{6}$), use the above two equations to deduce what quadrant $d$ is in (if $c>0$, $d$ is in quadrant I, otherwise $d$ is in quadrant III), and then use the appropriate variation on $\arccos$ or $\arcsin$ to solve for $d$. (If $d$ is in quadrant I, $d=\arccos\left(\frac1c\right)+2\pi k$, and if $d$ is in quadrant III, $d=-\arccos\left(\frac1c\right)+2\pi k$.)
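A numeric spot check of the quadrant-I solution (a sketch):

```python
import math

c = math.sqrt(6)
d = math.acos(1 / c)                   # quadrant I: cos d and sin d both > 0
for x in [0.0, 0.7, 1.3, 2.9]:
    lhs = math.sin(x) + math.sqrt(5) * math.cos(x)
    assert abs(lhs - c * math.sin(x + d)) < 1e-12
```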
Properties of $\Bbb Q[X,Y]/(X-Y^2,Y-X^2)$ | Using the relation $Y = X^2$ to eliminate $Y$, the first relation becomes $X - X^4$, so this ring is just isomorphic to $\mathbb Q[X]/(X-X^4)$. Since $X - X^4 = -X(X-1)(X^2+X+1)$ with pairwise coprime factors, the Chinese remainder theorem gives $\mathbb Q[X]/(X-X^4) \cong \mathbb Q[X]/(X) \times \mathbb Q[X]/(X-1) \times \mathbb Q[X]/(X^2+X+1) \cong \mathbb Q \times \mathbb Q \times \mathbb Q(\zeta_3)$
abstract algebra, Gauss' lemma | This is the contrapositive of what is sometimes referred to as Gauss' Lemma. The lemma says that if $f(x)$ factors in $F[x]$, then $f(x)$ factors in $R[x]$. Thus if $f(x)$ is irreducible in $R[x]$, then $f(x)$ is irreducible in $F[x]$. For the sake of completeness, we give a short (almost complete) proof. The following lemmas will be required and should be proven:
Lemma 1. If $R$ is a UFD and $f, g \in R[x]$, then $c(f \cdot g) = c(f) \cdot c(g)$ where $c$ is the content.
Lemma 2. In a UFD, if $N$ divides $z_1 \cdot z_2$, then $N = n_1 \cdot n_2$ for some $n_1 \mid z_1$ and $n_2 \mid z_2$.
Suppose $f(x) \in F[x]$ factors as $f(x)=g(x) \cdot h(x)$ where $g(x),h(x) \in F[x]$. The coefficients of $g(x)$ and $h(x)$ are of the form $a/b$ where $a, b \in R$ and $b \ne 0$. Multiply through by $N = \text{lcm}(\text{all denominators})^2$ to obtain
$$
N \cdot f(x) = k(x) \cdot l(x)
$$
where $k(x), l(x) \in R[x]$. By Lemma 1, taking the content of each side gives $N \cdot c(f) = c(k) \cdot c(l)$. Then $N$ divides $c(k) \cdot c(l)$, so by Lemma 2, $N = n_1 \cdot n_2$ where $n_1 \mid c(k)$ and $n_2 \mid c(l)$. Hence
\begin{align}
n_1 \cdot n_2 \cdot f(x) & = k(x) \cdot l(x) \\
& = c(k) \cdot \tilde{k}(x) \cdot c(l) \cdot \tilde{l}(x) \\
& = n_1 \cdot d_1 \cdot \tilde{k}(x) \cdot n_2 \cdot d_2 \cdot \tilde{l}(x),
\end{align}
where $\tilde{k}, \tilde{l} \in R[x]$ are primitive and $n_1 d_1 = c(k)$, $n_2 d_2 = c(l)$. Canceling $n_1$ and $n_2$ gives
$$
f(x) = \underbrace{d_1 \cdot \tilde{k}(x)}_{\text{in $R[x]$}} \cdot \underbrace{d_2 \cdot \tilde{l}(x)}_{\text{in $R[x]$}} \in R[x].
$$ |
Proof of $\sum\limits_{n=1}^{\infty} \frac{x^n \log(n!)}{n!} \sim x \log(x) e^x$ as $x \to \infty$ | I will try to show the first asymptotics.
We begin with the following quantitative form of Stirling's formula:
Fact. For all $n \geq 0$,
$$ \log (n!) = (n + \tfrac{1}{2})\log(n+1) - n + \mathcal{O}(1). \tag{1} $$
Now let $N_t$ be a Poisson random variable of rate $t$. Then
\begin{align*}
\smash[b]{\sum_{n=0}^{\infty} \frac{t^n \log (n!)}{n!}e^{-t}}
&= \Bbb{E}[\log (N_t !)] \\
&= \Bbb{E}[N_t \log (N_t + 1) + \tfrac{1}{2}\log (N_t + 1) - N_t + \mathcal{O}(1)] \\
&= \Bbb{E}[N_t \log (N_t + 1)] + \tfrac{1}{2}\Bbb{E}[\log(N_t + 1)] - t + \mathcal{O}(1). \tag{2}
\end{align*}
Now we claim the following:
Claim. For any $a \geq 0$ we have
$$ t\log(t+a)
\leq \Bbb{E}[N_t \log(N_t + a)]
= t \Bbb{E}[\log(N_t + a + 1)]
\leq t \log(t+ a + 1). \tag{3} $$
Here, we use the convention that $0 \log 0 = 0$.
Assuming this claim, we easily find that
$$ \Bbb{E}[N_t \log(N_t + 1)] = t \log t + \mathcal{O}(1)
\quad \text{and} \quad
\Bbb{E}[\log(N_t + 1)] = \log t + \mathcal{O}(t^{-1}). $$
Plugging this to $\text{(2)}$ gives
$$ \sum_{n=0}^{\infty} \frac{t^n \log (n!)}{n!}e^{-t}
= (t + \tfrac{1}{2})\log t - t + \mathcal{O}(1)
= \log (t!) + \mathcal{O}(1). $$
Dividing both sides by $t \log t$ yields the first asymptotics.
Proof of Claim. The last inequality of $\text{(3)}$ is easy to prove. Since the function $x \mapsto \log(x+a+1)$ is concave, by Jensen's inequality we have
$$ \Bbb{E}[\log(N_t + a + 1)] \leq \log(\Bbb{E} N_t + a + 1) = \log(t+a+1). $$
In order to show the first inequality of $\text{(3)}$, notice that $x \mapsto x\log(x+a)$ is convex (with the 2nd derivative $(2a+x)/(a+x)^2 > 0$). Thus by Jensen's inequality again
$$ \Bbb{E}[N_t \log (N_t + a)] \geq (\Bbb{E}N_t) \log (\Bbb{E}N_t + a) = t \log(t+a). $$
Finally, the middle equality of $\text{(3)}$ is given by
\begin{align*}
\Bbb{E}[N_t \log (N_t + a)]
&= \sum_{n=1}^{\infty} n \log(n+a) \cdot \frac{t^n}{n!}e^{-t} \\
&= \sum_{n=0}^{\infty} \log(n+a+1) \cdot \frac{t^{n+1}}{n!}e^{-t}
= t \Bbb{E}[\log (N_t + a + 1)].
\end{align*} |
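For what it's worth, a numeric check of the intermediate result $\Bbb{E}[\log(N_t!)] = \log(t!) + \mathcal{O}(1)$, truncating the Poisson series (a sketch):

```python
import math

def expected_log_factorial(t):
    # E[log(N_t!)] for N_t ~ Poisson(t); the pmf is computed in log space.
    top = int(t + 12 * math.sqrt(t) + 25)   # cut far beyond the bulk of the mass
    return sum(math.exp(n * math.log(t) - t - math.lgamma(n + 1))
               * math.lgamma(n + 1) for n in range(1, top))

for t in [10.0, 100.0, 1000.0]:
    print(t, expected_log_factorial(t) - math.lgamma(t + 1))   # stays bounded
```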
Proving that $\left\lfloor\frac{n m -1}{m-1}\right\rfloor, n>0, m>0$ yields only natural numbers that is not a multiple of $m$ | To be clear/ensure I've understood your definition, you have defined a set
$$
S(m) = \{k \in \mathbb{N} \mid k \not\equiv 0 \pmod{m}\},
$$
and when viewed as an increasing sequence $S(m) = \{s_{m,i}\}_{i=1}^\infty$ (where $s_{m,i} < s_{m,i+1}$ for all $i$), your function $A(m,n)$ is simply
$$
A(m,n) = s_{m,n}.
$$
If my interpretation of the problem is correct (as it seems to be), then this follows from the division algorithm.
Note that the only natural $k$ that are omitted from $S(m)$ are the multiples of $m$, i.e.,
$$
S(m) = \bigcup_{q \in \mathbb{N}\cup\{0\}}\{qm+r \mid r=1,2,\ldots,m-1\}. \qquad (\star)
$$
Fix $m,n \in \mathbb{N}$, and assume $m > 1$. (Note that your claim is false if $m=1$.) Write $n = q(m-1) + r$ with $q \geq 0$ and $1 \leq r \leq m-1$; this is the division algorithm with the remainder shifted into $\{1,\ldots,m-1\}$. (The reason for the choice of divisor $m-1$ instead of $m$ is because each of the sets in the union in $(\star)$ is of order $m-1$.)
It's not hard to see from either the definition of $s_{m,n}$ or $(\star)$ that
$$
s_{m,n} = qm+r,
$$
and so it is enough to show that the right hand side of this equation equals $\lfloor (nm-1)/(m-1)\rfloor$. This can be derived as follows:
\begin{align*}
\left\lfloor\frac{nm-1}{m-1}\right\rfloor &= \left\lfloor \frac{(q(m-1)+r)m-1}{m-1} \right\rfloor\\
&= \left\lfloor qm + \frac{rm-1}{m-1} \right\rfloor\\
&= qm + \left\lfloor r + \frac{r-1}{m-1} \right\rfloor\\
&= qm + r,
\end{align*}
where the last step uses $0 \leq \frac{r-1}{m-1} < 1$, which holds because $1 \leq r \leq m-1$.
Intersection of graphs of parametric equations with trig functions | The first curve is a circle with radius $4$ and centre $(2,k)$, the second curve is a circle with radius $1$ and centre $(1,-3)$. The solutions are given by the points on the $x=2$ line such that the distance from $(1,-3)$ is three or five (i.e. $4\pm 1$), so we have four solutions, and they are symmetric with respect to the $y=-3$ line, so the sum of all the possible $k$'s is just $-12$.
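A quick symbolic check, for reassurance (a sketch using sympy):

```python
import sympy as sp

k = sp.symbols('k')
# Tangency: the distance between centres (2, k) and (1, -3) is 4 - 1 or 4 + 1.
sols = (sp.solve(sp.Eq(1 + (k + 3) ** 2, 3 ** 2), k)
        + sp.solve(sp.Eq(1 + (k + 3) ** 2, 5 ** 2), k))
assert sum(sols) == -12
```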
What is Space Time code? Is there any good book/thesis to understand the subject? | For a beginning engineer I shamelessly recommend this book written by my former colleagues. It does not get into some of the newer aspects of the theory such as DMT-optimality and multiuser theory. If you are an engineering major interested in the applications of algebraic number theory in this area, but don't really know much about it, then Frederique Oggier, Emanuele Viterbo, Algebraic Number Theory And Code Design For Rayleigh Fading Channels is a quick read.
==============
Edit/addendum:
For DMT-stuff there is more than enough material available at David Tse's homepage |
Arranging objects in special way | I can keep out $37\%$ of people this way:
1. For $N/16<m\leq N/8$, put $m$ in seat $4m$ and $2m$ in seat $8m$.
2. For $N/4<m\leq N/2, 4\nmid m$, put $m$ in seat $2m$.
3. For odd $N/6<m\leq N/4$, put $m$ in seat $3m$.
4. For odd $3N/20<m\leq N/6$, put $m$ in seat $5m$.
5. For odd $N/8<m\leq3N/20,3\nmid m$, put $m$ in seat $5m$.
6. For odd $N/8<m\leq N/7, 3\mid m$, put $m$ in seat $7m$.
This unseats $2/16+3/16+1/24+1/120+1/120+1/336=157/420$ of the people.
You still have the first $N/16$ people to seat as well.
You can't do better than $1/2$ because everyone shut out has someone sitting in their chair. |