Bernoulli Map: $f'(x) = 2$ Almost Everywhere and "Local Separation" Increases as $2^n$ | What 2. means is that $|f(x)-f(y)|=2\,|x-y|$ if both $x$ and $y$ are on the same side of $1/2$. Iterating, we get $|f^n(x)-f^n(y)|=2^n\,|x-y|$ as long as all the iterates are on the same side of $1/2$; $x$ and $y$ are separated by a factor of $2^n$ when $f$ is applied $n$ times. |
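A quick numerical illustration of this separation (my addition, not part of the original answer; plain Python, no external libraries):

```python
# Iterate the doubling map f(x) = 2x mod 1 on two nearby points and watch
# their distance grow like 2^n while the iterates stay on the same side of 1/2.
f = lambda x: (2 * x) % 1.0

x, y = 0.100000, 0.100001          # two starting points 1e-6 apart
for n in range(1, 8):
    x, y = f(x), f(y)
    print(n, abs(x - y) / 1e-6)    # prints ~2, 4, 8, ..., 128
```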
Show that any invertible matrix has a logarithm. | It suffices to show that each $\lambda$-Jordan block has a logarithm if $\lambda \neq 0$.
First note that the exponential of a Jordan block is
$$
\left[\begin{matrix}
\lambda & 1 & 0 & \dotsb & 0 \\
& \ddots & \ddots & \ddots & \vdots \\
& & \ddots & \ddots & 0 \\
& & & \ddots & 1 \\
& & & & \lambda\\
\end{matrix}\right]
\overset{\exp}{\mapsto}
\left[\begin{matrix}
e^{\lambda} & e^{\lambda} & \frac{e^{\lambda}}{2!} & \dotsb & \frac{e^{\lambda}}{(k-1)!} \\
& e^{\lambda} & \ddots & \ddots & \vdots \\
& & \ddots & e^{\lambda} & \frac{e^{\lambda}}{2!} \\
& & & e^{\lambda} & e^{\lambda} \\[7pt]
& & & & e^{\lambda} \\
\end{matrix}\right].
$$
So, reversing the process, given a Jordan block
$$J=\left[\begin{matrix}
a & 1 & \dotsb & 0 \\
& a & \ddots & \vdots \\
& & \ddots & 1 \\
& & & a
\end{matrix}\right]
$$
with $a\neq 0$, we want to show that $J$ is similar to
$$
\left[\begin{matrix}
a & a & \frac{a}{2!} & \dotsb & \frac{a}{(k-1)!} \\
& a & \ddots & \ddots & \vdots \\
& & \ddots & a & \frac{a}{2!} \\
& & & a & a \\[7pt]
& & & & a \\
\end{matrix}\right],
$$
since we know how to find the logarithm of the above matrix, and since $\log(UMU^{-1})=U\log(M)U^{-1}$.
Now, since the scalar matrix $aI$ has the same form with respect to any basis, we can neglect this part, and just ask if we can conjugate
$$\left[\begin{matrix}
0 & 1 & \dotsb & 0 \\
& 0 & \ddots & \vdots \\
& & \ddots & 1 \\
& & & 0
\end{matrix}\right] \tag{$M_1$}
$$
into
$$
\left[\begin{matrix}
0 & a & \frac{a}{2!} & \dotsb & \frac{a}{(k-1)!} \\
& 0 & \ddots & \ddots & \vdots \\
& & \ddots & a & \frac{a}{2!} \\
& & & 0 & a \\[7pt]
& & & & 0 \\
\end{matrix}\right]. \tag{$M_2$}
$$
First of all, we can see algebraically that these two matrices should be similar, since $M_2= aN + \dfrac{a}{2!}N^2 + \dots + \dfrac{a}{(k-1)!}N^{k-1}$, where $N$ is the elementary nilpotent matrix of size $k$ (here $N=M_1$, incidentally). Taking powers of $M_2$ shows that $M_2^{k-1}\neq 0$ and $M_2^k=0$. Therefore $M_1$ and $M_2$ have the same minimal polynomial $x^k$. I claim they also have the same characteristic polynomial, also $x^k$. The only possibility for the invariant factors of $M_1$ and $M_2$ is $1, 1, \dotsc, 1, x^k$. Therefore they are similar.
That said, I thought I would tackle the more general problem: Suppose we want to conjugate
$$\left[\begin{matrix}
0 & 1 & \dotsb & 0 \\
& 0 & \ddots & \vdots \\
& & \ddots & 1 \\
& & & 0
\end{matrix}\right] \tag{$M_3$}$$
into
$$\left[\begin{matrix}
0 & a_{12} & a_{13} & \dotsb & a_{1k} \\
& 0 & a_{23} & \dotsb & a_{2k} \\
& & \ddots & \ddots & \vdots \\
& & & \ddots & a_{k-1,k}\\
& & & & 0
\end{matrix}\right]\tag{$M_4$}$$
Given that $\mathcal{B}=\{v_1, \dotsc, v_k\}$ is an ordered basis such that $M_3 = [T]_{\mathcal{B}\mathcal{B}}$, conjugating $M_3$ into $M_4$ is equivalent to finding an ordered basis $\mathcal{C} = \{w_1, \dotsc, w_k\}$ such that $[T]_{\mathcal{C}\mathcal{C}}=M_4$.
Let $v_1, \dotsc, v_k$ be the basis realizing $M_3$. Define $w_1, \dotsc, w_k$ as follows: let $w_1=v_1$. Now suppose
$$w_{i-1}=\sum_{1\leq j\leq i-1}c_jv_{j}$$
Then let
$$w_i = \sum_{1\leq j\leq i-1}a_{n-j}c_jv_{j+1}.$$
We need to make sure $w_i$ are linearly independent. I claim they will be iff $M_3$ is similar to $M_4$ iff $M_4^{k-1}\neq 0$. One condition which will guarantee this is:
$$\text{All elements on the superdiagonal of $M_4$ are nonzero.}$$
Perhaps someone can come up with some better conditions for it. |
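As a sanity check of the main claim (my addition, a minimal sketch assuming NumPy and SciPy are available), one can compute a logarithm of a Jordan block numerically and exponentiate it back:

```python
# Every invertible matrix has a logarithm; test on a 3x3 Jordan block with a != 0.
import numpy as np
from scipy.linalg import expm, logm

a = 2.0
J = np.array([[a, 1, 0],
              [0, a, 1],
              [0, 0, a]])

L = logm(J)                      # one matrix logarithm of J
print(np.allclose(expm(L), J))   # True: exp(log J) recovers J
```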
Finding a set with probability satisfying an inequality, part 2 - slightly changed | Given the specified conditions, there will always exist an event $C$ such that
$
{\large{\frac{1}{3}}}\le P(C)\le {\large{\frac{2}{3}}}
$.
Proof:
A preliminary lemma . . .
Lemma:
For all $\epsilon > 0$ and all $Y$ with $P(Y) > 0$ there exists
$X\subset Y$ such that $0 < P(X) < \epsilon$.
Proof of the lemma:
Let $\epsilon > 0$ and let $Y$ be an event with $P(Y) > 0$.
Let $n$ be a positive integer such that ${\large{\frac{1}{n}}} < \epsilon$.
Applying the hypothesis, we can find a partition of $Y$ into $n$ sets $Y_1,...,Y_n$ such that $P(Y_k) > 0$ for all $k\in\{1,...,n\}$.
But then from
$$
P(Y_1)+\cdots +P(Y_n)=P(Y) \le 1
$$
it follows that $P(Y_k) \le {\large{\frac{1}{n}}}$ for some $k\in\{1,...,n\}$; taking $X=Y_k$ gives $0 < P(X) \le {\large{\frac{1}{n}}} < \epsilon$.
This completes the proof of the lemma.
Returning to the main proof . . .
Let $S=\left\{X{\,\colon\,}P(X)\ge{\large{\frac{2}{3}}}\right\}$, and let $w=\inf\{P(X){\,\colon\,}X\in S\}$.
Necessarily $w\ge{\large{\frac{2}{3}}}$.
Choose $X_1,X_2,X_3,...\in S$ such that ${\displaystyle{\lim_{n\to\infty}}} P(X_n)=w$.
For each positive integer $n$, let $Y_n={\displaystyle{\bigcap_{k=1}^n}} X_k$.
First suppose $Y_n\not\in S$ for some positive integer $n$.
Let $m$ be the least positive integer such that $Y_m\not\in S$.
Note that $Y_1\in S$, hence $m > 1$.
From $Y_m=Y_{m-1}\cap X_m$ we get
$$
P(Y_m)
=
P(Y_{m-1}\cap X_m)
=
P(Y_{m-1})+P(X_m)-P(Y_{m-1}\cup X_m)
\ge
\frac{2}{3}+\frac{2}{3}-1
=
\frac{1}{3}
$$
and from $Y_m\not\in S$ we have $P(Y_m) < {\large{\frac{2}{3}}}$, hence we can let $C=Y_m$ and we're done.
Next suppose $Y_n\in S$ for all positive integers $n$.
Let $Y={\displaystyle{\bigcap_{n=1}^\infty}} X_n$.
For all $n$ we have $Y_n\subseteq X_n$, so $P(X_n)\ge P(Y_n)\ge w$.
Then ${\displaystyle{\lim_{n\to\infty}}} P(Y_n)=w$, hence since
$$
Y_1\supseteq Y_2\supseteq Y_3\supseteq\cdots
$$
we get $P(Y)=w$.
Suppose $w > {\large{\frac{2}{3}}}$.
Applying the lemma, there exists $Z\subset Y$ such that $0 < P(Z) < w-{\large{\frac{2}{3}}}$.
Then
$$
P(Y{\setminus}Z)=P(Y)-P(Z)=w-P(Z) > w-\left(w-{\small{\frac{2}{3}}}\right)={\small{\frac{2}{3}}}
$$
so $Y{\setminus}Z\in S$, contradiction, since
$P(Y{\setminus}Z)=P(Y)-P(Z) < P(Y)=w$.
It follows that $w = {\large{\frac{2}{3}}}$, hence since $P(Y)=w$, we can let $C=Y$ and we're done.
This completes the proof.
Note:
As regards your statement suggesting that the conditions
(1)$\;\;P(\{x\}) = 0$ for all $x\in\Omega$.$\\[4pt]$
(2)$\;\;$For all $D \subset \Omega$ with $P(D) > 0$, there is a set $E \subset D$ with $0 < P(E) < P(D)$.
are equivalent, the example I used in my answer to your prior question
Finding a set with probability satisfying an inequality
shows otherwise, since for that example, condition $(1)$ holds, but condition $(2)$ fails. |
The set of natural number functions is uncountable | Cantor's diagonalisation method: if the set were countable, we could list it as $F=\{f_1,f_2,\ldots\}$. But the function $g(i):=f_i(i)+1$ is not in $F$. A contradiction to the initial assumption.
Proof:
For every $i $ we have
$$g(i)=f_i (i)+1 \implies g (i)\ne f_i (i) \implies g \ne f_i $$
and so $g \notin F$. |
Is there a way to visualize, like a picture in mind, the $n$-th derivative? | The tangent line is the first order Taylor polynomial. The $n$th derivative can be "visualized" in the same sense as the $n$th order Taylor polynomial. This will only give a localized significance to the derivative.
There are many other important properties of derivatives. For instance when solving differential equations the exponential functions are very important since they are "their own" derivatives. Then splitting the function into sums of exponentials may help solve the problem. |
Find the largest possible integer n such that $\sqrt{n}+\sqrt{n+60}=m$ for some non-square integer m. | As you have found out it is necessary that $n(n+60)$ is a perfect square. Therefore we want $n(n+60)=(n+r)^2$, or $(60-2r)n=r^2$, for some integer $r\geq0$. It follows that $r$ has to be even, and writing $r=2s$ we arrive at the condition $$(15-s)n=s^2\tag{1}$$ for integer $s$ and $n\geq0$. Considering the cases $0\leq s\leq 15$ in turn we see that the following pairs $(s,n)$ solve $(1)$:
$$(0,0),\quad(6,4),\quad(10,20),\quad(12,48),\quad(14,196)\ .$$
The largest occurring $n$ in this list is $196$, and this $n$ indeed leads to an integer $m=\sqrt{196}+\sqrt{196+60}=30$. As $30$ is not a square, the answer to the question is $196$. |
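A brute-force confirmation of the list above (my addition, plain Python):

```python
# Search for n such that n(n+60) is a perfect square, i.e. sqrt(n)+sqrt(n+60)
# can be an integer; the proof shows all solutions satisfy (15-s)n = s^2, s <= 15.
from math import isqrt

hits = [n for n in range(10_000)
        if isqrt(n * (n + 60)) ** 2 == n * (n + 60)]
print(hits)                       # [0, 4, 20, 48, 196]
print(isqrt(196) + isqrt(256))    # m = 14 + 16 = 30
```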
Prove $\lim_{n}\sup{x_{n+1}\over x_n} < 1$ implies $\lim_n x_n = 0$ | Your proof is wrong because you only proved that, if $n$ is large enough, then $\lvert x_{n+1}\rvert<\lvert x_n\rvert$ and that is not enough to deduce that $\lim_{n\to\infty}x_n=0$, even if you mention the monotone convergence theorem, which has nothing to do with this.
Since $\limsup_n\left\lvert\frac{x_{n+1}}{x_n}\right\rvert<1$, if $c\in\left(\limsup_n\left\lvert\frac{x_{n+1}}{x_n}\right\rvert,1\right)$, then $\left\lvert\frac{x_{n+1}}{x_n}\right\rvert<c$, if $n\geqslant N$, for some $N\in\mathbb N$. So, $\lvert x_{N+1}\rvert\leqslant c\lvert x_N\rvert$, $\lvert x_{N+2}\rvert\leqslant c^2\lvert x_N\rvert$, and, in general, $\lvert x_{N+k}\rvert\leqslant c^k\lvert x_N\rvert$. Since $0<c<1$, we have $c^k\to 0$, and therefore, by the squeeze theorem, $\lim_nx_n=0$. |
Bernoulli like inequality | (Too long for a comment)
Rewrite the inequality for $y\in(0,1)$
$$(1-y)^n<1-ny+\frac{n(n-1)}{2}y^2$$
while
$$(1-y)^n=1-ny+\frac{n(n-1)}{2}y^2+\sum_{k=3}^{n}{n\choose k}(-y)^{k}$$
So it is equivalent to prove
$$\sum_{k=3}^{n}{n\choose k}(-y)^{k}<0$$ |
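A numerical sanity check of that final claim (my addition, plain Python; it samples a grid of values rather than proving the inequality):

```python
# Check that the tail of the binomial expansion of (1-y)^n is negative
# for several n >= 3 and y in (0,1).
from math import comb

for n in (3, 5, 10, 25):
    for y in (0.01, 0.3, 0.7, 0.99):
        tail = sum(comb(n, k) * (-y) ** k for k in range(3, n + 1))
        assert tail < 0, (n, y, tail)
print("tail sum negative at all sampled (n, y)")
```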
A little help in this equation please. | One method which may be used:
Let $f(x)=2^x-3x$
You can see that $f(3)=-1$ and $f(4)=4$. Since the function changes sign from negative to positive, a solution lies in $(3,4)$.
Also, $f^{'}(x)=2^x \ln2-3 $, which is negative until $x=\log_2(3/\ln 2)\approx 2.1$ and positive afterwards, so $f$ decreases up to that point and then increases to $\infty$. Since $f(0)=1>0$ and $f(1)=-1<0$, there is a second root in $(0,1)$; a function that first decreases and then increases crosses the $x$-axis at most twice, so these two roots are the only ones.
Hope it helps. |
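A numerical check of both roots (my addition, assuming SciPy is available):

```python
# Locate the two roots of f(x) = 2^x - 3x with a bracketing root finder.
from scipy.optimize import brentq

f = lambda x: 2 ** x - 3 * x
print(brentq(f, 0, 1))   # ~0.4578, the root in (0, 1)
print(brentq(f, 3, 4))   # ~3.3133, the root in (3, 4)
```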
Need examples of $n^{20}$ & $n^{200}$ to see a pattern. | I expect what they're hoping for you to discover is that, for $n\not\equiv 0\pmod{5}$ and $n\equiv 0\pmod{2}$,
$$\begin{align}
n^{20} &= ? \pmod{100} \\
n^{200} &= ? \pmod{1000} \\
n^{2000} &= 9376 \pmod{10000} \\
n^{20000} &= 09376 \pmod{100000} \\
n^{200000} &= 109376 \pmod{1000000} \\
n^{2000000} &= 7109376 \pmod{10000000}\text{.}
\end{align}$$
These results follow from the binomial theorem, Euler's generalization of Fermat's little theorem, and the Chinese remainder theorem. |
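The stated congruences are easy to check with Python's three-argument `pow` (my addition; I only verify the rows displayed above, for $n=2$, so the two "?" rows stay a surprise):

```python
# n^(2*10^k) mod 10^(k+1) for n = 2 and k = 3..6; leading zeros are dropped.
for k in (3, 4, 5, 6):
    print(pow(2, 2 * 10 ** k, 10 ** (k + 1)))
# 9376, 9376, 109376, 7109376  (the second value is 09376 with its zero dropped)
```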
Clarifying a proof that a certain set is an algebra | We have to show that $\mathcal{A}$ is closed under finite union if we assume that it is closed under pairwise intersection, pairwise disjoint union and complementation. We can show it for the union of two sets (the general case follows by induction). So let us consider two sets $X,Y \in \mathcal{A}$. Denote $W:=X\cap Y \in \mathcal{A}$. Then
$$ X \cup Y = (X \cap W^c) \sqcup (Y \cap W^c) \sqcup W \in \mathcal{A},$$ where the three sets on the right are pairwise disjoint. |
A novelty integral for $\pi$ | The integrand can be broken up as
$$I=\int_0^{\infty} \left(\frac{2 \ln(1+x)\ln\left(\frac{1+x}{2}\right)}{x^{3/2} \ln^2 x} +\frac{\ln(1+x)\ln\left(\frac{1+x}{2}\right)}{x^{3/2} \ln x}-\frac{2\ln(1+x)}{x^{1/2} (1+x) \ln x}\right)dx.$$
But, by integration by parts,
$$\int \frac{2\ln(1+x)}{x^{1/2} (1+x) \ln x} dx= \int \frac{2\ln(1+x)}{x^{1/2} \ln x} d\left(\ln\left(\frac{1+x}{2}\right)\right)
\\=\small\frac{2 \ln(1+x)\ln\left(\frac{1+x}{2}\right)}{x^{1/2} \ln x}-2\int \left( \frac{\ln\left(\frac{1+x}{2}\right)}{x^{1/2} (1+x) \ln x}-\frac{ \ln(1+x)\ln\left(\frac{1+x}{2}\right)}{x^{3/2} \ln^2 x}-\frac{\ln(1+x)\ln\left(\frac{1+x}{2}\right)}{2 x^{3/2} \ln x}\right)dx$$
That is,
$$\int \left(\frac{2 \ln(1+x)\ln\left(\frac{1+x}{2}\right)}{x^{3/2} \ln^2 x} +\frac{\ln(1+x)\ln\left(\frac{1+x}{2}\right)}{x^{3/2} \ln x}-\frac{2\ln(1+x)}{x^{1/2} (1+x) \ln x}\right)dx \\= -\frac{2 \ln(1+x)\ln\left(\frac{1+x}{2}\right)}{x^{1/2} \ln x}+2 \int \frac{\ln\left(\frac{1+x}{2}\right)}{x^{1/2} (1+x) \ln x} dx$$
And the claim follows from Jack D'Aurizio's preliminary result. |
What is the reason behind conditioning on an ancillary statistics? | I think Wikipedia explains it fairly well (read all of that section, including the batting average example). When one statistic (such as the batting average) is not sufficient, it may be sufficient in combination with (or conditioned on) an ancillary statistic (in this case, the number of at-bats). So you are basically correct that conditioning on an ancillary statistic (often) allows us to get something which is conditionally sufficient.
RA Fisher felt that the Fisher Information should also be measured conditionally on an ancillary statistic, so that it measures how much information beyond the ancillary statistic your statistic of choice gives you. So this could be another reason to condition on the ancillary statistic, though unfortunately it's often not practical. |
Types of data in statistics | The nominal-ordinal-ratio-interval scheme is not a scheme for "data." It is a scheme for ${\it scales}$. A list of ordered pairs like (lead, 700 K) does not belong to a scale. So we don't ask what type of scale it is. That is like asking whether (Mars, Jupiter) is a terrestrial planet or a gas giant.
Here is the original paper that introduced this typology: http://personal.stevens.edu/~ysakamot/719/week3/Stevens_Measurement.pdf
If you were to consider only the names of the metals, that would be nominal. If you were to consider only the melting points (in Kelvin), that would be ratio. Both of those are scales and so can be classified according to a typology of scales. |
Show that if $s_n$ converges to $\beta$, then $t_n$ converges to $\beta/2$. | The following result is known:
If a sequence $a_n\stackrel{n\to\infty}{\longrightarrow}a$, then any subsequence of $\{a_n\}_{n\ge 1}$ also converges to $a$.
The observation you have made, if restricted to partial sums, says that $$\begin{aligned}t_{3n}&=\sum_{i=1}^n \left(\frac1{2i-1}-\frac1{4i-2}-\frac1{4i}\right)\\ &= \sum_{i=1}^n \left( \frac1{4i-2}-\frac1{4i}\right)\end{aligned}$$ so if we take the subsequence $\{t_{3n}\}_{n\ge 1}$ of $\{t_n\}$ and rename $t_{3n}$ as $w_n$, then we have $$\begin{aligned} w_n&=\sum_{i=1}^n \left( \frac1{4i-2}-\frac1{4i}\right)\\ &=\frac12-\frac14+\frac16-\frac18+\frac1{10}-\frac1{12}+\cdots-\frac1{4n}\\ &=\frac12 \left(1-\frac12+\frac13-\frac14+\frac15-\frac16+\cdots-\frac1{2n}\right)\\\implies w_n&=\frac{s_{2n}}2\end{aligned}$$
Now, $s_n\to \beta\implies s_{2n}\to \beta$ by the mentioned result, so $w_n=\dfrac{s_{2n}}2\to \dfrac{\beta}2$
After this, if you are willing to write $$\lim_{n\to\infty}t_n=\lim_{n\to\infty} \sum_{i=1}^n\left(\frac1{4i-2}-\frac1{4i}\right)=\lim_{n\to\infty}w_n=\dfrac{\beta}2$$ then you're already done.
Should you want to be more rigorous than that, given what we have proved in the answer, it should suffice to prove that the sequence $\{t_n\}$ itself actually converges so that by the abovementioned result again, any subsequence of $\{t_n\}$, in particular $\{w_n\}$ should also converge to the same limit, which we already have to be $\dfrac{\beta}2$.
$\underline{\text{Can you prove now that the sequence $\{t_n\}$ is convergent?}}$
This is actually a well-studied problem, thanks to Riemann and his Rearrangement Theorem (relevant up to section $3$ here), in particular, you can find a proof of what you want in this paper. What you have given is thus a rearrangement of the alternating harmonic series. |
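A numerical illustration (my addition, plain Python): the partial sums $t_{3n}$ indeed approach $\beta/2=\ln(2)/2$:

```python
# Partial sums of the rearrangement 1/(2i-1) - 1/(4i-2) - 1/(4i).
from math import log

def t3n(n):
    return sum(1/(2*i - 1) - 1/(4*i - 2) - 1/(4*i) for i in range(1, n + 1))

print(t3n(100_000), log(2) / 2)   # both ~0.346574
```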
Prove an upper bound for the determinant of a matrix A | Edit 3: This hint might not be the best way to approach the problem.
Hint: Start with $2\times 2$ matrices and work by induction.
Edit 2:
Show that for a $2\times 2$ matrix with those properties, $|\det A| \leq 1$. Evaluate different cases and convince yourself these are in fact the extrema ($-1$ and $1$). The matrices that achieve these values are:
$$
\begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix},\quad \begin{pmatrix} 1 & 1 \\ 0 & 1\end{pmatrix},\quad\begin{pmatrix} 1 & 0 \\ 1 & 1\end{pmatrix},
$$
and
$$
\begin{pmatrix} 0 & 1 \\ 1 &0\end{pmatrix},\quad \begin{pmatrix} 1 & 1 \\ 1 & 0\end{pmatrix},\quad\begin{pmatrix} 0 & 1 \\ 1 & 1\end{pmatrix}.
$$
Now, if $A$ is your $3\times 3$ we have $\det A = a_{11}\det A_{11} - a_{12}\det A_{12} + a_{13}\det A_{13}$, expanding along the first row (here $A_{1j}$ denotes the corresponding minor). Can you find the matrices such that $|\det A| = 2$ ? ATM I don't know how to prove this is in fact the maximum. |
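An exhaustive check of the $3\times 3$ maximum (my addition, assuming NumPy):

```python
# |det A| over all 512 matrices with entries in {0, 1} peaks at 2.
import itertools
import numpy as np

best = max(abs(round(np.linalg.det(np.array(bits).reshape(3, 3))))
           for bits in itertools.product((0, 1), repeat=9))
print(best)   # 2
```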
Working with Chebyshev inequality | Chebyshev allows you to bound probabilities, not compute them. I don't think this is a good exercise for Chebyshev, but if you want to apply it, here is one thing you can do:
$$P(|X-10| \ge 2) \ge P(X \le 8) = P(0 \le X \le 8) = P(|X-1| \le 7)
\ge 1 - \frac{1}{7^2} \approx 1 - 0.0204.$$
However, if you want to compute this probability, you can do so directly:
$$P(|X-10| \ge 2)
= \int_0^8 e^{-x} \, dx + \int_{12}^\infty e^{-x} \, dx
= 1 - e^{-8} + e^{-12}
\approx 1 - 0.000329
\ge 1 - 0.0204.$$ |
The use of change of variables and the chain rule | There is a typo in Strauss's book. One should have
$$
\partial_t=c\partial_\xi-c\partial_\eta.
$$
Note that both $\xi$ and $\eta$ are functions of $x$ and $t$:
$$
\xi=h(x,t);\quad \eta=g(x,t).
$$
Now, suppose you have a function $u=u(\xi,\eta)$. Then you have
$$
u=u(h(x,t),g(x,t)).
$$
Can you write down, by the chain rule, what $u_x$ and $u_t$ are? |
Given the solution of a RR, how can we find the BIG-O complexity? | What you do is to look at the relative growth rates of the terms. In this case, the fastest-growing one is $c^n$, so this is $O(c^n)$.
In detail, you'd show that $n \log^2 n = O(c^n)$ and $10^c n^c = O(c^n)$, for example by showing that $\lim_{n \rightarrow \infty} \frac{10^c n^c}{c^n} = 0$. |
The probability that there will be 3 cloves with four leaves? | Required probability $=\dbinom{100}{4}\left(\dfrac{1}{100}\right)^4\left(\dfrac{99}{100}\right)^{96}$ |
Largest Normal subgroup | It should be the largest normal subgroup of $G$ contained in $H$. The thing about normal subgroups $N_1, N_2$ is that $N_1N_2$ is still a normal subgroup. That means that the normal subgroup contained in $H$ of greatest cardinality is unique (if there are two, take their product and get a bigger one).
This object is also called the core of $H$, and equal to the intersection of all conjugates of $H$ in $G$, i.e. $\bigcap _{g \in G} g^{-1}Hg $. See http://en.wikipedia.org/wiki/Core_%28group%29
For infinite groups, measuring "largest" by cardinality isn't necessarily the right thing to do. You can look at the poset (in fact, lattice) of normal subgroups. The core is still the unique maximal (by inclusion) normal subgroup contained in $H$. |
If $\gcd(a,b)=1$ prove that $(a^2,b^2)=1$ or $2$. | If $ua+vb=1$, then
$$1^2=(ua+vb)^2=u^2a^2+2uvab+v^2b^2 $$
and
$$ub\cdot a^2+va\cdot b^2=ab,$$
hence
$$(u^2+2u^2vb)a^2+(v^2+2uv^2a)b^2=1, $$ where the cross term $2uvab$ was rewritten using $ab=ub\cdot a^2+va\cdot b^2$. |
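A randomized check of the identity (my addition, plain Python):

```python
# For random coprime a, b, find u, v with u*a + v*b = 1 and verify the identity.
from math import gcd
from random import randint

for _ in range(1000):
    a, b = randint(1, 50), randint(1, 50)
    if gcd(a, b) != 1:
        continue
    u = next(u for u in range(-b, b + 1) if (1 - u * a) % b == 0)
    v = (1 - u * a) // b
    assert (u*u + 2*u*u*v*b) * a*a + (v*v + 2*u*v*v*a) * b*b == 1
print("identity verified on all samples")
```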
Should I write out stuff? | I can't tell you what to do, but for me, I cannot overstate the benefits of writing things out! The things I learn seem to stick much better when I write them out on paper. I take notes all the time when I'm reading as well. Even if I don't save what I've written, it helps.
Maybe this is just me... I don't know, but as I said, for me this really works. Since I started doing this I remember things a lot better, especially definitions and conceptual stuff.
Try it! |
total $\omega$-limit sets of continuous maps on (compact) metric spaces | The union of all the $\omega$-limit sets of a function $f:X \rightarrow X$ $\,(X$ is a metric space) is sometimes denoted by $\Lambda(f),\,$ that is, $\,\Lambda(f) = \bigcup_{x \in X}\,\omega(x,f).$ See, for example, The persistence of $\omega$-limit sets defined on compact spaces by Emma D'Aniello and Timothy H. Steele (2014). I recommend sending an email to Steele (feel free to say I told you this, as I know him), asking him for some pointers to the kind of results you are interested in pursuing. |
Prove that $\operatorname{res}(A(ax),B(ax)) = a^{de}\operatorname{res}(A(x),B(x))$, using the fact that $\sum(i-\sigma(i))=0$ | Before we begin, a note on my convention for Sylvester matrices: I take the version with $e$ columns of $a_i$s followed by $d$ columns of $b_j$s.
Recall that $\det M = \sum_{\sigma\in S_n} \operatorname{sgn}(\sigma)\prod_{i=1}^n m_{i\sigma(i)}$ for an $n\times n$ matrix $M$. Letting $S$ be the Sylvester matrix of the polynomials $A(x)$ and $B(x)$ and $T$ be the Sylvester matrix of the polynomials $A(ax)$ and $B(ax)$, I claim that $$\prod_{i=1}^n t_{i\sigma(i)} = a^{de}\prod_{i=1}^n s_{i\sigma(i)}.$$
The big observation to make here is that when $s_{ij}\neq 0$, then $t_{ij}=a^{i-j}s_{ij}$ or $t_{ij}=a^{i-j+e}s_{ij}$, depending on whether $j\leq e$ or not. Here's an example to help you see this, where $A$ is of degree $3$ and $B$ is of degree $2$: $$S=
\begin{pmatrix} a_0 & & b_0 & & \\
a_1 & a_0 & b_1 & b_0 & \\
a_2 & a_1 & b_2 & b_1 & b_0 \\
a_3 & a_2 & & b_2 & b_1 \\
& a_3 & & & b_2 \end{pmatrix},
T=
\begin{pmatrix} a_0 & & b_0 & & \\
aa_1 & a_0 & ab_1 & b_0 & \\
a^2a_2 & aa_1 & a^2b_2 & ab_1 & b_0 \\
a^3a_3 & a^2a_2 & & a^2b_2 & ab_1 \\
& a^3a_3 & & & a^2b_2 \end{pmatrix}.$$
We therefore obtain $$\prod_{i=1}^n t_{i\sigma(i)} = \left(\prod_{\sigma(i)\leq e} a^{i-\sigma(i)}s_{i\sigma(i)}\right)\left(\prod_{\sigma(i)> e} a^{i-\sigma(i)+e}s_{i\sigma(i)}\right)=a^{de} \prod_{i=1}^n a^{i-\sigma(i)}s_{i\sigma(i)},$$ since $\sigma$ is a permutation and exactly $d$ of the indices $i$ satisfy $\sigma(i)>e$. This simplifies to $$\prod_{i=1}^n t_{i\sigma(i)} = a^{de} \prod_{i=1}^n s_{i\sigma(i)}$$ from the fact $\sum (i-\sigma(i))=0$. |
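A quick symbolic check of the statement (my addition, assuming SymPy; the particular polynomials are arbitrary choices):

```python
# Verify res(A(ax), B(ax)) = a^(d*e) * res(A, B) for deg A = 3, deg B = 2.
from sympy import symbols, resultant, simplify

x, a = symbols('x a')
A = 2*x**3 + 3*x**2 - x + 5
B = x**2 - 4*x + 1

lhs = resultant(A.subs(x, a*x), B.subs(x, a*x), x)
rhs = a**(3*2) * resultant(A, B, x)
print(simplify(lhs - rhs) == 0)   # True
```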
spectral projection of an element in a C*-algebra | Any normal operator $T$ gives rise to some spectral measure $E:Bor(\sigma(T))\to\mathcal{P}(H)$ which maps Borel subsets of the spectrum of $T$ into orthogonal projections in $H$. If you take Borel subset $A\subset\sigma(T)$, then $E(A)$ is called a spectral projection. Search spectral theorem on this site. |
Trigonometry tangent line question | The derivative seems to be incorrect:
$$\left(\frac{\cos x}{2+\sin x}\right)'=\frac{-\sin x(2+\sin x)-\cos^2x}{(2+\sin x)^2}=0\Longleftrightarrow -2\sin x-1=0\Longleftrightarrow$$
$$\sin x=-\frac{1}{2}\Longleftrightarrow x=\begin{cases}\frac{7\pi}{6}\\{}\\\frac{11\pi}{6}\end{cases}\;\;+2k\pi\;\;,\;\;k\in\Bbb Z\;\;\ldots$$ |
Changing $[0,2\pi)$ with $S^1$ such that a map defined on $[0,2\pi)$ stays unchanged | I think what you're looking for is the idea of passing to the quotient.
Theorem. Suppose $X$, $Y$, and $Z$ are topological spaces, and $q\colon X\to Y$ is a quotient map. If $\varphi\colon X\to Z$ is a continuous map that is constant on the fibers of $q$ (meaning that $q(x)=q(y)\implies \varphi(x)=\varphi(y)$), then there is a unique continuous map $\widetilde \varphi\colon Y\to Z$ satisfying $\widetilde \varphi\circ q =\varphi$.
This is proved, for example, in my book Introduction to Topological Manifolds (Theorem 3.73).
This theorem doesn't apply directly in your situation, however, because the map $\operatorname{id} \times p\colon (0,\infty)\times [0,2\pi) \to (0,\infty)\times S^1$ is not a quotient map. It's easy to see what goes wrong in that case: a map $\varphi\colon (0,\infty) \times [0,2\pi) \to \mathbb R^2$ will not descend to a continuous map from $(0,\infty)\times S^1$ unless $\varphi(r,\theta)$ approaches $\varphi(r,0)$ as $(r,\theta)\to (r,2\pi)$. If that's the case, $\varphi$ extends to a continuous map defined on $(0,\infty)\times [0,2\pi]$, and the natural extension of $\operatorname{id} \times p$ to $(0,\infty)\times [0,2\pi]$ is a quotient map.
By the way, there's also a smooth version of this theorem: If $X$, $Y$, and $Z$ are smooth manifolds, $q$ is a smooth surjective submersion, and $\varphi$ is a smooth map, then the same conclusion holds with the additional property that $\widetilde\varphi$ is smooth. (See my Introduction to Smooth Manifolds, Theorem 4.30.) |
Proving if $-1 < x < 1$ then $x^1 + x^2 + \cdots + x^n = \frac{x-x^{n+1}}{1 - x}$ | Simply multiplying all the terms by $x^1$ increases the exponents of $x$ by $1$ in each term. So $x^n\cdot x^1=x^{n+1}.$ |
Interchanging Derivatives and Limits with limits as a dependent variable of the partial deriviative. | We have $$\frac{d}{dx}\int_0^x e^{-\lambda t}dt = e^{-\lambda x}$$ by the fundamental theorem of calculus.
In the situation where the dependence on $x$ is in the integrand rather than the bounds, we can sometimes take the derivative inside the integral like $$\frac{d}{dx}\int_a^bf(x,t)dt =\int_a^b\frac{\partial}{\partial x}f(x,t)dt. $$ Whether this is legal or not requires some technical conditions like the majorizing rule you brought up (related to the dominated convergence theorem).
In the situation where both the bounds, and the integrand depend on $x,$ provided that you have some technical condition like your majorizing rule, we have $$ \frac{d}{dx}\int_{a(x)}^{b(x)}f(x,t)dt = f(x,b(x))b'(x)-f(x,a(x))a'(x)+\int_{a(x)}^{b(x)}\frac{\partial}{\partial x}f(x,t)dt $$
Note this is just applying the fundamental theorem of calculus along with the chain rule at each endpoint and then bringing the derivative inside (which is the thing that requires the technical conditions). |
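A worked check of the full formula on a polynomial example (my addition, assuming SymPy; the integrand and bounds are arbitrary choices):

```python
# d/dx of int_{a(x)}^{b(x)} f(x,t) dt, computed directly and via the formula.
from sympy import symbols, integrate, diff, expand

x, t = symbols('x t')
f = x * t**2                 # integrand depending on both x and t
a, b = x, x**2               # x-dependent bounds

direct = diff(integrate(f, (t, a, b)), x)
leibniz = (f.subs(t, b) * diff(b, x) - f.subs(t, a) * diff(a, x)
           + integrate(diff(f, x), (t, a, b)))
print(expand(direct - leibniz))   # 0
```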
The distance function of the geodesically convex manifold | Yes. As noted on Wikipedia
"The distance function from $p$ is a smooth function except at the point $p$ itself and the cut locus."
The cut locus is the set of all points $q \in M$ such that there is more than one distinct minimizing geodesic between $q$ and $p$. Therefore, $d$ will be smooth at all points other than $p$, and hence $d^2$ will be smooth on all of $M$. |
Solving limit without L'hopital $\lim_{x\to0} \left(\frac{\frac{xe^x}{e^x-1}-1}{x}\right)$ | You can do it with Taylor polynomials if you take $y:=1-e^{-x}$, so the limit is$$\lim_{y\to0}\left(\frac1y+\frac{1}{\ln(1-y)}\right)=\lim_{y\to0}\frac1y\left(1-\frac{1}{1+y/2+o(y)}\right)=\frac12,$$ since $\ln(1-y)=-y\left(1+y/2+o(y)\right)$. |
Does Hartshorne *really* not define things like the composition or restriction of morphisms of schemes? | I agree. On p. 72 locally ringed spaces and their morphisms are defined. Then on p. 73 we have the definition of an isomorphism of locally ringed spaces as a morphism with a two-sided inverse. In principle we need a composition of morphisms for this definition, but Hartshorne doesn't define it there. Instead, he characterizes isomorphisms $(f,f^\#)$ by the property that $f$ is a homeomorphism and $f^\#$ is an isomorphism of sheaves. By the way, with sheaves we have the same problem: On p. 63 isomorphisms of sheaves are defined to be morphisms of sheaves with a two-sided inverse, but no composition of morphisms of sheaves is defined! Finally, on p. 74 morphisms of schemes are defined as morphisms of the underlying locally ringed spaces.
I was a tutor for a lecture on algebraic geometry which was based on the book by Hartshorne. In one exercise one needed to know the precise definition of the composition of two morphisms. Nobody knew it, except for one student. It wasn't explained in the lecture, and the professor didn't even notice that the definition was missing.
One more reason why this book is not the best introduction to algebraic geometry. There are far better introductions, but they often also don't spell out the definition (except of course for EGA I, see Daniel's answer). In Görtz, Wedhorn, Algebraic geometry, the definition is sketched in a remark after Definition 2.29. In Qing Liu, Algebraic Geometry and Arithmetic Curves the definition is sketched in a remark after Definition 2.20. I could not find the definition in Bosch, Algebraic geometry and commutative algebra; Eisenbud, Harris, The geometry of schemes; Ueno, Algebraic Geometry I. In most texts such as Vakil, Foundations of algebraic geometry it is just said "there is an obvious notion of composition of morphisms". |
Existence of two unrelated pairs in a constrained relation | As I commented, this is false for infinite sets. To understand why, I think it is simplest to forget about elements of $T$ that are not in the range of $R$ and then to forget about the distinction between different elements of $S$ that have the same image under $R$ in $T$. When you have done that, you can also forget about induction. In more detail:
The given assumptions on $S$, $T$ and $R$ imply that the range of $R$ has at least two elements. In fact, instead of the first two assumptions, you just need that $S \not= \emptyset$: given $s_1 \in S$, there is $t_1 \in T$ with $(s_1, t_1) \in R$, then $s_2 \in S$ with $(s_2, t_1) \not\in R$ and then $t_2 \in T$ with $(s_2, t_2) \in R$, but then $t_1$ and $t_2$ are both in the range of $R$ and $t_1 \not= t_2$. So we can assume $T$ is the range of $R$ (because that preserves the assumptions on $S$, $T$ and $R$ and results in an equivalent conclusion).
Now let $U$ comprise all subsets $A$ of $T$ such that $A = A_s = \{t \mathrel {|} (s, t) \in R\}$ for some $s \in S$. We need to find $A, A' \in U$ such that $A \mathop{\backslash} A' \not= \emptyset \not= A' \mathop{\backslash} A$. But $U$ is a family of non-empty subsets of $T$ with $\bigcup U = T$, so if no such pair $A$ and $A'$ exists, then $U$ must be a chain. But if $U$ is a chain of subsets of $T$ whose union is $T$ and $T$ is finite, then $T \in U$, but no $A_s = T$, a contradiction. If $T$ is infinite, it certainly can be written as the union of a chain of proper subsets and you get a counter-example, e.g., by taking $S= \mathbb{N}$ and $A_n = \{ i \mathrel{|} i \le n\}$. |
What is the meaning of y = sin(x) over the interval [0,1]? | I am not completely sure what you are asking but I will try to answer. In the context of the question in the title, $[0,1] $ means that they are talking about $y=\sin x $ for values of $x $ on the closed interval from 0 to 1 (rather than, say, all of the real numbers).
In terms of what this means, the sine function just maps a real number to another real number, so though $\sin x$ may not always be rational, it is always just a real number. |
Show that when $BA = I$, the solution of $Ax=b$ is unique | If $BA=I$ then:
$$Ax=b\quad \to \quad BAx = Bb \quad \to \quad x = Bb$$ |
How to convert from cartesian to polar equation with no intercept | Your algebra is incorrect. The proper rearrangement of the equation
$$\sin \theta = 2 \cos \theta$$
is$$
\tan \theta = 2.$$ |
Monotonicity of little lp space. | The correct inequality is $(\sum |a_n|^{2})^{1/2} \leq \sum |a_n|$. (Without the square root on the left the inequality is false.) For a proof, denote the right side by $r$ and let $b_n=\frac {a_n} r$. Then $\sum |b_n|=1$. This implies $|b_n| \leq 1 $ for all $n$. Hence $|b_n|^{2} \leq |b_n|$ and $(\sum |b_n|^{2})^{1/2} \leq \sum |b_n|=1$. Writing $b_n$ as $\frac {a_n} r$ we get the result. |
What was the motivation behind the open covering definition of compactness? | For every question, there is a separate argument why the general notion of compactness is better. I guess that the final definition is a result of gradual improvements. It is hard to explain it simply. So let's go through some examples.
Heine-Borel property
A simple statement of the property is "closed and bounded".
Compactness should be a property of a space, not a property of a set within a space. This is the reason why the definition "closed and bounded" fails. For instance, the set of rational numbers in the interval $[0, 1]$, or the interval $(0,1)$, are also topological (also metric) spaces. They are bounded, and although they are not closed in $\mathbb R$, they are closed in themselves.
Instead of closed sets, we have to use another notion of metric spaces, completeness.
And we are not done, yet. There is still a problem with the boundedness. You can define another metric on the real line, $\rho(x, y) = \min(1, |x-y|)$. This is still a correctly defined metric on $\mathbb R$ and it produces the same topology, so if $\mathbb R$ is not compact, the modified space should still not be compact.
But the space happens to be bounded. So instead of bounded space, we need another notion, totally bounded space.
Now, it finally suffices. Actually, a metric space is compact if and only if it is complete and totally bounded. But
the standard definition is much simpler,
the notion of completeness and total bounded space requires the metric, while the standard definition uses just topology -- open sets. So the standard definition is more general.
Weierstrass Property
i.e. Every sequence has a convergent subsequence.
Sequences are bad in general topology. The reason is that you are used to the fact that in metric spaces, whenever there is a point $x$ in the closure of a set $S$, there is a sequence from $S$ converging to $x$. This is not true in general. For example, you can take the space $\mathbb R$ and add a point $\infty$ (I do not mean the usual $+\infty$) such that $\infty$ is in the closure of a set $S$ if and only if $S$ is uncountable. Then there is no sequence of real numbers converging to $\infty$ although $\infty$ is in the closure of the real numbers.
There are more examples of why sequences are bad, but they are often more complicated and require set theory. An example of a space which is not compact but in which every sequence has a convergent subsequence is $\omega_1$, the first uncountable ordinal with its order topology.
On the other hand, there is an equivalent definition of compactness similar to the Weierstrass Property: For every infinite set $S$ of size $\kappa$, there is a $\kappa$-accumulation point $x$. It means that every open set containing $x$ contains $\kappa$ elements of $S$. |
Show that $A$ is a scalar matrix $kI$ if and only if the minimum polynomial of $A$ is $m(t) = t - k$. | It is clear that $k\operatorname{Id}$ is a root of the polynomial $x-k$ and therefore that its minimal polynomial is $x-k$.
On the other hand, if $x-k$ is the minimal polynomial of $M$, then $M-k\operatorname{Id}=0$, which means that $M=k\operatorname{Id}$. |
How can it be true both that Complete Ordered Fields are unique up to isomorphism and that anything that can prove Peano Arithmetic is incomplete? | The theory of $(\mathbb{R};+,\times)$ is indeed consistent, complete and computably axiomatizable - it happens to be exactly the theory of real closed fields - and of course $\mathbb{N}\subseteq\mathbb{R}$. However, $\mathbb{N}$ is not a definable subset of $(\mathbb{R}; +,\times)$! This prevents the logical complexity of $(\mathbb{N};+,\times)$ from being inherited by $(\mathbb{R};+,\times)$: the latter is bigger, but not more complicated.
What is true - and is what you're gesturing at when you write "anything that can prove Peano Arithmetic is incomplete" - is that if $T$ is any computably axiomatizable theory such that some model of $T$ has a definable copy of $(\mathbb{N}; +,\times)$ then $T$ is not complete, and similarly any consistent computably axiomatizable theory which interprets the (very weak) theory of Robinson arithmetic is not complete. But none of that applies here since we don't have definability of $\mathbb{N}$ in $(\mathbb{R};+,\times)$.
It might be easier to consider a more algebraic example: $(\mathbb{C}; +,\times)$ is algebraically much simpler than $(\mathbb{R};+,\times)$ despite being a larger field. For example, the set of polynomials (in any number of variables) which have a zero is much simpler to describe over $\mathbb{C}$ than over $\mathbb{R}$. Similarly, while $\mathbb{R}$ has no nontrivial automorphisms at all there are lots of automorphisms of $\mathbb{C}$ - including ones which move reals! |
Let $\mathrm P$ be a $2 \times 2$ matrix such that $\mathrm P^{102}= \mathrm O$ | Show that $0$ is the only eigenvalue. Since zero is the only eigenvalue, you can show it is similar to the following matrix:
$$\begin{pmatrix} 0 & x \\ 0 & 0 \end{pmatrix}$$
for some $x$.
From here you can look at each of those cases. |
How many neighbours should a cell have in a cellular automata? | Your question is more aesthetic than mathematical -- it's your cellular automaton, you can design it how you like! (Especially if you don't have a particular goal in mind, but you don't say what that might be.)
The only thing I can suggest is that your neighbourhood encompass at least two other cells. Encompassing only one other cell is certainly possible, but it results in a one-way CA for which a two-dimensional playfield would be overkill. |
How to find the smallest integer X where the Remainder of X / Y = 0 | If $Y$ is not rational, there is no number $X$ that will work. If $Y$ is rational, write it in lowest terms and $X$ is the denominator. So for your examples $0.11=\frac {11}{100}$ gives $X=100$, and $0.5=\frac 12$ gives $X=2$. |
Uniform Convergence and Sup | Here is a sketch:
$$ |f_n(x) -f_n(a)| = |f_n(x) -f(x) - f_n(a) + f(a) +f(x) - f(a)|$$
$$ \leq |f_n(x) -f(x)| + |f_n(a) - f(a)| +|f(x) - f(a)|$$
Now take $\sup_{x \in X}$ on both sides, noting that supremum is subadditive. Then take limits to get your result. |
Iterative method for finding a tangent to circle, through a given point | I think the intersection point between the line through the origin orthogonal to $xb$ and the circle is a good choice. If $f(x)=\frac{1}{(x-b)\cdot(x-b)}(b_2-x_2,b_1-x_1)$, then finding a tangent to the circle means finding a fixed point of $f$, and you can obtain the order of convergence by the standard method for such problems. |
Prove the convergence of a series in dual Banach Space | By the uniform boundedness principle it follows that $(u_n)$ is bounded in $X'$. Moreover,
$$
u(x) = \lim_{n\to \infty} u_n(x)
$$
implies
$$
|u(x)| \le \liminf_{n\to\infty} |u_n(x)| \le \liminf_{n\to\infty} \|u_n\|_{X'} \|x\|,
$$
which shows
$$
\|u\|\le \liminf_{n\to\infty} \|u_n\|_{X'} .
$$ |
Convergence of Integral Implies Uniform convergence of Equicontinuous Family | You essentially proved: Every subsequence of $(f_n)$ has a (sub-)subsequence which converges uniformly to zero (why?).
The first bullet point in the link I provided in a comment asserts that then the sequence $(f_n)$ itself must converge to zero.
Here's why: Suppose $f_{n}$ does not converge uniformly to zero. Then there is an $\varepsilon \gt 0$ and a subsequence $f_{n_j}$ with sup-norm $\|f_{n_j}\|_{\infty} \geq \varepsilon$ for all $j$. This subsequence has again a convergent subsequence by Arzelà-Ascoli. Your argument shows that its limit must be zero, hence the subsequence must have uniform norm $\lt \varepsilon$ eventually, contradiction.
Here's the abstract thing:
Let $x_n$ be a sequence in a metric space $(X,d)$. Suppose that there is $x \in X$ such that every subsequence $(x_{n_j})$ has a subsubsequence $(x_{n_{j_k}})$ converging to $x$. Then the sequence itself converges to $x$.
Edit: As leo pointed out in a comment below the converse is also true: a convergent sequence obviously has the property that every subsequence has a convergent subsubsequence.
The proof is trivial but unavoidably uses ugly notation: Suppose $x_n$ does not converge to $x$. Then there is $\varepsilon \gt 0$ and a subsequence $(x_{n_j})$ such that $d(x,x_{n_j}) \geq \varepsilon$ for all $j$. By assumption there is a subsubsequence $x_{n_{j_k}}$ converging to $x$. But this means that $d(x,x_{n_{j_k}}) \lt \varepsilon$ for $k$ large enough. Impossible!
The way this is usually applied in "concrete situations" is to show
If a subsequence converges then it must converge to a specific $x$. This involves an analysis of the specific situation—this is usually the harder part and that's what you did.
Appeal to compactness to find a convergent subsubsequence of every subsequence—that's the trivial part I contributed. |
Angle between two line segments | The usual rotation matrix approach works. To rotate by an angle $\theta$, you have $x'=x \cos \theta - y \sin \theta, y'=y \cos \theta + x \sin \theta$ If you apply it to all the points, the angles between lines will stay the same. |
Showing that the theory DTO is consistent | We first define $\mathbb Q\models'\varphi[\sigma]$ by induction on the complexity of $\varphi$, for all $\sigma=(s_0,\ldots,s_n)$ such that $s_0\leq\ldots\leq s_n$; suitably rearranging the variables, as follows:
For atomic formulas let $\models'$ be the usual definition of satisfaction. The inductive step definitions for $\neg$ and $\wedge$ are the natural ones.
The tricky case is that of existential formulas: if $\varphi(\bar x)=\exists z\psi(z,\bar x)$, you define $\mathbb Q\models'\varphi[\sigma]$ iff $\mathbb Q\models'\psi[s_0-1,\sigma]$ or $\mathbb Q\models'\psi[\sigma,s_n+1]$ or $\mathbb Q\models'\psi[s_0,\ldots,s_i,\frac{s_i+s_{i+1}}{2},s_{i+1},\ldots,s_n]$ for some $i<n$ or $\mathbb Q\models'\psi[s_i,\sigma]$ for some $i\leq n$; as $\models'$ has been defined for $\psi$.
From this you can now define $\mathbb Q\models'\varphi[\sigma]$, regardless of the order of $\sigma$.
Now, the key part of the exercise is to show that $\models'$ coincides with $\models$ (this part of the argument is not finitistic), i.e., for all formulas $\varphi$ and all $\sigma$: $$\mathbb Q\models\varphi[\sigma]\text{ if and only if }\mathbb Q\models'\varphi[\sigma],$$
which you prove by induction on the complexity of $\varphi$:
If $\varphi$ is atomic, there is nothing to prove by our definition of $\models'$, and the inductive steps for $\neg$ and $\wedge$ are trivial.
Now let us prove the case $\varphi(\bar x)=\exists z\psi(z,\bar x)$.
The $(\Leftarrow)$ direction is clear by the inductive hypothesis.
Now suppose $\mathbb Q\models\varphi[\sigma]$, then there is some $a\in\mathbb Q$ with $\mathbb Q\models\psi[a,\sigma]$. There exists some $b\in\{s_0-1,s_n+1\}\cup\{\frac{s_i+s_{i+1}}{2}:i<n\}\cup\{s_0,\ldots,s_n\},$ such that $b$ is ordered with respect to $s_0,\ldots,s_n$ as $a$ is to $s_0,\ldots,s_n$, then pick an isomorphism $f:(\mathbb Q,<)\longrightarrow(\mathbb Q,<)$ fixing all $s_0,\ldots,s_n$ and $f(a)=b$, then $\mathbb Q\models\psi[b,\sigma]$, thus by the inductive hypothesis $\mathbb Q\models'\psi[b,\sigma]$, and hence $\mathbb Q\models'\varphi[\sigma]$.
Now, the problem is to do this for an arbitrary $DLO$, $(A,<)$, without using choice. This can indeed be done using choice the same way as it was done for $(\mathbb Q,<)$, using instead of $s_1-1,\frac{s_i+s_{i+1}}{2},s_n+1; i<n$ some $a_0,a'_i,a_n\in A$ such that $a_0<s_0$, $s_i<a'_i<s_{i+1}$ and $s_n<a_n$, which exist as $(A,<)$ is a model of $DLO$. But I don't see whether there is a point of doing this with choice if the author asks for a finitistic proof. |
Bioinformatics Probability Question | You have the right idea using a binomial distribution:
For each block of five nucleotides, flip a coin that comes up heads (success) with probability $p$. We define success as finding our specific sequence, so $p=0.25^5$. We require that we only have a single success in our $ 6\cdot10^9-5+1$ flips, so the probability of this is
$$\binom{6\cdot10^9-5+1}{1}\times p(1-p)^{6\cdot10^9-1}\approx 6\cdot10^9 \times p(1-p)^{6\cdot10^9}\approx 0.$$ |
Finding limit of $p$-norm of succession of functions | $\newcommand{\xint}{\int\limits}
\newcommand{\dd}{\mathrm{d}}$
As the comments state, if $f,g$ are two Lebesgue-$L^p$ functions, surely their supports $A,B\subseteq\mathbb{R}^n$ are of finite measure. If we take $f_n(x):= f(x+x_n)$ for some $x_n\in\mathbb{R}^n$, we can conclude the support $A_n$ of $f_n$ is $A-x_n$. Indeed take a point $x\in A$. That means $f(x)\neq0$. If we take $x-x_n$, then $f_n(x-x_n)=f(x-x_n+x_n)=f(x)\neq0$, so $x-x_n\in A_n$. Therefore $A-x_n\subseteq A_n$. Viceversa, let $x\in A_n$. Then $f_n(x)\neq0$ But $f_n(x)=f(x+x_n)$, so $x+x_n\in A\implies x\in A-x_n$. Thus, $A_n\subseteq A-x_n$, which combined with the other inclusion gives an equality. A similar argument shows that letting $g_n(x):=g(x-x_n)$ we obtain that the support $B_n$ of $g_n$ is $B+x_n$. Summing up, we have:
$$\left\lbrace\begin{array}{@{}l@{}}
A_n=A-x_n \\
B_n=B+x_n
\end{array}\right. \quad\quad\ast$$
for all $n\in\mathbb{N}$. Let $x\in A_n,\,y\in B_n$ for some $n\in\mathbb{N}$. Then $x=a-x_n$ for $a\in A$ and $y=b+x_n$ for $b\in B$, by $\ast$. So their distance is $|x-y|=|(a-x_n)-(b+x_n)|=|a-b-2x_n|$. Now as said in the comments from the triangle inequality $|x|\leq|x-y|+|y|$, and carrying $|y|$ to the left and applying an argument of symmetry yields $|\|x\|-\|y\||\leq\|x-y\|$, where I had to use $\|\|$ to avoid confusing the norm with the absolute value. If we apply this above we get:
$$d(x,y)=|x-y|=|a-b-2x_n|\geq2|x_n|-d(A,B),$$
where $d(A,B)$ is the infimum of $d(a,b)$. So fixing any $(a,b)\in A\times B$, the shifted points are further apart than $2|x_n|-d(A,B)$, which tends to infinity for $n\to\infty$. This means that for all $(a,b)\in A\times B$ there exists an $n\in\mathbb{N}$ such that $d(a-x_n,b+x_n)>0$ strictly. Since $|x_n|\uparrow+\infty$, we can find an $n$ as above described such that the strict positivity holds for all $m\geq n$. Note that the inequality above holds for all couples, meaning the supremum of such $n$ is necessarily finite. This implies that, for $n$ sufficiently big, $A_n\cap B_n=\varnothing$, where by "sufficiently big" I clearly mean "greater than or equal to that supremum". Since we are taking the limit, we are not interested in what happens before the supremum. So we can, without any loss of generality, assume that supremum is 1. This is merely to write what follows without repeating "for $n$ etc" every time. Let's see the norm. Inside it we have an integral. Obviously, integrating over $\mathbb{R}^n$ is the same as integrating over the support of the integrand. But that support is $A_n\cup B_n$, which is a disjoint union, and the integral is additive over the integration domain. So we can write:
$$\|f(x+x_n)+g(x-x_n)\|_p=\sqrt[p]{\xint_{A_n\cup B_n}|f(x+x_n)+g(x-x_n)|^p\mathrm{d}m}=$$ $$=\sqrt[p]{\xint_{A_n}|f(x+x_n)+g(x-x_n)|^p\mathrm{d}m+\xint_{B_n}|f(x+x_n)+g(x-x_n)|^p\mathrm{d}m}.$$
Now the disjointness of the two supports also implies the first integral reduces to that of $|f(x+x_n)|^p$, since $g(x-x_n)$ is zero, and the second one analogously reduces to that of $|g(x-x_n)|^p$. By translation invariance of $m$, we get that those two integrals are the $p$-th powers of the $p$-norms:
$$\|f(x+x_n)+g(x-x_n)\|_p=\sqrt[p]{\|f\|_p^p+\|g\|_p^p}.$$
Now that very rarely reduces to the sum of the $p$-norms, except for the trivial cases when one of the norms (or both) is 0 or $p=1$. So your arguments were correct, but your conclusion wasn't careful enough. Of course, the limit value you gave is greater than or equal to this one: just take the $p$-th powers and see the extra terms on the side of your limit, which are all positive since they are positive-coefficient products of norms, which are non-negative. Note that this is a limit, since the disjointness and all that follows only holds for $n$ above the supremum above mentioned.
Edit: I wasn't too explicit on this, but the fact that the $p$-norms of the shifted functions are all equal to those of the unshifted ones is due to the translation invariance of the Lebesgue measure, which grants the equality for the integrals, as can be seen by the definition with the repartition function. You can easily add details to this, I take it.
Edit 2: In fact, I just realized there are $L^p$ functions with infinite support, meaning it is not compact, as you assumed, but most relevantly that the supremum above is in fact $+\infty$. If you opportunely connect $\frac{1}{x^2+y^2}$ and $x^2+y^2$, for example, you can easily see by switching to polar coordinates that the resulting function is $L^p$ for any $p$. However, it is never zero. The above limit holds only if the supremum is finite, because otherwise the supports of the shifted functions are never disjoint. I guess, however, it can still be a good approximation of the actual limit, since for very big $n$'s the intersection of the supports is a place where both shifted functions are really close to zero. That is required by the $L^p$ condition, so in general the limit will be a little greater than the $p$-th root of the sum of the $p$-th powers of the $p$ norms. When it is not equal, you can give a range of values for the limit, since it is less than the sum of the $p$ norms as I showed in a comment somewhere. |
Inequality with a square root | HINT
You can't always square inequalities.
You should make sure that both sides are positive. And you also have to make sure the radicand is positive.
However, in this case, the inequality is not too hard to prove for $-2 \le x<0$ (the left-hand side is non-negative there), so it does not really matter. Now you merely have to check $x \ge 0$.
Now, you can square both sides and get $$x+2 \ge x^2 \Leftrightarrow (x+1)(x-2) \le 0$$
When faced with problems such as $$f(x) \ge g(x)$$Where both sides involve square roots, you should split it into two cases (Well, maybe not just two.) in order to avoid the above mentioned problem. |
Second Order Differential Equation Repeated Eigen Values | This is a standard situation. When the indicial equation has a repeated root $\lambda$, you have $e^{\lambda t}$ and $te^{\lambda t}$ as solutions. |
Integrate using trigonometric substitution. Am I on the right path? | Yes, your solution so far is correct. Now, to integrate $\int \sin^2{\theta}\,d\theta$, use the half-angle formula for the sine function:
$$
\sin^2{\theta}=\frac{1-\cos(2\theta)}{2}.
$$
Also, a bit later, you're going to need this formula:
$$
\sin{(2\theta)}=2\sin{\theta}\cos{\theta}.
$$ |
how to find the integral of a rational logarithmic function | Notice that $\frac 1 x$ is the derivative of $\log x$, so your integral has the form
$$\int f(x)f'(x)dx$$
can you take it from here? |
The derivative of $x^TAx$ w.r.t $t$ | So I suppose $x = x(t)$ and $A$ is a constant matrix. Then by product rule
$$P' = (x^TAx)' = (x')^T Ax + x^T Ax' = (Bx)^T Ax + x^TA Bx \ .$$
Not sure how you get $A^T$ and that $2$.
Edit: As suggested by sleeve chan in the comment, the transpose comes from the fact that $(Bx)^T Ax$ is really a scalar (a $1\times 1$ matrix), so it equals its own transpose:
$$(Bx)^T Ax = \big((Bx)^T Ax\big)^T = x^TA^TBx $$
put this back into the original equation you have
$$P' = x^T(A+A^T) Bx\ .$$
You have to be more careful about interchanging the order of your multiplication. You can't do that arbitrarily when dealing with matrices.
Still don't know how you get a 2. |
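A symbolic check of the final formula along an actual trajectory of $x'=Bx$ (my addition, assuming SymPy; the matrices are arbitrary, with $B$ chosen diagonalizable). One plausible source of the questioner's factor $2$: if $A$ is symmetric then $A+A^T=2A$, though that is my guess.

```python
# Verify P' = x^T (A + A^T) B x when x(t) = exp(Bt) x0 solves x' = Bx.
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 2], [3, 4]])
B = sp.Matrix([[0, 1], [-2, 3]])           # eigenvalues 1 and 2
x = (B * t).exp() * sp.Matrix([1, 1])      # trajectory with x' = Bx

P = (x.T * A * x)[0]
claimed = (x.T * (A + A.T) * B * x)[0]
print(sp.simplify(sp.diff(P, t) - claimed))   # 0
```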
Show that the Cauchy distribution, whose density is $f\left(x\right)=\frac{1}{\left[\pi\left(1+x^{2}\right)\right]}$ does not possess finite moments. | $\int_{\mathbb R}\frac{|x|^n}{1+x^2}\,dx$ does not exist for $n\ge 1$, since the integrand behaves like $|x|^{n-2}$ as $|x|\to \infty$. |
Pull back of a vector representing a 2-form in $\mathbb R^3$ | Divergence of a vector field $Y_p$ shows by how much the volume form $\mu$ (or infinitesimal volume elements) changes along the flow:
$$
(\operatorname{div} Y)\mu=\mathcal{L}_Y(\mu)=d(Y\lrcorner\mu),
$$
where I also used the Cartan's magic formula. The interior product is the $(n-1)$-form $$[Y\lrcorner\mu](Y_1,Y_2,\ldots,Y_{n-1})=\mu(Y, Y_1,Y_2,\ldots,Y_{n-1}).
$$
The two expressions for the divergence form will help us pull it back along a given map, or coordinate change, $\phi$. We now see the vector field $Y$ as a result of pushforward by the map derivative $Y_{\phi(p)}=\phi_*(X_p)$. The pullback of the volume form shows the expected determinant of the Jacobian matrix, $f=\det(\phi_*)$,
$$
\phi^*\mu=f\nu
$$
-- this is another factor that may cause the volume form $\nu$ to change.
The pull back of the interior product form is
$$
\phi^*(Y\lrcorner\mu)=X\lrcorner \phi^*\mu=X\lrcorner (f\nu).
$$
The pullback commutes with exterior derivative, and so
$$
\phi^*d(Y\lrcorner\mu)=d(\phi^*(Y\lrcorner\mu))=d(X\lrcorner f\nu)=\mathcal{L}_X(f\nu),
$$
where we've again used the Cartan's formula.
We will now use properties of the Lie derivative and the antiderivation property of the interior product to show that
$$
\mathcal{L}_X(f\nu)=(\mathcal{L}_X\;f)\nu +f\mathcal{L}_X\nu=\\
(X\lrcorner df)\nu+\left(\mathcal{L}_{fX}(\nu)-df\wedge(X\lrcorner\nu) \right)=\\
\mathcal{L}_{fX}(\nu)+\left((X\lrcorner df)\nu -df\wedge(X\lrcorner\nu) \right)=\\
\mathcal{L}_{fX}(\nu)+X\lrcorner(df\wedge\nu)=\\
\mathcal{L}_{fX}(\nu)+0=\operatorname{div}(fX)\nu.
$$
Therefore,
$$\phi^*((\operatorname{div} Y)\mu)=\operatorname{div}(fX)\nu,$$
which suggests that the original volume form $\nu$ may change due to both divergence of the field along which we observe it and the mapping with the determinant scalar field $f$. |
How to determine the parity of a function about a specific point. | You cannot prove a definition correct. You can only explain why you choose to define something in a certain way. Moreover, what do you want odd and even about a point to mean? If you want "even about $a$" to mean "remains the same when reflected about $x = a$", and "odd about $a$" to mean "remains the same when rotated 180 degrees about $(a,0)$, then your definitions are both correct.
If however you have already defined "odd" and "even" for functions without using the algebraic definition, then you'll have to state precisely how you did so; otherwise we can't prove what we don't know. Note that "reflection" and "rotation" are most easily defined algebraically, so it's going to amount to almost a tautology if you do it that way. |
About the concept of a valuation ring | (1) But $K$ is the field of fractions of $A$ in Definition 2. To see this, note that valuation takes value in a totally ordered group, so for every $x\in K^\times$ either $v(x)\geq 0$ or $v(x^{-1})\geq 0$, i.e., $x$ or $x^{-1}$ is in $A$. So $\operatorname{Frac}A\supseteq K$. But $A\subseteq K$ by definition.
(2) Yes. There is a totally ordered group $\Gamma$ and a map $v\colon K^\times\to\Gamma$ such that $A=v^{-1}(\Gamma_{\geq 0})\cup\{0\}$. Just take $\Gamma:=K^\times/A^\times$ (where $A^\times$ is the units of $A$). The ordering is by looking at whether $xy^{-1}\in A$. |
All Subgroups of $Z_{200}$ | Denote by $d(n)$ the number of positive divisors of $n$. Then $\mathbb{Z}_n$ has $d(n)$ subgroups, because a cyclic group of order $n$ has a unique subgroup for each divisor $d\mid n$, see here:
Subgroups of a cyclic group and their order.
Since $d(200)=12$, we are done. |
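For completeness, the divisor count (my addition, plain Python):

```python
divisors = [d for d in range(1, 201) if 200 % d == 0]
print(divisors)        # [1, 2, 4, 5, 8, 10, 20, 25, 40, 50, 100, 200]
print(len(divisors))   # 12, one subgroup per divisor
```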
Subsets of $\Omega$ which are not events | The usual example of a non-Borel subset is the following : consider the equivalence relation $\sim$ on $[0,1]$ defined by $x \sim y \Longleftrightarrow x-y \in \mathbb{Q}$.
Then consider a subset $A$ of $[0,1]$ consisting of exactly one element from each equivalence class for the relation $\sim$. To show that $A$ is not Borel, do the following :
Consider $(r_n)_{n \in \mathbb{N}}$ an enumeration of the elements of $\mathbb{Q} \cap (-1,1)$. For each $n \in \mathbb{N}$, let $A_n = A + r_n$.
Show that for $i \neq j$, $A_i \cap A_j = \emptyset$, and that $$[0,1) \subset \bigcup_{n \in \mathbb{N}} A_n \subset (-1,2] \quad \quad (*)$$
Assuming that $A$ is Borel, show that $A_n$ is also Borel and deduce from $(*)$ that
$$1 \leq \sum_{n \in \mathbb{N}} \lambda(A_n) \leq 3$$
Finally see that $\lambda(A)=\lambda(A_n)$ for each $n$, to get a contradiction from the last inequalities. |
Optimal Area within two equal intersecting circles and its expectation | For a given radius, the best circle is centered at the midpoint of $AB$. You can justify this by noting that among the points $o$ at a given distance from the segment $AB$, the one with the smallest sum $Ao+oB$ lies directly above or below the midpoint of $AB$. The sum of distances increases monotonically as you move away from the midpoint, so having the set of points symmetric around the midpoint is a good thing. It seems likely that for a given point, the circle with smallest expectation is one of zero radius, just the point. The surprising thing is that the set of points with minimum sum of distances is not just the midpoint of $AB$ but the whole segment. |
$\text{tr}(X)=\text{tr}(A^{-1}B)$ for $AX+XA=2B$ | Since $A$ is positive definite, it has an inverse. Hence
$$ AX+XA=2B \\ \Rightarrow
A^{-1}XA +X=2A^{-1}B.$$
Taking $Tr(\cdot)$ on both sides leads to
$$Tr(A^{-1}XA) + Tr(X) = 2Tr(A^{-1}B)\\
\Rightarrow Tr(AA^{-1}X) + Tr(X) = 2Tr(A^{-1}B)\\
\Rightarrow Tr(X) + Tr(X) = 2Tr(A^{-1}B) \\
\Rightarrow Tr(X) = Tr(A^{-1}B) $$
The second line above is due to the cyclic property of the $Tr(\cdot)$ operation. |
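A numerical check (my addition, assuming NumPy/SciPy; the matrices are random, with $A$ made positive definite):

```python
# Solve AX + XA = 2B and compare tr(X) with tr(A^{-1} B).
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)          # positive definite
B = rng.standard_normal((4, 4))

X = solve_sylvester(A, A, 2 * B)     # solves A X + X A = 2 B
print(np.isclose(np.trace(X), np.trace(np.linalg.solve(A, B))))   # True
```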
What is FOIL and how is it done? | FOIL is a mnemonic for how to multiply things of the form $(a+b)*(c+d)$
$$(a+b)*(c+d)=ac+ad+bc+bd$$
That is, you add up the **F**irst terms (ac), **O**utside terms (ad), **I**nside terms (bc), and **L**ast terms (bd).
It's just another way to simplify this type of multiplication. |
Compute trigonometric limit without use of de L'Hospital's rule | We can solve this in two steps: first multiplying and dividing by ${ x }^{ 2 }$, and then by $1+\cos { \left( x \right) } $. Using the fact that $\lim _{ x\rightarrow 0 }{ \frac { \sin (x^{ 2 }) }{ x^{ 2 } } =1 } $, we will get
$$\lim _{ x\to 0 } \frac { (x+c)\sin (x^{ 2 }) }{ 1-\cos (x) } =\lim _{ x\to 0 } \frac { (x+c){ x }^{ 2 } }{ \left( 1-\cos (x) \right) } \frac { \sin (x^{ 2 }) }{ { x }^{ 2 } } =\\ =\lim _{ x\to 0 } \frac { (x+c){ x }^{ 2 } }{ \left( 1-\cos (x) \right) } \frac { \left( 1+\cos (x) \right) }{ \left( 1+\cos (x) \right) } =\lim _{ x\to 0 } \frac { (x+c){ x }^{ 2 } }{ 1-\cos ^{ 2 }{ \left( x \right) } } \left( 1+\cos { \left( x \right) } \right) \\ =\lim _{ x\to 0 } \frac { (x+c){ x }^{ 2 } }{ \sin ^{ 2 }{ \left( x \right) } } \left( 1+\cos { \left( x \right) } \right) =\lim _{ x\to 0 } \left( x+c \right) \left( 1+\cos { \left( x \right) } \right) =2c$$ |
What are the non-trivial normal subgroups of $O(3)$? | The orthogonal group $O(3)$ has a nontrivial subgroup consisting of the identity matrix and its negative. These lie in the center of $O(3)$, so the subgroup is normal.
To show that $SO(3)$ is normal, you might want to use the fact that $SO(3)$ is precisely the set of matrices in $O(3)$ with determinant $1$. This allows you to characterize cosets of $SO(3)$ in terms of determinants. After working this out, demonstrating that left and right cosets coincide will reduce to properties of the determinant.
Alternately, you could use this determinant fact to figure out the index $[O(3):SO(3)]$, and use this to conclude that $SO(3)$ is normal in $O(3)$.
Edit: As @QiaochuYuan points out, the most economical method would be to consider the determinant homomorphism, whose kernel is $SO(3)$. |
The value of $\sum_{1\leq l< m <n}^{} \frac{1}{5^l3^m2^n}$ | \begin{align*}\sum_{1\leq l< m <n}^{} \frac{1}{5^l3^m2^n}&=\sum_{l=1}^\infty \frac{1}{5^l}\sum_{m=l+1}^\infty \frac{1}{3^m}\sum_{n=m+1}^\infty\frac{1}{2^n}\\&=\sum_{l=1}^\infty \frac{1}{5^l}\sum_{m=l+1}^\infty \frac{1}{3^m}\cdot\frac{1}{2^m}\\&=\sum_{l=1}^\infty \frac{1}{5^l}\cdot\frac{1}{6^l}\cdot\frac15\\&=\left(\frac{1}{1-\frac1{30}}-1\right)\cdot\frac15\\&=\frac{1}{29\cdot5}\end{align*}
or rather $$\frac{1}{(5\cdot3\cdot2-1)(3\cdot2-1)(2-1)}$$ |
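If you want to sanity-check the value numerically, a rough truncation of the triple sum suffices (plain Python, a sketch):

```python
# truncated triple sum; the tail beyond index 40 is negligible here
total = sum(1 / (5**l * 3**m * 2**n)
            for l in range(1, 40)
            for m in range(l + 1, 40)
            for n in range(m + 1, 40))
print(total, 1 / 145)   # both ~ 0.0068965...
```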
Why does $2$ ramify in $\mathbb{Z}[i]$ but not 3? | "$(\Bbb Z/2\Bbb Z)[i]$" is $(\Bbb Z/2\Bbb Z)[x]/(x^2+1) = (\Bbb Z/2\Bbb Z)[x]/((x+1)^2)$ which is not a field because e.g. $(x+1)(x+1) = 0$ in the quotient but $x+1 \ne 0$ in the quotient. |
Finding the derivative of sinus and cosinus. Trigonometric identities | For any $a,b$, we have the well known "addition theorems"
$$ \sin(a+b) = \sin a \cos b + \sin b \cos a$$
and $$ \sin(a-b) = \sin a \cos b - \sin b \cos a $$
Subtracting these two equations, we get
$$ \sin(a+b) - \sin(a-b) = 2\sin b \cos a $$
For the cosine, we have
$$ \cos(a+b) = \cos a \cos b - \sin a \sin b $$
and $$ \cos(a-b) = \cos a \cos b + \sin a \sin b $$
Subtracting again,
$$ \cos(a+b) - \cos(a-b) = -2\sin a\sin b$$
Now let $b = \frac h2$, $a = x+\frac h2$. |
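Carrying this substitution through (a short sketch, using the standard limit $\sin t/t\to 1$) gives the derivative of the sine:
$$\frac{\sin(x+h)-\sin(x)}{h}=\frac{2\sin\frac h2\cos\left(x+\frac h2\right)}{h}=\frac{\sin(h/2)}{h/2}\,\cos\!\left(x+\frac h2\right)\ \xrightarrow{\,h\to 0\,}\ \cos(x)$$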
positive linear bijection whose inverse isn't positive | Let $T:M_n(\mathbb{C})\rightarrow M_n(\mathbb{C})$ be $T(X)=X+\frac{1}{2}X^t$.
This map is clearly a linear positive map. Moreover, $T(X)=\dfrac{3}{2}\dfrac{(X+X^t)}{2}+\dfrac{1}{2}\dfrac{(X-X^t)}{2}$.
Note that, $T(S)=\frac{3}{2}S$ and $T(A)=\frac{1}{2}A$, for every symmetric matrix $S$ and for every anti-symmetric matrix $A$.
Thus, $\frac{3}{2}$ and $\frac{1}{2}$ are the only eigenvalues of $T$.
So $T$ is a bijection and its inverse is $T^{-1}(X)=\dfrac{2}{3}\dfrac{(X+X^t)}{2}+2\dfrac{(X-X^t)}{2}=\frac{4}{3}X-\frac{2}{3}X^t$.
Let $v=(1,i)^t$. So $T^{-1}(v\overline{v}^t)=\frac{4}{3}v\overline{v}^t-\frac{2}{3} \overline{v}v^t$.
Note that $(\frac{4}{3}v\overline{v}^t-\frac{2}{3} \overline{v}v^t)\overline{v}=-\frac{4}{3}\overline{v}$.
So $T^{-1}$ is not a positive map. |
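A small numerical check (a sketch assuming NumPy; the eigenvalues of the Hermitian matrix $T^{-1}(v\overline{v}^t)$ exhibit the negative eigenvalue $-\frac43$):

```python
import numpy as np

def T_inv(X):
    # inverse of T(X) = X + X^T/2, using the plain (not conjugate) transpose
    return (4/3) * X - (2/3) * X.T

v = np.array([1, 1j])
P = np.outer(v, v.conj())      # rank-one positive semidefinite matrix
Y = T_inv(P)                   # Hermitian, since P and P.T both are
print(np.linalg.eigvalsh(Y))   # ~ [-1.3333, 2.6667]: not positive
```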
Let $f : A \to B$ be a function. If $f$ is surjective, injective, or bijective what can we say about $|A|$ in comparison to $|B|$? | Note that an arbitrary relation in $A\times B$ need not be a function. What we do know here is that every element of the domain $A$ must be mapped once and only once, so for each element of $A$ there is exactly one element of $A\times B$ in $f$. As such, $|f|=|A|$.
For part b, if the function is injective, we know that every member of $A$ maps to a unique member of $B$. That means that $|A|\leq|B|$. Likewise, if the function is surjective, then every element of $B$ is mapped to, but multiple elements of $A$ can map to the same element of $B$. As such, $|A|\geq|B|$. Both of these conditions together ensure that $|A|=|B|$. |
Hyperbolic contour integration | Note that
$$ \int_{-\infty}^\infty\frac{x\cos(ux)}{\sinh(x/2)}dx=4\int_{-\infty}^\infty\frac{x\cos(2ux)}{\sinh(x)}dx,$$ by the substitution $x\mapsto 2x$.
Let $f(z)=\frac{z\cos(2uz)}{\sinh(z)}$.
Choose the rectangular contour $\gamma$: from $z=-R$ to $z=R$ along the real axis, from $z=R$ to $z=R+\pi i$ along the line $x=R$, then along the line $y=\pi$ from $z=R+\pi i$ to $z=\varepsilon+\pi i$ and from $z=-\varepsilon+\pi i$ to $z=-R+\pi i$, with a small semicircle $C_\varepsilon$ of radius $\varepsilon$ below the pole $z=\pi i$ joining the two pieces, and finally from $z=-R+\pi i$ to $z=-R$ along the line $x=-R$. Then $f(z)$ is analytic inside $\gamma$. Note that the contribution from the short sides of the rectangle vanishes as $R\to\infty$, since $\sinh$ grows exponentially as $x\to\pm\infty$. Then one has
$$ \int_{-\infty}^\infty f(x)dx - \int_{\varepsilon}^\infty f(x+i\pi)dx -\int_{C_\varepsilon} f(x)dz - \int_{-\infty}^{-\varepsilon} f(x+i\pi)dx =0.$$
Note that $z=\pi i+\varepsilon e^{it}$ on $C_\varepsilon$, $t\in[\pi,2\pi]$. So
$$ \int_\pi^{2\pi} \frac{(\varepsilon e^{it}+i\pi)\cos[2u(\varepsilon e^{it}+i\pi)]}{\sinh(\varepsilon e^{it}+i\pi)}i\varepsilon e^{it}dt \to \int_{\pi}^{2\pi} \pi\cosh(2\pi u) dt = \pi^2\cosh(2u\pi)$$
as $\varepsilon\to0$. Here $\sinh(\pi i+w) \approx -w$ near $w=0$ is used.
Thus we have $$ \pi^2\cosh(2u\pi) = \int_{-\infty}^\infty f(x)dx - \lim_{\epsilon\to 0}\left(\int_{-\infty}^{-\epsilon} f(x+i\pi)dx+\int_{\epsilon}^\infty f(x+i\pi)dx\right). $$
Using $$\sinh(x+i\pi) = -\sinh(x),\cos(xi)=\cosh(x),\sin(xi)=i\sinh(x)$$ and the principal value integral, it is easy to see
\begin{eqnarray*}
&&\int_{-\infty}^{-\epsilon} f(x+i\pi)dx+\int_{\epsilon}^\infty f(x+i\pi)dx\\
&=&-\int_{-\infty}^{-\epsilon}\frac{(x+\pi i)\cos[2u(x+\pi i)]}{\sinh(x)}dx-\int_{\epsilon}^\infty \frac{(x+\pi i)\cos[2u(x+\pi i)]}{\sinh(x)}dx\\
&=&-\bigg(\int_{-\infty}^{-\epsilon}+\int_{\epsilon}^\infty\bigg)\frac{(x+\pi i)[\cos(2ux)\cosh(2\pi u)-i\sin(2ux)\sinh(2\pi u)]}{\sinh(x)}dx\\
&=&-\bigg(\int_{-\infty}^{-\epsilon}+\int_{\epsilon}^\infty\bigg)\frac{x\cos(2ux)\cosh(2\pi u)+\pi\sin(2ux)\sinh(2\pi u)}{\sinh(x)}dx\\
&&-i\bigg(\int_{-\infty}^{-\epsilon}+\int_{\epsilon}^\infty\bigg)\frac{\pi \cos(2ux)\cosh(2\pi u)-x\sin(2ux)\sinh(2\pi u)}{\sinh(x)}dx\\
&=&-\bigg(\int_{-\infty}^{-\epsilon}+\int_{\epsilon}^\infty\bigg)\frac{x\cos(2ux)\cosh(2\pi u)+\pi\sin(2ux)\sinh(2\pi u)}{\sinh(x)}dx\\
&\to&-\cosh(2\pi u)\int_{-\infty}^\infty f(x)dx-\pi^2\sinh(2\pi u)\tanh(\pi u)
\end{eqnarray*}
as $\varepsilon\to0$. The imaginary part disappears because its integrand is odd; both terms in the real part are even, and for the second one we used $\int_{-\infty}^\infty\frac{\sin(2ux)}{\sinh(x)}dx=\pi\tanh(\pi u)$.
Since $\sinh(2\pi u)\tanh(\pi u)=\cosh(2\pi u)-1$, substituting back into the identity for $\pi^2\cosh(2\pi u)$ above and solving gives
$$ \int_{-\infty}^\infty f(x)dx = \frac{\pi^2}{1+\cosh(2\pi u)}.$$
So
$$
\int_{-\infty}^\infty\frac{x\cos(ux)}{\sinh(x/2)}dx=\frac{4\pi^2}{1+\cosh(2\pi u)}=\frac{2\pi^2}{\cosh^2(\pi u)}.
$$ |
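A quick numerical check of this closed form (a sketch assuming SciPy; the integrand is even, with a removable singularity at $0$):

```python
import numpy as np
from scipy.integrate import quad

def check(u):
    f = lambda x: x * np.cos(u * x) / np.sinh(x / 2)
    val, _ = quad(f, 1e-12, 60, limit=200)    # integrate over (0, 60)
    return 2 * val, 2 * np.pi**2 / np.cosh(np.pi * u)**2

print(check(0.0))   # both ~ 19.7392 = 2*pi^2
print(check(0.5))
```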
Strong inductive proof for recursive sequence given Explicit formula | You want to show:
$$3^{n}+4(-1)^{n} = 2[3^{n-1}+4(-1)^{n-1}] + 3[3^{n-2}+4(-1)^{n-2}]$$
Simplifying the right-hand side, we have:
$$2[3^{n-1}+4(-1)^{n-1}] + 3[3^{n-2}+4(-1)^{n-2}] = 2[3^{n-1}] + 8[(-1)^{n-1}] + [3^{n-1}] + 12[(-1)^{n-2}] =$$
$$=3[3^{n-1}] - 8[(-1)^{n-2}] + 12[(-1)^{n-2}] = 3^{n} + 4(-1)^{n-2} = 3^{n} + 4(-1)^{n}$$ |
Show that every extreme point in Q is either an extreme point of P or a convex combination of two adjacent extreme points of P | Suppose $x$ is an extreme point in $Q$. If this point coincides with an extreme point in $P$, there is nothing to show.
Suppose it is not an extreme point in $P$.
Since it's an extreme point in $Q$, it is also a basic feasible solution, which means that $n$ of the constraints are active in $x$.
One of these constraints is $a'x = b$; the other active constraints are $n-1$ of those associated with the polyhedron $P$.
Two adjacent extreme points of $P$ are, by definition, two extreme points that share $n-1$ active constraints. Therefore, $x$ lies on the segment between those two extreme points. |
Equation of the line perpendicular to the asymptote of the graph | $$f(x)=\frac{-3x+1}{x^2-2x+1}=\frac{-3x+1}{(x-1)^2}$$
So the function has a vertical asymptote at $x=1$. A line perpendicular to this asymptote is any horizontal line of the form $y=k$ with $k \in \mathbb{R}$, because the asymptote is vertical and so has no associated slope. |
Numerically stable method for angle between 3D vectors | Short answer: Method 2 is better for small angles. Use this slight rearrangement:
$$
\theta = 2\operatorname{atan2}\bigl(\bigl\|\,\|v\|u-\|u\|v\,\bigr\|,\ \bigl\|\,\|v\|u+\|u\|v\,\bigr\|\bigr)
$$
This formula comes from W. Kahan's advice in his paper "How Futile are Mindless Assessments of Roundoff in Floating-Point Computation?" (https://www.cs.berkeley.edu/~wkahan/Mindless.pdf), section 12 "Mangled Angles."
Before I found that paper, I also did my own analysis, comparing the double-precision results with results computed in bc with 50 digits of precision. Method 2 is commonly within 1 ulp of correct, and almost always within 50 ulps, whereas Method 1 was much, much less accurate.
This might seem surprising, since Method 1 is mathematically insensitive to the magnitude of u and v, whereas they have to be normalized in Method 2. And indeed, the accuracy of the normalization is the limiting factor for Method 2 - virtually all the error comes from the fact that even after normalization, the vectors aren't exactly length 1.
However, for small angles you get catastrophic cancellation in the cross product for method 1. Specifically, all the products like $u_y v_z - u_z v_y$ end up close to 0, and I believe the cross-multiplication before the subtraction loses accuracy, compared to the direct subtraction in Method 2. |
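For reference, here is a direct transcription of the formula (a sketch assuming NumPy):

```python
import numpy as np

def angle(u, v):
    """Kahan's angle formula: stable for tiny and near-pi angles."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    return 2.0 * np.arctan2(np.linalg.norm(nv * u - nu * v),
                            np.linalg.norm(nv * u + nu * v))

print(angle([1.0, 0.0, 0.0], [1.0, 1e-10, 0.0]))   # ~1e-10, not 0.0
```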
cyclotomic field automorphism | Let $\zeta$ be a primitive $8$-th root of unity in an extension of $\Bbb F_3$, and let $K=\Bbb F_3(\zeta)$ (so $K=\Bbb F_9$). The Frobenius
automorphism $\phi$ maps $x\mapsto x^3$ in $K$. The primitive $8$-th roots of unity in $K$ are $\zeta$, $\zeta^3$, $\zeta^5$ and $\zeta^7$,
and $\phi(\zeta)=\zeta^3$, $\phi(\zeta^3)=\zeta$,
$\phi(\zeta^5)=\zeta^7$ and $\phi(\zeta^7)=\zeta^5$. They fall into two
orbits under the action of $\phi$. Consider
$$f_1(X)=(X-\zeta)(X-\zeta^3)=X^2-(\zeta+\phi(\zeta))X+\zeta\phi(\zeta).$$
Its coefficients are stable under $\phi$ so lie in $\Bbb F_3$. Likewise
with
$$f_2(X)=(X-\zeta^5)(X-\zeta^7).$$
Then the $8$-th cyclotomic polynomial is
$$\Phi_8(X)=(X-\zeta)(X-\zeta^3)(X-\zeta^5)(X-\zeta^7)=f_1(X)f_2(X)$$
and is a product of two irreducible quadratics over $\Bbb F_3$.
For there to be an automorphism of $K$ with $\zeta\mapsto\zeta^m$,
it is necessary that $\zeta^m=\zeta^{3^l}$ for some $l$, but this
can't happen for $m=5$ or $7$. There is no "map" $\zeta\to\zeta^5$
in $K$.
In general, consider $\Phi_n(X)$ over $\Bbb F_p$ with $p\nmid n$.
The minimal polynomial of $\zeta$, a primitive $n$-th root of unity,
will have degree $k$, where $k$ is the multiplicative order of $p$ in $(\Bbb Z/n\Bbb Z)^*$. |
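One can confirm the factorization with a computer algebra system (a sketch assuming SymPy; recall $\Phi_8(X)=X^4+1$):

```python
from sympy import symbols, factor

x = symbols('x')
# Phi_8 = x^4 + 1 splits into two irreducible quadratics over F_3,
# matching f_1 * f_2 above
print(factor(x**4 + 1, modulus=3))   # (x**2 + x + 2)*(x**2 + 2*x + 2)
```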
Interesting questions (with answers) about concepts in topology for an amateur audience | I always liked this one: hang a picture on two nails in such a way that if you remove either nail, the picture falls down.
While a solution to this can be found with a bit of luck and without knowledge of fundamental groups, the more complicated versions (hanging it on $n$ nails) are probably impossible without some advanced mathematical ideas. |
What are mathematically definable/ useful properties of ternary relations? ( How to define for example symmetry or transitivity?) | You could try defining things like so:
$1)$ Reflexivity: $(x,y,x)\in R\;$ for all $\;x,y\in A$
$2)$ Symmetry: $(x,y,z)\Leftrightarrow(x,z,y)\Leftrightarrow(y,x,z)\Leftrightarrow(y,z,x)\Leftrightarrow(z,x,y)\Leftrightarrow(z,y,x)$ (all permutations are equivalent, essentially) for all $\;x,y,z\in A$
$3)$ Transitivity: $(x,y,z)\wedge(y,z,w)\wedge(z,w,a)\implies (x,z,a)\;$ for all $\; x,y,z,w,a\in A$
This equivalence relation produces a natural partition over $A^2$, although its usefulness, of course, depends on where the "ternary equivalence relation" is being used.
Define the equivalence class of any element, $i\in A$, under a ternary equivalence relation as $[i]=\{(j,k)\;|\;(i,j,k)\}$
(here, the first set of parentheses denotes an ordered pair while the second represents the ternary relation; in the remainder of the answer, take parentheses around two elements to mean an ordered pair and parentheses around three elements to mean that the three elements satisfy the ternary equivalence relation).
By the reflexive property, we will have $(i,i)\in [i]$ for all $i$ so that each equivalence class is nonempty.
Also, by reflexivity, we have $(k,j,k)$ for any pair of $j,k\in A$ so each element in $A^2$ belongs to some equivalence class.
Suppose for some $x,y\in A$ $[x]\cap[y]\neq \varnothing$.
There must then exist some $j,k\in A$ s.t. $(x,j,k)$ and $(y,j,k)$.
Now, let $(b,c)$ be any element in $[y]$.
Step 1: By symmetry we have $(j,k,y)$. By reflexivity we have $(y,k,y)$ and, then, symmetry we get $(k,y,y)$. Overall, we have $(x,j,k)\wedge (j,k,y)\wedge (k,y,y)$ which, by transitivity, gives us $(x,k,y)$. This was the first application of transitivity.
Step 2: We have shown $(x,k,y)$. $(k,y,y)$ still holds (by reflexivity+symmetry). By analogous reasoning we have $(y,y,b)$ (recall $(b,c)$ was chosen to be an arbitrary element in $[y]$). The net result is $(x,k,y)\wedge (k,y,y)\wedge (y,y,b)\implies(x,y,b)$.
Step 3: $(x,y,b)$ has been shown and $(y,b,c)$ holds by assumption. $(b,c,c)$ is obvious. A final application of transitivity yields $(x,b,c)$, i.e. $(b,c)\in [x]$.
Since $(b,c)$ was an arbitrary member of $[y]$, we have $[y]\subset [x]$. We can then reverse the roles of $x$ and $y$ to similarly get $[x]\subset [y]$ so that $[x]=[y]$ whenever $[x]$ and $[y]$ are not disjoint, meaning that the ternary equivalence relation partitions $A^2$.
One thing worth noting is that the condition for symmetry may be stronger than required. It was assumed that any permutation implied all the others simply to simplify the argument. If we were more careful, we could note that, treating the permutations of $(x,y,z)$ as members of the permutation group $S_3$, we require only three cycles of $S_3$ to prove that the equivalence relation produces a partition: $(132), (12)$ OR $(213)$, and $(32)$ OR $(231)$.
So one could weaken the symmetry condition, if need be, while still retaining the partition property. |
Shape of universe from the metric's scalar factor | $\newcommand{\dd}{{\rm d}}$
$\newcommand{\vect}[1]{{\bf #1}}$
There are many things going on there
Cosmological principle
In general a metric of the form
$$
\dd s^2 = c^2\dd t^2 - a^2(t) \dd \vect{x}^2 \tag{1}
$$
satisfies the cosmological principle: the universe is homogeneous and isotropic. In fact, imagine that $t$ is fixed; the effect of $a(t)$ is just a "stretching" of the coordinates, so if the universe satisfies the cosmological principle at a given time, it will satisfy it at any time.
Now, if the universe is to be homogeneous, its curvature $\kappa$ must be the same everywhere. There are only a handful of geometries that satisfy this constraint: one is the plane ($\kappa = 0$), another is the sphere ($\kappa = +1$), and finally a hyperbolic universe ($\kappa = -1$). Note that I deliberately used $\kappa = \pm 1$, because the metric can be rescaled through the factor $a$ in Eq. (1), so the actual value of the curvature is unimportant.
The most general metric for a universe that follows the cosmological principle is
$$
\dd s^2 = c^2 \dd t^2 -a^2 \left(\frac{1}{1 - \kappa r^2} \dd r^2 + r^2\dd \Omega^2\right) \tag{2}
$$
Einstein's field equations
The equations that describe the coupled evolution of the spacetime geometry and its content are
$$
R_{\mu\nu} -\frac{1}{2}Rg_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}T_{\mu\nu} \tag{3}
$$
where $R_{\mu\nu}$ is the Ricci curvature tensor, $R$ the scalar curvature $R = R^\mu_\mu$, $g_{\mu\nu}$ the metric tensor, which can be directly extracted from Eq. (2) via $\dd s^2 = g_{\mu\nu}\dd x^\mu \dd x^\nu$, $\Lambda$ the cosmological constant, and $T_{\mu\nu}$ the stress-energy tensor, which contains information about the physical content of the universe (for example $T^{00} = \rho c^2$ for a mass density $\rho$)
This equation can then be explicitly written in terms of the metric $g_{\mu\nu}$, which results in a relation between the geometry $\kappa$ and the energy content $\rho c^2$ of the universe. The result is what we know as the Friedmann equations
$$
\frac{\dot{a}^2 + \kappa c^2}{a^2} = \frac{8 \pi G \rho + \Lambda c^2}{3} \tag{4}
$$
which is the result of calculating the 00 component of Eq. (3). If you take the trace instead, you get
$$
\frac{\ddot{a}}{a} = -\frac{4 \pi G}{3}\left(\rho+\frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3} \tag{5}
$$
Set $\Lambda = 0$ in Eq. (4) and you can read off the dynamics of $a$ in terms of the geometry $\kappa$ and the density $\rho$.
The fate of the universe
Let us consider for instance a universe for which $\Lambda =0$ and define
$$
\rho_c = \frac{3H^2}{8\pi G} ~~~\mbox{and}~~~ H = \frac{\dot{a}}{a}
$$
You can see that Eq. (4) becomes
$$
1 = \frac{\rho}{\rho_c} - \frac{\kappa c^2}{a^2H^2} \tag{6}
$$
which can be solved for $\kappa$
$$
\kappa = \frac{a^2 H^2}{c^2}\left(\frac{\rho}{\rho_c} - 1\right)
$$
This establishes a clear connection between density and geometry
If $\rho > \rho_c$, then $\kappa > 0$ and the universe has spherical geometry (closed).
If $\rho < \rho_c$, then $\kappa < 0$ and the universe has hyperbolic geometry (open).
If $\rho = \rho_c$, then $\kappa = 0$, which corresponds to flat Euclidean space. |
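To put a number on the critical density $\rho_c$ defined above (a sketch in plain Python; the value $H_0\approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}$ is an assumed present-day Hubble constant, not from the text):

```python
import math

G  = 6.674e-11              # m^3 kg^-1 s^-2
H0 = 70 * 1e3 / 3.086e22    # 70 km/s/Mpc converted to 1/s

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(rho_c)                # ~9.2e-27 kg/m^3, a few protons per cubic meter
```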
Does there exist a real differentiable function f with the following properties simultaneously? | NO. Apply the following, with $f'(n)=B(n).$
Theorem. For a real sequence $(B(n))_n$ let $S(n)=\sum_{j=1}^n B(j)$ and $T(n)=B(n)+S(n).$ If $S(n)$ does not converge then $T(n)$ does not converge. Equivalently, if $T(n)$ converges then $S(n)$ converges.
Proof: Suppose $L=\lim_{n\to \infty}T(n).$ By contradiction suppose that $(S(n))_n$ does not converge. Then (i) there exists $U>L$ such that $S(n)>U$ for infinitely many $n$, or (ii) there exists $U<L$ such that $S(n)<U$ for infinitely many $n.$ Case (ii) can be converted to Case (i) by replacing $B(n)$ with $-B(n)$ so it suffices to consider Case (i).
Take $d>0$ with $3d<(U-L)$ and take $n_1$ such that $n\geq n_1\implies |T(n)-L|<d.$ Take $n_2\geq n_1$ such that $S(n_2)>U.$ We need the following Lemma:
Lemma. There exists $n>n_2$ such that $S(n)<U.$
Proof of Lemma: By contradiction, suppose $S(n)>U$ for all $n\geq n_2.$ Then for every $j\geq 0$ we have $$L+d>T(n_2+j+1)=S(n_2+j)+2B(n_2+j+1)>U+2B(n_2+j+1)$$ implying $B(n_2+j+1)<-(U-L-d)/2<-d.$
But then for positive integer $k>(S(n_2)-U)/d$ we have $S(n_2+k)=S(n_2)+\sum_{j=0}^{k-1}B(n_2+j+1)<S(n_2)-kd<U,$ contrary to $S(n_2+k)>U.$
So the lemma is proved.
By the Lemma let $n_3>n_2$ such that $S(n_3)<U.$ Now $S(n)>U$ for infinitely many $n$ so let $n_4$ be the $least$ $n>n_3$ such that $S(n)>U.$ Then $S(n_4-1)\leq U<S(n_4)=S(n_4-1)+B(n_4)$ so $$B(n_4)>0.$$ And we have $L+d>T(n_4)$ because $n_4>n_1.$ Therefore $$L+d>T(n_4)=S(n_4)+B(n_4)>S(n_4)>U>L+d$$ a contradiction. |
Cyclic subspace and one dimensional range | We can write $Tx=f(x)w$ where $w$ is some non-zero vector and $f$ is a linear map: $V \to \mathbb R$ (assuming that your vector space is over $\mathbb R$). Also $Tw=cw$ for some scalar $c$. This gives $T^{m}x=f(x)c^{m-1} w$ by induction . Hence $T^{m}=c^{m-1}T$ for all positive integers $m$. Put $m=2018$. |
Is $f(z)=\exp (-\frac{1}{z^4})$ holomorphic? | It is neither obvious nor true. Your $f$ has an essential singularity at $z=0$. (what happens if you approach $z=0$ along the line $x=y$ or $x=-y$?)
However, if you split $f = u+iv$, it is true that $u$ and $v$ satisfy Cauchy-Riemann's equations, even at $z=0$. (But $f$ is not continuous at $z=0$, so $u$ and $v$ are certainly not differentiable at $0$.)
See also this answer
**Follow-up:** If $u$ and $v$ are differentiable (as functions $\mathbb{R}^2 \to \mathbb{R}$) and $u$ and $v$ satisfy Cauchy-Riemann's equation at a point $z$ then $f$ is complex-differentiable at $z$. Just assuming $f$ to be continuous is not enough.
However, if we assume that $f$ is continuous on an open set $U$ and that $u$ and $v$ satisfy C-R everywhere on $U$ then $f$ is in fact analytic on $U$, even if we don't assume that $u$ and $v$ are differentiable. This is known as Looman-Menchoff's theorem. |
Question on Urysohn's lemma | You are right. Note that 2. is also true. |
How to replace floor to ceil | Consider two cases.
In the first case, $A$ is divisible by $B$. Then $\dfrac AB$ is an integer and
$$\left\lfloor\frac AB\right\rfloor = \frac AB = \left\lceil\frac AB\right\rceil,$$
but $\dfrac{A - 1}B < \dfrac AB$ and
$$\left\lfloor\frac{A - 1}B\right\rfloor = \left\lfloor\frac AB\right\rfloor - 1
= \left\lceil\frac AB\right\rceil - 1.$$
In the second case, $A$ is not divisible by $B.$ Then
$$\left\lfloor\frac{A - 1}B\right\rfloor = \left\lfloor\frac AB\right\rfloor
= \left\lceil\frac AB\right\rceil - 1.$$
In either case,
$$\left\lfloor\frac{A - 1}B\right\rfloor = \left\lceil\frac AB\right\rceil - 1$$
so
$$\left\lfloor\frac{A - 1}B\right\rfloor + 1 = \left\lceil\frac AB\right\rceil.$$
Let $A = x$ and $B = 2^N,$ and you have your result.
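As a quick empirical check (a sketch in Python, whose `>>` on negative integers is an arithmetic shift that floors, matching the two's-complement behavior discussed below):

```python
import math

N = 4
for x in range(1, 10_000):
    assert ((x - 1) >> N) + 1 == math.ceil(x / 2**N)
    assert (-x) >> N == -(((x - 1) >> N) + 1)
```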
The proof above is closely related to one you have already seen, so as a bonus,
here is a proof of the original formula.
For an arbitrary positive integer $x,$
we assume that $-x$ (a negative number)
is represented by a two's-complement binary integer
in a storage location that is $L$ bits wide, where $2^{L - 1} \geq x.$
The bits in this storage location are then exactly the same as the unsigned binary integer representation of $2^L - x$ and the leftmost bit is a "one".
We further assume that the right-shift operator applied to a negative signed integer inserts "one" bits on the left as it shifts bits off to the right.
Notice that
$$ 2^L - x = (2^L - 1) - (x - 1),$$
where $2^L - 1$ is just a string of $L$ "one" bits, and the subtraction merely cancels any "one" bits in $2^L - 1$ that align with "one" bits in $x - 1.$
There are no "borrow" operations; the result in each bit is unaffected by the bits to its right.
So you can obtain $-x$ right-shifted $N$ bits (for $N\geq 0$) as follows:
On one line write the bits of $2^{L + N}-1,$ that is, a string of $L + N$ "one" bits.
On the line below, write the bits of $x - 1$, with the last bit under the last bit of the line above.
On the third line, write the difference $(2^{L + N}-1) - (x - 1).$
Shift all three lines $N$ bits to the right; that is, erase the rightmost $N$ columns of bits.
You now have one line containing the bits of $2^L - 1$ (a string of $L$ "one" bits),
the line below containing the bits of $\left\lfloor \dfrac{x - 1}{2^N} \right\rfloor$,
and the third line containing the bits of $-x$ shifted $N$ bits to the right;
but the third line is also the difference of the first two lines,
$$ (2^L - 1) - \left\lfloor \dfrac{x - 1}{2^N} \right\rfloor. $$
We write this as
$$ 2^L - \left( \left\lfloor \dfrac{x - 1}{2^N} \right\rfloor + 1 \right), $$
which is the two's-complement representation in $L$ bits of the negative number
$$ - \left( \left\lfloor \dfrac{x - 1}{2^N} \right\rfloor + 1 \right). $$
That proves the formula for positive $x.$
If $x$ is not positive, then $-x$ is non-negative and shifting $-x$ right by $N$ bits has exactly the same result as shifting an unsigned binary integer right by $N$ bits, that is, the result is
$$ \left \lfloor \frac {-x}{2^N} \right\rfloor. $$
Note that
$$ \left \lfloor \frac {-x}{2^N} \right\rfloor
= - \left \lceil \frac {x}{2^N} \right\rceil, $$
and as we have already shown,
$$ \left \lceil \frac {x}{2^N} \right\rceil =
\left \lfloor \frac {x - 1}{2^N} \right\rfloor + 1. $$
Therefore the result of shifting $-x$ right by $N$ bits is
$$ -\left(\left \lfloor \frac {x - 1}{2^N} \right\rfloor + 1\right). $$
That proves the formula for non-positive $x.$
I would just like to point out here that what all these formulas come down to is the simple fact that if $y$ is a binary integer, then the result of shifting $y$ to the right $N$ bits (for $N\geq 0$) is
$$ \left \lfloor \frac {y}{2^N} \right\rfloor, $$
which is relatively obvious if $y \geq 0$ and only slightly less obvious
(but still true) if $y < 0.$
I think this is the simplest way of representing a right-shift for either positive or negative binary integers. |
Prove that $\sqrt[n]{2}+\sqrt[n]{3}$ is irrational for every natural $n \ge 2$ | The polynomial $P(z) = z^n - 2$ is irreducible over the rationals by Eisenstein's criterion. So if $\alpha$ is a root of $P(z)$, $P(z)$ is its minimal polynomial over the rationals. Now if $(\alpha - r)^n = b$ is rational
where $r$ is a nonzero rational, then $Q(z) = (z - r)^n - b - (z^n - 2)$ is a nontrivial polynomial of degree $n-1$ over the rationals such that $Q(\alpha) = 0$, a contradiction. In particular, if $s=\sqrt[n]{2}+\sqrt[n]{3}$ were rational, then taking $\alpha=\sqrt[n]{2}$ and $r=s$ would give $(\alpha - r)^n = (-\sqrt[n]{3})^n = \pm 3$, which is rational. |
A Version of the Converse Implication of the Cayley-Hamilton Theorem | The basic conflict here is that matrix identities can only give information about the minimal polynomial, but the characteristic polynomial is neither uniquely determined by the minimal polynomial nor vice versa. Yes, the two polynomials must share the same complex roots, but the multiplicities are nearly independent of one another, save for the obvious fact that the minimal polynomial divides the characteristic polynomial.
The fact that some $p(T) = 0$ only tells you that the minimal polynomial of $T$ is a divisor of $p$, but it could be a very low-degree divisor that fails to give much information about the characteristic polynomial of $T$. For example, if $T = 0$ then any polynomial with $a_n = 0$ satisfies the condition, but obviously the characteristic polynomial can’t be simultaneously equal to all possible values of $(a_0x^n + \cdots + a_{n-1}x)^2$. |
If $q_i = q_j$ for all $i \neq j$ what's $\sum_{i=1}^n{\sum_{j=1}^{i-1}{q_i q_j}}$? (need confirmation) | Your approach is fine, also with respect to the usage of an empty sum
$\sum_{i=1}^{0}f(i)=0$.
Using the sigma notation and setting $q:=q_1$ we can write
\begin{align*}
\sum_{i=1}^n\sum_{j=1}^{i-1}q_iq_j&=\sum_{i=2}^n\sum_{j=1}^{i-1}q_iq_j\tag{1}\\
&=\sum_{i=1}^{n-1}\sum_{j=1}^{i}q_{i+1}q_j\tag{2}\\
&=q^2\sum_{i=1}^{n-1}\sum_{j=1}^{i}1\tag{3}\\
&=q^2\sum_{i=1}^{n-1}i\tag{4}\\
&=q^2\frac{(n-1)n}{2}\tag{5}\\
\end{align*}
Comment:
In (1) we start with index $i=2$, since the inner sum is empty when $i=1$.
In (2) we shift the index $i$ to start from $i=1$.
In (3) we factor out $q^2$ since $q_iq_j=q^2$ for all $i\ne j$.
In (4) we simplify the inner sum.
In (5) we simplify the outer sum by applying the corresponding summation formula. |
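A two-line numerical check of the closed form (plain Python; the values of $n$ and $q$ are arbitrary):

```python
n, q = 7, 0.3
lhs = sum(q * q for i in range(1, n + 1) for j in range(1, i))
print(lhs, q**2 * (n - 1) * n / 2)   # both 1.89
```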
How to prove Stirling's formula. Using Wallis formula | Let $$c=\lim_{n\to\infty}\frac{n!}{\sqrt{2}n^{n+1/2}e^{-n}}$$
Rewrite Wallis formula as the following:
$$\lim_{n\to\infty}\frac{2^{4n}(n!)^4}{((2n)!)^2(2n+1)}=\frac{\pi}{2}$$
Note that
$$\frac{2^{4n}(n!)^4}{((2n)!)^2(2n+1)}\sim\frac{2^{4n}(c\sqrt{2}n^{n+1/2}e^{-n})^4}{(c\sqrt{2}(2n)^{2n+1/2}e^{-2n})^2(2n+1)}$$
Take the limit on both sides of the relation; after cancellation,
$$\frac{\pi}{2}=\lim_{n\to\infty}\frac{c^2\,n^2}{n(2n+1)}=\frac{c^2}{2}$$
which gives $c=\sqrt{\pi}$.
My notation is a little bit different from yours, but the general method should work. |
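You can watch the constant emerge numerically (a sketch in Python, working with logarithms to avoid overflow; `math.lgamma(n + 1)` is $\log n!$):

```python
import math

for n in (10, 100, 1000, 10000):
    log_c = math.lgamma(n + 1) - 0.5 * math.log(2) \
            - (n + 0.5) * math.log(n) + n
    print(n, math.exp(log_c))   # approaches sqrt(pi) ~ 1.7724539
```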
Show that $||x+y||\cdot||x-y||\le||x||^2+||y||^2$ | Well, in the same way that you got your formula, notice that
$$\|x - y\|^2 = \|x\|^2 - 2 \langle x, y \rangle + \|y\|^2$$
so that
$$\|x + y\|^2 \|x - y\|^2 = (\|x\|^2 + \|y\|^2)^2 - 4 \langle x, y \rangle^2.$$
What can you conclude from this? |
Four Children Combination Problem | Adam/Honey/Caran/Emily/Jack/Laura
Barbie/George/Dave/Fiona/Ian/Laura
Each parent keeps two of their own kids (one of each sex). Is that the question?
EDIT: I will just answer it. There are $\binom{4}{2} = 6$ options to pick kids for the first family. Two of those options result in picking two kids of the same sex and the other four correspond to mixed-sex choices.
For a mixed-sex choice there are $\binom{4}{2} - 2 = 4$ choices for picking kids from the second family to join the first (the minus 2 comes from the fact that I cannot choose two same-sex kids, and there are two options corresponding to this case).
For a same-sex choice there is only one option for picking kids from the second family.
So, how many choices of family configurations are there? Just add them up: $4+4+4+4+1+1 = 18$ |
The function $f$ is defined, for $x \geq 0$ , by $f(x) = 4-3\cos (x/2)$. | Basically, we need to solve in $(a,b)$ as: $$4-3\cos(\frac{na+b}{2})=2.5$$ $$\implies \cos(\frac{na+b}{2})=\frac{1}{2} = \cos \frac{\pi}{3}$$ Note that the solutions of $\cos x = \cos y$ are given by $x=\pm y + 2\pi \alpha\,, \alpha \in \mathbb Z$; here we take the branch $x = y + 2\pi\alpha$.
Thus, we get: $$\frac{na+b}{2} = \frac{\pi}{3}+2\pi \alpha$$ $$na+b = \color{green}{\frac{2\pi}{3}}+\color{red}{4\pi}\alpha$$ Can you take it from here? |
Geometry - What's the approach when formulating a conditional statement about any given scenario? | The statement "$9x + 5 = 23$, because $x = 2$" in exercise 7 should be interpreted to mean "If $x = 2$, then $9x + 5 = 23$" since it means that the statement $9x + 5 = 23$ is true when $x = 2$.
The question about the band is, alas, ambiguous.
The answers you provided in the comments for exercises 8 and 10 are correct.
In exercise 12, you should have obtained "If two angles are complementary, then the sum of their measures is $90^\circ$."
Exercise 11 is tricky. It is certainly true that if you are registered, then you are allowed to vote. On the other hand, only those people who are registered are allowed to vote. Hence, if you are allowed to vote, then you must be registered. Therefore, I would interpret the statement "Only those people who are registered are allowed to vote" to mean "If you are allowed to vote, then you are registered." |
Can this series be expressed as a Hyper Geometric function | According to http://en.wikipedia.org/wiki/Fox%E2%80%93Wright_function,
$\sum\limits_{k=0}^\infty\dfrac{a^k}{k!}\dfrac{\Gamma\left(\dfrac{b+k}{c}\right)}{\Gamma\left(\dfrac{b}{c}\right)}=~_1\Psi_1\left[\begin{matrix}\left(\dfrac{b}{c},\dfrac{1}{c}\right)\\\left(\dfrac{b}{c},0\right)\end{matrix};a\right]$ |
Probability identities questions | Yes, the terms are interchangeable, but you have to be careful how you do so. The six valid permutations are:
$$\begin{aligned}\mathsf P(A\cap B\cap C) &=\mathsf P(A\mid B\cap C)~\mathsf P(B\mid C)~\mathsf P(C)\\&=\mathsf P(B\mid A\cap C)~\mathsf P(A\mid C)~\mathsf P(C)\\&=\mathsf P(C\mid A\cap B)~\mathsf P(A\mid B)~\mathsf P(B)\\&=\mathsf P(A\mid B\cap C)~\mathsf P(C\mid B)~\mathsf P(B)\\&=\mathsf P(B\mid A\cap C)~\mathsf P(C\mid A)~\mathsf P(A)\\&=\mathsf P(C\mid A\cap B)~\mathsf P(B\mid A)~\mathsf P(A)\end{aligned}$$
As long as neither probability equals zero, then $\lg(\mathsf P(A)\cdot\mathsf P(B))=\lg(\mathsf P(A))+\lg(\mathsf P(B))$ as normal.
So therefore, $\lg\mathsf P(A\cap B\cap C)=\lg\mathsf P(A\mid B\cap C)+\lg\mathsf P(B\mid C)+\lg\mathsf P(C)$ |
What method is required to find out "for what values of k is $4x^2+kx+\frac14$ a perfect square?" | According to this if $b^2=4ac$ then the expression is a perfect square.
So $$ k^2=4 (4) (\frac{1}{4}) $$
So $$ k=±2 $$
Edit:
I am going to show you why we match $4x^2$ with $a^2x^2$ and $\frac14$ with $b^2$.
So let's compare the two expressions:
$$ a^2x^2+2abx+b^2=4x^2+kx+\frac14 $$
Now, if we considered $4x^2$ as $b^2$, that would be wrong, because $4x^2$ carries a factor of $x^2$, which means $4x^2$ can only be matched with a term that also carries $x^2$. Thus we can say
$$ a^2x^2=4x^2 , \quad 2abx=kx, \quad b^2=\frac14$$
Now you can compare and find the answer like you did in the question |
Irreducibles - difficulty with the definition | The way you write it, it is not correct, because you could choose $x=y=p$ with $p$ irreducible and not a unit; then you would have a counterexample.
As far as I know, there is a definition of "prime" in integral domains:
$p \mbox{ is prime } :\Leftrightarrow \forall x,y \in R : p|xy \Rightarrow p|x \mbox{ or } p|y$. "Prime" and "irreducible" are equivalent in factorial rings, but I think not in every integral domain.
At the beginning of your proof you made a mistake: $p|xy$ does not mean that $p=xyr$; it means that $pr=xy$ for some $r$ |
Solution to the equation $\Delta f=\nabla f\cdot \nabla g$, where $g$ is radial. | Let $g = -\log h$ which makes $0 < h < 1$. Then by chain rule we have that
$$\Delta f - \nabla f \cdot \nabla(-\log h) = \Delta f + \frac{1}{h}\nabla f \cdot \nabla h = 0$$
$$\implies \nabla \cdot ( h \nabla f) = 0$$
From here there doesn't seem to be any general characterization of the potentials of divergence-free vector fields in two dimensions. The radial condition might be of some use if a practical characterization can be found.
A problem I encountered while trying to tackle the equation head-on with the Laplacian in polar coordinates is that the angular dependence ends up being a linear combination of sines and cosines. This may explain why your first attempt failed: the sines and cosines would have forced complex values in a first-order differential equation. |