title | upvoted_answer
---|---
If $B$ is an orthonormal basis | Hint:
Since $\{ b_1,\ldots,b_n\}$ is an orthonormal basis, you can write
$$x = \sum x_i b_i$$
$$y = \sum y_i b_i$$
for unique choices of coefficients $x_i,y_i$ with $1\leq i \leq n$. Now prove that $x_i = \langle x,b_i \rangle$ and $y_i = \langle y,b_i \rangle$ and then express $\langle x,y \rangle$ using the above linear combinations. |
a property for points in convex hull | More generally, if $T: X \to Y$ is a linear transformation of vector spaces and
$C$ is the convex hull of $A \subseteq X$, then $T(C)$ is the convex hull of
$T(A)$. In this case, you're taking the linear transformation of $\mathbb R^2$ given by $(s,t) \to x_1 s + x_2 t$. |
Prove the lines are concurrent (using vectors) | Claim: all ten lines go through the point $\frac{1}{3}(a+b+c+d+e).$
Proof: I will use complex numbers instead of vectors. Choose a coordinate system so that the circle has unit radius and the point $D$ is at $1.$ Let the complex numbers be
$a, b, c=e^{2i\gamma}, d = 1, e = e^{2i\epsilon}.$ The line through the centroid of the triangle $ABC$ and orthogonal to line $DE$ has the parametric form
$$\mbox{ line1:} \frac{1}{3}(a+b+c) +\frac{s}{3}e^{i\epsilon} , s \mbox{ real}$$ In the same way, the line through the centroid of the triangle $ABE$ orthogonal to $DC$ has the parametric form
$$\mbox{ line2:} \frac{1}{3}(a+b+e) + \frac{t}{3}e^{i\gamma}, t \mbox{ real}$$
Solving the two equations and their complex conjugates, we find that
$$ s =e^{i\epsilon} + e^{-i\epsilon} \mbox{ and } t = e^{i\gamma} + e^{-i\gamma}$$
and the common point of intersection is $\frac{1}{3}(a+b+c+d+e),$ as claimed. |
Number of integer solutions to a strict inequality | Imagine 538 placeholder spaces to put an object. First add a balance variable $x_7$ to make an equality.
$$x_1+x_2+x_3+x_4+x_5+x_6+x_7=538,\qquad x_7>0$$
The restriction on this variable is that $x_7>0$ for the equality and original strict inequality to hold true. Now distribute 1 object each to variables ${x_1...x_7}$ due to the restriction that $x_i>0$. This leaves 531 objects to distribute among 7 variables. There are 6 dividers to partition the set into 7 parts, which means there are $531+6=537$ placeholders for either a divider or an object. The answer is therefore $537\choose{6}$. |
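A quick sanity check of this stars-and-bars count (a Python sketch using only the standard library; the brute force runs a scaled-down instance, since $538$ is far too large to enumerate):

```python
from itertools import product
from math import comb

# Scaled-down instance: x_1 + ... + x_6 < 10 with each x_i >= 1.
# The same slack-variable argument predicts C(10 - 1, 6) = C(9, 6) solutions.
brute = sum(1 for xs in product(range(1, 10), repeat=6) if sum(xs) < 10)
print(brute, comb(9, 6))   # 84 84

# The count for the original problem, x_1 + ... + x_6 < 538:
print(comb(537, 6))
```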
Why determinant of a 2 by 2 matrix is the area of a parallelogram? | Spend a little time with this figure due to Solomon W. Golomb and enlightenment is not far off:
(Appeared in Mathematics Magazine, March 1985.) |
Why is this function composition undefined? | $$g(f(5))=g(2)=2$$
$$g(f(6))=g(3)=3$$
$$g(f(7))=g(3)=3$$
$$g(f(8))=g(4)=4$$
Thus $g\circ f$ is well defined from $\{5,6,7,8\} $ to $ \Bbb Z /11 \Bbb Z$. |
Bernoulli Uniform Bayes Estimator | You are calculating the posterior distribution incorrectly. I'll use $\theta$ instead of $p$ to avoid confusing the notation, so that $\theta \sim U(0,1)$ and $p(\theta) = 1$:
\begin{align*}
p(\theta | X) &\propto p(X | \theta) p(\theta)\\
&= \left(\prod_{i=1}^n p(X_i|\theta)\right) p(\theta)\\
&= p(\theta) \prod_{i=1}^n \theta^{X_i} (1- \theta)^{1-X_i}\\
&= 1 \times \theta^{n \bar{X}}(1- \theta)^{n-n\bar{X}}\\
&= \theta^{n \bar{X}}(1- \theta)^{n-n\bar{X}}
\end{align*}
which is the kernel of a beta distribution with parameters:
$$
B(\alpha = n \bar{X} + 1, \beta = n-n \bar{X} +1 )
$$
where $n \bar{X} = \sum_{i} X_i$.
Then, the Bayes estimator of $\theta$ under squared error loss is simply the posterior mean (the mean of a beta distribution with the above parameters):
\begin{align*}
\theta_{\text{bayes}} &= E(\theta|X) \\
&= \frac{\alpha}{ \alpha + \beta}\\
&= \frac{ n \bar{X} + 1}{ n \bar{X} + 1 + n-n \bar{X} +1}\\
&= \frac{n \bar{X} +1}{n+2}\\
& = \frac{\sum_i X_i +1}{n+2}\\
\end{align*}
To solve for the Bayes estimator of $\theta(1-\theta)$, apply what we just did. The Bayes estimator under SE loss is simply the posterior expectation of the thing we are trying to estimate.
$$
E(\theta(1-\theta)) = E(\theta) - E(\theta^2) = E(\theta) - (V(\theta) + E(\theta)^2)
$$
We already know what $E(\theta)$ is from above, and $V(\theta)$ is just:
$$
\frac{\alpha \beta}{ (\alpha + \beta)^2 ( \alpha + \beta + 1)}
$$ |
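A small numerical cross-check of these formulas (a sketch, assuming NumPy and SciPy are available):

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
n, theta_true = 50, 0.3
X = rng.binomial(1, theta_true, size=n)    # Bernoulli(theta) sample
s = X.sum()                                # n * X-bar

post = beta(s + 1, n - s + 1)              # posterior Beta(alpha, beta)
print(post.mean(), (s + 1) / (n + 2))      # posterior mean, two ways

# Bayes estimate of theta(1 - theta) under squared-error loss:
print(post.mean() - (post.var() + post.mean() ** 2))
```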
Every compact chain has a convergent subchain | In effect you have a compact metric space $S$ endowed with a linear order $\preceq$. $S$ is sequentially compact, so every infinite sequence in $S$ has a convergent subsequence. Let $\langle x_n:n\in\omega\rangle$ be a sequence in $S$, and suppose that there is an infinite $N\subseteq\omega$ such that the subsequence $\langle x_n:n\in N\rangle$ is $\preceq$-increasing. Then there is an infinite $N_0\subseteq N$ such that $\langle x_n:n\in N_0\rangle$ converges to some $x\in S$. Let $A=\{x_n:n\in N_0\}$, and note that if $x_n,x_m\in A$, then $x_n\preceq x_m$ iff $n\le m$. For each $\epsilon>0$ there is an $m_\epsilon\in N_0$ such that $d(x,x_n)<\epsilon$ for all $n\in N_0$ such that $n\ge m_\epsilon$, but this just says that $d(x,y)<\epsilon$ for all $y\in A$ such that $x_m\preceq y$.
Thus, you want to rule out all order types in which $\omega$ cannot be embedded, i.e., all order types in which every strictly increasing sequence is finite. In all other order types there is a subsequence of the desired kind. |
Variation of Parameters: Why didn't multiply by $x$ in particular solution of $y''-2y'+y=\frac{e^x}{x}$? | I am not sure I am answering your question.
Life could have been easier if, from the very beginning, you had defined $$y=e^x \,z\qquad y'=e^x z'+e^x z\qquad y''=e^x z''+2 e^x z'+e^x z$$ which would have transformed the differential equation to $$e^x z''=\frac {e^x} x\implies z''=\frac 1x $$ Integrating a first time $$z'=\log(x)+c_1$$ and a second time $$z=x \log (x)-x+c_1x+c_2=x \log (x)+(c_1-1)x+c_2=x \log (x)+c_3x+c_2$$ making finally $$y=e^x\left(x \log (x)+c_3x+c_2 \right)$$
Making the problem more general such as $$y''-2y'+y=\frac{e^x}{f(x)}$$ the same method would have given $$z''=f(x)$$ |
Largest integer less than 999 with $(n-1)^2\mid n^{2016}-1$ | $n-1$ divides $n^k-1$ automatically for nonnegative integer $k$, so we need the largest three-digit $n$ with
$$n-1\mid 1+n+n^2+\dots+n^{2015}$$
Now $n^k\equiv1\bmod n-1$ and there are 2016 such powers in $\frac{n^{2016}-1}{n-1}$, so $n-1\mid 2016$. The largest three-digit factor of 2016 is $2016/3=672$, so the required $n$ is 673. |
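For a quick check, the divisibility condition can be tested directly in Python via modular exponentiation:

```python
# (n-1)^2 | n^2016 - 1  iff  n^2016 == 1 (mod (n-1)^2)
ok = [n for n in range(100, 999) if pow(n, 2016, (n - 1) ** 2) == 1]
print(max(ok))   # 673
```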
Find distance between a circle's point and line AB, where A and B are it's tangents' intersections? | In the corrected figure (see @Mercy King above), with MC = 4 perpendicular to the tangent at A, and MD= 9 perpendicular to the tangent at B, join MA and MB, and draw ME perpendicular to AB. Since triangles ACM and MEB are similar (see Euclid, Elements III, 32), $4/AM = ME/MB$. And since triangles AME and BMD are likewise similar, $AM/ME = MB/9$. Therefore, $4/ME = ME/9$, making $ME^2 = 36$ and $ME = 6$. |
Orthogonal set of a set in Hilbert space | Let $F$ be a closed subspace of $H$, such that: $E \subset F$.
As $F$ is a closed subspace, we get $F^{\perp \perp} = F$. But:
$E \subset F \implies F^{\perp} \subset E^{\perp} \implies E^{\perp \perp} \subset F^{\perp \perp} = F$
Edit: Proving that $F = F^{\perp \perp}$
Note that since $F$ is a closed subspace, and $H$ is a Hilbert, then $H = F \oplus F^{\perp}$.
Let $x \in F^{\perp \perp}$. $x \in H$, so by above, $\exists$ a unique $y \in F$ and a unique $z \in F^{\perp}$ such that: $x = y + z$. Write $z = x - y$. We have that $y \in F \subset F^{\perp \perp}$, but $F^{\perp \perp}$ is a subspace, hence $z \in F^{\perp \perp}$. Thus, $z \in F^{\perp} \cap F^{\perp \perp} = \{0\}$; so that $z = 0$, and $x = y \in F$. The result follows. |
Problem with Smith normal form over a PID that is not an Euclidean domain | $\xi^2=\xi-5$, and therefore $4+\xi^2=\xi-1$ (which is a prime element). On the other side, $2\xi+3=2(\xi-1)+5=2(\xi-1)+\xi-\xi^2=-(\xi-1)(\xi-2)$, so $b_{33}\mid b_{22}$.
The SNF of your matrix is $$\begin{pmatrix}1 & 0 & 0\\0&\xi-1&0\\0& 0 & (\xi-1)(\xi-2) \end{pmatrix}.$$ |
Application of double integrals-finding mass | Using polar coordinates is a good idea indeed.
$r$ varies from $0$ to the circle $r=2\sin{\theta}$, and $\theta$ varies from $y=x$ to $x=0$. If you replace $x$ by $r\cos{\theta}$ and $y$ by $r\sin{\theta}$, you get $r\cos{\theta}=0$ and $r\sin{\theta}=r\cos{\theta}$, and there are only two angles in the first quadrant that satisfy these equations: $\theta=\pi/2$ for the first one, and $\theta=\pi/4$ for the second, as you guessed. So $\theta$ varies from $\pi/4$ to $\pi/2.$
PS: your Cartesian intervals are correct too. |
Lower bound on $P(X \geq \theta E [X] )$ | Note that your denominator is exactly $\mathbb{E}[(X - \theta \mathbb{E}[X])^2]$. So we have the following:
$$
\begin{aligned}
(1 - \theta) \mathbb{E}[X] &= \mathbb{E}[X - \theta E[X]] \leq \mathbb{E}\big[(X - \theta \mathbb{E}[X]) 1\{X - \theta \mathbb{E}[X] \geq 0\}\big] \\
&\overset{\text{(C.S.)}}{\leq} \sqrt{\mathbb{P}(X \geq \theta \mathbb{E}[X])} \sqrt{\mathbb{E}[(X - \theta \mathbb{E}[X])^2]}
\end{aligned}
$$
Rearranging will give you the result. |
Expectation for number of students who miss true value of $\beta_1$ | $p=5\%$ since that's the probability each student has for the true value of $\beta_1$ lying outside his CI. $E(X) = 103(0.05) = 5.15$. |
Prove that if $A \subset B$ then $P(A) \leq P(B)$ | You probably learned a fact along the lines of "if two events $X$ and $Y$ are disjoint, then $P(X\cup Y)=P(X)+P(Y)$." Since $A$ and $A^c\cap B$ are disjoint, you have
\begin{align*}
P(B)&=P(A\cup(A^c\cap B))\\&=P(A)+P(A^c\cap B)\\&\geq P(A)+0\\&=P(A)
\end{align*}
where we used the fact that $P(A^c\cap B)\geq 0$. |
Evaluating $\sum_{n=1}^\infty \sqrt n$ $\log({n+3\over n})$ | Hint: use the fact that $$
\log (1+u)\sim_{u\to 0} u
$$ |
Need a hint to evaluate the indefinite integral $\int\frac{e^x(2-x^2)}{(1-x)\sqrt{1-x^2}}dx$? | Your trick does work.
$$e^x\frac{2-x^2}{(1-x)\sqrt{1-x^2}}=e^x\frac1{(1-x)\sqrt{1-x^2}}+e^x\sqrt{\frac{1+x}{1-x}}$$
And,
$$\frac{d\sqrt{\frac{1+x}{1-x}}}{dx}=\frac{\sqrt{1-x}}{2\sqrt{1+x}}\frac{2}{(1-x)^2}=\frac1{(1-x)\sqrt{1-x^2}}$$
So the antiderivative is, $$e^x\sqrt{\frac{1+x}{1-x}}+C$$ |
$x^p-a$ irreducible? | If $a\in F$, it's known that for $p$ a prime, $X^p-a$ has no root in $F$ iff $X^p-a$ is irreducible over $F$. A proof on this site can be found here, and in other linked duplicates.
The roots of $X^p-a$ in a splitting field are of the form $\zeta^k\alpha$ for $0\leq k\leq p-1$, where $\alpha$ is one fixed root and $\zeta$ is a primitive $p$-th root of unity. If $F$ contained any of these roots, then since $F$ contains $\zeta$, it would also contain $\alpha$, a contradiction. So $X^p-a$ has no root in $F$, and by the linked result it must be irreducible. |
Range of Controllability is A-Invariant | Let $p(x)=x^n+c_{n-1}x^{n-1}+\cdots+c_1x+c_0$ be the characteristic polynomial of $A$. Since $p(A)=0$, we get
$$
A^n=-c_{n-1}A^{n-1}-\cdots-c_1A-c_0I.
$$
Therefore, if $\eta=[B,AB,\ldots,A^{n-2}B,A^{n-1}B]x$ where $x^T=(x_0^T,\ldots,x_{n-1}^T)$ with each $x_i\in\mathbb R^m$, then
\begin{aligned}
A\eta
&=A[B,AB,\ldots,A^{n-2}B,A^{n-1}B]x\\
&=ABx_0+A^2Bx_1+\cdots+A^{n-1}Bx_{n-2}+A^nBx_{n-1}\\
&=ABx_0+A^2Bx_1+\cdots+A^{n-1}Bx_{n-2}-(c_{n-1}A^{n-1}B+\cdots+c_1AB+c_0B)x_{n-1}\\
&=B(-c_0x_{n-1})+AB(x_0-c_1x_{n-1})+A^2B(x_1-c_2x_{n-1})+\cdots+A^{n-1}B(x_{n-2}-c_{n-1}x_{n-1})\\
&=[B,AB,\ldots,A^{n-2}B,A^{n-1}B]\pmatrix{-c_0x_{n-1}\\ x_0-c_1x_{n-1}\\ x_1-c_2x_{n-1}\\ \vdots\\ x_{n-2}-c_{n-1}x_{n-1}}\\
&=[B,AB,\ldots,A^{n-2}B,A^{n-1}B]y.
\end{aligned} |
Exercising divergent summations: $\lim 1-2+4-6+9-12+16-20+\ldots-\ldots$ | Dear Gottfried, as you correctly observe, the sum is the Taylor expansion of
$$ \frac{1}{(1+x)^3 (1-x)} $$
for $x=1$. This function has a pole at $x=1$, so the result is a genuine divergence, the standard number "infinity" (without a specification of the phase) that is inverse to zero. The fact that some seemingly divergent sums have finite values doesn't mean that all of them have finite values.
Your second method is illegitimate because it clumps the neighboring values of $n$ - the exponents in the powers of $x$ - which means that the justification isn't robust under any infinitesimal deformations of the parameters. Note that it wasn't really legitimate that you wrote $1+1+1+\dots = \zeta(0)$. In fact, $\zeta(0)-n$ for any integer $n$ - and in fact, not only integer - would be equally (un)justified. In fact, none of them gives the right result.
It's a misconception that $1+1+1+\dots = -1/2$ "always" holds. It's only true if the terms $1$ are associated with values of $n$ that go over positive integers. But if they go over non-negative integers, the result would be $+1/2$, and so on. |
Homogenous and non-Homogenous equations | If $Ax=0$ has exactly one solution, then $A$ is injective, but it need not be surjective (indeed it is if, and only if, $n=m$), so $Ax=b$ can have either one or zero solutions.
An example for the case with zero solutions is
$$\left(\begin{array}{cc}1 & 0\\0 & 1\\0 & 0\end{array}\right)x = \left(\begin{array}{c}1\\1\\1\end{array}\right)$$ |
Problem with the inverse expansion | There is no need to know anything about variable $z$ and the relation between $q$ and $z$ in order to solve this problem.
Just think that $q, t$ are variables connected by equation $$t = q - 12q^{2} + 66q^{3} - 220q^{4} + 495q^{5} - \cdots\tag{1}$$ and the problem here is to find $q$ in terms of a power series in $t$. Let us assume that $$q = a_{0} + a_{1}t + a_{2}t^{2} + a_{3}t^{3} + \cdots\tag{2}$$ and we will find the numbers $a_{k}$ one by one. Clearly from $(1)$ we can see that if $q = 0$ then $t = 0$, and hence putting $t = 0$ in $(2)$ we get $a_{0} = 0$, so that $$q = a_{1}t + a_{2}t^{2} + \cdots$$ Putting this value of $q$ in $(1)$ we get $$t = (a_{1}t + a_{2}t^{2} + \cdots) - 12(a_{1}t + a_{2}t^{2} + \cdots)^{2} + \cdots = a_{1}t + (a_{2} - 12a_{1})t^{2} + \cdots$$ Hence by comparing coefficients of $t$ and $t^{2}$ on both sides we get $a_{1} = 1$ and $a_{2} = 12a_{1} = 12$. It is possible to apply the same technique (with more calculational effort) to find $a_{3}, a_{4}, \ldots$ Thus for $a_{3}$ you will need to carry the calculations through the coefficients up to $t^{3}$. Note that the calculation of $a_{k}$ will depend on the values of the previously calculated $a_{1}, a_{2}, \ldots, a_{k - 1}$.
So far we have calculated $a_{1}, a_{2}$ above and from that we get $q = t + 12t^{2} + \cdots$. |
True/False: If $\sum_{n=1}^\infty a_{2n-1}+a_{2n}=0$ then $\sum_{n=1}^\infty a_n=0$ | Try $a_n = \begin{cases}-1 \text{ if $n$ is odd} \\ 1 \text{ if $n$ is even}\end{cases}$. Then $a_{2n-1} + a_{2n} = (-1) + 1 = 0$, so that $\sum\limits_{n = 1}^\infty (a_{2n-1} + a_{2n}) = \lim\limits_{N \to +\infty}\sum\limits_{n = 1}^N(a_{2n-1} + a_{2n}) = \lim\limits_{N \to +\infty} 0 = 0$, but the partial sum $\sum\limits_{n = 1}^N a_n = (-1) + 1 + (-1) + 1 + \dots = \begin{cases}-1 \text{ if $N$ is odd} \\ 0 \text{ if $N$ is even}\end{cases}$, so that $\sum_{n=1}^\infty a_n$ doesn't even exist. |
Indeterminate velocity components of a particle at the center of a polar coordinate system | If you are considering the velocity of a point particle at the centre of a polar coordinate system then you know that all of the velocity is directed radially outwards from this centre.
Therefore $v_r = v_{total}$ and $v_{\theta} = 0$.
Mathematically we have:
$$\dot x = \dot r \cos \theta - r\dot\theta \sin\theta = \dot r\cos\theta$$
$$\dot y = \dot r \sin \theta + r\dot\theta \cos\theta = \dot r\sin\theta$$
therefore,
$$ \dot r = \sqrt{\dot x^2+\dot y^2}$$
which is generally not true but is in this case because $v_{\theta} = 0$ by definition. |
definite Integral with limit approaches $\infty$ | Taking the limit inside (justified, e.g., by dominated convergence: for $n\ge 2$ the integrand is bounded in absolute value by the integrable function $(1+t/2)^{-2}$), we get:
\begin{equation}
I=\int\limits_{0}^{+\infty}\lim_{n\rightarrow \infty} \left(1+\frac{t}{n}\right)^{-n}\cos\left(\frac{t}{n}\right)\,dt
\end{equation}
Let $L$ be the limit; then, using the product rule for limits, we can split it as follows:
\begin{equation}
L=\underbrace{\lim_{n\rightarrow \infty}\left(1+\frac{t}{n}\right)^{-n}}_{L_{1}}\times\underbrace{\lim_{n\rightarrow \infty} \cos\left(\frac{t}{n}\right)}_{L_{2}}
\end{equation}
The second limit is equal to $1$ and the first limit is the definition of the exponential function $e^{-t}$. Then, we conclude that $L=e^{-t}$. Plugging this into $I$:
\begin{equation}
I=\int\limits_{0}^{+\infty}e^{-t}\,dt=\Gamma(1)=0!=1
\end{equation} |
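A numerical sanity check (a sketch assuming SciPy; the finite-$n$ integrals should approach $1$):

```python
import numpy as np
from scipy.integrate import quad

for n in (10, 100, 1000):
    val, _ = quad(lambda t: (1 + t / n) ** (-n) * np.cos(t / n), 0, np.inf)
    print(n, val)   # tends to Gamma(1) = 1 as n grows
```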
How to evaluate $\int_{0}^{\pi }\theta \ln\tan\frac{\theta }{2} \, \mathrm{d}\theta$ | Obviously we have
$$\int_{0}^{\pi }\theta \ln\tan\frac{\theta }{2}\mathrm{d}\theta =4\int_{0}^{\pi /2}x\ln \tan x\mathrm{d}x$$
then use the definition of the Lobachevsky function (see Table of Integrals, Series, and Products, Eighth Edition, by Gradshteyn and Ryzhik, page 900)
$$\mathrm{L}\left ( x \right )=-\int_{0}^{x}\ln\cos t\,\mathrm{d}t,~ ~ ~ ~ ~ -\frac{\pi }{2}\leq x\leq \frac{\pi }{2}$$
Hence we have
\begin{align*}
\int_{0}^{\pi /2}x\ln\tan x\mathrm{d}x &= x\left [ \mathrm{L}\left ( x \right )+\mathrm{L}\left ( \frac{\pi }{2}-x \right ) \right ]_{0}^{\pi /2}-\int_{0}^{\pi /2}\left [ \mathrm{L}\left ( x \right )+\mathrm{L}\left ( \frac{\pi }{2}-x \right ) \right ]\mathrm{d}x\\
&= \left ( \frac{\pi }{2} \right )^{2}\ln 2-2\int_{0}^{\pi /2}\mathrm{L}\left ( x \right )\mathrm{d}x
\end{align*}
use
$$\mathrm{L}\left ( x \right )=x\ln 2-\frac{1}{2}\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k-1}}{k^{2}}\sin 2kx$$
(Integrate the Fourier series of $\ln\cos x$ from $0$ to $x$.)
we can calculate
\begin{align*}
\int_{0}^{\pi /2}\mathrm{L}\left ( x \right )\mathrm{d}x&=\frac{1}{2}\left ( \frac{\pi }{2} \right )^{2}\ln 2-\frac{1}{2}\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k-1}}{k^{2}}\int_{0}^{\pi /2}\sin 2kx\mathrm{d}x \\
&= \frac{\pi ^{2}}{8}\ln 2-\frac{1}{2}\sum_{k=1}^{\infty }\frac{1}{\left ( 2k-1 \right )^{3}}
\end{align*}
So
\begin{align*}
\int_{0}^{\pi/2}x\ln\tan x\mathrm{d}x &=\frac{\pi ^{2}}{4}\ln 2-2\left [ \frac{\pi ^{2}}{8}\ln 2-\frac{1}{2}\sum_{k=1}^{\infty }\frac{1}{\left ( 2k-1 \right )^{3}} \right ] \\
&=\sum_{k=1}^{\infty }\frac{1}{\left ( 2k-1 \right )^{3}}\\
&=\sum_{k=1}^{\infty } \frac{1}{k^{3}}-\sum_{k=1}^{\infty }\frac{1}{\left ( 2k \right )^{3}}=\frac{7}{8}\zeta \left ( 3 \right )
\end{align*}
Hence the initial integral is
$$\boxed{\Large\color{blue}{\int_{0}^{\pi }\theta \ln\tan\frac{\theta }{2}\mathrm{d}\theta=\frac{7}{2}\zeta \left ( 3 \right )}}$$
In addition, as you mentioned,
$$\int_{0}^{\pi }\theta \ln\tan\frac{\theta }{2}\mathrm{d}\theta=\color{red}{\sum_{n=1}^{\infty }\frac{1}{n^{2}}\left [ \psi \left ( n+\frac{1}{2} \right )-\psi \left ( \frac{1}{2} \right ) \right ]=\frac{7}{2}\zeta \left ( 3 \right )}$$
or
$$\sum_{n=1}^{\infty }\frac{1}{n^{2}}\psi \left ( n+\frac{1}{2} \right )=\frac{7}{2}\zeta \left ( 3 \right )-\left ( \gamma +2\ln 2 \right )\frac{\pi ^{2}}{6}$$ |
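A numerical check of the boxed value (a sketch assuming SciPy; $\frac{7}{2}\zeta(3)\approx 4.207$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

# The log singularities at both endpoints are integrable;
# quad may emit an accuracy warning but converges.
val, err = quad(lambda t: t * np.log(np.tan(t / 2)), 0, np.pi)
print(val, 3.5 * zeta(3))   # both approximately 4.207
```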
Let $f(x)=x^3+ax^2+bx+c$ Prove that for $a^2\lt 3b$ there exists only one $x_0$ such that $f(x_0)=0$ | $f'(x)=3x^2+2ax+b=3(x+\frac13a)^2-\frac13a^2+b>3(x+\frac13a)^2\ge 0$ (using $a^2\lt 3b$), so $f$ is strictly increasing; since a cubic tends to $\mp\infty$ as $x\to\mp\infty$, it follows that $f$ has exactly one real zero $x_0$. |
A characterization of semisimple module related to anihilators | Let $m$ be an arbitrary nonzero element of $M$. Let $ann(m)= I_1\cap \cdots \cap I_r$ with $I_r$ maximal left ideal of $R$. Then the canonical map
$$
R\cdot m \cong R/ \operatorname{ann}(m) \to R/I_1 \times \cdots \times R/I_r=:N
$$
is an injective homomorphism of left $R$-modules. Since $N$ is clearly semisimple, $R\cdot m$ is semisimple as a left $R$-module. Thus $M$ is a sum of semisimple submodules, and so $M$ is itself semisimple. |
prove two rings are isomorphic | It is true, but I'm not clear on your observation.
Maybe in your observation you meant this: "I see there are two ideals of $R/I\cap J$, namely $I/I\cap J$ and $J/I\cap J$, that have trivial intersection, and add up to $R/I\cap J$, therefore their sum is direct." Not only does that say they are isomorphic, it says the two sets are equal.
You could also just apply elementary ring isomorphism theorems.
By the Chinese remainder theorem it is isomorphic to $R/I\times R/J$, and the second isomorphism theorem shows that each of these factors is isomorphic to what you described. |
Relationship between OLS estimates of slope coefficients of simple linear regression Y on X and X on Y | There is some inconsistency in your notation: I am assuming that you estimated the betas through a regression of $y$ on $x$ and the gammas through a regression of $x$ on $y$.
That means that:
- in the first case, you are assuming the "error" to be in the $y$ data (while the $x$ values are precise);
- in the second, on the contrary, the "error" is on the $x$ data.
In any case the regression line is going to pass through $(x_{avg},y_{avg})$, so we are concerned only with determining the slope.
So $\beta_1$ corresponds to the assumption that all the error lies in $y$, and $1/\gamma_{1}$ to the assumption that it lies entirely in $x$.
The intermediate case in which we allow the error to be both on the $x$ and $y$ data is solved through Total Least Squares linear regression which would produce a different slope according to the presumed ratio of precision (or noise) between the two sets of data.
So a relation cannot be established, unless you state such an assumption.
-- addendum in reply to your comment --
The "physical" and "engineering" concept about regression is what I described above.
If you have, e.g., a set of measurements of (volume, weight) of a certain substance, made with instruments of which you know the precision and excluding systematic errors, and you want to determine the density of the substance then you resort to the statistics property of levelling out the errors by using multiple samples. That is to least squares method.
The standard least squares method is to minimize $(\Delta y)^2=(y_n-(\beta_0+ \beta_1 x_n))^2$, which is the same as saying that the "precise" value of $y$ is $\beta_0+ \beta_1 x_n$, which in turn means that $x_n$ is precise and you are minimizing the vertical distances from the points $y_n$ to the line.
And that's acceptable if the expected r.m.s. error (the $\sigma$) on $x$ is much lower than that on $y$.
If it is the contrary, then you shall minimize the horizontal deviations from the line ($x,y$ regression).
In the case in which the expected r.m.s. errors are comparable, you shall minimize the distances from the line taken along a direction inclined same as the ratio of the $\sigma$'s (in between the two extreme cases above).
So, physically it is known that the vertical and horizontal approaches might provide two different slopes, and mathematically you cannot reconcile them without making assumptions about the said ratio.
-- addendum upon making the Gauss-Markov conditions explicit --
Translated into the engineering approach described above, they read as:
- the systematic error is null;
- it is present on both $x$ and $y$, independently;
- it is distributed as a double Normal, on the two independent variables $x,y$, thus as a product of two Normals;
- the expected r.m.s. error is given and is respectively $\sigma_x , \, \sigma_y$.
So what explained above is totally justified and the actual slope shall be computed by minimizing the sum of
the square errors = deviations taken along the line $\Delta y / \Delta x = \sigma_y / \sigma_x$.
Or, in other words, transform $x$ and $y$ into
$$
\xi _{\,n} = {{x_{\,n} - \bar x} \over {\sigma _{\,x} }},\quad \eta _{\,n} = {{y_{\,n} - \bar y} \over {\sigma _{\,y} }}
$$
and calculate the errors orthogonally to the line : Orthogonal Regression.
Now you expect to have the inversion of the slope if you invert the axes. |
Finding total number of cases in probability questions | It's useful to think of the "balls going through holes" as a process, and then keep track of the different possibilities so far. First, think of the first ball. It can go through $7$ holes. So we have $7$ cases for the first ball. Then the second ball comes, and it also has $7$ different holes that it can go through. Now we multiply these values and we have $7\times 7 = 7^2$ different cases.
Can you see how to continue from here? |
Linear system of equations: change in one variable with respect to another | As noted in a comment, it does not really make sense to write
$\frac{\Delta x}{\Delta c}$ if $c$ is really a constant.
Moreover, if $x$ and $y$ are fully determined by these equations
when $c$ is a constant, it does not make sense to write
$\frac{\Delta x}{\Delta y}$ either.
This suggests that whoever posed this problem does not really
consider $c$ to be a constant.
It may be that they interpret $c$ as a "known" parameter in the context of
this problem. That is, it is supposed that $c$ will be somehow "given"
from outside the problem space, and that we then solve for $x$ and $y$
using this "given" value of $c$.
In that case, we can make both $x$ and $y$ change value
by changing the value of $c$.
You can solve the given set of equations assuming $c$ is known,
and get $x$ as an expression in $c$ and $y$ as an expression in $c$.
Luckily, in this particular problem you should then be able
to write $x$ as an expression in $y$,
that is, $x = f(y)$, eliminating $c$ from that equation.
I think it will then be fairly easy to see what
$\frac{\Delta x}{\Delta y}$ must be.
You can still think of the variation of $c$ as the "reason" why
$x$ and $y$ are able to vary, but you can find
the relative rates at which $x$ and $y$ vary.
By the way, notice that every term that is added to each side of each
equation in your system of equations is either one of the quantities
$x, y, s, c$ or a (truly) constant multiple of one of those quantities
such as $-c$ or $2c$.
As a result, if you multiply every one of the quantities $x, y, s, c$
by the same constant $k$, and substitute the result into the system of
equations, you get a set of equations equivalent to the original equations.
For example, if we multiply everything by $3$, we get:
\begin{align}
3s &= 3y - 3c\\
3y &= 2(3c) \\
3x &= 3s.
\end{align}
This implies that if the system of equations has a unique solution for
any $c$, all the quantities $x, y, s, c$ will be in the same
proportions to each other for any value of $c$; in particular,
$x = ry$ for some constant ratio $r$. What you are supposed to do in
this problem is to find $r$. |
A technical step in the proof of Atiyah-Bott fixed point formula | $\DeclareMathOperator{\Tr}{Tr}$I don't think $\Tr \, Fe^{-t\Delta_q}$ is generally an integer. (I'm not sure what you mean by "in the end it operates on $H^\ast (S_q)$".) It is only when we take the alternating sum on $q$ that we get something constant in $t$. This is the miracle of "supersymmetry".
As a simple example, take $F$ to be the identity map $C^\infty(S) \to C^\infty(S)$ (this is associated to the identity map $\varphi = \text{Id}: M \to M$, if you like). (I would think that one could come up with more interesting examples, but this will do for now.) Then $F^\ast$ is the identity on the cohomology, so $\Tr(F^\ast|_{H^q(S)} ) = \dim H^q(S)$, and the Lefschetz number is simply the Euler characteristic of the complex.
As you say, Roe's argument shows that this Euler characteristic is equal to the limit $\lim_{t\to \infty} \sum_q (-1)^q \Tr \, e^{-t\Delta_q}$, and that in fact, by "super-symmetry", $\sum_q (-1)^q \Tr \, e^{-t\Delta_q}$ is constant in $t$. (I think that this argument in this case is really just the usual McKean-Singer argument used in the heat equation proof of the index theorem.)
But the individual terms in the sum are definitely not constant in $t$. In fact, $\Tr \, e^{-t\Delta_q} \sim c_q t^{-\frac{n}{2}}$ as $t \to 0$ for some constant $c_q$, as discussed in Roe's book. So for small $t$, the terms $\Tr \, e^{-t\Delta_q}$ are quite singular, and it is somewhat miraculous that the singularities cancel in the alternating sum to give something constant in $t$. |
$N^{th}$ root of a sequence converges to the same limit. | Hint: Let $\epsilon>0.$ Then there exists $N$ such that $a_n<L+\epsilon$ for $n>N.$ Thus
$$(a_1a_2\dots a_n)^{1/n}\le (a_1a_2\dots a_N)^{1/n}(L+\epsilon)^{(n-N)/n}.$$
Take the $\limsup$ as $n\to \infty$ to see
$$\limsup (a_1a_2\dots a_n)^{1/n}\le 1\cdot (L+\epsilon).$$
This shows $\limsup(a_1a_2\dots a_n)^{1/n}\le L$; a similar argument, using $a_n>L-\epsilon$ for large $n$, bounds the $\liminf$ from below. |
Evaluate Integral Using Stokes Theorem | Since $\Gamma$ is a circle in a plane, we can apply Stokes' Theorem to the disk $D$ that it encloses to get that $\int_{\Gamma} F \cdot ds = \int_D \operatorname{curl} F \cdot dA$. At each point of $D$, the normal to the disk is the same as the normal to the plane. In general, a normal to a plane is a vector $v$ such that the equation of the plane can be written in the form $(x - x_0, y - y_0, z - z_0) \cdot v = 0$, where $(x_0, y_0, z_0)$ is any fixed point on the plane. In this case, we can take $(x_0, y_0, z_0)$ to be the origin. So you just need to find a unit vector $v$ with this property, pointing in whichever direction agrees with the orientation of $\Gamma$. |
Reduction of quadratic forms | Method 2 is just an algorithm to find a diagonal matrix which is congruent to the first, so maybe you can call it matrix congruence method.
Another method along the lines of matrices is to do orthogonal diagonalization, but method 2 would be faster...
You can do all the row operations first, but be careful: then you have $P^TA$, so the safest approach is to take the resulting matrix on the right of your augmented matrix (which is $P^T$), transpose it, and multiply on the right, which gives you $P^TAP$. I don't think you gain much by doing this - it is much simpler to just alternate row and column operations. |
Why is $\mathbb{R}^3 - A \simeq S^1 \vee S^2$? Where $A$ is a circle. | It may help your intuition that $\mathbb{R}^3$ deformation-retracts onto the closed ball $B$ of radius $2$. Now remove the circle $A = \{(x,y,z)\ |\ x^2 + y^2=1, z=0\}$ from $B$. Notice that the space $B-A$ is homotopy equivalent to a ball with a donut removed, which is homotopy equivalent to its boundary sphere with a line segment connecting the north and south poles. (Visualize this as "inflating" the removed set $A$.)
Do you see why the result is homotopy equivalent to $S^1\vee S^2$? |
Convex hulls intesection | See that $(0,0)$ belongs to the line segment connecting $(-1,-1)$ and $(1,1)$. |
Total rotation of a circle (or other closed curve) when 'rolled' along a curve in $\mathbb{R}^2$ | Can anyone intuitively explain why the latter works as well?
It's easier to understand if the curve is parametrized by arc length. Also, I find it easier to understand the relation $T=S+rW$ relating absolute lengths than the relation $T/C=S/C+W/(2\pi)$ relating counts of rotations.
So let $s$ parametrize the curve by arc length. I'll also assume that the rotating circle is along the "outside" of the curve the entire time, and that $\kappa$ is nonzero. At the point of tangency, there is the osculating circle with radius $1/\kappa(s)$. The rotating circle has radius $r$. So the center of the rotating circle is (for an infinitesimal moment) tracing a circular path with radius $r+1/\kappa(s)$.
Over a short length $ds$ within the curve, the length of the circular arc that the center travels through can be calculated using proportional reasoning:
$$dt=\frac{r+1/\kappa(s)}{1/\kappa(s)}ds=(1+r\kappa(s))ds$$
Now integrate over $s$ and you get the arc length through which the center passes. That is, $$T=\int_0^S(1+r\kappa(s))\,ds$$
But break it up into two integrals:
$$\begin{align}
T&=\int_0^Sds+r\int_0^S\kappa(s)\,ds\\
T&=S+rW
\end{align}$$
Now divide by $C$ to get the form you have observed.
If the rotating circle is along the "inside", then the same reasoning changes the integral for $T$ to $$T=\int_0^S(1-r\kappa(s))\,ds$$ If the curve has zero curvature throughout, then $$T=\int_0^Sds=S$$ And lastly for more complicated curves, if they can be broken up piecewise into curves of these three types, you are set. |
Cake slicing hypothesis problem | Let $\mu$ and $\nu$ be the true weights of the two slices. Your hypotheses are $H_0 : \mu = \nu$ and $H_1 : \mu \ne \nu$.
To design a test, come up with a statistic (involving $X$ and $Y$) whose distribution is known under the null. |
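For instance (a minimal sketch of one possible choice, assuming - beyond what the question states - that the measured weights are independent and normal with a known common variance $\sigma^2$): under $H_0$,
$$Z=\frac{X-Y}{\sigma\sqrt{2}}\sim N(0,1),$$
so a level-$\alpha$ test rejects $H_0$ when $\lvert Z\rvert>z_{\alpha/2}$.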
$\arg (z) < 0$ , then $\arg (- z) - \arg (z)$ = | It doesn't matter that $\arg z<0$. Consider
$$
z=|z|e^{i\theta}\\
$$
Now,
$$-z=|z|e^{i(\theta\pm\pi)}=|z|e^{i\tilde\theta}$$
where $\tilde\theta=\arg(-z)$
Then
$$\arg(-z)-\arg(z)=\tilde\theta-\theta=\pm\pi$$
This has been verified numerically for random values of $z$. |
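That numerical verification is easy to reproduce (a sketch assuming NumPy; note `np.angle` takes values in $(-\pi,\pi]$):

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.normal(size=1000) + 1j * rng.normal(size=1000)
diff = np.angle(-z) - np.angle(z)
print(np.unique(np.round(diff, 12)))   # only -pi and +pi appear
```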
Partial derivative 8 | Your answer is correct. It's just a matter of simplifications.
\begin{align}
\frac{\partial f'(k)}{\partial b}
& = \frac{ak^{a}(1-a)(1+a+abk)}{(1+abk)^{a+1}}\\
& = \frac{ak^{a}(1+a+abk-a-a^2-a^2bk)}{(1+abk)^{a+1}}\\
& = \frac{ak^{a}(1+abk-a^2-a^2bk)}{(1+abk)^{a+1}}\\
&=\frac{ak^{a}(1+abk)^{-a}(1+abk-a^2-a^2bk)}{(1+abk)}\\
& = ak^{a}(1+abk)^{-a}\left [ 1-\frac{a(a+abk)}{(1+abk)} \right ]
\end{align} |
The letter E is thrown away from the word GEBON . How many ways can rest of the letters be jumbled so that O does not appear at the beginning? | Note: The solution is based on the assumption that a letter can appear only once in the word.
After removing E from the word GEBON, the word left is GBON.
Now the letter at the start can be G, B or N i.e, you've three options to start the letter.
After selecting the start letter, you're left with three other letters for the second letter in the word (for example, if the first letter of the word is G, then the second letter can be B, O or N).
After picking the first two letters, you can select the third letter in two different ways and the then you have a single choice left for the last letter.
Hence the total number of such words will be $3 \times 3 \times 2 \times 1 = 18$.
==============================================================================
If the letters in the word can be repeated, then the total number of words is $3 \times 4 \times 4 \times 4 = 192$. |
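Both counts are easy to confirm by brute force (a Python sketch using the standard library):

```python
from itertools import permutations, product

# No repetition: arrangements of G, B, O, N not starting with O.
print(sum(1 for w in permutations("GBON") if w[0] != "O"))        # 18

# With repetition allowed: 3 choices for the first letter, 4 for the rest.
print(sum(1 for w in product("GBON", repeat=4) if w[0] != "O"))   # 192
```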
Does $\sum a_i b_i $ converge, if $ \sum a_i^2 , \sum b_i^2 $ converge? | Just remembered the inequality in its correct form (credit: Wikipedia). The statement itself is a one-line proof:
$|\langle u,v\rangle|^2 \leq \langle u,u\rangle\cdot\langle v,v\rangle$
I was (incorrectly) writing it as $\sum{a_ib_i \leq (\sum a_i)^{1/2}(\sum b_i)^{1/2}}$, without the squares.
Credit : Qiaochu Yuan and Copper.hat in the comments section. |
Pigeon-Hole Principle Common Sum | For the $14$ case, we show that there exist at least one number from set $\{3,4,5,...,17\}$ is not obtainable and at least one number from set $\{199,198,...,185\}$ is not obtainable.
First consider the set $\{3,4,5,...,17\}$.
Suppose all numbers in this set are obtainable.
Then since $3$ is obtainable, $1$ and $2$ are of different colors. Then since $4$ is obtainable, $1$ and $3$ are of different colors. Now suppose $1$ is of one color and $2,3,\ldots,n-1$ (with $n-1<17$) are all of the same color, different from $1$'s; then, in order for $n+1$ to be obtainable, $n$ and $1$ must be of different colors, so $2,3,\ldots,n$ are of the same color. Hence by induction $2,3,\ldots,16$ must all be of the same color. However, this means there are $16-2+1=15$ balls of that color, a contradiction.
Hence at least one number in the set is not obtainable.
We can use a similar argument to show that if all elements of $\{199,198,...,185\}$ are obtainable, then $99,98,...,85$ must all be of the same color, which again gives $15$ balls of one color, a contradiction; so at least one of these numbers is not obtainable as well.
Now we have only $195$ choices left, and since $196>195$, the $14$ case reduces to the $15$ case, where an identical sum must appear.
As to the comment, I constructed a counter-example list for the $13$ case as follows. The idea of constructing this list is similar to the proof for the $14$ case.
Red: $(1,9,16,23,30,37,44,51,58,65,72,79,86)$
Green: $(2,3,4,5,6,7,8;94,95,96,97,98,99)$
Note that $86+8=94$ and $1+94=95$, so there are no duplicated sums. |
$\det(A)={\pm 1} \iff A\in\mathcal O_n(\Bbb R)$? | Not every matrix with $\det(A)=\pm 1$ is orthogonal. For instance, the matrix
$$ A=\begin{bmatrix}1&1\\0&1\end{bmatrix} $$
is not orthogonal. |
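A quick numerical illustration of the counterexample (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(np.linalg.det(A))   # 1.0, yet A is not orthogonal:
print(A @ A.T)            # [[2. 1.] [1. 1.]], not the identity
```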
Is there an analogue of Mordell-Weil theorem for other fields? | For $K=\Bbb Q_p$ there will be uncountably many $K$-rational points.
The exact structure of $K$-rational points will depend on the
reduction type of the curve, but points close to the base-point $O$ are
in essence parameterised by the formal group of the curve. See Silverman's book for much more detail. |
Paths of even and odd lengths between cube vertices | Imagine coloring the vertices of the cube in this way:
Now imagine you start from a red vertex. Taking one step, in any direction, you arrive at a blue vertex. It is easy to verify that this holds for every red vertex.
Now check the blue vertices. It is easy to see that from a blue vertex, with one step, you can only go to a red one.
So, if you start from a red vertex, at each odd step you will be at a blue vertex, and at each even step you will be again at a red vertex.
This works no matter how long the path is, and the path can even go back and forth.
So, since two adjacent vertices have different colors, only odd-length paths can join them. |
Is the polynomial $(2x+2)$ irreducible in $\mathbb{Z}[x]$? | Your proof is incorrect - you display one factorisation in which one of the factors is a unit. However to be irreducible you have to consider all possible factorisations.
As in the comments the factorisation $2x+2=2(x+1)$ is a factorisation into non-units in $\mathbb Z[x]$ which shows that the polynomial is reducible.
The situation in $\mathbb Q[x]$ is different, because $2$ is a unit in this context.
You don't need Eisenstein here - since the product of polynomials has degree the sum of the degrees, and a linear polynomial has degree $1$ the only possible factors have degree $0$ (constants) or $1$. If there is a factorisation in the $\mathbb Z$ context, one of the factors must be a constant, and this will be a product of primes. The prime factors of the constant must clearly be factors of each of the coefficients of the polynomial, and any prime which divides the coefficients leads to a factorisation.
In the $\mathbb Q$ context every constant other than $0$ is a unit, so a non-trivial factorisation into irreducibles must involve two factors of degree greater than $0$. |
Winding map doesn't make sense to me | Let $x:[0,1)\rightarrow S^1$ be given by $x(t)=\exp(2i\pi t)$.
Then $\omega_N(x(t))=\exp(2i\pi Nt)$.
When $t$ goes from $0$ to $1$, $x(t)$ goes around the circle exactly once, while $\omega_N(x(t))$ goes around it $N$ times (in the anticlockwise direction); each point is visited $N$ times. That is why $\omega_N$ is the map winding the circle around itself $N$ times. |
Pythagorean triplet mutiple angles | Since $\mathbb Q[i]$ (the set of complex numbers with rational real and imaginary parts) is closed under multiplication, it follows directly from de Moivre's formula that if $\cos x$ and $\sin x$ are both rational, then so are $\cos(nx)$ and $\sin(nx)$ for all integer $n$. |
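A small exact-arithmetic illustration (a hypothetical example with $\cos x = 3/5$, $\sin x = 4/5$, using Python's `fractions`):

```python
from fractions import Fraction

def mul(z, w):
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i, staying inside Q[i]
    return (z[0]*w[0] - z[1]*w[1], z[0]*w[1] + z[1]*w[0])

c, s = Fraction(3, 5), Fraction(4, 5)   # a rational point on the unit circle
z = (Fraction(1), Fraction(0))
for n in range(1, 5):
    z = mul(z, (c, s))
    print(n, z)   # (cos nx, sin nx) -- rational for every n
```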
convergent subsequence in $S$ versus in $\overline{S}$ | Since $H$ is a metric space, a subset $S$ of $H$ is compact if and only if it is sequentially compact. This means that $S$ is compact if and only if every sequence in $S$ has a convergent subsequence, where the limit is also in $S$.
Now, as you state, a subset $S$ of $H$ is said to be precompact if and only if $\overline{S}$ is compact. Thus $S$ is precompact iff every sequence in $\overline{S}$ has a convergent subsequence, where the limit is also in $\overline{S}$.
It turns out that the above is equivalent to the statement that every sequence in $S$ has a convergent subsequence, where the limit here is not assumed to be in $S$. Let's prove this:
($\implies$): Suppose $S$ is precompact and let $(x_n)$ be a sequence in $S$. Then $(x_n)$ is a sequence in $\overline{S}$, which is compact. Hence there exists a subsequence $(x_{n_k})$ of $(x_n)$ and a point $x\in\overline{S}$ such that $x_{n_k}\to x$.
($\impliedby$): Suppose every subsequence in $S$ has a convergent subsequence. We show that $\overline{S}$ is compact. To do so, we show that every sequence in $\overline{S}$ has a subsequence which converges to a point in $\overline{S}$. Let $(x_n)$ be a sequence in $\overline{S}$. Since every element in $\overline{S}$ is the limit of a sequence in $S$, for each $n$ there exists a sequence $(y_{n,m})_m$ in $S$ such that $y_{n,m}\to x_n$.
Thus, for each $n$, there exists a natural number $M_n$ such that
$$
\|y_{n,m}-x_n\| < 1/n, \quad \text{for all $m\ge M_n$}.
$$
Since the diagonal sequence $(y_{n,M_n})_n$ is a sequence in $S$, our hypothesis gives a subsequence $(y_{n_k,M_{n_k}})_k$ of it that converges to a point $y$.
We show that $x_{n_k}\to y$.
Let $\varepsilon>0$.
Choose a natural number $N$ such that $1/N<\varepsilon/2$ and
$$
\|y_{n_k,M_{n_k}}-y\| < \varepsilon/2, \quad \text{for all $k\ge N$}.
$$
Then if $k\ge N$, we have $n_k\ge k \ge N$ and hence
$$
\|x_{n_k}-y\|
\le \|x_{n_k}-y_{n_k,M_{n_k}}\|+\|y_{n_k,M_{n_k}}-y\|
< 1/n_k+\varepsilon/2
< \varepsilon.
$$
Since $\overline{S}$ is closed, we also have that $y\in\overline{S}$.
This completes the proof. |
The diophantine equation $a^{a+2b}=b^{b+2a}$ | Interesting equation. We can solve it using the following fact:
Lemma: Let $a, b, n$ be positive integers. If $a^n\mid b^n$, then $a\mid b$.
This result follows immediately from the fundamental theorem of arithmetic.
Since $a<b$ we have $b+2a<a+2b$, so $a^{b+2a}\mid a^{a+2b}=b^{b+2a}$; then by the above lemma we deduce that $a\mid b$. Let's write $b=ak$, where $k$ is a positive integer greater than $1$. Substituting $b=ak$ into the equation gives us $$a^{a(2k+1)}=a^{a(k+2)}k^{a(k+2)}$$ $$\implies a^{a(k-1)}=k^{a(k+2)}\implies a^{k-1}=k^{k+2},$$ where the last step takes $a$-th roots.
Now, since $k^{k-1}\mid k^{k+2}=a^{k-1}$, we obtain, by the lemma again, that $k\mid a$. So let's write $a=kn$, where $n$ is a positive integer. Substituting $a=kn$ gives us $$k^{k-1}n^{k-1}=k^{k+2}$$ $$\implies n^{k-1}=k^3.$$
The last equation can be solved using inequalities which can be proved by induction. Namely, if $n\ge 9$, $n^{k-1}\ge 9^{k-1}>k^3$ for all $k\ge 2$. Therefore $n\le 8$.
If $n=8$, $8^{k-1}>k^3$ for every $k\ge 3$. If $k=2$ we get an equality, and this gives us $a=16$ and $b=32$.
A similar reasoning gives us no solutions for every $n\le 7$, except for $n=4$, where we find $k=4$, and thus $a=16$ and $b=64$. Hence, all the solutions are $(a,b)=(16, 32)$ and $(a, b)=(16, 64)$ as you stated. |
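Python's exact big-integer arithmetic makes a brute-force confirmation over a small range cheap (a sketch; the bound $a,b\le 100$ is an arbitrary choice):

```python
# All solutions with 1 <= a < b <= 100, compared exactly:
sols = [(a, b) for a in range(1, 101) for b in range(a + 1, 101)
        if a ** (a + 2 * b) == b ** (b + 2 * a)]
print(sols)   # [(16, 32), (16, 64)]
```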
Algorithm for finding many gcd(a,b) over a range of a? | Mathematica will vectorize this automatically for you, which will be much faster than an explicit loop.
E.g. finding $gcd(a,b)$ where $a$ is a 70 digit number and $b$ ranges over a list of 10K composite numbers of the same magnitude took about 0.1 seconds. Just now, on a fairly old machine.
The Mathematica code is simply
mygcds = GCD[a,B];
where $B$ is the list containing the numbers $b$. |
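For comparison, a rough Python analogue (a sketch; `math.gcd` on arbitrary-precision integers is similarly fast, though timings will differ):

```python
import random
from math import gcd

random.seed(0)
a = random.getrandbits(230)                     # roughly a 70-digit number
B = [random.getrandbits(230) for _ in range(10_000)]
mygcds = [gcd(a, b) for b in B]                 # well under a second
print(len(mygcds), max(mygcds))
```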
Finding the general solutions of a linear equation system and proving it | Since $P$ is real, induction is not the right tool. Also, when you suggest induction as the approach, you should check that the proposed solution indeed works for the examples that you tried.
When $P<0$, we can't have $x_1 < 0, x_2 > 0, x_3>0$ as the first entry would give us $Px_1+x_2+x_3>0$.
You should try Gaussian Elimination. That would also enable you to address the case when does it not have a solution, when does it have a unique solution, and when does it have infinitely many solutions. I would leave that to you as an exercise.
Also, you are right that when $P=1$, it has no solution.
However, let's see how I can use part of your conjecture. Your conjecture suggests that $x_2=x_3$ and $x_1=-2x_2$ and let's check:
$$\begin{bmatrix} P & 1 & 1 \\ 1 & P & 1 \\ 1 & 1 & P \end{bmatrix}\begin{bmatrix} -2 \\ 1 \\ 1 \end{bmatrix}=\begin{bmatrix} -2P+2 \\ P-1 \\ P-1\end{bmatrix}=(P-1) \begin{bmatrix} -2 \\ 1 \\ 1\end{bmatrix}$$
Hence when $P \ne 1$, we indeed have
$$\begin{bmatrix} P & 1 & 1 \\ 1 & P & 1 \\ 1 & 1 & P \end{bmatrix}\begin{bmatrix} \frac{-2}{P-1} \\ \frac1{P-1} \\ \frac1{P-1} \end{bmatrix}= \begin{bmatrix} -2 \\ 1 \\ 1\end{bmatrix}$$
Hence if $P \ne 1$ and if the solution is unique, then you are done.
Let's analyze when is the matrix singular.
The determinant is a cubic polynomial in terms of $P$. When $P=1$, the determinant must be $0$; indeed all the rows are equal and the nullity is $2$, so $1$ is a double root. Also, we can easily see that each row sums to $P+2$, hence $P=-2$ is another root. When $P=-2$, the system is consistent and it has infinitely many solutions.
When $P=-2$, the rank of the matrix is $2$ and the nullity is $1$.
$\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$ is a particular solution and each row would sum to $0$.
Hence, $$\begin{bmatrix} 1 \\ 0 \\ 0\end{bmatrix} + t \begin{bmatrix} 1 \\ 1\\ 1\end{bmatrix}$$ is the general solution when $P=-2$.
I think it is important to be able to solve such problem using a standard way. Do solve it using Gaussian Elimination or some other ways and then compare the solution. |
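For instance, a numerical comparison against a standard solver (a sketch assuming NumPy, with a few arbitrary sample values $P \ne 1, -2$):

```python
import numpy as np

b = np.array([-2.0, 1.0, 1.0])
for P in (3.0, -0.5, 10.0):
    M = np.array([[P, 1, 1], [1, P, 1], [1, 1, P]])
    x = b / (P - 1)                                   # the claimed solution
    print(P, np.allclose(np.linalg.solve(M, b), x))   # True
```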
when is : $\int_{R^n\backslash\{0\}}|{f(x) \over |x|^{n-2}}| dx < \infty$ given $f$ is summable and continuous? | Outside the origin, $\frac{f(x)}{\lvert x\rvert^{n-2}}$ is continuous as the quotient of two continuous functions. For continuity in the origin, a necessary condition is that $f$ has a zero of "order" at least $n-2$ in $0$, but that is not sufficient. A zero of order strictly greater than $n-2$ would be sufficient, but not necessary.
However, for the result
$$\frac{\partial}{\partial r} \int_{B(0;r)} \frac{f(x)}{\lvert x\rvert^{n-2}}\,dx = \int_{\partial B(0;r)} \frac{f(x)}{\lvert x\rvert^{n-2}}\, dS(x),\; r > 0,\tag{1}$$
it is not necessary that the integrand be continuous in $0$. That $f$ is continuous and integrable is sufficient. The continuity of $f$ means that $f$ is bounded on $\overline{B(0;1)}$, so
$$\int_{\mathbb{R}^n}\left\lvert \frac{f(x)}{\lvert x\rvert^{n-2}}\right\rvert\,dx \leqslant \max \{\lvert f(x)\rvert : \lvert x\rvert \leqslant 1\}\int_{B(0;1)}\frac{dx}{\lvert x\rvert^{n-2}} + \int_{\mathbb{R}^n\setminus B(0;1)} \lvert f(x)\rvert\,dx < \infty$$
since $\lvert x\rvert^{2-n}$ is locally integrable, and $f$ integrable.
Thus $(1)$ holds already under the assumption that $f$ is continuous and integrable. |
Solving functional equation $f(x)\cdot f(y)-xy=f(x)+f(y)-1$ | Note that in your first step you had the following equation:
$$f(x)^2-x^2=2f(x)-1$$
Instead of factoring, if we rearrange all the terms on one side we get:
$$f(x)^2 - 2f(x) - x^2 + 1 = 0$$
Note we have a quadratic, with $f(x)$ acting like our $x$ and the term $1-x^2$ being our constant term.
Solving, we get:
$$f(x) = \frac{2 \pm \sqrt{4 -4(1-x^2)}}{2}$$
$$ = \frac{2 \pm \sqrt{4 - 4 + 4x^2}}{2}$$
$$ = \frac{2 \pm \sqrt{4x^2}}{2}$$
$$ = \frac{2 \pm 2x}{2}$$
Therefore, we get $2$ solutions for $f(x)$:
$$f(x) = \frac{2 + 2x}{2} = 1+x$$
$$f(x) = \frac{2 - 2x}{2} = 1-x$$ |
Multiple choice questions in exam | Indeed, the minimum number is $4^9+1$. You can prove this works using the pigeonhole principle, with $4^9$ holes corresponding to the answers to the first $9$ questions.
To prove this is the minimum, you need to find a set of $4^9$ tests where no two agree on $8$ answers. From the pigeonhole argument, you know that every possible string of $4^9$ answers for the first $9$ questions must appear exactly once. You just need to choose the last answer for each of these in such a way that when any two answer sequences agree in $8$ places, their $10^{th}$ answers do not agree. Addition $\pmod 4$ is one way to accomplish this... |
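A scaled-down check of that construction (a Python sketch with $q=3$ prefix questions plus one checksum answer; it confirms that no two distinct tests agree on all but one answer):

```python
from itertools import product

q = 3
tests = [ans + (sum(ans) % 4,) for ans in product(range(4), repeat=q)]

def agree(s, t):
    return sum(a == b for a, b in zip(s, t))

# Distinct tests never agree on exactly q of the q+1 answers:
print(all(agree(s, t) != q
          for i, s in enumerate(tests) for t in tests[i + 1:]))   # True
```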
Intersection of arbitrary union of compact subsets. | First: your intersection is $\bigcap_{1 \le x < 2 }[x,3] = [2,3]$: the right side is included in the left since $[2,3] \subset [x,3]$ for all $x < 2$. Conversely, if $p$ is in the left-hand intersection, then $p \le 3$ is clear, and if $p < 2$ held, $p$ would not be in the set $[\max(1,\frac{p+2}{2}), 3]$, which is one of the sets the intersection ranges over - a contradiction. So $p \ge 2$, and hence $p \in [2,3]$. But it's good to try to come up with potential counterexamples and see what goes wrong.
Second, the statement is in fact true. As a hint: use that compact sets are closed (which uses the Hausdorffness!). |
If $\forall R >0$, $B(0,R)\cap F$ is finite, then $F$ is discrete. If $F$ is closed, then the converse is true. | If $B(0, R)\cap F$ were not finite for some $R>0$, then this is a bounded infinite subset of $\mathbb{R}^2$ and so has a limit point by Bolzano-Weierstrass Theorem. But $F$ is closed and so this limit point belongs to $F$. This contradicts that $F$ is discrete. |
How to find a matrix which corresponds to given linear transformation and a certain basis. | Hints:
Question 2): A (nonzero) polynomial of degree $d$ has at most $d$ roots. Now, if there's a non-trivial linear relation between $1, x,x^2,\dots, x^n$:
$$p(x)=c_0+c_1x+c_2x^2+\dots+c_nx^n=0,$$
this means the polynomial $p(x)$ has every real number $\alpha$ as a root. However, if the coefficients are not all zero, $\deg p(x)$ is defined (only the degree of $0$ is undefined), and $\deg p(x)\le n$, so that it should have only a finite number of roots.
Question 4): The column vectors of the matrix are the coordinates of $T(x^k),\ k=0,\dots, n\;$ in the basis $\mathcal B$.
$$T(x^k)=x\cdot kx^{k-1}=kx^k,$$
hence $$T=\begin{bmatrix}
0&0&0&\dots&0\\
0&1&0&\dots&0\\
0&0&2&\dots&0\\
\vdots&\vdots&\vdots&\dots&0\\
0&0&0&\dots&n
\end{bmatrix}.$$ |
Why is $F_{i+2} ≥ (\frac {1+\sqrt5} {2})^i$ in the Fibonacci series? | Prove it inductively, noting that
$$
\left( \frac{1+\sqrt 5}{2}\right)^{i+2} = \left( \frac{1+\sqrt 5}{2}\right)^{i+1} + \left( \frac{1+\sqrt 5}{2}\right)^{i}
$$ |
In what mathematically rigorous sense does $ \mathbb{Q}$ extend $\mathbb{Z}$? | Your intuition is correct about the isomorphism between $\Bbb Z$ and a subset of $\Bbb Q$. To show the algebraic structure is maintained, you need to show that addition and multiplication work the same way. Let $\times$ and $+$ represent the operations in $\Bbb Z$ and $\oplus$ and $\otimes$ represent the operations in $\Bbb Q$. Then you need to show that $f(\frac n1 \oplus \frac m1)=f(n)+f(m)$ and $f(\frac n1 \otimes \frac m1)=f(n)\times f(m)$ Note that the operations have the proper sort of operands. In fact $\oplus$ and $\otimes$ are defined to make this true. Presumably you have already shown that $\oplus$ and $\otimes$ are well defined, so whichever elements of an equivalence class you choose for the operands, the result is in the same class. |
Invariant theory of $SL(n)$ acting on $F^{n\times m}$ by left-multiplication | The answer is contained in CLASSICAL INVARIANT THEORY A Primer by Hanspeter Kraft and Claudio Procesi, section 8.4 The First Fundamental Theorem for $SL_n$. As Omnomnomnom correctly suspected in the comments, the invariant ring is generated by all $n \times n$ minors, i.e. determinants of any $n$ out of $m$ vectors. In particular, there are no invariants in case $n > m$. |
Finding the error in an argument | Nothing wrong. Just change it into
$$\frac{d z}{d x}=\frac{\partial z}{\partial x}\frac{\partial x}{\partial x}+\frac{\partial z}{\partial y}\frac{\partial y}{\partial x}=\frac{\partial z}{\partial x}+2x\frac{\partial z}{\partial y}$$
Note that the first term is $\frac{d z}{d x}$, which is different from $\frac{\partial z}{\partial x}$. So they cannot cancel out. The partial derivatives do exist, but be careful not to mix them up with $\frac{d z}{d x}$, which is NOT a partial derivative.
Actually, a better way to say this is that
$$\left[\frac{\partial z}{\partial x}\right]_{y=x^2}=\frac{\partial z}{\partial x}\frac{\partial x}{\partial x}+\frac{\partial z}{\partial y}\frac{\partial y}{\partial x}=\frac{\partial z}{\partial x}+2x\frac{\partial z}{\partial y}.$$
Where I have clearly written down the restriction $y=x^2$. |
Show the following limit doesn't exist $\lim_{(x,y)\to (0,0)}x\ln(xy)$ | I am assuming that this is for positive $x$, as the expression doesn't make sense for negative $x$.
The final limit is easier than you might think. You have two terms: $x\ln(x)$ and $-\frac1x$. The first you know is negative (you might know that it goes to $0$, but that's not really relevant), while the second you know goes to $-\infty$ as $x\to 0$. Their sum clearly also goes to $-\infty$. |
Differential of sin x series confusion | There is a mistake when you first take derivative:
$$y' \neq \sum_{n=1}^\infty\frac{(-1)^n}{(2n)!}x^{2n}$$ but rather
$$y' = \sum_{n=0}^\infty\frac{(-1)^n}{(2n)!}x^{2n}.$$
You can easily see that from
$$(x-\frac{x^3}{3!}+\frac{x^5}{5!}-\ldots)' = 1-\frac{x^2}{2!}+\frac{x^4}{4!}-\ldots$$
With that correction in mind, we now have that
$$y'' = \sum_{n=1}^\infty\frac{(-1)^n}{(2n-1)!}x^{2n-1} = \sum_{n=0}^\infty\frac{(-1)^{n+1}}{(2n+1)!}x^{2n+1} = -y.$$
Finally, $y'''' = (-y)'' = y$.
Addendum: When you differentiate power series, starting index doesn't necessarily shift, as we can see in our examples. So, how do we know whether to shift or not?
Well, it's rather simple, when we differentiated $y$, we got $a_nx^{2n}$ and $2n\geq 0$ for $n\geq 0$, so there is no shift needed. On the other hand, when we differentiated $y'$, we got $b_nx^{2n-1}$, but $2n-1\not\geq 0$ for $n = 0$, so we need to shift. |
How many square matrices are there with given columns, rows and diagonals | I have answered your question both in its literal interpretation, which has a trivial answer, and I've tried to interpret your actual question on the basis of your example, and your finding of $432$ matrices for $n=3$. For the latter, continue reading from the second horizontal line onward.
Literal interpretation, trivial answer:
Taking your question literally, the answer is $1$ for all $n$. After all, if the ordered sets
$$(\text{rows},\ \text{columns},\ \text{diagonals},\ \text{antidiagonals}),$$
of two matrices are the same, then in particular their sets of rows and columns are the same. Of course the $i,j$-th entry of a matrix is in the $j$-th place in some row and the $i$-th place in some column, so its sets of rows and columns together determine the matrix. If we consider the unordered sets
$$\{\text{rows},\ \text{columns},\ \text{diagonals},\ \text{antidiagonals}\},$$
then there are more options; transposing a matrix swaps the sets of rows and columns, but preserves the sets of diagonals and antidiagonals. Similarly, reflecting a matrix along a horizontal (or vertical) axis preserves the sets of rows and columns, but swaps the sets of diagonals and antidiagonals. In this way we already find $4$ matrices with the same unordered sets.
Proposition: For every square matrix there are either $4$ or $8$ matrices with the same unordered set.
Proof. Given two matrices $M$ and $N$ with the same unordered set, the first row of $M$ is either a row, column, diagonal or antidiagonal of $N$. The other rows of $M$ are precisely the $n$-tuples in the unordered set of $M$ that do not share a coordinate with the first row. Hence they correspond to the same type (row, column, diagonal or antidiagonal) as the first row. So the matrix $N$ induces a permutation of the set
$$\{\{\text{rows of }M\},\{\text{columns of }M\},\{\text{diagonals of }M\},\{\text{antidiagonals of }M\}\}.\tag{1}$$
As noted before, a matrix is determined by its sets of rows and columns. So given $M$, the matrix $N$ is determined by whether the first row and first column of $M$ are rows, columns, diagonals or antidiagonals. This shows that conversely $N$ is uniquely determined by the permutation of $(1)$ it induces, and moreover that $N$ is determined by where it maps the rows and columns of $M$. This shows that the set of matrices with the same unordered set is in bijection with a subgroup of $S_4$ of order at most $4\times3=12$. As noted before, transposing and reflecting $M$ preserves the set $(1)$. This shows that this subgroup contains a subgroup isomorphic to $\langle(1\ 2),(3\ 4)\rangle$, and hence it is isomorphic to either $V_4$ or $D_8$. Hence there are either $4$ or $8$ matrices with the same unordered set.
$\hspace{10pt}\square$
The proposition could be improved by determining whether there are $4$ or $8$ such matrices, and what this depends on, if anything. But given that you found $432$ matrices with the same 'ordered set', you must mean something else. My best guess is that you take the rows, columns, diagonals and antidiagonals as unordered sets as well.
Different interpretation, interesting answer:
Given a matrix $M$, let $X$ its unordered set of unordered sets. For you example $3\times3$-matrix, this is
$$X=\left\{\begin{array}{lll}\{1,2,3\},&\{4,5,6\},&\{7,8,9\},\\
\{1,4,7\},&\{2,5,8\},&\{3,6,9\},\\
\{1,5,9\},&\{2,6,7\},&\{3,4,8\},\\
\{3,5,7\},&\{2,4,9\},&\{1,6,8\}
\end{array}\right\}.$$
In general, if $M$ is an $n\times n$-matrix then $|X|=4n$ (if $n\geq3$)${}^1$, and each element of $X$ is an $n$-element subset of $\{1,\ldots,n^2\}$. Moreover, the relation on the set $X$ consisting of the pairs
$$(x,y)\in X\times X\qquad\text{ with }\qquad x\cap y=\varnothing,$$
is an equivalence relation that partitions $X$ into $4$ subsets of $n$ elements each (if $n$ is odd)${}^2$, say $X_1$, $X_2$, $X_3$ and $X_4$. These correspond to the sets of rows, columns, diagonals and antidiagonals of $M$.
As noted above (in the first part of my answer to the literal interpretation), any ordering of the rows and columns determines the matrix entirely. So any choice of $X_i$ and $X_j$ for the rows and columns, and any ordering of $X_i$ and $X_j$, determines a matrix with the same unordered set as $M$. This yields a total of $4\times3\times n!\times n!=12(n!)^2$ matrices with the same unordered set as $M$.
Of course, different choices of $X_i$ and $X_j$ yield different matrices, as their rows and columns will differ. Similarly, different orderings of $X_i$ and $X_j$ yield different matrices, as the orders of their rows and/or columns will differ. This shows that there are precisely $12(n!)^2$ matrices with the same unordered set, and this characterisation gives you a way to construct them all explicitly.
Note that in particular, for $n=3$ this gives $12\times(3!)^2=432$ matrices, in agreement with your finding.
More explicit description of algorithm to generate all solutions:
To generate all matrices with the same unordered set $X$ as some given $n\times n$-matrix $M$, where $n\geq3$ is odd,
Define $X_1$, $X_2$, $X_3$ and $X_4$ as the sets of rows, columns, diagonals and antidiagonals.
Choose distinct sets $X_a$ and $X_b$ for the rows and columns.
Choose orderings of $X_a$ and $X_b$, and denote them by
$$X_a=(X_a[1],\ldots,X_a[n])
\qquad\text{ and }\qquad
X_b=(X_b[1],\ldots,X_b[n]).$$
For each integer $k\in\{1,\ldots,n^2\}$, determine $i$ and $j$ such that $k\in X_a[i]$ and $k\in X_b[j]$.
Define $n_{ij}:=k$.
Define $N:=(n_{ij})_{1\leq i,j\leq n}$.
The matrix $N$ now has the same unordered set $X$ as $M$, and going through every choice of $X_a$ and $X_b$ and every ordering of $X_a$ and $X_b$ yields all matrices with the same unordered set $X$.
Below is a runnable Python version of the procedure above. Instead of making the $X_i$ sets and choosing orderings, I make the $X_i$ lists of sets and iterate over permutations of their indices.
from itertools import permutations

def matrices_with_same_X(M):
    # Yield every matrix with the same unordered set X as the n x n matrix M,
    # for odd n >= 3.  Entries of M are assumed to be 1..n^2, as in the example
    # above; diagonals and antidiagonals are the broken (wrap-around) ones.
    n = len(M)
    X = [
        [set(M[i][j] for j in range(n)) for i in range(n)],            # X_1: rows
        [set(M[i][j] for i in range(n)) for j in range(n)],            # X_2: columns
        [set(M[i][(i + d) % n] for i in range(n)) for d in range(n)],  # X_3: diagonals
        [set(M[i][(d - i) % n] for i in range(n)) for d in range(n)],  # X_4: antidiagonals
    ]
    for a in range(4):                            # X[a] supplies the rows
        for b in range(4):                        # X[b] supplies the columns
            if b == a:
                continue
            for s in permutations(range(n)):      # s is an ordering of X[a]
                for t in permutations(range(n)):  # t is an ordering of X[b]
                    N = [[0] * n for _ in range(n)]
                    for k in range(1, n * n + 1):
                        i = next(i for i in range(n) if k in X[a][s[i]])
                        j = next(j for j in range(n) if k in X[b][t[j]])
                        N[i][j] = k
                    yield N
This outputs all matrices with the same unordered set $X$ as $M$.
Footnotes:
$1$: If $n\leq2$ then the sets of rows, columns, diagonals and antidiagonals are not all distinct; for $n=1$ all four sets coincide and so $|X|=1$, for $n=2$ the sets of diagonals and antidiagonals coincide and so $|X|=6$.
$2$: If $n$ is odd, then every diagonal and antidiagonal have precisely one entry in common. If $n$ is even this is not the case, and the relation is not an equivalence relation. |
Solution of $y'=(1+x)(1+y)$ | $$\ y'=\frac{d y}{d x}=(1+x)(1+y)$$
Then by separation of variables we get:
$$\ \frac{dy}{1+y}=(1+x)dx$$
And now you can easily integrate both sides:
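Carrying out the integration (completing the last step): $$\ln|1+y|=x+\frac{x^2}{2}+C,$$ so $$y=Ae^{x+x^2/2}-1,$$ where $A=\pm e^C$ (and $A=0$ recovers the constant solution $y=-1$). |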
Arithmetic Mean convergence and concave transformations | Here is a negative answer, but with an indirect construction:
Construction. We prepare the setting as follows.
$X$ and $Y$ are non-negative rv's such that $\Bbb{E}X = \Bbb{E}Y = \alpha < \infty$ and $\Bbb{E}X^{\delta} \neq \Bbb{E}Y^{\delta}$.
Generate independent rv's $X_1, Y_1, X_2, Y_2, \cdots$ such that $X_n \sim X$ and $Y_n \sim Y$.
$(m_k)$ and $(n_k)$ are non-decreasing sequences of non-negative integers such that $m_k + n_k = k$ and $m_k, n_k \uparrow \infty$.
Define $Z_1, Z_2, \cdots$ so that
$$ Z_1 + \cdots + Z_k = (X_1 + \cdots + X_{m_k}) + (Y_1 + \cdots + Y_{n_k}) $$
for any $k$. As $k$ increases by 1, precisely one of $m_k$ or $n_k$ increases by 1, hence $Z_k$ are well-defined rv's. (In particular, $Z_k$ will be either $X_{m_k}$ or $Y_{n_k}$.)
Then the Strong Law of Large Numbers (SLLN) shows that
$$ \frac{1}{k} \sum_{i=1}^{k} Z_i
= \left( \frac{m_k}{k} \cdot \frac{1}{m_k} \sum_{i=1}^{m_k} X_i + \frac{n_k}{k} \cdot \frac{1}{n_k} \sum_{i=1}^{n_k} Y_i \right)
\rightarrow \alpha \quad \text{a.s.}$$
But at the same time, by SLLN again,
$$ \frac{1}{m_k} \sum_{i=1}^{m_k} X_i^{\delta} \to \Bbb{E}X^{\delta} \quad \text{and} \quad \frac{1}{n_k} \sum_{i=1}^{n_k} Y_i^{\delta} \to \Bbb{E}Y^{\delta} \quad \text{a.s.} $$
So by choosing $m_k$ and $n_k$ so that $(m_k/k)$ oscillates as $k \to \infty$, we can make
$$ \frac{1}{k} \sum_{i=1}^{k} Z_i^{\delta}
= \left( \frac{m_k}{k} \cdot \frac{1}{m_k} \sum_{i=1}^{m_k} X_i^{\delta} + \frac{n_k}{k} \cdot \frac{1}{n_k} \sum_{i=1}^{n_k} Y_i^{\delta} \right)$$
divergent almost surely.
Remark. You see that the probabilistic technicality is not that crucial to our construction. Indeed, all we need are two non-negative sequences $x_k$ and $y_k$ such that
$$ \lim_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} x_k = \lim_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} y_k \quad \text{but} \quad \lim_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} x_k^{\delta} \neq \lim_{n\to\infty} \frac{1}{n}\sum_{k=1}^{n} y_k^{\delta}. $$
I just wanted to avoid an explicit construction of such sequences, but you may figure out concrete examples.
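For instance (one concrete choice of mine, not in the original answer): take $y_k\equiv1$ and let $x_k$ alternate between $0$ and $2$. Both running means tend to $1$, while $\frac1n\sum_{k=1}^n x_k^{\delta}\to 2^{\delta-1}\neq1=\lim_{n\to\infty}\frac1n\sum_{k=1}^n y_k^{\delta}$ whenever $\delta\neq1$. |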
Prove if $n$ is an odd integer, then $3n$ is odd. | Use the fact that $6k+3=2(3k+1)+1$.
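Explicitly: if $n=2k+1$ for an integer $k$, then $3n=6k+3=2(3k+1)+1$, which is odd. |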
Find the inverse of $f(x) = 1 + \frac{1}{x}, x \gt 0$ | You're confusing the input to $f_i$ with the input to $f$. To make this more clear, let's say we have $f(x)=1+\frac 1 x$ and then make $f_i(s)$ its inverse so that $f(x)=s$ and $f_i(s)=x$. As you said, we have that:
$$f_i\left(1+\frac 1 x\right)=x$$
Then, you say that we can subtract $1$ from the input and then multiply by $x^2$ to get $x$. Since the input is $s$, this gives us:
$$f_i(s)=(s-1)\cdot x^2$$
This is where you made a mistake: You mixed up $x$ and $s$ by saying they're the same when they're not. In reality, $s=f(x)=1+\frac 1 x$ and $x=f_i(s)$. Therefore, while the above is correct, it really means:
$$f_i(s)=(s-1)\cdot f_i(s)^2$$
which means we still need to solve for $f_i(s)$ in terms of $s$. If we divide both sides by $f_i(s)^2$ and take the reciprocal, we get:
$$f_i(s)=\frac{1}{s-1}$$
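As a quick check: $f\left(\frac{1}{s-1}\right)=1+(s-1)=s$, so this is indeed the inverse, defined on $s>1$ (the range of $f$ for $x>0$). |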
What is the name of the technique for showing that $\mathbb{N}^2$ is countable? | Aha, I found it! The $f$ function I'm using is Cantor's pairing function. |
A free submodule of a free module having greater rank the submodule | The answer is yes: I advise you to look up the MO question
https://mathoverflow.net/questions/30860/ranks-of-free-submodules-of-free-modules, where you can find several proofs.
However, if you are interested in the non-commutative case, there exists the following notion: a ring $R$ is said to have the IBN (invariant basis number) property if $R_R^n\cong R_R^m$ implies $n=m$, for all positive integers $n,m$.
Well, there do exist rings without IBN. Also, it is possible to find a ring $R$ and an $R$-module $M_R$ such that $M_R\cong M_R^n$ for every $n\geq 1$. This gives you examples of isomorphic modules of different "rank" (in fact, the rank is well defined only for commutative rings). I hope I have not digressed too much. |
How to sample from joint Bernoulli distribution? | First, knowing the pairwise distributions $p_{ij}$ already gives you $p_i$. On the other hand, that data is not enough to determine the full joint distribution.
If you are only interested in generating Bernoulli variables that fit those $p_{ij}$, without regard for higher-order distributions, then simply assume a Markov process, and generate $x_1$ according to $p_1$, then $x_2$ according to $p_{2|1}=p_{12}/p_{1}$, and so on.
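A minimal sketch of this Markov construction in Python (my own illustration, not part of the original answer; here p holds the marginals $p_i$ and p_pair the consecutive $p_{i,i+1}$, which must be mutually consistent so that both conditionals land in $[0,1]$):
import numpy as np

def sample_chain(p, p_pair, rng=None):
    # Generate x_1..x_n with P(x_i = 1) = p[i] and
    # P(x_i = 1, x_{i+1} = 1) = p_pair[i]; only consecutive
    # pairs are fitted, per the Markov assumption above.
    if rng is None:
        rng = np.random.default_rng()
    n = len(p)
    x = np.zeros(n, dtype=int)
    x[0] = rng.random() < p[0]
    for i in range(1, n):
        if x[i - 1]:
            cond = p_pair[i - 1] / p[i - 1]                  # P(x_i=1 | x_{i-1}=1)
        else:
            cond = (p[i] - p_pair[i - 1]) / (1 - p[i - 1])   # P(x_i=1 | x_{i-1}=0)
        x[i] = rng.random() < cond
    return x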
Regarding your last question: no special preference. You can give $p_1$, $p_2$ and either $\rho_{12}$ or $p_{12}$. In both cases there are three degrees of freedom, and it's trivial to convert from one representation to the other.
Update: The above is only useful for fitting the pair distributions of consecutive elements ($p_{1,2},p_{2,3},\dots$); it does not fit the other pair correlations. A complete treatment is more complicated.
A multivariate Bernoulli is fully (and uniquely) specified by the $2^n$ joint probabilities $p_{\bf x}$, subject to $0 \le p_{\bf x} \le 1$ and $\sum p_{\bf x}=1$.
An alternate (and perhaps more convenient) representation is given (refer to
this answer) by the $2^n-1$ coefficients $a_{i...k}=E[x_i \cdots x_k]$, where $x_i=\pm1$ and $a_{\emptyset}=1$. These two representations are linearly related through a Hadamard matrix.
In our case, we are given the $n$ first order coefficients $a_i$ and the $n(n-1)/2$ second order coefficients $a_{i,j}$. If we are at liberty to choose the other coefficients, we can try setting them to zero and checking that the resulting ${\bf p}={\bf M a}$ falls into the allowed hypercube. Once you have found this (or some other acceptable solution), we have the full joint probability, and we can proceed to generate samples.
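For the ${\bf p}={\bf Ma}$ step, here is a hedged sketch (the bitmask encoding is my own choice of convention, one of several possible): the Sylvester Hadamard matrix satisfies $H[b,S]=(-1)^{\mathrm{popcount}(b\,\&\,S)}$, which equals $\prod_{i\in S}x_i$ when bit $i$ of $b$ encodes $x_i=-1$.
import numpy as np
from scipy.linalg import hadamard

def joint_from_moments(a):
    # a[S] = E[prod_{i in S} x_i], indexed by subset bitmask S, with a[0] = 1;
    # the returned p is indexed by configuration bitmask b (bit i set <=> x_i = -1).
    N = len(a)                                # N = 2**n
    p = hadamard(N) @ np.asarray(a, float) / N
    return p                                  # acceptable iff every entry is in [0, 1]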
For sample generation, given the full joint probability, we can either generate the components in order using $P(x_i|x_{i-1} x_{i-2} \cdots x_1)$, or use Gibbs sampling or a Metropolis algorithm. |
Homomorphism $f:M \to M$ such that $\mathrm{im}(f) = L$ and $f \circ f = f$ where $L \le M$ and $M$ abelian | No, take $M=\mathbb Z$, $L=2\mathbb Z$.
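Indeed, every endomorphism of $\mathbb Z$ has the form $f(n)=cn$, and $f\circ f=f$ forces $c^2=c$, i.e. $c\in\{0,1\}$; so the image is $0$ or $\mathbb Z$, never $2\mathbb Z$. |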
Is Digit-wise calculation possible? | If $a$ is coprime to $10$, then $a^n \mod 10^k$ is periodic in $n$ with period dividing $\varphi(10^k) = 4 \times 10^{k-1}$. Thus the lowest $k$ decimal digits of
$a^n$ are the same as those of $a^m$ where $n \equiv m \mod 4 \times 10^{k-1}$.
For example, since $21 \equiv 1 \mod 4$, the lowest digit of $7^{21}$ is the same as that of $7^1$, namely $7$.
In the case of $7$, it turns out that the order of $7 \mod 1000$ is actually $20$, so the lowest three digits of $7^{21}$ are the same as those of $7^1$, namely $007$.
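This is quick to confirm with Python's built-in three-argument pow:
pow(7, 21, 1000)   # returns 7, so the last three digits of 7**21 are 007 |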
An extension of prime number theorem | Imagine a very large number $N$ and consider the range $[10^N,10^{N+1}]$. The natural logarithms of $10^N$ and $10^{N+1}$ only differ by $\ln(10)\approx 2.3$, hence the reciprocals of the logarithms of all primes in this range virtually coincide. Because of the approximation $$\int_a^b \frac{1}{\ln(x)}dx$$ for the number of primes in the range $[a,b]$, the number of primes in any subinterval is approximately its length multiplied by $\frac{1}{\ln(10^N)}$, so the primes are approximately equally distributed over the range. Hence your conjecture is true.
Benford's law seems to contradict this result, but it only applies to sequences producing primes, such as the Mersenne primes, and not to primes chosen randomly in the range above.
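A small empirical check of this equidistribution (my own sketch, using sympy): count the leading digits of the primes between $10^5$ and $10^6$.
from sympy import primerange
from collections import Counter

counts = Counter(int(str(p)[0]) for p in primerange(10**5, 10**6))
print(counts)   # counts for leading digits 1..9 are roughly equal, decaying slowly |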
Find a lower bound of $k$ when $m!=100x^2+20x$ and $k*100x^2+20x=(m+5)!$ | Since $$(m+5)!=m! \times (m+1)(m+2)(m+3)(m+4)(m+5)$$
$$k=(m+1)(m+2)(m+3)(m+4)(m+5)$$
and by estimation $k>m^5$, since each of the five factors exceeds $m$. |
Conserved Current for a PDE | Proof:
Consider equation (I) with $W(x)=const$ and the following initial conditions:
$$U(x,0)=0,$$
$$D_tU(x,0)=A,$$
and periodic boundary conditions:
$$U(x,t)=U(x+a,t),$$
where $a$ and $A$ are constants.
The solution of equation (I) depends only on $t$ in this case: $U=U(t)$, $p=p(t)$, and $j=0$. It has the following form:
$$U=\frac{A\hbar}{W}\sin{\frac{Wt}{\hbar}}$$
The first derivative is:
$$D_tU=A\cos{\frac{Wt}{\hbar}}$$
The function $p=p(U,D_tU)$ has two properties:
1. It must not depend on $t$, due to the conservation law.
2. It must not depend on $W$, due to property (i): $p$ does not change under the transformation $W\rightarrow W+c$.
However, such a function $p$ is constant: one can invert the expressions for $U$ and $D_t U$ to find functions $W(U,D_tU)$ and $t(U,D_tU)$ and use
$$\frac{\partial p}{\partial U}=\frac{\partial p}{\partial W}\frac{\partial W}{\partial U}+\frac{\partial p}{\partial t}\frac{\partial t}{\partial U}=0,$$
$$\frac{\partial p}{\partial (D_tU)}=\frac{\partial p}{\partial W}\frac{\partial W}{\partial (D_tU)}+\frac{\partial p}{\partial t}\frac{\partial t}{\partial (D_tU)}=0,$$
because $\frac{\partial p}{\partial W}=0$ and $\frac{\partial p}{\partial t}=0$.
UPDATE: answering the updated question:
Assume that $U=f(x-vt)$, where $v=(\hbar\omega/2m)^{1/2}$ (it follows from (ii) that $f(x)=\cos{((2m\omega/\hbar)^{1/2}x)}$). Assume that there is a function $p$ that depends only on $U$ and its derivatives, and does not depend on $\omega$. In this case, $P(x,t)$ is also a function of $x-vt$: $P(x,t)=F(x-vt)$.
As a consequence:
$$\frac{D_t P}{D_x P}=-v=-\sqrt{\frac{\hbar\omega}{2m}},$$
which contradicts the assumption that $P(x,t)$ does not depend on $\omega$. |
A question about corners of $C^\ast$-algebras | Not necessarily. Let $\mathcal A=C ([0,1]\cup [2,3]) $, $n=2$, $q=1_{[0,1]} $, and $$p=\begin {bmatrix}q&0\\0&0\end {bmatrix}. $$ Then $$pM_2 (\mathcal A)p =\begin {bmatrix}\mathcal Aq&0\\0&0\end {bmatrix}\simeq C ([0,1]), $$
which is projectionless, so not isomorphic to $M_k (\mathcal A) $ for any $k $. |
Is $z^2-i(x^2-y^2)$ analytic in $\mathbb{D}$? | If you meant $\,z=x+iy\,,\,\,x,y\in\Bbb R\,$, then
$$z^2-i(x^2-y^2)=x^2-y^2+(2xy-x^2+y^2)i = u(x,y)+iv(x,y)$$
$$u_x=2x\neq 2x+2y=v_y$$
and thus the Cauchy-Riemann conditions aren't fulfilled... |
What is the effect of elementary operation on a matrix? | If you perform such operations on matrix $A$, you obtain another matrix $B$ that
shares the same row space as $A$,
is row equivalent to $A$, and
shares the same reduced row echelon form.
$$B=E_n \ldots E_1 A$$
where $E_i$ are elementary matrices.
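As a small illustration (an example of mine, not from the question): swapping the two rows of a $2\times2$ matrix is left-multiplication by the corresponding elementary matrix, $$\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}a&b\\c&d\end{pmatrix}=\begin{pmatrix}c&d\\a&b\end{pmatrix}.$$ |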
Generalizations of Hilbert's Syzygy theorem | As suggested by Jason I put my comments into an answer. Hilbert's theorem means that all modules over $k[x_1,\ldots ,x_n]$ have projective dimension $\leq n$ -- one says that the global dimension (aka homological dimension) of $k[x_1,\ldots ,x_n]$ is $n$. This is a property of the ring $k[x_1,\ldots ,x_n]$, it does not depend on the grading. It applies to many other rings: by a famous theorem of Serre, a local (commutative) ring has finite global dimension if and only if it is regular. |
Why $F: [0,1) \rightarrow S^1$ with $f(x) = (\cos2\pi x, \sin2\pi x)$ continuous function? | Your reasoning is quite correct: $F$ is not continuous.
As you have noted, the set $[(x_1,y_1),(x_2,-y_2)]$ is open but its preimage is not.
In fact it is clopen - both open and closed. $[0,1)$ has no clopen subsets except itself and the empty set, so we can deduce that no map from $[0,1)$ onto $S^1$ with this topology can be continuous. (Look up the topological property "connectedness" if you haven't already met it. A continuous image of a connected space is always connected; $[0,1)$ is connected but $S^1$ is not.) |
Taylor series error: Interval of $\xi$ | The statement "there exists $\xi \in (x_0,x)$ such that ...." is stronger than the statement "there exists $\xi \in [x_0,x)$ such that ....". In general the best we can say is that $\xi \in (x_0,x)$ (assuming that $x_0<x$). |
Transformation and probability | I think that the procedure you described is essentially fine. But since $x$ and $y$ are not independent to start with, an extra step at the beginning will simplify the maths a lot. Here is my modification of your procedure.
First, let us introduce $z = y - x$. In this way, we can integrate $x$ and $z$ from $0$ to $+\infty$, independently. Formally, we are introducing a transformation
$$
\begin{align}
x &= x' \\
y &= x' + z,
\end{align}
$$
which has a Jacobian of $|\partial(x, y)/\partial(x', z)| = 1$. Thus,
$$
f(x', z) = f(x, y)\left|\frac{\partial(x, y)}{\partial(x', z)}\right| = \lambda \, \exp(-\lambda x') \times \lambda \exp(-\lambda z),
$$
which is a product of two normalized exponential distributions. So $x'$ and $z$ are independent.
Next, we follow the same procedure
$$
\begin{align}
u &= \frac{x'}{z} \\
v &= x' + z,
\end{align}
$$
which gives
$$
\begin{align}
x' &= \frac{uv}{u+1}, \\
z &= \frac{v}{u+1}
\end{align}
$$
which has the Jacobian $|\partial(x', z)/\partial(u, v)| = v/(1+u)^2$.
So
$$
f(u, v)
= f(x', z) \left|\frac{\partial(x', z)}{\partial(u, v)}\right|
= \Bigl[ \lambda^2 \exp(-\lambda v) \, v \Bigr]
\times \left[ \frac{1}{(1+u)^2} \right].
\qquad (1)
$$
The independence of $u$ and $v$ is kind of obvious now, since the probability density is a product of a function of only $u$ and that of only $v$. But formally, we still need to compute the marginal distributions of $u$ and $v$.
Domain of integration
Alternatively, we can show that the domains of integration of $u$ and $v$ are independent. Note that the fact that the probability density is a product of a function of $u$ and a function of $v$ does not by itself show that $U$ and $V$ are independent. For example, in the original distribution
$$
\lambda^2 \exp(-\lambda y) = \lambda^2 \, \exp(-\lambda y) \times 1
$$
where the last $1$ can be interpreted as a function of $x$ only. But this does not mean $x$ and $y$ are independent. In fact they are not, because the domain of integration of $x$ is limited by $y$. But if the domains of integration are independent, as we shall show below, then this argument holds, and $u$ and $v$ are independent.
For a fixed $v = x' + z$, we can set $x' \rightarrow 0^+$, which gives $u \rightarrow 0$, or $z \rightarrow 0^+$, which gives $u \rightarrow \infty$. This shows the domain of integration of $u$ is $(0, \infty)$ no matter the value of $v$.
For a fixed $u = x'/z$, with $x', z > 0$, we can simultaneously multiply $x'$ and $z$ by the same constant to reach any value of $v = x'+z$, so the domain of integration of $v$ is also $(0, \infty)$ no matter the value of $u$.
This shows that domains of integration are independent and hence the random variables $U$ and $V$ are independent.
A more “compact” solution
In the above, we introduce $x'$ and $z$, just to help us think. It is actually not necessary. So to go directly from $x$ and $y$ to $u$ and $v$, we have
$$
\begin{align}
u &= x/(y - x), &\qquad(2) \\
v &= y, &\qquad(3)
\end{align}
$$
or
$$
\begin{align}
x &= \frac{uv}{u+1}, \\
y &= v,
\end{align}
$$
and $|\partial(x, y)/\partial(u, v)| = v/(u+1)^2$, we also reach (1). But still, we need to determine the domains of integration of $u$ and $v$ from (2) and (3). That is, for a fixed $v$ or $y$, $u$ can go from $0$ to $+\infty$, and for a fixed $u$, $v$ can also go from $0$ to $+\infty$. These arguments are slightly easier with the introduction of $z = y - x$.
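Finally, a quick Monte Carlo sanity check of the independence (my own sketch, with an arbitrary rate $\lambda=1.5$): if $U$ and $V$ are independent, the empirical joint CDF factors into the product of the marginals.
import numpy as np

rng = np.random.default_rng(0)
lam = 1.5                                  # arbitrary rate for the check
x = rng.exponential(1 / lam, 1_000_000)    # x' ~ Exp(lam)
z = rng.exponential(1 / lam, 1_000_000)    # z = y - x ~ Exp(lam), independent of x'
u, v = x / z, x + z
for a, b in [(0.5, 1.0), (1.0, 2.0)]:
    joint = np.mean((u < a) & (v < b))
    print(joint - np.mean(u < a) * np.mean(v < b))   # each difference is near 0 |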
Discrete distributions; find E(X) | The expectation of a function $g$ of a discrete random variable $X$, with probability mass function $f$ over domain $\cal D$, is defined as:
$$\mathsf E[g(X)] = \sum_{x\in\mathcal D} g(x) f(x)$$
Thus
$$\begin{align}\mathsf E[X] &= \sum_{x\in\mathcal D} x f(x) \\[2ex] \mathsf E[X(X-1)] &= \sum_{x\in\mathcal D} x(x-1) f(x)\end{align}$$
You have the density functions and domains of the random variables to sum over. Substitute and solve.
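For instance (an example of mine; your $f$ and $\cal D$ may differ): for a fair die, $f(x)=\frac16$ on $\mathcal D=\{1,\dots,6\}$, so $\mathsf E[X]=\frac{21}{6}=3.5$ and $\mathsf E[X(X-1)]=\frac{0+2+6+12+20+30}{6}=\frac{70}{6}$. |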
Why use Gauss Jordan Elimination instead of Gaussian Elimination, Differences | Gaussian Elimination puts a matrix in row echelon form, while Gauss-Jordan Elimination puts a matrix in reduced row echelon form. For small systems (or by hand), it is usually more convenient to use Gauss-Jordan elimination and explicitly solve for each variable represented in the matrix system. However, Gaussian elimination by itself is typically more efficient for computers, since back-substitution is cheaper than continuing to full reduction. Also, Gaussian elimination is all you need to determine the rank of a matrix (an important property of each matrix), while going through the trouble of putting a matrix in reduced row echelon form is not worth it if you only need the rank.
EDIT:
Here are some abbreviations to start off with:
REF = "Row Echelon Form". RREF = "Reduced Row Echelon Form."
In your question, you say you reduce a matrix $A$ to a diagonal matrix where every nonzero value equals $1$. For this to happen, you must perform row operations to "pivot" on each entry along the diagonal. Such row operations usually involve multiplying/dividing a row by a nonzero scalar, or adding/subtracting nonzero scalar multiples of one row to/from another row. My interpretation of REF is doing row operations in such a way as to avoid dividing rows by their pivot values (to make each pivot become $1$). If you go through each pivot (the numbers along the diagonal), divide those rows by their leading coefficients, and clear the entries above each pivot, then you will end up in RREF. See these Khan Academy videos for worked examples.
In a system $Ax=B$, $x$ can be solved for uniquely only if $A$ is invertible. Invertible matrices have several important properties. The most useful property for your question is that their RREF is the identity matrix (a matrix with only 1's down the diagonal and 0's everywhere else). If you row-reduce a matrix and it does not become the identity matrix in RREF, then that matrix was non-invertible. Non-invertible matrices (also known as singular matrices) are not as helpful when trying to solve a system exactly.
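If you want to experiment, sympy computes both forms' consequences directly (a small sketch with a matrix of my own choosing):
from sympy import Matrix

A = Matrix([[1, 2, 1], [2, 5, 3], [1, 0, 2]])   # invertible, det(A) = 3
print(A.rref())   # (eye(3), (0, 1, 2)): the RREF of an invertible matrix is I
print(A.rank())   # 3, already determined by the REF alone |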
Inequality involving taking expectations | The result is true more generally without convexity.
Fact 1: If $r(X)$ and $q(X)$ are non-decreasing functions of a random variable $X$, then their covariance is non-negative. See link here for this fact:
covariance of increasing functions
Fact 2: If $m(X)$ and $n(X)$ are non-increasing functions of a random variable $X$, then their covariance is also non-negative, since:
$$ E[m(X)n(X)] = E[(-m(X))(-n(X))] \geq E[-m(X)]E[-n(X)] = E[m(X)]E[n(X)] $$
where the inequality follows from Fact 1 together with the observation that $-m(X)$ and $-n(X)$ are non-decreasing.
We can apply Fact 2 to your problem: The functions $f(X)^2$ and $g(X)h(X)$ are both non-increasing in $X$, and so by Fact 2:
$$ E[f(X)^2]E[g(X)h(X)] \leq E[f(X)^2g(X)h(X)] \leq E[f(X)g(X)] $$
where the final inequality uses the fact that $f(X)h(X)\leq 1$ for all $X$. The final term is in turn no larger than what we obtain by adding the nonnegative value $E[f(X)g(X)]E[f(X)h(X)]$ to it.
Your previous question shows the result is not true if we remove the condition $f(X)h(X)\leq 1$. That link is:
inequality involving taking expectation |
square of a permutation cycle | Permutations are represented in two ways. One is a verbose description of the mapping: $$\sigma = \begin{bmatrix}
1 &2 &3 &4 &5 &6 &7 &8 &9 \\
1&5 &7 &4 &6 &9 &3 &2 &8
\end{bmatrix}$$ means that $1$ goes to $1$, $2$ goes to $5$, and so on.
We abbreviate this to cycle notation, which is more compact and also more revealing of important structure. This permutation abbreviates to $$(1)(25698)(37)(4)$$ which means the same thing as before, only more compactly. The $(25698)$ means that $2$ goes to $5$, $5$ goes to $6$, $6$ goes to $9$, $9$ goes to $8$, and $8$ goes back to $2$. Then we abbreviate further by dropping the $(1)$ and $(4)$, which can be inferred even if not written explicitly.
In the cycle notation, $\sigma^2$ is indeed written $(26859)$, as you should check. You are being asked to write $\sigma^{-2}$ in the compact cycle notation.
The inverse of a permutation $p$ is another permutation that un-does the effect of $p$. If $p$ takes $3$ to $7$, then $p^{-1}$ should take $7$ to $3$. Let's take a simple example:
$$p = \begin{bmatrix}
1 &2 &3 &4 &5 \\
4&5 &2 &1 &3
\end{bmatrix}$$
The inverse of $p$, in mapping notation, is
$$p^{-1} = \begin{bmatrix}
4&5 &2 &1 &3 \\
1 &2 &3 &4 &5 \\
\end{bmatrix}$$
because where $p$ took $2$ to $5$, $p^{-1}$ takes $5$ to $2$. As commented earlier, we have “flipped the notation upside down”. We usually arrange the columns in order:
$$p^{-1} = \begin{bmatrix}
1 &2 &3 &4 &5 \\
4 & 3& 5 & 1 & 2 \\
\end{bmatrix}$$
In cycle notation, $p$ is written $(14)(253)$. Again, this says that $2$ goes to $5$. As you observed in the comments, you can find $p^{-1}$ by writing the cycle notation for $p$ backwards: $$p^{-1} = (352)(41)$$ which we would usually write as $$p^{-1} = (14)(235).$$ Again, notice that this says that $p^{-1}$ takes $5$ to $2$, as before.
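A quick way to check such computations in Python (my own illustration), representing a permutation as a dict:
p = {1: 4, 2: 5, 3: 2, 4: 1, 5: 3}       # the example p above, i -> p[i]
p_inv = {v: k for k, v in p.items()}      # invert by swapping keys and values
assert all(p_inv[p[i]] == i for i in p)   # p_inv undoes p
print(p_inv)                              # 1->4, 2->3, 3->5, 4->1, 5->2, as above |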
determinant involving unitary hermitian matrices | Consider $$[\det(A+iB)]^2=\det[(A+iB)^2]=\det[A^2+iAB+iBA-B^2]\tag1$$ Now note that due to $A$ (similar for $B$) being Hermitian, $A^\dagger=A$. Due to unitarity, $A^\dagger=A^{-1}$. Combining, $$A=A^{-1}\implies A^2=\Bbb I$$ So $(1)$ becomes $$i^n\det(AB+BA)$$ If $n\equiv2\mod4$, then $i^n=-1$. So $$[\det(A+iB)]^2=-\det(AB+BA)\tag2$$ Thus, if we can show that $\det(AB+BA)\ge0$ for all $A,B\in S$, then your hypothesis holds. However, inspired by user647468's comment, we can take $B=\Bbb I\in S$, and $A_{ij}=(-1)^{i+1}\delta_{ij}$. Then $$\det(AB+BA)=2^n\prod_{i=1}^{n}(-1)^{i+1}=-2^n<0\tag3$$ Hence the $RHS$ of $(2)$ can be positive, and so $\det(A+iB)$ is not necessarily imaginary.
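A numerical check of this counterexample (my own sketch, with $n=6\equiv2\bmod4$):
import numpy as np

n = 6
A = np.diag([(-1) ** i for i in range(n)])   # A_ii = (-1)^(i+1) in 1-indexing
B = np.eye(n)
print(np.linalg.det(A @ B + B @ A))          # -64 = -2**6, as in (3)
print(np.linalg.det(A + 1j * B) ** 2)        # approx 64+0j: real and positive, per (2)
Here $\det(A+iB)=-8$ is real, so it indeed need not be imaginary. |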
Composition of 2 monotonic functions | Let $*$ represent either $<$ or $>$, depending on the direction of the monotonicity in each case. Then
$$x*y\implies g(x)*g(y)\implies f\circ g(x)*f\circ g(y)$$ |
A mathematics competition had 9 easy and 6 difficult problems | Explaining the question:
There are $n$ participants
There are $9$ easy problems and $6$ difficult problems
Each participant solved $14$ problems out of $15$ problems
There are $9\cdot6=54$ pairs of one easy problem and one difficult problem
If we add up the number of times that each pair was solved, then we get $459$
What is the value of $n$?
Explaining the answer:
Each participant solved either $9\cdot5=45$ pairs (if the one unsolved problem was difficult) or $8\cdot6=48$ pairs (if it was easy)
Let $x$ denote the number of participants who solved $45$ pairs
Let $y$ denote the number of participants who solved $48$ pairs
Then we know that $x+y=n$ and $45x+48y=459$
Therefore $0\leq{x}\leq\lfloor\frac{459}{45}\rfloor$ and $0\leq{y}\leq\lfloor\frac{459}{48}\rfloor$
We can solve this using trial & error for $x\in[0,10]$ and $y\in[0,9]$.
Python code below gives the answer of $10$ participants:
for x in range(459 // 45 + 1):
    for y in range(459 // 48 + 1):
        if 45 * x + 48 * y == 459:
            print(x + y)   # prints 10 (from x = 7, y = 3)
Please note that there might be a way to solve it also using Diophantine Analysis. |
Power Series Example | Try to write out the first few terms:
$$(az)^0+(az)^2+(az)^4+\dots$$
$$1+a^2z^2+a^4z^4+\dots$$
$$1+0z+a^2z^2+0z^3+a^4z^4+\dots$$
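Summing the geometric series (valid for $|az|<1$): $$\sum_{n=0}^{\infty}(az)^{2n}=\frac{1}{1-a^2z^2}.$$ |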
The sides of an obtuse triangle have lengths $x$, $2x+2$ and $2x+3$. Between what values is $x$? | We need to solve the following system:
$$x>1$$ and $$(2x+2)^2+x^2<(2x+3)^2.$$
I got $$1<x<5.$$
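Filling in the algebra for the second inequality: $x^2+4x^2+8x+4<4x^2+12x+9$ reduces to $x^2-4x-5<0$, i.e. $(x-5)(x+1)<0$, so $x<5$; the condition $x>1$ is the triangle inequality $x+(2x+2)>2x+3$. |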
How do I find the sum of a geometric series if it doesn't seem consistent? | This is simpler than you're thinking it is. The form you want to appear (at least in order to make some nice simplifications) is
$$1+x+x^2+x^3+\cdots$$
right? What would happen if you added $8$ to it? You would get
$$8+(1+x+x^2+x^3+\cdots)$$
or, equivalently,
$$9+x+x^2+x^3+\cdots$$
That is, these last two expressions are equivalent! The middle expression, provided $|x|<1$, lets us convert the infinite summation to a ratio in the usual way as well:
$$8+(1+x+x^2+x^3+\cdots) = 8 + \frac{1}{1-x}$$ |
Question on Galois Connections | This asterisk and apostrophe stuff is a little annoying to type, so I'm going to switch to $F:P\to Q$ and $G:Q\to P$.
So, $GF$ is a closure operator on $P$ in the sense that it is nondecreasing ($p\leq GF(p)$) and idempotent ($GF(GF(p))=GF(p)$). Something similar applies for $FG$ on $Q$. The idea is that $GF$ extends elements to a "closed" element containing the original element. The equality $FGF=F$ allows you to conclude that $GF$ is an idempotent operator on $P$, and similarly the other equality allows you to conclude that $FG$ is idempotent on $Q$.
Restricting $GF$ to operate only on $G(Q)$ would be boring because all of the elements of $G(Q)$ are "already closed": $GF(G(q))=G(q)$ for all $q\in Q$! So $GF$ doesn't actually expand those elements at all. It's more interesting in cases where $GF(p)>p$, where the nonclosed element $p$ became closed after $GF$ is applied.
For $2)$, DeMorgan's laws are statements about greatest lower bounds and least upper bounds. You need the posets to be lattices so that GLB's and LUB's are defined in the first place. Otherwise you are trying to prove things about things that don't exist :)
Added
When I get to the DeMorgan's Laws part, I have a feeling you might have been meant to work with antitone connections all along. So for this part, let $F$ and $G$ be order reversing. $FG$ and $GF$ still have the idempotence and nondecreasingness they had before. Let $a,b\in F(P)$.
In particular, $FG(a)=a$ and $FG(b)=b$. Actually, $F(P)$ is closed under meets and joins. Here is how to show it for joins: I will leave meets up to you. Since $G\underline{(a\vee b)}\leq \underline{G(a\vee b)}$, it follows that $F(\underline{G(a\vee b)})\leq \underline{a\vee b}$. But this implies $FG(a\vee b)\leq a\vee b$, and the other containment is taken care of by the fact $FG$ is nondecreasing. Thus $FG(a\vee b)=a\vee b$.
One of DeMorgan's Laws on $F(P)$ would now look like this: $G(a\vee b)=G(a)\wedge G(b)$. Let's establish this particular one.
You have that $a,b\leq a\vee b$, which immediately yields $G(a\vee b)\leq G(a), G(b)$. By definition of the greatest lower bound, we have
$$G(a\vee b)\leq G(a)\wedge G(b)\leq G(a),G(b).$$
Applying $F$ reverses these:
$$a\vee b\geq F(G(a)\wedge G(b))\geq a,b$$
(Notice I used the fact that $FG$ is the identity on $F(P)$ to delete some unnecessary $FG$'s.)
But then by definition of the least upper bound, $a\vee b= F(G(a)\wedge G(b))$. Another application of $G$ on both sides yields that $G(a\vee b)=GF(G(a)\wedge G(b))=G(a)\wedge G(b)$.
I'm going to encourage you to work out $G(a\wedge b)=G(a)\vee G(b)$ on your own. You could reprove everything for $F$ too, but your teacher might rather want to see you appeal to similarity instead of the extra text :) |